\section{Introduction} A good knowledge of a material's thermal conductivity is essential for managing heat flow in any thermal design and analysis. In most materials, the thermal conductivity is isotropic, i.e., direction independent. Recently, novel materials such as layered van der Waals solids \cite{slack_anisotropic_1962,liu_measurement_2014,lee_anisotropic_2015,luo_anisotropic_2015,lindroth_thermal_2016} and superlattices \cite{luckyanova_anisotropy_2013} have been shown to have anisotropic thermal conductivity. This enables applications such as heat spreaders, which conduct heat away rapidly in one direction while limiting heat flow in another \cite{suszko_thermally_2016}. Due to this anisotropy, the experimental methods developed so far generally require multiple measurements along different crystal axes \cite{feser_probing_2012}. Other methods require variations of the heating size \cite{schmidt_pulse_2008,liu_simultaneous_2013,jiang_time-domain_2017}, or anisotropic heating or anisotropic detection, in order to change or detect the desired direction of heat transport \cite{liu_measurement_2014,bogner_cross-_2017,li_anisotropic_2018}. Of course, independent measurements can be performed along different directions of the same anisotropic material \cite{luckyanova_anisotropy_2013,lee_anisotropic_2015,kim_elastic_2017} to obtain the respective thermal conductivities. Another effect that is commonly observed is size-dependent thermal conductivity, where the experimentally measured thermal conductivity is lower than the bulk value. Experiments that vary the sample size \cite{asheghi_phonon-boundary_1997,li_thermal_2003,hsiao_observation_2013,zhang_temperature_2015,ramu_electrical_2016} or the heating length scale in the time \cite{siemens_quasi-ballistic_2010,minnich_thermal_2011,johnson_direct_2013,hu_spectral_2015} and frequency \cite{koh_frequency_2007,regner_broadband_2013} domains have observed such effects. Electrical measurement methods and optical methods that directly vary the heater size generally require multiple samples to be fabricated. Optical techniques that use a single sample still require numerous spatial or temporal variations of the heating beam in order to observe such effects. Here, we propose an experimental method that uses spatio-temporal temperature data to retrieve multiple thermal transport parameters in a single experiment. First, we demonstrate numerically that multiple thermal conductivity values in a multilayer system can be retrieved once sufficiently many points have been sampled. Second, we show that size-dependent thermal conductivity variations of an anisotropic material can be accurately recovered. Last but not least, we discuss the compatibility of our method with current measurement techniques and conclude with possible extensions of our work to other thermal effects of interest. \section{Methodology} \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{Heating_profile_v2.eps} \caption{ Schematic of our proposed method to obtain multiple thermal transport parameters. Cylindrical symmetry is assumed in this setup. } \label{fig:aniso_case} \end{figure} Here, we assume a Gaussian-shaped transient heat flux on the sample surface at $t=0$, as shown in Fig. \ref{fig:aniso_case}. This simulates Gaussian heating of a sample of finite thickness. The bottom surface is assumed to be adiabatic.
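One possible explicit form of these heating and boundary conditions (using the common $1/e^2$ definition of the beam radius $w_0$; the pulse energy $Q_0$ is an illustrative parameter) is
\[ q(r,t)\big|_{z=0}=\frac{2Q_0}{\pi w_0^2}\,e^{-2r^2/w_0^2}\,\delta(t), \qquad \frac{\partial \Delta T}{\partial z}\bigg|_{z=z_{\mathrm{bottom}}}=0 . \]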
The proposed method then captures temperature versus time data on the material's surface and performs a spatio-temporal transformation. The data in the transformed space are represented as a function of the spatial wavevector and the temporal frequency. The thermal conductivity parameters can then be solved for all at once using data from the transformed space. The temperature profile on the top surface changes with time according to Fourier's law. Assuming radial symmetry in cylindrical coordinates, the heat equation is written as \cite{cahill_analysis_2004} $\frac{k_r}{r}\frac{\partial}{\partial r} (r \frac{\partial \Delta T}{\partial r}) +k_z \frac{\partial^2 \Delta T}{\partial z^2}=\rho c_p \frac{\partial \Delta T}{\partial t}$ where $k_r$ and $k_z$ are the radial and cross-plane thermal conductivities, respectively, $\rho$ is the density of the material, and $c_p$ is the specific heat. This equation can be Hankel and then Fourier transformed to yield \begin{equation} \frac{\partial^2 {\bar{ \Delta T}(\kappa,\omega)}}{\partial z^2}=q^2 {\bar{ \Delta T}(\kappa,\omega)} \label{eq:hankel_fourier} \end{equation} where $q^2(\kappa,\omega)=\frac{k_r \kappa^2+\rho c_p i \omega}{k_z}$. $\kappa$ and $\omega$ are the Hankel spatial wavevector and the Fourier-transformed frequency, respectively. Equation \ref{eq:hankel_fourier} can be solved for geometries containing more than one layer \cite{carslaw_conduction_1986,feldman_algorithm_1999}. The matrix describing any of the layers can be written as \begin{equation} \begin{bmatrix} \bar{ \Delta T}_b\\ f_b \end{bmatrix}= \begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{bmatrix} \bar{ \Delta T}_t\\ f_t \end{bmatrix} \label{eq:matrix} \end{equation} where $\bar{ \Delta T}_b$ and $\bar{ \Delta T}_t$ are the back and top temperatures in the transformed space and $f_b$ and $f_t$ are the back and top heat fluxes. The components are $A=\cosh(q d), B=-\sinh(qd)/(k_z q), C=-k_z q \sinh(qd), D=A $ for a single layer \cite{carslaw_conduction_1986}, where $d$ is the thickness of the layer. For multilayers, the matrices of the individual materials in Eq. \ref{eq:matrix} are multiplied together, with an additional matrix inserted for each interface. For our assumed boundary condition in Fig. \ref{fig:aniso_case}, or if the last layer is assumed to be semi-infinite, $\bar{\Delta T}_t=-\frac{D}{C} f_t$. \section{Anisotropic Thermal Transport} Figure \ref{fig:aniso_temp}(a) provides an example of the spatially averaged temperature as a function of time after solving Eqs. \ref{eq:hankel_fourier} and \ref{eq:matrix}. The example here is for the multilayer sample shown in the inset of Fig. \ref{fig:aniso_temp}(a), where the top layer is an isotropic material with thermal conductivity $k_{r1}=k_{z1}=205$ W/mK. The bottom material is thermally anisotropic, with thermal conductivity $k_{r2}=149.35$ W/mK in-plane and $k_{z2}=1.4935$ W/mK cross-plane. Van der Waals solids such as MoS$_2$ are known to have similarly anisotropic thermal conductivities \cite{liu_measurement_2014}. The interface conductance between the two materials is $G_{12}=120$ MW/m$^2$K. This is an example of a typical multilayer sample used when the optical reflectance of the top layer provides the temperature information \cite{jiang_tutorial:_2018}. Typically, the top layer is an isotropic material which is relatively thin ($d=100$ nm), so that the bulk of the thermal transport happens in the bottom material, which is the one of interest. \begin{figure*}[h!]
\centering \begin{subfigure}[]{ \includegraphics[width=0.3\textwidth]{temp_time_averaged.eps}} \end{subfigure} ~ \centering \begin{subfigure}[]{ \includegraphics[width=0.3\textwidth]{temp_time_R_layers.eps}} \end{subfigure} ~ \centering \begin{subfigure}[]{ \includegraphics[width=0.3\textwidth]{abs_wk_aniso.eps}} \end{subfigure} \caption{ (a) Spatially-averaged temperature as a function of time. The averaging is performed assuming a Gaussian profile with the same radius as the pump beam. Such spatially-weighted temperature-time plots are typically what is obtained in optical methods to measure thermal transport. Inset: schematic of the multilayer sample. The top layer is an isotropic material with $k_{r1}=k_{z1}=205$ W/mK. The bottom material is anisotropic with $k_{r2}=149.35$ W/mK and $k_{z2}=1.4935$ W/mK. The interface conductance is $G_{12}=120$ MW/m$^2$K. (b) Temperature profile $\Delta T(r,t)$ of the sample as a function of radial distance and time. The heating beam at time $t=0$ is a Gaussian beam with radius $w_0=15\,\mu$m. (c) Transformed $\bar{\Delta T}(\kappa,\omega)$ obtained from the temperature data in (b) using Eq. \ref{eq:hankel_fourier}. The transformed data are plotted on a log scale of the absolute value, as the values are typically complex. Complex values from (c) can then be used to set up a system of equations for different values of $(\kappa,\omega)$ in order to solve for the material parameters of a multilayer such as in the inset of (a).}\label{fig:aniso_temp} \end{figure*} Now, we assume that the optically reflective or absorptive top layer can provide us with spatio-temporal temperature data. Figure \ref{fig:aniso_temp}(b) shows the temperature profile $\Delta T(r,t)$ of the sample as a function of radial distance and time. It can be seen that the thermal decay happens very fast for the first few nanoseconds, followed by a slower decay which happens over a longer period of time. If we take a Fourier and Hankel transform of the temperature data $\Delta T(r,t)$ in Fig. \ref{fig:aniso_temp}(b) using Eq. \ref{eq:hankel_fourier}, we obtain the transformed $\bar{\Delta T}(\kappa,\omega)$ plotted in Fig. \ref{fig:aniso_temp}(c). Here, if we assume that $k_{r1,2},k_{z1,2}$ and $G_{12}$ are all unknown, we can solve for these unknowns from $\bar{\Delta T}(\kappa,\omega)$. For a given set of points in the transformed space $(\kappa,\omega)$ in Fig. \ref{fig:aniso_temp}(c), each point satisfies Eq. \ref{eq:matrix} for the same values of $k_{r1,2},k_{z1,2}$ and $G_{12}$. Doing so for all points in the transformed space allows us to solve a system of simultaneous equations for the values of the transport coefficients $k_{r1,2},k_{z1,2},G_{12}$. Here we do not impose any assumption that $k_{r1}=k_{z1}$ and let the system of equations reveal whether each material layer is isotropic or not. The accuracy of the results depends on the number of points available in $(\kappa,\omega)$. We would like to highlight that the system of equations is highly non-linear for the multilayered version of Eq. \ref{eq:matrix}. The values entering the system of equations are complex and differ by orders of magnitude, as shown in Fig. \ref{fig:aniso_temp}(c). Furthermore, the thermal conductivities $k_{r,z}$ and the interfacial conductance $G_{12}$ are orders of magnitude apart, making the system a challenging one to solve in a stable manner. Thus, despite there being only five unknowns to solve for, many more equations are generally needed. \begin{table*}[] \begin{tabular}{lllllllllll} \toprule $N_{\omega}$ & $k_{r1}$ & S.E. & $k_{z1}$ & S.E. & $1/G_{12} $ & S.E.
& $k_{r2} $ & S.E. & $k_{z2}$ & S.E. \\ \hline 50 &205 &1.25E-17& 205& 1.42E-20 &8.33E-09 &8.10E-23& 149.35& 1.19E-16 &1.4935 &7.39E-16\\ 25& 205& 7.06E-15 &205 &7.86E-18& 8.33E-09 &4.58E-20 &149.35& 6.73E-14& 1.4935 &4.17E-13\\ 10& 205& 7.81E-15& -158.92& 8.64E-18 &1.31E-08& 5.07E-20 &148.94 &7.44E-14& 1.5095& 4.62E-13\\ 5 &168.48& 2.05E-06& 197& 1.33E-09& -8.13E-09 &1.10E-11& 153& 2.02E-05 &1.4576 &0.00010074 \\ \hline\hline \end{tabular} \caption{Results obtained by varying the number of points $N_{\omega}$ extracted from the transformed space $(\kappa,\omega)$. $N_{\kappa}=10$ for all cases shown. S.E. denotes the standard error of the mean. A small set of values, $N_{\omega}=25$, is sufficient to retrieve the actual values accurately. } \label{table:results} \end{table*} In Table \ref{table:results}, we solved the system of equations using standard non-linear regression methods. We fixed the number of points in $\kappa$ space to $N_{\kappa}=10$, corresponding to the smallest possible spatial resolution in the micrometer range achievable with conventional imaging methods. Then, we varied the number of points in $\omega$ with $\omega_{max}=1\times 10^{10}$ rad/s. As we increase $N_\omega$, the observed experiment time increases and the solutions become more accurate. As shown in Table \ref{table:results}, as few as $N_\omega=25$ points are sufficient to reconstruct all values accurately. \section{Anisotropy with Size Dependence} \begin{figure}[h!] \centering \begin{subfigure}[]{ \includegraphics[scale=0.4]{temp_time_R.eps}} \end{subfigure} \centering \begin{subfigure}[]{ \includegraphics[scale=0.4]{Twk_graphite.eps}} \end{subfigure} \centering \begin{subfigure}[]{ \includegraphics[scale=0.4]{in_plane.eps}} \end{subfigure} \centering \begin{subfigure}[]{ \includegraphics[scale=0.4]{cross_plane.eps}} \end{subfigure} \caption{(a) Temperature versus time data of graphite with the same initial heating size as in Fig. \ref{fig:aniso_temp}(b). The temperature fall is much sharper than in Fig. \ref{fig:aniso_temp}(b) due to the much higher in-plane thermal conductivity of graphite. (b) Transformed $\bar{\Delta T}(\kappa,\omega)$ obtained from the temperature data in (a) using Eq. \ref{eq:hankel_fourier}. The transformed data are plotted on a log scale of the absolute value, as the values are typically complex. (c,d) Reconstructed in-plane (c) and cross-plane (d) thermal conductivity as a function of spatial wavevector $\kappa$ (c) and time-domain frequency $\omega$ (d) versus the reference spectrum. The reference spectra are generated using phonon mean-free-path distributions of graphite \cite{minnich_phonon_2015} convolved with experimental suppression functions in the spatial \cite{ding_radial_2014} and time \cite{yang_heating-frequency-dependent_2015} domains. }\label{fig:graphite} \end{figure} The next example uses the same method but assumes a size-dependent thermal conductivity. This means that $k_{r,z}$ is a function of the transformed space parameters $(\kappa,\omega)$. Here, we choose graphite as a test case because it is highly anisotropic, making its thermal transport of great interest. Furthermore, graphite has an in-plane phonon mean free path (MFP) on the order of micrometers at room temperature, while the cross-plane (c-axis) phonon MFP is only on the order of a hundred nanometers. Recent efforts to resolve the c-axis MFP require careful measurements of exfoliated samples of different thicknesses \cite{zhang_temperature_2015}.
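To illustrate how the retrieval described above can be set up in practice, the following sketch (in Python, assuming NumPy and SciPy are available; all names are illustrative, and the interface matrix written below is one common choice rather than a form prescribed in the text) builds a two-layer transfer-matrix model of Eq. \ref{eq:matrix} and fits the five unknowns to the transformed data by non-linear least squares. The data points are assumed to have been normalized by the transform of the input heat flux, so that they can be compared directly with the modeled surface temperature per unit flux.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def layer_matrix(kr, kz, rho_cp, d, kappa, omega):
    # Transfer matrix of a single layer in (kappa, omega) space (Eq. 2).
    q = np.sqrt((kr * kappa**2 + 1j * omega * rho_cp) / kz)
    A = np.cosh(q * d)
    B = -np.sinh(q * d) / (kz * q)
    C = -kz * q * np.sinh(q * d)
    return np.array([[A, B], [C, A]])

def interface_matrix(G):
    # One common choice of interface matrix for a thermal conductance G
    # (an assumption; the text only states that such a matrix is inserted).
    return np.array([[1.0, -1.0 / G], [0.0, 1.0]])

def surface_temperature(params, kappa, omega, d1, rho_cp1, d2, rho_cp2):
    # Modeled top-surface temperature per unit surface heat flux, T = -D/C,
    # valid for the adiabatic bottom boundary condition assumed in the text.
    kr1, kz1, G12, kr2, kz2 = params
    M = (layer_matrix(kr2, kz2, rho_cp2, d2, kappa, omega)
         @ interface_matrix(G12)
         @ layer_matrix(kr1, kz1, rho_cp1, d1, kappa, omega))
    return -M[1, 1] / M[1, 0]

def residuals(params, kappas, omegas, data, *layer_args):
    # Stack real and imaginary parts of the model/data mismatch over all points.
    model = np.array([surface_temperature(params, k, w, *layer_args)
                      for k, w in zip(kappas, omegas)])
    diff = model - data
    return np.concatenate([diff.real, diff.imag])

# Illustrative call, with x0 an initial guess for (kr1, kz1, G12, kr2, kz2):
# fit = least_squares(residuals, x0,
#                     args=(kappas, omegas, data, d1, rho_cp1, d2, rho_cp2))
\end{verbatim}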
In order to obtain the dependence of $k_{r,z}$ on $(\kappa,\omega)$, we need the radial and time-domain suppression functions and the phonon MFP distributions of graphite in the in-plane and cross-plane directions. Here, we took the phonon MFP data from Ref. \cite{minnich_phonon_2015} and the suppression functions in the radial \cite{ding_radial_2014} and frequency \cite{yang_heating-frequency-dependent_2015} domains. We assume that the in-plane thermal conductivity is suppressed purely radially and that the cross-plane thermal conductivity is suppressed purely in the time domain. Figure \ref{fig:graphite}(a) shows the temperature versus time data of graphite with the same initial heating condition as the multilayer example (Fig. \ref{fig:aniso_temp}(b)). One can see that graphite has a much faster decay time compared to the multilayer case in Fig. \ref{fig:aniso_temp}(b) due to its high thermal conductivity. The data can be transformed in the same manner, resulting in the distribution in the transformed space shown in Fig. \ref{fig:graphite}(b). Now, the system of equations is no longer one in which all values of $k_{r,z}$ are fixed; $k_{r,z}$ will be different for each set of $(\kappa,\omega)$. So we only have two equations for two unknowns, using the fact that $\bar\Delta T(\kappa,\omega)=\bar\Delta T(\kappa,\omega)^{\dagger}$. Solving $\bar\Delta T(\kappa,\omega)=\bar\Delta T(\kappa,\omega)^{\dagger}$ for each pair of $(\kappa,\omega)$, we assume a smooth solution of $k_{r,z}(\kappa,\omega)$ and obtain stable solutions for $k_{r,z}(\kappa,\omega)$. Here, we choose $N_\kappa=50$ and $N_{\omega}=500$ in order to show a smooth size-dependent spectrum for $k_{r,z}(\kappa,\omega)$. Figures \ref{fig:graphite}(c) and (d) show the reconstructed $k_{r}(\kappa)$ and $k_{z}(\omega)$, respectively. We assume that $k_{r}$ is independent of $\omega$ and that $k_{z}$ is independent of $\kappa$, such that the in-plane thermal conductivity only depends on the radial heating size and the cross-plane thermal conductivity only depends on the cross-plane heating size through the frequency-dependent penetration depth. The reconstruction is almost in perfect agreement with the input size-dependent thermal conductivities, showing the potential of this method to accurately retrieve size-dependent thermal conductivity for various materials. \section{Discussion and Conclusion} One proposed experimental realization of our scheme is to retrieve the spatio-temporal temperature data with regular CCD cameras. Our method can be adapted to any transient optical method, such as time-domain thermoreflectance \cite{jiang_tutorial:_2018}, where the optical reflectance is used. IR detectors in laser flash methods can also be used \cite{zhao_measurement_2016}, albeit with slower response times and poorer spatial resolution. Alternatively, nanoscale scanning or imaging methods \cite{gomes_temperature_2005, mecklenburg_nanoscale_2015,laraoui_imaging_2015,kilbane_far-field_2016} can be used to retrieve temperature data point by point and with much better spatial resolution, at the cost of potentially longer data collection. Nevertheless, neither the heating size nor the sample size needs to be changed. For materials with in-plane anisotropy, such as black phosphorus, a regular spatial decomposition in Cartesian coordinates can be performed to obtain $\bar{\Delta T}$ in the transformed coordinates. More complicated optical excitation patterns such as transient gratings can also be used and transformed according to the symmetry of the system.
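For the radially symmetric case considered above, the required transform of recorded $\Delta T(r,t)$ frames can be carried out numerically. A minimal sketch (in Python, assuming uniform time sampling and a simple quadrature approximation of the order-zero Hankel transform; names are illustrative) is:
\begin{verbatim}
import numpy as np
from scipy.special import j0

def hankel_fourier_transform(delta_T, r, t, kappas):
    # delta_T: array of shape (len(r), len(t)), e.g. radially binned frames.
    # Returns DeltaT(kappa, omega) on the requested kappa grid, plus the
    # angular-frequency grid from the FFT.
    dr = np.gradient(r)
    # Order-zero Hankel transform over r for every time frame.
    T_kt = np.array([np.sum(delta_T * (j0(k * r) * r * dr)[:, None], axis=0)
                     for k in kappas])
    # Fourier transform over time (uniform sampling assumed).
    dt = t[1] - t[0]
    T_kw = np.fft.rfft(T_kt, axis=1) * dt
    omegas = 2 * np.pi * np.fft.rfftfreq(len(t), d=dt)
    return T_kw, omegas
\end{verbatim}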
Essentially, our method introduces the concept of data multiplexing into thermal transport measurements. Instead of sending and retrieving one frequency at a time, we can sample information encoded over many frequencies and decode it all at once. Last but not least, our method offers a snapshot technique to gather size-dependent thermal conductivity information with the least experimental effort possible. Using machine learning methods, we can potentially reduce the amount of spatio-temporal data required to obtain the thermal transport parameters \cite{zhang_machine_2018,xie_phonon_2018}. Furthermore, novel effects such as coherent phonons \cite{luckyanova_coherent_2012,ravichandran_crossover_2014} and hydrodynamic transport \cite{lee_hydrodynamic_2015} can potentially be observed directly in a single experimental setting, provided the correct features in our transformed spectrum are identified. In short, our method can provide direct diagnostics for the systematic design of nanostructured materials with modified overall thermal properties.
\section{Introduction} Liquid argon (LAr) detectors need to monitor the purity of the LAr since electronegative impurities (mainly $O_2$) capture ionization electrons and hence degrade the performance of the detector. Different types of liquid argon purity monitors have been developed for the LAr detectors in use or for future experiments. For the LAr calorimeters in the H1 \cite{H1} and the Atlas \cite{Atlas} experiments the necessary sensitivity of the monitors to electronegative impurities is of the order of $ppm$ (oxygen equivalent). The pulse height spectra from $\beta$-decay electrons and $\alpha$-particles are measured in a LAr ionization chamber. In the drift chamber of the ICARUS detector \cite{Amerio:2004ze} drift times of the order of $ms$ occur. In order to measure such long drift times, it is necessary to purify the LAr to a level below $0.3\ ppb$ (oxygen equivalent). Purity monitors with this sensitivity were built \cite{aqumon}, measuring the lifetime of electrons which drift in a homogeneous electric field over a distance of about 10~cm. The drift electrons were extracted using an appropriately chosen photocathode, which is flashed periodically with a bright light pulse. Traditionally, the problems encountered in designing purity monitors were related (1) to the creation of a sufficiently large drift-electron cloud in order to produce clean signals above noise and (2) to the extraction of the purity with high precision and sensitivity. The purity monitor described in this paper is also based on a lifetime measurement of electrons. The method to determine the lifetime of electrons consists of measuring the attenuation of the charge of an electron cloud drifting in an electric field as a function of the drift time. The mean lifetime of the electrons is obtained from equation (\ref{exp}): \begin{equation} N(t_{drift}) = N_0 \cdot \exp{(-t_{drift}/\tau)}, \label{exp} \end{equation} where $N_0$ is the number of electrons at the beginning and $N(t_{drift})$ the number of electrons at the end of the drift path corresponding to a drift time $t_{drift}$. However, our purity monitor includes the following new features: \begin{itemize} \item a new, almost monochromatic source of free electrons based on an energetic 5.3~MeV $\alpha$-source; \item a dipole geometry to introduce a very high field in the region of the cathode and anode, and a very low field in the drift region in between (inhomogeneous field); \item a direct start and stop trigger for a source event from the independent readout of the cathode and anode induced signals; \item a built-in variation of the drift time, due to the different paths along the dipole field lines, introducing a spread in time for the arrival of the electron cloud at the anode; \item an event-by-event measurement of the drift time and of the induced charges before the drift at the cathode and after the drift at the anode, yielding the attenuation as a function of the event-by-event varying drift time. \end{itemize} \begin{figure}[htb] \begin{center} \includegraphics[width=0.5\textwidth]{scheme.eps} \caption{\label{scheme} Schematic view of the purity monitor.} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=0.9\textwidth]{source.eps} \caption{\label{source} Electron microscope picture of the spherical platinum cathode with the $^{210}Pb$ deposited on the surface.
The scale is shown in the bottom corner.} \end{center} \end{figure} \section{The purity monitor} \label{sec:puri} The purity monitor described in this paper is shown schematically in Figure \ref{scheme}; it uses the ionization electrons produced by the 5.3~MeV $\alpha$-particles emitted by $^{210}Po$ to measure the electron lifetime. The $\alpha$-emitter $^{210}Po$ is produced with a decay fraction of almost 100\% through $\beta$-decays in the decay chain $^{210}Pb \rightarrow {}^{210}Bi \rightarrow {}^{210}Po$. The decay chain ends at the stable $^{206}Pb$ isotope. In the decay chain of $^{210}Pb$ several $\alpha$- and $\beta$-decays occur, but only the $\beta$-decay of $^{210}Bi$ with an endpoint energy of $1.2~MeV$ and the $\alpha$-decay of $^{210}Po$ with an energy of $5.3~MeV$ have decay probabilities of almost $100\%$; all other decays are very rare. \begin{figure}[htb] \begin{center} \includegraphics[width=0.9\textwidth]{p40-f.eps} \caption{\label{efield} Electric drift field on the axis between the electrodes for different diameters of the electrodes and a high voltage of $\pm$1.5~kV.} \end{center} \end{figure} The high ionization density of $\alpha$-particles in LAr, typically between $750\div 1500\rm\ MeV/cm$ as opposed to about 2~MeV/cm for a m.i.p., leads to an extremely high recombination rate \cite{Birks} of the argon ions with the electrons along the track of the $\alpha$-particles, thus reducing the measurable electron charge \cite{diploma}. The recombination rate is reduced when the ionization occurs in a strong electric field \cite{Imel}. At a typical drift field of 500~V/cm, less than 1\% of the charge is recovered as free electrons. To expose the $\alpha$-source to the mandatory electric fields of the order of $40\div 150$~kV/cm, it was deposited onto the surface of a spherical platinum high voltage cathode with a diameter of about 0.5~mm; applying a high voltage of 2~kV produces an electric field $E\approx V/r$, where $r$ is the radius of the sphere, of about 80~kV/cm at the surface. The spherical electrodes were made from a 76~$\mu$m thick platinum wire by melting one end of the wire in the flame of a Bunsen burner \cite{diploma}; the surface tension of the molten platinum formed spherical drops. The 20~kBq~$^{210}Pb$-source \footnote{Purchased from AEA Technology QSA GmbH, D-38110 Braunschweig} with a mean lifetime of 31.9 years was dissolved in a 1.2 molar $HNO_3$-solution. A thin layer with an activity of about $100~Bq$ was deposited electrolytically on the spherical cathode. Figure \ref{source} shows an electron microscope picture of a cathode with a thin layer of $^{210}Pb$ deposited on the surface; the diameter of the electrode is 458 $\pm 12~\mu$m. \begin{figure}[htb] \begin{center} \includegraphics[width=0.9\textwidth]{geometry.eps} \caption{\label{monitor} Mechanical design of the purity monitor.} \end{center} \end{figure} The anode also consists of a Pt sphere, with a diameter of 335 $\pm 7 ~\mu$m. A symmetrical high voltage $\pm HV$ applied to the two electrodes produces approximately an electric dipole drift field. The electric drift field on the axis between the electrodes is shown in Figure \ref{efield} for a high voltage of 1.5~kV.
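Note that the surface field quoted above follows directly from the nominal sphere radius:
\[ E \approx \frac{V}{r} = \frac{2\ \mathrm{kV}}{0.025\ \mathrm{cm}} = 80\ \mathrm{kV/cm}. \]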
\begin{figure}[tb] \begin{center} \includegraphics[width=0.8\textwidth]{setup.eps} \caption{\label{setup} Measuring set-up.} \end{center} \end{figure} The number of free ionization electrons remaining after recombination depends on the electric field along the entire track of the $\alpha$-particle, i.e., it depends on the high voltage applied to the cathode and on its diameter. The range of the 5.3~MeV $\alpha$-particles in LAr is only about 50~$\mu$m, i.e., 1/5 of the cathode radius used. Thus, the electric field along the entire track of the $\alpha$-particle can (in the worst case) decrease by only 33\% from the value at the cathode surface. The number $N_0$ of electron-ion pairs produced by $\alpha$-particles depositing their total energy of 5.3~MeV in LAr is $N_0 = 5.3 \cdot 10^6~eV/w = 225 \cdot 10^3$, where $w = 23.6$~eV is the mean energy needed to create an electron-ion pair in LAr. We stress that this number is reduced if the $\alpha$-particle deposits a non-negligible fraction of its energy in the lead (the range in lead is 16~$\mu$m). Neglecting this effect, we anticipate here (see section~\ref{sec:edep} for more details) that the measured quenching of this charge by recombination (recombination factor $R$) varied from 0.22 at a field on the cathode surface of 44~kV/cm to 0.39 at 154~kV/cm (see Figure \ref{charge}), a variation consistent with the Box model of recombination. The charge of the electron cloud at the cathode and the anode is obtained by integrating the current signals induced on the electrodes by the movement of the electrons in the drift field. The fast movement in the high field near the surface of the electrodes induces a fast-rising current signal (see Figure \ref{pulses}) with a good signal-to-noise ratio. The argon ions have drift velocities orders of magnitude smaller than the electrons, so that they do not contribute to the current signal. To summarize, the configuration of Figure \ref{scheme} combines the following desirable features: \begin{itemize} \item the high electric field on the cathode surface containing the $\alpha$-source suppresses the recombination, \item the fast drift velocity of the electrons in the high electric field near the surface of the electrodes induces a short (approx. 1 $\mu$s) current pulse, which can be measured, \item the small drift velocity of the electrons in the central region of the dipole field (the minimal field strength is a few V/cm) makes it possible to measure long drift times. \end{itemize} The mechanical design of the purity monitor is shown in Figure \ref{monitor}. Two circular polyethylene plates with the electrodes in the center are held by three Macor rods. The monitor is shielded by a stainless steel cylinder and covered at the bottom and at the top with steel plates. The distance between the electrodes was varied between 20~mm and 50~mm. The measuring set-up is shown in Figure \ref{setup}. A glass dewar holding the purity monitor was mounted in a vacuum chamber. Before filling the dewar with LAr, the chamber was heated to about 70$^{\circ}$C for at least one day and pumped to a pressure of about 10$^{-6}$~mbar. The LAr passed through a 5~$\ell$ purification cartridge containing 50\% BTS\footnote{Fluka No. 18820, Fluka Chemie GmbH, CH-9471 Buchs SG, Switzerland} catalyst and 50\% copper oxide. The BTS and the copper oxide were reduced by a controlled flow of hydrogen gas through the cartridge before it could be used to purify LAr from oxygen.
The readout electronics for the two electrodes is operated at room temperature outside the vacuum chamber. It consists of a low noise charge preamplifier of the type used for the ICARUS drift chamber \cite{preamp} followed by a custom-made ac-coupled amplifier, which also acts as a bandpass filter transmitting frequencies from 530~Hz to 760~kHz (-3 dB values). The preamplifier integrates the current pulse from the electrode; its decay constant is about 250~$\mu$s. The circuit has an overall sensitivity of 10.8~mV/k$e^-$, corresponding to 68~mV/fC. Both electrodes have their own readout channels, which were carefully calibrated. The analog signals were sampled at 10~MHz and digitized by a 12 bit ADC card in a PC; the digital data were accumulated with a LabView program. \section{Results} \subsection{Signal shapes} Figure \ref{pulses} shows the measured pulse shapes from the cathode and the anode at a high voltage of $\pm$1.5~kV and an electrode distance of 20~mm. The integrated current (i.e. the charge) induced on the electrodes by the moving electron cloud is shown as a function of time. The fast-rising leading edge of the signals is induced by the fast movement of the electrons in the high field near the surface of the electrodes. The decay of the signal is given by the decay time of the integrating electronics. In the anode signal the contribution over the whole drift time is seen: it starts with the fast movement of the electron cloud near the cathode, stays flat during the slow drift through the central region of the drift field, and rises sharply when the cloud reaches the anode. In the cathode signal the contribution from the drift in the low field region is hidden in the rounding of the signal after the sharp rise. The charge is given by the pulse height difference between the maximum (minimum) and the baseline, defined by the average signal measured at times before the cathode pulse starts. \begin{figure}[htb] \begin{center} \includegraphics[width=0.9\textwidth]{p20-1.eps} \caption{\label{pulses} Measured cathode and anode pulse shapes (raw data).} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=0.9\textwidth]{p20-55.eps} \caption{\label{spectra} Measured cathode and anode pulse height distributions. The anode was at a distance of 20~mm from the cathode.} \end{center} \end{figure} Figure \ref{spectra} shows the pulse height spectra from the cathode and the anode measured at three different high voltages and at an electrode separation of 20~mm. The peaks from $\alpha$-particles are clearly separated from the noise and from the small pulses from $\beta$-decays ($\beta$-decay electrons have a maximal range in LAr of 3~mm, compared to 50~$\mu$m for $\alpha$-particles, and hence have a different distribution of induced signals between cathode and anode). The energy deposited by the $\alpha$-particles in LAr is reduced from the maximal energy of 5.3~MeV and smeared out by the energy deposition in the lead of the source (the range in lead is 16~$\mu$m). The maximal charge (for a fixed high voltage) is given by the end point of a pulse height spectrum and corresponds to the deposition of the total energy of 5.3~MeV in the LAr. The values for the maximal and the mean (corresponding to the peak of the distribution) measured charge are given in Table~\ref{tab1}.
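Two quick consistency checks on the numbers quoted above may be helpful. The stated sensitivity is self-consistent, since
\[ 10^3\, e^- \times 1.602\cdot 10^{-19}\ \mathrm{C} \approx 0.16\ \mathrm{fC}, \qquad \frac{10.8\ \mathrm{mV}}{0.16\ \mathrm{fC}} \approx 68\ \mathrm{mV/fC}, \]
and the measured maximal charges are of the size expected from the recombination factors of section~\ref{sec:edep},
\[ R\,N_0 \approx 0.22 \times 225\cdot 10^3 \approx 50\cdot 10^3 \quad \mathrm{and} \quad 0.39 \times 225\cdot 10^3 \approx 88\cdot 10^3 \]
free electrons at the lowest and highest fields, respectively.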
\begin{table}[ht] \begin{center} \begin{tabular}{|l|c|c|c|c|}\hline HV ($kV$) & Average charge &Maximal charge& Minimal drift time & Lifetime $\tau$ \\ & [$10^3$ electrons] & [$10^3$ electrons] & $[\mu s]$ & $[\mu s]$ \\ \hline 1.0 &36 &50 & 20 & 83 $\pm$2 \\ 1.5 &49 &62 & 14 & 105$\pm$2 \\ 2.0 &58 &70 & 12.5 & 112$\pm$2 \\ 2.5 &62 &76 & 11 & 109$\pm$2 \\ 3.0 &69 &82 & 10.5 & 120$\pm$3 \\ 3.5 &72 &88 & 10 & 108$\pm$3 \\ \hline \end{tabular} \end{center} \caption{ \label{tab1} The average and the maximal electron charge measured at the cathode, the minimal drift time and the lifetime measured at different high voltages for an electrode distance of 20~mm. The errors given for the lifetime are statistical fit errors only.} \end{table} \subsection{E-field dependence} \label{sec:edep} In order to study the recombination of the electron-ion pairs produced by $\alpha$'s as a function of the electric field, we consider the maximal charge (i.e. the end-point) of the electron clouds measured at the cathode. In this way, we try to suppress the uncertainties related to the energy loss of the $\alpha$'s in the source Pb before they enter the liquid argon\footnote{We note that regardless of the actual thickness of the lead deposit on the Pt sphere, solid angle considerations limit the statistics of $\alpha$'s to the few outermost microns of thickness.}. In Figure \ref{charge} the maximal measured charge is plotted as a function of the high voltage (or as a function of the electric field on the cathode surface). The strong dependence on the electric field supports the interpretation of the events as $\alpha$-decays; $\beta$-decays would never show such a dependence. To our knowledge, this curve represents the first measurement of the recombination factor for $\alpha$-particles in LAr as a function of the electric field at field strengths $\gtrsim 40\rm\ kV/cm$. The observed curve is well described by the Box model \cite{Imel}: \begin{equation} \frac{N}{N_0} = \frac{E}{C} \ln(1 + \frac{C}{E}), \end{equation} where $N$ is the number of electrons after recombination and $N_0$ before the recombination. $C$ is a constant depending on the ionizing particle and the medium, but not on the field. For the field, we use $E=f\times V$, where $V$ is the potential of the cathode and $f$ is the amplification due to the sphere geometry. For a sphere of radius $r$, one expects $f=1/r$. From a fit to the measured curve as a function of the cathode voltage, we extract $C=214\rm\ cm/kV$, $f=42\ \rm cm^{-1}$ and $N_0 = 141\cdot 10^3$ electrons. The 68\% C.L. intervals for the parameters, assuming a 10\% uncorrelated error on the points, are $159<C<299\rm\ cm/kV$, $29<f<55\ \rm cm^{-1}$ and $(112<N_0<198) \cdot 10^3$ electrons. A systematic variation of the measured points by $\pm 10\%$ does not appreciably change the fitted $C$ and $f$ parameters, which depend mostly on the changing slope of the curve. The fitted amplification factor $f=42\ \rm cm^{-1}$ is in good agreement with the measured properties of the cathode, since the radius of a sphere corresponding to such an amplification is $r\approx 1/f=240\ \mu m$\footnote{We recall that, since the range of the $\alpha$ is so short, the error introduced by the variation of the field over the range is less than 30\%.}. The measured diameter of the electrode (see section~\ref{sec:puri}) is 458 $\pm 12~\mu$m. In order to compare this result to expectations, we developed a simulation of the $\alpha$-source.
The number of ionization electrons was obtained by numerically integrating the semi-empirical Birks expression \cite{Birks} for $dN/dx$ along the track of the $\alpha$-particles \cite{diploma}: \begin{equation} \frac{dN}{dx} = \frac{\frac{dE}{dx} \frac{1}{w}}{1 +k_B(E) \frac{dE}{dx}}, \end{equation} where $w$ is the mean energy to produce an electron-ion pair in LAr, $w$ = 23.6~eV. The Birks law parameters were extracted from measurements in a LAr TPC with m.i.p.'s or stopping protons up to $\approx 30\rm\ MeV/cm$ at electric fields up to 500~V/cm \cite{Cennini:ha}. In order to predict the recombination of the $\alpha$, one must extrapolate these parameters to very high ionization densities, between 750 and $1500\rm\ MeV/cm$. The E-field dependence of the Birks factor $k_B(E)$ was obtained from the Box model. For $\alpha$-particles in LAr, we obtain $C = 210\rm\ cm/kV$. This is in excellent agreement with the observed shape of the curve. \begin{figure}[htb] \begin{center} \includegraphics[width=0.9\textwidth]{p20-hv.eps} \caption{\label{charge} Maximal measured charge at the cathode as a function of the applied high voltage (or as a function of the electric field on the cathode surface). Also shown is the average measured charge (see text for details).} \end{center} \end{figure} \subsection{Drift electron lifetime} Selecting $\alpha$-particle events with a cut on the pulse height, the measured anode-to-cathode charge ratio $Q_a/Q_c$ for each selected event is plotted versus the drift time in Figure \ref{tau0}, for an electrode distance of 20~mm and high voltages of 1.5 and 2.5~kV. The drift time of the events varies between $15\div 55~\mu s$ at 1.5~kV and between $12\div 35~\mu s$ at 2.5~kV. This variation is due to the different drift paths of the electrons in the dipole field: the minimal drift time corresponds to electrons drifting on the axis between the electrodes; electrons starting off axis on the cathode sphere follow the dipole field lines, i.e., they have a longer drift path and, in addition, feel the smaller drift field in the central region. A direct fit of an exponential decay function to the charge ratio as a function of the drift time yields the mean lifetime $\tau$ of the electrons. Table \ref{tab1} summarizes the results obtained with different high voltages for an electrode distance of 20~mm. \begin{figure}[htb] \begin{center} \includegraphics[width=0.9\textwidth]{p20-20.eps} \caption{\label{tau0} The anode-to-cathode charge ratio plotted as a function of the drift time. The mean lifetime $\tau$ is obtained from an exponential fit.} \end{center} \end{figure} The capture cross section of electrons by electronegative impurities depends on the electric drift field \cite{capture}, i.e., the observed drift-electron lifetime can also depend on the drift field and not only on the concentration of impurities. Since the drift field in our purity monitor is very inhomogeneous, this effect can in principle also distort the exponential decay curve of the charge. However, for drift fields $\leq$ 600~V/cm the lifetime of electrons in LAr at an $O_2$ concentration of 3.5~ppb was measured to be almost constant \cite{edep}. As seen from Figure \ref{efield}, the drift path in the high field ($\geq$ 1~kV/cm) is only a very small fraction of the total drift path and contributes very little to the drift time.
Thus, only a small correction, depending on the applied high voltage and the distance between the electrodes, has to be applied to the measured drift time to normalize it to a constant drift field. This correction was not applied to the lifetimes given here. \section{Conclusion} To conclude, we have developed a novel LAr purity monitor using an $\alpha$-source in a very high electric field to produce the free drift electrons. The adopted dipole geometry has allowed us to avoid the otherwise typical strong quenching of the $\alpha$ ionization signal. We have measured the recombination factor of the ionization charge from $\alpha$-particles as a function of the electric field in the range $40\div 150$~kV/cm. In a series of measurements performed in a dedicated setup, drift-electron lifetimes of the order of 100~$\mu$s were measured at electrode distances of 20~mm with a precision of 2--5\%. To measure longer lifetimes, up to a few ms, larger electrode distances could be needed, making it necessary to install field-shaping electrodes in the long drift region. \section*{Acknowledgements} We thank Prof. B. Eichler from the Radiochemistry Department of the Paul Scherrer Institut (PSI), CH-5232 Villigen PSI, for his advice and for his help in preparing the $\alpha$-source. We thank P.~Picchi, F.~Pietropaolo and F.~Sergiampietri for useful discussions and suggestions. This work was supported by the Swiss National Research Foundation.
\section{Model system}\label{s2} We start from the Hamiltonian describing the coupling of a 2D TI and a $d$-wave superconductor, $H =H_{\mathrm{TI}}+H_{\mathrm{SC}}+H_{\mathrm{I}}$. $H_{TI}$ describes a 2D TI on the square lattice~\cite{re200,re201}, \begin{equation} \begin{aligned} H_{TI} &=\sum_{{\bf k}}C_{{\bf k}}^\dagger (h_\mathbf{k}\sigma_3s_0+2\lambda_{0}\sin k_x\sigma_1s_3\\&+2\lambda_0\sin k_y\sigma_2 s_0)C_{{\bf k}}, \end{aligned} \end{equation} with $h_\mathbf{k}=h_0-2t(\cos k_x+\cos k_y)$. $C^\dagger_{\bf k}$ is a four-component vector of creation operators, $C^\dagger_{\bf k}=(c^\dagger_{{\bf k}1\uparrow},c^\dagger_{{\bf k}2\uparrow},c^\dagger_{{\bf k}1\downarrow}, c^\dagger_{{\bf k}2\downarrow})$. The subscripts $1,2$ and $\uparrow$, $\downarrow$ represent the orbital and spin indices, respectively. $s_i$ and $\sigma_i$ are the identity matrix $(i=0)$ or Pauli matrices $(i=1,2,3)$ in the spin and orbital spaces, respectively. $H_{SC}$ describes a $d$-wave superconductor, \begin{equation} H_{SC}=\sum_{{\bf k}\sigma}\varepsilon_{\bf k}d^\dagger_{{\bf k}\sigma}d_{{\bf k}\sigma}+\sum_{{\bf k}}\Delta_{\bf k}(d^\dagger_{{\bf k}\uparrow}d^\dagger_{{-\bf k}\downarrow}+h.c.), \end{equation} with $\varepsilon_{\bf k}=-2t(\cos k_x+\cos k_y)-\mu$, and $\Delta_{\bf k}=2\Delta_0 (\cos k_x-\cos k_y)$. $H_I$ is the interlayer single-particle hopping term, with \begin{equation} H_{I}=-t_\perp \sum_{{\bf k}\alpha\sigma}(c_{{\bf k}\alpha\sigma}^\dagger d_{{\bf k}\sigma}+h.c.). \end{equation} To study the edge states and the Majorana corner states numerically, let us transform the Hamiltonian to real space by performing a Fourier transformation. The Hamiltonian is then rewritten as, \begin{equation} \begin{aligned} H_{TI} &=-t\sum_{{\bf i}\alpha}(C_{{\bf i}}^\dagger \sigma_3s_0 C_{{\bf i}+\alpha}+h.c.)+h_0\sum_{{\bf i}}C_{{\bf i}}^\dagger \sigma_3s_0 C_{{\bf i}} \\&-\lambda_0\sum_{{\bf i}}(iC_{{\bf i}}^\dagger\sigma_1s_3C_{{\bf i}+\hat{x}}-iC_{{\bf i}}^\dagger\sigma_2s_0C_{{\bf i}+\hat{y}}+h.c.), \end{aligned} \end{equation} \begin{equation} \begin{aligned} H_{SC} &=-t\sum_{{\bf i}\alpha\sigma}(d_{{\bf i}\sigma}^\dagger d_{{\bf i}+\alpha,\sigma}+h.c.)-\mu\sum_{{\bf i}\sigma}d_{{\bf i}\sigma}^\dagger d_{{\bf i}\sigma} \\&+\sum_{\langle{\bf ij}\rangle}(\Delta_{\bf ij}d_{{\bf i}\uparrow}^\dagger d_{{\bf j}\downarrow}^\dagger+h.c.), \end{aligned} \end{equation} and \begin{equation} H_{I}=-t_\perp \sum_{{\bf i}\alpha\sigma}(c_{{\bf i}\alpha\sigma}^\dagger d_{{\bf i}\sigma}+h.c.). \end{equation} For the $d$-wave pairing, the site ${\bf j}$ is a nearest-neighbor site of the site ${\bf i}$, with $\Delta_{\bf ij}=\pm\Delta_0$ (the sign depends on whether $\langle{\bf ij}\rangle$ is along the $x$ or the $y$ direction). The total Hamiltonian can be rewritten in matrix form. In momentum space, the matrix is a $12\times 12$ matrix $\hat{M}_{\bf k}$ with $H=\sum_{\bf k}\Psi^{\dagger}_{\bf k}\hat{M}_{\bf k}\Psi_{\bf k}$. The vector $\Psi^{\dagger}_{\bf k}$ is expressed as, \begin{equation} \Psi^\dagger_{\bf k}=(C^\dagger_{\bf k},C_{-{\bf k}},d^\dagger_{{\bf k}\uparrow},d^\dagger_{{\bf k}\downarrow},d_{-{\bf k}\uparrow},d_{-{\bf k}\downarrow}). \end{equation} In real space, the Hamiltonian matrix $\hat{M}$ is an $N\times N$ matrix ($H=\Psi^{\dagger}\hat{M}\Psi$), with $N=8N_1+4N_2$ ($N_1$ and $N_2$ are the numbers of sites in the 2D TI layer and the $d$-wave superconducting layer, respectively).
The vector $\Psi^{\dagger}$ is expressed as, \begin{eqnarray} \Psi^\dagger =(C^\dagger_1,C_{1},\cdots,C^\dagger_{N_1},C_{N_1},d^\dagger_{1\uparrow},\cdots,d_{N_2\downarrow}). \end{eqnarray} Here the vector $C^\dagger_{i}$ is the Fourier transform of the vector $C^\dagger_{\bf k}$. Diagonalizing the Hamiltonian matrix, we obtain the retarded Green's function matrix $\hat G$, with the elements expressed as, \begin{equation} G_{ij}(E)=\sum_n\frac{u_{in}u^{*}_{jn}}{E-E_n+i\Gamma}. \end{equation} Here $u_{in}$ and $E_n$ are the eigenvectors and eigenvalues of the Hamiltonian matrix, respectively. In momentum space, the spectral function of the 2D TI layer is calculated through, \begin{equation} A({\bf k},E)=-\frac{1}{\pi}\sum^4_{p=1}\mathrm{Im} G_{pp}({\bf k},E). \end{equation} The proximity-induced pairing term for the orbital $\tau$ can be studied through the mean-field pairing order parameter, expressed as, \begin{equation} \Delta_\tau({\bf k})=\langle c^\dagger_{{\bf k}\tau\uparrow}c^\dagger_{-{\bf k}\tau\downarrow}\rangle=\sum_n{u^{*}_{\tau,n}({\bf k})u_{\tau+6,n}({\bf k})}f(E_n), \end{equation} where $f(x)$ is the Fermi distribution function. In real space, the effective pairing order parameter of the sites $i$ and $j$ for the orbital $\tau$ can be expressed as, \begin{equation} \Delta^\tau_{ij}=\sum_nu^{*}_{h(i),n}u_{h(j)+6,n}f(E_n), \end{equation} with $h(i)=\tau+8(i-1)$. We define the site-dependent $d$-wave pairing magnitude for the orbital $\tau$ as \begin{equation} \Delta^\tau_i=\mid \Delta^\tau_{i,i+\hat{x}}+\Delta^\tau_{i,i-\hat{x}} -\Delta^\tau_{i,i+\hat{y}}-\Delta^\tau_{i,i-\hat{y}}\mid. \end{equation} At the system edge, $\Delta^\tau_i$ is expressed as \begin{equation} \Delta^\tau_i=\mid 2(\Delta^\tau_{i,i+\hat{\alpha}}+\Delta^\tau_{i,i-\hat{\alpha}}) \mid \qquad (\alpha=x,y). \end{equation} \begin{figure} \centering \includegraphics[width=1.8in]{fig1.eps} \caption{(Color online) (a) The energy bands in momentum space. (b) The spectral function as a function of energy and momentum. }\label{fig1} \end{figure} The local density of states (LDOS) at site $i$ in the 2D TI layer can be calculated through the real-space Green's function, \begin{equation} \rho_i(E)=-\frac{1}{\pi}\sum^4_{p=1}\mathrm{Im} G_{m+p,m+p}(E), \end{equation} with $m=8(i-1)$. In the results presented below, the parameters are set to $t=1$, $\lambda_0=0.5$, $h_0=3$, $\mu=-0.3$, $\Delta_0=0.2$, $t_\perp=0.8$, and $\Gamma=0.01$. Our main results are not sensitive to the particular parameter values considered. We first study the energy spectrum in the system bulk. The energy bands obtained by diagonalizing the $12\times12$ Hamiltonian in momentum space are presented in Fig.~1(a). One striking feature is that the $\mathcal{C}_4$ rotational symmetry of the energy bands is broken, namely, the quasiparticle bands are significantly different along the $(k_x,0)$ direction and the $(0,k_y)$ direction. In contrast, this rotational symmetry is preserved if the $d$-wave pairing term is added directly to the 2D TI Hamiltonian and the inter-layer hopping is neglected~\cite{re100,RN4,re34}. Experimentally, the energy bands can be detected through the spectral function. For the 2D TI layer, we have checked numerically that the spectra are fully gapped in the whole Brillouin zone. The spectral function of the 2D TI layer as a function of momentum and energy is presented in Fig.~1(b).
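The Green's function and the spectral function above can be evaluated directly from the eigen-decomposition of the Bloch matrix. A minimal sketch (in Python, assuming the $12\times12$ matrix $\hat{M}_{\bf k}$ has already been constructed; names are illustrative) is:
\begin{verbatim}
import numpy as np

def spectral_function(M_k, energies, gamma=0.01, n_orb=4):
    # Diagonalize the Hermitian Bloch matrix M_k, build the diagonal of the
    # retarded Green's function G_pp(E) = sum_n |u_pn|^2 / (E - E_n + i*gamma),
    # and sum -Im G_pp / pi over the first n_orb (TI layer) components.
    E_n, U = np.linalg.eigh(M_k)
    A = np.zeros(len(energies))
    for i, E in enumerate(energies):
        G_diag = np.sum(np.abs(U[:n_orb, :])**2 / (E - E_n + 1j * gamma), axis=1)
        A[i] = -G_diag.imag.sum() / np.pi
    return A
\end{verbatim}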
Here, too, the $\mathcal{C}_4$ symmetry is broken, and such asymmetric behavior may be detected in future experiments. \begin{figure} \centering \includegraphics[width=3in]{fig2.eps} \caption{(Color online) Numerical results for a cylinder geometry. (a) The eigenvalues of the Hamiltonian with the open boundary condition along the $x$-direction. (b) The spectral function at the $i_x=1$ boundary. (c) The spectral function at the $i_x=N_x$ boundary. (d-f) Similar to panels (a-c) but for the open boundary condition along the $y$-direction. }\label{fig2} \end{figure} We now study the edge states by considering a cylinder geometry with the open boundary condition along one direction and the periodic boundary condition along the other. The energy bands with the open boundary condition along the $x$ direction are presented in Fig.~2(a). The full energy bands include both the bulk states and the edge states. Note that, at these two boundaries, the energy bands are still fully gapped. The gapped edge states can be studied more clearly through the spectral functions at the two system boundaries, which are presented in Figs.~2(b) and 2(c), respectively. As displayed, the spectra at these two boundaries are exactly the same. An energy gap of around 0.05 is revealed. This gapped behavior originates from the proximity-induced effective $d$-wave pairing term~\cite{re100,RN4,re34}. We now turn to the edge states when the open boundary is along the $y$ direction. The corresponding energy bands and the spectral functions at the system boundaries ($i_y=1$ and $i_y=N_y$) are presented in Figs.~2(d)-2(f). As presented, both the energy bands and the spectral functions are different from those shown in Figs.~2(a)-2(c), indicating the $\mathcal{C}_4$ symmetry breaking. Interestingly, here the energy band spectrum [Fig.~2(d)] shows gapless edge states, significantly different from previous theoretical results~\cite{re100,RN4,re34}. The spectral functions at the two boundaries are also different, namely, at the $i_y=1$ boundary, the spectrum is still gapped, with a gap magnitude of around 0.08. This value is even larger than those at the $i_x=1$ and $i_x=N_x$ boundaries. At the $i_y=N_y$ boundary, the edge state is gapless. Thus, for the cylinder system, the four boundaries are nonequivalent and the zero energy states appear at the $i_y=N_y$ boundary. Such asymmetric behavior may be detected in future experiments. \begin{figure} \centering \includegraphics[width=3in]{fig3.eps} \caption{(Color online) (a) Schematic illustration of a 2D TI grown on a $d$-wave high-T$_c$ superconductor. (b) The eigenvalues of the Hamiltonian in real space. (c) The intensity plot of the zero energy LDOS in real space. }\label{fig3} \end{figure} Now let us discuss the possible Majorana corner states. We consider a 2D TI (with a finite-size $40\times40$ lattice) placed on a larger high-T$_c$ superconductor, as sketched in Fig.~3(a). With open boundaries along both the $x$ and $y$ directions, the Hamiltonian matrix is diagonalized. The corresponding eigenvalues are presented in Fig.~3(b). Four zero energy eigenvalues are revealed. Note that there is no zero energy eigenvalue when periodic boundary conditions are taken. Therefore, these four states should be located at the system edges and are related to Majorana zero modes.
For the topological superconducting system, four zero energy eigenvalues should come from two zero energy physical quasiparticles, corresponding to two pairs of Majorana zero modes at the system boundaries. The distributions of the two pairs of Majorana zero modes can be seen from the zero energy LDOS, as displayed in Fig.~3(c). These two pairs are located at the two lower corners of the system (one pair at each corner). As for the upper corners, no zero energy state exists. The asymmetric results presented above can be understood by exploring the pairing order parameter in the 2D TI layer. The order parameter for the orbital 1 as a function of the momentum ${\bf k}$ is presented in Fig.~4(a). As is seen, both the $\mathcal{C}_4$ symmetry and the inversion symmetry of the magnitude of the order parameter are broken. We can separate the whole pairing order parameter into the singlet channel $\Delta_e$ and the triplet channel $\Delta_o$ [$\Delta_1({\bf k})=\Delta_e({\bf k})+\Delta_o({\bf k})]$, with $\Delta_e({\bf k})=1/2[\Delta_1({\bf k})+\Delta_1(-{\bf k})]$ and $\Delta_o({\bf k})=1/2[\Delta_1({\bf k})-\Delta_1(-{\bf k})]$. The corresponding numerical results for these two channels are displayed in Figs.~4(b) and 4(c), respectively. For the singlet channel, the result is consistent with the $d_{x^2-y^2}$ pairing symmetry. For the triplet channel, a $p$-wave symmetry is revealed. The whole effective pairing symmetry should therefore be $d$+$p$ wave. As a result, both the $\mathcal{C}_4$ symmetry and the inversion symmetry are broken for the spin-dependent spectral function of the orbital 1. As for the whole spectral function, the $\mathcal{C}_4$ symmetry is broken, while the inversion symmetry is preserved due to the time-reversal symmetry of the system. \begin{figure} \centering \includegraphics[width=3.3in]{fig4.eps} \caption{(Color online) Intensity plots of the pairing order parameter of the 2D TI layer. (a) The whole pairing order parameter. (b) The singlet channel part of the pairing order parameter. (c) The triplet channel part of the pairing order parameter. }\label{fig4} \end{figure} In real space, with open boundaries along both the $x$ direction and the $y$ direction, the site-dependent $d$-wave pairing order parameter $\Delta_i$ for the orbital 1 is displayed in Fig.~5(a). Cuts of $\Delta_i$ along the four boundaries are plotted in Fig.~5(b). As is seen, at the $i_y=1$ boundary the magnitude is relatively large. For the $i_x=1$ and $i_x=N_x$ boundaries, the same magnitude is found, smaller than that at the $i_y=1$ boundary. As for the $i_y=N_y$ boundary, the magnitude of the induced pairing order parameter is rather small. Therefore, with a cylinder geometry, the topologically protected gapless edge states may survive at this boundary. These results are consistent with the numerical results for the spectral function at the system boundaries [Fig.~2]. In the meantime, for the finite-size system with open boundaries, the Majorana corner states will generally emerge at the corners where the $d$-wave pairing term changes sign~\cite{re100,RN4,re34}. However, since the effective $d$-wave order at the $i_y=N_y$ boundary is too small, the upper Majorana corner states do not come into being. Therefore, here the Majorana corner states only emerge at the lower corners, as presented in Fig.~3.
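The singlet/triplet decomposition used above is straightforward to evaluate numerically once the momentum-space order parameter is available. A minimal sketch (in Python; the callable returning $\Delta_1({\bf k})$ and the grid names are illustrative) is:
\begin{verbatim}
import numpy as np

def singlet_triplet_split(delta_fn, kx, ky):
    # Split a momentum-space pairing function into even (singlet) and odd
    # (triplet) channels: Delta_{e/o}(k) = [Delta(k) +/- Delta(-k)] / 2.
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    d_plus = delta_fn(KX, KY)
    d_minus = delta_fn(-KX, -KY)
    delta_e = 0.5 * (d_plus + d_minus)   # d_{x^2-y^2}-like part
    delta_o = 0.5 * (d_plus - d_minus)   # p-wave-like part
    return delta_e, delta_o
\end{verbatim}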
\begin{figure} \centering \includegraphics[width=3in]{fig5.eps} \caption{(Color online) (a) The intensity plot of the site-dependent $d$-wave pairing order parameter of the 2D TI layer. (b) Cuts of the $d$-wave pairing order parameter along the four boundaries. }\label{fig5} \end{figure} The effective quasiparticle pairing in the 2D TI layer can also be explored through the anomalous Green's function~\cite{supp}. In momentum space, our analytical results for the anomalous Green's function verify that the singlet channel and the triplet channel pairings indeed coexist, leading to the $\mathcal{C}_4$ symmetry breaking. Numerically, the imaginary parts of the anomalous Green's function are consistent with the pairing order parameter shown in Fig.~4. The asymmetric behavior at the system boundaries can also be well understood through the anomalous Green's function and the effective Hamiltonian at the system boundaries~\cite{supp}. Without coupling to the superconductor, the 2D TI is gapless at the boundaries, with the linear quasiparticle dispersion crossing the Fermi energy. The effective Hamiltonian at the boundaries should include the intraorbital hopping term and the interorbital hopping term. When the TI is coupled to a superconductor, both terms contribute to the effective pairing at the boundaries. For the $i_y=N_y$ boundary, the interorbital hopping constant is opposite to the intraorbital one, and these two contributions cancel out. Therefore, the effective pairing magnitude at this boundary is rather small. For the $i_y=1$ boundary, these two contributions add up directly, and the pairing magnitude at this boundary is relatively large. For the $i_x=1$ or $i_x=N_x$ boundary, the interorbital hopping constant is imaginary. An effective $d$-wave pairing term is also induced, but its magnitude is smaller than that at the $i_y=1$ boundary. The analytical results for the anomalous Green's function at these four boundaries are consistent with the above discussion. Finally, we would like to make several remarks. First, the proximity effect has been widely used to artificially create topological superconductors. Very recently, based on the proximity effect, several proposals have been put forward to realize second-order topological superconductors in various hybrid systems. At this stage, studying the proximity effect in a more rigorous way, especially for possible higher-order topological superconducting systems, is timely and of broad interest. Secondly, for the finite-size system, the Majorana bound states only emerge at the lower corners. The zero energy degeneracy is reduced to half of that obtained from previous phenomenological theoretical results. The reduced degeneracy may make the Majorana bound states more controllable. Thirdly, our main results can be well understood. In momentum space, the proximity-induced pairings include both a $d$-wave component and a $p$-wave component, leading to the breaking of the $\mathcal{C}_4$ symmetry. At the system boundaries in real space, the proximity-induced pairing terms are mainly contributed by the edge states of the TI. The edge states have contributions from the intraorbital channel and the interorbital channel. The coexistence of these two channels may strengthen or weaken the induced pairing gap, leading to the interesting asymmetric behaviors.
Lastly, we would like to point out that the present system is different from both the first order topological superconductor and the higher order one, namely, the system can be gapless along a certain one-dimensional boundary while being fully gapped along the other boundaries. In the meantime, the Majorana bound states emerge only at some of the corners of the finite system. It is also interesting that the zero energy states may shift from the upper boundary to the lower corners when the boundary condition along the $x$-direction changes.
\subsection{Model definition} \label{subsec:model} We define and test the clustering property following~\cite{908985}. Let us describe the model of the experiment. We assume that the data space~$\mathcal{U}$ has dimension~$d$ and finite granularity, say, a coordinate is an integer $n$-bit number. So, $\mathcal{U} = \{0,1,\ldots,2^n-1\}^d$. Each point of the space corresponds to a grid cell. A space-filling curve (below SFC for short) introduces a bijection~$\omega\colon \mathcal{U} \to \{0,1,\ldots,2^{nd}-1\}$. A \textit{query} is any subset $q\subseteq \mathcal{U}$. We consider rectangular queries, i.e. intersections of coordinate half-spaces. More generally (see~\cite{908985}), one can consider queries corresponding to connected and simply connected domains. \begin{remark} Here we understand~$\mathcal{U}$ as a subset of the lattice~$\mathcal{Z} = \mathbb{Z}^d$. We need some other identification of queries with geometrical objects to define connected and simply connected sets correctly. Namely, we consider the Euclidean space~$E = \mathcal{U}\otimes_\mathbb{Z}\mathbb{R}$. Consider a closed unit cube~$C$ in~$E$. It is a fundamental domain of the action~$\mathcal{Z} \lefttorightarrow E$. Given a query~$q$, denote by~$C_q$ the set \[ C_q := \bigcup_{p\in q}(p + C) \subset E \] that consists of the shifts of the cube~$C$ by all points of the query. We say that a query~$q$ is connected (or simply connected) if so is the interior of~$C_q$. For instance, a two point query $q=\{x,y\}$ is connected if and only if $C_q^\circ$ is connected, i.\,e.~$x$~and~$y$ differ by~$1$ in one coordinate and coincide in all the others. \end{remark} \begin{definition} A subset~$p\subseteq q$ of a query is called a \textit{cluster} with respect to an SFC~$\omega$ if it is a maximal subset such that the points (or cells) of~$p$ are numbered consecutively by~$\omega$. We denote the number of clusters in~$q$ by~$c_q(\omega)$. \end{definition} \begin{definition} The \textit{clustering property} of an SFC~$\omega$ with respect to a (possibly parametric) class of queries~$\mathcal{Q}$ is defined as the average number~$c_{\mathcal{Q}}(\omega)$ of clusters in~$q\in \mathcal{Q}$ (or the limits/asymptotics of the cluster number as a function of the parameters, if they exist). \end{definition} Of course, there are also implicit parameters: the space granularity parameter~$n$ and the distribution over~$\mathcal{Q}$. Usually, for fixed parameters the set~$\mathcal{Q}$ is finite, and the distribution is assumed to be uniform. If we specify a probability measure~$\mu$ on~$\mathcal{Q}$, then \[ c_\mathcal{Q}(\omega) := \int_\mathcal{Q}c_q(\omega)d\mu. \] We consider the class of cubic queries~$\mathcal{Q}_\ell$ where~$\ell$ is the side length of the cubes. In~\cite{908985}, parametric classes of queries of the same shape, parametrized by their scale, were considered. Also, limit asymptotics of the average cluster number of a shape (cubes, spheres and some others) as a function of the scale were considered. \subsection{Simulation results} \label{subsec:simulation} Our main goal is to minimize the number of disk accesses. This number depends on the capacity of disk pages, the model of memory access, and the particular algorithms of access, insertion and deletion. We omit the technical details and compute the average number of clusters, or \textit{continuous runs}, over a subspace representing a query region. In~\cite{908985} the analytical results for different curves were tested on different query shapes and an increasing range of sizes.
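The cluster number $c_q(\omega)$ and its average over random cubic queries can be computed directly from the definitions above. A minimal Python sketch of such a computation is given below; the index map \texttt{omega}, taking a tuple of cell coordinates to its curve index, is assumed to be given, and possible wrap-around of a cyclic numbering is ignored.
\begin{verbatim}
import itertools, random

def count_clusters(query_points, omega):
    # number of maximal runs of consecutive curve indices covered by the query
    idx = sorted(omega(p) for p in query_points)
    return 1 + sum(1 for a, b in zip(idx, idx[1:]) if b - a > 1)

def average_clusters(omega, n, d, side, samples=10000):
    # statistical simulation: sample random cubic queries of a given side length
    total = 0
    for _ in range(samples):
        corner = [random.randrange(2 ** n - side + 1) for _ in range(d)]
        query = [tuple(c + o for c, o in zip(corner, offs))
                 for offs in itertools.product(range(side), repeat=d)]
        total += count_clusters(query, omega)
    return total / samples
\end{verbatim}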
Note that the number of different query shapes is exponential in the dimensionality. Consequently, for a large grid space and high dimensionality, each simulation run may require an excessively large number of queries. So we restrict simulations to~$d=2,3,4$. For a given query shape and size, we do not test all the query positions but perform a statistical simulation by random sampling of queries. For query shapes, we choose squares and cubes. In~\cite{908985} the asymptotic and simulation results were shown to be very close and were considered identical up to round-off errors. Also, for different shapes the simulation results agreed with the analytic calculation of the asymptotics. So, we consider only square and cubic queries, since the estimation method is reliable. The results of the experiment are listed in Table~\ref{tab:results}. For $d=2$ we compare the average number of clusters for $10000$ random queries on a $1024 \times 1024$ grid (in~\cite{908985} for~$d=2$ the grid is the same and there were~$200$ queries for a given combination of shape and size). \begin{table} \begin{tabular}{|c|rrr|} \hline \multicolumn{4}{|c|}{$d=2$} \\ \hline $\ell$ & Z & Hilbert & H \\ \hline 2 & 2.62 & 2.00 & 1.99 \\ 3 & 4.51 & 3.00 & 3.01 \\ 4 & 6.36 & 4.01 & 3.99 \\ 5 & 8.25 & 4.99 & 5.00 \\ 6 & 10.23 & 6.00 & 6.00 \\ 7 & 12.26 & 7.00 & 7.00 \\ 8 & 14.23 & 8.03 & 8.00 \\ 9 & 16.14 & 9.01 & 9.02 \\ 10 & 18.00 & 9.94 & 9.97 \\ 11 & 20.04 & 10.98 & 10.98 \\ 12 & 22.24 & 12.07 & 12.00 \\ 13 & 24.06 & 12.99 & 12.99 \\ 14 & 26.04 & 14.00 & 14.00 \\ 15 & 28.17 & 15.04 & 15.02 \\ \hline \end{tabular} \begin{tabular}{|c|rrr|} \hline \multicolumn{4}{|c|}{$d=3$} \\ \hline $\ell$ & Z & Hilbert & H \\ \hline 2 & 5.34 & 4.02 & 4.00 \\ 3 & 13.51 & 9.04 & 9.01 \\ 4 & 25.58 & 16.08 & 16.04 \\ 5 & 41.63 & 25.07 & 24.99 \\ 6 & 61.62 & 36.10 & 36.03 \\ 7 & 85.74 & 49.08 & 49.00 \\ 8 & 113.96 & 64.38 & 64.13 \\ 9 & 145.76 & 80.90 & 81.00 \\ 10 & 181.04 & 99.85 & 99.75 \\ 11 & 221.63 & 120.50 & 120.85 \\ 12 & 267.50 & 144.72 & 144.77 \\ 13 & 314.00 & 169.28 & 169.21 \\ 14 & 363.72 & 195.11 & 194.73 \\ 15 & 421.75 & 225.17 & 224.99 \\ \hline \end{tabular} \begin{tabular}{|c|rrr|} \hline \multicolumn{4}{|c|}{$d=4$} \\ \hline $\ell$ & Z & Hilbert & H \\ \hline 2 & 10.74 & 7.95 & 8.05 \\ 3 & 40.49 & 26.96 & 26.98 \\ 4 & 102.33 & 64.39 & 64.14 \\ 5 & 208.39 & 125.23 & 125.01 \\ 6 & 372.55 & 216.60 & 217.18 \\ 7 & 600.43 & 343.52 & 343.02 \\ 8 & 911.06 & 513.73 & 512.52 \\ 9 & 1312.09 & 730.78 & 729.02 \\ 10 & 1810.43 & 991.12 & 995.21 \\ 11 & 2440.48 & 1331.96 & 1331.06 \\ 12 & 3185.88 & 1734.03 & 1728.66 \\ 13 & 4080.00 & 2203.18 & 2197.00 \\ 14 & 5091.67 & 2732.45 & 2726.83 \\ 15 & 6329.08 & 3378.49 & 3375.01 \\ \hline \end{tabular} \caption{Average number of clusters in cubic queries with the cube side~$\ell$ for $d=2,3,4$.} \label{tab:results} \end{table} \FloatBarrier \subsection{Construction} \label{subsec:construction} This section is devoted to the construction of a cyclic fractal space-filling curve for any $d>1$ without using symmetries. For any dimension $d$, we will traverse the half-sized cells in the initial cube in the same way. Taking a $d$-bit number $k$ as an index in the traversal (counting from $0$), we obtain the corresponding cell coordinate bits as the consecutive bits of the number \[ g_d(k) := k\oplus \lfloor (k \mod 2^d)/2 \rfloor \mod 2^d \] (the symbol~$\oplus$ means bitwise sum, or xor). This function permutes the set~$\{0,\ldots,2^d-1\}$.
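As a minimal illustration, $g_d$ and its inverse can be computed with a few bit operations; $g_d$ is the standard binary reflected Gray code on $d$ bits, and its inverse is a prefix xor. A Python sketch:
\begin{verbatim}
def g(d, k):
    # coordinate bits of the cell visited at index k (binary reflected Gray code)
    k %= 1 << d
    return k ^ (k >> 1)

def g_inv(d, x):
    # inverse map: traversal index of the cell with coordinate bits x
    k = 0
    while x:
        k ^= x
        x >>= 1
    return k % (1 << d)
\end{verbatim}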
Therefore, the function~$g_d^{-1}$ is well-defined on the set~$\{0,\ldots,2^d-1\}$. As we claimed, we do not apply any reflections or rotations to the cells and sub-cells. For the curve construction we need only the local mutations. For the explicit computation of the correspondence between indexes and cells we need to calculate the index shifts and find all direction reversals. \subsection{Local mutation} \label{subsec:loc-mutation} For convenience let us assume that the grid cells are unit cubes, and the initial big cube has side length~$2^n$. Actually, for any dimension~$d>1$ we will apply the same local mutation. This mutation will always act on the central~$4 \times 2 \times \ldots \times 2$-parallelepiped. \begin{lemma} Given~$d>1$, for any~$n\geqslant 2$ the restriction of the graph composed of the $2^d$ half-size cycles in the cube with side length~$2^n$ onto the central~$4\times 2 \times \ldots \times 2$-parallelepiped forms the same graph, namely, if we denote its vertices by $\{0,1,2,3\}\times\{0,1\}^{d-1}$, then the edge set is \[ (\{0\}\times p, \{1\}\times p)\text{ and } (\{2\}\times p, \{3\}\times p)\text{ for all } p\in\{0,1\}^{d-1}. \] \end{lemma} \begin{proof} Assume that we have the grid of integral points in the cube~$[0,2^{n+1}-1]^d$, and we initially have the cyclic traversals of the cubes of side~$2$. They form a grid of~$2^{nd}$ cells. Then we successively apply mutations, gathering cycles into cycles traversing cells of sizes~$4, 8, \ldots, 2^{n+1}$. Each time we consider the central $4\times 2\times \ldots \times 2$-parallelepiped in some cell of size~$4,8,\ldots,2^{n+1}$; each of these parallelepipeds has an even minimal first coordinate and odd minimal other coordinates. This implies that the restrictions of the initial $2^{nd}$ cycles on them are the same and coincide with the graph written above. At the same time, these parallelepipeds have pairwise non-intersecting sets of vertices. Therefore, the mutations of the previous steps of the construction do not affect the final step. \end{proof} In Figs.~\ref{fig:figure-d2n3step1},~\ref{fig:figure-d2n3step2},~\ref{fig:figure-d2n3step3} we see examples of mutations. In these figures we color some black edges red. Then we draw a number of green edges such that together the green and red edges form cycles. After the mutation we remove the red edges and color the green edges black. Note that if in the red-green cycle we contract all the red edges, then we obtain exactly the graph corresponding to the traversal of the cube of size~$2$ of the same dimension. Obviously, we will see the same behavior in any dimension. \begin{example} \FloatBarrier Consider the case of~$d=2$ and~$n=3$ (side length~$8$). \begin{figure}[ht] \centering \includegraphics{pict-2.eps} \qquad \includegraphics{pict-3.eps} \caption{Join of cycles in squares of side~$2$ into cycles in squares of side~$4$.} \label{fig:figure-d2n3step1} \end{figure} \begin{figure}[ht] \centering \includegraphics{pict-4.eps} \qquad \includegraphics{pict-5.eps} \caption{Join of cycles in squares of side~$4$ into cycles in squares of side~$8$.} \label{fig:figure-d2n3step2} \end{figure} \begin{figure}[ht] \centering \includegraphics{pict-6.eps} \caption{H-curve for $d=2$, $n=3$.} \label{fig:figure-d2n3step3} \end{figure} In fig.\,\ref{fig:figure-d2n3step1} we join cycles of side length~$2$, then in fig.\,\ref{fig:figure-d2n3step2} we join cycles of side~$4$, and in fig.\,\ref{fig:figure-d2n3step3} we see the result.
\FloatBarrier \end{example} \begin{definition} We call the family of curves constructed above \textit{H-curves}, for all~$d>1$ and~$n$. Also, we will call the limit curves for all~$d>1$ \textit{H-curves} as well. \end{definition} We name them this way after the form of the second iteration of the plane curve. The next iterations also look like the letter `H', but more tangled and shaggy. For $d \geqslant 3$ we can consider these curves as high-dimensional ``generalizations'' of the letter `H'. \begin{example} \FloatBarrier \begin{figure} \centering \includegraphics{pict-0.eps} \qquad \includegraphics{pict-7.eps} \caption{Example of H-curve for $d=3,n=2$ and the central mutation.} \label{fig:hc3,2} \end{figure} In fig.\,\ref{fig:hc3,2} we see an example of the H-curve for~$d=3,n=2$ (side length~$4$) and how the mutation looks in the three-dimensional case. In higher dimensions it looks the same, but is less illustrative. \FloatBarrier \end{example} \begin{theorem} For any~$n\in\mathbb{N}$ and any~$d>1$ the H-curve cyclically traverses all the unit cells. Each move to the next cell is a move to an adjacent cell. For any~$k<n$, within one cycle the curve enters and leaves each of the $2^{dk}$ cells of the grid of cells with side~$2^{n-k}$ exactly once, and the traversal of these cells is the H-curve for the pair $(d,k)$. For $n \to \infty$ we can choose an infinite decreasing sequence of cells such that each one contains all the following ones, and we obtain a sequence that converges to a continuous map~$h\colon S^1 \to [0,1]^d$. \end{theorem} \begin{proof} The first part of the statement is obvious by the construction of the curve and by the choice of the mutation. The second part is obvious for~$k=n-1$ by the iteration of the construction and for any~$k$ by induction from~$n-1$ down to~$1$. For the third part we need to take unit cells in such a way that \begin{itemize} \item for increasing~$n$ the matching between smaller cells and intervals is a subdivision of the matching between larger cells and intervals; \item the end points of~$[0,1]$ are mapped to the same point. \end{itemize} Actually, it is enough to take the first cell of the initial subdivision and then each time to take the first sub-cell, i.e. the one where the cycle enters the cell. The condition that $0$ and $1$ are mapped to the same point is obvious. Continuity is standard and follows for the same reasons as for the Hilbert and other curves. \end{proof} Below we will compare the properties of the H-curve, the Hilbert curve and the Z-curve. \begin{remark} Actually, for dimension~$2$ there is only one construction method of the Hilbert curve. As was noticed in~\cite{16Curves}, for higher dimensions there are many ways to generalize the construction of the curve to any dimension such that its restriction to~$d=2$ gives the usual plane Hilbert curve. In~\cite{16Curves} there are $5$ ways to do this. The commonly used version seems to be the one called the Butz--Hilbert curve in~\cite{16Curves}. For higher dimensions it seems that different variations of the Hilbert curve would give very close results. At the same time, their constructions have the same complexity (computational and mathematical). So, we will compare the H-curve with the commonly used Butz--Hilbert curve. \end{remark} \begin{remark} One of the curves constructed in~\cite{16Curves} is called there the \textit{inside-out curve}. For $n=2$, it returns to the cell adjacent to the initial point, but for bigger~$n$ it loses continuity.
In this sense the H-curve can be called an \textit{inside-out-repeat curve}, as a curve moving from the center to the perimeter in one octant, back to the center, out into another octant and so on cyclically. \end{remark} In fig.\,\ref{fig:d2n4} we see the Z-curve, the Hilbert curve and the H-curve on the plane. In fig.\,\ref{fig:d3n2} we see the Z-curve, the Butz--Hilbert curve and the H-curve in the same axes. \begin{figure} \centering \includegraphics[scale=0.8]{pict-12.eps} \quad \includegraphics[scale=0.8]{pict-10.eps} \quad \includegraphics[scale=0.8]{pict-11.eps} \caption{Z-curve, Hilbert curve, H-curve for $d=2,n=4$} \label{fig:d2n4} \end{figure} \begin{figure} \centering \includegraphics[scale=0.8]{pict-8.eps} \includegraphics[scale=0.8]{pict-9.eps} \includegraphics[scale=0.8]{pict-0.eps} \caption{Z-curve, Butz--Hilbert curve, H-curve for $d=3,n=2$} \label{fig:d3n2} \end{figure} \subsection{Index shifts and direction reversals} \label{subsec:shifts-and-reversals} Our next goal is to describe the correspondence between a unit cell with coordinates~$\overline{a}=(a_0,\ldots,a_{d-1})$ in the $d$-dimensional cube with side length~$2^n$ and its index~$r$ in the traversal along the H-curve. Say, we \textit{encode} the point~$\overline{a}$ by the index~$r$ and~\textit{decode} the index~$r$ to the point~$\overline{a}$. So, we want to describe two mutually inverse functions \[ \xymatrix{ \{0,\ldots,2^n-1\}^d \ar@/_/[rr] &&\mathbb{Z}/2^{nd}\mathbb{Z}. \ar@/_/[ll] } \] We will construct the functions recursively. To make the construction easier, let us introduce some notation. Let us write the bits of the $n$-bit numbers~$a_i$ into a $d\times n$ matrix as rows. Denote the $d$-bit numbers in the rows of the transposed matrix by~$\alpha^0,\ldots,\alpha^{n-1}$. These coordinates are also known as coordinates in \textit{Z-order}. It is easy to pass from $(a_i)$ to $(\alpha^j)$ and back, but~$\alpha^j$ are more convenient for algorithm design. So, we will describe the functions \[ \xymatrix{ \{0,\ldots,2^d-1\}^n \ar@/_/[rr]_-{\mathtt{encode}} &&\mathbb{Z}/2^{nd}\mathbb{Z}. \ar@/_/[ll]_-{\mathtt{decode}} } \] The geometrical sense of the $\alpha$-coordinates corresponds to the iterative construction of the curve. Each cell can be coded by~$d$ bits of coordinates. These~$d$ bits form the number~$\alpha^j$ for the $j$-th iteration. When subdividing the cube into the grid of~$2^d$ sub-cells at the $j$-th step, we choose the one with index~$\alpha^j$. \begin{lemma} The central $4\times 2 \times \ldots \times 2$-parallelepiped in the $d$-dimensional cube with side~$2^n$ consists of the points \[ c_\alpha = (\alpha, \overline{\alpha},\ldots, \overline{\alpha}) \text{ and } c_\alpha' = (\alpha, \overline{\alpha},\ldots, \overline{\alpha}\oplus 1) \] for all~$\alpha\in\mathbb{F}_2^d$. \end{lemma} \begin{proof} Denote the central cube of size~$2$ by~$C$ and the central~$4\times 2 \times \ldots \times 2$-parallelepiped by~$P$. Each half-size cube~$\alpha$ has a unique unit cell in~$C$. Denote it by~$c_\alpha$. Each half-size cube~$\alpha$ has two unit cells in~$P$; $c_\alpha$ is one of them. Denote the other one by~$c_\alpha'$. The index~$\alpha$ for both~$c_\alpha$ and~$c_\alpha'$ corresponds to the first coordinate of the unit cell in~$\alpha$-coordinates. Our goal is to find the remaining~$\alpha$-coordinates of these points. The bits of~$\alpha$ geometrically encode the choice of the half-size cube in the first subdivision operation. To get the cell~$c_\alpha \in C$, we should take the opposite choice for each coordinate at each next iteration.
This exactly implies that all the next $\alpha$-coordinates equal~$\overline{\alpha}$. To get~$c_\alpha'$, we should take the same sub-cells until the last subdivision. At the last iteration we should change the first coordinate to get the adjacent cell along the first coordinate. This exactly means that~$c_\alpha'$ has all the next coordinates equal to~$\overline{\alpha}$ except the last one, which equals~$\overline{\alpha}\oplus 1$. \end{proof} \begin{corollary} The traversal of the H-curve enters the half-size sub-cell~$\alpha$ at one of the unit cells~$c_\alpha$ and~$c_\alpha'$ and leaves it at the other one. \end{corollary} We have fixed the traversal of the central~$4\times 2 \times \ldots \times 2$-parallelepiped. Therefore, if we know the order of traversal of the pair~$(c_\alpha,c_\alpha')$, then we know whether we need to reverse the traversal of the half-size sub-cell~$\alpha$. (As noted above, they are neighbors in the sub-cell traversal.) Now we want to determine when the direction in which a half-size sub-cube is traversed within the big cycle is the same as or opposite to the direction of its own cycle before the mutation. Suppose the direction is the same. Then before the mutation we pass from one cell of~$\{c_\alpha,c_\alpha'\}$ to the other one, so, traversing the remaining part of the cycle inside the sub-cube, we pass them in the opposite order, because the first one becomes the leaving unit cell, and the other one becomes the entering unit cell of the sub-cube. Vice versa, if the direction changes, then the order remains the same. To avoid confusion, we consider the part of the chain traversing the sub-cube, not the whole cube, because in a cycle the statement that one cell follows another is meaningless. \begin{lemma} Consider a $d$-dimensional cube of side length~$2^n$. In the construction of the H-curve the edges $(0,\ldots,0)-(1,0,\ldots,0)$ and~$(2^n-1,\ldots,2^n-1)-(2^n-2,2^n-1,\ldots,2^n-1)$ (denote them by $\overline{0}-\overline{0}'$ and $\overline{1}-\overline{1}'$ respectively) are passed in the same direction (along the first coordinate) for~$d$ odd and in the opposite direction for~$d$ even. \end{lemma} \begin{proof} The proof consists of two steps: to pass to~$n=1$ and to calculate directly for~$n=1$. First, we pass to~$n=1$. Indeed, we obtain the traversal of the cube of size~$2^n$ by joining together the traversals of the cubes of size~$2^{n-1}$ with a central mutation. Note that the mutations do not affect the edges from/to the corner vertices. So, for~$n>1$ the proposition is the same as for~$n=1$, and we can put~$n=1$ without loss of generality. Fix some~$d>1$. In Z-order the representations of the vertices are the following (we write square brackets and the index~$2$ to distinguish binary from decimal numbers): \begin{align*} \overline{0} = [\underbrace{0\ldots0}_d]_2, \quad & \overline{0}' = [\underbrace{0\ldots0}_{d-1}1]_2, \\ \overline{1} = [\underbrace{1\ldots1}_d]_2, \quad & \overline{1}' = [\underbrace{1\ldots1}_{d-1}0]_2. \end{align*} Note that~$g(0)=[\underbrace{0\ldots0}_{d}]_2$ and~$g(1)=[\underbrace{0\ldots0}_{d-1}1]_2$. It only remains to find~$g^{-1}(\overline{1})$ and~$g^{-1}(\overline{1}')$. One of the following two cases holds: \begin{itemize} \item Let $d$ be even. Then \[ g^{-1}([\underbrace{1\ldots1}_{d}]_2) = 2\cdot\frac{2^d-1}{3}, \quad g^{-1}([\underbrace{1\ldots1}_{d-1}0]_2) = 2\cdot\frac{2^d-1}{3} + 1. \] We see that~$\overline{1}'$ follows $\overline{1}$. \item Let $d$ be odd.
Then \[ g^{-1}([\underbrace{1\ldots1}_{d}]_2) = \frac{2^{d+1}-1}{3}, \quad g^{-1}([\underbrace{1\ldots1}_{d-1}0]_2) = \frac{2^{d+1}-1}{3} - 1. \] We see that~$\overline{1}$ follows $\overline{1}'$. \end{itemize} The calculations can be easily checked directly. This concludes the proof. \end{proof} \begin{corollary} For $d$ even there are no traversal reversals. For $d$ odd the only traversal reversal happens for~$n=2$. \end{corollary} \begin{proof} As we have seen above, the direction of the bigger cube's traversal from~$\overline{0}$ to~$\overline{0}'$ is the same as for the unit cells in cubes with side length~$2$ if in the cube of side length~$2$ the traversal of the edge $\overline{1}-\overline{1}'$ is opposite to the traversal direction of the edge $\overline{0}-\overline{0}'$. So, the direction for~$(n,d)$ for~$n>1$ is the same as for~$(1,d)$ for even~$d$ and opposite for odd~$d$. Therefore, there are no reversals for~$d$ even, and the only reversal for~$d$ odd is when~$n=2$. (For~$d$ odd and~$n>1$ the directions are opposite to the direction for~$n=1$; thus, they all coincide with each other.) \end{proof} \begin{theorem} For any $d>1$ and~$n\geqslant 1$ the H-curve starts the traversal of the sub-cell~$\alpha$ at its unit sub-cell~$(\overline{\alpha}, \ldots, \overline{\alpha}, \overline{\alpha}\oplus p(\overline{\alpha}))$, and the direction of the traversal changes if and only if~$d$ is odd and~$n=2$. \end{theorem} \begin{proof} Actually, it only remains to find which one of~$c_\alpha$ and~$c_\alpha'$ is the initial point. Note that the function~$g$ is $\mathbb{F}_2$-linear as a function~$g\colon \mathbb{F}_2^d \to \mathbb{F}_2^d$. Geometrically the operation~$\oplus\alpha$ corresponds to the composition of reflections along the coordinate hyperplanes corresponding to the bits equal to~$1$ in~$\alpha$. Therefore, it is enough to find the initial point of the sub-cube corresponding to~$\alpha=[0\ldots0]_2$. In the traversal of this sub-cube (before the mutation) $c_\alpha$ and~$c_\alpha'$ follow each other. So, after the mutation the second one becomes the entering unit cell of the sub-cube, and the first one becomes the leaving unit cell. From the reasoning above it follows that for~$d$ even the entering point is~$\overline{1}$ and for~$d$ odd the entering point is~$\overline{1}'$. Restoring the generality of~$\alpha$ and due to the~$\mathbb{F}_2$-linearity, we can write the initial point of the sub-cube~$\alpha$, using the parity function~$p$, as the point~$(\overline{\alpha}, \ldots, \overline{\alpha}, \overline{\alpha}\oplus p(\overline{\alpha}))$ in Z-order. \end{proof} \subsection{Algorithmic construction} \label{subsec:alg} Here we briefly describe the algorithms of two functions: \begin{itemize} \item[\texttt{encode}] which maps a $d$-dimensional array of cell coordinates in the cube $\{0,\ldots,2^n-1\}^d$ to the index, \item[\texttt{decode}] which performs the inverse function. \end{itemize} Here the index means the number of the cell in the traversal. It can be considered either as an arbitrary integer or as a number in $\{0,\ldots,2^{nd}-1\}$ due to the $2^{nd}$-periodicity. For convenience, we will evaluate coordinates in \textit{Z-order}: instead of $d$ $n$-bit numbers we consider $n$ $d$-bit numbers composed of the corresponding bits of the coordinates. If we write down the $d$ $n$-bit numbers as rows of a bit matrix, then the corresponding $n$ $d$-bit numbers in Z-order become the rows of the transposed matrix. Denote the coordinates of the cell with index~$r$ by~$(a_0,\ldots,a_{d-1})$. Denote the corresponding Z-order numbers by~$(\alpha^0,\ldots,\alpha^{n-1})$.
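The passage between the coordinate representation $(a_0,\ldots,a_{d-1})$ and the Z-order representation $(\alpha^0,\ldots,\alpha^{n-1})$ is just a transposition of the bit matrix. A minimal Python sketch (assuming the most significant bit comes first in both representations) could be:
\begin{verbatim}
def to_z_order(a, n):
    # d n-bit coordinates -> n d-bit Z-order digits (bit-matrix transposition)
    d = len(a)
    return [sum(((a[j] >> (n - 1 - i)) & 1) << (d - 1 - j) for j in range(d))
            for i in range(n)]

def from_z_order(alpha, d):
    # inverse transposition: n d-bit Z-order digits -> d n-bit coordinates
    n = len(alpha)
    return [sum(((alpha[i] >> (d - 1 - j)) & 1) << (n - 1 - i) for i in range(n))
            for j in range(d)]
\end{verbatim}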
Denote~$g_n(k) = (k \mod 2^n)\oplus (\lfloor k/2 \rfloor \mod 2^{n-1})$. Note that~$g_n$ is a bijection on the set~$\{0,\ldots,2^n-1\}$, so~$g_n^{-1}$ is well-defined on this set. Denote by~$p(x)$ the \textit{parity} of~$x$, i.\,e. $1$ if the number of bits equal to~$1$ in~$x$ is odd and~$0$ otherwise. \subsubsection{Encode} Given the dimension~$d$, the depth~$n$ and the numbers $\overline{\alpha}=(\alpha^0,\ldots,\alpha^{n-1})$, we calculate the index~$r = \mathtt{encode}(d, n, \overline{\alpha})$ as follows. \begin{itemize} \item Put~$r_0 = g_d^{-1}(\alpha^0)$. \item Put~$r = \mathtt{encode}(d,n-1,(\alpha^1,\ldots,\alpha^{n-1}))$. \item Put~$r' = \mathtt{encode}(d, n-1, (e,\ldots,e\oplus p(e)))$, where~$e = (-1-r_0)\mod 2^d$ (bitwise complement). \item Return~$r_0\cdot 2^{d(n-1)} + (r - r' \mod 2^{d(n-1)})$. \end{itemize} \subsubsection{Decode} Given the dimension~$d$, the depth~$n$ and the index~$r$, we calculate~$\overline{\alpha}=(\alpha^0,\ldots,\alpha^{n-1})$ with the function~$\mathtt{decode}(d, n, r, i=0)$ ($i$ is an argument with the default value~$0$) as follows. \begin{itemize} \item If~$i \geqslant n$, the function returns~$\overline{\alpha}=(\alpha^0,\ldots,\alpha^{n-1})$. \item Put $\alpha^i = g_d(\rho_i)$, where~$\rho_i = \lfloor r / 2^{d(n-1-i)} \rfloor$. \item Put~$r' = \mathtt{encode}(d, n-1, (e,\ldots,e\oplus p(e)))$, where~$e = (-1-\rho_i)\mod 2^d$ (bitwise complement). \item Put $r'' = r - r' \mod 2^{d(n-1)}$. \item Call $\mathtt{decode}(d, n-1, r'', i + 1)$. \end{itemize} \subsubsection{Tail recursion} Here we see that each of \texttt{decode} and \texttt{encode} calls two of these functions for smaller~$n$. But one of these calls only computes the index of a corner of an~$(n,d)$-cube or of the cell adjacent to it along the first coordinate. In practice, one keeps far more points than the number of corners of~$(n',d)$-cubes for~$n' < n$, so these corner indexes can be precomputed and stored (or lazily evaluated on demand); then this call requires only~$O(1)$ operations asymptotically. This improvement makes~\texttt{decode} and~\texttt{encode} tail recursive. Of course, we could choose the initial point differently (for example, assign the zero index to the point with zero coordinates), but then we would have to apply the same additional corrections for the mutations. With the chosen convention we always remove the edges with the same indexes, so, actually, there is no significant difference. With precomputed corner indexes and the tail recursions implemented as loops in~\texttt{C}, the profiling results of the \texttt{encode} and~\texttt{decode} functions for the pair~$(n,d)=(7,7)$ over a million calls are the following (see Table~\ref{tab:profiling}). \FloatBarrier \begin{table} \begin{tabular}{|l|r|} \hline function & average time per call, including descendants (ms) \\ \hline \hline \texttt{encode\_h} & $0.08$ \\ \texttt{decode\_h} & $0.06$ \\ \texttt{encode\_Hilbert} & $0.31$ \\ \texttt{decode\_Hilbert} & $0.47$ \\ \hline \end{tabular} \caption{Profiling results} \label{tab:profiling} \end{table} \FloatBarrier So, we can see that the H-curve is computed significantly faster than the Hilbert curve. \section*{Introduction} \label{sec:intro} \input{introduction} \section{Constructions of curves} \label{sec:constructions} \input{constructions-of-curves} \section{H-curve} \label{sec:hcurve} \input{h-curve} \section{Clustering property} \label{sec:clustering_property} \input{clustering-property} \section{Conclusion} \label{sec:conclusion} In this paper we introduced a new way to construct cyclic space-filling curves.
A particularly simple family of curves is constructed (we call them H-curves). This family has clustering properties very close to those of Hilbert curves. At the same time, their construction is simpler and significantly faster to compute. So, for a number of applications H-curves may be preferable to Hilbert curves.
\section{Introduction} It is known that the analytic solutions for tachyon condensation \cite{Schnabl:2005gv,Erler:2009uj,Okawa:2006vm} in open bosonic string field theory \cite{Witten:1985cc} as well as the ones \cite{Erler:2007xt,Aref'eva:2008ad,Gorbachev:2010zz} in cubic superstring field theory \cite{Arefeva:1989cp} are formally gauge equivalent to identity based solutions \cite{Arroyo:2010fq,Zeze:2010sr,Arroyo:2010sy,Arefeva:2010yd,Erler:2012qn}. Identity based solutions are constructed as a product of certain linear combination of ghost number one operators with the identity string field \cite{Kishimoto:2014lua,Kishimoto:2009nd,Inatomi:2011an}. Although identity based solutions are pathological solutions in the sense that they bring ambiguous analytic result for the value of the energy \cite{Erler:2012dz}, by performing a gauge transformation over these solutions, it is possible to construct well behaved solutions. For instance, in reference \cite{Zeze:2010sr}, a one-parameter family of solutions has been found which interpolates between an identity based solution and the Erler-Schnabl's tachyon vacuum solution \cite{Erler:2009uj}. This result has been extended for the case of cubic superstring field theory \cite{Arroyo:2010sy}, namely, a one-parameter family of solutions has been found which interpolates between an identity based solution and the Gorbachev's tachyon vacuum solution \cite{Gorbachev:2010zz}. Motivated by the above results, and the recently discovered Erler's half brane solution \cite{Erler:2010pr} in cubic superstring field theory, in this paper, starting with the identity based solution \cite{Arroyo:2010fq,Arefeva:2010yd,AldoArroyo:2012if,Arroyo:2013pha} \begin{align} \label{Iden1Intro} \widehat{\Phi}_I = \Big( (c+B \gamma^2)(1-K) \Big) \otimes \sigma_3, \end{align} by performing a gauge transformation of $\widehat{\Phi}_I $, we study the construction of the following one-parameter family of solutions \begin{eqnarray} \label{SolphiIntro} \widehat{\Phi}_{\lambda} = \Phi_{1,\lambda} \otimes \sigma_3 + \Phi_{2,\lambda} \otimes i \sigma_2, \end{eqnarray} where the string fields $\Phi_{1,\lambda}$ and $\Phi_{2,\lambda}$ are given by \begin{eqnarray} \label{solphi3In} \Phi_{1,\lambda} &=& Q(Bc)f(K,\lambda) + \lambda(2\lambda-1)c f(K,\lambda) + 4i\lambda(1-\lambda) c GB c G \widetilde{f}(K,\lambda) ,\\ \label{solphi4In} \Phi_{2,\lambda} &=& Q(Bc) G \widetilde{f}(K,\lambda) + \lambda(2\lambda-1)c G \widetilde{f}(K,\lambda) + 4i \lambda(1-\lambda) c G B c f(K,\lambda), \end{eqnarray} with $f(K,\lambda)$ and $\widetilde{f}(K,\lambda)$ being functions of $K$\footnote{The $K$ field is an element of the so-called $KBc$ subalgebra introduced in the references \cite{Okawa:2006vm,Erler:2006hw,Erler:2006ww,Schnabl:2010tb}.} and the parameter $\lambda$ \begin{eqnarray} \label{gaugeF1In} f(K,\lambda) &=& \frac{\lambda^2 (1-2 \lambda )^2+\left(16 \lambda^3-32 \lambda^2+18 \lambda -1\right) \lambda \, K }{\lambda ^2(1-2 \lambda )^2 +2 \lambda \left(8 \lambda ^3-16 \lambda ^2+10 \lambda -1\right) K+K^2} \; , \\ \label{gaugeF2In} \widetilde{f}(K,\lambda) &=& \frac{4 i (1-\lambda) \lambda \, K}{\lambda ^2(1-2 \lambda )^2 +2 \lambda \left(8 \lambda ^3-16 \lambda ^2+10 \lambda -1\right) K+K^2} \; . 
\end{eqnarray} Moreover, by explicit and detailed computation of the normalized value of the energy \begin{eqnarray} \label{NorV1Intro} E(\widehat{\Phi}_{\lambda}) = \frac{\pi^2}{3} \Big[\langle Y_{-2} \Phi_{1,\lambda} Q \Phi_{1,\lambda} \rangle + \langle Y_{-2} \Phi_{2,\lambda} Q \Phi_{2,\lambda} \rangle \Big] \end{eqnarray} associated to the solution $\widehat{\Phi}_{\lambda}$, we obtain \begin{align} \label{FinalResult1Intro} E(\widehat{\Phi}_{\lambda}) = \begin{cases} 0, & \lambda = 0 \;,\;\; \text{Perturbative Vacuum Solution,} \\ -1/2, & \lambda = 1/2 \;,\;\; \text{Half Brane Solution,} \\ -1, & \big(\lambda < 0\big) \vee \big(\kappa \leq \lambda <\frac{1}{2}\big) \vee \big(\lambda >\frac{1}{2}\big) \;,\;\; \text{Tachyon Vacuum Solution,} \end{cases} \end{align} where $\kappa$ is a numerical constant defined as \begin{eqnarray} \label{kappa1Intro} \kappa = \frac{2}{3}-\frac{1}{6} \left(\frac{25}{2}+\frac{3}{2} \sqrt{69}\right)^{1/3}-\frac{1}{6} \left(\frac{25}{2}-\frac{3}{2} \sqrt{69}\right)^{1/3} \approx 0.122561. \end{eqnarray} Note that for most of the values of the parameter $\lambda$, the solution represents the tachyon vacuum, while the two isolated points $\lambda=0$ and $\lambda=1/2$ correspond to the perturbative vacuum and the half brane solution respectively. We expect that the construction of a one-parameter family of solutions using identity based solutions, in cubic superstring field theory, will provide us with relevant tools to analyze other important solutions, such as the multibrane solutions \cite{AldoArroyo:2012if,Arroyo:2013pha}, and the recently proposed Erler's analytic solution for tachyon condensation in Berkovits non-polynomial open superstring field theory \cite{Erler:2013wda}. Since the algebraic structure of Berkovits theory \cite{Berkovits:1995ab} is similar to the cubic superstring field theory, the results of our work can be naturally extended, however, the presence of a non-polynomial action in Berkovits theory will bring us challenges in the search of new solutions. This paper is organized as follows. In section 2, we review the modified cubic superstring field theory and introduce some notations and conventions. Since the explicit form of our one-parameter family of solutions is expressed in terms of elements of the $GKBc\gamma$ algebra, in section 3, we study in detail this algebra. In section 4, by performing a gauge transformation of an identity based solution, we show the construction of the one-parameter family of solutions. In section 5, we analyze correlation functions involving the $G$ field and as a pedagogical application of these correlators, we show the computation of the energy for the half brane solution. In section 6, we evaluate the energy associated to the one-parameter family of solutions. In section 7, a summary and further directions of exploration are given. \section{Modified cubic superstring field theory, notations and conventions} The action of the modified cubic superstring field theory which takes into account the $GSO(+)$ and $GSO(-)$ sectors is given by \cite{Arefeva:1989cp} \begin{eqnarray} \label{action1} S = -\frac{1}{g^2} \Big[\frac{1}{2} \langle Y_{-2} \Phi_1 Q \Phi_1 \rangle + \frac{1}{3} \langle Y_{-2} \Phi_1 \Phi_1 \Phi_1 \rangle + \frac{1}{2} \langle Y_{-2} \Phi_2 Q \Phi_2 \rangle - \langle Y_{-2} \Phi_1 \Phi_2 \Phi_2 \rangle \Big], \end{eqnarray} where $Q$ is the BRST operator of the open Neveu-Schwarz superstring theory. 
The operator $Y_{-2}$ is inserted at the open string midpoint and is written as the product of two inverse picture changing operators $Y_{-2}=Y(i)Y(-i)$, where $Y(z)=-\partial \xi e^{-2 \phi} c(z)$. The ghost number one string fields $\Phi_1$ and $\Phi_2$ belong to the $GSO(+)$ and $GSO(-)$ sectors, and are Grassmann odd and Grassmann even respectively. Varying the action (\ref{action1}) with respect to the string fields $\Phi_1$ and $\Phi_2$ yields the following equations of motion \cite{Arefeva:1988nn} \begin{eqnarray} \label{Eqm1} Q \Phi_1 + \Phi_1*\Phi_1 - \Phi_2*\Phi_2 &=& 0 , \\ \label{Eqm2} Q \Phi_2 + \Phi_1*\Phi_2 - \Phi_2*\Phi_1 &=& 0. \end{eqnarray} Regarding the star product, we are going to use the left handed convention of \cite{Schnabl:2005gv,Erler:2009uj}. There are other sources which use the right handed convention \cite{Okawa:2006vm,Berkovits:2000hf}; for details on the connection between these two conventions see reference \cite{Erler:2010pr}. Using the equations of motion (\ref{Eqm1}), (\ref{Eqm2}) and the cyclicity relation \begin{eqnarray} \label{Rela1} \langle Y_{-2} \Phi_1 \Phi_2 \Phi_2 \rangle = - \langle Y_{-2} \Phi_2 \Phi_1 \Phi_2 \rangle = \langle Y_{-2} \Phi_2 \Phi_2 \Phi_1 \rangle, \end{eqnarray} where an additional minus sign arises due to the fact that $\Phi_2$ belongs to the $GSO(-)$ sector\footnote{Since a string field belonging to the $GSO(-)$ sector has half-integer conformal weight, $\Phi_2$ changes its sign under the conformal transformation $\mathcal{R}_{2\pi}$ representing the $2\pi$ rotation of the unit disk \cite{Ohmori:2003vq}.}, we can write the action (\ref{action1}) as \begin{eqnarray} \label{action2} S = -\frac{1}{6 g^2} \Big[\langle Y_{-2} \Phi_1 Q \Phi_1 \rangle + \langle Y_{-2} \Phi_2 Q \Phi_2 \rangle \Big]. \end{eqnarray} Since $\Phi_2$ has opposite Grassmannality compared to the $GSO(+)$ string field $\Phi_1$, it seems that they fail to obey common algebraic relations. This problem can be resolved by attaching $2 \times 2$ internal Chan-Paton matrices to the string fields and the operator insertions as \cite{Berkovits:2000hf,Arefeva:2002mb} \begin{align} \label{chanfield1} \widehat{Q} &= Q \otimes \sigma_3 , \;\;\;\; \widehat{Y}_{-2} = Y_{-2} \otimes \sigma_3, \\ \label{chanfield2} \widehat{\Phi} &= \Phi_1 \otimes \sigma_3 + \Phi_2 \otimes i \sigma_2. \end{align} Using these definitions, the action (\ref{action1}) can be written in a compact form \begin{eqnarray} \label{action3} S = -\frac{1}{2 g^2} \text{Tr}\Big[ \frac{1}{2} \langle \widehat{Y}_{-2} \widehat{\Phi} \widehat{Q} \widehat{\Phi} \rangle + \frac{1}{3} \langle \widehat{Y}_{-2} \widehat{\Phi} \widehat{\Phi} \widehat{\Phi} \rangle \Big], \end{eqnarray} and the equations of motion (\ref{Eqm1}) and (\ref{Eqm2}) are reduced to a single equation \begin{eqnarray} \label{EqmC1} \widehat{Q}\widehat{\Phi} + \widehat{\Phi} \widehat{\Phi} = 0 . \end{eqnarray} For a given ghost number zero string field $\widehat{U}= U_1 \otimes \mathbb{I} + U_2 \otimes \sigma_1$, we can construct a gauge transformation of the string field $\widehat{\Phi}$ as follows \begin{eqnarray} \label{Gauge1} \widehat{\Psi} = \widehat{U} (\widehat{Q} + \widehat{\Phi}) \widehat{U}^{-1} . \end{eqnarray} It turns out that the action (\ref{action3}) is invariant under this gauge transformation (\ref{Gauge1}). If $\widehat{\Phi}$ is a solution of the equation of motion (\ref{EqmC1}), then the string field $\widehat{\Psi}$, related to $\widehat{\Phi}$ by means of equation (\ref{Gauge1}), is also a solution.
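Indeed, using $\widehat{Q}^2=0$, the fact that $\widehat{Q}$ acts as a graded derivation of the star product, and the relation $(\widehat{Q}\widehat{U}^{-1})\widehat{U}=-\widehat{U}^{-1}(\widehat{Q}\widehat{U})$, which follows from $\widehat{Q}(\widehat{U}^{-1}\widehat{U})=0$, a short computation gives \begin{eqnarray} \widehat{Q}\widehat{\Psi} + \widehat{\Psi}\,\widehat{\Psi} &=& \widehat{U}\Big(\widehat{Q}\widehat{\Phi} + \widehat{\Phi}\,\widehat{\Phi}\Big)\widehat{U}^{-1} \, , \nonumber \end{eqnarray} so that $\widehat{\Psi}$ satisfies (\ref{EqmC1}) whenever $\widehat{\Phi}$ does.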
In order to find analytic solutions of the equation of motion (\ref{EqmC1}), we can employ the prescription studied in reference \cite{Arroyo:2010fq}, namely, (i) find a simplest identity based solution of the equation of motion\footnote{Although the identity based solution formally satisfies the equation of motion (\ref{EqmC1}), it is a pathological solution in the sense that it brings ambiguous analytic result for the value of the energy \cite{Arroyo:2010fq,Kishimoto:2014lua,Kishimoto:2009nd,Inatomi:2011an}.}, (ii) perform a gauge transformation over this identity based solution such that the resulting string field, consistently, represents a well behaved solution \cite{Arroyo:2010sy,Zeze:2010sr}. In this paper, following the above procedures, we are going to construct a one-parameter family of solutions $\widehat{\Phi}_{\lambda}$ and evaluate the energy associated to these solutions. It turns out that, depending on the value of the parameter $\lambda$, the solutions $\widehat{\Phi}_{\lambda}$ describe three distinct gauge orbits corresponding to the perturbative vacuum, the half brane and the tachyon vacuum solution. Before deriving the explicit form of the solution $\widehat{\Phi}_{\lambda}$, in the next section we will introduce the so-called $GKBc\gamma$ algebra. \section{The $GKBc\gamma$ algebra, definitions and star products} The $GKBc\gamma$ algebra is an extension of the well known $KBc\gamma$ algebra \cite{Erler:2006hw,Erler:2006ww,Erler:2007xt}. Essentially, we add the new element $G$ to the $KBc\gamma$ algebra. This string field $G$ lives in the $GSO(-)$ sector, and is related to the worldsheet supercurrent $G(z)$ \cite{Erler:2010pr}. To derive some identities involving the star product of the basic string fields $G$, $K$, $B$, $c$ and $\gamma$ together with the action of the BRST operator $Q$ over elements of the $GKBc\gamma$ algebra, it will be useful to write the following representation of these fields in terms of operators acting on the identity string field $ |I\rangle=U_{1}^\dag U_{1} |0\rangle$ \begin{eqnarray} \label{KK} K &\equiv& \frac{1}{2} \hat{\mathcal{L}} U_{1}^\dag U_{1} |0\rangle, \\ \label{BB} B &\equiv& \frac{1}{2} \hat{\mathcal{B}} U_{1}^\dag U_{1} |0\rangle, \\ \label{GG} G &\equiv& \frac{1}{2} \hat{\mathcal{G}} U_{1}^\dag U_{1} |0\rangle, \\ \label{cc} c &\equiv& U_{1}^\dag U_{1} \tilde c (0)|0\rangle, \\ \label{gg} \gamma &\equiv& U_{1}^\dag U_{1} \tilde \gamma (0)|0\rangle. \end{eqnarray} The operators $\hat{\mathcal{L}}$, $\hat{\mathcal{B}}$, $\hat{\mathcal{G}}$, $\tilde c(0)$ and $\tilde \gamma(0)$ are defined in the sliver frame ($\tilde z$ coordinate)\footnote{To map a point $z$ in the upper half plane to a point $\tilde z$ in the sliver frame, we are using the conformal transformation $\tilde z = \frac{2}{\pi} \arctan z$ \cite{Erler:2009uj}. There is another convention for the conformal transformation which is given by $\tilde z = \arctan z$ \cite{Schnabl:2005gv}. In this convention, instead of the factor $1/2$ in front of the R.H.S. 
of equations (\ref{KK})-(\ref{GG}), we should have the factor $1/\pi$.}, and they are related to the worldsheet energy-momentum tensor, the $b$ field, the worldsheet supercurrent, the $c$ and $\gamma$ ghosts fields respectively, for instance \begin{eqnarray} \label{Lhat01} \hat{\mathcal{L}} &\equiv& \mathcal{L}_{0} +\mathcal{L}^{\dag}_0 = \oint \frac{d z}{2 \pi i} (1+z^{2}) (\arctan z+\text{arccot} z) \, T(z) \, , \\ \label{Bhat01} \hat{\mathcal{B}} &\equiv& \mathcal{B}_{0} +\mathcal{B}^{\dag}_0 = \oint \frac{d z}{2 \pi i} (1+z^{2}) (\arctan z+\text{arccot} z) \, b(z) \, , \\ \label{Ghat01} \hat{\mathcal{G}} &\equiv& \mathcal{G}_{1/2} +\mathcal{G}^{\dag}_{1/2} = \sqrt{\frac{2}{\pi}} \oint \frac{d z}{2 \pi i} (1+z^{2})^{1/2} (\arctan z+\text{arccot} z) \, G(z) \, , \end{eqnarray} while the operator $U_{1}^\dag U_{1}$ in general is given by $U^\dag_r U_r = e^{\frac{2-r}{2} \hat{\mathcal{L}}}$, so we have chosen $r=1$, note that the string field $U_{1}^\dag U_{1} |0\rangle$ represents to the identity string field. To compute star products of string fields involving the operators $\hat{\mathcal{L}}$, $\hat{\mathcal{B}}$ and $\hat{\mathcal{G}}$, it should be useful to define the operators \begin{eqnarray} \label{Lm1} \mathcal{L}_{-1} &\equiv& \frac{\pi}{2} \oint \frac{d z}{2 \pi i} (1+z^{2}) T(z) = \frac{\pi}{2} (L_{-1}+L_{1}) \, , \\ \label{Bm1} \mathcal{B}_{-1} &\equiv& \frac{\pi}{2} \oint \frac{d z}{2 \pi i} (1+z^{2}) b(z) = \frac{\pi}{2} (b_{-1}+b_{1}) \, , \\ \label{Gm12} \mathcal{G}_{-1/2} &\equiv& \sqrt{\frac{\pi}{2}} \oint \frac{d z}{2 \pi i} (1+z^{2})^{1/2} G(z) \, . \end{eqnarray} Given two string fields $\phi$ and $\varphi$ belonging to the $GSO(+)$ or the $GSO(-)$ sector, we can show that \begin{eqnarray} \label{StaP1} (\hat{\mathcal{B}}\phi)*\varphi &=& \hat{\mathcal{B}}(\phi*\varphi) + (-1)^{\text{gn}(\phi)} \phi * \mathcal{B}_{-1} \varphi \; , \\ \label{StaP2} \phi * (\hat{\mathcal{B}}\varphi)&=& (-1)^{\text{gn}(\phi)}\hat{\mathcal{B}}(\phi*\varphi)-(-1)^{\text{gn}(\phi)}(\mathcal{B}_{-1}\phi)*\varphi \; , \\ \label{StaP3} (\hat{\mathcal{B}}\phi)*(\hat{\mathcal{B}}\varphi)&=&-(-1)^{\text{gn}(\phi)} \hat{\mathcal{B}}\mathcal{B}_{-1}(\phi*\varphi)+ (\mathcal{B}_{-1} \phi)*(\mathcal{B}_{-1} \varphi)\; , \\ \label{StaP4} (\hat{\mathcal{G}}\phi)*\varphi &=& \hat{\mathcal{G}}(\phi*\varphi) + (-1)^{\text{gn}(\phi)} \phi * \mathcal{G}_{-1/2} \varphi \; , \\ \label{StaP5} \phi * (\hat{\mathcal{G}}\varphi)&=& (-1)^{\text{gn}(\phi)}\hat{\mathcal{G}}(\phi*\varphi)-(-1)^{\text{gn}(\phi)}(\mathcal{G}_{-1/2}\phi)*\varphi \; , \\ \label{StaP6} (\hat{\mathcal{G}}\phi)*(\hat{\mathcal{G}}\varphi)&=&-(-1)^{\text{gn}(\phi)} \hat{\mathcal{G}}\mathcal{G}_{-1/2}(\phi*\varphi)+ (\mathcal{G}_{-1/2} \phi)*(\mathcal{G}_{-1/2} \varphi) \nonumber \\ &&+(-1)^{\text{gn}(\phi)} 2 \hat{\mathcal{L}} (\phi*\varphi) +(-1)^{\text{gn}(\phi)} \phi * \mathcal{L}_{-1} \varphi \nonumber \\ &&-(-1)^{\text{gn}(\phi)}(\mathcal{L}_{-1}\phi)*\varphi \; , \\ \label{StaP7} (\hat{\mathcal{L}}^{n}\phi)*\varphi &=& \sum_{n'=0}^{n} {n \choose n'} \hat{\mathcal{L}}^{n-n'}(\phi*\mathcal{L}_{-1}^{n'}\varphi)\; , \\ \label{StaP8} \phi*(\hat{\mathcal{L}}^{n}\varphi) &=& \sum_{n'=0}^{n} {n \choose n'} (-1)^{n'} \hat{\mathcal{L}}^{n-n'}((\mathcal{L}_{-1}^{n'}\phi)*\varphi)\; , \\ \label{StaP9} (\hat{\mathcal{L}}^{m}\phi)*(\hat{\mathcal{L}}^{n}\varphi) &=& \sum_{m'=0}^{m}\sum_{n'=0}^{n} {m \choose m'} {n \choose n'} (-1)^{n'} \hat{\mathcal{L}}^{m+n-m'-n'}((\mathcal{L}_{-1}^{n'}\phi)*(\mathcal{L}_{-1}^{m'}\varphi))\; , \end{eqnarray} where 
$\text{gn}(\phi)$ takes into account the Grassmannality of the string field $\phi$. The above results, containing the operator $\hat{\mathcal{G}}$, are new and they are an extension of the result derived in \cite{Schnabl:2005gv}. Regarding the wedge states with insertions, the star product of two of them is written in the form \begin{eqnarray} \label{StaPWed1} U_{r}^\dag U_{r} \tilde \phi (\tilde x) |0\rangle * U_{s}^\dag U_{s} \tilde \psi (\tilde y) |0\rangle = U_{t}^\dag U_{t} \tilde \phi \big(\tilde x + \frac{1}{2}(s-1)\big) \tilde \psi \big(\tilde y - \frac{1}{2}(r-1)\big) |0\rangle, \end{eqnarray} where $t=r+s-1$, and by $\tilde \phi (\tilde x)$ we denote a local operator $\phi (z)$ expressed in the sliver frame, which in the special case of primary field with conformal weight $h$ is given by \begin{eqnarray} \label{gf3cor2} \tilde{\phi}(\tilde{z}) = \big(\frac{dz}{d\tilde{z}}\big)^h \phi (z) = \big(\frac{\pi}{2}\big)^h \cos^{-2h}\big( \frac{\pi \tilde{z}}{2} \big) \phi \Big( \tan \big( \frac{\pi \tilde{z}}{2} \big) \Big). \end{eqnarray} Since we are using the conformal transformation $\tilde{z} = \frac{2}{\pi} \arctan z$ which is a bit different from the one used in Schnabl's original paper $\tilde{z} = \arctan z$ \cite{Schnabl:2005gv}, we have a factor $1/2$ in the R.H.S. of equation (\ref{StaPWed1}) instead of the factor $\pi/4$ which is present in the reference \cite{Schnabl:2005gv}. It will be useful to know the action of the BRST, $\mathcal{L}_{-1}$, $\mathcal{B}_{-1}$ and $\mathcal{G}_{-1/2}$ operators on the star product of two string fields \begin{eqnarray} \label{Qact1} Q (\phi*\varphi) &=& (Q \phi)*\varphi + (-1)^{\text{gn}(\phi)}\phi *(Q \varphi), \\ \label{Qact2} \mathcal{L}_{-1} (\phi*\varphi) &=& (\mathcal{L}_{-1} \phi)*\varphi + \phi *(\mathcal{L}_{-1} \varphi), \\ \label{Qact3} \mathcal{B}_{-1} (\phi*\varphi) &=& (\mathcal{B}_{-1} \phi)*\varphi + (-1)^{\text{gn}(\phi)}\phi *(\mathcal{B}_{-1} \varphi), \\ \label{Qact4} \mathcal{G}_{-1/2} (\phi*\varphi) &=& (\mathcal{G}_{-1/2} \phi)*\varphi + (-1)^{\text{gn}(\phi)}\phi *(\mathcal{G}_{-1/2} \varphi) \, . \end{eqnarray} Let us derive the algebra associated to the set of operators defined by equations (\ref{KK})-(\ref{gg}). As a pedagogical illustration, we explicitly compute the product $G^2$ \begin{eqnarray} \label{GG1} G^2 \equiv G*G = \frac{1}{2} \hat{\mathcal{G}} U_{1}^\dag U_{1} |0\rangle * \frac{1}{2} \hat{\mathcal{G}} U_{1}^\dag U_{1} |0\rangle = \frac{1}{4} \hat{\mathcal{G}} U_{1}^\dag U_{1} |0\rangle * \hat{\mathcal{G}} U_{1}^\dag U_{1} |0\rangle, \end{eqnarray} using equation (\ref{StaP6}) and the commutators $[\mathcal{G}_{-1/2},\hat{\mathcal{L}}]=0$, $[\mathcal{L}_{-1},\hat{\mathcal{L}}]=0$, we obtain \begin{eqnarray} \label{GG2} G^2 = \frac{2}{4} \hat{\mathcal{L}} \Big( U_{1}^\dag U_{1} |0\rangle * U_{1}^\dag U_{1} |0\rangle \Big) = \frac{1}{2} \hat{\mathcal{L}} U_{1}^\dag U_{1} |0\rangle, \end{eqnarray} therefore we have that $G^2=K$. 
Following the same steps, using equations (\ref{StaP1})-(\ref{StaP9}), the commutator relation $[\mathcal{G}_{-1/2},\tilde \gamma(0)]= - \frac{1}{2} \partial \tilde c(0)$ and the anti-commutator $\{\mathcal{G}_{-1/2},\tilde c(0)\}= - 2 \tilde \gamma(0)$, we can show that \begin{align} \label{GG3} \{G,G\} &= 2 K, \;\;\; [K,B]=0, \;\;\; [K,G]=0, \;\;\; \{B,G\} = 0,\\ \label{BG1} \partial c &= [K,c], \;\;\; \partial \gamma = [K,\gamma], \;\;\; B^2 = 0, \;\;\; c^2 = 0, \\ \label{B2} \{G,c\} & =- 2 \gamma, \;\;\; [G,\gamma]= -\frac{1}{2} \partial c \, , \end{align} where the expressions $\partial c$ and $\partial \gamma$ have been defined as $\partial \phi \equiv U_{1}^\dag U_{1} \partial \tilde \phi (0) |0\rangle $. The action of the BRST operator $Q$ on the basic string fields $K$, $G$, $B$, $c$ and $\gamma$ is given by \begin{align} \label{QK} Q K &=0, \;\;\; Q G =0, \;\;\; Q B =K, \\ \label{Qc} Q c &= cKc-\gamma^2, \\ \label{Qg} Q \gamma &= c \partial \gamma - \frac{1}{2} \gamma \partial c \, . \end{align} Now we are in a position to study and present the construction of a one-parameter family of solutions. \section{One-parameter family of solutions from an identity based solution} It is known that a solution to the equation of motion (\ref{EqmC1}) is given by the following simplest identity based solution \cite{Arroyo:2010fq,Arefeva:2010yd,AldoArroyo:2012if,Arroyo:2013pha} \begin{align} \label{Iden1} \widehat{\Phi}_I = \Big( (c+B \gamma^2)(1-K) \Big) \otimes \sigma_3. \end{align} Using this identity based solution (\ref{Iden1}), we will show that it is possible to construct a one-parameter family of solutions $\widehat{\Phi}_{\lambda}$ which, depending on the value of the parameter $\lambda$, will describe three distinct gauge orbits corresponding to the perturbative vacuum, the half brane and the tachyon vacuum solution. Let us write the explicit form of the aforementioned gauge transformation \begin{eqnarray} \label{solphi1} \widehat{\Phi}_{\lambda}= \widehat{U}_{\lambda}(\widehat{Q}+\widehat{\Phi}_I)\widehat{U}^{-1}_{\lambda}, \end{eqnarray} where $\widehat{U}_{\lambda}$ is a ghost number zero string field given by\footnote{We would like to give a few motivational words explaining the choice (\ref{gaugeU1}). As in the bosonic case \cite{Arroyo:2010fq}, for superstring field theory we can also construct a gauge transformation which relates the identity based solution (\ref{Iden1}) with the half brane solution \cite{Erler:2010pr}. The gauge transformation which does this job precisely corresponds to a $\widehat{U}$ given by \begin{eqnarray} \label{gaugefootnote} \widehat{U} = \Big( 1+cB[K-1] \Big) \otimes \mathbb{I} + i cBG \otimes \sigma_1 .
\end{eqnarray} Applying a supersymmetric analog of the Zeze map \cite{Zeze:2010sr}, we consider a slight modification of (\ref{gaugefootnote}) in which a real parameter $\lambda$ is inserted in the $cBK$ and $cBG$ pieces in the gauge transformation such that for $\lambda=0$ and $\lambda=1$, we recover the perturbative and tachyon vacua respectively.} \begin{eqnarray} \label{gaugeU1} \widehat{U}_{\lambda} &=& \Big( 1+cB[K+(\lambda-1)(2\lambda+1)] \Big) \otimes \mathbb{I} + 4i\lambda(1-\lambda)cBG \otimes \sigma_1 , \\ \label{gaugeU2} \widehat{U}^{-1}_{\lambda} &=& \Big( 1-cB\frac{K-1+f(K,\lambda)}{K} \Big) \otimes \mathbb{I} - c\frac{\widetilde{f}(K,\lambda)}{K}BG \otimes \sigma_1 , \end{eqnarray} where $f(K,\lambda)$ and $\widetilde{f}(K,\lambda)$ are the following functions \begin{eqnarray} \label{gaugeF1} f(K,\lambda) &=& \frac{\lambda^2 (1-2 \lambda )^2+\left(16 \lambda^3-32 \lambda^2+18 \lambda -1\right) \lambda \, K }{\lambda ^2(1-2 \lambda )^2 +2 \lambda \left(8 \lambda ^3-16 \lambda ^2+10 \lambda -1\right) K+K^2} \; , \\ \label{gaugeF2} \widetilde{f}(K,\lambda) &=& \frac{4 i (1-\lambda) \lambda \, K}{\lambda ^2(1-2 \lambda )^2 +2 \lambda \left(8 \lambda ^3-16 \lambda ^2+10 \lambda -1\right) K+K^2} \; . \end{eqnarray} Then, the one-parameter family of solutions is obtained by performing the above gauge transformation over the identity based solution (\ref{Iden1}) \begin{eqnarray} \widehat{\Phi}_{\lambda} &=& \widehat{U}_{\lambda}\widehat{Q}\widehat{U}^{-1}_{\lambda} + \widehat{U}_{\lambda} \Big( (c+B \gamma^2)(1-K) \otimes \sigma_3 \Big) \widehat{U}^{-1}_{\lambda} \nonumber \\ \label{solphi2} &=& \Phi_{1,\lambda} \otimes \sigma_3 + \Phi_{2,\lambda} \otimes i \sigma_2, \end{eqnarray} where the string fields $\Phi_{1,\lambda}$ and $\Phi_{2,\lambda}$ are given by \begin{eqnarray} \label{solphi3} \Phi_{1,\lambda} &=& Q(Bc)f(K,\lambda) + \lambda(2\lambda-1)c f(K,\lambda) + 4i\lambda(1-\lambda) c GB c G \widetilde{f}(K,\lambda) ,\\ \label{solphi4} \Phi_{2,\lambda} &=& Q(Bc) G \widetilde{f}(K,\lambda) + \lambda(2\lambda-1)c G \widetilde{f}(K,\lambda) + 4i \lambda(1-\lambda) c G B c f(K,\lambda). \end{eqnarray} A check of the equation of motion for the above solution is straightforward. At this point we can ask about the interval where the parameter $\lambda$ should belong, the answer to this question will be studied later, for the time being, let us analyze the solution for particular values of this parameter. For the value of the parameter $\lambda=0$, we identically obtain $\widehat{\Phi}_{\lambda=0}=0$ and thus this case corresponds to the perturbative vacuum. For the value $\lambda = 1$, we see that $ \widetilde{f}(K,\lambda=1)=0$ and $f(K,\lambda=1)=1/(1+K)$, therefore we obtain \begin{eqnarray} \label{sollam1} \widehat{\Phi}_{\lambda=1}= \big[ Q(Bc) + c \big]\frac{1}{1+K} \otimes \sigma_3. \end{eqnarray} This solution precisely represents the tachyon vacuum solution. The energy of this solution (\ref{sollam1}) has been evaluated in references \cite{Gorbachev:2010zz,Arroyo:2010fq} given a result in agreement with Sen's first conjecture. For the value $\lambda = 1/2$, we get $ \widetilde{f}(K,\lambda=1/2)=i/(1+K)$ and $f(K,\lambda=1/2)=1/(1+K)$, so in this case the solution can be written as \begin{eqnarray} \label{sollam2} \widehat{\Phi}_{\lambda=1/2}= \Big[ Q(Bc) - c GB c G \Big]\frac{1}{1+K} \otimes \sigma_3 + \Big[ i Q(Bc)G +i c GB c \Big]\frac{1}{1+K} \otimes i \sigma_2 . 
\end{eqnarray} This solution has been studied in reference \cite{Erler:2010pr} and since the evaluation of its energy brings a result which is half of the value of the tachyon vacuum energy, the solution (\ref{sollam2}) has been called as the half brane solution. Note that to recognize the kind of solution we have, we must calculate the energy associated to the solution. For any solution of the form $\widehat{\Phi} = \Phi_1 \otimes \sigma_3 + \Phi_2 \otimes i \sigma_2$, employing equation (\ref{action2}), we can write the normalized value of the energy $E$ as follows \begin{eqnarray} \label{NorV1} E(\widehat{\Phi}) \equiv -2 \pi^2 g^2 S = \frac{\pi^2}{3} \Big[\langle Y_{-2} \Phi_1 Q \Phi_1 \rangle + \langle Y_{-2} \Phi_2 Q \Phi_2 \rangle \Big]. \end{eqnarray} To evaluate the energy (\ref{NorV1}) for the solution (\ref{solphi2}) with a generic value of the parameter $\lambda$, we will require to define and study correlation functions involving elements of the $GKBc\gamma$ algebra. In the next section, we are going to consider correlation functions including the $G$ field and as a pedagogical application of these correlators, we will show the computation of the energy for the half brane solution. \section{Correlation functions and the half brane energy} To compute the energy for solutions constructed out of elements of the $GKBc\gamma$ algebra, it will be useful to know correlation functions defined on a semi-infinite cylinder of circumference $l$ denoted by $C_{l}$. A point $z$ on the upper half-plane can be mapped to a point $\tilde z \in C_{l}$, which has the property that $\tilde z \simeq \tilde z + l$, through the conformal transformation \begin{eqnarray} \label{gf2cor1P} \tilde{z} = \frac{l}{\pi} \arctan z, \end{eqnarray} The expression for the conformal transformation of primary fields with conformal weight $h$ is given by \begin{eqnarray} \label{gf3cor2P} \tilde{\phi}(\tilde{z}) = \big(\frac{dz}{d\tilde{z}}\big)^h \phi (z) = \big(\frac{\pi}{l}\big)^h \cos^{-2h}\big( \frac{\pi \tilde{z}}{l} \big) \phi \Big( \tan \big( \frac{\pi \tilde{z}}{l} \big) \Big). \end{eqnarray} Using (\ref{gf2cor1P}) and (\ref{gf3cor2P}), we can derive the following correlation function involving the $b(z)$, $c(z)$ and $\gamma(z)$ ghost fields \begin{align} \label{gf8cor3P1} \langle Y_{-2} c(\tilde{x}) \gamma(\tilde{y})\gamma(\tilde{z}) \rangle_{C_l} &= \frac{l^2}{2 \pi^2} \cos \Big( \frac{\pi (\tilde{y}-\tilde{z})}{l} \Big), \\ \label{gf8cor3P2} \langle Y_{-2} b(\tilde{v}) c(\tilde{w}) c(\tilde{x}) \gamma(\tilde{y})\gamma(\tilde{z}) \rangle_{C_l} &= \frac{l \csc \left(\frac{\pi (\tilde{v}-\tilde{w})}{l}\right) \csc \left(\frac{\pi (\tilde{v}-\tilde{x})}{l}\right) \sin \left(\frac{\pi (\tilde{w}-\tilde{x})}{l}\right) \cos \left(\frac{\pi (\tilde{y}-\tilde{z})}{l}\right)}{2 \pi } . \end{align} Using (\ref{gf8cor3P2}), let us compute the correlator $\langle Y_{-2} B c(\tilde{w}) c(\tilde{x}) \gamma(\tilde{y})\gamma(\tilde{z}) \rangle_{C_l}$. Since the $B$ field can be defined as a line integral insertion of the $b(z)$ ghost field inside correlation functions on the cylinder \cite{Okawa:2006vm}, we can write \begin{align} \label{way3} \langle Y_{-2} B c(\tilde{w}) c(\tilde{x}) \gamma(\tilde{y})\gamma(\tilde{z}) \rangle_{C_l} = \langle Y_{-2} \int_{-i \infty}^{i\infty} \frac{d \tilde{v}}{2 \pi i} b(\tilde{v}) c(\tilde{w}) c(\tilde{x}) \gamma(\tilde{y})\gamma(\tilde{z}) \rangle_{C_l}. \end{align} Plugging (\ref{gf8cor3P2}) into the R.H.S. 
of equation (\ref{way3}) and employing the integral \begin{eqnarray} \label{InA1} \int_{-i\infty}^{i \infty} d \tilde v \,\csc \left(\frac{\pi (\tilde{v}-\tilde{w})}{l}\right) \csc \left(\frac{\pi (\tilde{v}-\tilde{x})}{l}\right)=2i(\tilde{w}-\tilde{x}) \csc \left(\frac{\pi (\tilde{w}-\tilde{x})}{l}\right), \end{eqnarray} we obtain \begin{align} \label{CBccgg1} \langle Y_{-2} B c(\tilde{w}) c(\tilde{x}) \gamma(\tilde{y})\gamma(\tilde{z}) \rangle_{C_l} = \frac{l}{2 \pi^2} (\tilde{w}-\tilde{x}) \cos \Big( \frac{\pi (\tilde{y}-\tilde{z})}{l} \Big). \end{align} In the same way, by writing the $G$ field as a line integral insertion of the worldsheet supercurrent $G(z)$ inside correlation functions on the cylinder, we can derive the following correlators \begin{eqnarray} \label{Gccg1} \langle Y_{-2} G c(\tilde x)c(\tilde y) \gamma(\tilde z) \rangle_{C_l} = \frac{l^2}{2 \pi ^2} \Big[ \cos \left(\frac{\pi \left(\tilde y-\tilde z \right)}{l}\right) - \cos \left(\frac{\pi \left(\tilde x-\tilde z \right)}{l}\right) \Big], \end{eqnarray} \begin{align} \label{GBcccg1} \langle Y_{-2} G B c(\tilde w) c(\tilde x) c(\tilde y) \gamma(\tilde z) \rangle_{C_l} = \;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \nonumber \\ =\frac{l \left((\tilde{x}-\tilde{y}) \cos \left(\frac{\pi (\tilde{w}-\tilde{z})}{l}\right)+(\tilde{y}-\tilde{w}) \cos \left(\frac{\pi (\tilde{x}-\tilde{z})}{l}\right)+(\tilde{w}-\tilde{x}) \cos \left(\frac{\pi (\tilde{y}-\tilde{z})}{l}\right)\right)}{2 \pi ^2} \, , \end{align} \begin{align} \label{GBcggg1} \langle Y_{-2} G B c(\tilde w) \gamma(\tilde x)\gamma (\tilde y) \gamma(\tilde z) \rangle_{C_l} = \frac{l \left(\cos \left(\frac{\pi (\tilde{x}-\tilde{y})}{l}\right)+\cos \left(\frac{\pi (\tilde{x}-\tilde{z})}{l}\right)+\cos \left(\frac{\pi (\tilde{y}-\tilde{z})}{l}\right)\right)}{8 \pi ^2}. \end{align} With the aid of these correlation functions, we are ready to evaluate the energy associated to the half brane solution. Using equation (\ref{NorV1}) for the particular case of the solution (\ref{sollam2}), and noting that the BRST exact terms do not contribute to the evaluation of the energy, we obtain \begin{align} \label{NorV1H1} E(\widehat{\Phi}_{\lambda=1/2}) = \frac{\pi^2}{3} \Big[\langle\langle cGBcG \frac{1}{1+K} Q(cGBc) G \frac{1}{1+K} \rangle \rangle - \langle \langle cGBc \frac{1}{1+K} Q(cGBc) \frac{1}{1+K} \rangle \rangle \Big], \end{align} where the notation $\langle \langle \cdots \rangle \rangle$ means that $\langle \langle \cdots \rangle \rangle \equiv \langle Y_{-2} \cdots \rangle$. 
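As an aside, the integral (\ref{InA1}) used above is easy to check numerically. The following Python sketch (ours, not part of the original derivation; the circumference $l$ and the insertion points $\tilde w$, $\tilde x$ are arbitrary sample values within one period) parametrizes the contour as $\tilde v = i s$ and compares both sides:
\begin{verbatim}
# Numerical check of the integral (InA1) along the contour v = i*s.
import numpy as np
from scipy.integrate import quad

l, w, x = 4.0, 1.0, 2.5            # sample circumference and insertion points
csc = lambda z: 1.0 / np.sin(z)

def integrand(s):
    # imaginary parts cancel by the symmetry s -> -s, so only the real part survives
    return (csc(np.pi*(1j*s - w)/l) * csc(np.pi*(1j*s - x)/l)).real

lhs = 1j * quad(integrand, -np.inf, np.inf)[0]
rhs = 2j * (w - x) * csc(np.pi*(w - x)/l)
print(lhs, rhs)                    # both approximately 3.247j
\end{verbatim}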
Employing equations (\ref{GG3})-(\ref{Qg}), after lengthy algebraic manipulations, from equation (\ref{NorV1H1}) we arrive at \begin{align} \label{NorV1H2} E(\widehat{\Phi}_{\lambda=1/2}) = \frac{\pi^2}{3} \Big[\langle\langle Kc\frac{1}{1+K}\gamma^2 \frac{1}{1+K}\rangle \rangle + 3 \langle\langle KcK\frac{1}{1+K}\gamma^2 \frac{1}{1+K} \rangle \rangle -\frac{2}{3} \langle\langle GcK^2\frac{1}{1+K}c\gamma \frac{1}{1+K}\rangle \rangle \nonumber \\ + \langle\langle G\gamma \frac{1}{1+K}cKc\frac{1}{1+K}\rangle \rangle -5 \langle\langle BcKc\frac{1}{1+K}\gamma^2 \frac{1}{1+K}\rangle \rangle-4 \langle\langle Bc\gamma K\frac{1}{1+K}c\gamma \frac{1}{1+K}\rangle \rangle \nonumber \\ +2 \langle\langle B c \gamma K^2 \frac{1}{1+K}c\gamma \frac{1}{1+K}\rangle \rangle + 4 \langle\langle G B c\gamma \frac{1}{1+K}\gamma^2 \frac{1}{1+K}\rangle \rangle-6 \langle\langle B c \gamma K \frac{1}{1+K} c \gamma K\frac{1}{1+K}\rangle \rangle \nonumber \\ + 4 \langle\langle GBcK\frac{1}{1+K}\gamma^3 \frac{1}{1+K}\rangle \rangle-3 \langle\langle GBc\frac{1}{1+K}cKc\gamma \frac{1}{1+K}\rangle \rangle-3 \langle\langle GBcK\frac{1}{1+K}cKc\gamma \frac{1}{1+K}\rangle \rangle \Big]. \end{align} All the above correlators can be computed using equations (\ref{gf8cor3P1}) and (\ref{CBccgg1})-(\ref{GBcggg1}), for instance, let us explicitly compute the correlator $\langle\langle GBcK\frac{1}{1+K}cKc\gamma \frac{1}{1+K}\rangle \rangle $ \begin{align} \label{NorV1H2A1} \langle\langle GBcK\frac{1}{1+K}cKc\gamma \frac{1}{1+K}\rangle \rangle = \int_{0}^{\infty} dt_1 dt_2\,e^{-t_1-t_2} \partial_{s_1} \partial_{s_2} \Big[\langle\langle GBc\Omega^{s_1+t_1}c\Omega^{s_2}c\gamma \Omega^{t_2} \rangle \rangle \Big] \Big{|}_{s_1=s_2=0}, \end{align} where we have used the fact that $\Omega^t = e^{-tK}$. The correlator $\langle\langle GBc\Omega^{s_1+t_1}c\Omega^{s_2}c\gamma \Omega^{t_2} \rangle \rangle$ is given by \begin{align} \label{NorV1H2A2} \langle\langle GBc\Omega^{s_1+t_1}c\Omega^{s_2}c\gamma \Omega^{t_2} \rangle \rangle = \langle Y_{-2} GBc(s_1+s_2+t_1+t_2)c(s_2+t_2)c(t_2)\gamma(t_2) \rangle_{C_{s_1+s_2+t_1+t_2}} . \end{align} The R.H.S. of equation (\ref{NorV1H2A2}) can be evaluated using equation (\ref{GBcccg1}), so that we obtain the result \begin{align} \label{NorV1H2A3} \partial_{s_1} \partial_{s_2} \Big[\langle\langle GBc\Omega^{s_1+t_1}c\Omega^{s_2}c\gamma \Omega^{t_2} \rangle \rangle \Big] \Big{|}_{s_1=s_2=0} = \frac{t_1 \left(\cos \left(\frac{\pi t_1}{t_1+t_2}\right)-1\right)+t_2 \left(-\pi \sin \left(\frac{\pi t_1}{t_1+t_2}\right)+\cos \left(\frac{\pi t_1}{t_1+t_2}\right)-1\right)}{2 \pi ^2 \left(t_1+t_2\right)} \, . \end{align} Performing the change of variables $t_1 \rightarrow u v$, $t_2 \rightarrow u - u v$, $\int_{0}^{\infty} dt_1 dt_2 \rightarrow \int_{0}^{\infty} du \int_{0}^{1} dv \, u $, and using the result (\ref{NorV1H2A3}), from equation (\ref{NorV1H2A1}), we get \begin{align} \label{NorV1H2A4} \langle\langle GBcK\frac{1}{1+K}cKc\gamma \frac{1}{1+K}\rangle \rangle &= \int_{0}^{\infty} du \int_{0}^{1} dv \, \frac{e^{-u} u (\pi (v-1) \sin (\pi v)+\cos (\pi v)-1)}{2 \pi ^2} \nonumber \\ &= -\frac{1}{\pi ^2} \, . \end{align} Performing similar computations for the rest of the terms appearing on the R.H.S.
of equation (\ref{NorV1H2}) and adding the results up, the energy turns out to be \begin{align} \label{NorV1H2A5} E(\widehat{\Phi}_{\lambda=1/2}) = \frac{\pi^2}{3} \Big[ - \frac{3}{2 \pi^2} \Big] = - \frac{1}{2} \, . \end{align} This is precisely $1 / 2$ times the normalized value of the tachyon vacuum energy which has the value $E(\widehat{\Phi}_{\lambda=1}) = -1$. Let us summarize the results for the normalized value of the energy (\ref{NorV1}) which have been obtained for the particular values of the parameter $\lambda=\{0,1/2,1\}$ \begin{align} \label{NorV1H2A6} E(\widehat{\Phi}_{\lambda}) = \begin{cases} 0, & \lambda = 0 \;,\;\; \text{Perturbative Vacuum Solution,} \\ -1/2, & \lambda = 1/2 \;,\;\; \text{Half Brane Solution,} \\ -1, & \lambda = 1 \;,\;\; \text{Tachyon Vacuum Solution.} \end{cases} \end{align} Finally, we would like to evaluate the energy $E(\widehat{\Phi}_{\lambda})$ for a generic value of the parameter $\lambda$. This computation will be performed in the next section. \section{Energy of the one-parameter family of solutions} In order to evaluate the energy associated to the one-parameter family of solutions $\widehat{\Phi}_{\lambda}$ for a generic value of the parameter $\lambda$, it will be useful to express the functions (\ref{gaugeF1}) and (\ref{gaugeF2}) as superpositions of wedge states $\Omega^t = e^{-tK}$. To this end, let us start by rewriting the solution (\ref{solphi2}) as follows \begin{eqnarray} \label{solphi2xd} \widehat{\Phi}_{\lambda}= \Phi_{1,\lambda} \otimes \sigma_3 + \Phi_{2,\lambda} \otimes i \sigma_2, \end{eqnarray} where the $GSO(\pm)$ components $\Phi_{1,\lambda}$ and $\Phi_{2,\lambda}$ are given by \begin{eqnarray} \label{solphi3xd} \Phi_{1,\lambda} &=& Q(Bc)f(K,\lambda) + p c f(K,\lambda) + q c GB c G \widetilde{f}(K,\lambda) ,\\ \label{solphi4xd} \Phi_{2,\lambda} &=& Q(Bc) G \widetilde{f}(K,\lambda) +p c G \widetilde{f}(K,\lambda) + q c G B c f(K,\lambda), \end{eqnarray} and \begin{eqnarray} \label{gaugeF1xd} f(K,\lambda) &=& \frac{p^2 +w K }{(K-r_1)(K-r_2)} \; , \\ \label{gaugeF2xd} \widetilde{f}(K,\lambda) &=& \frac{q K}{(K-r_1)(K-r_2)} \; . \end{eqnarray} The parameters $p$, $q$, $w$, $r_1$ and $r_2$ have been defined as \begin{align} \label{defpqw} p = \lambda(2\lambda-1), \;\;\;\;\;\;\;\; q = 4i \lambda(1-\lambda), \;\;\;\;\;\;\;\; w = \lambda (16 \lambda^3-32 \lambda^2+18 \lambda -1), \\ \label{defr1} r_1 = -8 \lambda ^4+16 \lambda ^3-10 \lambda ^2+\lambda-4 \sqrt{4 \lambda ^8-16 \lambda ^7+26 \lambda ^6-21 \lambda ^5+8 \lambda ^4-\lambda ^3}, \\ \label{defr2} r_2 = -8 \lambda ^4+16 \lambda ^3-10 \lambda ^2+\lambda+4 \sqrt{4 \lambda ^8-16 \lambda ^7+26 \lambda ^6-21 \lambda ^5+8 \lambda ^4-\lambda ^3}. \end{align} Using partial fraction decomposition, the functions defined by equations (\ref{gaugeF1xd}) and (\ref{gaugeF2xd}) can be expressed as \begin{eqnarray} \label{gaugeF1xd22} f(K,\lambda) &=& \frac{\alpha_1}{K-r_1} + \frac{\beta_1}{K-r_2} \; , \\ \label{gaugeF2xd22} \widetilde{f}(K,\lambda) &=& \frac{\alpha_2}{K-r_1} + \frac{\beta_2}{K-r_2} \; , \end{eqnarray} where the parameters $\alpha_1$, $\alpha_2$, $\beta_1$ and $\beta_2$ are given by \begin{align} \label{alphabeta1} \alpha_1 &= \frac{p^2+r_1 w}{r_1-r_2} \, , \;\;\;\; \beta_1 = -\frac{p^2+r_2 w}{r_1-r_2} \, , \\ \label{alphabeta2} \alpha_2 &= \frac{q r_1}{r_1-r_2} \, , \;\;\;\;\;\;\; \beta_2 =-\frac{q r_2}{r_1-r_2} \, .
\end{align} The way we have written the functions (\ref{gaugeF1xd22}) and (\ref{gaugeF2xd22}) allows us to represent them as the following integrals \begin{eqnarray} \label{gaugeF1xd33} f(K,\lambda) &=& \int_{0}^{\infty} dt \big[\alpha_1 e^{r_1 t}+ \beta_1 e^{r_2 t}\big] \Omega^{t} \; , \\ \label{gaugeF2xd33} \widetilde{f}(K,\lambda) &=& \int_{0}^{\infty} dt \big[\alpha_2 e^{r_1 t}+ \beta_2 e^{r_2 t}\big] \Omega^{t} \; . \end{eqnarray} This integral representation constitutes a superposition of wedge states $\Omega^t = e^{-tK}$ \cite{Schnabl:2010tb}. In order for these integrals (\ref{gaugeF1xd33}) and (\ref{gaugeF2xd33}) to provide convergent results, we should require \begin{eqnarray} \label{condilam1} \big(\Re r_1 < 0\big) \wedge \big(\Re r_2 < 0\big). \end{eqnarray} Using equations (\ref{defr1}) and (\ref{defr2}), from this inequality (\ref{condilam1}) we obtain the following conditions for the parameter $\lambda$ \begin{eqnarray} \label{condilam2} \big(\lambda < 0\big) \vee \big(\kappa \leq \lambda <\frac{1}{2}\big) \vee \big(\lambda >\frac{1}{2}\big), \end{eqnarray} where $\kappa$ is a numerical constant defined as \begin{eqnarray} \label{kappa1} \kappa = \frac{2}{3}-\frac{1}{6} \left(\frac{25}{2}+\frac{3}{2} \sqrt{69}\right)^{1/3}-\frac{1}{6} \left(\frac{25}{2}-\frac{3}{2} \sqrt{69}\right)^{1/3} \approx 0.122561 \, . \end{eqnarray} It is interesting to note that the region (\ref{condilam2}) does not contain the points $\lambda=0$ and $\lambda = 1/2$, which correspond to the perturbative vacuum and the half brane solution, respectively. Physically this means that the values $\lambda=0$, $\lambda=1/2$ and the ones defined by the region (\ref{condilam2}) formally correspond to distinct gauge orbits within the formal solution (\ref{solphi1}). Now, we are going to evaluate the energy $ E(\widehat{\Phi}_{\lambda})$ associated to a parameter $\lambda$ belonging to the region (\ref{condilam2}). We might anticipate the result using the following argument. Since the energy is a gauge invariant quantity, and since any $\lambda$ belonging to the region (\ref{condilam2}) corresponds to a specific gauge orbit, to compute the energy we can choose a particular value for the parameter $\lambda$ contained in this region, for instance $\lambda=1$, which we know corresponds to the tachyon vacuum solution; therefore we should obtain the following result for the energy \begin{eqnarray} \label{Earg1} E(\widehat{\Phi}_{\lambda}) = -1 \, , \;\;\; \text{for} \;\; \big(\lambda < 0\big) \vee \big(\kappa\leq \lambda <\frac{1}{2}\big) \vee \big(\lambda >\frac{1}{2}\big). \end{eqnarray} Employing the solution (\ref{solphi2xd}) together with the integral representation of the functions $f$ and $\widetilde{f}$ given by (\ref{gaugeF1xd33}) and (\ref{gaugeF2xd33}), we would like to check the validity of the above result.
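As a side remark, the region (\ref{condilam2}) itself is straightforward to verify numerically. The following Python sketch (ours; the sampled values of $\lambda$ are arbitrary) evaluates $r_{1,2}$ from (\ref{defr1})-(\ref{defr2}), tests whether both real parts are negative, and reproduces the constant $\kappa$ of (\ref{kappa1}):
\begin{verbatim}
# Scan lambda and test the convergence condition Re(r1) < 0 and Re(r2) < 0.
import numpy as np

def r12(lam):
    lam = complex(lam)
    base = -8*lam**4 + 16*lam**3 - 10*lam**2 + lam
    disc = 4*np.sqrt(4*lam**8 - 16*lam**7 + 26*lam**6 - 21*lam**5
                     + 8*lam**4 - lam**3 + 0j)
    return base - disc, base + disc

kappa = (2/3 - (25/2 + 1.5*np.sqrt(69))**(1/3)/6
             - (25/2 - 1.5*np.sqrt(69))**(1/3)/6)
print(kappa)                                   # approximately 0.122561

for lam in (-0.5, 0.05, 0.2, 0.3, 0.5, 0.75, 2.0):
    r1, r2 = r12(lam)
    print(lam, r1.real < 0 and r2.real < 0)
# False only for lambda = 0.05 (inside (0, kappa)) and for lambda = 1/2,
# in agreement with the region given in eq. (condilam2).
\end{verbatim}
The analytic check of (\ref{Earg1}) proceeds as follows.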
Using equation (\ref{NorV1}) for the case of the solution (\ref{solphi2xd}), and noting that the BRST exact terms do not contribute to the evaluation of the energy, we obtain \begin{align} \label{NorV1H1Sollam1} E(\widehat{\Phi}_{\lambda}) = \frac{\pi^2}{3} \Big[p^2 \langle\langle c f Q(c)f \rangle \rangle + q^2 \langle\langle c GB c G \widetilde{f} Q(c GB c) G \widetilde{f} \rangle \rangle + 2 p q \langle\langle c GB c G \widetilde{f} Q(c)f \rangle \rangle \nonumber \\ + p^2 \langle\langle c G \widetilde{f} Q(c)G \widetilde{f} \rangle \rangle + q^2 \langle\langle c GB c f Q(c GB c) f \rangle \rangle + 2 p q \langle\langle c GB c f Q(c)G \widetilde{f} \rangle \rangle \Big]. \end{align} Employing the identities (\ref{GG3})-(\ref{Qg}), the correlation functions (\ref{gf8cor3P1}), (\ref{CBccgg1})-(\ref{GBcggg1}), the integrals (\ref{gaugeF1xd33}) and (\ref{gaugeF2xd33}), we can evaluate all the correlation functions which appear on the R.H.S. of (\ref{NorV1H1Sollam1}). For instance, let us compute $\langle\langle c f Q(c)f \rangle \rangle$ \begin{align} \label{AcorrA1} \langle\langle c f Q(c)f \rangle \rangle = -\langle Y_{-2} c f(K,\lambda) \gamma^2 f(K,\lambda) \rangle = - \int_{0}^{\infty} dt_1 dt_2 \, h(t_1) h(t_2) \langle Y_{-2} c \Omega^{t_1} \gamma^2 \Omega^{t_2} \rangle , \end{align} where we have defined $h(t) = \alpha_1 e^{r_1 t}+ \beta_1 e^{r_2 t}$. Using the correlation function (\ref{gf8cor3P1}), we can derive the correlator \begin{eqnarray} \label{AcorrA2} \langle Y_{-2} c \Omega^{t_1} \gamma^2 \Omega^{t_2} \rangle = \frac{(t_1+t_2)^2}{2 \pi^2}. \end{eqnarray} Plugging (\ref{AcorrA2}) into equation (\ref{AcorrA1}) and performing the change of variables $t_1 \rightarrow u v$, $t_2 \rightarrow u - u v$, $\int_{0}^{\infty} dt_1 dt_2 \rightarrow \int_{0}^{\infty} du \int_{0}^{1} dv \, u $, we get \begin{align} \label{AcorrA3} \langle\langle c f Q(c)f \rangle \rangle &= - \int_{0}^{\infty} du \int_{0}^{1} dv \, \frac{u^3 \big(\alpha _1 e^{r_1 u v}+\beta _1 e^{r_2 u v}\big) \big(\alpha _1 e^{r_1 (u-u v)}+\beta _1 e^{r_2 (u-u v)}\big)}{2 \pi^2} \\ &= -\frac{2 \alpha _1 \beta _1 r_1 r_2 (r_1^2+r_2 r_1+r_2^2) +3 \alpha _1^2 r_2^4+3 \beta _1^2 r_1^4}{\pi ^2 r_1^4 r_2^4} \nonumber \\ \label{AcorrA3Aux} &= \frac{3 - 38 \lambda + 64 \lambda^2 - 32 \lambda^3}{\pi^2 \lambda^2 (2 \lambda-1)^3} \, . \end{align} The integral (\ref{AcorrA3}) exists only when $\Re \, r_{1,2} < 0$, and for such $r_1,r_2$, this integral has the value shown in equation (\ref{AcorrA3Aux}). Note that we have singularities at $\lambda =0$ and $\lambda = 1/2$, while in the case where $\lambda$ belongs to the region $(0,\kappa)$, the expression (\ref{AcorrA3Aux}) is clearly well-defined. Therefore, aside from these two singular points, it seems that the result of the integral does not differentiate between different regions of $\lambda$. We wonder if the same phenomenon can happen for the remaining integrals coming from the rest of the terms on the R.H.S. of equation (\ref{NorV1H1Sollam1}). It turns out that the expressions for the remaining integrals will not be as simple as the one shown in (\ref{AcorrA3}). For instance, from the second term on the R.H.S. of equation (\ref{NorV1H1Sollam1}), after performing algebraic manipulations, we obtain many terms; just as an illustration, let us show one of them \begin{align} \label{AcorrExtra1} \mathcal{I}(\lambda) \equiv \langle Y_{-2} B K c \widetilde{f}(K,\lambda) \gamma K \widetilde{f}(K,\lambda) c \gamma \rangle.
\end{align} Using the integral representation (\ref{gaugeF2xd33}), and defining the function $g(t) = \alpha_2 e^{r_1 t}+ \beta_2 e^{r_2 t}$, we can write equation (\ref{AcorrExtra1}) as follows \begin{align} \label{AcorrExtra2} \mathcal{I}(\lambda) = \int_{0}^{\infty} dt_1 dt_2 \, g(t_1) g(t_2) \langle Y_{-2} B K c \Omega^{t_1} \gamma K \Omega^{t_2} c \gamma \rangle. \end{align} Employing the correlation function (\ref{CBccgg1}), we can derive the correlator \begin{align} \label{AcorrExtra3} \langle Y_{-2} B K c \Omega^{t_1} \gamma K \Omega^{t_2} c \gamma \rangle = \frac{\pi t_2 \left(t_1+t_2\right) \sin \left(\frac{\pi t_2}{t_1+t_2}\right)+\left(t_1^2+\left(2+\pi ^2\right) t_2 t_1+t_2^2\right) \cos \left(\frac{\pi t_2}{t_1+t_2}\right)}{2 \pi ^2 \left(t_1+t_2\right){}^2}. \end{align} Plugging (\ref{AcorrExtra3}) into equation (\ref{AcorrExtra2}) and performing the change of variables $t_1 \rightarrow u v$, $t_2 \rightarrow u - u v$, $\int_{0}^{\infty} dt_1 dt_2 \rightarrow \int_{0}^{\infty} du \int_{0}^{1} dv \, u $, the integral over the variable $v$ can be easily done, so that we get \begin{align} \label{AcorrExtra4} \mathcal{I}(\lambda) = \int_{0}^{\infty} du \frac{\alpha_2 u e^{r_1 u} \big(\beta _2+\frac{u^2}{\pi^2}\alpha_2 (r_1-r_2)^2 + \alpha_2 \big)+\beta _2 u e^{r_2 u} \big(\alpha _2+\frac{u^2}{\pi^2}\beta_2(r_1-r_2)^2 + \beta_2\big)}{ 2\pi^2 + 2(r_1-r_2)^2 u^2}. \end{align} The above integral exists only when $\Re \, r_{1,2} < 0$, and unlike the integral (\ref{AcorrA3}), here we were not able to write a simple analytic expression for the result of this integral (\ref{AcorrExtra4}). Nevertheless, for the parameter $\lambda$ belonging to the region (\ref{condilam2}), integrals like (\ref{AcorrExtra4}) can be evaluated numerically with arbitrary precision. The numerical evaluation of this type of integral blows up in the range $\lambda \in (0,\kappa)$. Carrying out similar computations for the rest of the terms on the R.H.S. of equation (\ref{NorV1H1Sollam1}), adding the results up and performing numerical integration\footnote{The explicit expression for the result of the energy in terms of integrals over the variable $u$ is shown in appendix A.} together with the definitions (\ref{defpqw})-(\ref{defr2}), (\ref{alphabeta1}), (\ref{alphabeta2}), the energy turns out to be \begin{align} \label{AcorrA4} E(\widehat{\Phi}_{\lambda}) = \frac{\pi^2}{3} \Big[ -0.303963550927... \Big] = \frac{\pi^2}{3} \Big[ - \frac{3}{\pi^2} \Big] = - 1. \end{align} Collecting the results (\ref{NorV1H2A6}) and (\ref{AcorrA4}), we can summarize the main result of our paper \begin{align} \label{FinalResult1} E(\widehat{\Phi}_{\lambda}) = \begin{cases} 0, & \lambda = 0 \;,\;\; \text{Perturbative Vacuum Solution,} \\ -1/2, & \lambda = 1/2 \;,\;\; \text{Half Brane Solution,} \\ -1, & \big(\lambda < 0\big) \vee \big(\kappa \leq \lambda <\frac{1}{2}\big) \vee \big(\lambda >\frac{1}{2}\big) \;,\;\; \text{Tachyon Vacuum Solution,} \end{cases} \end{align} namely, depending on the value of the parameter $\lambda$, the solution represents three distinct gauge orbits corresponding to the perturbative vacuum, the half brane and the tachyon vacuum solution. \section{Summary and discussion} We have studied and constructed a one-parameter family of solutions which contains the perturbative vacuum, the half brane and the tachyon vacuum solution in the modified cubic superstring field theory. To our knowledge, this is the first explicit example of a solution which describes these three distinct gauge orbits.
To evaluate the energy associated to the one-parameter family of solutions we have performed analytic computations; however, it would be nice to confirm our results by employing numerical techniques such as the curly $\mathcal{L}_0$ level expansion \cite{Arroyo:2009ec,Arroyo:2011zt,AldoArroyo:2009hf} or the usual Virasoro $L_0$ level expansion scheme \cite{Moeller:2000xv,Kishimoto:2011zza,Arroyo:2014pua}. The numerical analysis should be important, for instance, to check if the solution behaves as a regular element in the state space constructed out of Fock states \cite{Schnabl:2010tb,Takahashi:2007du,AldoArroyo:2011gx}. In the case of open bosonic string field theory, using elements of the $KBc$ subalgebra, in reference \cite{Erler:2012dz} the existence of physically distinct solutions, such as the perturbative vacuum, the tachyon vacuum and the MNT ghost brane \cite{Okuda:2006fb,Masuda:2012kt}, has been analyzed. Following the lines developed in this paper, it would be nice to find a one-parameter family of solutions which describes these distinct gauge orbits. Finally, we would like to comment that the construction of solutions based on gauge transformations of identity-based solutions can be generalized in order to consider more involved solutions, such as the multibrane solutions \cite{AldoArroyo:2012if,Arroyo:2013pha}, and Erler's recently proposed analytic solution for tachyon condensation in Berkovits open superstring field theory \cite{Erler:2013wda}. Since the algebraic structure of Berkovits theory \cite{Berkovits:1995ab} is similar to the cubic superstring field theory, the results of our work can be naturally extended; however, the presence of a non-polynomial action will bring challenges in the search for new solutions within Berkovits theory. \section*{Acknowledgements} I would like to thank Ted Erler for useful discussions. I also thank the referee for his careful work in the peer-review process; his comments helped me to improve the paper. Finally, I would like to give special thanks to my family: my wife Diany, my son Davi, and my newborn daughter Sofia for their valuable company during the elaboration of this work. This work has been supported by CNPq grant 303073/2012-8.
\section{Introduction} Machine learning models are typically designed and fine-tuned for optimal accuracy, which often results in layers of weights that are difficult to explain or understand. In the meantime, recent successes of machine learning systems have attracted adoption from more end-users, who need to better understand the model in order to trust or properly use such machine learning systems. To make these two ends meet, researchers and practitioners alike have adopted several approaches, including 1) using approximate models just for explanation\cite{ancona2019explaining}; 2) linear local explanation for complex global models (e.g. LIME\cite{lime}); 3) example-based explanation by finding and showing most influential training data points\cite{kabra2015understanding}. These approaches all have their own merits, but none of them deliver everything needed by end-users\cite{rudin2019stop}. The fundamental limitation of these approaches is that they assume that 1) certain aspects of machine learning systems, especially complex deep neural networks, cannot be understood by human beings, and 2) typical human users can only understand simple concepts such as linear systems. We have an opportunity to improve on previous attempts with two assumptions. First, human users are intelligent, just not in the same way as machines. Humans can identify patterns intelligently but may not be able to scale up to thousands of data points easily. Second, machine learning systems are built to reflect actual physical systems that follow logical and physical rules. What worked well most likely can be explained, even though the explanation could be complex. What cannot be explained most likely is not a good reflection of the underlying physical properties. We intend to make improvements in this area by 1) presenting various aspects of the actual model through verbatim model manifestation (instead of trying to approximate the models), and 2) identifying and generating a manageable number of data points to present to users in the local context of the point-of-interest, so that human users can use their own intelligence to understand what the actual model is trying to do within a limited scope that is manageable by a human being. With this intuition, we aim to design an approach to facilitate human users' understanding of machine learning models through 1) verbatim manifestation of certain aspects of the underlying machine learning systems and 2) contextualized visualization of carefully curated or generated data points that facilitates human understanding. In other words, we try to build a bridge between machine and human intelligence to address machine learning models' explainability problems. Furthermore, we observe that a typical human user does not need to understand the complete machine learning model to gain confidence in the results from the model. The user only needs to understand the rationale behind the decision related to the current task. In this paper, we present a three-stage human-in-the-loop XAI system, a high-level illustration of which is depicted in Figure~\ref{idea}. For a given (mispredicted) point-of-interest, our framework tries to carve out its local decision boundary and delineate the model behavior through a neighborhood manifestation. Our framework leverages variational autoencoders (VAE) to generate neighborhood examples that cross the decision boundary. Human users are involved in exploring the neighborhood through three carefully designed intervention points. 
These intervention points help human users limit the neighborhood's scope and enable them to gain insights from the model behavior. The source code of our work is publicly available on GitHub: \url{https://github.com/drchangliu/xai}. The main contributions of our work are: \begin{itemize} \item We propose a novel human-in-the-loop framework that could mitigate the trust crisis between human users and machine learning models. \item Several case studies are presented to illustrate the potential of our approach to facilitating human understanding of complex machine learning models. \item A general framework to depict the local decision boundary around the (mispredicted) instance-of-interest. \end{itemize} \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{idea.png} \caption{A high-level illustration of our proposed framework. a) For a (mispredicted) point-of-interest (red x) and a trained machine learning model, b) our framework tries to carve out the local decision boundary and delineate the model behavior through a manageable neighborhood manifestation. c) Images of \textit{sandals} and \textit{ankle boot} from the fashionMNIST dataset that cause confusion to a classifier. Human users can understand the classification errors by seeing the context that some sandals have boot-shaped heels. Another classification error is from the Caltech 101 dataset. The trust crisis can be mitigated given the context that some chairs have fan-shaped bases.} \label{idea} \end{figure} \section{Related Work} Machine learning researchers and practitioners have always used techniques and tools to better understand machine learning models. In this section, we examine a few state-of-the-art tools that are publicly accessible in an attempt to shed some light on how they can help software engineers adopt machine learning components. To understand the information flow of a deep network, Ancona et al.\cite{ancona2017towards} have studied the problem of assigning contributions to each input feature of a network. Such methods are known as \textit{attribution methods}, which can be divided into two categories: perturbation-based and backpropagation-based. The perturbation-based methods, such as Occlusion \cite{zeiler2014visualizing}, LIME \cite{lime} and Shapley value \cite{ancona2019explaining}, change the input features and measure the difference between the new output and the original output, while backpropagation-based methods compute the attributions for all input features through the network. Backpropagation-based methods work in either a feature-wise or a layer-wise manner. Feature-wise approaches include Gradient*Input \cite{shrikumar2016not} and Integrated Gradients \cite{sundararajan2017axiomatic}. Layer-wise approaches include Layer-wise Relevance Propagation \cite{bach2015pixel}, Class activation maps \cite{Simonyan2014DeepIC}\cite{selvaraju2017grad}\cite{chattopadhay2018grad}\cite{wang2020score} and DeepLIFT \cite{shrikumar2017learning}. Among these related research efforts, LIME \cite{lime} and DEEPVID \cite{wang2019deepvid} are the two most relevant methods as compared to our framework. LIME, proposed by Ribeiro et al., was an approach able to explain the predictions of any model \cite{lime}. LIME utilized a locally interpretable model to interpret the black-box model's prediction results and constructed the relationship between the local sample features and the prediction results. Explanations from LIME do not exactly reflect the underlying model.
LIME can describe the prediction outcomes obtained even with complex models, such as Random Forest, Support Vector Machine, Bagged Trees, or Naive Bayes. LIME can handle different input data types, including tabular data, image data, or text data. DEEPVID, proposed by Wang et al., was a visual analytics system that leverages knowledge distillation and generative modeling to generate a visual interpretation for image classifiers \cite{wang2019deepvid}. Given an image of interest, DEEPVID applied a generative model to generate samples near it. These generated samples were used to train a local interpretable model to explain how the original model makes the decision. The difference between our approach and DEEPVID is that, instead of utilizing interpretable models such as linear regression to provide interpretation, our approach visualizes boundary examples directly. End-users can then leverage their human intelligence to interpret the model decision. DeepDIG \cite{karimi2019characterizing}\cite{karimi2020decision}, developed by Karimi et al., is a framework used to characterize the decision boundary of deep neural networks. The main contribution can be divided into two parts. The first part is generating borderline instances that are near the decision boundary. This part is completed in three steps: the first and second steps generate adversarial instances with an autoencoder, and the third step generates the borderline instances by a binary search over the adversarial instances produced in the first two steps. The second contribution is a characterization that measures the decision boundary complexity in the input space and the embedding space. The input space complexity is calculated from the borderline instances generated in the first part, and the embedding space complexity is measured with a linear Support Vector Machine (SVM) model. \section{The proposed human-in-the-loop framework} Given a trained machine learning model and a (mispredicted) point-of-interest, we intend to generate a neighborhood that can enable a better human understanding of the model. The generated neighborhood needs to satisfy three critical criteria: \begin{itemize} \item The instances in the neighborhood need to be semantically close to the point-of-interest. \item The decision boundary is at least partially visible within the neighborhood. \item The neighborhood needs to keep the number of instances at a manageable size so that human users can gain insight from it. \end{itemize} To generate a neighborhood that can satisfy the above three criteria, we propose the human-in-the-loop framework that contains three stages, as shown in Figure~\ref{pipeline}. In the first stage, a neighborhood is generated based on the given sample through a trained generative model. In the second stage, the pre-trained machine learning model is used to yield classification on the generated instances to carve out the local decision boundary and delineate the model behavior. Next, three intervention points are provided to enable human users to perform a thorough exploration and gain insights. In the following subsections, we explain each stage in detail. \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{pipeline.png} \caption{The proposed human-in-the-loop framework. It contains three stages. In stage (I), a neighborhood is generated based on the given sample through a trained variational autoencoder.
In stage (II), the pre-trained machine learning model is used to yield classification on the generated instances to carve out the local decision boundary and delineate the model behavior. In stage (III), human users are enabled with three intervention points to explore the neighborhood: a) refined multifacet path exploration, b) ``zoom-in''\& ``zoom-out'' area exploration, and c) boundary-crossing morphing exploration.} \label{pipeline} \end{figure} \subsection{Stage (I): Neighborhood Generation} Stage one can be described as a stochastic process that generates neighbors from the given point-of-interest. There are two approaches to accomplish such a procedure: Variational Auto-Encoders (VAEs) and Generative Adversarial Networks (GANs). Both generative methods assume an underlying latent space that is mapped to the original data space through a deterministic parameterized function. The generative model often consists of an encoder that can map the given data into the latent space, and a decoder that can decode the latent space vector back to the original space. In this work, we adopt a VAE as the generative model because of its more straightforward model structure. As shown in Figure~\ref{VAE}, we train an encoder-decoder CNN-VAE with ten latent dimensions on the MNIST dataset to learn the underlying latent distribution. A hyper-parameter \textit{step-length} is applied to each latent dimension via linear interpolation to generate the perturbed latent vectors. The perturbed latent vectors are then fed through the decoder to generate neighbors around the point-of-interest. More formally, a VAE model that consists of an encoder $q_\theta(z|x)$ and a decoder $p_\phi(x|z)$ is trained on the dataset $X$, where $X = \{(x_1,y_1), (x_2,y_2), ..., (x_n,y_n)\}$, $x_i \in R^D$ and $y_i\in[1,c]$. The VAE is trained with the negative log-likelihood plus a KL regularizer. The loss function $l_i$ for data instance $x_i$ is: \begin{equation} l_i = -E_{z \sim q_\theta(z|x_i)}[\log p_\phi(x_i|z)] + KL(q_\theta(z|x_i)||p(z)), \end{equation} where $z\in R^d$ denotes a point in the \textit{d}-dimensional embedding space learned by the VAE encoder. Using the trained VAE, examples near the point-of-interest can be generated to form the neighborhood. A hyper-parameter \textit{step-length} needs to be chosen to determine the border of the neighborhood. In practice, we set \textit{step-length} equal to one as the default value. \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{VAE.png} \caption{The architecture of our selected generative model, i.e., a Variational AutoEncoder (VAE).} \label{VAE} \end{figure} \subsection{Stage (II): Neighborhood Classification} To identify and visualize the local decision boundary, the given trained machine learning model is applied to the generated instances. The classification results are highlighted with different colors so that the model behavior can be delineated. We call this classification result the classified neighborhood. A classified neighborhood is one where every data point within the neighborhood has been classified by the model-under-investigation so that the decision boundary is identified and visualized verbatim. Because the actual model is used, this is a verbatim manifestation of the model decision boundary within the neighborhood. In practice, a larger value of \textit{step-length} is recommended to ensure a decision boundary with a clear difference between the opposite sides. In our experiments, we set the \textit{step-length} to 1.
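To make the two stages concrete, the following Python sketch (ours; it assumes PyTorch-style modules \texttt{vae\_enc}, \texttt{vae\_dec} and a classifier \texttt{clf} whose names and interfaces are illustrative rather than taken from our released code, and it assumes the encoder returns the latent mean and log-variance) perturbs two latent dimensions of the point-of-interest with a given \textit{step-length} and classifies every decoded neighbor with the model under investigation:
\begin{verbatim}
# Sketch of Stage (I) neighborhood generation and Stage (II) classification.
import torch

def classified_neighborhood(x, vae_enc, vae_dec, clf,
                            dims=(0, 1), step_length=1.0, grid=7):
    """Perturb two latent dimensions of the point-of-interest x and classify
    every decoded neighbor with the trained model under investigation."""
    with torch.no_grad():
        mu, _ = vae_enc(x.unsqueeze(0))       # latent code of the point-of-interest
        offsets = torch.linspace(-step_length, step_length, grid)
        images, labels = [], []
        for du in offsets:                    # first latent dimension of interest
            for dv in offsets:                # second latent dimension of interest
                z = mu.clone()
                z[0, dims[0]] += du
                z[0, dims[1]] += dv
                x_gen = vae_dec(z)            # Stage (I): generated neighbor
                images.append(x_gen)
                labels.append(clf(x_gen).argmax(dim=1))   # Stage (II)
    return torch.cat(images), torch.cat(labels)
\end{verbatim}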
\subsection{Stage (III): Human-in-the-loop Exploration} Three intervention points are provided in our human-in-the-loop stage. Specifically, \begin{itemize} \item a refinement intervention point that provides a multifacet refined neighborhood exploration. \item a ``zoom-in'' \& ``zoom-out'' intervention point that enables human users to take a closer look at a certain region of interest. \item a morphing intervention point that selects two examples from each side of the decision boundary and creates a visualization path. \end{itemize} For the first intervention point, human users are enabled to identify the dimensions of interest, i.e., specific dimensions from the $d$-dimensional latent space. Next, we allow the human to adjust the hyper-parameter \textit{step-length} along the selected latent dimension for exploration. A larger value of the \textit{step-length} will enrich the semantic variation, while a smaller value can provide a more concentrated result. The \textit{step-length} serves as a ``tuning knob'' to adjust traversal speed in the latent space, which helps human users to understand how a prediction is carved out from specific changes. For the second intervention point, human users are allowed to identify two hidden dimensions of interest and construct a morphing matrix based on this two-dimensional space. Allowing the morphing of two dimensions simultaneously can provide a richer context around the point-of-interest. The second intervention point acts as a ``zoom-in''\& ``zoom-out'' effect to assist human users in gathering insights from the generated examples. For the third intervention point, a few instances that are semantically close to the given point-of-interest on the two sides of the decision boundary are provided. Next, a morphing path between the two instances is created and the path passes through the point-of-interest. The algorithm for identifying the nearest neighbor and creating the morphing path is shown in Algorithm 1. Such morphing traverses the data manifold while crossing the decision boundary, which can delineate the model behavior and explain how and why a particular image is relevant to the prediction. \begin{algorithm}[h] \SetAlgoLined \textbf{Given:} Dataset $(X,Y)$\\ \textbf{Given:} Classifier $F()$ to be interpreted\\ \textbf{Given:} Pretrained \emph{VAE: (VAE-enc, VAE-dec)}\\ \textbf{Given:} Data instance of interest $(x_i, y_i)$, where $y_i=c_1$, but mispredicted $F(x_i)=c_2$\\ \begin{algorithmic}[1] \STATE $\emph{enc-}x_i = \emph{VAE-enc}(x_i)$ \FOR{{$(x_j,y_j) \in (X,Y), y_j=c_1$}} \STATE $\emph{enc-}x_j = \emph{VAE-enc}(x_j)$ \STATE update $x_j$ s.t. $\|\emph{enc-}x_j - \emph{enc-}x_i\|_{L1}$ is smallest \ENDFOR \FOR{{$(x_k,y_k) \in (X,Y), y_k=c_2$}} \STATE \emph{enc-}$x_k =$ \emph{VAE-enc}$(x_k)$ \STATE update $x_k$ s.t. $\|\emph{enc-}x_k - \emph{enc-}x_i\|_{L1}$ is smallest \ENDFOR \STATE interval=($\emph{enc-}x_k - \emph{enc-}x_i)$/num-neighbors \STATE neighbors=[] \STATE labels=[] \FOR{{i=0, i$\leq$num-neighbors; i++}} \STATE neigh = $\emph{enc-}x_i \pm $interval \STATE neighbors.append(neigh) \STATE labels.append($F$(neigh)) \ENDFOR \STATE Visualize(neighbors, labels) \end{algorithmic} \caption{Pseudocode for the proposed method} \end{algorithm} \section{Experiment Setup} To verify the effectiveness of our proposed framework, we conduct several experiments on two datasets. Section 4.1 describes the datasets and the trained machine learning model architectures. Section 4.2 presents the detailed experimental settings for our framework.
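Before describing the setup, and for concreteness, we note that the nearest-neighbor search and morphing path of Algorithm 1 admit a compact realization. The following sketch (ours, with hypothetical names, reusing the modules assumed in the earlier listing) is a simplified variant: it interpolates directly between the two nearest latent codes rather than stepping symmetrically around the point-of-interest, and it decodes the interpolated codes before classifying them:
\begin{verbatim}
# Sketch of the boundary-crossing morphing path of Algorithm 1 (simplified).
import torch

def morphing_path(x_i, c1, c2, X, Y, vae_enc, vae_dec, F, num_neighbors=10):
    """x_i is the mispredicted point-of-interest with true class c1 and
    predicted class c2; (X, Y) is the training set."""
    with torch.no_grad():
        z_i = vae_enc(x_i.unsqueeze(0))[0]
        # nearest (in L1 latent distance) training examples of class c1 and c2
        z_c1 = min((vae_enc(x.unsqueeze(0))[0] for x, y in zip(X, Y) if y == c1),
                   key=lambda z: torch.norm(z - z_i, p=1).item())
        z_c2 = min((vae_enc(x.unsqueeze(0))[0] for x, y in zip(X, Y) if y == c2),
                   key=lambda z: torch.norm(z - z_i, p=1).item())
        path, labels = [], []
        for t in torch.linspace(0.0, 1.0, num_neighbors):
            z = (1 - t) * z_c1 + t * z_c2     # straight line in latent space
            img = vae_dec(z)
            path.append(img)
            labels.append(F(img).argmax(dim=1))
    return torch.cat(path), torch.cat(labels)
\end{verbatim}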
\subsection{Datasets and Trained Machine Learning Architectures} We investigate the proposed framework against two datasets, MNIST and FashionMNIST. The MNIST dataset is a large database of handwritten digits, while FashionMNIST is a dataset of Zalando's article images. The images in these datasets are 28x28 grayscale images associated with a label from 10 classes. Both MNIST and FashionMNIST are commonly used for training various image processing machine learning models. The details of the datasets and the chosen model performance are shown in Table~\ref{tab1}. \begin{table} \caption{Description of the investigated datasets.}\label{tab1} \begin{tabular}{|l|l|l|} \hline & MNIST & FashionMNIST\\ \hline \# of training examples & 60,000 & 60,000\\ \hline \# of testing examples & 10,000 & 10,000\\ \hline \# of output classes & 10 & 10\\ \hline Original data space (i.e., \# of dimension) & 784 & 784\\ \hline Test accuracy of the chosen model (\%) & 94.1 & 92.5\\ \hline \end{tabular} \end{table} \subsection{Our proposed framework settings} In this subsection, we describe the training details of each stage. Stage (I) utilizes a variational autoencoder that is pre-trained on the dataset to generate the neighborhood based on the given point-of-interest. Table~\ref{tab2} demonstrates the hyper-parameters of the pre-trained autoencoder for both datasets. Since MNIST contains simpler data points than FashionMNIST, we use a 10-dimensional latent space to represent the images in MNIST, and a 20-dimensional latent space for FashionMNIST. \begin{table} \caption{Description of variational autoencoder models used in Stage (I) and classifiers that need to be explained. The model architecture, activation function, and the dimensionality of the latent space are shown accordingly.}\label{tab2} \begin{tabular}{|l|l|l|} \hline & VAE & Classifier\\ \hline MNIST & \textit{CNV}(32, 64, 64), \textit{ReLU}, 10 & \textit{Linear}(20,10), \textit{ReLU}\\ \hline FashionMNIST & \textit{CNV}(32, 64, 64), \textit{ReLU}, 20 & \textit{Linear}(20,10), \textit{ReLU}\\ \hline \end{tabular} \end{table} \section{Result} This section will first apply our proposed framework to the MNIST dataset and illustrate how our framework works by providing multiple examples. Then, we apply our method to the FashionMNIST dataset. The examples presented here demonstrate our framework's potential for improving human understanding of black-box machine learning models. Note that due to the page limits we only present a handful of case studies on two datasets. We also apply our framework to other datasets such as 3-D point cloud data. More interesting examples can be found on our GitHub page. \subsection{MNIST} A CNN model trained on the MNIST dataset for digit classification is selected and yields a 94.1\% accuracy on the testing dataset. A mispredicted example is chosen for the case study. Figure~\ref{stage1} and Figure~\ref{stage2} show the selected mispredicted point-of-interest and the stage (I) and stage (II) process. As shown in Figure~\ref{stage1}, the neighborhood of the point-of-interest is generated in grey-scale. The examples in the neighborhood satisfy the criteria in Section 3 as they are all semantically close to the original data point. The classified neighborhood is shown in Figure~\ref{stage2}. The colors refer to the classification results. We observe that despite being classified to the same label, images close to the decision boundary have higher fidelity.
This observation is consistent with our intuition that the model is more likely to mispredict samples near the decision boundary. One can also draw a similar conclusion by visually examining the classified neighborhood: examples near the decision boundary often have an ambiguous shape that sometimes confuses machine learning models. Through stage (I) and stage (II), our framework generates examples that delineate the model behavior by depicting the local decision boundary. \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{stage1.png} \caption{Stage (I) of our framework. In Stage (I), the neighborhood of the point-of-interest is generated. The examples in the neighborhood satisfy the criteria in Section 3 as they are all semantically close to the original data point.} \label{stage1} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{stage2.png} \caption{Stage (II) of our framework. In Stage (II), the generated neighborhood is classified with the given trained machine learning model. Purple color indicates the image is classified as \textit{digit-4}, orange color indicates the image is classified as \textit{digit-9} and all other classification results are marked in grey. We also observe that despite being classified to the same label, images close to the decision boundary have higher fidelity.} \label{stage2} \end{figure} After getting the classified neighborhood that carves the local decision boundary around the point-of-interest, human users could be invited to explore the neighborhood using their own intelligence. Figure~\ref{stage3a}, Figure~\ref{stage3b_2}, Figure~\ref{stage3b_1} and Figure~\ref{stage3c} illustrate the three possible human-in-the-loop exploration strategies. From Figure~\ref{stage3a}, one can observe that at stage (III-a) there exist three interesting ways of morphing between \textit{digit-4} and \textit{digit-9}. Therefore, human users can gain insights by investigating the relevant features that have been changed along the process of \textit{digit-4} morphing to \textit{digit-9}. In this example, the three identified morphing paths revealed three related features: 1) the sharpness of the circle, 2) the size of the circle, and 3) the straightness of the line. Next, human users can combine two paths for a ``zoom-in''\& ``zoom-out'' investigation. Combining two paths allows human users to gather richer information related to the decision boundary. As shown in Figure~\ref{stage3b_1} and Figure~\ref{stage3b_2}, two possible combinations are chosen and presented, and the \textit{step-length} is adjusted for the ``zoom-in'' effect and the ``zoom-out'' effect. From the denser region manifestation, one might conclude that 1) an ``open circle'' at the top could help the given predictor correctly identify a \textit{digit-4}, and 2) lines with roundness instead of sharpness could mislead the predictor into mispredicting a \textit{digit-4} as a \textit{digit-9}. Such conclusions could help human users better understand how the model behaves in a certain region. \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{stage3a.png} \caption{Stage (III-a) of the framework. In Stage (III-a), three paths are identified and the morphing is highlighted with different colors.
In this example, the three identified morphing paths revealed three related features: 1) the sharpness of the circle, 2) the size of the circle, and 3) the straightness of the line.} \label{stage3a} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{stage3b_2.png} \caption{Stage (III-b) of the framework. In Stage (III-b), the combination of two paths is presented to achieve a ``zoom-in'' effect for better carving out the model behavior. From this denser region manifestation, one might conclude that 1) an ``open circle'' at the top could help the given predictor correctly identify a \textit{digit-4}, and 2) lines with roundness instead of sharpness could mislead the predictor into mispredicting a \textit{digit-4} as a \textit{digit-9}.} \label{stage3b_2} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{stage3b_1.png} \caption{Stage (III-b) of the framework. In Stage (III-b), the combination of two paths is presented to achieve a ``zoom-out'' effect for better carving out the model behavior.} \label{stage3b_1} \end{figure} Figure~\ref{stage3c} demonstrates the result generated by our third intervention point. As shown in the figure, a \textit{digit-4} is mispredicted as \textit{digit-9}. By examining the morphing from the nearest digit 4 (in purple) to the nearest digit 9 (in orange), the circled area can be identified by human intelligence as one of the explanations for the misprediction. Two other examples are shown in Figure~\ref{stage3c_1} and Figure~\ref{stage3c_2}. The local decision boundaries of the model near the two selected instances-of-interest are displayed, and end-users can better understand the model behavior by visually examining these samples. In these two cases, we could observe that the mispredictions are likely to be caused by the circled areas in the image's top-left region. Note that human users can leverage their own intelligence to generate their own understanding of the model behavior. Our framework only provides the intervention points that bridge the gaps between human minds and the black-box nature of machine learning models. \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{stage3c.png} \caption{Stage (III-c) of the framework. In this stage, two nearest data samples from the original dataset are selected to bridge the gaps between the point-of-interest and real samples on two sides of the decision boundary.} \label{stage3c} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{stage3c_1.png} \caption{Stage (III-c) of the framework. In this stage, two nearest data samples from the original dataset are selected to bridge the gaps between the point-of-interest and real samples on two sides of the decision boundary.} \label{stage3c_1} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{stage3c_2.png} \caption{Stage (III-c) of the framework. In this stage, two nearest data samples from the original dataset are selected to bridge the gaps between the point-of-interest and real samples on two sides of the decision boundary.} \label{stage3c_2} \end{figure} \subsection{FashionMNIST} We provide another experiment using the FashionMNIST dataset. In Figure~\ref{fashionMNIST}, a sandal is mispredicted as an ankle boot (in green) by a pre-trained CNN. Without the context that some sandals are boot-shaped, it would be difficult to understand the cause of this error.
We select this mispredicted image as an item-of-interest and apply the trained VAE to extract its latent vector. Next, we explore the latent space around the extracted latent vector and generate a manageable number of neighbor images. The trained CNN is then applied to classify the generated images. The decision boundary can be observed as the classified label morphs from sandal (in purple) to ankle boot (in orange). By visually displaying the neighborhood and the decision boundary (the area where purple turns into orange), the end-user can observe the smooth transition between sandal and ankle boot. Human users can easily draw the conclusion that the circled areas might cause the misprediction, i.e., if a boot-shaped image has blank space at the circled areas, it is likely to be classified as an ankle boot. \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{FashionMNIST.png} \caption{A sandal is mispredicted as an ankle boot in the FashionMNIST dataset. Without the context that some sandals are boot-shaped, it would be difficult to understand the cause of this error. The neighborhood manifestation provided by our framework enables human users to explore the context and thus gain an understanding of this type of mistake.} \label{fashionMNIST} \end{figure} \section{Workflow of human users of the proposed framework} This study aims to improve the explainability of machine learning models in a human-centric fashion. In this section, we present how a human user or a software engineer can leverage our framework to understand why a given ML model misclassifies a data point. There are three human intervention points. \subsection{Identifying the point-of-interest} First, the human user identifies a mispredicted point-of-interest, which software engineers routinely encounter as they debug software systems with ML components. \subsection{Identifying interesting dimensions and appropriate step lengths} Second, the key question from a user's perspective is: how and why a particular region of the point of interest is relevant to the prediction. That is where human users can again contribute by identifying the most interesting dimensions of semantic changes. Our framework leverages a powerful generative model, variational autoencoders, to generate a neighborhood of closely related data points. The generated neighborhood displays a progressive set of plausible variations of the point-of-interest and visualizes the semantic changes across all directions. The human user can use their common-sense judgment to identify more interesting dimensions and more appropriate step lengths of changes on these dimensions so that changes in neighboring data-points are perceivable but not too dramatic. \subsection{Selecting two most revealing dimensions to generate a matrix for decision boundary visualization} Third, human users then select the two most revealing dimensions so that a matrix of data-points can be generated to visualize the effects of gradual changes on both dimensions. This matrix represents the neighborhood of interest. All generated data-points in the neighborhood are passed through the actual model-under-investigation so that the decision boundary is identified and visualized verbatim. Human users can gain knowledge and insights by walking through the classified instances and examining the decision boundary.
These three intervention points provide helpful exploration tools to help human users see, select, and manipulate the neighborhood of the data-point-of-interest and the decision boundary within it and therefore better understand the behavior of the underlying model. \section{Discussion, Limitations and Future Works} This paper proposes a human-in-the-loop framework to improve human understanding of black-box machine learning models locally through verbatim neighborhood manifestation. However, the proposed method is limited in several ways. First, the neighborhood is generated based on the reconstructed data point. We lack a quantitative measure of the fidelity of the generated neighborhood to the original samples. Though the generated samples are derived from the VAE that was directly trained on the original dataset, some details are lost. Second, we adopt a standard VAE to encode the data point into latent space. Moving in such a latent space typically affects several factors of variation at once, and different directions interfere with each other~\cite{mathieu2019disentangling}. This entanglement effect poses challenges for interpreting these directions' semantic meaning and, therefore, hinders human users from understanding the machine learning models. Each of the limitations mentioned above points to a potential direction for future work. We want to quantify the fidelity of the generated data through metrics such as mean-absolute-error or binary-cross-entropy. For the second limitation, we are considering leveraging disentangled VAEs to generate neighborhoods along semantically meaningful directions. We are also interested in learning a set of latent space directions inducing orthogonal transformations that are easy to distinguish from each other and offer robust semantic manipulations in the neighborhood manifestation. These future works introduce exciting challenges for bridging the gaps between the black-box nature of machine learning models and human understanding. \section{Conclusion} Machine learning models are mainly being developed and fine-tuned for optimal accuracy, while understanding these models has not attracted much attention. Existing XAI models focus on providing approximate hit-or-miss explanations, which do not involve humans in the explanation process and neglect human intelligence. We propose a human-in-the-loop explanation framework that reframes the explanation problem as a human-interactive problem to tackle this limitation. Our approach utilizes a generative model to enrich the (mispredicted) point-of-interest neighborhood and carve out the local decision boundary by highlighting the model prediction results. We provide three human-involved exploration intervention points that assist human users in building their own understanding of the model behavior. We conducted case studies on two datasets, and the experimental results demonstrate the potential of our framework for building a bridge between machine and human intelligence. \bibliographystyle{splncs04}
\section{Introduction} In option pricing, the Feynman-Kac formula \cite{KarShreve97} establishes a link between the conditional expectation of the value of a contract payoff function under the risk-neutral measure and the solution of a partial differential equation. In the research areas covered by this theorem, various numerical pricing techniques can be developed. Existing numerical methods can be classified into three major groups: partial integro-differential equation methods, Monte Carlo simulations and numerical integration methods. Each of them has its advantages and disadvantages for different financial models and specific applications. In this paper, we concentrate on the last group for the pricing of European-type options. The point-of-departure for pricing European options with numerical integration techniques is the risk-neutral valuation formula: \begin{equation}\label{riskneutralvaluation} V(x, t_0 = 0) = e^{-rT} {\mathbb{E}}_{{\mathbb{Q}}} [V(S_T, T) |S_0 = x] = e^{-rT} \ \int_{{\mathbb{R}}} V(y, T)\tilde{f}(y |x)dy \end{equation} with ${\mathbb{E}}_{{\mathbb{Q}}}$ the expectation operator under the risk-neutral measure ${\mathbb{Q}}$, $S_t$ the underlying asset price at $t$ and $T$ the option maturity. $V(x,t)$ denotes the option value at $t$ with $x$ the state variable. $\tilde{f}(y |x)$ is the probability density function of $S_T$ given $S_0=x$, and $r$ is the risk-free interest rate. Unfortunately, for many relevant pricing processes, their probability densities are usually unknown. On the other hand, the Fourier transforms of these densities, i.e., the characteristic functions, are often available. For instance, from the Levy-Khinchine theorem \cite{ContTankov04} the characteristic functions of Levy processes are known. Characteristic functions have also been derived in the pure diffusion context with stochastic volatility \cite{Heston93} and with stochastic interest rates \cite{BakChen97}. Hence, Fourier transform methods for option pricing have been naturally considered by many authors (see \cite{CarrMadan99} and references therein). Subsequently, new numerical methods have been proposed. For example, the quadrature (QUAD) method was introduced by Andricopoulos et al \cite{AndWidDu03} and the convolution (CONV) method was presented by Lord et al \cite{LordFangBer08}. A fast Hilbert transform approach was considered by Feng and Linetsky \cite{FengLinet08}. The highly efficient Fourier-cosine series (COS) technique, based on Fourier-cosine series expansion of the density function, was proposed by Fang and Oosterlee \cite{FangOosterlee08} and has generated other developments by Hurn et al \cite{HurnLindsayClelland13} or by Ding et al \cite{DingU11}. Recently, Necula et al \cite{NeculaDriFar16} have employed the modified Gram-Charlier series expansion, known as the Gauss-Hermite expansion, for the density function and obtained a closed form pricing formula for European options. In this manuscript, we consider an alternative and propose to expand the probability density function $\tilde{f}(y)$, restricted to a finite interval $[a,b]$, using Legendre polynomials when the characteristic function is known. For approximating a non-periodic function on a finite interval, among the class of basis functions, it is usually recommended to use either Legendre polynomials or Chebyshev polynomials (see page 510 table A.1 in \cite{Boyd00}). Legendre polynomials offer tractability properties that allow many quantities of interest to be computed analytically.
For example, Legendre polynomial has an analytical formula for its Fourier transform as in (\ref{FourierTransform}), which is instrumental and used to recover the coefficients $A_n$ in the series expansion of the density function (\ref{ThLegendreSeries}). The Fourier transform for Chebyshev polynomials does not have a simple closed form and requires some numerical approximations (see discussion in \cite{EvansWebster99}). Moreover the experiments show this formula is numerically stable for large $n$. Generally, the classical Legendre series offers the simplest method of representing a function using polynomial expansion means \cite{Fishback07}. Also the recent analysis by Cohen and Tan \cite{CohenaTan12} shows Legendre polynomial approximation yields an error at least an order of magnitude smaller than the analogous Taylor series approximation and the authors strongly suggest that Legendre expansions, instead of Taylor expansions, should be used when global accuracy is important. Finally, polynomials are convenient to manipulate in general and we compute simply the European option pricing formula by integrating the payoff against Legendre polynomial functions. Adrien Marie Legendre, a French mathematician who discovered the famous polynomials, was never aware of that how much it will be used in developing mathematics. This Legendre polynomial is being used by mathematicians and engineers for variety of mathematical and numerical solutions. For example, in physics, Legendre and Associate Legendre polynomials are widely used in the determination of wave functions of electrons in the orbits of an atom \cite{DickeWittke60, Hollas92} and in the determination of potential functions in the spherically symmetric geometry \cite{Jackson62}. In numerical analysis, Legendre polynomials are used to efficiently calculate numerical integrations by Gaussian quadrature method \cite{MugYeIq06}. Legendre polynomials is not widely used in quantitative finance but not new. For example, Pulch et al \cite{PuchEm09} consider the fair price of options as the expected value of a random field where the input volatility parameter is written as a linear function of uniform random variable. The Polynomial chaos theory using Legendre polynomial yields an efficient approach for calculating the required fair price. Or in \cite{IbsenAlmeida05}, the authors develop arbitrage free interest rate models for a family of term structures parametrized by linear combinations of Legendre polynomials. Each polynomial provides a clear interpretation in terms of the type of movements that they generate for the term structure (see also \cite{AlmeidaDuarteFer98} and \cite{Almei04b}). To our knowledge, it is the first time that Legendre polynomials are used to expand the probability density function of asset prices and option pricing. To recover rapidly and accurately the density function, our key insight relies on the close relation of the characteristic function with the series coefficients of the Legendre polynomials expansion of the density function (see our result in theorem \ref{PropDensityProxy}). Based on this representation for the density function, approximations formulas for pricing European type options are derived. To obtain highly accurate result for European call option, the implementation involves integrating high degree Legendre polynomials against exponential function. 
Some numerical instabilities arise because of serious subtractive cancellations in its formulation (\ref{IntLegendreExp1}) in proposition \ref{PropIntLegendreExp}. To overcome this difficulty, we rewrite this quantity as solution of a second-order linear difference equation in proposition (\ref{SndOrderDiffEq}). To solve this equation in a stable way, we use Olver's algorithm which allows to evaluate these quantities to machine accuracy. Then we develop an analysis to provide estimations of the errors. We believe that a rigorous error estimate is of first importance because the accuracy of our expansion formulas depends on the regularity of the density function. Once done, it brings confidence in the derived expansion and sheds light on the needed assumptions (see our results in propositions \ref{PropositionEpsilon2}, \ref{proposition:epsilon3} and \ref{PropositionEpsilon4}). This paper is structured as follows. In section 2, we develop the series expansion of the density function using Legendre polynomials. Based on this, we derive, in section 3, the formulas for pricing European type options and propose a robust and stable procedure for the implementation. An error analysis is presented in section 4. Some numerical experiments are given in section 5. The final section concludes. \section{Series expansion of density function with Legendre polynomials} The objective is to estimate the density function $\tilde{f}(y)$ using Legendre polynomials given its characteristic function. \subsection{Generalized Fourier Series-Legendre Polynomials} The Legendre polynomials $(P_n(t))_{n \geq 0}$ form a complete basis over the interval $[-1,1]$ and can be defined, in term of power series, by \begin{equation}\label{LegendrePowerSeries} P_n(t) = \frac{1}{2^n} \sum_{k=0}^{\lfloor \frac{n}{2} \rfloor} (-1)^k C_n^k C_{2n-2k}^n t^{n-2k} \end{equation} with $\lfloor r \rfloor$ the floor function and $C_n^k = \frac{n!}{k!(n-k)!}$ the binomial coefficients \cite{Lebedev72, Davis75}.\\ The Legendre basis polynomials can be generalized to cover an arbitrary interval $[a, b]$ by the change of variable $t = \frac{(2x - (a + b))}{(b - a)}$ which leads to the following \begin{equation}\label{LegendrePowerSeriesNormalized} P_n(x) = \frac{1}{2^n} \sum_{k=0}^{\lfloor \frac{n}{2} \rfloor} (-1)^k C_n^k C_{2n-2k}^n \left[\frac{(2x - (a + b))}{(b - a)}\right]^{n-2k}. \end{equation} Sturm-Liouiville theory guarantees the orthogonality of Legendre polynomials and it also shows that we can represent functions on $[a, b]$ as a linear combination of Legendre Polynomials. Thus for suitable $f(x)$ on $[a, b]$ we have the generalized Fourier series \begin{equation}\label{LegendreSeries} f(x) = \sum_{n=0}^{\infty} A_nP_n \left( \frac{2x-(a+b)}{b-a} \right) \end{equation} where $\{ A_n \}_{n=0}^{\infty}$ is a set of coefficients. To find each $An$, we use the orthogonality relation \begin{equation} \int_a^b P_n\left( \frac{2x-(a+b)}{b-a} \right)P_m \left( \frac{2x-(a+b)}{b-a}\right) dx=\delta_{n=m} \frac{(b-a)}{2m+1} \end{equation} and then multiply both sides of expression (\ref{LegendreSeries}) by $P_m \left( \frac{2x-(a+b)}{b-a} \right)$ and integrate to obtain \begin{align} \int_a^b f(x)P_m\left( \frac{2x-(a+b)}{b-a}\right)dx &= \sum_{n=0}^{\infty} A_n\int_a^b P_n \left( \frac{2x-(a+b)}{b-a} \right) P_m \left( \frac{2x-(a+b)}{b-a}\right)dx\\ &= (b-a) \frac{A_m}{2m+1}. 
\end{align} so that \begin{equation}\label{Ancoefficients} A_n = \frac{2n+1}{b-a} \int_a^b f(x)P_n\left( \frac{2x-(a+b)}{b-a}\right)dx \end{equation} \subsection{Approximate risk-netural probability density function using standard Fourier series} we briefly revise the definition of complex Fourier series \cite{Tolstov62, Davis75}. For a suitable function $f(t)$ supported on $[-\pi, \pi]$, the complex Fourier series representation is given by \begin{equation} f(t) = \sum_{k=-\infty}^{+\infty} B_k e^{ikt}, \, \, \, with\, \, \, B_k = \frac{1}{2\pi} \int_{- \pi}^{\pi} f(t)e^{-ikt}dt. \end{equation} If we extend the series to support function with a finite range of $[a, b]$, the complex Fourier series expansion can be defined as: \begin{equation}\label{fComplexFourier} f(x) = \sum_{k=-\infty}^{\infty} B_ke^{i(\frac{2\pi}{b-a}x)k}, \,\,\, with \,\,\, B_k = \frac{1}{b-a}\int_a^b f(x)e^{-ik(\frac{2\pi}{b-a}x)}dx. \end{equation} The formula is achieved through use change of variables: \begin{equation} x = \frac{b-a}{2\pi}t + \frac{a+b}{2} \,\, or \,\, t = \frac{2\pi}{b-a}x - \frac{\pi (a+b)}{b-a} \end{equation} Being given a probability density function $\tilde{f}(x)$ and its characteristic function $\varphi(u)$, these two functions form a Fourier pair: \begin{align} \varphi(u) = & \int_{{\mathbb{R}}}e^{iux}\tilde{f}(x)dx \\ \tilde{f}(x) = & \frac{1}{2\pi}\int_{{\mathbb{R}}} e^{-iux}\varphi(u)du. \end{align} A necessary condition for $\tilde{f}(x)$ to be a probability density function is that $\tilde{f}(x) \to 0$ as $\mid x \mid \to \infty $, and therefore there is guaranteed to be an interval $[a,b]$ such that for all $x \in ( -\infty, a] \cup [b, \infty)$ it can be asserted that $\tilde{f}(x) < \epsilon$ for any arbitrary small positive $\epsilon$.\\ Let's consider $f(x)$ as the restriction of $\tilde{f}(x)$ on $[a,b]$. We shall discuss the appropriate choice of $[a, b]$ in section \ref{subsection:Truncation Range}. From (\ref{LegendreSeries}) and (\ref{fComplexFourier}), $f(x)$ can be expressed either in a complex Fourier series or a Legendre polynomials series. As the aim of this paper is to apply Legendre polynomials for a pricing formula, we show how one can precisely approximate $f(x)$ with Legendre series and formulate the coefficients in the expansion knowing the characteristic function. To achieve this, we use (\ref{fComplexFourier}) in (\ref{Ancoefficients}) and assume we can change the order of integration to write \begin{equation}\label{AnChangeIntegrationOrder} A_n = \frac{2n+1}{b-a} \sum_{k = -\infty}^{+\infty} B_k \int_a^b P_n \left( \frac{2x-(a+b)}{b-a} \right) e^{i2\pi(\frac{xk}{b-a})} dx \end{equation} Through change of variables, $x = \frac{b-a}{2}t + \frac{a+b}{2}$, and a closed-form expression for \begin{equation}\label{FourierTransform} \int_{-1}^1 P_n(x)e^{ i \lambda x}dx = i^n \sqrt{\frac{2 \pi}{\lambda}} J_{n+\frac{1}{2}}(\lambda), \, \lambda \in {\mathbb{C}} \end{equation} with $J_{\nu}(z)$ Bessel function of first kind (see \cite{OlverLozierBoiCl10} p.217 and p.456), it comes \begin{equation}\label{LegendreFourierCoeff} \int_a^b P_n\left( \frac{2x-(a+b)}{b-a}\right) e^{i2\pi(\frac{xk}{b-a})} dx = \left\lbrace \begin{array}{l} i^n \frac{(b-a)}{2} e^{\frac{i \pi k(a+b)}{b-a}}\sqrt{\frac{2}{k}} J_{n+\frac{1}{2}}(\pi k), \, \, k \neq 0,\\ (b-a)\delta_{n=0}, \, \, k=0. \end{array} \right. 
\end{equation} and so \begin{equation}\label{SeriesforAn} A_n = \frac{2n+1}{\sqrt{2}} \left[ \sum_{k \neq 0} B_k i^n e^{\frac{i \pi k(a+b)}{b-a}}\frac{J_{n+\frac{1}{2}}(\pi k)}{\sqrt{k}} + B_0 \sqrt{2} \delta_{n=0}\right] \end{equation} Knowing the characteristic function, we write \begin{equation} \label{RelationBkBkTildeRk} B_k = \widetilde{B}_k - R_k \end{equation} with \begin{equation} \widetilde{B}_k := \frac{1}{b-a} \varphi \left( \frac{-2k \pi}{b-a} \right) \end{equation} and \begin{equation}\label{Rkdefinition} R_k := \frac{1}{b-a} \int_{{\mathbb{R}} -[a,b]} \widetilde{f}(x) e^{-i2\pi(\frac{xk}{b-a})} dx. \end{equation} $R_k$ is expected to be small and can be bound as \begin{equation} \mid R_k \mid \leq \frac{1}{b-a} \left[ \int_{-\infty}^{a} \widetilde{f}(x) dx + \int_{b}^{ +\infty } \widetilde{f}(x) dx \right] = \frac{1}{b-a} \left[ \widetilde{F}(a) + 1 - \widetilde{F}(b) \right]. \end{equation} where $\widetilde{F}(x)$ is the cumulative distribution function of $\widetilde{f}(x)$.\\ Finally, using (\ref{RelationBkBkTildeRk}), $A_n$ can be written as \begin{equation}\label{SeriesforAnFinal} A_n = \frac{2n+1}{\sqrt{2}} \left[ \sum_{k \neq 0} \widetilde{B}_k i^n e^{\frac{i \pi k(a+b)}{b-a}}\frac{J_{n+\frac{1}{2}}(\pi k)}{\sqrt{k}} + \widetilde{B}_0 \sqrt{2} \delta_{n=0}\right] + R_{A_n} \end{equation} with \begin{equation}\label{ExpRAn} R_{A_n} = -\frac{2n+1}{\sqrt{2}} \left[ \sum_{k \neq 0} R_k i^n e^{\frac{i \pi k(a+b)}{b-a}}\frac{J_{n+\frac{1}{2}}(\pi k)}{\sqrt{k}} + R_0 \sqrt{2} \delta_{n=0}\right]. \end{equation} Before summarising the result of the development above in the next theorem, we introduce a couple of definitions and notation taken from \cite{Tolstov62}: a function $f(x)$ is said to be {\it{piecewise smooth}} on the interval $[a,b]$ if either $f(x)$ and its derivative are both continuous on $[a,b]$, or they have only a finite number of {\it{jump discontinuities}} on $[a,b]$. If $x_0$ is a point of discontinuity of a function $f(x)$ and if the right-hand and left-hand limits exist, $x_0$ is said to be a point of {\it{jump discontinuity}}. We set \begin{equation} f^n_k(x) = B_k P_n \left( \frac{2x-(a+b)}{b-a} \right) e^{i2\pi(\frac{xk}{b-a})}, \,\, x \in [a,b], \, k \in {\mathbb Z} , \,\, n \in N. \end{equation} and consider, for a given $n \in {\mathbb{N}}$, the series of functions \begin{equation} \label{seriesfnk} \sum_{k= -\infty}^{+\infty} f^n_k(x). \end{equation} \begin{theorem}\label{PropDensityProxy} Let's denote by $f(x)$, the restriction of the probability density function $\tilde{f}(x)$ on $[a,b]$ large enough such that $f(a) = f(b)$ and $\varphi(x)$ the characteristic function associated to $\tilde{f}(x)$. Assume that $f(x)$ is a continuous piecewise smooth function and that the series (\ref{seriesfnk}) is uniformly convergent on $\in [a,b]$ for all $n \in {\mathbb{N}}$. Then we have the following Legendre series representation \begin{equation}\label{ThLegendreSeries} f(x) = \sum_{n=0}^{\infty} A_nP_n \left( \frac{2x-(a+b)}{b-a} \right) \end{equation} with $A_n$ given in (\ref{SeriesforAnFinal}). \end{theorem} \begin{proof} $f(x)$, being continuous and piecewise smooth on $[a,b]$, can be written as in (\ref{LegendreSeries}) and (\ref{fComplexFourier}) (see e.g \cite{Tolstov62} and \cite{Lebedev72}). The uniform convergence of the series (\ref{seriesfnk}) allows to interchange the order of integration in the expression of $A_n$ in (\ref{AnChangeIntegrationOrder}). 
\end{proof} \remark{ \begin{itemize} \item The representation (\ref{ThLegendreSeries}) allows to retrieve the density function accurately when the characteristic function $\varphi(x)$ is known by truncating the infinite sums in $n$ of (\ref{ThLegendreSeries}) and in $k$ of (\ref{SeriesforAnFinal}) and neglecting the terms $R_{A_n}$ (see section \ref{sec:Numerical experiments} for illustrations). \item In quantitative finance, the probability density function $\tilde{f}(x)$ of asset prices tends to be smooth in general. When analytical formulas are available as for Black Scholes model in equation (\ref{GaussianDerivatives}) and in Merton jump diffusion model in equation (\ref{Mertondensity}), we observe that their density functions are infinitely differentiable. The Malliavin calculus or the stochastic calculus of variations can be applied to the study of existence and smoothness of density for the solution of a stochastic differential equation (SDE) (see e.g \cite{Nualart06} or \cite{Bally03}). \end{itemize} } \section{A new computational method for option pricing} \label{sec:optionpricing} \subsection{Option pricing} \label{subsec:optionpricing} Here, we show how to evaluate European style options using the asymptotic expansion of the density function obtained previously. We denote the log-asset prices by \begin{equation} x:=ln \left( \frac{S_0}{K} \right) \,\, and \, \, y:=ln \left( \frac{S_T}{K} \right), \end{equation} with $S_t$ the underlying price at time $t$ and $K$ the strike price. The payoff for European options, in log-asset price, reads \begin{equation}\label{callput} V(y,T)=[\alpha.K(e^y-1)]^+ \,\, with \,\, \alpha = \left\lbrace \begin{array}{lr} 1 & \text{for a call}, \\ -1 & \text{for a put}, \\ \end{array} \right. \end{equation} and \begin{equation}\label{digital} V(y,T)= 1_{\alpha y \geq 0} \,\, with \,\, \alpha = \left\lbrace \begin{array}{lr} 1 & \text{for a digital call}, \\ -1 & \text{for a digital put}, \\ \end{array} \right. \end{equation} \vspace{0.3cm} In the following, we focus on the pricing formula for European call option and European digital call option. The European put option and European digital put option prices can be deduced by parity. Indeed, call/put and digital options are very popular in the financial markets for hedging and speculation. They are also important to financial engineers as building blocks for constructing more complex option products. For example, it is well-known that the price of European-type option with twice differentiable payoff can be replicated model free by a static portfolio consisting of pure discount bond, at the money European Call and put options and a continuum out of the money European Call and put options (see e.g \cite{Nachman98} and \cite{CarrMadan02}). Moreover, pricing and hedging of digital options are challenging because of payoff discontinuity (see discussions in remark \ref{regularitypayoff} and \cite{AvellanedaLaurence99}). So it is instrumental to be able to price these options accurately in a robust way. With (\ref{riskneutralvaluation}), the European call price is given by \begin{equation} V(x,0) = e^{-rT} K{\mathbb{E}} [(e^y-1)^+] = e^{-rT} K\int_{- \infty}^{+\infty}(e^y-1)^+ \tilde{f}(y | x)dy \end{equation} Since the density rapidly decays to zero as $y \to \pm \infty$, we truncate the infinite integration range without loosing significant accuracy to $[a, b] \subset {\mathbb{R}}$ and obtain the approximation \begin{equation}\label{V1Approximation} V_1(x,0) = e^{-rT} K\int_{a}^{b}(e^y-1)^+ f(y | x)dy. 
\end{equation} In the second step, we replace $f(y | x)$ by its Legendre series representation (\ref{ThLegendreSeries}) to obtain the following proposition \begin{proposition}\label{PropositionPricingFormulas} Under the hypotheses of theorem (\ref{PropDensityProxy}), we obtain an approximation of (\ref{riskneutralvaluation}) given by the following Legendre polynomial pricing formula \begin{equation} \label{V4Approximation} V_4(x,0) = e^{-rT} \sum_{n=0}^N A^M_nV_n \end{equation} where $A^M_n$ and $V_n$ are defined respectively by \begin{equation}\label{CoefficientsA_NM} A^M_n = \frac{2n+1}{\sqrt{2}} \left[ \sum_{k = -M, \neq 0}^{M} \widetilde{B}_k i^n e^{\frac{i \pi k(a+b)}{b-a}}\frac{J_{n+\frac{1}{2}}(\pi k)}{\sqrt{k}} + \widetilde{B}_0 \sqrt{2} \delta_{n=0}\right] \end{equation} \begin{equation}\label{VnExpression} V_n = \left\lbrace \begin{array}{lr} K \beta \left[ e^{\frac{a+b}{2}} \int_{\alpha}^1 e^{\beta t}P_n(t)dt - \frac{P_{n-1}(\alpha) - P_{n+1}(\alpha) }{2n+1} \right] & \text{for European call} \\ & \\ \frac{P_{n-1}(\alpha) - P_{n+1}(\alpha) }{2n+1} & \text{for European digital call} \end{array} \right. \end{equation} with $\alpha = \frac{a+b}{a-b}$ and $\beta = \frac{(b-a)}{2}$. \end{proposition} \begin{proof} For the European call price, we use the representation (\ref{ThLegendreSeries}) of $f(y | x)$, perform two truncations in the infinite sums: One for $n$ in (\ref{ThLegendreSeries}) to $N$ and an another one for $k$ in (\ref{SeriesforAnFinal}) to $[-M, M]$ and neglect the remaining term $R_{A_n}$ in (\ref{ExpRAn}). Then we get an estimation of the price given by \begin{equation} V_4(x,0) = e^{-rT} K \sum_{n=0}^N A^M_n \int_a^b (e^y-1)^+P_n \left( \frac{2y-(a+b)}{b-a} \right) dy. \end{equation} Without loss of generality, we suppose $a << 0$, $b >>0$ and with a change of variable $t = \frac{2y-(a+b)}{b-a}$, the last expression becomes \begin{equation} V_4(x,0) = e^{-rT} K \sum_{n=0}^N A^M_n \beta \left[ e^{\frac{a+b}{2}} \int_{\alpha}^1 e^{\beta t}P_n(t)dt - \int_{\alpha}^1P_n(t)dt \right]. \end{equation} By using the Legendre polynomial property \begin{equation}\label{LegendreDerivativeProperty} (2n+1)P_n(t) = P^{'}_{n+1}(t)-P^{'}_{n-1}(t) \end{equation} and $P_n(1) = 1, \, \forall \, n \geq 0$, we get \begin{equation} \int_{\alpha}^1P_n(t)dt = \frac{P_{n-1}(\alpha) - P_{n+1}(\alpha) }{2n+1} \end{equation} The European digital call price is computed similarly. \end{proof} \begin{remark} \begin{itemize} \item The computation of $\int_{\alpha}^1 e^{\beta t}P_n(t)dt$ in $V_n$ for the European call price needs attention. We provide an analytical formula in proposition (\ref{PropIntLegendreExp}). Its computation is straightforward for small values of $n$. For $n >>1$ several accuracy and stability issues arise because of rapid accumulation of round-off errors \cite{SeleRaHerPeFer13, KlemmSigLar90}. \item The valuation for other contracts like asset-or-nothing options, gap options or standard power options \cite{Haug07} can be computed similarly. \end{itemize} \end{remark} \subsection{Alternate computational procedure} \label{subsection:AlternateComputational} The computation of the Legendre pricing formula (\ref{V4Approximation}) is straightforward for small value of $N$ by using the expression for $V_n$ in proposition (\ref{PropIntLegendreExp}). To obtain accurate pricing, we need to consider $N, M >>1$. $M$ large does not have any implementation difficulty. 
However, for $N>>1$, the computation of $V_n$ using (\ref{IntLegendreExp1}) introduces instability and inaccuracy issues because of cancellations \cite{KlemmSigLar90}. The objective in this section is to provide an alternative computational procedure for $V_n$ with machine accuracy.\\ Let's write \begin{equation}\label{Un} U_n = \int_{\alpha}^1 e^{\beta t}p_n(t)dt \end{equation} Using integration by parts, we get \begin{equation}\label{UnIIP} U_n = W_n - \frac{1}{\beta} \int_{\alpha}^1 e^{\beta t}p^{'}_n(t)dt \end{equation} with $W_n = \frac{1}{\beta} ( e^{\beta} - e^{\beta \alpha} P_n(\alpha))$.\\ With (\ref{LegendreDerivativeProperty}), it is easy to show \begin{equation} \label{LegendreDerivativeAsLegendrePoly} p^{'}_{n}(t) = \left\lbrace \begin{array}{lr} \frac{2}{||p_{n-1}||^2} p_{n-1}(t) + \sum_{i=0, \, 2i \leq (n-3)} \frac{2}{||p_{2i}||^2} p_{2i}(t) & \text{for odd} \, n \geq 3, \\ \frac{2}{||p_{n-1}||^2} p_{n-1}(t) + \sum_{i=0, \, 2i+1 \leq (n-3)} \frac{2}{||p_{2i+1}||^2} p_{2i+1}(t) & \text{for even} \, n \geq 2 \\ \end{array} \right. \end{equation} where $||p_m||^2 = \frac{2}{2m+1}, \, m \geq 0$.\\ Using (\ref{UnIIP}) with (\ref{LegendreDerivativeAsLegendrePoly}), we then get \begin{equation}\label{Unrecurrence} U_n = W_n-\frac{2(n-1)+1}{\beta}U_{n-1} + U_{n-2}- W_{n-2} \end{equation} given $U_0$ and $U_1$. The next proposition summarizes the second-order linear difference equation for the computation of $U_n$: \begin{proposition}\label{SndOrderDiffEq} By posing $Y_n = U_n-W_n$, $Y_n$ satisfies the following second-order linear difference equation \begin{equation}\label{ynrecurrence} Y_{n-1}-\frac{1}{\beta}(2n+1)Y_n - Y_{n+1} = \frac{1}{\beta}(2n+1)W_n. \end{equation} given $Y_0$ and $Y_1$. \end{proposition} The computation of $U_n$ or $Y_n$ using these forward recurrences is straightforward but it generates instabilities and inaccuracies for large $n$. It is a well known issue as discussed in \cite{Olver67, Cash77, Gautschi67}. \\ An excellent technique which evaluates $U_n$ in a stable way to machine accuracy is Olver's method \cite{Olver67}. The approach consists to treat the difference equation as a boundary-value problem rather than using initial-value technique. This rewrites the recurrence relation as a triple of recurrence relations, two of which are evaluated forwards to an index greater than the desired $N$, the number of additional steps required for a given accuracy being determined as part of the procedure. The third relation is then evaluated by backward recurrence (see \cite{Olver67} for details). \section{Error analysis} \label{ErrorAnalysis} First, let's write the successive approximations introduced in the derivation of the pricing formula (\ref{V4Approximation}). \begin{equation} V(x,0) = \int_{-\infty}^{\infty}V(y,T) \tilde{f}(y | x)dy = V_1(x,0) + \epsilon_1 \end{equation} with \begin{equation}\label{ErrorAnv1} V_1(x,0) = \int_{a}^{b}V(y,T) \tilde{f}(y | x)dy \end{equation} and \begin{equation} \label{eps1} \epsilon_1 = \int_{{\mathbb{R}}-[a,b] } V(y,T) f(y | x)dy \end{equation} $\epsilon_1$ corresponds to an integral truncation error. 
\\ Using (\ref{ThLegendreSeries}), it comes \begin{equation}\label{ErrorAnv1v2} V_1(x,0) = \sum_{k=0}^{\infty} A_kV_k = V_2(x,0) + \epsilon_2 \end{equation} where $A_k$ is defined in (\ref{Ancoefficients}), $V_k$ in (\ref{VnExpression}) with \begin{equation}\label{ErrorAnv2} V_2(x,0) = \sum_{k=0}^{N-1} A_kV_k \end{equation} and \begin{equation}\label{eps2} \epsilon_2 = \sum_{k=N}^{+\infty} A_kV_k \end{equation} $\epsilon_2$ corresponds to the series truncation error.\\ By posing $C_m^k = \int_a^b P_k\left( \frac{2y-(a+b)}{b-a}\right) e^{i2\pi(\frac{ym}{b-a})} dy$, $A_k$ is written as \begin{equation}\label{ErrorAnAk} A_k = \frac{2k+1}{b-a} \left[ \sum_{m=-\infty}^{+ \infty} B_m C_m^k \right] \end{equation} Using expression (\ref{ErrorAnv2}), we get \begin{equation} V_2(x,0) = V_3(x,0) + \epsilon_3 \end{equation} with \begin{equation} V_3(x,0) = \frac{1}{b-a} \sum_{k=0}^{N-1} \sum_{m=-M}^M V_k (2k+1) B_m C_m^k \end{equation} and \begin{equation} \label{epsilon3} \epsilon_3 = \sum_{k=0}^{N-1} \frac{V_k (2k+1)}{b-a} \sum_{m \, \in {\mathbb Z} -[-M,M]} B_m C_m^k \end{equation} $\epsilon_3$ represents another series truncation error.\\ Finally using (\ref{RelationBkBkTildeRk}), we have \begin{equation} V_3(x,0) = V_4(x,0) + \epsilon_4 \end{equation} with \begin{equation} V_4(x,t) = \frac{1}{b-a} \sum_{k=0}^{N-1} \sum_{m=-M}^M V_k (2k+1) \widetilde{B}_m C_m^k \end{equation} and \begin{equation} \epsilon_4 = -\frac{1}{b-a} \sum_{k=0}^{N-1} \sum_{m=-M}^M V_k (2k+1) R_m C_m^k \end{equation} $\epsilon_4$ represents another integral truncation error.\\ To summarize we obtain \begin{equation} \label{vv4eps1234} V(x,0) = V_4(x,0) + \epsilon_1 + \epsilon_2 + \epsilon_3 + \epsilon_4. \end{equation} $V_4(x,0)$ can be complex. By taking the real part in (\ref{vv4eps1234}), it comes \begin{equation} V(x,0) = Re(V_4(x,0)) + \epsilon_1 + \epsilon_2 + Re(\epsilon_3) + Re(\epsilon_4) \end{equation} because $V(x,0)$, $\epsilon_1$ and $\epsilon_2$ are real by definition.\\ Secondly, the key to bound the errors lies in the decay rate of the generalized Fourier series coefficients. The convergence rate depends on the smoothness of the functions on the expansion interval.\\ We summarize in the next theorem taken from \cite{WangXiang12}, the decay rates of the coefficients in the Legendre series expansion and the error bounds of the truncated Legendre series in the uniform norm. let $\| . \|_T$ be the Chebyshev-weighted seminorm defined by $$\| u \|_T = \int_{-1}^1 \frac{\mid u'(x)\mid}{\sqrt{1-x^2}} dx $$, $E_{\rho}$ denotes the Bernstein ellipse in the complex plane $$E_{\rho} = \{ z \in {\mathbb{C}} \vert z = \frac{1}{2} (u + u^{-1}), u = \rho e^{i \theta}, -\pi \leq \theta \leq \pi \} $$ and $$f_n(x) = \sum_{j=0}^n a_j P_j(x)$$ the truncated Legendre series expansions of $f(x)$. \begin{theorem}\label{WangXiangTheorem} If $f$, $f'$,...,$f^{k}$ are absolutely continuous on $[-1,1]$ and $\| f^{(k)} \|_T = F_k < \infty$ for some $k \geq 1$ ($H_{abs}(k)$), then for each $n > k+1$, \begin{equation} \label{BoundAnAbsoluteCont} \mid a_n \mid \leq \frac{F_k}{(n-\frac{1}{2}) (n-\frac{3}{2})...(n-\frac{2k-1}{2})} \sqrt{\frac{\pi}{2(n-k-1)}}. 
\end{equation} If $f$ is analytic inside and on the Bernstein ellipse $E_{\rho}$ with foci $\pm 1$ and major semiaxis and minor semiaxis summing to $\rho >1$ ($H_a(E_{\rho})$), then for each $n \geq 0 $, \begin{equation}\label{BoundAnAnalyticBern} \mid a_n \mid \leq \frac{(2n+1) \ell(E_{\rho})M}{\pi \rho^{n+1}(1-\rho^{-2})} \end{equation} where $M = \max_{z \in E_{\rho}} \mid f(z) \mid$ and $\ell(E_{\rho})$ denotes the length of the circumference of $E_{\rho}$. \end{theorem} \subsection{ Bound for $\epsilon_2$ } $A_k$ and $V_k$ correspond respectively to the Legendre series coefficients of the $f(x)$ and of the payoff functions. The density function is generally smoother than the payoff function in finance and we expect the coefficient $A_k$ to decay faster than $V_k$. The following proposition makes it precise. \begin{proposition}\label{PropositionEpsilon2} Let's assume $\int_a^b V^2(y,T)dy < +\infty$ and define \begin{equation} g(y) \equiv f \left( \frac{b-a}{2}y + \frac{a+b}{2} \right). \end{equation} There are two cases: \begin{enumerate} \item Under $H_{abs}(k)$ with $k > 1$ for $g$, we have \begin{equation} \mid \epsilon_2 \mid \leq \frac{G_k}{(k-1)(N-\frac{1}{2})(N-\frac{3}{2})...(N-\frac{2k-3}{2})} \sqrt{\frac{ \pi }{2(N-k)}} \end{equation} where $\| g^{(k)} \|_T = G_k < \infty$. \item $g$ analytic on $[-1,1]$. Then we get \begin{equation} \mid \epsilon_2 \mid \leq \frac{(2N \rho +3\rho - 2N-1)\ell(E_{\rho})M } {\pi \rho^{N+1}(\rho-1)^2(1-\rho^{-2}) } \end{equation} where $\tilde{g}$ is the analytic continuation of $g$ on and within $E_{\rho}$ with $\rho > 1$, $M \equiv \max_{z \in E_{\rho}} \mid \tilde{g}(z) \mid$ and $\ell(E_{\rho})$ denotes the length of the circumference of $E_{\rho}$. \end{enumerate} \end{proposition} \begin{proof} We have $V_k \to 0$ as $k \to \infty$. Indeed \begin{align} \mid V_k \mid &=\mid \int_{a}^{b}V(y,T) P_k \left( \frac{2y-(a+b)}{b-a} \right)dy \mid \\ & \leq \| v \|_{L^2[a,b]} \sqrt{\frac{b-a}{2k+1}}. \end{align} where we have used Cauchy-Schwartz inequality and $\| P_k \| = \sqrt{ \frac{2}{2k+1} }$.\\ So for $N >> \infty$, we write \begin{equation} \mid \epsilon_2 \mid \leq \sum_{k=N}^{\infty} \mid A_k.V_k\mid \leq \sum_{k=N}^{\infty} \mid A_k \mid \equiv E(N) \end{equation} Error $\epsilon_2$ is thus dominated by the Legendre series truncation of $f(x)$. \\ To bring the analysis in the interval $[-1,1]$, we perform a change of variable $y = \frac{2x-(a+b)}{b-a}$ and define $g(y) = f(\frac{b-a}{2}y + \frac{a+b}{2})$.\\ We consider 2 cases: \begin{itemize} \item Under $H_{abs}(k)$ with $k \geq 1$ for $g$, using (\ref{BoundAnAbsoluteCont}) and following the arguments in \cite{WangXiang12}, it comes \begin{equation} E(N) \leq \frac{G_k}{(k-1)(N-\frac{1}{2})(N-\frac{3}{2})...(N-\frac{2k-3}{2})} \sqrt{\frac{ \pi }{2(N-k)}} \end{equation} \item $g$ analytic on $[-1,1]$. \\ By the the theory of analytic continuation, there always exists a Bernstein ellipse $E_{\rho}$ with $\rho > 1$ such that $\tilde{g}$, the continuation of $g$, is analytic on and within $E_{\rho}$. 
Using (\ref{BoundAnAnalyticBern}) and following the arguments in \cite{WangXiang12}, we derive \begin{equation} E(N) \leq \frac{(2N \rho +3\rho - 2N-1)\ell(E_{\rho})M } {\pi \rho^{N+1}(\rho-1)^2(1-\rho^{-2}) } \end{equation} \end{itemize} \end{proof} \subsection{ Bound for $\epsilon_3$ } \begin{proposition}\label{proposition:epsilon3} If \begin{enumerate} \item \begin{equation} \label{Erroranalysisboundary} f(b) = f(a), f^{(1)}(b) = f^{(1)}(a), ..., f^{(l-1)}(b) = f^{(l-1)}(a) \end{equation} \item $f^{(l)}(x)$ is integrable \end{enumerate} Then \begin{equation} \epsilon_3 = \mathcal{O}(\frac{C_{N-1}}{M^l}) \hspace{0.5cm} for \mid M \mid >> 1 \end{equation} with $C_{N-1} \equiv \sum_{k=0}^{N-1} \frac{\mid V_k \mid (2k+1)}{b-a}$.\\ In particular if the function $f$ is differentiable to all orders and $(1)$ is satisfied for any $l$, then $\epsilon_3$ decreases faster than $\frac{1}{\mid M \mid^l}$ for any finite power of $l$. This is the exponential convergence property. \end{proposition} \begin{proof} Let's fix $k \in [0,N-1]$. For $\mid M \mid >>1 $, \begin{equation} \mid \sum_{m \, \in {\mathbb Z} -[-M,M]} B_m C_m^k \mid \leq \sum_{m \, \in {\mathbb Z} -[-M,M]} \mid B_m \mid \mid C_m^k \mid \end{equation} By applying theorem 4 p.42 in \cite{Boyd00}, we get \begin{equation}\label{Bmlastbound} \mid B_m \mid \leq \frac{C_1}{\mid m \mid^l} \end{equation} for a constant $C_1$ independent of $m$.\\ Using (\ref{LegendreFourierCoeff}), we have \begin{equation}\label{Cmkfirstbound} \mid C_m^k \mid \leq \frac{C_2}{ \sqrt{ \mid m \mid} } \mid J_{k+\frac{1}{2}}(\pi m) \mid \end{equation} for a constant $C_2$ independent of $m, k$.\\ Applying the property $ n \in {\mathbb Z}, \, J_{\nu}(ze^{n \pi i}) = e^{n \nu \pi i} J_{\nu}(z)$ to $n=1$, we get $\mid J_{\nu}(-z) \mid = \mid J_{\nu}(z)\mid$. Using the asymptotic result for $x \in {\mathbb{R}}, \, x \to +\infty$ (theorem 2.13 in \cite{MartinK}) \begin{equation} J_{\nu}(x) \backsim \sqrt{\frac{2}{\pi x}} cos(x - \frac{\pi}{4} - \frac{\nu \pi}{2}), \end{equation} The expression in (\ref{Cmkfirstbound}) becomes, for $m >> 1$ \begin{equation}\label{Cmklastbound} \mid C_m^k \mid \leq \frac{C_3}{ \mid m \mid }. \end{equation} with a constant $C_3$ independent of $m, k$.\\ So with (\ref{Bmlastbound}) and (\ref{Cmklastbound}), it comes \begin{equation} \mid B_m \mid C_m^k \mid \leq \frac{C_4}{ \mid m \mid^{l+1} } \end{equation} for a constant $C_4$ independent of $m, k$.\\ The series truncation error below (see \cite{BenderOrzag78} for proof) behaves like, for $M >> 1$, \begin{equation}\label{SeriesTruncationAlg} \sum_{m=M+1}^{\infty} \frac{1}{m^n} \backsim \frac{1}{(n-1)M^{n-1}}. \end{equation} Applying (\ref{SeriesTruncationAlg}) with $n=l+1$, we finally obtain, for $M >>1$, \begin{equation} \sum_{m \, \in {\mathbb Z} -[-M,M]} \mid B_m \mid \mid C_m^k \mid = \mathcal{O} \big(\frac{1}{M^l} \big). \end{equation} and can deduce directly $ \epsilon_3 = \mathcal{O}(\frac{C_{N-1}}{M^l})$. \end{proof} \remark{In practice, we believe the condition (\ref{Erroranalysisboundary}) in proposition \ref{proposition:epsilon3} is {\it{nearly}} satisfied if the boundary points $a$ and $b$ are chosen appropriately. Indeed, $f(x)$, being a probability density function, converges to $0$ when $\mid x \mid \to \infty$. 
Let's consider the benchmark and highly tractable Black Scholes model where the gaussian density function, $f_{m,\sigma}(x)$ with mean $m$ and standard deviation $\sigma$, and its derivatives are known analytically and given by \begin{equation}\label{GaussianDerivatives} f^{(n)}_{m, \sigma}(x) = \frac{(-1)^n H_n(\frac{x-m}{\sigma}) f_{m, \sigma}(x)}{\sigma^n} \end{equation} with $H_n$ the {\it{Hermite polynomials}} defined by $H_n(x) = (-1)^n e^{\frac{x^2}{2}} \frac{d^n}{dx^n} e^{-\frac{x^2}{2}}$.\\ Figure \ref{figure:GaussianDerivatives} shows the graph of $f^{(n)}_{m, \sigma}$ for various values $n$. With $a = -1.7813$ and $b = 1.7188$, we observe clearly that condition (\ref{Erroranalysisboundary}) is {\it{closely}} satisfied. We will give insight in the choice of [a, b] in Section \ref{subsection:Truncation Range}. \begin{figure}[htbp] \centering \includegraphics[width=1.2 \textwidth, trim={1cm 7.5cm 1cm 6.5cm}, clip]{GaussianDerivatives} \caption{Various derivatives of the Gaussian density function in Black Scholes model (see section \ref{subsection:BSmodel}). The parameters are $S_0 = 1, r = 0, T = 1, \sigma = 0.25 $ and $a = -1.7813$, $b = 1.7188$ with $L = 7$ for the truncation range (\ref{TruncationRange}).} \label{figure:GaussianDerivatives} \end{figure} \subsection{ Bound for $\epsilon_1$ and $\epsilon_4$ } $\epsilon_1$ is simply bounded as $|\epsilon_1| \leq \int_{{\mathbb{R}}-[a,b] } |V(y,T)| \tilde{f}(y | x)dy$ and is small as soon as $\tilde{f}(y)$ decays to 0 faster than $V(y,T)$ in the tail.\\ $\epsilon_4$ is essentially bounded by the integral truncation of the density function as stated in the following proposition. \begin{proposition}\label{PropositionEpsilon4} \begin{equation} \mid \epsilon_4 \mid \leq C_{N,M}. \epsilon \end{equation} where $C_{N,M} \equiv \frac{ \sum_{k=0}^{N-1} \sum_{m=-M}^M \mid V_k (2k+1) C_m^k \mid }{b-a}$ and $\epsilon \equiv \frac{1}{b-a} \left[ \tilde{F}(a) + 1 - \tilde{F}(b) \right]$ with $\tilde{F}(x)$ the cumulative distribution function of $\tilde{f}(x)$. \end{proposition} \begin{proof} $R_k$, being defined in (\ref{Rkdefinition}), can be bounded as \begin{equation} \mid R_k \mid \leq \frac{1}{b-a} \left[ \int_{-\infty}^{a} \tilde{f}(x) dx + \int_{b}^{ +\infty } \tilde{f}(x) dx \right] = \frac{1}{b-a} \left[ \tilde{F}(a) + 1 - \tilde{F}(b) \right] = \epsilon \end{equation} It comes \begin{equation} \mid \epsilon_4 \mid \leq \frac{ \sum_{k=0}^{N-1} \sum_{m=-M}^M \mid V_k (2k+1) C_m^k \mid }{b-a} \epsilon = C_{N,M} \epsilon. \end{equation} \end{proof} \remark{ \label{regularitypayoff} \begin{itemize} \item Our error analysis relies on the smoothness of the density function and not on the regularity of the payoff function. We just require the payoff function some integrability conditions on bounded interval and that the density function decays faster to $0$ at infinity. This is particularly relevant in quantitative finance. Indeed, the density functions of asset prices tend to be smoother. And the payoffs of some contracts are discontinuous as for the digital option (\ref{digital}) or have a kink at the strike level as for call and put options (\ref{callput}). \item Some well-established option pricing methods depend on the regularity of the payoff function. For example, in the Carr-Madan approach \cite{CarrMadan99} and its variants, the Fourier transform of a version of valuation formula (\ref{riskneutralvaluation}) is taken with respect to the log-strike price. 
Damping of the payoff is then necessary as, for example, a call option is not $L^1$-integrable with respect to the logarithm of the strike price. The method’s accuracy depends on the correct value of the damping parameter. Or in the widely used Monte carlo method with discretisation of SDEs (e.g Euler or Milstein schemes), the smoothness of the payoff function impact directly the order of convergence of the approximation schemes (see \cite{Glasserman03} section 6 or \cite{KloedenPlaten92} section 14.5). \end{itemize} } \section{Numerical experiments} \label{sec:Numerical experiments} In this section, we perform a variety of numerical tests to illustrate the robustness and accuracy of the new computational pricing method using Legendre polynomial. The payoff functions in finance can be continuous or discontinuous. Here the European call options and the European digital call options are considered. It allows to show that the convergence of the pricing method using Legendre polynomial does not depend on the continuity of the payoff (see discussion in section \ref{subsec:optionpricing} and remark \ref{regularitypayoff}). We cover a representative class of models widely studied and used in quantitative finance: \begin{itemize} \item Black Scholes Model; \item Merton Jump Diffusion Models and Kou Jump Diffusion Models; \item Heston Stochastic Volatility Model. \end{itemize} They represent different schools of thoughts for the modelling of asset prices as random processes. In their seminal work in \cite{BlackScholes73}, Black and Scholes modelize the asset prices as a {\it{geometric Brownian motion}} i.e asset prices with continuous paths and a constant volatility. It leads to the famous Black-Scholes formula which gives a theoretical estimate of the price of European-style options. It is perhaps the world's most well-known options pricing model and usually used as a benchmark model by the quantitative finance community. However, one of the main shortcoming of Black and Scholes model is to assume the underlying volatility is constant over the life of the derivative, and unaffected by the changes in the price level of the underlying security. It cannot explain long-observed features of the implied volatility surface such as volatility smile and skew, which indicate that implied volatility does tend to vary with respect to strike price and expiry. By assuming that the volatility of the underlying price is a stochastic process rather than a constant, it becomes possible to model derivatives more accurately (see e.g \cite{Gatheral06} and \cite{Wilmott06}). And Heston model is one of the most popular stochastic volatility models for derivatives pricing. An another school of modelling asset prices consists to introduce jumps as a way to explain why the skew is so steep for very short expirations and why the very short-dated term structure of skew is inconsistent with any stochastic volatility model. Or the strongest argument for using discontinuous models is simply the presence of jumps in observed prices (see figure 1 in \cite{TankovVoltchkova09} and \cite{ContTankov04} or \cite{Gatheral06}). Merton jump diffusion model and Kou jump diffusion model are among the most popular jumps models used in quantitative finance. In the equity and exchange rates (FX) derivatives market, liquid options like European call or put contracts are quoted for different maturities or tenors with various strikes representing the moneyness. 
FX markets are particularly liquid at benchmark tenors, such as 1 month (M), 2M, 3M, 6M, 1 year (Y), 2Y and possibly longer dated options \cite{Clark11}. For liquid equity index like Eurostoxx 50 or Nikkei 225, we can observed quotes for maturities from 1 month up to 4 and 5 years \cite{EDS08}. With this in mind, for the tests to be comprehensive, we consider short, standard and long maturities (0.1, 1, 3, 10 years) and in/at/out of the money options. \subsection{Truncation Range} \label{subsection:Truncation Range} For practical usage, it is important to determine appropriately and as systematically as possible the range $[a,b]$ to minimise the integral truncation errors $\epsilon_1$ and $\epsilon_4$. Being given the characteristic function of $X = \log( \frac{S_T}{K} )$, we can compute its cumulants $c_n$, defined in (\ref{CumulantsCharacteristics}), and uses the following formula proposed in \cite{FangOosterlee08}: \begin{equation} \label{TruncationRange} [a,b] := \left[ c_1 - L \sqrt{ c_2 + \sqrt{c_4} }, c_1 + L \sqrt{ c_2 + \sqrt{c_4} } \right] \end{equation} The cumulants for each model are given in appendix B. As shown in the error analysis section, the accuracy of the Legendre polynomial pricing method is affected by the choice of the interval $[a, b]$. Some experience is helpful when choosing the correct truncation range. The value for $L$ is taken to be in $[7,12]$ and will be made explicit for each model in the tests. Expression (\ref{TruncationRange}) uses $c_n$ up to degree 4. Similar range formula involving the first two moments of $X$ is implemented in \cite{HurnLindsayClelland13}. In general, using high order cumulants captures better the tail behaviour of the distribution. \subsection{Black Scholes Model}\label{subsection:BSmodel} For this Model, the SDE for the asset price $S_t$, under risk neutral measure, is given by \begin{equation} \label{BSSDE} \frac{dS_t}{S_t} = rdt + \sigma dW_t. \end{equation} with $W_t$ the Brownian motion, $r$ the risk free rate and $\sigma$ the volatility parameter.\\ The characteristic function $\varphi(u)$ of $\log(\frac{S_T}{K})$ is \begin{equation} \label{BSCharacteristicF} \varphi(u) = \exp(\mu u i - \frac{1}{2} u^2 \sigma^2T) \end{equation} with $\mu = -\frac{1}{2}\sigma^2 T-\log(K)$.\\ The set of parameters is $S_0 = 1, r = 0, T = 10, \sigma = 0.25 $. With some experiments, choosing $L$ around $7$ is appropriate for the truncation range (\ref{TruncationRange}) and this value is consistent with thosed used in \cite{HurnLindsayClelland13}.\\ The other details of the model are provided in section \ref{annex:BS}.\\ We examine a long maturity with $T=10$. Figure \ref{BSDensity} compares the true Gaussian density and the recovered density functions using respectively $N=M=12$ and $N=M=32$ for the truncation in formula (\ref{ThLegendreSeries}). With $N=M=12$, the approximating density captures the form of the true density although we observe some slight negative values in the tail. With $N=M=32$, it is indistinguishable from the true density function. \\ For the pricing, we consider a discontinous payoff with the digital option. As shown in Figure \ref{BSCvTests}, the error convergence of the method is exponential in terms of $N$ and $M$ respectively. Indeed, in Black Scholes model, the density function of $\log ( \frac{S_T}{K})$ is gaussian and so is infinitely differentiable with exponential decay to 0 for large $x$. Further, we observe the error convergence rate is basically the same for different strike prices. 
\begin{figure}[htbp] \centering \includegraphics[width=0.6 \textwidth, height=0.4 \textwidth, trim={1cm 6.5cm 1cm 6cm}, clip]{BlackScholesDensityTestfigure} \caption{Comparison of the true Gaussian density (solide line) and its approximation based on $N=M=12$ (solide line with '+' marker) and $N=M=32$ (solide line with 'o' marker) for maturity $T=10$.} \label{BSDensity} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.9 \textwidth, trim={1cm 7.5cm 1cm 8cm}, clip]{BSCvTests} \caption{Error convergence for pricing European digital call option with $T = 10$ in Black Scholes model.} \label{BSCvTests} \end{figure} \subsection{Merton Jump Diffusion Model} In this model \cite{Merton76, AndersenAndreasen99}, the SDE for the asset price $S_t$, under risk neutral measure, is written as \begin{equation} \label{MertonSDE} \frac{dS_t}{S_t-} = (r-\lambda.m(t))dt + \sigma dW_t + (J(t)-1)d\pi(t). \end{equation} where $W(t)$ is a Brownian motion, $\pi(t)$ a Poisson counting process with constant jump intensity $\lambda$ and r the deterministic risk-free interest rate. $\{J(t)\}_{t \geq 0}$, representing the jump size, is a sequence of independent log normal random variables of the form $J(t) = e^{\mu + \gamma N(t)}$ with $N(t)$ a standard gaussian random variable and $m \equiv E[J(t)-1]$. $\pi, \, W$ and $J$ are all assumed to be independent. \\ The characteristic function $\varphi(u)$ of $\log(\frac{S_T}{K})$ is \begin{equation} \label{MertonCharacteristicF} \varphi(u) = \exp \left( iu\tilde{b}T - \frac{u^2 \sigma^2T}{2} + \lambda T(e^{iu \mu - \frac{\gamma^2 u^2}{2}} -1) \right) \end{equation} where $\tilde{b} = b - \frac{\log(K)}{T}$ and $b = -\frac{1}{2} \sigma^2 - \lambda (e^{\mu + \frac{\gamma^2}{2}}-1) $.\\ The set of parameters is calibrated to market data from \cite{AndersenAndreasen99} with $r=0$ and maturity 3 years: $S_0 = 1, T = 3, \, \sigma = 0.1765 $, $\lambda = 0.089$, $\mu = -0.8898$, $\gamma = 0.4505$. Some experience shows that $L = 10$ is appropriate for the truncation range (\ref{TruncationRange}). It corresponds also to the value recommended in \cite{FangOosterlee08}. The other details of the model are provided in section \ref{annex:Merton}.\\ We study a standard maturity with $T=3$. Figure \ref{MertonDensity} compares the true density and the recovered density functions using respectively $N=M=50$ and $N=M=80$. The true density is computed using formula (\ref{Mertondensity}) with a truncation in the infinite sum at $50$. First we observe the {\it{Merton}} density function, showing a sharp peak, is less smooth than the Gaussian density function. With $N=M=50$, the approximating density captures reasonably well the form of the true density although we can observe some slight negative values in low probability area. With $N=M=80$, the difference between the true density and the approximating density is not discernible. \\ We consider the digital option for pricing. As shown in Figure \ref{MertonCvTests}, the error convergence of the method is still exponential in $N$ and $M$ respectively. But the convergence is slower than in the Black Scholes model as expected in view of the sharp peak density function. And the error convergence rate is basically the same for different strike prices. 
\begin{figure}[htbp] \centering \includegraphics[width=0.8 \textwidth, height=0.5 \textwidth, trim={1cm 6.5cm 1cm 6cm}, clip]{MertonDensityTestfigure} \caption{Comparison of the true density function, (solide line) and its approximation based on $N=M=50$ (solide line with '+' marker) and $N=M=84$ (solide line with 'o' marker) for maturity $T=3$ in Merton jump diffusion model.} \label{MertonDensity} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.85 \textwidth, trim={1cm 7.5cm 1cm 6cm}, clip]{MertonCvTests} \caption{Error convergence for pricing European digital call option with $T = 3$ in Merton Jump Diffusion model.} \label{MertonCvTests} \end{figure} \subsection{Kou Jump Diffusion Model} In this model \cite{Kou02}, the dynamic of the asset price, $S(t)$, under risk neutral probability, is \begin{equation} \label{KouSDE} \frac{dS_t}{S_t-} = \mu dt + \sigma dW_t + d \left( \sum_{i=1}^{N(t)}(V_i-1) \right). \end{equation} with $W_t$ a standard Brownian motion, $N(t)$ a Poisson process with rate $\lambda$. $\{ V_i \}$ is a sequence of independent identically distributed (i.i.d.) nonnegative random variables such that $Y = \log(V)$ has an asymmetric double exponential distribution with the density \begin{equation} f_{Y}(y) = p. \eta_1 e^{- \eta_1 y}1_{y \geq 0} + q. \eta_2 e^{ \eta_2 y}1_{y < 0}, \,\, \eta_1>1, \eta_2 > 0, \end{equation} $p,q \geq 0, \, p+q = 1$, representing the probabilities of upward and downward jumps and $\mu = \lambda \left( \frac{p}{1- \eta_1} + \frac{1-p}{\eta_2+1} \right)$.\\ The characteristic function $\varphi(u)$ of $\log(\frac{S_T}{K})$ is given by \begin{equation} \label{KouCharacteristicF} \varphi(u) = \exp \left( iu\tilde{b}T - \frac{u^2 \sigma^2T}{2} + \lambda T iu \left( \frac{p}{\eta_1-iu}-\frac{1-p}{\eta_2+iu} \right) \right) \end{equation} where $\tilde{b} = b - \frac{\log(K)}{T}$ and $b = -\frac{1}{2} \sigma^2 - \lambda (e^{\mu + \frac{\gamma^2}{2}}-1) $. \\ The set of parameters is from \cite{Kou02} with $r=0$ and maturity 1 year:\\ $S_0 = 1, T = 1, \sigma = 0.16 $, $\lambda = 1$, $p = 0.4$ and $\eta_1 = 10, \, \eta_2 = 5$. Some experience shows that $L = 10$ is appropriate for the truncation range (\ref{TruncationRange}). It corresponds also to the value recommended in \cite{FangOosterlee08}. The other details of the model are provided in section \ref{annex:Kou}.\\ Analytical formula for density function being not available, Figure \ref{KouDensity} presents the recovered density functions for $T = 1$ and $3$ respectively. We observe a sharper-peaked density for $T=1$.\\ For the pricing, we study the European call option with a standard maturity $T=1$ y. As shown in Figure \ref{KouCvTests}, the error convergence of the method is exponential in $N$ and $M$ respectively. Further, the error convergence rate is basically the same for different strike prices. 
\begin{figure}[htbp] \centering \includegraphics[width=0.7 \textwidth, height=0.5 \textwidth, trim={1cm 6.6cm 1cm 6cm}, clip]{KouDensityTestfigure} \caption{ Recovered density functions in Kou jump diffusion model for $T = 1$y (solide line with 'o' marker) and $T = 3$y (solide line with '+' marker) with $N=M=80$.} \label{KouDensity} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.85 \textwidth, trim={1cm 7.5cm 1cm 6cm}, clip]{KouCvTests} \caption{Error convergence for pricing European call option with $T = 1$ in Kou Jump Diffusion model.} \label{KouCvTests} \end{figure} \subsection{Heston Stochastic Volatility Model } In this model \cite{Heston93} under risk neutral measure, the SDEs are given by \begin{align}\label{HestonSDE} d\tilde{x}_t & = -\frac{1}{2}u_tdt + \sqrt{u_t}dW_{1t}\\ du_t & = \lambda( \bar{u} - u_t)dt + \eta \sqrt{u_t}dW_{2t} \end{align} where $\tilde{x}_t$ denotes the log-asset price variable and $u_t$ the variance the asset price process. Parameters $\lambda \geq 0, \ \bar{u} \geq 0$ and $\eta \geq 0$ represent the speed of mean reversion, the mean level of variance and the volatility of volatility, respectively. Furthermore, the Brownian motions $W_{1t}$ and $W_{2t}$ are assumed to be correlated with correlation coefficient $\rho$. \\ The characteristic function $\varphi(x)$ of $\log(\frac{S_T}{K})$ can be represented by \begin{equation} \label{HestonCharacteristicF} \varphi(x) = e^{ -ix \log(K) + \frac{u_0}{\eta^2} \left( \frac{1-e^{-DT}}{1-G\mathrm{e}^{-DT}} \right) (\lambda - i \rho \eta x -D) + \frac{\lambda \bar{u}}{\eta^2} \left[ T(\lambda - i \rho \eta x -D) -2 \log \left( \frac{1-Ge^{-DT}}{1-G} \right) \right]} \end{equation} with $D = \sqrt{(\lambda - i \rho \eta x)^2 + (x^2 + ix) \eta^2}$ and $G = \frac{\lambda - i\rho \eta x -D}{\lambda - i\rho \eta x + D}$.\\ This characteristic function is uniquely specified, since we take $\sqrt{x+yi}$ such that its real part is nonnegative, and we restrict the complex logarithm to its principal branch. In this case the resulting characteristic function is the correct one for all complex $\omega$ in the strip of analyticity of the characteristic function \cite{LordKahl10}. \\ The set of parameters is calibrated to market data from \cite{Crisostomo14} with $r=0$ and a short maturity $T = 0.1$:\\ $S_0 = 1, \, T = 0.1, \, \lambda = 0.9626, \, \bar{u} = 0.2957, \, \eta = 0.7544, \rho = -0.2919, \, u_0 = 0.0983$. \\ Since the analytical formula for $c_4$ is involved, instead of (\ref{TruncationRange}), as recommended in \cite{FangOosterlee08}, we use the following truncation range: \begin{equation} \label{TruncationRangeHeston} [a,b] := \left[ c_1 - 12 \sqrt{ |c_2| }, c_1 + 12 \sqrt{ |c_2| } \right] \end{equation} Cumulant $c_2$ may become negative for sets of Heston parameters that do not satisfy the Feller condition, i.e, $2 \bar{u} \lambda > \eta^2$. We therefore use the absolute value of $c_2$. The formulas for cumulants are reported in section (\ref{annex:Heston}).\\ Analytical formula for density function being not available, Figure \ref{HestonDensity} provides an illustration for the recovered density functions with $T = 0.1$ and $T = 1$ respectively. For $T=0.1$, the density is much more peaked. \\ We examine the European call option with a short maturity $T=0.1$ year for pricing. Although the sharp peaked density, the error convergence of the method is still exponential in $N$ and $M$ respectively, as shown in Figure \ref{HestonCvTests}. 
Moreover, the error convergence rate is basically the same for different strike prices. \begin{figure}[htbp] \centering \includegraphics[width=0.73 \textwidth, height=0.5 \textwidth, trim={1cm 6.5cm 1cm 6cm}, clip]{HestonDensityTestfigure} \caption{ Recovered density functions in Heston stochastic volatility model for $T = 0.1$ y (solide line with 'o' marker) and $T = 1$ y (solide line with '+' marker) with $N=M=80$.} \label{HestonDensity} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.85 \textwidth, trim={1cm 7.3cm 1cm 7.2cm}, clip]{HestonCvTests} \caption{Error convergence for pricing European call option with short maturity $T = 0.1$ in Heston stochastic volatility model.} \label{HestonCvTests} \end{figure} \section{Conclusion and discussions} In this paper, we have introduced a method for pricing European-style options combining Fourier series and generalized Fourier series with Legendre polynomials. It can be used as long as a characteristic function for the underlying price process is available. It consists to expand the density function as Legendre series and observe that the coefficients can be accurately retrieved from the characteristic functions. This representation of the density function is then uses for the risk-neutral valuation and approximation formulas are derived. These formulas involve the expression (\ref{IntLegendreExp}). However its direct implementation, using formulation (\ref{PropIntLegendreExp}), gives rise of rapid accumulations of round-off errors for large values of $n$. We rewrite these quantities as solution of second-order difference equations and compute them with machine precision using the stable {\it{Olver}} algorithm. Also derivation of the pricing method has been accompanied by an error analysis. Errors bounds have been derived and the study relies more on smoothness properties which are not provided by the payoff functions, but rather by the density function of the underlying stochastic models. This is particularly relevant in quantitative finance for option pricing where the payoffs of the contract are generally not smooth functions. In our numerical experiments, we chose a class of models widely used in quantitative finance. The payoff covered are continuous (call option) and discontinuous (digital call option). The tests considered, with various strike prices and maturities, show exponential convergence rate.\\ We suggest a couple of interesting avenues of research: \begin{itemize} \item Here, we have used Olver's algorithm for the computations of the integrals involving Legendre polynomials and exponential functions (\ref{IntLegendreExp}). Indeed, Olver's method consists to replace the original problem by an equivalent boundary value problem, which is solved by Gaussian elimination without pivoting. Extensions or reformulations of Olver's method have been made. For example Van der Cruyssen \cite{Cruyssen79} have shown that if the algebraic equations arising from the use of Olver's method are solved using an LU decomposition method then the total amount of work required is almost halved. See Lozier \cite{Lozier80} for a more detailed discussion. It would be interesting to reconsider and adapt or extend existing algorithms in order to reduce the amount of computational effort. \item Accurately valuing financial claims plays a key role in financial modelling, but the risk management of these derivative instruments is at least as important (see e.g \cite{Wilmott06} or \cite{AvellanedaLaurence99}). 
To undertake this function, we need to compute the {\it{Greeks}} defined as the sensitivity of the price of derivatives to a change in underlying parameters on which the value of an instrument is dependent. Series expansions for the sensitivity factors, e.g $\Delta = \frac{\partial V}{\partial S_0}$, $ \Gamma = \frac{\partial^2 V}{\partial S_0^2}$ or $\nu = \frac{\partial V}{\partial \sigma}$ are let for future research. \item In this manuscript, we have focused on the pricing of European-style options, which are instrumental and the building blocks for constructing more complex option products. Extending Legendre polynomials pricing method to cover more exotic contracts like forward start options, quanto options, spread options or options with early-exercise features (see e.g \cite{BompisHok14}, \cite{Pelsser00}, \cite{Haug07}) are exciting area of research. \item The calibration, which consists to determine the parameters of a parametric model, is an instrumental preliminary step for option pricing and hedging. Usually, it corresponds to find parameters that make the models consistent with market quotes (e.g a set of European call or put prices for various strikes and maturities) and the formulas derived in proposition \ref{PropositionPricingFormulas} can be used. This is formulated as a minimisation of some loss functions (e.g the squared difference between the quoted and model prices) and commonly leads to a non convex optimisation problem. Standard procedures based on the derivatives of the loss function (e.g quasi-Newton Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm) may be not appropriate. Indeed different starting points can lead to quite different solutions, which can have a significant impact on option pricing and sensitivity factors \cite{AzencottGadhyanGlowinski15}. Similarly, different calibration criteria lead to different results \cite{DetlefsenHardle07}. In \cite{GiliSchumann11}, the authors suggested to use heuristic techniques, differential evolution and particle swarm optimisation, which seem to bring some promising results. Exploring recent literature about real life applications of contemporary numerical optimisation and classification techniques in different fields such as \cite{TaorminaChau15, ZhangChau09, DimensionReductionZhangChau09, MayDandyMaier11} is part of our future research. \end{itemize} \section{Annexe} \subsection{Appendix A} We propose in the following proposition some analytical formulas for the computation of expression \begin{equation}\label{IntLegendreExp} \int_{\alpha}^1 P_n(t)e^{\beta t}dt. \end{equation} \begin{proposition}\label{PropIntLegendreExp} for $\beta \neq 0$, \begin{align} \int_{\alpha}^1 P_n(t) e^{\beta t}dt =& \frac{1}{2^n} \sum_{k=0}^{\lfloor \frac{n}{2} \rfloor} (-1)^k C_n^k C_{2n-2k}^n \left[ IEP(\beta, n-2k,1)-IEP(\beta, n-2k,\alpha) \right] \label{IntLegendreExp1} \end{align} with \begin{equation} IEP(\beta,n,t):=e^{\beta t} \sum_{i=0}^n t^i \frac{n!}{\beta^{n+1-i} i!}(-1)^{n-i}. \end{equation} \end{proposition} \begin{proof} For (\ref{IntLegendreExp1}), by using successively integration by parts formula, we show easily \begin{equation} \int t^n e^{\beta t} dt = e^{\beta t} \sum_{i=0}^n t^i \frac{n!}{\beta^{n+1-i} i!}(-1)^{n-i}. \end{equation} We obtain (\ref{IntLegendreExp1}) by using the formula (\ref{LegendrePowerSeries}) for Legendre polynomial. \end{proof} The computation of (\ref{IntLegendreExp}) with (\ref{IntLegendreExp1}) for small values of $n$ does not pose any problems. 
When $n \gg 1$, accuracy and stability issues arise because of severe subtractive cancellations in the summation (\ref{IntLegendreExp1}). In section (\ref{subsection:AlternateComputational}), we propose to use Olver's algorithm to compute these terms in a stable way to machine precision. \subsection{Appendix B} Here, we provide some analytical formulas for the cumulants of $\log(\frac{S_T}{K})$, the density function of $\log(S_T)$ and the option pricing in the class of models discussed in section \ref{sec:Numerical experiments}.\\ Let $X$ be a random variable and $\Phi_X$ its characteristic function. We can define a unique continuous function $\Psi_X$ in a neighbourhood of zero such that \begin{equation} \Psi_X(0) = 0 \,\,\,\, \textrm{and} \,\,\,\, \Phi_X(z) = \exp[\Psi_X(z)]. \end{equation} The function $\Psi_X$ is called the {\it{cumulant generating function}}. The {\it{cumulants}} of $X$ are defined by \begin{equation} \label{CumulantsCharacteristics} c_n(X) = \frac{1}{i^n} \frac{\partial^n \Psi_X}{\partial u^n}(0) \end{equation} (see \cite{ContTankov04} for details). We present the cumulants $c_1$, $c_2$ and $c_4$, needed to determine the truncation range in (\ref{TruncationRange}). \subsubsection{Black Scholes model} \label{annex:BS} With $r=0$ in (\ref{BSSDE}), we have \begin{align} c_1& = \log \left( \frac{S_0}{K} \right) - \frac{1}{2}\sigma^2 T\\ c_2& = \sigma^2 T\\ c_4& = 0 \\ \log(S_T) & \sim N( \log(S_0) - \frac{1}{2}\sigma^2 T, \sigma \sqrt{T})\\ V(x,t) & = N \left( \frac{\log(\frac{S_0}{K}) - \frac{1}{2} \sigma^2 T}{ \sigma \sqrt{T}} \right) \end{align} where $V(x,t)$ is the analytical digital call price with strike $K$. \subsubsection{Merton Jump Diffusion Model} \label{annex:Merton} With $r=0$ in (\ref{MertonSDE}), we have \begin{align} c_1& = T (\tilde{b} + \lambda \mu)\\ c_2& = T( \sigma^2 + \lambda(\mu^2 + \gamma^2)) \\ c_4& = T \lambda (3 \gamma^4 + 6 \mu^2 \gamma^2 + \mu^4) \\ f_{X_T}(x)& = e^{- \lambda T} \sum_{k=0}^{\infty} \frac{ (\lambda T)^k}{k!} \frac{1}{\sqrt{ 2 \pi (\sigma^2T + k \gamma^2)}} e^{-\frac{1}{2} \frac{(x - \tilde{b}T - k \mu)^2}{\sigma^2 T + k \gamma^2}} \label{Mertondensity} \\ V(x,t) & = e^{- \lambda T} \sum_{k=0}^{\infty} \frac{ (\lambda T)^k}{k!} N \left( \frac{ \log(\frac{S_0}{K}) + bT + k \mu }{\sqrt{\sigma^2 T + k \gamma^2}} \right) \end{align} with $b = -\frac{1}{2} \sigma^2 - \lambda(e^{\mu + \frac{\gamma^2}{2}} - 1) $, $\tilde{b} = b - \frac{\log(K)}{T}$, $f_{X_T}(x)$ the probability density function of $X_T \equiv \log(\frac{S_T}{K})$ and $V(x,t)$ the analytical digital call price with strike $K$. \subsubsection{Kou Jump Diffusion Model} \label{annex:Kou} With (\ref{KouSDE}), we have \begin{align} c_1& = T \left(\tilde{b} + \lambda \left( \frac{p}{\eta_1}-\frac{1-p}{\eta_2} \right) \right)\\ c_2& = T \left( \sigma^2 + 2\lambda \left( \frac{p}{\eta_1^2}+\frac{1-p}{\eta_2^2} \right)\right) \\ c_4& = 24T \lambda \left( \frac{p}{\eta_1^4}+\frac{1-p}{\eta_2^4} \right) \end{align} The pricing formula for a European call option is involved and can be found in Theorem 2 of \cite{Kou02}.
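Before turning to the Heston model, we note that the expressions above are straightforward to evaluate numerically. The following sketch, with purely illustrative parameter values and a finite truncation of the Poisson sum, evaluates the Merton digital call price and the cumulants $c_2$, $c_4$ quoted above; it is an illustration, not the code used for the experiments.
\begin{verbatim}
# Sketch: Merton jump-diffusion digital call price (truncated Poisson sum, r = 0)
# and the cumulants c2, c4 given above.  Parameter values are illustrative only.
from math import factorial
import numpy as np
from scipy.stats import norm

def merton_digital_call(S0, K, T, sigma, lam, mu, gamma, k_max=60):
    b = -0.5 * sigma**2 - lam * (np.exp(mu + 0.5 * gamma**2) - 1.0)
    price = 0.0
    for k in range(k_max + 1):
        w = np.exp(-lam * T) * (lam * T)**k / factorial(k)
        var_k = sigma**2 * T + k * gamma**2
        price += w * norm.cdf((np.log(S0 / K) + b * T + k * mu) / np.sqrt(var_k))
    return price

def merton_c2_c4(T, sigma, lam, mu, gamma):
    c2 = T * (sigma**2 + lam * (mu**2 + gamma**2))
    c4 = T * lam * (3 * gamma**4 + 6 * mu**2 * gamma**2 + mu**4)
    return c2, c4

print(merton_digital_call(S0=100.0, K=100.0, T=1.0,
                          sigma=0.2, lam=0.3, mu=-0.1, gamma=0.15))
print(merton_c2_c4(T=1.0, sigma=0.2, lam=0.3, mu=-0.1, gamma=0.15))
\end{verbatim}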
\subsubsection{Heston Stochastic Volatility Model} \label{annex:Heston} With (\ref{HestonSDE}), we have \begin{align} c_1& = (1-e^{-\lambda T}) \frac{(\bar{u}-u_0)}{2 \lambda} - \frac{1}{2} \bar{u}T\\ c_2& = \frac{1}{8 \lambda^3} ( \eta T \lambda e^{- \lambda T}(u_0 - \bar{u}) (8 \lambda \rho - 4 \eta) + \lambda \rho \eta(1-e^{-\lambda T})(16\bar{u} - 8u_0)\\ &+ 2 \bar{u} \lambda T ( -4 \lambda \eta \rho + \eta^2 + 4 \lambda^2) + 8 \lambda^2 (u_0 - \bar{u})(1-e^{-\lambda T})\\ & + \eta^2 ( (\bar{u} -2 u_0)e^{-2 \lambda T} + \bar{u} (6 e^{- \lambda T} -7) +2u_0 ) ) \end{align} \section*{Acknowledgements} We thank the reviewers and the associate editor for their constructive remarks to improve the quality of this paper. The authors would like also to thank C.W Oosterlee (Delft University of Technology) and C. Necula (University of Zurich) for helpful comments. \newpage
\section{Introduction} ``Elastic" or ``exclusive" production of $\rho^0$ mesons by photons, $\gamma p \rightarrow \rho^0 p$, has been extensively studied in fixed target experiments up to photon-proton centre-of-mass energies $W\simeq$ 20~GeV, using both real and virtual photons~\cite{bible}-\cite{nmc}. Recently, the cross section for this reaction has also been obtained in an indirect measurement at the HERA $ep$ collider, using quasi-real photons with space-like virtuality $Q^2$ between $4\cdot 10^{-8}$ and $2\cdot 10^{-2}$ GeV$^2$, at an average centre-of-mass energy $\langle W \rangle$ of 180 GeV~\cite{maciek}. At $W$ values up to about 20 GeV, elastic photoproduction of $\rho^0$ mesons has the characteristic features of soft diffractive processes: the dependence of the production cross section on $W$ is weak, the dependence on $t$, the square of the four-momentum transferred at the proton vertex, is approximately exponential, and the vector meson is observed to retain the helicity of the photon ($s$-channel helicity conservation, SCHC). Such energy and $t$ dependences are also characteristic of hadronic diffractive processes. The similarity between $\rho^0$ photoproduction and hadronic processes can be understood in the framework of the Vector Dominance Model (VDM)~\cite{saku}, in which the photon is assumed to fluctuate into a vector meson before interacting with the target nucleon; the reaction $\gamma p \rightarrow \rho^0 p $ is thus related to the elastic process $\rho^0 p \rightarrow \rho^0 p $. At sufficiently high energies, diffractive interactions are usually described in terms of the exchange of a pomeron, an object with the quantum numbers of the vacuum. Regge theory provides a framework in which many of the features of hadronic reactions can be described~\cite{goulianos}. In particular, the energy dependence of diffractive processes is related to the intercept of the pomeron trajectory. Several models offer a description of diffractive vector meson production~\cite{dl}-\cite{ginzburg}; some of them are in the framework of perturbative QCD. The study of vector meson photoproduction at the energies available at HERA may thus help to clarify the nature of the pomeron. This paper reports a measurement of the elastic $\rho^0$ photoproduction cross section at $\langle W \rangle$ of 70 GeV, based on about 6,000 $ep \rightarrow ep \pi^+ \pi^-$ events collected by the ZEUS experiment in 1993. In this measurement the final state electron and proton are not detected and the relevant kinematic quantities are determined from the measured three-momenta of the $\rho^0$ decay products, assuming that they are pions. The paper is organised as follows. After defining the variables relevant to $\rho^0$ production, we describe the experimental conditions and the event selection criteria, and then discuss the acceptance corrections and the background. From the analysis of the differential cross section $d\sigma/dM_{\pi\pi}$, where $M_{\pi\pi}$ is the invariant mass of the $\pi^+ \pi^-$ system, we obtain the integrated cross section $\sigma_{\gamma p\rightarrow \rho^0 p}$. We then discuss the differential cross section $d\sigma/dt$ and the angular distributions of the decay pions. Finally, from the value of $d\sigma/dt$ at $t=0$, the total $\rho^0 p$ cross section is derived using the optical theorem and assuming VDM. \section{Elastic $\rho^0$ photoproduction at HERA} Elastic $\rho^0$ photoproduction was investigated by means of the reaction (see Fig. 
\ref{rhodiag}) $$ e(k)~ + p(P) \rightarrow e(k') + \rho^0(V) + p (P'), $$ where the symbols in parentheses indicate the four-momenta of the particles involved. For unpolarised electrons and protons, two independent variables describe inclusive $ep$ scattering, since the $ep$ centre-of-mass energy $\sqrt s =2\sqrt{E_eE_p} = 296$ GeV is fixed by the energies of the electron ($E_e$) and of the proton ($E_p$) beams. The variables can be any two of the following four: \begin{itemize} \item $-Q^2 = q^2 = (k-k')^2$, the four-momentum squared carried by the virtual photon; \item $x =Q^2/(2P\cdot q)$, the Bjorken variable; \item $y = (q\cdot P)/(k\cdot P)~,$ the fraction of the electron energy transferred by the photon to the hadronic final state, measured in the proton rest frame; \item $W$, the centre-of-mass energy of the $\gamma^* p$ system, where $$W^2 = (q+P)^2 = -Q^2 + 2y(k\cdot P)+M^2_p,$$ $M_p$ being the proton mass. \end{itemize} The hadronic final state, containing the scattered proton and the pions from the decay $\rho^0 \rightarrow \pi^+ \pi^-$, is described by additional variables, including the invariant mass $M_{\pi\pi}$ of the two decay pions, the square of the four-momentum transferred at the proton vertex, $t$, and the polar and azimuthal angles, defined in section~\ref{angular}, of the decay pions in the $\pi \pi$ centre-of-mass frame. For the data presented here, only the three-momenta of the final state pions were measured. Events in which the scattered electron was detected in the ZEUS calorimeter were rejected, thereby restricting $Q^2$ to be below $Q^2_{\max}\approx 4$~GeV$^2$. The median $Q^2$ is approximately $10^{-4}$~GeV$^2$. To explain how the relevant kinematic quantities are obtained from the four-momenta of the two pions, we first consider the case $Q^2=Q^2_{\min}= M_e^2 \frac{y^2}{1-y}$ ($M_e$ is the electron mass) and then discuss the effect of larger $Q^2$. For $Q^2=Q^2_{\min}$ ($\approx 10^{-9}$~GeV$^2$ for the kinematic range covered by the present data), the virtual photon is emitted with zero transverse momentum and with longitudinal momentum $p_{Z \gamma}\simeq - E_\gamma$ in the direction opposite to that of the proton beam\footnote{Throughout this paper we use the standard ZEUS right-handed coordinate system, in which $X = Y = Z = 0$ is the nominal interaction point, the positive $Z$-axis points in the direction of flight of the protons (referred to as the forward direction) and the $X$-axis is horizontal, pointing towards the centre of HERA.}. Energy and momentum conservation relate the photon energy $E_{\gamma}$ to the two-pion system energy $E_{\pi \pi}$ and longitudinal momentum $p_{Z\pi \pi}$ by $2E_{\gamma} \simeq (E_{\pi \pi} - p_{Z\pi \pi})$. The photon-proton centre-of-mass energy can then be expressed as: $$ W^2 \simeq 4 E_\gamma E_p \simeq 2 (E_{\pi \pi} - p_{Z\pi \pi}) E_p.$$ \noindent The $\rho^0$ transverse momentum squared in the laboratory frame, $p_T^2$, approximates to $-t$: \begin{eqnarray} t & = & (q-V)^2 = -\mbox{$Q^2$} -2q\cdot V + M^2_{\pi \pi} \nonumber \\ & \simeq & -2E_\gamma(E_{\pi \pi}+ p_{Z \pi \pi}) + M^2_{\pi \pi} \nonumber \\ & \simeq & -(E^2_{\pi \pi} - p^2_{Z \pi \pi}) + M^2_{\pi \pi} \nonumber \\ & = & -p^2_T, \nonumber \label{tvspt} \end{eqnarray} \noindent where, in addition to $2E_{\gamma} \simeq (E_{\pi \pi} - p_{Z\pi \pi})$, we have used the approximation $Q^2=0$.
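The reconstruction described above amounts to a few lines of computation once the two pion momenta are known. As an illustration only (the momenta below are invented and the function name is ours), a minimal sketch reads:
\begin{verbatim}
# Sketch: W, t (~ -p_T^2) and M_pipi from the two measured pion three-momenta.
import numpy as np

M_PI = 0.13957   # charged pion mass [GeV]
E_P  = 820.0     # proton beam energy [GeV]

def reconstruct_W_t(p_plus, p_minus):
    """p_plus, p_minus: (px, py, pz) of the two pion candidates, in GeV."""
    p_plus, p_minus = np.asarray(p_plus), np.asarray(p_minus)
    e_plus  = np.sqrt(M_PI**2 + p_plus  @ p_plus)
    e_minus = np.sqrt(M_PI**2 + p_minus @ p_minus)
    p_pipi  = p_plus + p_minus
    e_pipi  = e_plus + e_minus
    w2  = 2.0 * (e_pipi - p_pipi[2]) * E_P       # W^2 ~ 2 (E - p_Z) E_p
    pt2 = p_pipi[0]**2 + p_pipi[1]**2            # t ~ -p_T^2
    m2  = e_pipi**2 - p_pipi @ p_pipi            # invariant mass squared of the pair
    return np.sqrt(w2), -pt2, np.sqrt(m2)

# made-up momenta, purely for illustration (pions go mostly backward)
W, t, m = reconstruct_W_t((0.45, 0.10, -1.20), (-0.30, -0.05, -0.90))
print(W, t, m)
\end{verbatim}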
Non-zero values of $Q^2$ cause $p_T^2$ to differ from $-t$ by $\raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$<$} Q^2$; the effect on the measured distributions is discussed in section~\ref{ptt}. For this measurement, the minimum kinematically allowed value of $|t|$ is negligible, $|t_{\min}|\simeq 10^{-8}$~GeV$^2$ at $W=70$~GeV. Fig.~\ref{papercomp} shows the scatter plot of the reconstructed versus the true values of $W$ and $t$ for the sample of Monte Carlo events used to evaluate the acceptance (cf. section~\ref{ptt}). The difference of the reconstructed and the true value of $W$ has a mean value of zero and an r.m.s. spread of 1.4~GeV. The analogous difference for $t$ has also a mean value of approximately zero and an r.m.s. spread of 0.06~GeV$^2$. The events far from the diagonal have $Q^2 \gg Q^2_{\min}$. Throughout the analysis, the variable $W$ was calculated using the above approximation. For $t$, the $Q^2$ dependence of the relation between $t$ and $p_T^2$ was taken into account in the acceptance correction, as discussed in section~\ref{ptt}. In the one photon exchange approximation, the $e p$ and the $\gamma^* p$ cross sections for elastic $\rho^0$ production are related by \begin{eqnarray} \frac{d^2\sigma_{ep \rightarrow ep\rho^0}}{dydQ^2} = \frac{\alpha}{2\pi Q^2} \left[\left( \frac{1+(1-y)^2}{y} - \frac{2(1-y)}{y} \cdot \frac{Q_{\min}^2}{Q^2}\right) \cdot \sigma_T^{\gamma^*p \rightarrow \rho^0 p}(W,Q^2) + \right. \nonumber \\ \left. \frac{2(1-y)}{y} \cdot \sigma_L^{\gamma^*p \rightarrow \rho^0 p}(W,Q^2)\right], \label{bornc} \end{eqnarray} where $\alpha$ is the fine structure constant and $\sigma_T^{\gamma^*p \rightarrow \rho^0 p}(W,Q^2)$ and $\sigma_L^{\gamma^*p\rightarrow \rho^0 p}(W,Q^2)$ are the respective cross sections for transversely and longitudinally polarised virtual photons. Following VDM, these are related by \begin{eqnarray} \sigma_L^{\gamma^*p \rightarrow \rho^0 p}(W,Q^2) \simeq \sigma_T^{\gamma^*p \rightarrow \rho^0 p}(W,Q^2) \cdot \frac{Q^2}{M_{\rho}^2}, \label{sigmal} \end{eqnarray} \noindent with \begin{eqnarray} \sigma_T^{\gamma^*p \rightarrow \rho^0 p}(W,Q^2) = \sigma_{\gamma p\rightarrow \rho^0 p}(W) \left/ \left(1+\frac{Q^2}{M_{\rho}^2}\right)^2 \right., \label{sigmat} \end{eqnarray} \noindent where $\sigma_{\gamma p \rightarrow \rho^0 p}(W)$ is the cross section for elastic photoproduction ($Q^2=0$) of $\rho^0$ mesons, and $M_{\rho}$ is the $\rho^0$ meson mass. Substituting the latter two expressions into equation~(\ref{bornc}) yields: \begin{eqnarray} \frac{d^2\sigma_{ep \rightarrow ep \rho^0}}{dy dQ^2} =\Phi(y,Q^2) \cdot \sigma_{\gamma p \rightarrow \rho^0 p }(W(y)), \label{crs} \end{eqnarray} \noindent where the function \begin{eqnarray} \Phi(y,Q^2)=\frac{\alpha}{2 \pi Q^2} \left\{\left[ \frac{1+(1-y)^2}{y} - \frac{2(1-y)}{y} \left(\frac{Q_{\min}^2}{Q^2} - \frac{Q^2}{M_{\rho}^2} \right) \right] \frac{1 }{\left(1+\frac{Q^2}{M_{\rho}^2}\right)^2} \right\} \label{flux} \end{eqnarray} \noindent is the effective photon flux. The cross section $\sigma_{\gamma p \rightarrow \rho^0 p}(\langle W \rangle)$ for elastic $\rho^0$ photoproduction is thus obtained as the ratio of the corresponding acceptance corrected electron-proton cross-section, integrated over the $y$ and $Q^2$ ranges covered by the measurement, and the effective photon flux $\Phi(y,Q^2)$ integrated over the same $y$ and $Q^2$ ranges. 
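For orientation, the integral of the effective photon flux can be evaluated numerically. The sketch below does this with the approximate relation $y \simeq W^2/s$ (valid for quasi-real photons, neglecting $Q^2$ and $M_p^2$) and an assumed upper limit $Q^2_{\max}=4$~GeV$^2$, so the resulting number is indicative only.
\begin{verbatim}
# Sketch: effective photon flux (eq. flux) integrated over 60 < W < 80 GeV and
# over Q^2 from Q^2_min(y) to an assumed Q^2_max; the y-W conversion is approximate.
import numpy as np
from scipy import integrate

ALPHA, ME, MRHO = 1 / 137.036, 0.000511, 0.770   # GeV
S = 4 * 26.7 * 820.0                              # ep centre-of-mass energy squared

def flux(y, q2):
    q2_min = ME**2 * y**2 / (1 - y)
    bracket = (1 + (1 - y)**2) / y - 2 * (1 - y) / y * (q2_min / q2 - q2 / MRHO**2)
    return ALPHA / (2 * np.pi * q2) * bracket / (1 + q2 / MRHO**2)**2

def integrated_flux(w_lo, w_hi, q2_max=4.0):
    y_lo, y_hi = w_lo**2 / S, w_hi**2 / S         # W^2 ~ y*s for quasi-real photons
    def inner(y):
        q2_min = ME**2 * y**2 / (1 - y)
        # integrate in ln(Q^2) to handle the many decades between Q^2_min and Q^2_max
        integrand = lambda lnq2: flux(y, np.exp(lnq2)) * np.exp(lnq2)
        return integrate.quad(integrand, np.log(q2_min), np.log(q2_max))[0]
    return integrate.quad(inner, y_lo, y_hi)[0]

print(integrated_flux(60.0, 80.0))   # indicative value, of order 0.02
\end{verbatim}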
This procedure determines the cross section, assuming VDM, for elastic production of $\rho^0$ mesons at $Q^2=0$ averaged over the $W$ range of the measurement. \section{Experimental conditions} \subsection{HERA} During 1993, HERA operated at a proton energy of $820$ GeV and an electron energy of $26.7$~GeV; 84 colliding electron-proton bunches were stored, with an additional 6 unpaired proton and 10 unpaired electron bunches. These additional pilot bunches were used for background studies. The time between bunch crossings was $96$~ns. Typical bunch currents were 10~mA for both the electron and the proton beam, providing luminosities of approximately $6\cdot10^{29}$~cm$^{-2}$~s$^{-1}$. \subsection{The ZEUS detector} The components of the ZEUS detector are described in detail in ref.~\cite{status93}. A short description of those most relevant to the present analysis follows. Charged particles are tracked by the vertex detector (VXD) and the central tracking detector (CTD) which operate in a magnetic field of 1.43 T provided by a thin superconducting solenoid. The VXD \cite{VXD} is a cylindrical drift chamber that surrounds the beam pipe and consists of 120 radial cells, each with 12 sense wires. The VXD resolution in the plane transverse to the beam direction, for the data presented here, is 50 $\mu$m in the central region of a cell and 150 $\mu$m near the cell edges. The CTD~\cite{CTD} consists of 72 cylindrical drift chamber layers, organised in 9 superlayers covering the polar angle region $15^\circ < \theta < 164^\circ$. In 1993, the spatial resolution in the plane perpendicular to the beam was $\simeq 260~\mu$m. For the data presented in this paper, the combined CTD and VXD information provides resolutions for the primary $ep$ interaction vertex of 1.1 cm in $Z$ and 0.2 cm in the $XY$ plane. The momentum resolution, for tracks traversing all superlayers, is $\sigma(p_t)/p_t \approx \sqrt{(0.005)^2p_t^2 + (0.016)^2}$, where $p_t$ is in GeV. The high resolution uranium-scintillator calorimeter (CAL) \cite{CAL} consists of a forward (FCAL), a barrel (BCAL) and a rear (RCAL) part, respectively covering the polar regions $2.2^\circ$ to $36.7^\circ$, $36.7^\circ$ to $129.1^\circ$, and $129.1^\circ$ to $176.6^\circ$. The calorimeter parts are subdivided transversely into towers and longitudinally into electromagnetic (EMC) and hadronic (HAC) sections. A section of a tower is called a cell; each cell is viewed by two photomultiplier tubes. Holes of $20 \times 20$~cm$^2$ in the centre of FCAL and RCAL accommodate the HERA beam pipe. From test beam data, the energy resolution was found to be $ \sigma_E/E = 0.18/\sqrt{E(\mbox{GeV})} $ for electrons and $\sigma_E/E = 0.35/\sqrt{E(\mbox{GeV})}$ for hadrons. The performance, energy and time calibration of the calorimeter are continuously monitored using pedestal triggers, charge and light injection as well as the uranium radioactivity. The additional features relevant to the present analysis are the sub-nanosecond time resolution and the low noise of approximately 15 MeV for the electromagnetic and 25 MeV for the hadronic cells. The veto wall is used to tag events in which a proton has scattered off a residual gas molecule in the beam pipe (``proton-gas" events) upstream of the ZEUS detector. It consists of two layers of scintillators, with overall dimensions of 500~cm $\times$ 600~cm, on both sides of an 87~cm thick iron wall centred at $Z= -7.3$~m. 
The C5 beam monitor, a small lead-scintillator counter assembly located at $Z=-3.2$~m, records the arrival times of halo particles associated with the proton and electron bunches within $10$ cm of the beam axis. It is used to verify the relative timing of the beams and to reject events due to proton-gas interactions. The luminosity detector (LUMI) \cite{lumi} measures the luminosity by means of the Bethe-Heitler reaction $ep\rightarrow ep \gamma$; a detailed description of the method used is given in~\cite{maciek}. The bremsstrahlung events are identified by measuring the radiated photon in a lead-scintillator calorimeter placed in the HERA tunnel downstream of the interaction point in the direction of the outgoing electron. \subsection{Untagged $\rho^0$ photoproduction trigger} ZEUS uses a three-level trigger system \cite{status93}. The data presented here come from the ``untagged photoproduction trigger'', designed to select vector meson photoproduction events \cite{tesi}. The term ``untagged'' refers to the fact that the scattered electron escapes undetected through the beam pipe hole in the RCAL and is not detected in the LUMI detector. The trigger conditions can be summarised as follows: \begin{itemize} \item First-level-trigger (FLT): \begin{itemize} \item At least 464 MeV deposited in the electromagnetic section of RCAL. \item At least one track candidate in the CTD. \item Less than 3750 MeV deposited in the calorimeter towers surrounding the beam pipe in the forward direction. This requirement suppressed proton beam-gas events and a significant fraction of the photoproduction cross section. \end{itemize} The trigger was vetoed if hits were present in the C5 or in the veto wall counters, with timing consistent with that of a $p$-gas collision that occurred upstream of the interaction point. The resulting FLT rate was $\approx 10$~Hz at a luminosity of $6\cdot10^{29}$~cm$^{-2}$~s$^{-1}$. \item Second-level-trigger (SLT): \begin{itemize} \item Events with calorimeter timing indicating that the interaction had occurred upstream of the interaction point were rejected. \end{itemize} \item Third-level-trigger (TLT): \begin{itemize} \item Cosmic ray events were discarded on the basis of calorimeter timing. \item A tighter calorimeter timing rejection was applied. \item A cut on the $Z$ value of the reconstructed event vertex of $\pm~85$ cm was imposed. \end{itemize} The rate of events passing the untagged photoproduction trigger at the third level was about 1.2 Hz. For some fraction of the data-taking period a factor of 3 prescale was applied. \end{itemize} The requirements that a signal be detected in RCAL and a track be seen in the CTD effectively selected events with photon energies between 0.5 and 4 GeV, corresponding to $40 < W < 120$ GeV. \section{Event selection} \label{cuts} During 1993, approximately $7\cdot 10^6$ events were recorded, corresponding to a total integrated luminosity of about 550 nb$^{-1}$. The data presented in this paper correspond to an effective luminosity, accounting for the effects of the prescaling of the trigger mentioned in the previous section, of $(240 \pm 8)$~nb$^{-1}$.
The following offline requirements were imposed in order to obtain the final $\rho^0$ photoproduction sample: \begin{itemize} \item exactly two tracks from particles of opposite charge and both associated with a reconstructed vertex; \item transverse momentum greater than 200 MeV and hits in at least the 3 innermost CTD superlayers for each of the two tracks, thus restricting the data to a region of well understood track reconstruction efficiency; this restriction approximately translates into one on the track pseudorapidity ($\eta=-\ln{\tan{\theta/2}}$) of $|\eta|~\raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$<$}~1.8$; \item vertex position within $\pm$~40~cm of the nominal interaction point and within a radial distance of 1.5 cm from the beam axis (the interaction region was centred at $Z=-6$~cm and had an r.m.s. width of $\pm 11$~cm); \item total energy in the forward calorimeter $E_{FCAL} \leq$ 1 GeV, thereby limiting the contamination from the reaction $e p \rightarrow e \rho^0 X$, where $X$ is a hadronic state of mass $M_X$ into which the proton had dissociated; \item no energy deposits larger than 200 MeV in any calorimeter cell outside a circular region around the track impact point with a radius of 30~cm in the EMC and 50~cm in the HAC, thus rejecting events with neutral particles or with charged particles outside the region of sensitivity of the tracking system. \end{itemize} \noindent A total of 13570 events satisfied the above criteria. The pion mass was assigned to each track and the analysis was restricted to events reconstructed in the kinematic region defined by: \begin{eqnarray} 60 < & W & < 80 ~\mbox{GeV}, \nonumber\\ 0.55 < & M_{\pi\pi} & < 1.0 ~\mbox{GeV},\label{kin} \\ & p_T^2 & < 0.5 ~\mbox{GeV}^2 . \nonumber \end{eqnarray} In the chosen energy range the acceptance is well understood. The restricted mass range reduces the contamination from reactions involving other vector mesons, in particular from elastic $\phi$ and $\omega$ production, as well as from photon conversions. The restricted $p_T$ range reduces the contamination from events with diffractive dissociation of the proton ($ep \rightarrow e\rho^0 X$). The final sample contains 6381 events. The invariant mass spectrum is shown in Fig.~\ref{massbef} before and after the offline selection. The data are dominated by the $\rho^0$ signal. The corresponding mass spectra of like-sign two track events are also shown as the shaded areas. The small peak just above the $\pi \pi$ threshold in Figs.~\ref{massbef}b and c is due to $\phi \rightarrow K^+ K^-$ events, where the pion mass has been erroneously assigned to the tracks. \section{Monte Carlo generators and acceptance calculation} \label{ptt} The reaction $ep \rightarrow e\rho^0p$ was modelled using two different Monte Carlo ge\-ne\-ra\-tors. The first generator, DIPSI~\cite{dipsi}, describes $\rho^0$ photoproduction in terms of pomeron exchange. Based on the model of Ryskin~\cite{misha}, it assumes that the exchanged photon fluctuates into a $q\bar{q}$ pair which then interacts with the pomeron emitted by the incident proton. The pomeron is described in terms of a gluon ladder. 
The cross section is proportional to $[\alpha_s(\bar q^2)]^2 \cdot [\bar x g(\bar x, \bar q^2)]^2$, where $\alpha_s(\bar q^2)$ is the strong coupling constant, $\bar x g(\bar x, \bar q^2)$ is the gluon momentum density in the proton, $\bar x$ is the fraction of the proton's momentum carried by the gluon ladder and $2 \bar q^2$, in the leading logarithm approximation, is the upper limit for the virtuality of the two $t$-channel gluons of the gluon ladder. Once $\alpha_s$ and the gluon momentum density are fixed, the $W$ and $t$ dependences are determined. The process under study is sensitive to values of $\bar x \approx M_{\rho}^2/W^2 \approx 10^{-4}$ and $\bar q^2 \approx 0.25~M_{\rho}^2 \approx 0.15$~GeV$^2$. The latter is below the expected region of validity of the calculation; a parametrisation for the product $[\alpha_s(\bar q^2)]^2 \cdot [\bar x g(\bar x, \bar q^2)]^2$ was however found for which the model describes all measured distributions well~\cite{tesi_luciano}. For this choice of the parameters the cross section has a very weak dependence on $W$. The two-pion invariant mass $M_{\pi\pi}$ was generated so as to reproduce, after reconstruction, the measured distribution. The second generator, LEVEC, was developed within the HERWIG framework~\cite{herwig}. It assumes expression~(\ref{sigmat}) for the $Q^2$ dependence of the cross section; the contribution of longitudinal photons is neglected. The generated events were weighted such that all other distributions (i.e. those over $W$, $M_{\pi\pi}$, $p_T^2$ and the angular distributions of the decay pions), after detector simulation and event reconstruction, have the same shape as those of the data. For both programs, the angular distribution of the decay pions was assumed to be that expected on the basis of SCHC~\cite{shilling-wolf}. The acceptance for elastic $\rho^0$ production was calculated using both the DIPSI and the LEVEC generators. Fig.~\ref{tacc}a, b and c respectively show the acceptance as a function of $M_{\pi\pi}$, $W$ and $p_T^2$. The acceptance includes the effects of the geometric acceptance of the apparatus, of the detector efficiency and resolution, and of the trigger and reconstruction efficiencies. The trigger efficiency is $\approx 43\%$. The average acceptance is about 7\%. The acceptance increases with increasing $\pi \pi$ mass, has a broad maximum for $W~\raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$<$}~70$~GeV and is almost independent of the $\rho^0$ transverse momentum squared. In order to convert the measured $dN/dp_T^2$ distribution to the differential cross section $d\sigma/dt$, the $p_T^2$ acceptance (Fig.~\ref{tacc}c) was multiplied by a correction factor $F$, which is, bin by bin, the ratio of the $p_T^2$ and $t$ distributions at the generator level. Fig.~\ref{tacc}d shows $F$ as a function of $p_T^2$. To produce the acceptance corrected cross sections, the generator DIPSI was used. The difference between the results for the acceptance obtained with the two generators was taken as an estimate of the systematic error due to the model dependence of the acceptance calculation. {}From the comparison of the distributions of the reconstructed and generated events, the resolutions in $M_{\pi\pi}$ and $p_T^2$ were found to be $\sigma_{M_{\pi\pi}} \simeq 20$ MeV~--~consistent with the mass widths found for neutral strange particles reconstructed using the CTD information~\cite{k0}~--~and $\sigma_{p_T^2} \simeq 0.01$~GeV$^2$. 
The DIPSI and LEVEC simulations were also used to produce samples of elastic $\omega$ and $\phi$ photoproduction events for the study of the contamination from such processes. The reaction $ep \rightarrow e \rho^0 X$, where $X$ is the hadronic state resulting from the diffractive dissociation of the proton (see section \ref{anal}), was simulated using PYTHIA~\cite{pythia}. A cross section of the form $d^2\sigma /dt dM_X^2 \propto e^{bt}/M_X^\beta$, with $b=4.5$ GeV$^{-2}$, was assumed; the maximum value of $M_X$ was fixed by the condition $M_X^2/W^2 \le 0.1$~\cite{chapin}. The exponent $\beta$ was varied between 2 and 3, consistent with the result $\beta=2.20~\pm~0.03$ recently obtained at Fermilab~\cite{CDF} for the diffractive dissociation of the proton in $\bar p p$ collisions. A second generator (RHODI) was also used, based on the model calculation of ref.~\cite{jeff_misha}. In both generators the $\rho^0$ decay angular distributions were assumed to be the same as those of the elastic events. \section{Backgrounds} \label{anal} After applying the selection criteria described in section \ref{cuts}, the data still contain contributions from various background processes to the reaction $ep \rightarrow e \pi^+ \pi^- p$: \begin{itemize} \item inelastic $\rho^0$ production, $ep \rightarrow e \rho^0 X$, in which the proton diffractively dissociates into a hadronic state $X$ not detected in the calorimeter; \item elastic production of $\omega$ and $\phi$ mesons; \item beam-gas interactions. \end{itemize} In order to estimate the contamination from the inelastic channel $e p \rightarrow e \rho^0 X$, the energy distribution in the forward calorimeter ($E_{FCAL}$) was studied. Fig.~\ref{fcale} shows the distributions for both elastic and inelastic events as obtained from Monte Carlo simulations based on the generators described above, along with the distribution for the data. To obtain these plots, the cut $E_{FCAL}<1$~GeV was not applied. The plot for elastic Monte Carlo events (Fig. \ref{fcale}a) shows the FCAL energy spectrum resulting from noise (mainly the uranium radioactivity). Nearly all events have $E_{FCAL}<1$~GeV. For the inelastic Monte Carlo sample as well as for the data, the $E_{FCAL}$ spectrum extends to much higher values. Energy deposits in the forward calorimeter greater than~1~GeV were therefore ascribed to inelastic reactions in which part of the diffractively produced hadronic state $X$ was detected in FCAL. The number of residual inelastic events in the data with FCAL energy smaller than 1 GeV was then estimated as \begin{eqnarray} N_{in} = \left\{ \frac{N(E_{FCAL}<1~\mbox{GeV})}{N(E_{FCAL}>1~\mbox{GeV})} \right\}_{MC} \times \left\{N(E_{FCAL}>1~\mbox{GeV})\right\}_{DATA}. \nonumber \end{eqnarray} \noindent The numbers of Monte Carlo and data events, $\{N(E_{FCAL}>1~\mbox{GeV})\}_{MC}$ and $\{N(E_{FCAL}>1~\mbox{GeV})\}_{DATA}$, were computed in the region $1<E_{FCAL}<8$~GeV, where the inelastic Monte Carlo describes the data well. The overall contamination integrated up to $|t| = 0.5$~GeV$^2$ was estimated to be $11\% \pm 1\%~(\mbox{stat.}) \pm 6\%~(\mbox{syst.})$. This was obtained using PYTHIA with $\beta=2.5$. The systematic error reflects the sensitivity of the result to the value of the exponent $\beta$, which was varied between 2 and 3, and to the use of RHODI instead of PYTHIA.
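The estimator above is a simple sideband scaling; as a toy illustration only (the event counts below are invented, not the measured ones):
\begin{verbatim}
# Toy version of the residual inelastic background estimate
#   N_in = {N(E_FCAL<1)/N(E_FCAL>1)}_MC * {N(E_FCAL>1)}_DATA,
# with Poisson error propagation on the three (assumed uncorrelated) counts.
def residual_inelastic(n_mc_below, n_mc_above, n_data_above):
    n_in = n_mc_below / n_mc_above * n_data_above
    rel_err = (1.0 / n_mc_below + 1.0 / n_mc_above + 1.0 / n_data_above) ** 0.5
    return n_in, n_in * rel_err

# hypothetical counts, control region 1 < E_FCAL < 8 GeV
print(residual_inelastic(n_mc_below=350.0, n_mc_above=900.0, n_data_above=1800.0))
\end{verbatim}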
As a check, the sum of the elastic and inelastic Monte Carlo distributions of Fig.~\ref{fcale}a and~\ref{fcale}b were fitted to the data of Fig.~\ref{fcale}c; the normalisations of the simulated distributions were free parameters of the fit. The $\chi^2$ of the fit has a broad minimum around $\beta=2.5$. This method gave a $12\%$ contamination, consistent with the result given above. The result depends very little on the shape of the generated $t$ distribution: the estimate of the contamination varies by less than the quoted statistical error for a change of the exponential slope $b$ between 4 and 6~GeV$^{-2}$. The contamination was also studied as a function of $t$ and $W$. It was found to vary from 6\% for $|t|<0.05$ GeV$^2$ to 19\% for $|t| \simeq 0.5$ GeV$^2$; it increases by $3 \pm 2\%$ as $W$ increases from 65 to 75 GeV. The contamination due to the elastic production of $\phi$ and $\omega$ mesons was estimated from Monte Carlo simulations. The selection cuts described in section \ref{cuts} were applied to the simulated events after reconstruction. As an example, the contamination due to $\omega$ production was estimated as $$\frac{A_{\omega}\sigma_{\omega}} {A_{\omega}\sigma_{\omega}+A_{\rho}\sigma_{\rho} },$$ where $A_{\omega}$ is the acceptance for elastic $\omega$ events. Assuming a cross section ratio of $\sigma_{\omega}/\sigma_{\rho}=0.1$~\cite{totdat}, a contamination of $(1.3\pm0.2)\%$ is obtained. A similar procedure applied to $\phi$ events results in an estimated contamination of $(0.3 \pm 0.1)\%$, mainly due to the $\phi \rightarrow 3 \pi$ decay mode. These contributions were not subtracted but were included in the systematic error. The contributions from inelastic $\omega$ and $\phi$ production were negligible. Electron beam-gas and proton beam-gas contaminations were deduced from the pilot bunch event samples to which the cuts described in section \ref{cuts} were applied. The number of events passing the cuts was then scaled by the ratio between the electron (proton) current in the paired bunches and the current in the electron (proton) pilot bunches. The contamination due to electron-gas interactions was estimated to be 2.3 $\pm$ 0.5\%, while that due to proton-gas events was found to be 0.3 $\pm$ 0.2\%. All subsequent results are shown after subtraction of the contributions from inelastic proton diffraction and beam-gas interactions. \section{Results} \subsection{Differential cross section $d\sigma/dM_{\pi\pi}$} \label{mspec} \noindent Fig.~\ref{mplot} shows the acceptance corrected differential cross section $d\sigma/dM_{\pi\pi}$. The mass distribution is skewed compared to a Breit-Wigner distribution: there is an enhancement of the low mass side and a suppression of the high mass side. This distribution can be understood in terms of the interference between the resonant $\pi^+ \pi^-$ production and a non-resonant Drell-type background~\cite{drell} as discussed by S\"oding \cite{soeding}. In order to extract the contribution of the resonant part of the differential cross section $d\sigma/dM_{\pi\pi}$, we followed a procedure similar to that described in refs.~\cite{bulos,park,omega}. The function \begin{eqnarray} d\sigma/dM_{\pi\pi} = f_{\rho} \cdot BW_{\rho}(M_{\pi\pi}) + f_I \cdot I(M_{\pi\pi}) + f_{PS} \label{masf} \end{eqnarray} was fitted to the measured mass distribution. 
The term \begin{eqnarray} BW_{\rho}(M_{\pi\pi}) = \frac{M_{\pi\pi} M_{\rho} \Gamma_{\rho}(M_{\pi\pi})} {(M_{\pi\pi}^2-M_{\rho}^2)^2 + M_{\rho}^2 \Gamma_{\rho}^2(M_{\pi\pi})} \label{breit} \end{eqnarray} \noindent is a relativistic Breit-Wigner function, with a momentum dependent width~\cite{jackson} \begin{eqnarray} \Gamma_{\rho}(M_{\pi\pi}) = \Gamma_0 \left(\frac{p^*}{p^*_0}\right)^3 \frac{M_{\rho}}{M_{\pi\pi}}, \label{gamma1} \end{eqnarray} \noindent where $\Gamma_0$ is the width of the $\rho^0$, $p^*$ is the $\pi$ momentum in the $\pi \pi$ rest frame and $p^*_0$ is the value of $p^*$ at the $\rho^0$ nominal mass $M_{\rho}$. The function \begin{eqnarray} I(M_{\pi\pi}) = \frac{M_{\rho}^2-M_{\pi\pi}^2}{(M_{\pi\pi}^2-M_{\rho}^2)^2 + M_{\rho}^2 \Gamma_{\rho}^2(M_{\pi\pi})} \label{interf} \end{eqnarray} \noindent is a parametrisation of the interference term. The background term $f_{PS}$ was taken to be constant. The free parameters in the fit were $M_{\rho}$, $\Gamma_0$ and the coefficients $f_{\rho}$, $f_I$ and $f_{PS}$. The results of the fit are presented in table \ref{mtablfit} and in Fig.~\ref{mplot}. The $\chi^2/ndf$ is 1.4, for $ndf$=13. The fitted values of the $\rho^0$ mass and width are in good agreement with the accepted ones~\cite{pdb}. The background term $f_{PS}$ is consistent with zero, within a large error; similar results for $f_{PS}$ were obtained by earlier experiments~\cite{bulos,park} using the functional form~(\ref{masf}). The contribution of the resonant term increases with $|t|$, ranging from $86\%$ of the events for $|t|=0.01$~GeV$^2$ to $95\%$ for $|t|=0.5$ GeV$^2$. The interference and the background terms were also studied as a function of $W$ and of the decay pions' angular variables, $\cos{\theta_h}$ and $\phi_h$, defined in section~\ref{angular}. No dependence on these variables was found. The fit was repeated using different assumptions for the functional form of $d\sigma/d M_{\pi\pi}$. \begin{itemize} \item Parametrisation~(\ref{masf}) is only an effective one, as it leaves the interference term independent of the resonant and non-resonant terms, which is strictly speaking inconsistent with the S\"oding mechanism. We therefore fitted the following functional form to the invariant mass distribution: \begin{eqnarray} d\sigma/dM_{\pi\pi} = \left| A \frac{ \sqrt{ M_{\pi\pi} M_{\rho} \Gamma_{\rho}}} {M_{\pi\pi}^2 - M_{\rho}^2 +i M_{\rho}\Gamma_{\rho}} + B \right|^2, \label{wolf} \end{eqnarray} \noindent where $A$, $B$, $M_{\rho}$ and $\Gamma_0$ were free parameters of the fit. For $\Gamma_{\rho}$ expression~(\ref{gamma1}) was used. The non-resonant amplitude $B$ was taken to be constant and real; it was also constrained to be non-negative. The results for the parameters are given in table~\ref{tabwolf}. The $\chi^2/ndf$ was 1.4, with $ndf=14$. \item The following alternative expressions for the width of the $\rho^0$ were adopted in the functions~(\ref{breit},\ref{interf}): \begin{eqnarray} \Gamma_{\rho}(M_{\pi\pi}) = \Gamma_0 \left(\frac{p^*}{p^*_0}\right)^3, \label{gamma2} \end{eqnarray} \begin{eqnarray} \Gamma_{\rho}(M_{\pi\pi}) = \Gamma_0 \left(\frac{p^*}{p^*_0}\right)^3 \frac{2} {1+(p^*/p^*_0)^2}. 
\label{gamma3} \end{eqnarray} \item The Breit-Wigner was parametrised, following refs.~\cite{ballam,jackson}, as \begin{eqnarray} BW_{\rho}(M_{\pi\pi}) = \frac{1}{p^*}\frac{M_{\pi\pi} M_{\rho} \Gamma_{\rho}(M_{\pi\pi})} {(M_{\pi\pi}^2-M_{\rho}^2)^2 + M_{\rho}^2 \Gamma_{\rho}^2(M_{\pi\pi})} \label{breit_ballam} \end{eqnarray} \noindent and expression~(\ref{gamma3}) was used for the width. \item The parametrisation given in ref.~\cite{egloff} was used: \begin{eqnarray} d\sigma/dM_{\pi\pi} = f_{\rho} \cdot BW_{\rho}(M_{\pi\pi}) \cdot \left\{1+ C_1 \left[ (M_{\rho}/M_{\pi\pi})^2 -1 \right] + C_2 \left[ (M_{\rho}/M_{\pi\pi})^2 -1\right]^2\right\}. \label{egloff1} \end{eqnarray} \noindent The fit was repeated for the three mass dependent widths (\ref{gamma1}),~(\ref{gamma2}) and~(\ref{gamma3}). \item The phenomenological parametrisation proposed by Ross and Stodolsky~\cite{stodolsky} was used: \begin{eqnarray} d\sigma/dM_{\pi\pi} = f_{\rho} \cdot BW_{\rho}(M_{\pi\pi}) \cdot (M_{\rho}/M_{\pi\pi})^n + f_{PS}, \label{stodolsky} \end{eqnarray} \noindent where the factor $(M_{\rho}/M_{\pi\pi})^n$ accounts for the skewing of the shape of the $\rho^0$ signal. The term $f_{PS}$ was taken to be constant. Here again the fit was repeated for the three mass dependent widths (\ref{gamma1}),~(\ref{gamma2}) and~(\ref{gamma3}). The parameter $n$ was found to be $n=4.9 \pm 0.5$, $n=5.8 \pm 0.5$ and $n=4.9 \pm 0.5$ for the three forms of the width, respectively. \end{itemize} \noindent In none of these cases did the quality of the fit change appreciably, as can be seen from table~\ref{tabcross}, in which the values of $\chi^2/ndf$ obtained for the various functional forms are summarised. The fitted values of the $\rho^0$ mass and width varied from 763 to 772 MeV and from 141 to 155 MeV, respectively. As we discuss in the next section, the values of the resonant part of the cross section were quite stable. \subsection{Integrated $\gamma p \rightarrow \rho^0 p$ cross section} \label{integrated} The cross section $\sigma_{\gamma p \rightarrow \pi^+ \pi^- p}$ at $Q^2=0$, integrated over the $M_{\pi\pi}$ and $t$ regions specified in~(\ref{kin}) and averaged over the range $60<W<80$~GeV, can be obtained from the data as $$ \sigma_{\gamma p \rightarrow \pi^+ \pi^- p} = \frac{ N_{\pi^+\pi^-}}{L \epsilon \Phi},$$ where $N_{\pi^+ \pi^-}$ is the number of observed events after background subtraction, $\epsilon$ is the overall acceptance, $L$ is the effective luminosity and $\Phi=0.02419$ is the effective photon flux factor (see eq. \ref{crs}) integrated over the specified $W$ and $Q^2$ ranges. In order to extract the cross section for the {\em resonant} process $\gamma p \rightarrow \rho^0 p$, it was assumed that the $\rho^0$ meson decays to $\pi^+ \pi^-$ with a $100\%$ branching ratio and the fit procedure described in section~\ref{mspec} (with expressions~(\ref{masf}-\ref{interf})) was used. The resonant part of the total $\pi^+ \pi^-$ signal is given by the parameter $f_{\rho}$ multiplied by the integral of the relativistic Breit-Wigner curve, that is the area under the dotted curve in Fig. \ref{mplot}. There is some arbitrariness in the choice of the integration limits of the Breit-Wigner curve. The integral was carried out in the range $2 M_{\pi}< M_{\pi\pi} < M_{\rho}+5\Gamma_0$, where the $\rho^0$ mass and width values were taken from the fit; the quantity $M_{\pi}$ is the pion mass. This requires an extrapolation beyond the measured region. 
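To make the extrapolation concrete, the ratio of the Breit-Wigner integral over the full range to that over the measured mass window can be evaluated numerically. In the sketch below the mass and width are placeholder values close to typical fitted ones, so the resulting factor is indicative only.
\begin{verbatim}
# Sketch: ratio of the relativistic Breit-Wigner (momentum-dependent width, gamma1)
# integrated over 2*m_pi < M < M_rho + 5*Gamma_0 to its integral over 0.55-1.0 GeV.
import numpy as np
from scipy.integrate import quad

M_PI = 0.13957
M_RHO, GAMMA0 = 0.770, 0.150          # GeV, placeholders for the fitted values
P0 = np.sqrt(M_RHO**2 / 4 - M_PI**2)  # pion momentum at the nominal rho mass

def bw(m):
    p = np.sqrt(m**2 / 4 - M_PI**2)
    gamma = GAMMA0 * (p / P0)**3 * M_RHO / m
    return m * M_RHO * gamma / ((m**2 - M_RHO**2)**2 + M_RHO**2 * gamma**2)

full, _     = quad(bw, 2 * M_PI, M_RHO + 5 * GAMMA0)
measured, _ = quad(bw, 0.55, 1.0)
print(full / measured)                # extrapolation factor, of order 1.2
\end{verbatim}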
The upper limit for the integration range approximately corresponds to the mass of the nearest resonance, the $\rho (1450)$, with the same quantum numbers and quark content as the $\rho^0$. The value of the cross section, for $2 M_{\pi}< M_{\pi\pi} < M_{\rho}+5\Gamma_0$, $|t|<0.5$~GeV$^2$ and averaged over the region $60<W<80$~GeV, was measured to be: \begin{eqnarray} \sigma_{\gamma p \rightarrow \rho^0 p} = 14.7\pm 0.4~(\mbox{stat.}) \pm2.4~(\mbox{syst.})~\mu\mbox{b}. \label{result} \end{eqnarray} \noindent The systematic error is dominated by the uncertainties on the acceptance determination ($13\%$), on the number of $\rho^0$ signal events (7\%), and on the inelastic background determination ($7\%$). If the integration is limited to the measured region, $0.55 < M_{\pi\pi} < 1 $~GeV, the cross section value is 12.4 $\mu$b, i.e. lower by a factor $\xi=1.19$. If the integral is computed up to $M_{\rho}+4 \Gamma_0$, the cross section decreases by $3\%$; if instead it is extended to $M_{\rho}+6 \Gamma_0$, the cross section increases by $2\%$. The uncertainty on the acceptance determination has three main contributions: \begin{enumerate} \item the uncertainty on the calorimeter trigger efficiency near the threshold ($ 9 \%$); \item the difference between the results obtained with DIPSI and with LEVEC ($8 \%$); \item the sensitivity of the results to the cuts, notably that on the minimum number of CTD superlayers traversed by each track ($ 6\%$). \end{enumerate} The various alternative functional forms for $d\sigma/dM_{\pi\pi}$ described in the previous section were also used to extract the resonant part of the signal. The values obtained were centred around the result given in~(\ref{result}) but spanned the range $\sigma_{\gamma p \rightarrow \rho^0 p}=13.6$-15.4~$\mu$b, corresponding to a $^{+5}_{-8}\%$ maximum variation. The corresponding variation of $\xi$ is $^{+8}_{-6}\%$. The method of Spital and Yennie~\cite{spital} was also used to obtain the cross section, yielding \begin{eqnarray} \sigma_{\gamma p \rightarrow \rho^0 p}=\frac{\pi \Gamma_0}{2} \left.\frac{d\sigma}{ dM_{\pi\pi}}\right|_{M_{\rho}}=15.5 \pm 0.4~\mu\mbox{b}, \label{spital} \end{eqnarray} \noindent where the $\rho^0$ mass and width were those given in table~\ref{mtablfit}. The result obtained with the Spital and Yennie method depends linearly on the value used for $\Gamma_0$; it is also sensitive to the value of $M_{\rho}$, since $d\sigma / dM_{\pi\pi}$ is a steep function of $M_{\pi\pi}$ in the region $M_{\pi\pi} \raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$>$} 750$~MeV, as can be seen from Fig.~\ref{mplot}. If the values of $M_{\rho}$ and $\Gamma_0$ given in ref.~\cite{pdb} are used, the cross section changes by less than $1 \%$. If $\Gamma_0$ is kept fixed and $M_{\rho}$ is varied between 760 and 780~MeV, the corresponding change in the cross section is $23 \%$. Table~\ref{tabcross} summarises the results. We have taken the spread into account by including a $\pm~7\%$ contribution to the systematic uncertainty of the cross section. The effect of real photon radiation by the incoming or the outgoing electron and that of vacuum polarisation loops is to lower the measured value of the cross section. The size of the correction was estimated to be smaller than 4\%~\cite{kurek}. The correction was not applied; instead a $4\%$ contribution was added to the systematic uncertainty. Table~\ref{syserr} summarises the contributions to the systematic error. 
The total systematic error was obtained by summing all contributions in quadrature. Result~(\ref{result}) for the cross section $\sigma_{\gamma p \rightarrow \rho^0 p}$ is presented in Fig. \ref{xplot}a together with a partial compilation of low energy measurements. The figure only includes results explicitly corrected for the interference term and the non-resonant background. We also show the ZEUS result~\cite{maciek}, obtained indirectly using tagged photoproduction data from the 1992 data-taking period. Fig.~\ref{xplot}b shows the result~(\ref{spital}) obtained with the method of Spital and Yennie, along with a compilation of low energy results obtained with the same technique. The dashed curve in both figures is a parametrisation by Schuler and Sj\"{o}strand~\cite{schuler}; the $W$ dependence of the curve is based on Regge theory and on the assumption that the intercept of the pomeron is $\alpha(0)=1+0.0808$ (the soft pomeron). The same general trend of the data is seen in the two figures. There are however differences in the results obtained by individual experiments; these differences at least in part reflect the ambiguity in the definition of the $\rho^0$ production cross section due to the finite width of the $\rho^0$. The comparison between different experiments -- and between the experimental results and the theoretical expectations -- should thus be taken with caution. The curve satisfactorily reproduces the energy dependence of the data. \subsection{Differential cross section $d\sigma/dt$} \label{t} \noindent Fig.~\ref{tplot}a shows the acceptance corrected differential cross section $d\sigma/dt$, integrated over the $\rho^0$ mass region $2 M_{\pi} < M_{\pi\pi} < M_{\rho}+5\Gamma_0$. It was obtained by multiplying the differential cross section for the measured range $0.55<M_{\pi\pi} < 1$~GeV by the factor $\xi=1.19$ discussed in section~\ref{integrated}. This assumes that, in each $t$ bin, the ratio of the integral of the relativistic Breit-Wigner distribution over the range $2 M_{\pi} <M_{\pi\pi} < M_{\rho}+5\Gamma_0$ to that over the range $0.55<M_{\pi\pi} < 1$~GeV is the same, i.e. that the mass and the width of the $\rho^0$ are the same in each bin. The contamination from inelastic $\rho^0$ production was subtracted bin by bin. Background and interference terms were also subtracted; their contribution was found by repeating the fitting procedure described in section~\ref{mspec} in each $t$ bin, fixing the values of the $\rho^0$ mass and width to those given in table~\ref{mtablfit}. Fig.~\ref{tplot}b shows the results obtained for $d\sigma/dt$ by applying the Spital and Yennie method in each $t$ bin. The data were fitted with the function \begin{eqnarray} \frac{ d\sigma}{d|t|} = A_t \cdot e^{-b_t |t|} \label{single} \end{eqnarray} \noindent in the range $|t| < 0.15$ GeV$^2$ and with the function \begin{eqnarray} \frac{ d\sigma}{d|t|} = A^{\prime}_{t}\cdot e^{-b^{\prime}_{t} |t| + c^{\prime}_{t} t^2} \label{double} \end{eqnarray} \noindent in the range $ |t|<0.5$~GeV$^2$. Both functions describe the data well. The results of the fits using expression~(\ref{double}) are shown in Fig.~\ref{tplot}. Table~\ref{fits} gives the values of the parameters obtained in the fits. The difference between the results of the fits to the points of Fig.~\ref{tplot}a and b was taken as an estimate of the systematic uncertainty on the determination of the resonant fraction of the cross section in each bin. 
The other contributions to the systematic errors on $A_t$, $A_t^{\prime}$ and $b_t$, $b_t^{\prime}$ are the uncertainties on the acceptance and on the inelastic background determination. All contributions to the systematic errors were summed in quadrature. If the first bin is excluded from the fit to the points of Fig.~\protect\ref{tplot}a, the values of the slopes $b_t$ and $b_t^{\prime}$ increase by 9\% and 7\%, respectively; the variation is smaller for the fit of Fig.~\protect\ref{tplot}b. Fig.~\ref{bplot} shows the result for the slope $b_t^{\prime}$ as obtained from fitting eq.~(\ref{double}) to the points of Fig.~\ref{tplot}a together with a compilation of fixed target results. All the results shown were obtained from fits of the form~(\ref{double}); the data were explicitly corrected for interference and non-resonant background, with the exception of those of ref.~\cite{jones}. The data of refs.~\cite{jones,gladding} have the somewhat large minimum $|t|$ values of 0.2 and 0.15~GeV$^2$, respectively. For $W>4$~GeV, the results do not depend strongly on $W$, as expected from Regge theory, which predicts a logarithmic dependence of the slope on $W$ if one trajectory dominates. \subsection{Decay angular distributions} \label{angular} The $\rho^0$ decay angular distributions can be used to determine elements of the $\rho^0$ spin-density matrix~\cite{shilling-wolf}. In the $s$-channel helicity frame, in which the $\rho^0$ direction in the photon-proton centre-of-mass frame is taken as the quantisation axis, the decay angular distribution $H(\cos\theta_h,\phi_h,\Phi_h)$ is a function of the polar angle $\theta_h$ of the $\pi^+$ in the $\rho^0$ centre-of-mass frame, of the azimuthal angle $\phi_h$ between the decay plane and the $\gamma$-$\rho^0$ plane (the $\rho^0$ production plane) and of the angle $\Phi_h$ between the $\rho^0$ production plane and the electron scattering one. For $t=t_{\min}$, the photon and the $\rho^0$ are collinear and $\phi_h$ is not defined. In the present experiment, the lepton scattering plane is not measured since neither the recoil proton nor the scattered electron are detected. Furthermore, the azimuthal angle $\phi_h$ can be determined only if the direction of the virtual photon is approximated by that of the incoming electron. It has been verified by Monte Carlo calculations that this is a good approximation. The experimental resolution in $\phi_h$ is approximately $40^{\circ}$; it is a function of $t$ and improves with increasing $|t|$. The resolution in $\cos{\theta_h}$ is $\approx 0.03$. In the following we present the results for the one dimensional distributions $H_{\cos \theta_h} (\cos \theta_h)$ and $H_{\phi_h}(\phi_h)$, obtained from $H(\cos\theta_h,\phi_h,\Phi_h)$ after integrating over $\phi_h$, $\Phi_h$ and over $\cos \theta_h$, $\Phi_h$, respectively. For unpolarised or transversely polarised electrons and a $J^P=1^{-}$ state decaying into two pions, the functions $H_{\cos \theta_h} (\cos \theta_h)$ and $H_{\phi_h}(\phi_h)$ can be written as~\cite{shilling-wolf,joos}: \begin{eqnarray} H_{\cos \theta_h}=\frac{3}{4}[1-r_{00}^{04}+(3r_{00}^{04}-1)\cos^2{\theta_h}], \label{W1} \end{eqnarray} \begin{eqnarray} H_{\phi_h}=\frac{1}{2\pi}(1-2r_{1-1}^{04}\cos2\phi_h), \label{W2} \end{eqnarray} \noindent where $r_{00}^{04}$ and $r_{1-1}^{04}$ are $\rho^0$ density matrix elements. In particular $r_{00}^{04}$ gives the probability that the $\rho^0$ is longitudinally polarised. 
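As an illustration of how the matrix elements are obtained in practice, the sketch below generates toy binned distributions from the forms (\ref{W1}) and (\ref{W2}) with assumed values of $r_{00}^{04}$ and $r_{1-1}^{04}$ and refits them by least squares; it is a schematic of the procedure, not the analysis code.
\begin{verbatim}
# Toy extraction of r00^04 and r1-1^04 from the one-dimensional angular distributions.
import numpy as np
from scipy.optimize import curve_fit

def h_costheta(cost, norm, r00):
    return norm * 0.75 * ((1 - r00) + (3 * r00 - 1) * cost**2)

def h_phi(phi, norm, r1m1):
    return norm / (2 * np.pi) * (1 - 2 * r1m1 * np.cos(2 * phi))

rng = np.random.default_rng(1)
cost = np.linspace(-0.95, 0.95, 20)
phi  = np.linspace(0.05, 2 * np.pi - 0.05, 20)
y_cost = h_costheta(cost, 1000.0, 0.05) + rng.normal(0, 5, cost.size)  # assumed r00  = 0.05
y_phi  = h_phi(phi, 1000.0, 0.01) + rng.normal(0, 2, phi.size)         # assumed r1-1 = 0.01

popt_c, _ = curve_fit(h_costheta, cost, y_cost, p0=[900.0, 0.0])
popt_p, _ = curve_fit(h_phi, phi, y_phi, p0=[900.0, 0.0])
print("r00 =", popt_c[1], " r1-1 =", popt_p[1])
\end{verbatim}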
Assuming $s$-channel helicity conservation (SCHC), $r_{00}^{04}$ can be related to $R$, the ratio of the $\rho^0$ production cross sections for longitudinally and transversely polarised virtual photons~\cite{joos}: \begin{eqnarray} R=\frac{1}{\varepsilon} \frac{r_{00}^{04}} {1-r_{00}^{04} }, \label{R} \end{eqnarray} \noindent where $\varepsilon$ is the virtual photon polarisation, i.e. the ratio of the longitudinally to transversely polarised photon fluxes. The present data have $\varepsilon = 0.998$, essentially constant over the $W$ range covered by the measurement. Fig.~\ref{cost} shows the differential cross sections $d\sigma/d\cos \theta_h$ and $d\sigma/d\phi_h$. The curves are the results of the fits of the functions~(\ref{W1}) and~(\ref{W2}) to the data. A dominant $\sin^2\theta_h$ contribution is visible, indicating that the $\rho^0$ mesons are mostly transversely polarised. This is reflected by the result of the fit, which yields $r_{00}^{04}=0.055 \pm 0.028$, where the error is statistical only. The fitted value of $r_{1-1}^{04}$ is $ 0.008 \pm 0.014$, consistent with zero, as expected on the basis of SCHC. If one uses expression~(\ref{R}) to determine $R$ from the fitted value of $r_{00}^{04}$, one obtains $R=0.06 \pm 0.03$. The average value of $Q^2$ for the data discussed in this paper, computed assuming the $Q^2$ dependence given in~(\ref{crs}), is $\approx 0.1 M_{\rho}^2= 0.05$~GeV$^2$. This gives, using expression~(\ref{sigmal}), $R=0.1$, consistent with our result. \subsection{Total $\rho^0 p$ cross section} By using the vector dominance model and the optical theorem, the measured value of $d\sigma/dt$ at $t=0$ can be used to obtain the $\rho^0 p$ total cross section. For $\rho^0 p$ scattering the optical theorem reads: \begin{eqnarray} \left.\frac{ d\sigma_{\rho^0 p \rightarrow \rho^0 p}}{dt}\right|_{t=0}= \frac{1+\eta ^2}{16\pi} \sigma_{tot}^2(\rho^0 p), \label{optical} \end{eqnarray} \noindent where $\eta$ is the ratio of the real to the imaginary part of the forward $\rho^0 p$ scattering amplitude. We assume that $\eta=0$. On the other hand, within VDM, elastic $\rho^0$ photoproduction is related to the elastic $\rho^0 p$ cross section; in particular, for $t=0$: \begin{eqnarray} \left. \frac{d\sigma_{\gamma p \rightarrow \rho^0 p}}{dt}\right|_{t=0}= \frac{4 \pi \alpha}{f_{\rho}^2} \left. \frac{d\sigma_{\rho^0 p \rightarrow \rho^0 p}}{dt}\right |_{t=0}, \label{vmd1} \end{eqnarray} \noindent where $4 \pi \alpha/f_{\rho}^2$ is the probability for the $\gamma \rightarrow \rho^0$ transition. We take $f_{\rho}^2/4\pi=2.20$~(cf.~e.g.~\cite{bible}, p.~393). Using the intercept $d\sigma_{\gamma p \rightarrow \rho^0 p}/dt|_{t=0}= 133 \pm 11 \pm 27$~$\mu$b/GeV$^2$ given in section~\ref{t}, and combining equations~(\ref{vmd1}) and~(\ref{optical}), one finds $\sigma_{tot}(\rho^0 p)=28.0 \pm 1.2~(\mbox{stat.}) \pm 2.8~(\mbox{syst.})~\mbox{mb}$, where the errors reflect only the uncertainty on the measured value of the intercept. The systematic error does not include the uncertainties from the model dependence of the assumptions made nor from the values of $\eta$ and $4\pi/f_{\rho}^2$. The result is consistent with those found at lower energies (see e.g.~\cite{bible,nmc_quasi}). 
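As a numerical cross-check of the steps above (a sketch using the conversion constant $(\hbar c)^2 = 0.3894$~mb\,GeV$^2$; the variable names are ours):
\begin{verbatim}
# Sketch: sigma_tot(rho0 p) from dsigma/dt|_{t=0} via VDM (vmd1) and the optical
# theorem (optical), with eta = 0 assumed.
import numpy as np

dsdt_gamma  = 133e-3     # dsigma/dt|_{t=0} for gamma p -> rho0 p  [mb/GeV^2]
f2_over_4pi = 2.20       # f_rho^2 / (4 pi)
alpha       = 1 / 137.036
eta         = 0.0        # ratio of real to imaginary forward amplitude (assumed)
hbarc2      = 0.3894     # (hbar c)^2 in mb GeV^2

dsdt_rho  = f2_over_4pi / alpha * dsdt_gamma                          # eq. (vmd1)
sigma_tot = np.sqrt(16 * np.pi * dsdt_rho * hbarc2 / (1 + eta**2))    # eq. (optical)
print(sigma_tot)         # ~ 28 mb
\end{verbatim}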
It is also in agreement with the expected value of $27.8$~mb at $W=70$~GeV, obtained from the parametrisation~\cite{schuler} of $\sigma_{\rho^0 p}^{tot} \approx 1/2 [\sigma^{tot}(\pi^+ p) + \sigma^{tot}(\pi^- p)]=13.63 (W^2)^{0.0808} + 31.79 (W^2)^{- 0.4525}$, based on the soft pomeron, the additive quark model~\cite{aqm} and on fits~\cite{dl1} to $\pi p$ data at centre-of-mass energies ranging between $6$ and 25~GeV. \section{Summary} The ZEUS detector at HERA has been used to study the photoproduction process $\gamma p \rightarrow \rho^0 p$. The integrated cross section as well as the differential cross sections $d\sigma/dM_{\pi\pi}$ and $d\sigma/dt$ at an average photon-proton centre-of-mass energy $\langle W \rangle =$ 70~GeV have been measured. The $\pi \pi$ mass spectrum is skewed, compared to a relativistic Breit-Wigner distribution, as also observed at low energy. The integrated $\gamma p \rightarrow \rho^0 p$ cross section, for $60<W<80$~GeV, $|t|<0.5$~GeV$^2$ and $2 M_{\pi}< M_{\pi\pi} < M_{\rho}+5\Gamma_0$, is $14.7\pm 0.4~(\mbox{stat.}) \pm2.4~(\mbox{syst.})~\mu\mbox{b}$. This result, in conjunction with the measurements at low energy, is consistent with the weak energy dependence expected on the basis of Regge theory and a pomeron intercept of $1.08$ (the soft pomeron). This is at variance with the behaviour of the cross section for elastic photoproduction of $J/\psi$ mesons~\cite{zeus_psi} and elastic production of $\rho^0$ mesons for $Q^2>7$~GeV$^2$~\cite{zeus_hiq2rho}. The differential cross section $d\sigma/dt$ has an approximately exponential shape with a slope consistent with the results obtained by low energy experiments with $W>4$~GeV. This is also in accord with Regge theory, which expects a logarithmic dependence of the slope on $W$. The $\rho^0$ decay angular distributions have been studied. The $\rho^0$ mesons are mainly produced in a transversely polarised state; the probability to find the $\rho^0$ with longitudinal polarisation is $r_{00}^{04}=0.055\pm 0.028$. The value of $r_{1-1}^{04}$ is $0.008 \pm 0.014$, in accord with $s$-channel helicity conservation (SCHC). If SCHC is assumed, one obtains $R=0.06 \pm 0.03$ for the ratio of the $\rho^0$ production cross sections for longitudinally and transversely polarised virtual photons, consistent with the value expected if VDM is assumed. {}From the measured value of $d\sigma/dt$ at $t=0$, within the VDM framework and by using the optical theorem, a value $\sigma^{tot}_{\rho^0 p}=28.0 \pm 1.2~(\mbox{stat.}) \pm 2.8~(\mbox{syst.})~\mbox{mb}$ for the total $\rho^0p$ cross section at $W=70$~GeV is found, in agreement with extrapolations of the pion-proton total cross section data based on Regge theory and a soft pomeron. \section*{Acknowledgements} We thank the DESY Directorate for their strong support and encouragement. The remarkable achievements of the HERA machine group were essential for the successful completion of this work, and are gratefully appreciated. It is a pleasure to thank N.N.~Nikolaev, M.G.~Ryskin and A.~Sandacz for many enlightening discussions. We are also very grateful to K.~Kurek for his calculation of the radiative corrections.
\section{Introduction} \label{sec:introduction} Hyperk\"ahler manifolds occupy a special position at the intersection of Riemannian, symplectic and algebraic geometry. A hyperk\"ahler structure involves a Riemannian metric, as well as a triple of complex structures satisfying the quaternionic relations. Moreover we require that the metric is K\"ahler with respect to each complex structure, so we have a triple (in fact a whole two-sphere) of symplectic forms. Of course, there is no Darboux theorem in hyperk\"ahler geometry because the metric contains local information. However, many of the constructions and results of symplectic geometry, especially those related to moment maps, do have analogues in the hyperk\"ahler world. The prototype is the hyperk\"ahler quotient construction \cite{HKLR}, and more recent examples include hypertoric varieties \cite{BD} and cutting \cite{DSmod}. In this article we shall explore a hyperk\"ahler analogue of Guillemin, Jeffrey and Sjamaar's construction of \emph{symplectic implosion} \cite{GJS}. This may be viewed as an abelianisation procedure: given a symplectic manifold \( M \) with a Hamiltonian action of a compact group \( K \), the implosion \( M_{\textup{impl}} \) is a new symplectic space with an action of the maximal torus \( T \) of \( K \), such that the symplectic reductions of \( M_{\textup{impl}} \) by \( T \) agree with the reductions of \( M \) by \( K \). However the implosion is usually not smooth but is a singular space with a stratified symplectic structure. The implosion of the cotangent bundle \( T^*K \) acts as a universal object here; implosions of general Hamiltonian \( K \)-manifolds may be defined using the symplectic implosion \( (T^*K)_{\textup{impl}} \). This space also has an algebro-geometric description as the geometric invariant theory quotient of \( K_\C \) by a maximal unipotent subgroup~\( N \). In \cite{DKS} we introduced a hyperk\"ahler analogue of the universal implosion in the case of \( \Lie{SU}(n) \) actions. The construction proceeds via quiver diagrams, and produces a stratified hyperk\"ahler space \( Q \). The hyperk\"ahler strata can be described in terms of open sets in complex symplectic quotients of the cotangent bundle of \( K_\C=\Lie{SL}(n, \C) \) by subgroups containing commutators of parabolic subgroups. There is a maximal torus action, and hyperk\"ahler quotients by this action yield not single complex coadjoint orbits but rather their canonical affine completions which are Kostant varieties. In this article, we shall develop some of the ideas of \cite{DKS}, focusing on some aspects, such as toric geometry and gauge theory constructions, which may generalise to the case of an arbitrary compact group \( K \). In particular, we shall show the existence in the case \( K=SU(n) \) of a hypertoric variety inside the implosion, which has a natural description in terms of quivers. This is a hyperk\"ahler analogue of the result of \cite{GJS} that the universal symplectic implosion \( (T^*K)_{\textup{impl}} \) naturally contains the toric variety associated to a positive Weyl chamber for \( K \). The layout of the paper is as follows. In \S1 we review the theory of symplectic implosion described in \cite{GJS}, and in \S2 we recall how hyperk\"ahler implosion for \( K=\Lie{SU}(n) \) is introduced in \cite{DKS}. In \S3 we recall some of the theory of hypertoric varieties and describe a hypertoric variety which maps naturally to the universal hyperk\"ahler implosion \( Q \) for \( K=\Lie{SU}(n) \). 
In \S4 we recall the stratification given in \cite{DKS} of \( Q \) into strata which are hyperk\"ahler manifolds, and in \S5 we refine this stratification to obtain strata \( Q_{[\sim,\mathcal{O}]} \) which are not hyperk\"ahler but which reflect the group structure of \( K=\Lie{SU}(n) \) and can be indexed in terms of Levi subgroups and nilpotent orbits in the complexification \( K_\C \) of \( K \). In \S6, \S7 and \S8 we use Jordan canonical form to describe open subsets of the refined strata by putting their quivers into standard forms. Finally in \S9 we explore briefly the relationship between the finite-dimensional picture of the universal hyperk\"ahler implosion \( Q \) for \( K=\Lie{SU}(n) \) and an infinite-dimensional point of view involving the Nahm equations. \subsubsection*{Acknowledgements.} The work of the second author was supported by a Senior Research Fellowship of the Engineering and Physical Sciences Research Council (grant number GR/T016170/1) during much of this project. The third author is partially supported by the Danish Council for Independent Research, Natural Sciences. \section{Symplectic implosion} \label{sec:symplectic-implosion} Our study of hyperk\"ahler implosion in \cite{DKS} was motivated by the theory of symplectic implosion, due to Guillemin, Jeffrey and Sjamaar \cite{GJS}. For this we start with a symplectic manifold \( M \) with a Hamiltonian symplectic action of a compact Lie group \( K \) with maximal torus \( T \). If \( \lambda \) is a central element of \( \lie k^* \) the symplectic reduction \( M \symp_\lambda^s K \) is \( \mu^{-1}(\lambda)/K \) where \( \mu : M \rightarrow \lie k^* \) is the moment map for the action of \( K \) on \( M \). For a general element \( \lambda \in \lie k^* \), we define the symplectic reduction \( M \symp_\lambda^s K \) to be the space \( (M \times \mathsf O_{-\lambda}) \symp_0^s K \), where \( \mathsf O_\lambda \) is the coadjoint orbit of \( K \) through \( \lambda \) with the standard Kirillov-Kostant-Souriau symplectic structure. This reduction may be identified with \( \mu^{-1}(\lambda)/\Stab_K(\lambda) \) where \( \Stab_K(\lambda) \) is the stabiliser of \( \lambda \) under the coadjoint action of \( K \). The imploded space \( M_{\textup{impl}} \) is a stratified symplectic space with a Hamiltonian action of the maximal torus \( T \) of \( K \), such that \begin{equation} M \symp_\lambda^s K = M_{\textup{impl}} \symp^s_\lambda T \end{equation} for all \( \lambda \) in the closure \( \lie t_{+}^* \) of a fixed positive Weyl chamber in \( \lie t^* \). The key example is the implosion of the cotangent bundle \( T^*K \). Now \( T^*K \) carries a \( K \times K \) action, which we can think of as commuting left and right actions of \( K \). The left action is \( (k, \xi) \mapsto (hk,\xi) \) while the right action is \( (k, \xi) \mapsto (kh^{-1}, Ad(h).\xi) \). The moment maps for the left and right actions are \begin{equation*} (k, \xi) \mapsto -Ad(k).\xi \end{equation*} and \begin{equation*} (k, \xi) \mapsto \xi \end{equation*} respectively. We shall implode \( T^*K \) with respect to the right action. Explicitly, \( (T^*K)_{\textup{impl}} \) is obtained from \( K \times \lie t_{+}^* \), by identifying \( (k_1, \xi) \) with \( (k_2, \xi) \) if \( k_1, k_2 \) are related by the action of an element of the commutator subgroup of \( \Stab_{K}(\xi)\). 
Thus if \( \xi \) is in the interior of the chamber, its stabiliser is a torus and no collapsing occurs, and an open dense subset of \( (T^*K)_{\textup{impl}} \) is just the product of \( K \) with the interior of the Weyl chamber. Now symplectic reduction by the right action of \( T \) at level \( \lambda \) (in the closed positive Weyl chamber) will fix \( \xi \) to be \( \lambda \), and collapse by the product of \( T \) with the commutator subgroup of \( \Stab_{K}(\lambda) \), which is equivalent to collapsing by \( \Stab_{K}(\lambda) \). Now we have \begin{equation*} (T^*K)_{\textup{impl}}\symp_\lambda^s T = K/\Stab_{K}(\lambda)=\mathsf O_\lambda = (T^*K) \symp_{\lambda}^s K \end{equation*} as required. \( (T^*K)_{\textup{impl}} \) inherits a Hamiltonian \( K \times T \)-action from the Hamiltonian \( K \times K \)-action on \( T^*K \). This gives us a universal implosion, in the sense that the implosion \( M_{\textup{impl}} \) of a general symplectic manifold \( M \) with a Hamiltonian \( K \)-action can be obtained as the symplectic reduction \( (M \times (T^*K)_{\textup{impl}}) \symp_0^s K \). It is also shown in \cite{GJS} that the implosion \( (T^*K)_{{\textup{impl}}} \) may be embedded in the complex affine space \( E = \oplus V_{\varpi} \), where \( V_{\varpi} \) is the \( K \)-module with highest weight \( \varpi \), and we take the sum over a minimal generating set for the monoid of dominant weights. We denote a highest weight vector of \( V_{\varpi} \) by \( v_{\varpi} \). In this picture, the symplectic implosion may be realised as the closure \( \overline{K_\C v} \), where \( v = \sum v_\varpi \) is the sum of the highest weight vectors, and \( K_{\C} \) denotes the complexification of \( K \). In terms of the Iwasawa decomposition \( K_\C=KAN \) we have that the maximal unipotent subgroup \( N \) is the stabiliser of \( v \), so an open dense set in the implosion is \( K_\C v = K_{\C}/N \). Taking the closure gives lower-dimensional strata in the implosion, which may be identified with quotients \( K_{\C}/[P,P] \) where \( P \) ranges over parabolic subgroups of \( K_{\C} \). Of course, taking \( P \) to be the Borel \( B \) gives the top stratum \( K_{\C}/N = K_{\C} /[B,B] \). In fact the full implosion may be identified with the Geometric Invariant Theory (GIT) quotient of \( K_{\C} \) by the nonreductive group \( N \): \begin{equation*} K_\C \symp N = \Spec(\mathcal{O}(K_\C)^N). \end{equation*} This may also be viewed as the canonical affine completion of the quasi-affine variety \( K_\C / N \). (We refer to \cite{DK} for background on nonreductive GIT quotients.) Using the Iwasawa decomposition as above, and recalling that \( T_{\C}=TA \), we see that \( \overline{K_\C v} = \overline{KAv} = K (\overline{T_\C v}) \), the sweep under the compact group \( K \) of a toric variety \( X= \overline{T_\C v} \). As \( T_{\C} \) normalises \( N \), we have that \( N \) stabilises every point in \( X \); in fact \( X \) is the fixed point set \( E^N \) for the action of \( N \) on the vector space \( E \). The action of the compact torus \( T \) defines a moment map \( \mu_{T} : X \rightarrow \lie t^* \) whose image is (minus) \( \lie t_{+}^* \), so \( -\lie t_{+}^* \) is the Delzant polytope for the toric variety \( X \). Equation (6.6) in \cite{GJS} defines a \( T \)-equivariant map \( s : \lie t_{+}^* \rightarrow X \) which is a section for \( -\mu_{T} \).
The map \( s \) extends to a \( K \times T \)-equivariant map \( K \times \lie t_{+}^* \rightarrow KX \), which induces a homeomorphism from \( (T^*K)_{{\textup{impl}}} \) onto \( \overline{K_\C v} \). Recall that the moment map for the left \( K \) action on \( T^*K \) is \begin{equation*} \mu_{K} : (k, \xi) \mapsto -Ad(k).\xi \end{equation*} Note that two points \( (k_1, \xi), (k_2, \xi) \) in \( T^*K \) with the same \( \lie k^* \) coordinate have the same image under \( \mu_{K} \) if and only if \( k_1 k_2^{-1} \in {\rm Stab}_{K}(\xi) \). In particular two points of \( K \times \lie t^* \) which are identified in the implosion will have the same image under \( \mu_K \), so this map descends to the implosion. We have a commutative diagram \smallskip \begin{equation*} \begin{array}{ccccc} (T^*K)_{{\textup{impl}}} & \stackrel{\mu_{K}}{\longrightarrow} & \lie k^* & \rightarrow & \lie k^*/K \\ { \downarrow} & & & & || \\ X & \stackrel{-\mu_{T}}{\longrightarrow} & \lie t^* & \rightarrow & \lie t^* / W \end{array} \end{equation*} where the left vertical arrow is induced by \( (k,\xi) \mapsto s(\xi) \), and the rightmost arrow in each row is the obvious quotient map. \medskip In \cite{DKS} we introduced a new model for the symplectic implosion for \( K=SU(n) \), in terms of \emph{symplectic quivers}. These are diagrams \begin{equation} \label{eq:symplectic} 0 = V_0 \stackrel{\alpha_0}{\rightarrow} V_1 \stackrel{\alpha_1}{\rightarrow} V_2 \stackrel{\alpha_2}{\rightarrow} \dots \stackrel{\alpha_{r-2}}{\rightarrow} V_{r-1} \stackrel{\alpha_{r-1}}{\rightarrow} V_r = \C^n \end{equation} where \( V_i \) is a vector space of dimension \( n_i \). The group \( \prod_{i=1}^{r-1} \Lie{SL}(V_i) \) acts on quivers by \begin{align*} \alpha_i &\mapsto g_{i+1} \alpha_i g_i^{-1} \quad (i = 1,\dots, r-2),\\ \alpha_{r-1} &\mapsto \alpha_{r-1} g_{r-1}^{-1}. \end{align*} There is also of course a commuting action of \( \Lie{GL}(n,\C) = \Lie{GL}(V_r) \) by left multiplication of \( \alpha_{r-1} \). We considered the GIT quotient of the space of quivers by \( \prod_{i=1}^{r-1} \Lie{SL}(V_i) \), focusing particularly on the full flag case when \( n_i =i \) for all \( i \). It turns out that such a quiver lies in a closed orbit if and only if, for each \( i \) we have \begin{compactenum} \item \( \alpha_i \) is injective, or \item \( V_i = \im\alpha_{i-1} \oplus \ker\alpha_i \). \end{compactenum} We may now decompose \( \C^i = \ker \alpha_i \oplus \C^{m_i} \), where \( \C^{m_i} = \C^i \) if \( \alpha_i \) is injective and we take \( \C^{m_i} = \im \alpha_{i-1} \) otherwise. This defines a decomposition of the quiver into two subquivers; for one subquiver the maps are all injective while for the other they are all zero. We may therefore focus on the injective quiver. As explained in \S 4 of \cite{DKS}, we may contract any edges of this quiver where the maps are isomorphisms. More precisely, if \( m_i = m_{i-1} \) then we have \( m_i \leq i-1 < i \), so we actually have a \( GL(m_i) \) action on \( \C^{m_i} \) and the isomorphism \( \C^{m_{i-1}} \rightarrow \C^{m_i} \) may be set to be the identity, so this edge of the quiver may be removed. After this process the dimensions of the spaces in the injective quiver are given by a strictly increasing sequence of integers ending with \( n \). The upshot is that we have a stratification of the GIT quotient by \( \prod_{i=2}^{n-1} \Lie{SL}(i) \) of the space of full flag quivers.
There are \( 2^{n-1} \) strata, indexed by the strictly increasing sequences of positive integers ending with \( n \), or equivalently by the ordered partitions of \( n \). Moreover, the injectivity property makes it easy to analyse the structure of each stratum. For we may now use the action of \( \prod_{i=2}^{n-1} \Lie{SL}(i) \) and \( SL(n) \) to put the \( \alpha_i \) into a standard form where all entries are zero except for the \( (j,j) \) entries (\( j=1,\ldots, m_i) \), which equal \( 1 \). The freedom involved in putting the \( \alpha_i \) into this standard form is exactly an element of the commutator \( [P,P] \), where \( P \) is the parabolic subgroup of \( \Lie{SL}(n) \) corresponding to the ordered partition of \( n \). We conclude that the strata can be identified with \( \Lie{SL}(n)/[P,P] \). In fact, the full GIT quotient may be identified with the symplectic implosion for \( SU(n) \) and the strata are just the strata of the implosion discussed above. We may also realise the toric structure discussed above in this model. For we can instead put \( \alpha_i \) into a slightly different standard form where the \( (j,j) \) entries (\( j=1,\ldots, m_i) \) can now equal a nonzero scalar \( \sigma_i \), not necessarily \( 1 \). This standard form is now preserved by an element of the parabolic \( P \), and the \( \sigma_i \) define an algebraic torus of dimension given by the length of the injective quiver (equivalently, we are considering the fibration \( T^{r-1}_{\C} \rightarrow SL(n)/[P,P] \rightarrow SL(n)/P \) for each stratum). These tori fit together to form the toric variety. Now the generalised flag variety \( SL(n)/P \) is a homogeneous space for the compact group \( SU(n) \), so we see again that the sweep of the toric variety under the \( SU(n) \) action is the full implosion. Alternatively, we may allow the entries \( \sigma_i \) to depend on \( j \) as well as \( i \). Now we have an action on such configurations of the product of the maximal tori of \( SL(m_i) \), and this action may be used to bring the quiver into the form above where \( \sigma_i \) depends only on \( i \). \section{Hyperk\"ahler implosion} \label{sec:towards-hyperk-impl} In \cite{DKS} a hyperk\"ahler analogue of the symplectic implosion was introduced for the group \( K= SU(n) \). Motivated by the quiver model for symplectic implosion described in the preceding section, we look at quiver diagrams of the following form: \begin{equation*} 0 = V_0\stackrel[\beta_0]{\alpha_0}{\rightleftarrows} V_1\stackrel[\beta_1]{\alpha_1}{\rightleftarrows} V_2\stackrel[\beta_2]{\alpha_2}{\rightleftarrows}\dots \stackrel[\beta_{r-2}]{\alpha_{r-2}}{\rightleftarrows}V_{r-1} \stackrel[\beta_{r-1}]{\alpha_{r-1}}{\rightleftarrows} V_r = \C^n \end{equation*} where \( V_i \) is a complex vector space of complex dimension \( n_i \) and \( \alpha_0 = \beta_0 =0 \). The space \( M \) of quivers for fixed dimension vector \( (n_1, \dots, n_r) \) is a flat hyperk\"ahler vector space. There is a hyperk\"ahler action of \( \Lie{U}(n_1) \times \dots \times \Lie{U}(n_r) \) on this space given by \begin{equation*} \alpha_i \mapsto g_{i+1} \alpha_i g_i^{-1},\quad \beta_i \mapsto g_i \beta_i g_{i+1}^{-1} \qquad (i=1,\dots r-1), \end{equation*} with \( g_i \in \Lie{U}(n_i) \) for \( i=1, \dots, r \). Let \( \tilde{H} \) be the subgroup, isomorphic to \( \Lie{U}(n_1) \times \dots \times \Lie{U}(n_{r-1}) \), given by setting \( g_r=1 \), and let \(H = SU(n_1) \times \dots \times SU(n_{r-1}) \leqslant \tilde{H}\). 
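The quiver description above is concrete enough to be checked by machine. The following minimal sketch (Python with NumPy; the function names, tolerances and example data are ours, purely for illustration) builds a full flag quiver in the standard form just described for a given strictly increasing sequence (equivalently, ordered partition), tests the closed-orbit criterion, and reads off the dimensions \( m_i \) of the injective subquiver, hence the stratum:
\begin{verbatim}
import numpy as np

def standard_form_quiver(n, seq, sigma):
    """Full flag quiver alpha_i : C^i -> C^{i+1}, i = 1,...,n-1, in standard form:
    alpha_i carries the scalar sigma[i-1] in its first m_i diagonal entries and is
    zero elsewhere, where m_i is the largest term of the increasing sequence seq
    (ending in n) that is <= i, or 0 if there is none."""
    alphas = []
    for i in range(1, n):
        m_i = max([k for k in seq if k <= i], default=0)
        a = np.zeros((i + 1, i), dtype=complex)
        for j in range(m_i):
            a[j, j] = sigma[i - 1]
        alphas.append(a)
    return alphas

def _orth(A, tol=1e-10):
    """Orthonormal basis of the column space of A (via SVD)."""
    if A.shape[1] == 0:
        return A
    u, s, _ = np.linalg.svd(A)
    return u[:, : int((s > tol).sum())]

def _ker(A, tol=1e-10):
    """Orthonormal basis of the kernel of A (via SVD)."""
    _, s, vh = np.linalg.svd(A)
    return vh[int((s > tol).sum()):].conj().T

def injective_dims(alphas, tol=1e-10):
    """Check the closed-orbit criterion (alpha_i injective, or
    V_i = im(alpha_{i-1}) (+) ker(alpha_i)) and return m_1,...,m_{n-1};
    return None if some alpha_i violates the criterion."""
    dims = []
    for i in range(1, len(alphas) + 1):
        a = alphas[i - 1]                       # alpha_i : C^i -> C^{i+1}
        if np.linalg.matrix_rank(a, tol) == i:
            dims.append(i)                      # alpha_i injective, so m_i = i
            continue
        prev = alphas[i - 2] if i >= 2 else np.zeros((1, 0), dtype=complex)
        im_basis = _orth(prev, tol)
        basis = np.hstack([im_basis, _ker(a, tol)])
        if basis.shape[1] != i or np.linalg.matrix_rank(basis, tol) != i:
            return None                         # the orbit is not closed
        dims.append(im_basis.shape[1])          # m_i = dim im(alpha_{i-1})
    return dims

# Example: n = 3 and the increasing sequence (2, 3), i.e. the ordered partition 3 = 2 + 1.
alphas = standard_form_quiver(3, [2, 3], sigma=[1.0, 2.0])
print(injective_dims(alphas))                   # [0, 2]: the injective subquiver has dims 2 < 3
\end{verbatim}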
\begin{definition} \label{defQ} The \emph{universal hyperk\"ahler implosion for \( \Lie{SU}(n) \)} will be the hyperk\"ahler quotient \( Q = M {\sslash\mkern-6mu/} H \), where \( M, H \) are as above with \( r=n \) and \( n_j =j \), \( (j=1, \dots, n) \), (that is, the case of full flag quivers). \end{definition} The hyperk\"ahler moment map equations for the \( H \)-action are (in the full flag case) \begin{align} \label{eq:mmcomplex} \alpha_i \beta_i - \beta_{i+1} \alpha_{i+1} &= \lambda^\C_{i+1} I \qquad (0 \leqslant i \leqslant n-2) \end{align} where \( \lambda^\C_i \in \C \) for \( 1 \leqslant i \leqslant n-1 \), and \begin{equation} \label{eq:mmreal} \alpha_i \alpha_i^* - \beta_i^* \beta_i + \beta_{i+1} \beta_{i+1}^* - \alpha_{i+1}^* \alpha_{i+1} = \lambda^{\mathbb R}_{i+1} I \quad (0 \leqslant i \leqslant n-2), \end{equation} where \( \lambda^{\mathbb R}_i \in {\mathbb R}\) for \( 1 \leqslant i \leqslant n-1 \). Now \( Q \) has a residual action of \( (S^1)^{n-1} =\tilde{H}/H \) as well as an action of \( \Lie{SU}(n_r) = \Lie{SU}(n) \). In \S4 we will identify \( (S^1)^{n-1} \) with \( T \), the maximal torus of \( SU(n) \). There is also an \( Sp(1)=\Lie{SU}(2) \) action which is not hyperk\"ahler but rotates the complex structures. Using the standard theory relating symplectic and GIT quotients, we have a description of \( Q=M {\sslash\mkern-6mu/} H \), as the quotient (in the GIT sense) of the subvariety defined by the complex moment map equations (\ref{eq:mmcomplex}) by the action of \begin{gather} H_\C = \prod_{i=1}^{n-1}\Lie{SL}(n_i, \C) \notag \\ \label{SLaction1} \alpha_i \mapsto g_{i+1} \alpha_i g_i^{-1}, \quad \beta_i \mapsto g_i \beta_i g_{i+1}^{-1} \qquad (i=1,\dots n-2),\\ \label{SLaction2} \alpha_{n-1} \mapsto \alpha_{n-1} g_{n-1}^{-1}, \quad \beta_{n-1} \mapsto g_{n-1} \beta_{n-1}, \end{gather} where \( g_i \in \Lie{SL}(n_i, \C) \). We introduce the element \( X = \alpha_{n-1} \beta_{n-1} \in \Hom (\C^n, \C^n) \), which is invariant under the action of \( \prod_{i=1}^{n-1}\Lie{GL}(n_i,\C) \) and transforms by conjugation under the residual \( \Lie{SL}(n,\C)=\Lie{SL}(n_r,\C) \) action on \( Q \). We thus have a \( T_{\C} \)-invariant and \( \Lie{SL}(n,\C) \)-equivariant map \( Q \rightarrow \lie{sl}(n,\C) \) given by: \begin{equation*} (\alpha, \beta) \mapsto X - \frac1n \tr(X) I_n \end{equation*} where \( I_n \) is the \( n\times n \)-identity matrix. In fact this is the complex moment map for the residual \( SU(n) \) action on \( Q \). It is shown in \cite{DKS} that \( X \) satisfies an equation \( X(X+ \nu_1) \dots (X+ \nu_{n-1})=0 \) where \( \nu_i = \sum_{j=i}^{n-1} \lambda_j \). This generalises the equation \( X^n =0 \) in the quiver construction of the nilpotent variety in \cite{KS}. In general it is useful to compare our construction with that in \cite{KS}. There one performs a hyperk\"ahler quotient by \( \tilde{H} \), rather than \( H \), so all \( \lambda_i \) are zero. In our situation the \( \lambda_i \) are not constrained to be zero, and in fact give the value of the complex moment map for the residual \( T \) action on \( Q \). In \cite{DKS} we first analysed the points of the implosion that give closed orbits for the \( T_{\C} \) action, or equivalently, quivers that satisfy the equations (\ref{eq:mmcomplex}) and give closed orbits for the action of \( \tilde{H}_{\C} \) as well as \( H_\C \). 
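As an illustration of the equation satisfied by \( X \), the following minimal numerical sketch (Python with NumPy; the data are randomly generated and the construction is ours, purely for illustration) produces a full flag quiver with \( n=3 \) satisfying the complex moment map equations (\ref{eq:mmcomplex}) and checks that \( X(X+\nu_1)(X+\nu_2)=0 \) holds up to rounding, where \( \nu_i = \lambda^\C_i + \dots + \lambda^\C_{n-1} \):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
alpha2 = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))   # alpha_2 : C^2 -> C^3
beta2 = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))    # beta_2 : C^3 -> C^2

# Choose lambda_2 = -(an eigenvalue of beta_2 alpha_2), so that beta_2 alpha_2 + lambda_2 I
# has rank one and can be factored as alpha_1 beta_1; this is exactly the complex moment
# map equation at the middle vertex of the quiver.
lam2 = -np.linalg.eigvals(beta2 @ alpha2)[0]
A = beta2 @ alpha2 + lam2 * np.eye(2)
u, s, vh = np.linalg.svd(A)
alpha1 = s[0] * u[:, :1]                  # alpha_1 : C^1 -> C^2
beta1 = vh[:1, :]                         # beta_1  : C^2 -> C^1
lam1 = -(beta1 @ alpha1)[0, 0]            # complex moment map equation at the first vertex

# residual of the middle-vertex equation alpha_1 beta_1 - beta_2 alpha_2 = lambda_2 I
print(np.linalg.norm(alpha1 @ beta1 - beta2 @ alpha2 - lam2 * np.eye(2)))   # numerically zero

X = alpha2 @ beta2                        # complex moment map of the SU(3) action, up to trace
nu1, nu2 = lam1 + lam2, lam2
I3 = np.eye(3)
print(np.linalg.norm(X @ (X + nu1 * I3) @ (X + nu2 * I3)))                  # numerically zero
\end{verbatim}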
Such quivers can be split into a sum of a quiver with \( \alpha_i \) injective and \( \beta_i \) surjective, and a collection of quivers where the non-zero maps are isomorphisms. In general one must consider quivers that satisfy (\ref{eq:mmcomplex}) and give a closed orbit for the \( H_{\C} \) action but not necessarily for the \( \tilde{H}_{\C} \) action. However for each such quiver we may rotate complex structures so that the closed orbit condition is actually satisfied for a larger subgroup of \( \tilde{H}_{\C} \). In this way we obtain a stratification for the implosion. Using the methods that appeared in the analysis of the symplectic implosion, we described in \cite{DKS} \S7 the strata for the hyperk\"ahler implosion in terms of complex-symplectic quotients of \( T^*SL(n,\C) \) by extensions of abelian groups by commutators of parabolics. In more detail, we can follow the argument in the symplectic case to standardise the surjective maps \( \beta_i \) as \( (0 | I) \). The equations (\ref{eq:mmcomplex}) now enable us to find \( \alpha_i \) in terms of \( \alpha_{i+1} \) and \( \lambda_{i+1}^{\C} \). Now knowledge of (the tracefree part of) \( X =\alpha_{n-1} \beta_{n-1} \), together with the equations (\ref{eq:mmcomplex}), enables us to work down the quiver inductively determining all the \( \alpha_i \). Further details of some of these arguments are given in \S 5 and \S 6, as well as in \cite{DKS}. The universal hyperk\"ahler implosion \( Q \) contains an open set which may be identified with \( SL(n,\C) \times_{N} \lie b \), the complex-symplectic quotient of \( T^* SL(n,\C) \) by the maximal unipotent \( N \). This arises as the locus of full flag quivers with all \( \beta_i \) surjective. The full implosion \( Q \) may in fact be identified with the non-reductive GIT quotient (\( SL(n, \C) \times \lie b) \symp N \). The hyperk\"ahler torus quotients of \( Q \) can be identified for any fixed complex structure with the complex-symplectic reductions of \( Q \) by the complexified torus \( T_{\C} \) in the sense of GIT. That is, we take the GIT quotients with respect to \( T_{\C} \) of the level sets of the complex moment map for the action of \( T \) on \( Q \). These complex-symplectic reductions give us the Kostant varieties which are the subsets of \( \lie{sl}(n, \C) \) obtained by fixing the eigenvalues. In particular torus reduction at level \( 0 \) gives the nilpotent variety. If, by contrast, we take the geometric (rather than GIT) complex-symplectic reduction at level \( 0 \) of \( SL(n, \C) \times_{N}\lie b \), we obtain \( (SL(n,\C) \times_{N} \lie n)/T_{\C} \), which is the Springer resolution \( SL(n, \C) \times_{B} \lie n \) of the nilpotent variety. As in the symplectic case, the hyperk\"ahler implosion is usually a singular stratified space. In fact the symplectic implosion may be realised as the fixed point set of a circle action on the hyperk\"ahler implosion, so if the latter is smooth then so is the former, which implies by results of \cite{GJS} that \( K \) is, up to covers, a product of copies of \( SU(2) \). If \( K=SU(2) \) the implosion is just flat \( {\mathbb H}^2 \). \section{Hypertoric varieties} Classical toric varieties arise as symplectic quotients of \( \C^d \) by a subtorus of \( (S^1)^d \), and have a symplectic action of a compact torus \( T \) whose real dimension is half that of the toric variety \cite{G}, \cite{De}. 
The image of the toric variety under the associated moment map is called the Delzant polytope, and the toric variety is determined up to \( T \)-equivariant isomorphism by \( T \) and this polytope in \( \lie{t}^* \). We recall that a \emph{hypertoric} (or \emph{toric hyperk\"ahler}) variety is, by analogy, obtained as a hyperk\"ahler quotient of flat quaternionic space \( {\mathbb H}^d \) by a subtorus \( \bf N \) of \( T^d \). If the subtorus is of codimension \( n \) in \( T^d \), the associated hypertoric variety \( {\mathbb H}^d {\sslash\mkern-6mu/} {\bf N} \) has real dimension \( 4n \) and has a hyperk\"ahler action of \( T^n \cong T^d/{\bf N} \). The hyperk\"ahler moment map for this action is a surjection onto \( {\mathbb R}^{3n} \) and much of the geometry of the hypertoric is encoded in a collection of codimension 3 affine subspaces (the \emph{flats}) in \( {\mathbb R}^{3n} \). These play in some respects a role analogous to that of the hyperplanes giving the faces of the Delzant polytope for classical toric varieties. In particular the fibre of the moment map over a point in \( {\mathbb R}^{3n} \) is a torus determined by the collection of flats passing through that point. We refer the reader to \cite{BD}, \cite{HS} for further background on hypertorics. We want to relate the hyperk\"ahler implosion \( Q \) to the hypertoric variety associated to the arrangement of flats induced by the hyperplane arrangement given by the root planes in the Lie algebra \( \lie{t} \) of the maximal torus \( T \) of \( K=\Lie{SU}(n) \). \begin{definition} \label{defhypertoric} Let \( M_T \) be the subset of \( M \) consisting of all hyperk\"ahler quivers of the form \begin{equation*} \alpha_k = \left( \begin{array}{ccccc} \nu_1^k & 0 & 0 & \cdots & 0\\ 0 & \nu_2^k & 0 & \cdots & 0\\ & & \cdots & & \\ 0 & \cdots & 0 & 0 & \nu_k^k\\ 0 & \cdots & 0 & 0 & 0 \end{array} \right) \end{equation*} and \begin{equation*} \beta_k = \left( \begin{array}{cccccc} \mu_1^k & 0 & 0 & 0 & \cdots & 0\\ 0 & \mu_2^k & 0 & 0 & \cdots & 0\\ & & \cdots & & & \\ 0 & \cdots & 0 & 0 & \mu_k^k & 0 \end{array} \right) \end{equation*} for some \( \nu^k_i, \mu_i^k \in \C \). Recall from \cite{DKS} that we use \( M^{{\textup{hks}}} \) to denote the set of \emph{hyperk\"ahler stable} quivers, that is, those that after an appropriate rotation of complex structures have all \( \alpha_i \) injective and all \( \beta_i \) surjective. Let \( M_T^{{\textup{hks}}} = M^{{\textup{hks}}} \cap M_T \) be the subset of \( M^{{\textup{hks}}} \) consisting of all hyperk\"ahler quivers of the form above; thus \( M_T^{{\textup{hks}}} \) consists of all quivers of the form above such that \( \mu_i^k \) and \( \nu_i^k \) are not simultaneously zero for any pair \( (i,k) \) with \( 1 \leqslant i \leqslant k < n \). Note that each of the compositions \( \alpha_k \beta_k \), \( \beta_k\alpha_k \), \( \alpha_k \alpha_{k}^* \), \( \alpha_{k}^*\alpha_k \), \( \beta_k\beta_{k}^* \) and \( \beta_{k}^*\beta_k \) is a diagonal matrix, so that for quivers of this form the hyperk\"ahler moment map equations for the action of \( H = \prod_{k=1}^{n-1} \Lie{SU}(k) \) reduce to the hyperk\"ahler moment map equations for the action of its maximal torus \begin{equation*} T_H = \prod_{k=1}^{n-1} T_k \end{equation*} where \( T_k \) is the standard maximal torus in \( \Lie{SU}(k) \). 
Moreover two hyperk\"ahler stable quivers of this form satisfying the hyperk\"ahler moment map equations lie in the same orbit for the action of \( H \) if and only if they lie in the same orbit for the action of its maximal torus \( T_H \). Thus we get a natural map \begin{equation*} \iota:M_T {\sslash\mkern-6mu/} T_H \to Q = M{\sslash\mkern-6mu/} H \end{equation*} which restricts to an embedding \begin{equation*} \iota: Q^{{\textup{hks}}}_T \to Q \end{equation*} where \( Q^{{\textup{hks}}}_T = M_T^{{\textup{hks}}} {\sslash\mkern-6mu/} T_H \). \end{definition} \begin{remark} \label{remhypertoric} Note that \( M_T = \bigoplus_{k=1}^{n-1} \mathbb{H}^k = \mathbb{H}^{n(n-1)/2} \) is a (flat) hypertoric variety with respect to the action of the standard maximal torus \( T_{\tilde{H}} = (S^1)^{n(n-1)/2} \) of \( \tilde{H} = \prod_{k=1}^{n-1} \Lie{U}(k) \). The associated arrangement of flats in \( {\mathbb R}^{3n(n-1)/2} = {\mathbb R}^3 \otimes {\mathbb R}^{n(n-1)/2} \) is just that induced by the hyperplane arrangement given by the coordinate hyperplanes in \( \lie{t}_{\tilde{H}} = {\mathbb R}^{n(n-1)/2} \). Thus \( M_T {\sslash\mkern-6mu/} T_H \) is a hypertoric variety for the induced action of \begin{equation*} T_{\tilde{H}}/T_H = \prod_{k=1}^{n-1} \Lie{U}(k)/\Lie{SU}(k) = (S^1)^{n-1}. \end{equation*} Moreover we can identify \( T_{\tilde{H}}/T_H \) with the standard maximal torus \( T \) of \( K=\Lie{SU}(n) \) in such a way that the induced action of \( T_{\tilde{H}}/T_H \) on \( Q^{{\textup{hks}}}_T \) coincides with the restriction to \( T \) of the action of \( K \) on \( Q^{{\textup{hks}}}_T \) embedded in \( Q=M {\sslash\mkern-6mu/} H \) as above, since \begin{gather*} \left( \begin{array}{cccccc} t_1^k & 0 & 0 & \cdots & 0 & 0\\ 0 & t_2^k & 0 & \cdots & 0 & 0\\ & & \cdots & & &\\ 0 & \cdots & 0 & 0 & t_{k}^{k} & 0 \\ 0 & \cdots & 0 & 0 & 0 & t_{k+1}^k \end{array} \right) \left( \begin{array}{ccccc} \nu_1^k & 0 & 0 & \cdots & 0\\ 0 & \nu_2^k & 0 & \cdots & 0\\ & & \cdots & & \\ 0 & \cdots & 0 & 0 & \nu_k^k\\ 0 & \cdots & 0 & 0 & 0 \end{array} \right)\\ = \left( \begin{array}{ccccc} \nu_1^k & 0 & 0 & \cdots & 0\\ 0 & \nu_2^k & 0 & \cdots & 0\\ & & \cdots & & \\ 0 & \cdots & 0 & 0 & \nu_k^k\\ 0 & \cdots & 0 & 0 & 0 \end{array} \right) \left( \begin{array}{cccccc} t_1^k & 0 & 0 & \cdots & 0 \\ 0 & t_2^k & 0 & \cdots & 0 \\ & & \cdots & & \\ 0 & \cdots & 0 & 0 & t_{k}^k \end{array} \right) \end{gather*} and \begin{gather*} \left( \begin{array}{ccccc} t_1^k & 0 & 0 & \cdots & 0\\ 0 & t_2^k & 0 & \cdots & 0\\ & & \cdots & & \\ 0 & \cdots & 0 & 0 & t_k^k \end{array} \right) \left( \begin{array}{cccccc} \mu_1^k & 0 & 0 & 0 & \cdots & 0\\ 0 & \mu_2^k & 0 & 0 & \cdots & 0\\ & & \cdots & & & \\ 0 & \cdots & 0 & 0 & \mu_k^k & 0 \end{array} \right) \\ = \left( \begin{array}{cccccc} \mu_1^k & 0 & 0 & 0 & \cdots & 0\\ 0 & \mu_2^k & 0 & 0 & \cdots & 0\\ & & \cdots & & & \\ 0 & \cdots & 0 & 0 & \mu_k^k & 0 \end{array} \right) \left( \begin{array}{cccccc} t_1^k & 0 & 0 & \cdots & 0 & 0\\ 0 & t_2^k & 0 & \cdots & 0 & 0\\ & & \cdots & & &\\ 0 & \cdots & 0 & 0 & t_k^k & 0\\ 0 & \cdots & 0 & 0 & 0 & t_{k+1}^k \end{array} \right) \end{gather*} for any \( \nu^k_i, \mu^k_i \) and \( t^k_i \) in \( \C \). 
Here if \( (s_1,\ldots,s_{n-1}) \) are the standard coordinates on the Lie algebra \( {\mathbb R}^{n-1} \) of \( (S^1)^{n-1} \) and \( (\tau_1, \ldots, \tau_n) \) are the standard coordinates of the Lie algebra \( {\mathbb R}^n \) of the maximal torus of \( \Lie{U}(n) \) consisting of the diagonal matrices, then we identify \( {\mathbb R}^{n-1} \) with the subspace of \( {\mathbb R}^n \) defined by \( \tau_1 + \cdots + \tau_n = 0 \) via the relationship \( s_j = \tau_{j+1} + \cdots + \tau_n \) for \( 1 \leq j \leq n-1 \). With respect to this identification, \( M_T {\sslash\mkern-6mu/} T_H \) becomes the hypertoric variety for \( T \) associated to the hyperplane arrangement in its Lie algebra \( \lie{t} \) given by the root planes. \end{remark} \section{Stratifying the universal hyperk\"{a}hler implosion into hyperk\"{a}hler strata} The universal hyperk\"{a}hler implosion \( Q = M {\sslash\mkern-6mu/} H \) for \( K = \Lie{SU}(n) \) is a singular space with a stratification into locally closed hyperk\"{a}hler submanifolds \( Q_{(S,\delta)} \) (cf.\, \cite{DKS} Theorem 6.15). These strata \( Q_{(S,\delta)} \) can be indexed by subsets \begin{equation*} S = \{(i_1,j_1), (i_2,j_2), \ldots, (i_p,j_p)\} \end{equation*} of \( \{1,\ldots,n\} \times \{1,\ldots,n\} \) with \( i_1, \dots, i_p \) distinct and \( j_1 < j_2 < \dots < j_p \), and sequences \( \delta=(d_1,\dots,d_p) \) of strictly positive integers such that if \( 1 \leqslant k \leqslant n \) then \begin{equation*} m_k = k - \sum_{\substack{h:\\ 1 \leqslant h \leqslant p\\ i_h \leqslant k < j_h}} d_h \end{equation*} satisfies \( 0 = m_0 \leqslant m_1 \leqslant \dots \leqslant m_n = n \). The open stratum \( Q_{(\emptyset,\emptyset)}= Q^{{\textup{hks}}} \), which is indexed by the empty set \( S = \emptyset \) and the empty sequence \( \delta=\emptyset \), consists of those elements of \( Q = M {\sslash\mkern-6mu/} H \) represented by hyperk\"ahler stable quivers. More generally, for any \( S \) and \( \delta \) as above, the stratum \( Q_{(S,\delta)} \) is the image of a hyperk\"ahler embedding into \( Q \) of a hyperk\"ahler modification \( \hat{Q}_1^{{\textup{hks}}} \) (in the sense of Definition \ref{hkmodification} below, following \cite{DSmod}) of the open subset \( Q_1^{{\textup{hks}}} \) represented by hyperk\"ahler quivers in the hyperk\"ahler quotient \begin{equation*} Q_1 = M_1 {\sslash\mkern-6mu/} H_S \end{equation*} where \( M_1 \) is the space of quivers of the form \begin{equation} \label{quiverdagger} 0 \stackrel[\beta_0]{\alpha_0}{\rightleftarrows} \C^{m_1}\stackrel[\beta_1]{\alpha_1}{\rightleftarrows} \C^{m_2}\stackrel[\beta_2]{\alpha_2}{\rightleftarrows}\dots \stackrel[\beta_{n-2}]{\alpha_{n-2}}{\rightleftarrows} \C^{m_{n-1}} \stackrel[\beta_{n-1}]{\alpha_{n-1}}{\rightleftarrows} \C^{m_n} = \C^n \end{equation} and \( H_S \) is the subgroup of \begin{equation*} \prod_{k=1}^{n-1} \Lie{U}(m_k) \leqslant \tilde{H} = \prod_{k=1}^{n-1} \Lie{U}(k) \end{equation*} defined as follows. \begin{definition} \label{defHs} To any set \( S \) of pairs \( (i,j) \) with \( i,j \in \{1,\dots,n\} \) we can associate a subtorus \( T_S \) of \( {T} = (S^1)^{n-1} \) such that the Lie algebra of \( T_S \) is generated by the vectors \( e_{ij}= (0, \dots,0, 1, 1,\dots,1, 0, \dots,0) \), which have \( 1 \) in places \( i,\dots, j-1 \) and zero elsewhere, where \( i,j \) range over all pairs \( (i,j) \in S \) with \( i < j \). 
Now consider the short exact sequence \begin{equation*} 1 \rightarrow H=\prod_{k=1}^{n-1} \Lie{SU}(m_k) \rightarrow \prod_{k=1}^{n-1} \Lie{U}(m_k) \stackrel{\phi}{\rightarrow} T \rightarrow 1, \end{equation*} where \( \phi \) is the obvious product of determinant maps, and define \( H_S \) to be the preimage \begin{equation*} H_S = \phi^{-1} (T_S) \end{equation*} of \( T_S \) in \( \prod_{k=1}^{n-1} \Lie{U}(m_k) \). \end{definition} \begin{definition} \label{hkmodification} Motivated by the concept of hyperk\"ahler modification introduced in \cite{DSmod}, we define \( \hat{Q}_1^{{\textup{hks}}} \) as \begin{equation*} \hat{Q}_1^{{\textup{hks}}} = (Q_1^{{\textup{hks}}} \times (\mathbb{H} \setminus \{ 0\}) ^\ell) {\sslash\mkern-6mu/} (S^1)^\ell. \end{equation*} Here \( \ell = |L| \) is the size of the set \begin{equation*} L= \{(h,k): 1 \leqslant h \leqslant p, \,\, i_h \leqslant k < j_h -1 \}. \end{equation*} The action of \( (S^1)^\ell \) on \( (\mathbb{H} \setminus \{ 0\}) ^\ell \) is the standard one, while the action of \( (S^1)^\ell \) on \( Q_1 \) is given by the homomorphism \begin{equation*} (S^1)^\ell \to {T} = (S^1)^{n-1} \end{equation*} whose restriction to the copy of \( S^1 \) in \( (S^1)^\ell \) labelled by \( (h,k) \in L \) sends the standard generator of the Lie algebra of \( S^1 \) to the vector \begin{equation*} e_{k+1\,\, j_h} = (0, \ldots, 0,1,1,\ldots,1,0,\ldots,0) \end{equation*} in the Lie algebra of \( {T} = (S^1)^{n-1} \) which has 1 in places \( k+1 \) to \( j_h - 1 \) and 0 elsewhere. \end{definition} The stratum \( Q_{(S,\delta)} \) is the image of a hyperk\"ahler embedding \begin{equation*} \hat{Q}_1^{{\textup{hks}}} \to Q \end{equation*} which is \( \Lie{SU}(2) \)-equivariant and is defined as follows. Consider a quiver \eqref{quiverdagger} together with an element \( (\gamma_k^{(h)}) \) of \( \mathbb{H}^\ell \) such that \( \gamma_k^{(h)} = \alpha_k^{(h)} + j \beta_k^{(h)} \) for \(1 \leqslant h \leqslant p \) and \( i_h \leqslant k < j_h - 1 \), satisfying the \( H_{S} \)-hyperk\"ahler moment map equations \begin{gather*} \alpha_i \beta_i - \beta_{i+1} \alpha_{i+1} = \lambda^\C_{i+1} I, \\ \alpha_i \alpha_i^* - \beta_i^* \beta_i + \beta_{i+1} \beta_{i+1}^* - \alpha_{i+1}^* \alpha_{i+1} = \lambda^{\mathbb R}_{i+1} I, \end{gather*} for \( 0 \leqslant i \leqslant n-2 \), where \( \sum_{k = i_h}^{j_h-1} \lambda_k^\C = 0 \) and \( \sum_{k = i_h}^{j_h-1} \lambda_k^{\mathbb R}=0 \) for \( 1 \leqslant h \leqslant p \), and the \( (S^1)^\ell \)-hyperk\"ahler moment map equations \begin{gather*} \alpha_k^{(h)}\beta_k^{(h)} = \lambda^\C_{k+1} + \lambda^\C_{k+2} + \dots + \lambda^\C_{j_h -1} \quad\text{and}\\ \abs{\alpha_k^{(h)}}^2 - \abs{\beta_k^{(h)}}^2 = \lambda^{\mathbb R}_{k+1} + \lambda^{\mathbb R}_{k+2} + \dots + \lambda^{\mathbb R}_{j_h -1} \end{gather*} for \(1 \leqslant h \leqslant p \) and \( i_h \leqslant k < j_h - 1 \).
Our embedding takes the \( H_{S} \times (S^1)^\ell \)-orbit of this configuration to the \( H \)-orbit of the quiver \begin{equation} \label{quiverdoubledagger} 0 \stackrel[\tilde{\beta}_0]{\tilde{\alpha}_0}{\rightleftarrows} \dots \rightleftarrows \C^{m_k} \oplus \bigoplus_{h :\, i_h \leqslant k < j_h} \C^{d_h} \rightleftarrows \dots \stackrel[\tilde{\beta}_{r-1}]{\tilde{\alpha}_{r-1}}{\rightleftarrows} \C^n \end{equation} which is the orthogonal direct sum of \eqref{quiverdagger} with the quivers given for \( 1 \leqslant h \leqslant p \) by \begin{equation*} \C^{d_h} \stackrel[\beta_{i_h}^{(h)}]{\alpha_{i_h}^{(h)}}{\rightleftarrows} \C^{d_h} \rightleftarrows \dots \rightleftarrows \C^{d_h} \stackrel[\beta_{j_h-2}^{(h)}]{\alpha_{j_h-2}^{(h)}}{\rightleftarrows} \C^{d_h} \end{equation*} in the places \( i_h,i_h + 1, \dots ,j_h -1 \). Here the maps \( \alpha_k^{(h)} \), \( \beta_k^{(h)} \), for \( i_h \leqslant k < j_h - 1 \), are multiplication by the complex scalars, also denoted by \( \alpha_k^{(h)} \), \( \beta_k^{(h)} \), that satisfy \( \alpha_k^{(h)} + j \beta_k^{(h)} = \gamma_k^{(h)} \). \begin{remark} \label{remhypertoricstrat} Note that the stabiliser in \( T=T_{\tilde{H}}/T_H \) of any \( \lie q \in Q_{(S,\delta)} \) is the subtorus \( T_S \) of \( T \) defined in Definition \ref{defHs}, which is the product \( (S^1)^p \) of \( p \) copies of \( S^1 \) where the \( j \)th copy of \( S^1 \) acts by scalar multiplication on the summand \begin{equation*} \C^{d_h} \stackrel[\beta_{i_h}^{(h)}]{\alpha_{i_h}^{(h)}}{\rightleftarrows} \C^{d_h} \rightleftarrows \dots \rightleftarrows \C^{d_h} \stackrel[\beta_{j_h-2}^{(h)}]{\alpha_{j_h-2}^{(h)}}{\rightleftarrows} \C^{d_h} \end{equation*} of the quiver \( \lie q \). \end{remark} \section{A refined stratification of the universal hyperk\"ahler implosion} In the last section we recalled the stratification of the universal hyperk\"ahler implosion \( Q= M{\sslash\mkern-6mu/} H \) for \( K=\Lie{SU}(n) \) into hyperk\"ahler strata \( Q_{(S,\delta)} \). In this section we will refine this stratification to obtain strata which are not in general hyperk\"ahler but which reflect the structure of the group \( K=\Lie{SU}(n) \); in particular we would like to find a description of the universal hyperk\"ahler implosion which permits generalisation to other compact groups. First let us consider the hyperk\"ahler moment map \begin{equation*} \mu_{(S^1)^{n-1}}: Q \to ({\mathbb R}^3)^{n-1} \end{equation*} for the induced action of \begin{equation*} T = (S^1)^{n-1} = \prod_{k=1}^{n-1} \Lie{U}(k)/\Lie{SU}(k) = \tilde{H}/H \end{equation*} on \( Q = M {\sslash\mkern-6mu/} H \). We are abusing notation slightly here by using the same symbol \( T \) to denote both \( (S^1)^{n-1} \) and the standard maximal torus of \( K=\Lie{SU}(n) \). These tori are of course isomorphic; we will always make the particular choice of identification given in Remark \ref{remhypertoric}, so that the restriction to \( Q_T^{{\textup{hks}}} \) of the action of \( T \) as a subgroup of \( K \) agrees with the restriction of the action of \( T \) identified with \( (S^1)^{n-1} \). This hyperk\"ahler moment map takes a quiver which satisfies the equations (\ref{eq:mmcomplex}),(\ref{eq:mmreal}) to the element of \begin{equation*} \lie{t} \otimes {\mathbb R}^3 = ({\mathbb R}^3)^{n-1} = (\C \oplus {\mathbb R})^{n-1} \end{equation*} given by \begin{equation*} (\lambda_1, \ldots, \lambda_{n-1})= (\lambda_1^\C,\lambda_1^{\mathbb R}, \ldots, \lambda_{n-1}^\C,\lambda_{n-1}^{\mathbb R}). 
\end{equation*} We will define a stratification of \( ({\mathbb R}^3)^{n-1} \) which we can pull back via the restriction of \( \mu_{(S^1)^{n-1}} \) to each hyperk\"ahler stratum \( Q_{(S,\delta)} \) of \( Q \). \begin{definition} If \( (\lambda_1, \ldots, \lambda_{n-1}) \in ({\mathbb R}^3)^{n-1} \) there is an associated equivalence relation \( \sim \) on \( \{1,\dots,n\} \) such that if \( 1 \leqslant i<j \leqslant n \) then \begin{equation*} i \sim j \iff \sum_{k=i}^{j-1} \lambda_k = 0 \ \text{in}\ {\mathbb R}^3 . \end{equation*} There is thus a stratification of \( ({\mathbb R}^3)^{n-1}=\lie{t} \otimes {\mathbb R}^3 \) into strata \( ({\mathbb R}^3)^{n-1}_{\sim} = (\lie{t} \otimes {\mathbb R}^3)_\sim \), indexed by the set of equivalence relations \( \sim \) on \( \{1,\dots,n\} \), where \begin{equation*} ({\mathbb R}^3)^{n-1}_{\sim} = \{ (\lambda_1, \ldots, \lambda_{n-1}) \in ({\mathbb R}^3)^{n-1}: \mbox{ if } 1 \leqslant i<j \leqslant n \mbox{ then } \end{equation*} \begin{equation*} i \sim j \iff \sum_{k=i}^{j-1} \lambda_k = 0 \ \text{in}\ {\mathbb R}^3\}. \end{equation*} \end{definition} \begin{remark} \label{remksim} Under the identification of \( T \) with \( (S^1)^{n-1} \) given in Remark~\ref{remhypertoric} this stratification of \( ({\mathbb R}^3)^{n-1}= \lie{t} \otimes {\mathbb R}^3 \) is the tensor product with \( {\mathbb R}^3 \) of the stratification of \( \lie{t} \) associated to the hyperplane arrangement given by the root planes in \( \lie{t} \). Note also that an equivalence relation \( \sim \) on \( \{1, \ldots, n\} \) determines and is determined by a subgroup \( K_\sim \) of \( K=\Lie{SU}(n) \), where \( K_\sim \) is the stabiliser in \( K \) of any \( (\lambda_1, \ldots, \lambda_{n-1}) \in \lie{t} \otimes {\mathbb R}^3 \) which lies in the stratum \( ({\mathbb R}^3)^{n-1}_{\sim} \) of \( ({\mathbb R}^3)^{n-1} \) identified with \( \lie{t} \otimes {\mathbb R}^3 \) as in Remark \ref{remhypertoric}. \end{remark} Observe that each stratum \( ({\mathbb R}^3)^{n-1}_{\sim} \) is an open subset of a linear subspace of the real vector space \( ({\mathbb R}^3)^{n-1} \). \begin{definition} \label{QSdeltasim} Given a hyperk\"ahler stratum \( Q_{(S,\delta)} \) of \( Q \) as in \S4, together with an equivalence relation \( \sim \) on \( \{1, \ldots, n\} \), define \begin{equation*} Q_{(S,\delta,\sim)} = Q_{(S,\delta)} \cap \mu_{(S^1)^{n-1}}^{-1}(({\mathbb R}^3)^{n-1}_{\sim}), \end{equation*} that is, the inverse image of the stratum \( ({\mathbb R}^3)^{n-1}_{\sim} \) in \( ({\mathbb R}^3)^{n-1} \) under the restriction to \( Q_{(S,\delta)} \) of the hyperk\"ahler moment map \( \mu_{(S^1)^{n-1}}: Q \to ({\mathbb R}^3)^{n-1} \). \end{definition} \begin{remark} \label{lem5.14} We recall from \cite{DKS} that, given a quiver which satisfies the complex moment map equations (\ref{eq:mmcomplex}), we may decompose each space in the quiver into generalised eigenspaces \( \ker (\alpha_i \beta_i - \tau I)^m \) of \( \alpha_i \beta_i \). We showed that \( \beta_i \) restricts to a map \begin{equation} \label{betai} \beta_i \colon \ker (\alpha_i \beta_i - \tau I)^m \rightarrow \ker(\alpha_{i-1} \beta_{i-1} - (\lambda_i^\C + \tau)I)^m. \end{equation} Similarly \( \alpha_i \) restricts to a map \begin{equation} \label{alphai} \alpha_i \colon \ker (\alpha_{i-1} \beta_{i-1} - (\lambda_i^\C + \tau) I)^m \rightarrow \ker (\alpha_i \beta_i - \tau I)^m. \end{equation} Moreover we showed the maps \eqref{betai} and \eqref{alphai} are bijective unless \( \tau = 0 \). 
It follows that \( \tau \neq 0 \) is an eigenvalue of \( \alpha_i \beta_i \) if and only if \( \tau + \lambda_i^\C \neq \lambda_i^\C \) is an eigenvalue of \( \alpha_{i-1} \beta_{i-1} \). Moreover \( \alpha_i \beta_i \) has zero as an eigenvalue and \( \alpha_i, \beta_i \) restrict to maps between the associated generalised eigenspace with eigenvalue \( 0 \) and the generalised eigenspace for \( \alpha_{i-1} \beta_{i-1} \) associated to \( \lambda_i^\C \). One can deduce (cf. Lemma 5.14 of \cite{DKS}) that \begin{equation*} \alpha_{n-1} \beta_{n-1} - \frac1n \tr(\alpha_{n-1} \beta_{n-1}) I_n \in \lie{sl}(n, \C) \end{equation*} now has eigenvalues \( \kappa_1,\dots,\kappa_n \), where \begin{equation*} \begin{split} \kappa_j &= \frac1n\Bigl( \lambda^\C_1 + 2 \lambda_2^\C + \dots + (j-1) \lambda^\C_{j-1} \\ &\qquad\qquad - (n-j) \lambda_j^\C - (n-j-1)\lambda_{j+1}^\C - \dots - \lambda_{n-1}^\C\Bigr). \end{split} \end{equation*} In particular if \( i<j \) then \begin{equation} \label{eqnkappa} \kappa_j - \kappa_i = \lambda_i^\C + \lambda_{i+1}^\C + \dots + \lambda_{j-1}^\C . \end{equation} We deduce that if \( i \sim j \) then we have equality of the eigenvalues \( \kappa_i \) and \( \kappa_j \). \end{remark} We would like to find an indexing set for the subsets \( Q_{(S,\delta, \sim)} \) which reflects the group-theoretic structure of \( K \). As we observed in Remark \ref{remksim} the choice of \( \sim \) corresponds to the choice of a subgroup \( K_\sim \) of \( K \) which is the compact real form of a Levi subgroup of \( K_\C \); this subgroup \( K_\sim \) is the centraliser of \( \mu_{(S^1)^{n-1}}(\lie q) \in \lie{t} \otimes {\mathbb R}^3 \) for any \( \lie q \in Q_{(S,\delta, \sim)} \). Our next aim is to show that once \( \sim \) or equivalently \( K_\sim \) is chosen, the choice of \( (S,\delta) \) corresponds to the choice of a nilpotent adjoint orbit \( \mathcal{O} \subseteq (\lie{k}_\sim)_\C \). We will see that if \( \lie q \in Q_{(S,\delta, \sim)} \) then, for a generic choice of complex structure, when we decompose \( \C^n \) as the direct sum of the generalised eigenspaces of \begin{equation*} \alpha_{n-1} \beta_{n-1} - \frac{1}{n}{\rm tr} (\alpha_{n-1} \beta_{n-1}) I_n \in \lie{k}_\C \end{equation*} then the subgroup of \( K_\C \) preserving this decomposition is conjugate to \( (K_\sim)_\C \). In fact there is some \( g \in K_\C \) such that this subgroup is \( g(K_\sim)_\C g^{-1} \) and when we write the element \begin{equation*} \alpha_{n-1} \beta_{n-1} - \frac{1}{n}{\rm tr} (\alpha_{n-1} \beta_{n-1}) I_n \end{equation*} of \( \lie{k}_\C \) as the sum of commuting nilpotent and semisimple elements of \( \lie{k}_\C \), the semisimple element is the conjugate by \( g \) of \( \mu_{(S^1)^{n-1}}^\C(\lie q) \) and the nilpotent element lies in the conjugate by \( g \) of the \( (K_\sim)_\C \)-orbit \( \mathcal{O} \). To see this, we need to recall from \cite{DKS} more about the hyperk\"ahler strata \( Q_{(S,\delta)} \). So suppose that a quiver satisfies the hyperk\"ahler moment map equations (\ref{eq:mmcomplex}) and (\ref{eq:mmreal}), and lies in the subset \( Q_{(S,\delta,\sim)} \), so that it lies in \( Q_{(S,\delta)} \) and \begin{equation*} i \sim j \iff \sum_{k=i}^{j-1} \lambda_k = 0 \ \text{in}\ {\mathbb R}^3 .
\end{equation*} Notice that \( Q_{(S,\delta,\sim)} \) is preserved by the rotation action of \( \Lie{SU}(2) \) on \( {\mathbb R}^3 \), and that given \begin{equation*} (\lambda_1, \ldots, \lambda_{n-1})= (\lambda_1^\C,\lambda_1^{\mathbb R}, \ldots, \lambda_{n-1}^\C,\lambda_{n-1}^{\mathbb R}) \in ({\mathbb R}^3)^{n-1}, \end{equation*} by applying a generic element of \( \Lie{SU}(2) \) to rotate the complex structures and hence the decomposition \( {\mathbb R}^3 = \C \oplus {\mathbb R} \), we can assume that if \( 1 \leqslant i < j \leqslant n \) then \begin{equation} \label{reg} \sum_{k=i}^{j-1} \lambda_k = 0 \ \text{in}\ {\mathbb R}^3 \iff \sum_{k=i}^{j-1} \lambda_k^\C = 0 \ \text{in}\ \C. \end{equation} Thus \begin{equation*} Q_{(S,\delta,\sim)} = \Lie{SU}(2) Q_{(S,\delta,\sim)}^\circ \end{equation*} where \( Q_{(S,\delta,\sim)}^\circ \) is the open subset of \( Q_{(S,\delta,\sim)} \) represented by those quivers in \( Q_{(S,\delta,\sim)} \) such that \begin{equation*} i \sim j \iff \sum_{k=i}^{j-1} \lambda_k = 0 \ \text{in}\ {\mathbb R}^3 \iff \sum_{k=i}^{j-1} \lambda_k^\C = 0 \ \text{in}\ \C. \end{equation*} \begin{remark} \label{remkappa} It now follows immediately from (\ref{eqnkappa}) that for a quiver \( \lie q \) in \( Q_{(S,\delta,\sim)}^\circ \) we have equality of the eigenvalues \( \kappa_i \) and \( \kappa_j \) of \begin{equation*} \alpha_{n-1} \beta_{n-1} - \frac1n \tr(\alpha_{n-1} \beta_{n-1}) I_n \in \lie{sl}(n, \C) \end{equation*} if and only if \( i \sim j \) . In particular if \( \lie q\in Q_{(S,\delta,\sim)}^\circ \) then \( (K_\sim)_\C \) is the subgroup of \( K_\C \) which preserves the decomposition of \( \lie q \) into the subquivers determined by the generalised eigenspaces of the compositions \( \alpha_i\beta_i \) as in Remark~\ref{lem5.14} above. \end{remark} Let us suppose now that our quiver \( \lie q \) lies in \( Q_{(S,\delta,\sim)}^\circ \). As in Remark~\ref{lem5.14} we can decompose it into a direct sum of subquivers \begin{equation*} \cdots V_i^j \stackrel[\beta_{i,j}]{\alpha_{i,j}}{\rightleftarrows} V_{i+1}^j \cdots \end{equation*} determined by the generalised eigenspaces (with eigenvalues \( \tau_{i+1, j} \)) of the compositions \( \alpha_i\beta_i \), such that \begin{equation*} \alpha_{i,j} \beta_{i,j} - \beta_{i+1,j} \alpha_{i+1, j} = \lambda_{i+1}^\C \end{equation*} and \( \alpha_{i,j} \) and \( \beta_{i,j} \) are isomorphisms unless \( \tau_{i+1, j}=0 \). If for some \( j \) we have that \( \alpha_{k,j}, \beta_{k,j} \) are isomorphisms for \( i+1 \leqslant k < s \) but not for \( k=i,s \), then it follows that \( \tau_{i+1,j} = \tau_{s+1,j}=0 \), hence \( \sum_{k=i+1}^s \lambda_k^\C =0 \), and so since the quiver lies in \( Q_{(S,\delta,\sim)}^\circ \) we have \begin{equation*} \sum_{k=i+1}^s \lambda_k =0 \in {\mathbb R}^3. \end{equation*} This means that the quiver satisfies the hyperk\"ahler moment map equations not just for \( H = \prod_{k=1}^{n-1} \Lie{SU}(k) \) but for its extension \begin{equation*} H_{\{(i,s)\} } \end{equation*} by \( S^1 \) in the sense of Definition \ref{defHs}. In particular it satisfies the real moment map equations for this subgroup of \( \tilde{H} = \prod_{k=1}^{n-1} \Lie{U}(k) \), and thus its orbit under the complexification \( (H_{\{(i,s)\} })_\C \) of \( H_{\{(i,s)\} } \) is closed. 
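The bookkeeping relating the value of \( \mu_{(S^1)^{n-1}} \) to the equivalence relation \( \sim \) and to the eigenvalues \( \kappa_j \) of Remark \ref{lem5.14} can also be made explicit in code. The following minimal sketch (Python with NumPy; the helper names, tolerance and example data are ours, purely for illustration) computes the equivalence classes of \( \sim \), and hence the block sizes of \( K_\sim \), together with the \( \kappa_j \) determined by the complex parts \( \lambda^\C_k \):
\begin{verbatim}
import numpy as np

def levi_blocks(lams, tol=1e-9):
    """Equivalence classes of ~ on {1,...,n} from (lambda_1,...,lambda_{n-1}) in
    (R^3)^{n-1}: i ~ j iff lambda_i + ... + lambda_{j-1} = 0, i.e. iff the partial
    sums S_{i-1} and S_{j-1} agree, where S_0 = 0 and S_k = lambda_1 + ... + lambda_k."""
    lams = np.asarray(lams, dtype=float)                         # shape (n-1, 3)
    partial = np.vstack([np.zeros(3), np.cumsum(lams, axis=0)])  # S_0,...,S_{n-1}
    classes = []
    for i in range(1, len(partial) + 1):
        for cls in classes:
            if np.linalg.norm(partial[cls[0] - 1] - partial[i - 1]) < tol:
                cls.append(i)
                break
        else:
            classes.append([i])
    return classes

def kappas(lamC):
    """Eigenvalues kappa_1,...,kappa_n of the trace-free part of alpha_{n-1} beta_{n-1},
    from kappa_j - kappa_i = lambda_i^C + ... + lambda_{j-1}^C and sum_j kappa_j = 0."""
    partial = np.concatenate([[0.0 + 0.0j], np.cumsum(np.asarray(lamC, dtype=complex))])
    return partial - partial.mean()

# Example with n = 4: lambda_2 = 0 in R^3, so 2 ~ 3 and K_~ = S(U(1) x U(2) x U(1)).
lams = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.5, 0.2, 0.0]]
print(levi_blocks(lams))            # [[1], [2, 3], [4]]
print(kappas([1.0, 0.0, 0.5]))      # kappa_2 = kappa_3 and the kappas sum to zero
\end{verbatim}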
\begin{remark} \label{remdim} Note that \( \alpha_{i,j} \) and \( \beta_{i,j} \) are isomorphisms when \( \tau_{i+1, j} \) is non-zero, and hence for each \( i \) there is exactly one \( j \) such that \( \tau_{i+1, j} = 0 \) and then \begin{equation*} \dim V_{i+1}^j = 1 + \dim V_{i}^j. \end{equation*} \end{remark} In the case when \( \tau_{i+1, j} \) is non-zero, and hence \( \alpha_{i,j} \) and \( \beta_{i,j} \) are isomorphisms, we may contract the subquiver \begin{equation*} \cdots V_i^j \stackrel[\beta_{i,j}]{\alpha_{i,j}}{\rightleftarrows} V_{i+1}^j \cdots \end{equation*} by replacing \begin{equation*} V_{i-1}^j \stackrel[\beta_{i-1,j}]{\alpha_{i-1,j}}{\rightleftarrows} V_i^j \stackrel[\beta_{i,j}]{\alpha_{i,j}}{\rightleftarrows} V_{i+1}^j \stackrel[\beta_{i+1,j}]{\alpha_{i+1,j}}{\rightleftarrows} V_{i+2}^j \end{equation*} with \begin{equation*} V_{i-1}^j \stackrel[\beta_{i-1,j}]{\alpha_{i-1,j}}{\rightleftarrows} V_i^j \stackrel[(\alpha_{i,j})^{-1}\beta_{i+1,j}]{\alpha_{i+1,j} \alpha_{i,j}}{\rightleftarrows} V_{i+2}^j, \end{equation*} and then the complex moment map equations are satisfied with \begin{equation*} \alpha_{i-1,j}\beta_{i-1,j} - (\alpha_{i,j})^{-1}\beta_{i+1,j} \alpha_{i+1,j} \alpha_{i,j} = \lambda_{i-1}^\C + \lambda_i^\C. \end{equation*} Moreover if we choose an identification of \( V_{i+1}^j \) with \( V_i^j \) and apply the action of \( \Lie{SL}(V_{i,j}) \) to set \( \alpha_{i,j} \) to be a non-zero scalar multiple \( aI \) of the identity, then \( \beta_{i,j} \) is determined by \( \alpha_{i-1,j}, \alpha_{i+1,j}, \beta_{i-1,j}, \beta_{i+1,j} \) via the equations \eqref{eq:mmcomplex} once we know the scalars \( a \) and \( \lambda_i^\C \). We refer the reader to \cite{DKS} for more details on contraction. After performing such contractions whenever \( \tau_{i+1, j} \) is non-zero, we obtain contracted quivers \begin{equation*} \cdots \,\,\,\, V_{i}^j \stackrel[\alpha_{i,j}^{-1} \cdots \alpha_{s-2,j}^{-1} \beta_{s-1,j}]{\alpha_{s-1,j}\cdots \alpha_{i,j}}{\rightleftarrows} V_s^j \,\,\,\, \cdots \end{equation*} where \begin{equation*} V^j_i \cong V^j_{i+1} \cong \cdots \cong V^j_{s-1} \end{equation*} and \( \dim V^j_{s-1} = \dim V^j_s - 1 \). Moreover each of these contracted quivers satisfies the complex moment map equations for the induced action of \begin{equation*} \prod_{i: \dim V^j_{i-1} < \dim V^j_i} \Lie{GL}(V^j_i) \end{equation*} and its orbit under the action of this complex group is closed. It then follows from~\cite{KS} Theorem 2.1 (cf. \cite{DKS} Proposition 5.16) that each contracted subquiver is the direct sum of a quiver of the form \begin{equation*} 0=V_0^{(*)} \stackrel[\beta^{(*)}_0]{\alpha^{(*)}_0}{\rightleftarrows} V^{(*)}_1 \stackrel[\beta^{(*)}_1]{\alpha_1^{(*)}}{\rightleftarrows} V^{(*)}_2 \stackrel[\beta^{(*)}_2]{\alpha^{(*)}_2}{\rightleftarrows} \dots \stackrel[\beta^{(*)}_{r-2}]{\alpha^{(*)}_{r-2}}{\rightleftarrows} V^{(*)}_{n-1} \stackrel[\beta^{(*)}_{r-1}]{\alpha^{(*)}_{r-1}}{\rightleftarrows} V^{(*)}_n \leqslant \C^n, \end{equation*} where \( V_j^{(*)}= 0 \) for \( 0 \leqslant j \leqslant k \) and \( \alpha^{(*)}_j \) is injective and \( \beta^{(*)}_j \) is surjective for \( k<j<n \), and a quiver \begin{equation*} 0 = V^{(0)}_0 \rightleftarrows V^{(0)}_1 \rightleftarrows V^{(0)}_2 \rightleftarrows \dots \rightleftarrows V^{(0)}_{n-1} \rightleftarrows V^{(0)}_n = \{ 0 \} \end{equation*} in which all maps are \( 0 \). 
It also follows from the same theorem that the direct sum of the contracted subquivers is completely determined (modulo the action of \( \prod_{i: \dim V^j_{i-1} < \dim V^j_i} \Lie{GL}(V^j_i) \)) by the nilpotent element of \( (\lie{k}_\sim)_\C \) given by the sum of the complex moment maps \( \alpha_{n-1}^{(*)} \beta_{n-1}^{(*)} \). Furthermore, given \( \sim \), the adjoint orbit of this nilpotent element in \( (\lie{k}_\sim)_\C \) corresponds precisely to determining the dimensions of the various vector spaces \( V_j^{(*)} \) and \( V_j^{(0)} \). To see this, observe first that by Remarks \ref{remkappa} and \ref{remdim} the equivalence relation \( \sim \) determines the dimensions of the generalised eigenspaces of the compositions \( \alpha_i\beta_i \) and determines how the corresponding subquivers are contracted. Also the nilpotent cone for \( (K_\sim)_\C \) in \( (\lie{k}_\sim)_\C \) is the nilpotent cone for the product \( [(K_\sim)_\C,(K_\sim)_\C] \) of special linear groups. Since \( (K_\sim)_\C \) is the product of its centre \( Z((K_\sim)_\C) \) and its commutator subgroup \( [(K_\sim)_\C,(K_\sim)_\C] \), the nilpotent orbits for \( (K_\sim)_\C \) are the same as the nilpotent orbits for \( [(K_\sim)_\C,(K_\sim)_\C] \), and thus are given by products of nilpotent orbits in the special linear groups corresponding to the equivalence classes of \( \sim \). These nilpotent orbits in special linear groups are determined in turn by their Jordan types, which correspond exactly to the data given by the dimensions of the kernels of their powers. The contracted quivers satisfy \( \alpha_i^{(*)} \beta_i^{(*)} = \beta_{i+1}^{(*)} \alpha_{i+1}^{(*)} \) for all \( i \), and so \begin{equation*} (\alpha_{n-1}^{(*)} \beta_{n-1}^{(*)} )^s = \alpha_{n-1}^{(*)} \alpha_{n-2}^{(*)}\ldots \alpha_{n-s}^{(*)} \beta_{n-s}^{(*)} \ldots \beta_{n-2}^{(*)} \beta_{n-1}^{(*)} . \end{equation*} Since the \( \alpha_{i}^{(*)} \) are injective and the \( \beta_{i}^{(*)} \) are surjective this composition has rank \( \dim V_s^{(*)} \) and nullity \( \dim V_{n-1}^{(*)} - \dim V_s^{(*)} \). Finally note that the sums \( \dim V_s^{(*)} + \dim V_s^{(0)} \) are determined by the dimensions of the vector spaces in the contracted subquiver. \begin{remark} \label{sumdecomp} Recall that since our quiver \( \lie q \) lies in \( Q_{(S,\delta)} \) it is the direct sum of a hyperk\"ahler stable quiver of the form \begin{equation} \label{eq6.8} 0 \stackrel[\beta_0]{\alpha_0}{\rightleftarrows} \C^{m_1}\stackrel[\beta_1]{\alpha_1}{\rightleftarrows} \C^{m_2}\stackrel[\beta_2]{\alpha_2}{\rightleftarrows}\dots \stackrel[\beta_{n-2}]{\alpha_{n-2}}{\rightleftarrows} \C^{m_{n-1}} \stackrel[\beta_{n-1}]{\alpha_{n-1}}{\rightleftarrows} \C^{m_n} = \C^n \end{equation} with quivers given for \( 1 \leqslant h \leqslant p \) by \begin{equation*} \C^{d_h} \stackrel[\beta_{i_h}^{(h)}]{\alpha_{i_h}^{(h)}}{\rightleftarrows} \C^{d_h} \rightleftarrows \dots \rightleftarrows \C^{d_h} \stackrel[\beta_{j_h-2}^{(h)}]{\alpha_{j_h-2}^{(h)}}{\rightleftarrows} \C^{d_h} \end{equation*} in the places \( i_h,i_h + 1, \dots ,j_h -1 \), where the maps \( \alpha_k^{(h)} \), \( \beta_k^{(h)} \), for \( i_h \leqslant k < j_h - 1 \), are multiplication by complex scalars. 
The latter correspond in the description above to the zero summands of the contracted subquivers, while the former is the direct sum of the summands of the form \begin{equation*} 0=V_0^{(*)} \stackrel[\beta^{(*)}_0]{\alpha^{(*)}_0}{\rightleftarrows} V^{(*)}_1 \stackrel[\beta^{(*)}_1]{\alpha_1^{(*)}}{\rightleftarrows} V^{(*)}_2 \stackrel[\beta^{(*)}_2]{\alpha^{(*)}_2}{\rightleftarrows} \dots \stackrel[\beta^{(*)}_{n-2}]{\alpha^{(*)}_{n-2}}{\rightleftarrows} V^{(*)}_{n-1} \stackrel[\beta^{(*)}_{n-1}]{\alpha^{(*)}_{n-1}}{\rightleftarrows} V^{(*)}_n \leqslant \C^n, \end{equation*} where \( V_j^{(*)}= 0 \) for \( 0 \leqslant j \leqslant k \) and \( \alpha^{(*)}_j \) is injective and \( \beta^{(*)}_j \) is surjective for \( k<j<n \). \end{remark} \begin{remark} \label{remequiv} It follows that once the equivalence relation \( \sim \) or its corresponding subgroup \( K_\sim \) of \( K \) is fixed, the choice of index \( (S,\delta) \) such that \( Q_{(S,\delta,\sim)} \) is nonempty corresponds exactly to a nilpotent adjoint orbit \( \mathcal{O} \subseteq (\lie{k}_\sim)_\C \) for the complexification \( (K_\sim)_\C \) of \( K_\sim \). Thus we can make the following definition. \end{remark} \begin{definition} \label{defnsimnil} Let \( \sim \) be an equivalence relation on \( \{1, \ldots, n\} \) and let \( \mathcal{O} \) be a nilpotent adjoint orbit for \( (K_\sim)_\C \). Then we will denote by \( Q_{[\sim,\mathcal{O}]} \) the subset \( Q_{(S,\delta,\sim)} \) of \( Q \) indexed by the corresponding \( (S,\delta,\sim) \), and we will denote by \( Q_{[\sim,\mathcal{O}]}^\circ \) its open subset \( Q_{(S,\delta,\sim)}^\circ \). \end{definition} \begin{remark} We also remark that the above analysis shows that the subset \( Q_{(S,\delta,\sim)} \) of \( Q \) is empty unless the subset \( S \) of \( \{1,\dots,n\} \times \{1,\dots,n\} \) is contained in \( \sim \) (where the equivalence relation \( \sim \) on \( \{1,\dots,n\} \) is formally identified with the subset \begin{equation*} \{(i,j) \in \{1,\dots,n\} \times \{1,\dots,n\}:i \sim j\} \end{equation*} of \( \{1,\dots,n\} \times \{1,\dots,n\} \)). Thus \( Q_{(S,\delta)} \) is the disjoint union \begin{equation*} Q_{(S,\delta)} = \coprod_{\sim} Q_{(S,\delta, \sim)} \end{equation*} over all the equivalence relations \( \sim \) containing \( S \), and \begin{equation} \label{newstrat} Q = \coprod_{S,\delta, \sim} Q_{(S,\delta, \sim)} \end{equation} is the disjoint union over all choices of \( S \) and \( \delta \) and equivalence relations \( \sim \) containing \( S \). Equivalently \( Q \) is the disjoint union \begin{equation*} Q = \coprod_{\sim, \mathcal{O}} Q_{[\sim,\mathcal{O}]} \end{equation*} over all equivalence relations \( \sim \) on \( \{1,\ldots,n\} \) and all nilpotent adjoint orbits \( \mathcal{O} \) in \( (\lie{k}_\sim)_\C \). \end{remark} \begin{remark} \label{rem5.15} Notice that the values at a point in \( Q \) of the hyperk\"ahler moment maps for the actions on \( Q \) of \( K = \Lie{SU}(n) \) and \( T = (S^1)^{n-1} \) determine the stratum \( Q_{(S,\delta,\sim)} \) (or equivalently \( Q_{[\sim, \mathcal{O}]} \)) to which the quiver belongs. For the value \( (\lambda_1, \dots, \lambda_{n-1}) \) of \( \mu_{(S^1)^{n-1}} \) determines the equivalence relation \( \sim \) and also the generic choices of complex structures for which \begin{equation*} \sum_{k=i}^{j-1} \lambda_k = 0 \ \text{in}\ {\mathbb R}^3 \iff \sum_{k=i}^{j-1} \lambda_k^\C = 0 \ \text{in}\ \C. 
\end{equation*} Moreover for such choices of complex structures the quiver decomposes as a direct sum of subquivers determined by the generalised eigenspaces of the composition \( \alpha_{n-1}\beta_{n-1} \) (which is given by the complex moment map for the action of \( K \)), and it follows that the Jordan type of \( \alpha_{n-1}\beta_{n-1} \) determines the nilpotent orbit \( \mathcal{O} \) in \( (\lie{k}_\sim)_\C \), or equivalently as above the data \( S \) and \( \delta \). \end{remark} \section{Using Jordan canonical form} In this section we will use Jordan canonical form to study hyperk\"ahler quivers as at (\ref{eq6.8}) in order to find standard forms in the next section for quivers in a corresponding stratum \( Q_{[\sim,\mathcal{O}]} \). We have the following description of such quivers from \cite{DKS} Proposition 7.2. \begin{proposition} \label{betasurj} Let \( 0=m_0 \leqslant m_1 \leqslant \cdots \leqslant m_n = n \) and let \( V_i = \C^{m_i} \) for \( 0 \leqslant i \leqslant n \). Consider quivers of the form \begin{equation*} 0=V_0 \stackrel[\beta_0]{\alpha_0}{\rightleftarrows} V_1 \stackrel[\beta_1]{\alpha_1}{\rightleftarrows} V_2 \stackrel[\beta_2]{\alpha_2}{\rightleftarrows} \dots \stackrel[\beta_{n-2}]{\alpha_{n-2}}{\rightleftarrows} V_{n-1} \stackrel[\beta_{n-1}]{\alpha_{n-1}}{\rightleftarrows} V_n = \C^n, \end{equation*} where each \( \beta_j \) is surjective and the complex moment map equations for \( H=\prod_{i=1}^{n-1} \Lie{SU}(m_i) \) are satisfied. The set of such quivers modulo the action of \( H_\C=\prod_{i=1}^{n-1} \Lie{SL}(m_i, \C) \) may be identified with \begin{equation*} K_\C \times_{[P,P]} [\lie p, \lie p]^\circ \end{equation*} where \( P \) is the parabolic subgroup of \( K_\C = \Lie{SL}(n, \C) \) associated to the flag \( (m_{1}, \dots, m_n=n) \), and \( [\lie p, \lie p]^\circ \) is the annihilator of the Lie algebra of the commutator subgroup of \( P \). The same is true if we replace the assumption that each \( \beta_j \) is surjective with the assumption that each \( \alpha_j \) is injective. When \( m_i =i \) for all \( i \) we have the space \begin{equation*} K_\C \times_N \lie b \end{equation*} where \( N \) is a maximal unipotent subgroup of \( K_\C = \Lie{SL}(n,\C) \) and \( \lie b = \lie n^\circ \) is a Borel subalgebra. \end{proposition} We can modify this result as follows to describe the subset of quivers for which each \( \alpha_j \) is injective and each \( \beta_j \) is surjective. \begin{proposition} \label{corBQ} Let \( 0=m_0 \leqslant m_1 \leqslant \cdots \leqslant m_n = n \) and let \( V_i = \C^{m_i} \) for \( 0 \leqslant i \leqslant n \). Consider quivers of the form \begin{equation*} 0=V_0 \stackrel[\beta_0]{\alpha_0}{\rightleftarrows} V_1 \stackrel[\beta_1]{\alpha_1}{\rightleftarrows} V_2 \stackrel[\beta_2]{\alpha_2}{\rightleftarrows} \dots \stackrel[\beta_{n-2}]{\alpha_{n-2}}{\rightleftarrows} V_{n-1} \stackrel[\beta_{n-1}]{\alpha_{n-1}}{\rightleftarrows} V_n = \C^n, \end{equation*} where each \( \alpha_j \) is injective and each \( \beta_j \) is surjective and the complex moment map equations for \( H=\prod_{i=1}^{n-1} \Lie{SU}(m_i) \) are satisfied. 
The set of such quivers modulo the action of \( H_\C=\prod_{i=1}^{n-1} \Lie{SL}(m_i, \C) \) may be identified with \begin{equation*} K_\C \times_{[P,P]} [\lie p, \lie p]^\circ_* \end{equation*} where \( [\lie p, \lie p]^\circ_* \) is the open subset of \( [\lie p, \lie p]^\circ \) consisting of those \( X \in [\lie p, \lie p]^\circ \) such that \begin{equation*} X_i - \frac{\mathrm{tr}X_{ii}}{k_i} \left( \begin{array}{c} 0_{k_i \times m_i} \\ I_{m_i \times m_i} \end{array} \right) \end{equation*} has maximal rank \( m_i \) for each \( i \) where \( k_i = m_{i+1} - m_i \). Here \( X_i \) is the bottom right \( m_{i+1} \times m_i \) block in \( X \) and \( X_{ii} \) is its \( i \)th diagonal block (of size \( k_i \times k_i \)). \end{proposition} \begin{proof} We first recall from \cite{DKS} the proof of Proposition \ref{betasurj} above. Given a quiver of the form \begin{equation*} 0=V_0 \stackrel[\beta_0]{\alpha_0}{\rightleftarrows} V_1 \stackrel[\beta_1]{\alpha_1}{\rightleftarrows} V_2 \stackrel[\beta_2]{\alpha_2}{\rightleftarrows} \dots \stackrel[\beta_{n-2}]{\alpha_{n-2}}{\rightleftarrows} V_{n-1} \stackrel[\beta_{n-1}]{\alpha_{n-1}}{\rightleftarrows} V_n = \C^n, \end{equation*} where each \( \beta_j \) is surjective and the complex moment map equations for \( H=\prod_{i=k+1}^{n-1} \Lie{SU}(m_i) \) are satisfied, it follows from an easy inductive argument that the vector spaces \( V_i \) with \( k<i\leqslant n \) have bases so that \begin{equation*} \beta_i = \left( 0_{m_i \times k_i} \mid I_{m_i \times m_i} \right), \end{equation*} where \( m_i = \dim V_i \) and \( k_i = m_{i+1} - m_i \) is the dimension of the kernel of \( \beta_i \). We have thus used the action of \( H_\C \times K_\C = \prod_{i=1}^n \Lie{SL}(m_i, \C) \) to put the maps \( \beta_i \) in standard form. This standard form is preserved by transformations satisfying \begin{equation*} g_{i+1} = \begin{pmatrix} * & * \\ 0 & g_i \end{pmatrix} , \end{equation*} where the top left block is \( k_i \times k_i \), the bottom right block is \( m_i \times m_i \) and \( g_{k+1} \) is an arbitrary element of \( \Lie{SL}(m_{k+1}, \C) \). The freedom in \( \Lie{SL}(n, \C) \) is therefore the commutator of the parabolic group \( P \) associated to the flag of dimensions \( (m_{k+1}, m_{k+2}, \dots, m_{n}=n) \) in \( \C^n \). With respect to bases chosen as above, the matrix of \( \alpha_i \beta_i \) for \( k<i<n \) is \begin{equation} \label{abform} \begin{pmatrix} 0_{k_i \times k_i} & D_{k_i \times m_i} \\ 0_{m_i \times k_i} & -\lambda_i^\C I_{m_i} + \alpha_{i-1} \beta_{i-1} \end{pmatrix} \end{equation} for some \( k_i \times m_i \) matrix \( D \). We can use the pairing \( (A,B) \mapsto \tr(AB) \) to identify \( \lie k \) and \( \lie k_\C \) with their duals. It now follows inductively that \( X = \alpha_{n-1} \beta_{n-1} \) lies in the annihilator of the Lie algebra of the commutator \( [P,P] \) of the parabolic determined by the integers \( k_j \), and the diagonal entries of \( X \) are \( 0 \) (\(k_{n-1} \) times), \( -\lambda_{n-1}^\C \) (\(k_{n-2} \) times), \( \dots, -(\lambda_{n-1}^\C + \dots + \lambda_{i+1}^\C) \) (\(k_i \) times), \( \dots \) Moreover one can show that any such \( X \) occurs for a solution of the complex moment map equations, and that the trace-free part of \( X \) determines all the \( \alpha_i \) and hence the entire quiver modulo the action of \( H_\C \). 
Note that if \( X \in [\lie p, \lie p]^\circ \) then the corresponding quiver \begin{equation*} 0=V_0 \stackrel[\beta_0]{\alpha_0}{\rightleftarrows} V_1 \stackrel[\beta_1]{\alpha_1}{\rightleftarrows} V_2 \stackrel[\beta_2]{\alpha_2}{\rightleftarrows} \dots \stackrel[\beta_{n-2}]{\alpha_{n-2}}{\rightleftarrows} V_{n-1} \stackrel[\beta_{n-1}]{\alpha_{n-1}}{\rightleftarrows} V_n = \C^n, \end{equation*} with each \( \beta_i \) in standardised form \begin{equation*} \beta_i = \left( 0_{m_i \times k_i} \mid I_{m_i \times m_i} \right), \end{equation*} has \begin{equation*} \alpha_i = X_i - \frac{\mathrm{tr}X_{ii}}{k_i} \left( \begin{array}{c} 0_{k_i \times m_i} \\ I_{m_i \times m_i} \end{array} \right) \end{equation*} where \( X_i \) is the bottom right \( m_{i+1} \times m_i \) block in \( X \) and \( X_{ii} \) is its \( i \)th diagonal block (of size \( k_i \times k_i \)). If we sweep out the space \( [\lie p,\lie p]_*^\circ \) of quivers of this form with each \( \alpha_j \) injective by the action of the torus \( T_\C = (\C^*)^{n-1} \) then we obtain the space \( T_\C [\lie p,\lie p]_*^\circ \) of quivers for which \( X = \alpha_{n-1}\beta_{n-1} \in [\lie p,\lie p]_*^\circ \). Here each \( \beta_i \) can be put in the form \begin{equation*} \beta_i = \left( 0_{m_i \times k_i} \mid a_i I_{m_i \times m_i} \right), \end{equation*} for some nonzero scalar \( a_i \) by using the action of \( H_\C=\prod_{i=k+1}^{n-1} \Lie{SL}(m_i, \C) \) but \emph{not} the action of \( K_\C = \Lie{SL}(n,\C) \). Then the set of all quivers of the form \begin{equation*} 0=V_0 \stackrel[\beta_0]{\alpha_0}{\rightleftarrows} V_1 \stackrel[\beta_1]{\alpha_1}{\rightleftarrows} V_2 \stackrel[\beta_2]{\alpha_2}{\rightleftarrows} \dots \stackrel[\beta_{n-2}]{\alpha_{n-2}}{\rightleftarrows} V_{n-1} \stackrel[\beta_{n-1}]{\alpha_{n-1}}{\rightleftarrows} V_n = \C^n, \end{equation*} where each \( \beta_j \) is surjective and each \( \alpha_j \) is injective and the complex moment map equations for \( H=\prod_{i=k+1}^{n-1} \Lie{SU}(m_i) \) are satisfied, modulo the action of \( H_\C=\prod_{i=k+1}^{n-1} \Lie{SL}(m_i, \C) \), is identified with \begin{equation*} K_\C \times_{[P,P]} [\lie p, \lie p]_*^\circ \cong K_\C \times_P (T_\C [\lie p,\lie p]_*^\circ) \end{equation*} since \( P = T_\C [P,P] \). This completes the proof. \end{proof} \begin{remark} \label{remJCF} Let \begin{equation*} 0=V_0 \stackrel[\beta_0]{\alpha_0}{\rightleftarrows} V_1 \stackrel[\beta_1]{\alpha_1}{\rightleftarrows} V_2 \stackrel[\beta_2]{\alpha_2}{\rightleftarrows} \dots \stackrel[\beta_{n-2}]{\alpha_{n-2}}{\rightleftarrows} V_{n-1} \stackrel[\beta_{n-1}]{\alpha_{n-1}}{\rightleftarrows} V_n = \C^n, \end{equation*} be a quiver such that each \( \alpha_j \) is injective and each \( \beta_j \) is surjective and the complex moment map equations for \( H=\prod_{i=k+1}^{n-1} \Lie{SU}(m_i) \) are satisfied. Then \( \ker \alpha_j \beta_j = \ker \beta_j \) and \( \mathrm{im} \, \alpha_j \beta_j = \mathrm{im} \, \alpha_j \) for each \( j \), and we can inductively choose coordinates on \( V_1, \ldots, V_n \), starting with \( V_n = \C^n \), so that each \( \alpha_{j}\beta_{j} \) (and hence each \( \beta_j \alpha_j \)) is in Jordan canonical form while each \( \beta_j \) is the direct sum over all the Jordan blocks of matrices in the standardised form \begin{equation*} \left( \begin{array}{cccccc} 0 & 1 & 0 & 0 & \cdots & 0\\ 0 & 0 & 1 & 0 & \cdots & 0\\ & & \cdots & & & \\ 0 & \cdots & 0 & 0 & 1 & 0 \\ 0 & \cdots & 0 & 0 & 0 & 1 \end{array} \right).
\end{equation*} More precisely, we first choose a basis \( \{e_1,\ldots,e_n\} \) for \( V_n = \C^n \) so that \( \alpha_{n-1} \beta_{n-1} \) is in Jordan canonical form. Since \( \beta_{n-1} :V_n \to V_{n-1} \) is surjective and \( \ker \beta_{n-1} = \ker \alpha_{n-1} \beta_{n-1} \), it follows that \begin{equation*} \{ \beta_{n-1}(e_i): e_i \not\in \ker \beta_{n-1} \} \end{equation*} is a basis for \( V_{n-1} \) such that \( \beta_{n-1} \) is in the required form. It is now easy to check that \( \beta_{n-1} \alpha_{n-1} \) is in Jordan canonical form, and it follows immediately from the complex moment map equations that the same is true of \( \alpha_{n-2} \beta_{n-2} \), so that we can repeat the argument with the basis \begin{equation*} \{ \beta_{n-2}\beta_{n-1}(e_i): e_i \not\in \ker \beta_{n-2} \beta_{n-1} \} \end{equation*} for \( V_{n-2} \). Alternatively we can choose coordinates so that each \( \alpha_{j}\beta_{j} \) and \( \beta_j\alpha_j \) is in Jordan canonical form while each \( \alpha_j \) is the direct sum over all the Jordan blocks of matrices in the standardised form \begin{equation*} \left( \begin{array}{ccccc} 1 & 0 & 0 & \cdots & 0\\ 0 & 1 & 0 & \cdots & 0\\ & & \cdots & &\\ 0 & \cdots & 0 & 1 & 0 \\ 0 & \cdots & 0 & 0 & 1 \\ 0 & \cdots & 0 & 0 & 0 \end{array} \right). \end{equation*} Note that if we write elements \( \zeta \) of \( \lie{k}_\C \) in the form \( \zeta = \zeta_{\bf s} + \zeta_{\bf n} \) where \( \zeta_{\bf s} \) is semisimple and \( \zeta_{\bf n} \) is nilpotent and \( [\zeta_{\bf s},\zeta_{\bf n}]=0 \) (cf.\! \cite{Collingwood-McGovern} \S1.1), then \( (\alpha_{n-1}\beta_{n-1})_{\bf n} \in [\lie p, \lie p]_* \) and the Jordan type of \( \alpha_{n-1}\beta_{n-1} \) determines, and is determined by, the flag \( (m_{1}, \dots, m_n=n) \) (or equivalently the parabolic subgroup \( P \) of \( K_\C = \Lie{SL}(n, \C) \) ) together with the nilpotent orbit containing \( (\alpha_{n-1}\beta_{n-1})_{\bf n} \). Note also that the fact that \( \alpha_{n-1}\beta_{n-1} \) lies in the open subset \( [\lie p, \lie p]_* \) of \( [\lie p,\lie p] \) tells us that \( P \) is the Jacobson-Morozov parabolic of \( (\alpha_{n-1}\beta_{n-1})_{\bf n} \) (cf.\! \cite{Collingwood-McGovern} \S3.8). The choice of coordinates needed to put the quiver into the standard form above amounts to using the action of the group \( K_\C \times \prod_{j=1}^{n-1} \Lie{GL}(m_j,\C) \) to standardise the quiver. Equivalently, once we have quotiented by the action of \( \prod_{j=1}^{n-1} \Lie{SL}(m_j,\C) \), it amounts to using the action of \( K_\C \times (\C^*)^{n-1} = K_\C \times T_\C \) to standardise the point of \( Q \) represented by the quiver. \end{remark} \section{Standard forms for quivers} Let \( \sim \) be an equivalence relation on \( \{1, \ldots, n\} \), and let \( \mathcal{O} \) be a nilpotent adjoint orbit in \( (\lie{k}_\sim)_\C \). Recall that \( Q_{[\sim,\mathcal{O}]} \) is the subset \( Q_{(S,\delta,\sim)} \) of \( Q \) indexed by the corresponding \( (S,\delta,\sim) \) as in Definition \ref{defnsimnil}. Similarly we will write \( Q_{[\sim,\mathcal{O}]}^\circ \) for the open subset \( Q_{(S,\delta,\sim)}^\circ \) of \( Q_{(S,\delta,\sim)} \). Putting the results of the last two sections together we find that given any quiver \( \lie q \) in \( Q_{[\sim,\mathcal{O}]}^\circ \) we can choose coordinates on \( \C, \C^2, \ldots, \C^n \) to put the quiver into standard form as follows. 
Firstly, as in Remark~\ref{lem5.14} we can decompose \( \lie q \) into a direct sum of subquivers \begin{equation*} \cdots V_i^j \stackrel[\beta_{i,j}]{\alpha_{i,j}}{\rightleftarrows} V_{i+1}^j \cdots \end{equation*} determined by the generalised eigenspaces of the compositions \( \alpha_i\beta_i \). Since \( \lie q \) lies in \( Q_{[\sim,\mathcal{O}]}^\circ \) each such subquiver is the direct sum of a quiver \( \lie q^{[j]} \) of the form \begin{equation} \label{eeq6.8} 0 \stackrel[\beta^{[j]}_0]{\alpha^{[j]}_0}{\rightleftarrows} \C^{m_1}\stackrel[\beta_1^{[j]}]{\alpha_1^{[j]}}{\rightleftarrows} \C^{m_2}\stackrel[\beta_2^{[j]}]{\alpha_2^{[j]}}{\rightleftarrows}\dots \stackrel[\beta_{n-2}^{[j]}]{\alpha_{n-2}^{[j]}}{\rightleftarrows} \C^{m_{n-1}} \stackrel[\beta_{n-1}^{[j]}]{\alpha_{n-1}^{[j]}}{\rightleftarrows} \C^{m_n} \end{equation} where the maps \( \alpha_k^{[j]} \) for \( 1\leqslant k \leqslant n-1 \) are injective and the maps \( \beta_k^{[j]} \) for \( 1\leqslant k \leqslant n-1 \) are surjective, with quivers given for \( 1 \leqslant h \leqslant p \) by \begin{equation*} \C^{d_h} \stackrel[\beta_{i_h}^{(h)}]{\alpha_{i_h}^{(h)}}{\rightleftarrows} \C^{d_h} \rightleftarrows \dots \rightleftarrows \C^{d_h} \stackrel[\beta_{j_h-2}^{(h)}]{\alpha_{j_h-2}^{(h)}}{\rightleftarrows} \C^{d_h} \end{equation*} in the places \( i_h,i_h + 1, \dots ,j_h -1 \), where the maps \( \alpha_k^{(h)} \), \( \beta_k^{(h)} \), for \( i_h \leqslant k < j_h - 1 \), are multiplication by complex scalars such that \( \gamma_k^{(h)} = \alpha_k^{(h)} + j \beta_k^{(h)} \in \mathbb{H}\setminus \{0\} \) (cf. Remark \ref{sumdecomp}). Moreover the combinatorial data here and the Jordan type of \( \alpha_{n-1}\beta_{n-1} \) for each summand (\ref{eeq6.8}) is determined by the pair \( (\sim,\mathcal{O}) \) (see Remarks \ref{remequiv} and \ref{remJCF}). Now if we allow any complex linear changes of coordinates in \( K_\C \times H_\C = \prod_{k=1}^n \Lie{SL}(k,\C) \) then using Remark \ref{remJCF} we can put \( \alpha_{n-1}\beta_{n-1} \) into Jordan canonical form and then decompose the quiver (\ref{eeq6.8}) into a direct sum of quivers determined by the Jordan blocks of \( \alpha^{[j]}_{n-1}\beta^{[j]}_{n-1} \). Now \( \alpha^{[j]}_k \) is a direct sum over the set \( B_j \) of Jordan blocks for \( \alpha^{[j]}_{n-1}\beta^{[j]}_{n-1} \) of matrices of the form \begin{equation} \label{formstar} \left( \begin{array}{ccccc} \nu_1^{bjk} & 0 & 0 & \cdots & 0\\ 0 & \nu_2^{bjk} & 0 & \cdots & 0\\ & & \cdots & & \\ 0 & \cdots & 0 & 0 & \nu^{bjk}_{\ell_b - n+k}\\ 0 & \cdots & 0 & 0 & 0 \end{array} \right) \end{equation} for some \( \nu^{bjk}_i \in\C^* \) where \( \ell_b \) is the size of the Jordan block \( b \in B_j \), and \( \beta_k^{[j]} \) is a corresponding direct sum over \( b \in B_j \) of matrices of the form \begin{equation*} \left( \begin{array}{cccccc} \mu_1^{bjk} & \xi_1^{bjk} & 0 & 0 & \cdots & 0\\ 0 & \mu_2^{bjk} & \xi_2^{bjk} & 0 & \cdots & 0\\ & & \cdots & & & \\ 0 & \cdots & 0 & \mu^{bjk}_{\ell_b - n + k -1} & \xi^{bjk}_{\ell_b - n +k -1} & 0 \\ 0 & \cdots & 0 & 0 & \mu^{bjk}_{\ell_b - n+k} & \xi^{bjk}_{\ell_b - n +k} \end{array} \right) \end{equation*} for some \( \mu_i^{bjk}, \xi_i^{bjk} \in\C^* \) satisfying the complex moment map equations (\ref{eq:mmcomplex}). The resulting direct sum over all the Jordan blocks \( \bigcup_{j}B_j \) for \( \alpha_{n-1}\beta_{n-1} \) has closed \( (H_S)_\C \)-orbit. 
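To illustrate the shape of these blocks (a worked instance of the matrices displayed above, not any additional structure): suppressing the superscripts \( b,j,k \), a Jordan block \( b \) of size \( \ell_b = 3 \) contributes at the stage \( k = n-1 \) a \( 3 \times 2 \) block of \( \alpha_{n-1} \) and a \( 2 \times 3 \) block of \( \beta_{n-1} \) of the form \begin{equation*} \alpha = \left( \begin{array}{cc} \nu_1 & 0 \\ 0 & \nu_2 \\ 0 & 0 \end{array} \right), \qquad \beta = \left( \begin{array}{ccc} \mu_1 & \xi_1 & 0 \\ 0 & \mu_2 & \xi_2 \end{array} \right), \end{equation*} so that the corresponding block of \( \alpha_{n-1}\beta_{n-1} \) is \begin{equation*} \alpha\beta = \left( \begin{array}{ccc} \mu_1\nu_1 & \nu_1\xi_1 & 0 \\ 0 & \mu_2\nu_2 & \nu_2\xi_2 \\ 0 & 0 & 0 \end{array} \right), \end{equation*} while \( \beta\alpha \) is upper triangular with diagonal entries \( \mu_1\nu_1, \mu_2\nu_2 \); this is the calculation exploited in Remark \ref{remqt} below.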
Let \( Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \) be the subset of \( Q_{[\sim,\mathcal{O}]}^{\circ} \) representing quivers of this form, where \( \alpha_{n-1}\beta_{n-1} \) is in Jordan canonical form and the summands of the quiver corresponding to generalised eigenspaces of the compositions \( \alpha_i \beta_i \) (and thus by Remark \ref{remkappa} to equivalence classes for \( \sim \)) are ordered according to the usual ordering on the minimal elements of the equivalence classes, and the Jordan blocks for each equivalence class are ordered by size. Then we have \begin{equation*} Q_{[\sim,\mathcal{O}]}^\circ = K_\C Q_{[\sim,\mathcal{O}]}^{\circ,JCF}. \end{equation*} Note that if we allow any complex linear changes of coordinates in \( K_\C \times \tilde{H}_\C = \prod_{k=1}^n \Lie{GL}(k,\C) \) (or equivalently allow the action of \( K_\C \times T_\C \) on \( Q_{[\sim,\mathcal{O}]} \)), then as in Remark \ref{remJCF} we can put our quiver into a more restricted form which is completely determined by \( \alpha_{n-1}\beta_{n-1} \) and \( (\lambda_1^\C, \ldots, \lambda_{n-1}^\C) \), and thus by the values of the complex moment maps for the actions of \( K \) and \( T \) on \( Q \). Hence the fibres of the complex moment map \begin{equation*} Q_{[\sim,\mathcal{O}]}^{\circ} \to \lie{k}_\C \oplus \lie{t}_\C \end{equation*} for the action of \( K \times T \) are contained in single \( K_\C \times T_\C \)-orbits. \begin{remark} \label{remchoice} Let us consider the stabiliser in \( K_\C \times T_\C \) of a quiver \( \lie q \in Q_{[\sim,\mathcal{O}]}^{\circ} \). We may assume that \( \lie q \) is in the standard form described above, so that \( \lie q \in Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \). We also want to consider how much of the \( K_\C \times T_\C \)-orbit of \( \lie q \) lies in \( Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \). We know from Remark~\ref{remkappa} that the decomposition of \( \lie q \) into a direct sum of subquivers given by the generalised eigenspaces of the compositions \( \alpha_i\beta_i \) is determined by the equivalence relation \( \sim \), and since this decomposition is canonical it follows that the stabiliser of \( \lie q \) in \( K_\C \times T_\C \) is a subgroup of \( (K_\sim)_\C \times T_\C \), and indeed that elements of \( K_\C \times T_\C \) which preserve the standard form all lie in \( (K_\sim)_\C \times T_\C \). Now applying the proof of Proposition \ref{corBQ} and Remark \ref{remJCF} to the summands in this decomposition, it follows that elements of \( K_\C \times T_\C \) which preserve the standard form are contained in \( P \times T_\C \) where \( P \) is the parabolic subgroup of \( (K_\sim)_\C \) which is the Jacobson--Morozov parabolic of the element of the nilpotent orbit \( \mathcal{O} \) for \( (K_\sim)_\C \) given by the nilpotent component of \begin{equation*} \alpha_{n-1} \beta_{n-1} - \frac1n \tr(\alpha_{n-1} \beta_{n-1}) I_n \in (\lie{k}_\sim)_\C. \end{equation*} Indeed, the elements which preserve the standard form must lie in \( R_{[\sim,\mathcal{O}]} \times T_\C \) where \( R_{[\sim,\mathcal{O}]} \) is the centraliser in \( P \) of this nilpotent element of \( [\lie{p},\lie{p}] \). In particular the stabiliser of \( \lie q \) in \( K_\C \times T_\C \) is contained in \( R_{[\sim,\mathcal{O}]} \times T_\C \). 
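For example (an illustration of the groups involved, not part of the argument): if \( \sim \) has a single equivalence class, so that \( (K_\sim)_\C = \Lie{SL}(n,\C) \), and \( \mathcal{O} \) is the regular nilpotent orbit, then \( P \) is a Borel subgroup, and \( R_{[\sim,\mathcal{O}]} \) is the centraliser of a single \( n \times n \) nilpotent Jordan block \( N \), namely the abelian group of matrices \( c_0 I + c_1 N + \dots + c_{n-1}N^{n-1} \) with \( c_0^n = 1 \).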
Conversely, it is easy to see that both the centre \( Z((K_\sim)_\C) \) of \( (K_\sim)_\C \), embedded diagonally in \( T_\C \times T_\C \), and the intersection \( [P,P] \cap R_{[\sim,\mathcal{O}]} \), embedded in \( K_\C \times \{1\} \), stabilise the quiver \( \lie q \in Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \). This quiver is also stabilised by the complexification of the subgroup \( T_S \) defined at Definition 4.2. Moreover since \( Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \) is \( R_{[\sim,\mathcal{O}]} \times T_\C \)-invariant it follows that an element of \( K_\C \times T_\C \) preserves the standard form if and only if it lies in \( R_{[\sim,\mathcal{O}]} \times T_\C \), and so \begin{equation} \label{identity} Q_{[\sim,\mathcal{O}]}^\circ \cong (K_\C \times T_\C) \times_{(R_{[\sim,\mathcal{O}]} \times T_\C) } Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \cong K_\C \times_{R_{[\sim,\mathcal{O}]} } Q_{[\sim,\mathcal{O}]}^{\circ,JCF}. \end{equation} Furthermore, to determine the stabiliser of \( \lie q \) in \( K_\C \times T_\C \) we should first consider its intersection with \( (T_{[\sim,\mathcal{O}]})_\C \times T_\C \), where \( T_{[\sim,\mathcal{O}]} \) is the intersection of \( T \) with \( R_{[\sim,\mathcal{O}]} \). The intersection of the stabiliser of \( \lie q \) with \( (T_{[\sim,\mathcal{O}]})_\C \times T_\C \) contains the product of the subgroups \( Z((K_\sim)_\C) \), \( Z_P \cap R_{[\sim,\mathcal{O}]} \) and \( (T_S)_\C \) of \( (T_{[\sim,\mathcal{O}]})_\C \times T_\C \). \end{remark} \begin{lemma} \label{leminj} The non-empty fibres of the restriction \begin{equation*} Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \to \lie{k}_\C \oplus \lie{t}_\C \end{equation*} to \( Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \) of the complex moment map for the action of \( K \times T \) on \( Q \) are single \( (T_{[\sim,\mathcal{O}]})_\C \times T_\C \)-orbits, where \( T_{[\sim,\mathcal{O}]} \) is the intersection of \( T \) with \( R_{[\sim,\mathcal{O}]} \). \end{lemma} \begin{proof} We observed just before Remark \ref{remchoice} that the fibres of the complex moment map \begin{equation*} Q_{[\sim,\mathcal{O}]}^{\circ} \to \lie{k}_\C \oplus \lie{t}_\C \end{equation*} for the action of \( K \times T \) are contained in single \( K_\C \times T_\C \)-orbits. By (\ref{identity}) above, each \( K_\C \times T_\C \)-orbit which meets \( Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \) meets it in a single \( R_{[\sim,\mathcal{O}]} \times T_\C \)-orbit where \begin{equation*} R_{[\sim,\mathcal{O}]} = (T_{[\sim,\mathcal{O}]})_\C \,\,([P,P] \cap R_{[\sim,\mathcal{O}]}) \end{equation*} and \( [P,P] \cap R_{[\sim,\mathcal{O}]} \) stabilises the quiver \( \lie q \in Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \). Thus each fibre of the restriction to \( Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \) of the complex moment map for the action of \( K \times T \) is contained in a single \( (T_{[\sim,\mathcal{O}]})_\C \times T_\C \)-orbit. Since this complex moment map is \( K_\C \times T_\C \)-equivariant and \( (T_{[\sim,\mathcal{O}]})_\C \times T_\C \) fixes the image in \( \lie{k}_\C \oplus \lie{t}_\C \) of any element of \( Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \), the result follows. \end{proof} \begin{remark} \label{rem7.5a} Recall that the subgroup \( [P,P] \cap (T_{[\sim,\mathcal{O}]})_\C \) acts trivially on \( Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \). In fact \( (T_{[\sim,\mathcal{O}]})_\C/[P,P] \cap (T_{[\sim,\mathcal{O}]})_\C \) acts freely on \( Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \). 
To see this, consider a quiver which is a direct sum over the set \( \bigcup_{j}B_j \) of Jordan blocks for \( \alpha_{n-1}\beta_{n-1} \) of quivers of the form described in (\ref{formstar}), together with quivers given for \( 1 \leqslant h \leqslant p \) by \begin{equation*} \C^{d_h} \stackrel[\beta_{i_h}^{(h)}]{\alpha_{i_h}^{(h)}}{\rightleftarrows} \C^{d_h} \rightleftarrows \dots \rightleftarrows \C^{d_h} \stackrel[\beta_{j_h-2}^{(h)}]{\alpha_{j_h-2}^{(h)}}{\rightleftarrows} \C^{d_h} \end{equation*} in the places \( i_h,i_h + 1, \dots ,j_h -1 \), as in Remark \ref{sumdecomp}. The centraliser in \( T_\C \) of \( \alpha_{n-1}\beta_{n-1} \) consists of matrices in \( T_\C \) which are themselves direct sums over all the Jordan blocks of diagonal matrices with diagonal entries \( (\tau^{bjk}_1, \ldots , \tau^{bjk}_1,\tau^{bjk}_2) \) for some \( \tau^{bjk}_1,\tau^{bjk}_2 \in \C^* \). If such a matrix sends the quiver to an element of its \( H \)-orbit, then \( \tau^{bjk}_1 = \tau^{bjk}_2 \) for all \( b,j,k \), and so the matrix lies in \( [P,P] \cap (T_{[\sim,\mathcal{O}]})_\C \). \end{remark} \begin{remark} \label{remsigma2} The image in \( \lie{k}_\C \) of \( \lie q \in Q_{[\sim,\mathcal{O}]}^{\circ} \) under the complex moment map for \( K \) is the sum of an element \( \zeta \) of \( \lie{t}_\C \) with centraliser \( K_\sim \) in \( K \) and an element \( \xi \) of the nilpotent orbit \( \mathcal{O} \) in \( (\lie{k}_\sim)_\C \), and its image in \( \lie{t}_\C \) under the complex moment map for \( T \) is equal to \( \zeta \) by Remark \ref{remkappa}. Hence the image of \( Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \) under the complex moment map for \( K \times T \) is \begin{equation*} \Delta (\lie{t}_\C)_\sim \oplus \xi_0 = \{ (\zeta + \xi_0, \zeta) \in \lie{k}_\C \oplus \lie{t}_\C : \zeta \in (\lie{t}_\C)_\sim \}, \end{equation*} where \( (\lie{t}_\C)_\sim \) consists of the elements of \( \lie{t}_\C \) with centraliser \( K_\sim \) in \( K \). Also \( \xi_0 \in \mathcal{O} \) is the element of the nilpotent orbit \( \mathcal{O} \) in Jordan canonical form with the Jordan blocks within each generalised eigenspace for \( \alpha_{n-1}\beta_{n-1} \) ordered by size and the generalised eigenspaces themselves ordered using the equivalence relation \( \sim \). The image of \( Q_{[\sim,\mathcal{O}]}^{\circ} \) under this moment map is the \( K_{\C} \) sweep of \( \Delta (\lie{t}_\C)_\sim \oplus \xi_0 \), where \( K_{\C} \) acts trivially on the second component. 
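As a simple illustration (an example added for orientation): when \( n = 3 \) and \( \sim \) has equivalence classes \( \{1,2\} \) and \( \{3\} \), so that \( K_\sim \) is the block-diagonal subgroup \( \Lie{S}(\Lie{U}(2) \times \Lie{U}(1)) \), the set \( (\lie{t}_\C)_\sim \) consists of the matrices \( \zeta = \mathrm{diag}(a,a,-2a) \) with \( a \in \C^* \); if \( \mathcal{O} \) is the regular nilpotent orbit of the \( \lie{sl}(2,\C) \) factor of \( (\lie{k}_\sim)_\C \) then \( \xi_0 = E_{12} \), the elementary matrix with a single non-zero entry in position \( (1,2) \), and the image described above is \( \{ (\mathrm{diag}(a,a,-2a) + E_{12}, \mathrm{diag}(a,a,-2a)) : a \in \C^* \} \).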
\end{remark} \begin{remark} \label{remqt} Notice that if a quiver is of the form \begin{equation*} \alpha_k^{bj} = \left( \begin{array}{ccccc} \nu_1^{bjk} & 0 & 0 & \cdots & 0\\ 0 & \nu_2^{bjk} & 0 & \cdots & 0\\ & & \cdots & & \\ 0 & \cdots & 0 & 0 & \nu^{bjk}_{\ell_b - n+k}\\ 0 & \cdots & 0 & 0 & 0 \end{array} \right) \end{equation*} and \begin{equation*} \beta_k^{bj}=\left( \begin{array}{cccccc} \mu^{bjk}_1 & \xi^{bjk}_1 & 0 & 0 & \cdots & 0\\ 0 & \mu^{bjk}_2 & \xi^{bjk}_2 & 0 & \cdots & 0\\ & & \cdots & & & \\ 0 & \cdots & 0 & \mu^{bjk}_{\ell_b - n + k -1} & \xi^{bjk}_{\ell_b - n + k -1} & 0 \\ 0 & \cdots & 0 & 0 & \mu^{bjk}_{\ell_b - n + k} & \xi^{bjk}_{\ell-n+k} \end{array} \right) \end{equation*} for some \( \nu_i^{bjk}, \mu_i^{bjk}, \xi_i^{bjk} \in\C^* \) as at (\ref{formstar}) above, then \( \alpha^{bj}_k\beta^{bj}_k \) and \( \beta^{bj}_k \alpha^{bj}_k \) are upper triangular matrices with diagonal entries \begin{equation*} \mu_1^{bjk}\nu_1^{bjk}, \ldots, \mu^{bjk}_{\ell_b - n + k}\nu_{\ell_b - n + k}^{bjk}, 0 \end{equation*} and \begin{equation*} \mu_1^{bjk}\nu_1^{bjk}, \ldots, \mu^{bjk}_{\ell_b - n + k}\nu_{\ell_b - n + k}^{bjk} \end{equation*} respectively. It follows that the complex moment map equations are also satisfied by the quiver given by replacing \( \alpha_k^{bj} \) and \( \beta_k^{bj} \) with \begin{equation} \label{diagquiv} \alpha_k^{bj,T} = \left( \begin{array}{ccccc} \nu_1^{bjk} & 0 & 0 & \cdots & 0\\ 0 & \nu_2^{bjk} & 0 & \cdots & 0\\ & & \cdots & & \\ 0 & \cdots & 0 & 0 & \nu_{\ell_b - n + k}^{bjk}\\ 0 & \cdots & 0 & 0 & 0 \end{array} \right) \end{equation} and \begin{equation*} \beta_k^{bj,T} = \left( \begin{array}{cccccc} \mu_1^{bjk} & 0 & 0 & & \cdots & 0\\ 0 & \mu_2^{bjk} & 0 & 0 & \cdots & 0\\ & & \cdots & & & \\ 0 & \cdots & 0 & 0 & \mu_{\ell_b - n + k}^{bjk} & 0 \end{array} \right). \end{equation*} Similarly if \( \lie q \) is any quiver representing a point in \( Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \) whose Jordan blocks are of the form given by \( \alpha_k \) and \( \beta_k \) as above, then the quiver \( \lie q^T \) obtained from \( \lie q \) by replacing each such Jordan block with the quiver given by \( \alpha_k^T \) and \( \beta_k^T \) satisfies the complex moment map equations for the action of \( H \), or equivalently the complex moment map equations for the maximal torus \( T_H \) of \( H \). \end{remark} Recall from Definition~\ref{defhypertoric} the definition of \( M_T \) and \( \iota:M_T {\sslash\mkern-6mu/} T_H \to Q \) inducing an identification of the open subset \( Q_T^{{\textup{hks}}} = M_T^{{\textup{hks}}} {\sslash\mkern-6mu/} T_H \) of the hypertoric variety \( M_T {\sslash\mkern-6mu/} T_H \) with its image in \( Q \). \begin{definition} \label{stratumhypertoric} For any \( (\sim,\mathcal{O}) \) we can consider an open subset of a hyperk\"ahler modification \begin{equation*} ((M_{[\sim,\mathcal{O}],T}{\sslash\mkern-6mu/} T_H \times \mathbb{H} ^\ell) {\sslash\mkern-6mu/} (S^1)^\ell \end{equation*} (cf. 
Definition~\ref{hkmodification}) of the hyperk\"ahler quotient \( M_{[\sim,\mathcal{O}],T}{\sslash\mkern-6mu/} T_H \) of the space \( M_{[\sim,\mathcal{O}],T} \) of quivers which are direct sums of quivers of the form \begin{equation} \label{diagquiv2} \alpha_k^T = \left( \begin{array}{ccccc} \nu_1^k & 0 & 0 & \cdots & 0\\ 0 & \nu_2^k & 0 & \cdots & 0\\ & & \cdots & & \\ 0 & \cdots & 0 & 0 & \nu_k^k\\ 0 & \cdots & 0 & 0 & 0 \end{array} \right) \end{equation} and \begin{equation*} \beta_k^T = \left( \begin{array}{cccccc} \mu_1^k & 0 & 0 & & \cdots & 0\\ 0 & \mu_2^k & 0 & 0 & \cdots & 0\\ & & \cdots & & & \\ 0 & \cdots & 0 & 0 & \mu_k^k & 0 \end{array} \right), \end{equation*} for \( \nu_i^k,\mu_i^k \in \C \), with one such summand for every Jordan block of the canonical representative \( \xi_0 \) of the nilpotent orbit \( \mathcal{O} \) in \( (\lie{k}_\sim)_\C \) as in Remark \ref{remqt}. If \( M_{[\sim,\mathcal{O}],T}^\circ \) is the open subset of \( M_{[\sim,\mathcal{O}],T} \) where all \( \nu_i^k \) and \( \mu_i^k \) are nonzero, then let \begin{equation*} Q_{[\sim,\mathcal{O}],T}^\circ = ((M_{[\sim,\mathcal{O}],T}^\circ {\sslash\mkern-6mu/} T_H) \times (\mathbb{H} \setminus \{ 0\})^\ell) {\sslash\mkern-6mu/} (S^1)^\ell. \end{equation*} Equivalently \( Q_{[\sim,\mathcal{O}],T}^\circ \) can be identified with an open subset of the hyperk\"ahler quotient by \( T_{{H}} \) of the space of quivers which, like \( \lie q^T \) in Remark~\ref{remqt}, are direct sums of quivers of the form above and summands as in Remark \ref{remhypertoricstrat}. The space \( M_{[\sim,\mathcal{O}],T} \) is a flat hypertoric variety with respect to the action of a quotient of \( T_{\tilde{H}} \). Thus \begin{equation*} ((M_{[\sim,\mathcal{O}],T}{\sslash\mkern-6mu/} T_H) \times \mathbb{H} ^\ell) {\sslash\mkern-6mu/} (S^1)^\ell \end{equation*} is also hypertoric, for the action of a quotient of \( T = T_{\tilde{H}}/T_H \), and it contains \( Q_{[\sim,\mathcal{O}],T}^\circ \) as an open subset. We define \( Q_{[\sim,\mathcal{O}],T} \) to be the open subset \( \Lie{SU}(2) Q_{[\sim,\mathcal{O}],T}^\circ \) of the hypertoric variety \( ((M_{[\sim,\mathcal{O}],T}{\sslash\mkern-6mu/} T_H) \times \mathbb{H}^\ell) {\sslash\mkern-6mu/} (S^1)^\ell \). \end{definition} \begin{definition} Let \( \psi: Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \to Q_{[\sim,\mathcal{O}],T}^\circ \) be the map which associates to a quiver \( \lie q \in Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \) the quiver \( \lie q^T \) described in Remark~\ref{remqt}. Note that \( \psi \) is well defined, since any quiver which has the same form as that for \( \lie q \) described in Remarks \ref{remsigma2} and \ref{remqt} and which represents the same point in \( Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \) (that is, lies in the same \( H \)-orbit) actually lies in the same \( T_H \)-orbit. \end{definition} \begin{lemma} \( \psi: Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \to Q_{[\sim,\mathcal{O}],T}^\circ \) is a bijection. \end{lemma} \begin{proof} Since \( \lie q \in Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \) satisfies the complex moment map equations for \( H \) and \( \alpha_{n-1} \beta_{n-1} \) is in Jordan canonical form, the entries \( \xi_i \in\C^* \) of the Jordan blocks of \( \lie q \) are uniquely determined by the entries \( \nu_i, \mu_i \in\C^* \) of the corresponding blocks of \( \lie q^T \).
\end{proof} Recall from Remark~\ref{remchoice} that \( R_{[\sim,\mathcal{O}]} \) is the centraliser of the canonical representative \( \xi_0 \) of the nilpotent orbit \( \mathcal{O} \) in its Jacobson--Morozov parabolic \( P \) in \( (K_\sim)_\C \), while \( [P,P] \cap R_{[\sim,\mathcal{O}]} \) stabilises each point of \( Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \). Thus \( P = Z_P [P,P] \) and \( R_{[\sim,\mathcal{O}]} = (T_{[\sim,\mathcal{O}]})_\C ([P,P] \cap R_{[\sim,\mathcal{O}]}) \) where \( Z_P \) is the centre of the standard Levi subgroup of \( P \) and \( (T_{[\sim,\mathcal{O}]})_\C = Z_P \cap R_{[\sim,\mathcal{O}]} \). Note also that \( R_{[\sim,\mathcal{O}]} \) contains the centre \( Z((K_\sim)_\C) \) of \( (K_\sim)_\C \), which acts trivially on \( Q_{[\sim,\mathcal{O}]}^\circ \) and on \( Q_{[\sim,\mathcal{O}],T}^\circ \), and that \( (K_\sim)_\C = Z((K_\sim)_\C)[(K_\sim)_\C,(K_\sim)_\C] \). We obtain an immediate corollary as follows. \begin{corollary} \label{corref} \begin{equation*} Q_{[\sim,\mathcal{O}]}^\circ \cong K_\C \times_{R_{[\sim,\mathcal{O}]} } Q_{[\sim,\mathcal{O}],T}^\circ = (K_\C/[P,P] \cap R_{[\sim,\mathcal{O}]}) \times_{ (T_{[\sim,\mathcal{O}]})_\C } Q_{[\sim,\mathcal{O}],T}^\circ \end{equation*} \begin{equation*} \cong K_\C \times_{(K_\sim)_\C} \left( ([(K_\sim)_\C,(K_\sim)_\C]/[P,P] \cap R_{[\sim,\mathcal{O}]}) \times_{(T^*_{[\sim,\mathcal{O}]})_\C} Q_{[\sim,\mathcal{O}],T}^\circ \right) \end{equation*} where \( Q_{[\sim,\mathcal{O}],T}^\circ \) is an open subset of a hypertoric variety and \( (T^*_{[\sim,\mathcal{O}]})_\C =(T_{[\sim,\mathcal{O}]})_\C \cap [(K_\sim)_\C,(K_\sim)_\C] \). \end{corollary} \begin{remark} \label{remref} The quotient of \( K_\C/[P,P] \cap R_{[\sim,\mathcal{O}]} \) by \( (T_{[\sim,\mathcal{O}]})_\C \) is of course the nilpotent orbit \( K_\C/R_{[\sim,\mathcal{O}]} \) in \( \lie{k}_\C \) which contains the nilpotent orbit \begin{equation*} \mathcal{O}= (K_\sim)_\C/R_{[\sim,\mathcal{O}]} = [(K_\sim)_\C,(K_\sim)_\C]/R_{[\sim,\mathcal{O}]}\cap [(K_\sim)_\C,(K_\sim)_\C] \end{equation*} in \( (\lie{k}_\sim)_\C \) or equivalently in \( [(\lie{k}_\sim)_\C ,(\lie{k}_\sim)_\C] \), which is itself the quotient of \( [(K_\sim)_\C,(K_\sim)_\C]/[P,P] \cap R_{[\sim,\mathcal{O}]} \) by \( (T^*_{[\sim,\mathcal{O}]})_\C \). This nilpotent orbit for a product of special linear groups is an open subset of a hyperk\"ahler quotient of a flat hyperk\"ahler space of quivers (cf. \cite{KS}), and \( K_\C/[P,P] \cap R_{[\sim,\mathcal{O}]} \) itself can likewise be described inductively in terms of the hyperk\"ahler implosion of the corresponding product of special unitary groups. \end{remark} \begin{remark} \label{remloctriv} The hyperk\"ahler moment map \( \mathbb{H} \to {\mathbb R}^3 \) for the standard \( S^1 \)-action on \( \mathbb{H} \) restricts to a locally trivial fibration \begin{equation*} \mathbb{H} \setminus \{ 0 \} \to {\mathbb R}^3 \setminus \{ 0 \} \end{equation*} with fibre \( S^1 \). Recall the definition of the hypertoric variety \( M_T \) from \S3. We can, as in \cite{BD}, stratify \( M_T = \mathbb{H}^{n(n-1)/2} \) using the quaternionic coordinate hyperplanes, each stratum corresponding to fixing the subset \( E \) of \( \{1,\ldots,n(n-1)/2\} \) indexing the quaternionic hyperplanes containing the points of the stratum. 
Then the hyperk\"ahler moment map \( M_T \to ({\mathbb R}^3)^{n(n-1)/2} \) restricted to a stratum is a locally trivial fibration with fibre \begin{equation*} T_{\tilde{H}}/(S^1)^{|E|} \end{equation*} where \( (S^1)^{|E|} \) is the subtorus of \( T_{\tilde{H}} = (S^1)^{n(n-1)/2} \) whose Lie algebra is generated by the basis vectors indexed by the elements of \( E \). Similarly it follows from Definition~\ref{stratumhypertoric} and Remark~\ref{rem7.5a} that that \( \mu_{(S^1)^{n-1}}: Q_{[\sim,\mathcal{O}],T} \to ({\mathbb R}^3)^{n-1}_\sim = (\lie{t}\otimes {\mathbb R}^3)_\sim \) is a locally trivial fibration with fibre the quotient \begin{equation*} T/[P,P] \cap T_{[\sim,\mathcal{O}]} \end{equation*} of \( T = T_{\tilde{H}}/T_H \). \end{remark} \section{The refined strata} Recall from (\ref{newstrat}) that the universal hyperk\"ahler implosion \( Q = M {\sslash\mkern-6mu/} H \) for \( K=\Lie{SU}(n) \) is a disjoint union \begin{equation*} Q = \coprod_{S,\delta, \sim} Q_{(S,\delta, \sim)} = \coprod_{\sim,\mathcal{O}} Q_{[\sim,\mathcal{O}]} \end{equation*} of subsets indexed by \( (S,\delta,\sim) \) or equivalently, as discussed immediately before Definition~\ref{defnsimnil}, by pairs \( (\sim,\mathcal{O}) \) where \( \sim \) is an equivalence relation on \( \{1,\ldots,n\} \) and \( \mathcal{O} \) is a nilpotent adjoint orbit in \( (\lie{k}_\sim)_\C \). Here \begin{equation*} Q_{[\sim,\mathcal{O}]} = Q_{(S,\delta,\sim)} = Q_{(S,\delta)} \cap \mu_{(S^1)^{n-1}}^{-1}(({\mathbb R}^3)^{n-1}_{\sim}) \end{equation*} as in Definition \ref{QSdeltasim}. We have \begin{equation*} Q_{[\sim,\mathcal{O}]} = \Lie{SU}(2) Q_{[\sim,\mathcal{O}]}^\circ \end{equation*} where \( Q_{[\sim,\mathcal{O}]}^\circ \) is the open subset of \( Q_{[\sim,\mathcal{O}]} \) which is its intersection with \begin{equation*} \mu_{(S^1)^{n-1}}^{-1}( \{ (\lambda_1, \ldots, \lambda_{n-1}) \in ({\mathbb R}^3)^{n-1}_\sim: \sum_{k=i}^{j-1} \lambda_k = 0 \ \text{in}\ {\mathbb R}^3 \end{equation*} \begin{equation*} \iff \sum_{k=i}^{j-1} \lambda_k^\C = 0 \ \text{in}\ \C \} ). \end{equation*} Recall also from Corollary~\ref{corref} and Remark~\ref{remref} that \begin{equation*} Q_{[\sim,\mathcal{O}]}^\circ = K_\C \times_{R_{[\sim,\mathcal{O}]} } Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \cong (K_\C/[P,P] \cap R_{[\sim,\mathcal{O}]}) \times_{(T_{[\sim,\mathcal{O}]})_\C } Q_{[\sim,\mathcal{O}],T}^\circ \end{equation*} where \( Q_{[\sim,\mathcal{O}],T}^\circ \) is an open subset of a hypertoric variety, and \( R_{[\sim,\mathcal{O}]} \) is the centraliser in \( (K_\sim)_\C \) of the standard representative \( \xi_0 \) in Jordan canonical form of the nilpotent orbit \( \mathcal{O} \) in \( (\lie{k}_\sim)_\C \) (cf. Remark~\ref{remsigma2}), while \( (T_{[\sim,\mathcal{O}]})_\C = T_\C \cap R_{[\sim,\mathcal{O}]} \). Moreover \( [P,P] \cap R_{[\sim,\mathcal{O}]} \) acts trivially on \( Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \) and \( (T_{[\sim,\mathcal{O}]})_\C /[P,P] \cap (T_{[\sim,\mathcal{O}]})_\C \) acts freely on \( Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \) by Remark \ref{rem7.5a}. In addition \( K_\C/[P,P] \cap R_{[\sim,\mathcal{O}]} \) can be described inductively in terms of the hyperk\"ahler implosions of the special unitary groups whose product is \( [(K_\sim)_\C,(K_\sim)_\C] \). 
Recall finally from Lemma~\ref{leminj} and Remark~\ref{remsigma2} that the non-empty fibres of the restriction \begin{equation*} Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \to \lie{k}_\C \oplus \lie{t}_\C \end{equation*} to \( Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \) of the complex moment map for the action of \( K \times T \) on \( Q \) are single \( (T_{[\sim,\mathcal{O}]})_\C \times T_\C \)-orbits, and its image is \begin{equation*} \Delta (\lie{t}_\C)_\sim \oplus \xi_0 = \{ (\zeta + \xi_0, \zeta) \in \lie{k}_\C \oplus \lie{t}_\C : \zeta \in (\lie{t}_\C)_\sim \}. \end{equation*} Putting this all together we obtain the following theorem. \begin{theorem} \label{thm6.8} For each equivalence relation \( \sim \) on \( \{1,\ldots,n\} \) and nilpotent adjoint orbit \( \mathcal{O} \) for \( (K_\sim)_\C \), the stratum \( Q_{[\sim,\mathcal{O}]} \) is the union over \( s \in \Lie{SU}(2) \) of its open subsets \( sQ_{[\sim,\mathcal{O}]}^\circ \), and \begin{equation*} Q_{[\sim,\mathcal{O}]}^\circ = K_\C \times_{R_{[\sim,\mathcal{O}]} } Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \end{equation*} where \( R_{[\sim,\mathcal{O}]} \) is the centraliser in \( (K_\sim)_\C \) of the standard representative \( \xi_0 \) in Jordan canonical form of the nilpotent orbit \( \mathcal{O} \) in \( (\lie{k}_\sim)_\C \), and \( Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \) can be identified with an open subset of a hypertoric variety. The image of the restriction \begin{equation*} Q_{[\sim,\mathcal{O}]}^{\circ} \to \lie{k}_\C \end{equation*} of the complex moment map for the action of \( K \) on \( Q \) is \( K_\C( (\lie{t}_\C)_\sim \oplus \mathcal{O}) \cong K_\C \times_{(K_\sim)_\C} ( (\lie{t}_\C)_\sim \oplus \mathcal{O}) \) and its fibres are single \( (T_{[\sim,\mathcal{O}]})_\C \times T_\C \)-orbits, where \( (T_{[\sim,\mathcal{O}]})_\C = T_\C \cap R_{[\sim,\mathcal{O}]} \) and \( (T_{[\sim,\mathcal{O}]})_\C /[P,P] \cap (T_{[\sim,\mathcal{O}]})_\C \) acts freely on \( Q_{[\sim,\mathcal{O}]}^{\circ,JCF} \). \end{theorem} \begin{remark} Recall that the symplectic implosion \( X_{{\rm impl}} \) of a symplectic manifold \( X \) with a Hamiltonian action of a compact group \( K \) with moment map \( \mu_X \) is the disjoint union over the faces \( \sigma \) of a positive Weyl chamber \( \lie{t}_+ \) of the strata \begin{equation*} \mu_X^{-1}(\sigma)/[K_\sigma,K_\sigma] \end{equation*} where \( K_\sigma \) is the stabiliser in \( K \) of any element of the face \( \sigma \) of \( \lie{t}_+ \) and \( [K_\sigma,K_\sigma] \) is its commutator subgroup. Similarly if \( X \) is a hyperk\"ahler manifold with a hyperk\"ahler action of \( K=\Lie{SU}(n) \) and hyperk\"ahler moment map \( \mu_X:X \to \lie{k} \otimes {\mathbb R}^3 \) and complex moment map its projection \( \mu_{X,\C} \) to \( \lie{k}_\C \), then the hyperk\"ahler implosion \( X_{{\rm hkimpl}} \) of \( X \) is defined to be the hyperk\"ahler quotient of \( X \times Q \) by the diagonal action of \( K \). In the light of Theorem \ref{thm6.8} and Remark~\ref{rem7.5a} we expect to have a description of \( X_{{\rm hkimpl}} \) as follows, at least when \( X \) is an affine variety with respect to all its complex structures so that symplectic quotients can be identified with GIT quotients. 
\( X_{{\rm hkimpl}} \) should be the disjoint union over all equivalence relations \( \sim \) on \( \{1,\ldots,n\} \) and nilpotent adjoint orbits in \( (\lie{k}_\sim)_\C \) of its subsets \( X_{{\rm hkimpl} [\sim,\mathcal{O}] } \), which are themselves the unions of open subsets \( sX_{{\rm hkimpl} [\sim,\mathcal{O}] }^\circ \) for \( s \in \Lie{SU}(2) \), such that \begin{equation*} X_{{\rm hkimpl} [\sim,\mathcal{O}] }^\circ = \mu_{X,\C}^{-1}((\lie{t}_\C)_\sim \oplus \xi_0)/ [P,P] \cap R_{[\sim,\mathcal{O}]} \end{equation*} where \( \xi_0 \in \mathcal{O} \) is the canonical representative of the nilpotent orbit \( \mathcal{O} \subseteq (\lie{k}_\sim)_\C \) in Jordan canonical form as in Remark~\ref{rem7.5a}, and \( (\lie{t}_\C)_\sim \) is the set of elements of \( \lie{t}_\C \) with centraliser \( K_\sim \) in \( K \), while \( P \) is the Jacobson-Morozov parabolic of \( \xi_0 \) in \( (K_\sim)_\C \) and \( R_{[\sim,\mathcal{O}]} \) is the centraliser of \( \xi_0 \) in \( (K_\sim)_\C \). \end{remark} \section{An approach via Nahm's equations} \bigskip The results so far have been proved for the case \( K=SU(n) \), where from \cite{DKS} we have a finite-dimensional description of the universal hyperk\"ahler implosion in terms of quivers. However in the current paper we have tried to formulate many of our results in a way that could potentially admit generalisation to other compact groups. In this final section we sketch a gauge-theoretic approach, involving Nahm's equations, which could provide another means of attacking this problem. We recall that the Nahm equations are the system \begin{equation} \label{Nahm} \frac{dT_i}{dt} + [T_0 , T_i] = [T_j, T_k], \qquad \text{\( (ijk) \) cyclic permutation of \( (123) \),} \end{equation} where \( T_i \) take values in \( \lie k \) and are smooth on some specified interval \( \mathcal{I} \). Moduli spaces of solutions to the Nahm equations are obtained by quotienting by the gauge action \begin{equation} \label{gauge} T_0 \mapsto g T_0 g^{-1} - \dot{g} g^{-1},\qquad T_i \mapsto g T_i g^{-1} \ (i=1,2,3), \end{equation} where \( g \colon \mathcal{I} \to K \), subject to appropriate constraints on \( g \). The Nahm equations may be interpreted as the vanishing condition for a hyperk\"ahler moment map for the action (\ref{gauge}) of the group of gauge transformations on an infinite-dimensional flat quaternionic space of \( \lie k \)-valued functions on the interval \( \mathcal I \). In this way Nahm moduli spaces can acquire a hyperk\"ahler structure. In particular Kronheimer \cite{Kronheimer:semi-simple}, Biquard \cite{Biquard} and Kovalev \cite{Kovalev} have shown that coadjoint orbits of \( K_{\C} \) may be given hyperk\"ahler structures as moduli spaces of Nahm data on the half-line \( [0,\infty) \), while Kronheimer \cite{Kronheimer:cotangent} has shown that the cotangent bundle of \( K_{\C} \) may be given a hyperk\"ahler structure as a moduli space of Nahm data on the interval \( [0,1] \). \medskip Let us fix a Cartan algebra \( \lie t \) of the Lie algebra \( \lie k \) of the compact group \( K \). We can consider quadruples \( (T_0,T_1, T_2,T_3) \), where each \( T_i \) takes values in \( \lie k \), satisfying the Nahm equations and defined on the half line \( [0, \infty) \).
We recall from \cite{Biquard} that such a solution has asymptotics \begin{equation*} T_i = \tau_i + \frac{\sigma_i}{t} + \dotsb \qquad (i=1,2,3), \end{equation*} where \( \tau=(\tau_1, \tau_2, \tau_3) \) is a commuting triple, which we shall take to lie in the fixed Cartan algebra \( \lie t \). Also \( \sigma_i = \rho(e_i) \) where \( e_1,e_2,e_3 \) is a standard basis for \( \lie{su}(2) \) and \( \rho \colon \lie{su}(2) \rightarrow \lie k \) is a Lie algebra homomorphism, so \( [\sigma_1, \sigma_2] = \sigma_3 \) etc. Moreover we must have \( [\tau_i, \sigma_j]=0 \) for \( i,j=1,2,3 \); equivalently, \( \rho \) takes values in the Lie algebra \( \lie c \) of the common centraliser \( C(\tau_1,\tau_2,\tau_3) \) of the triple \( (\tau_1,\tau_2,\tau_3) \). We factor out by gauge transformations equal to the identity at \( 0,\infty \). In addition, we have an action of \( T \) by gauge transformations that are the identity at \( t=0 \) and take values in \( T \) at \( t=\infty \). If the common centraliser \( C(\tau_1,\tau_2, \tau_3) \) of the triple \( (\tau_1, \tau_2, \tau_3) \) is just the maximal torus \( T \), then in fact \( \rho \) and hence the \( \sigma_i \) are zero. The resulting Nahm moduli space, where we quotient by \( T \), is exactly Kronheimer's description of a semisimple orbit as a moduli space \cite{Kronheimer:semi-simple}. If the centraliser of the triple is larger, we must consider Nahm data asymptotic to the triple, but with various possible \( \frac{\sigma_i}{t} \) terms. The various coadjoint orbits are obtained by choosing \( \sigma_i \) and factoring out by gauge transformations that are \( I \) at \( t=0 \) and lie in \( C(\tau_1, \tau_2, \tau_3) \cap C(\sigma_1, \sigma_2, \sigma_3) \) at infinity. The coadjoint orbits will fit together to form the Kostant variety corresponding to our choice of \( \tau \). (We recall that the Kostant varieties are the varieties obtained by fixing the values of a generating set of invariant polynomials \cite{Kostant:polynomial}). The semisimple stratum will correspond to \( \sigma_i=0 \). \medskip The universal hyperk\"ahler implosion for an arbitrary compact group \( K \) is expected to be a space, which, as in the finite-dimensional \( SU(n) \) quiver picture of the preceding section, admits a hyperk\"ahler torus action, with hyperk\"ahler reductions giving the Kostant varieties. In terms of Nahm data, this should mean that to obtain the universal hyperk\"ahler implosion we do not fix triples \( \tau \) but allow them to vary in a fixed Cartan algebra, and that we should factor out only gauge transformations asymptotic to the identity as \( t \) tends to \( \infty \), so the above \( T \) action remains. The moment map for this should formally be evaluation of \( (T_1, T_2, T_3) \) at \( \infty \); that is, it should give the triple \( (\tau_1, \tau_2, \tau_3) \). Then the hyperk\"ahler quotient by \( T \) would be the space of Nahm data on \( [0, \infty) \) asymptotic to a \emph{fixed} triple \( (\tau_1, \tau_2, \tau_3) \), modulo gauge transformations equal to the identity at \( t=0 \) and \( T \)-valued at infinity. As mentioned above, if the triple has common centraliser \( T \), this exactly gives the corresponding Kostant variety, which in this case is just the regular semisimple orbit. 
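To illustrate this in a low-dimensional case (an example added for orientation, not taken from the references above): let \( K = \Lie{SU}(3) \) and let \( \tau \) be a commuting triple in \( \lie t \) whose common centraliser is \( C = \Lie{S}(\Lie{U}(2) \times \Lie{U}(1)) \), for instance \( \tau = (0, \mathrm{diag}(ia,ia,-2ia), 0) \) with \( a \in {\mathbb R} \setminus \{0\} \). Then \( \rho \) must take values in the \( \lie{su}(2) \) factor of \( \lie c \), and up to the action of \( C \) there are exactly two choices: \( \rho = 0 \), giving the semisimple stratum, and \( \rho \) an isomorphism onto that \( \lie{su}(2) \) factor, giving the remaining stratum of the corresponding Kostant variety, whose elements acquire a single \( 2 \times 2 \) Jordan block.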
For a general commuting triple \( \tau = (\tau_1, \tau_2, \tau_3) \), the Kostant variety is stratified by different orbits, which as above are obtained by fixing \( \tau \) and \( \rho \) (or equivalently \( \sigma \)) and factoring by gauge transformations that take values in the common centraliser \( C(\tau, \sigma) \) of \( \tau \) and \( \sigma \) at infinity. To obtain this Kostant variety via hyperk\"ahler reduction by \( T \), we need therefore to perform further collapsings on the moduli space. More precisely, we should collapse by factoring out gauge transformations that are the identity at \( t=0 \) but take values at \( t= \infty \) in the commutator \( [C,C] \), where \( C=C(\tau_1, \tau_2, \tau_3) \). So on the open dense set of the moduli space where the triple \( (\tau_1, \tau_2, \tau_3) \) has centraliser \( T \), no collapsing occurs. In general, reducing by \( T \) will lead to quotienting by \( C = C(\tau_1, \tau_2, \tau_3) \). The action of \( C \) can be used to bring the \( \sigma_i \) into one of a finite list of standard forms (one for each stratum of the Kostant variety corresponding to the choice of \( \tau \)), and then the remaining freedom lies in \( C(\tau, \sigma) \), which is what we need to factor out by to get the coadjoint orbit. We denote the space obtained via this collapsing by \( {\mathcal Q} \), and this is a candidate for the universal hyperk\"ahler implosion for the general compact group \( K \). We can stratify \( \mathcal Q \) by the centraliser \( C(\tau_1, \tau_2, \tau_3) \) of the triple \( (\tau_1, \tau_2, \tau_3) \). That is, for each compact subgroup \( C \) of \( K \), we consider \( {\mathcal Q}^{C} \), the space of Nahm triples with \( C(\tau_1, \tau_2, \tau_3) = C \), modulo gauge transformations which are the identity at \( t=0 \) and take values in \( [C,C] \) at \( t= \infty \). The top stratum is then the open dense set where the centraliser of the triple is the maximal torus. This stratification agrees in the case \( K=\Lie{SU}(n) \) with the stratification by \( K_{\sim} \) in the quiver picture \( Q=M {\sslash\mkern-6mu/} H \), as in Remark \ref{remksim}; if we stratify further using \( \sigma \) we can obtain strata corresponding to the subsets \( Q_{[\sim,\mathcal{O}]} \) of \( Q \). Notice that the regularity condition (\ref{reg}) (which may always be achieved by a generic \( SU(2) \) rotation) is the condition of \emph{Biquard regularity} \cite{Biquard} for the triple \( (\tau_1, \tau_2, \tau_3) \), that is \( C(\tau_1, \tau_2, \tau_3) = C(\tau_2, \tau_3) \). \begin{remark} This is analogous to the symplectic case, where we obtain the implosion by taking \( K \times \lie t_{+}^* \) and collapsing by commutators of points in the Weyl chamber \( \lie t_{+}^* \), so that in the interior of the chamber no collapsing takes place. \end{remark} In the symplectic case each stratum can be viewed as a symplectic quotient of a suitable space by the commutator, and hence is itself symplectic. There is an analogous statement in the hyperk\"ahler setting. For we have a decomposition \begin{equation*} \lie c^* = \lie z(\lie c)^* \oplus [\lie c^*, \lie c^*] \end{equation*} where \( \lie c \) is the Lie algebra of the common centraliser \( C \) of the triple \( (\tau_1,\tau_2,\tau_3) \). Now consider the space of Nahm solutions with \( T_i(\infty) \in \lie c^* (\mbox{for }i=0,1,2,3) \), modulo gauge transformations equal to the identity at \( 0, \infty \). 
There is a \( C \) action on this space by gauge transformations equal to the identity at \( t=0 \) and lying in \( C \) at \( t=\infty \). The formal moment map for this action is evaluation at \( \infty \). So the moment map for the \( [C,C] \) action is evaluation at \( \infty \) followed by projection onto \( [\lie c^*, \lie c^*] \). So the formal hyperk\"ahler quotient by \( [C,C] \) at level zero is the set of Nahm matrices with \( T_{i}(\infty) \in \lie z(\lie c)^* \) for \( i=1,2,3 \), modulo the action of \( [C,C] \). The stratum \( {\mathcal Q}^{C} \), that is, the quotient by \( [C,C] \) of the set of Nahm matrices with common centraliser of \( T_1(\infty), T_2(\infty), T_3(\infty) \) equal to \( C \), is then an open dense subset of this quotient. \medskip It should also be possible to construct a hypertoric variety by considering the sweep under the \( T \) action of the set of constant solutions \( (0, c_1, c_2, c_3) \; : \; c_i \in \lie{t} \) to the Nahm equations. On the open subset where no collapsing takes place the \( T \) action is free. \bigskip The above statements are formal --- in order to carry out the programme mentioned at the beginning of this section we need to provide an analytical framework. In particular we need to work out a suitable stratified hyperk\"ahler metric. We conclude by discussing some of the issues in defining such a metric. \medskip As above, we fix a Cartan algebra \( \lie t \) of the Lie algebra \( \lie k \) of the compact group \( K \). We consider quadruples \( (T_0,T_1, T_2,T_3) \), where each \( T_i \) takes values in \( \lie k \), satisfying the Nahm equations and defined on the half line \( [0, \infty) \). We form a moduli space \( \tilde{\mathcal M} \) from the above data by quotienting out by gauge transformations that are the identity at \( t=0, \infty \). The standard \( L^2 \) metric on Nahm moduli spaces over the interval \( [0, \infty) \) is given by \begin{equation*} \parallel (X_0,X_1, X_2, X_3) \parallel^2 = \int_{0}^{\infty} \sum_{i=0}^{3} \langle X_i , X_i \rangle \, dt \end{equation*} for tangent vectors \( X=(X_0,\dots, X_3) \), where \( \langle, \rangle \) denotes the Killing form on \( \lie k \). Write our tangent vector as \begin{equation*} X_i = \delta_i + \frac{\epsilon_i}{t} + \ldots \qquad (i=1,2,3), \end{equation*} so \begin{equation*} \langle X_i, X_i \rangle = \langle \delta_i, \delta_i \rangle + 2 \frac{\langle \delta_i , \epsilon_i \rangle}{t} + O(\frac{1}{t^r}) \qquad(r>1). \end{equation*} The \( L^2 \) metric will not be finite except in tangent directions where \( \delta_i=0 \). Such directions correspond to those tangent to the Nahm matrices with a fixed commuting triple \( (\tau_1, \tau_2, \tau_3) \). We may, however, modify our metric following Bielawski \cite{Bielawski:hyper-kaehler} thus: \begin{equation*} \begin{split} \lvert (X_0,X_1, X_2, X_3) \rvert^2 &= \int_{0}^{\infty} \sum_{i=0}^{3} (\langle X_i , X_i \rangle -\langle X_i(\infty), X_i(\infty) \rangle \, ) dt\\ &\quad+ c \sum_{i=0}^{3} \langle X_i(\infty), X_i(\infty) \rangle, \end{split} \end{equation*} where \( c \) is a constant. This defines a symmetric bilinear form, though definiteness and nondegeneracy properties remain unclear. Now \( X_i(\infty) = \delta_i \), so we see that the Bielawski pseudometric is finite in directions such that \( \langle \delta_i, \epsilon_i \rangle =0 \) for all \( i \). So it is finite even in certain directions that correspond to infinitesimally changing the \( \tau_i \).
In particular, it is finite on the open dense set of \( \tilde{\mathcal M} \) consisting of all Nahm matrices where the triple \( (\tau_1, \tau_2, \tau_3) \) is regular, since, as remarked above, the \( \epsilon_i \) terms will be zero for directions tangent to this region. To analyse which directions are finite in general we need to use the relation \( [ \tau_i, \sigma_j]=0 \), \( (i,j=1,2,3) \). Differentiated, this gives the relation \begin{equation} \label{eq:commrelation} [\delta_i, \sigma_j] + [\tau_i, \epsilon_j]=0 \qquad (i,j=1,2,3) \end{equation} on tangent vectors. Suppose we stratify Nahm data by the centraliser \( C \) of \( (\tau_1,\tau_2,\tau_3) \). Let us consider a tangent vector to a stratum. For all elements in the stratum, and for all \( h \in \lie c = {\rm Lie} \;(C) \), we have \( [\tau_i, h]=0 \). It follows that for our tangent vector, \begin{equation} \label{eq:deltacomm} [\delta_i, h]=0 \qquad\forall h \in \lie c \end{equation} In particular, \begin{equation*} [\delta_i, \sigma_j]=0 \qquad (i,j=1,2,3). \end{equation*} Now our above relation \eqref{eq:commrelation} shows \begin{equation*} [\tau_i, \epsilon_j]=0 \qquad (i,j=1,2,3) \end{equation*} so \( \epsilon_j \in \lie c \). If \( \epsilon_i \) is a commutator \( [\xi, \eta] \), where \( \xi, \eta \in \lie c \), we have: \begin{align*} \tr \delta_i \epsilon_i &= \tr \delta_i \xi \eta - \tr \delta_i \eta \xi \\ &= \tr \delta_i \xi \eta - \tr \xi \delta_i \eta \\ &= 0 \end{align*} where in the last step we have used the above observation that \( [\xi, \delta_i]=0 \) since \( \xi \in \lie c \). By linearity, this also holds if \( \epsilon_i \) is a sum of commutators in \( \lie c \). But recall that \( [\sigma_1, \sigma_2]=\sigma_3 \) etc. So, passing to tangent vectors, \begin{equation*} [\sigma_1, \epsilon_2] + [\epsilon_1, \sigma_2] = \epsilon_3 \end{equation*} and cyclically. Each \( \epsilon_i \) is therefore indeed a sum of commutators in \( \lie c \), and hence we see \( \langle \delta_i,\epsilon_i \rangle =0 \). We deduce \begin{theorem} The Bielawski pseudometric is finite in directions tangent to the set of Nahm matrices with fixed centraliser of \( (\tau_1, \tau_2, \tau_3) \). \end{theorem} \begin{remark} If we stratify Nahm matrices by centraliser of the triple, we therefore obtain a metric in the stratified sense. Note that the top stratum for this stratification is just the set of Nahm matrices where the centraliser of the triple is the maximal torus. \end{remark} \begin{remark} In fact, the above calculation works provided \( [\delta_i, \sigma_j]=0 \) for all \( i,j \), as we know from above that \( \epsilon_i \) is a sum of commutators \( [\xi_k, \eta_k] \) with \( \xi_k = \sigma_k \). In particular if \( \tau_i=0 \) for all \( i \) then by \eqref{eq:commrelation} this condition holds, so the metric is finite on all tangent vectors, not just those tangent to the stratum. If \( K=\Lie{SU}(2) \) we only have two strata, the regular one and the stratum where all \( \tau_i \) are zero. It follows that the metric is everywhere finite in this case, which is consistent with the implosion being the flat \( {\mathbb H}^2 \). \end{remark} \begin{remark} Here we have been considering the Nahm equations on the half-line \( [0,\infty) \), motivated by the constructions of coadjoint orbits of \( K_\C \) in Kronheimer \cite{Kronheimer:semi-simple}, Biquard \cite{Biquard} and Kovalev \cite{Kovalev}.
However we expect the universal hyperk\"ahler implosion for \( K \) to be the complex-symplectic GIT reduction by the maximal unipotent group \( N \) of the cotangent bundle \( T^* K_{\C} \) of \( K_{\C} \), and in \cite{Kronheimer:cotangent} \( T^* K_{\C} \) is given a hyperk\"ahler structure as a moduli space of Nahm data on the interval \( [0,1] \). Comparison of this construction with the description in the last section of a hyperk\"ahler implosion \( X_{{\rm hkimpl}} \) when \( K=\Lie{SU}(n) \) suggests a formal picture of the universal hyperk\"ahler implosion for \( K \) which is similar to that above but uses Nahm data on the interval \( [0,1] \) instead of \( [0,\infty) \). \end{remark}
\section*{Highlights} \bigskip \begin{itemize}\setlength\itemsep{2mm} \item Compare four methods (keyword, science-citations, WIPO, USPTO approach) to identify AI patents. \item The four methods differ by growth and scale (60k-600k patents), and overlap in only 1.36\%. \item All methods identify patents with GPT characteristics (growth, generality, complementarity). \item The simple keyword-based method performs best at reproducing GPT characteristics. \item Conclusions about the growth and impact of AI may be sensitive to choice of approach. \end{itemize} \newpage \tableofcontents \newpage \section{Introduction} Artificial intelligence (AI) is often defined as the next general purpose technology (GPT) that will shape the technological and economic evolution of the 21st century \citep{ agrawal2019economic,cockburn20194, brynjolfsson2021productivity}. AI is expected to affect a wide range of industries through its ability to augment and complement extant products and processes \citep{brynjolfsson2017can, cockburn2018impact}. Researchers predict radical impacts on labor markets, with winners and losers \citep{trajtenberg2018ai}, and up to 47\% of existing jobs in the US becoming obsolete \citep{frey2017future}. On the other hand, AI optimists contend that, as firms across various industries up-skill workers and adapt their business to utilise AI, rising productivity will drive the expansion of output, transform occupational task structures, and stimulate long-run employment \citep{gries2018artificial, acemoglu2020ai}. For researchers interested in studying the impact of AI, it is crucial to have a robust measure of AI diffusion \citep{valdes2017patents, webb2019impact, alderucci2020quantifying}. However, as \citet{schumpeter2005joseph} emphasised, the ex-ante scientific measurement of radical novelty is one of the greatest challenges for scholars of technological change; with AI development and diffusion still at an early stage \citep{brynjolfsson2021productivity}, it is not yet clear how it will impact various economic variables. Patents provide a quantitative and forward-looking indicator of technological change and innovation. Detailed classification systems that associate patents with technological fields provide insights into the qualitative dimension of technological change \citep{jaffe2019patent}. However, as technological developments in AI become increasingly embedded throughout multiple sectors of the economy, existing classification schemes may be unfit to capture sophisticated technology networks linked by fundamental principles and mutual dependencies \citep{hall2006uncovering, petralia2020mapping}. In this paper, we make a systematic comparison of four classification approaches to identify AI inventions in patent data. Specifically, we compare methods based on (1) keywords, (2) scientific citations, (3) the World Intellectual Property Organization (WIPO) classification method, and (4) the United States Patent and Trademark Office (USPTO) approach. The keyword-based approach identifies AI patents by searching for AI-related terms in patent texts, while science citations capture patents through citation links to AI research. The WIPO approach classifies AI patents using CPC and IPC codes, keywords, and application fields of patents, whereas the USPTO approach utilises machine learning techniques. For each of the four classification approaches, we collect AI patents granted by the USPTO between 1990 and 2019.
In total, we identify 735,565 AI patents, yet the approaches differ strikingly in scale: 67,187 patents are captured by keywords, 178,004 by science citations, 158,652 by the WIPO, and 595,047 by the USPTO approach. Together, all four approaches agree on only 1.36\% of AI patents, and the pairwise overlap remains at or below roughly 20\% throughout the entire period. In addition to varying by scale, the four methods also suggest disparate trends of AI inventions over time. Whilst growth in AI patents captured by the science and USPTO approaches slowed down in recent years, growth in keyword and WIPO AI patents accelerated. Given the heterogeneity of the approaches and the lack of a clear definition of AI \citep{krafft2020defining}, we do not aim to judge which method performs best in identifying AI inventions. Instead, we ask to what extent each classification approach generates canonical GPT characteristics in patented AI technologies, as these are the economic properties many researchers are interested in. We evaluate each approach according to the three GPT characteristics proposed by \citet{bresnahan1995general, hall2006uncovering, petralia2020mapping}: \begin{enumerate} \item GPTs are engines of growth that have a wide scope for technical improvement, as seen in the historical GPTs of electricity and computing \citep{petralia2020mapping}. For each of our four classification approaches, we measure this feature by calculating the growth rates of each pool of patents, as well as the growth rates of patents that rely on AI inventions (as indicated by citation links). GPTs are expected to show positive or even accelerating growth rates over time. \item GPTs are characterised by their potential uses across a wide range of products and processes. To capture this feature, we examine the technological diversity of patent citations. Historically, GPTs such as electricity experienced delays of up to forty-two years before becoming more widely utilised and disseminating throughout the economy \citep{comin2018if}. Following \citet{hall2006uncovering}, we additionally measure the citation lag between AI and its subsequent inventions. \item GPTs strongly complement extant and novel technologies \citep{petralia2020mapping}, as exemplified by electricity's ability to transform and improve myriad production processes. We evaluate this feature by studying the diversity of co-classification of AI patents across different technology groups. A high diversity of co-classifications suggests that AI complements technological opportunities in a wide range of fields. \end{enumerate} In our comparison, all four classification approaches show that AI patenting is rising continuously over time, as captured by patent counts and share of AI patents. Moreover, each approach consistently demonstrates a slow-down in AI patenting from 1990 to the early 2000s (a so-called `AI Winter') followed by a period of increased patenting beginning in 2010 \citep{stuart2003artificial, klinger2020narrowing}. Ultimately, each of the four patent groups reproduces key GPT characteristics --- but to different extents. Despite capturing the smallest number of patents, the keyword sample shows the highest average growth rate in the post-2010 period and most strikingly demonstrates AI's wide-ranging usefulness across several technology fields.
Patents identified by the USPTO approach most strongly show the feature of innovation complementarity from 1990-2010, while the other three approaches show more heterogeneous trends. However, when accounting for the differences in patent counts, innovation complementarity is most clearly demonstrated in the WIPO patents, closely followed by science and keywords. To validate our measures, we evaluate the same three GPT characteristics for other technologies that were proposed as GPT candidates in the post-1990 period and find similar trends to those seen in AI patents. All of them outperform an average US patent, which confirms their GPT nature. The results of our study indicate that the simple keyword approach generates key GPT features most strikingly. The other approaches are conceptually and computationally more complex: the USPTO approach relies on sophisticated machine learning (ML) methods, the science approach leverages additional data sources, and the WIPO method relies on the symbolic combination of technology codes and keywords. However, \citet{bianchini2020deep, klinger2020narrowing, jurowetzki2021privatization, whittaker2021steep} contend that AI research is being rapidly privatised and narrowed towards deep learning (DL) at the expense of other relatively unexplored domains. This narrowing may explain why a method based on human-selected keywords related to DL produces the GPT characteristics so strikingly: the rise in patents may reflect the most recent trends in applied AI research and technology. But it remains open to what extent this method will be able to cover the full scope of future AI inventions in fields that are, at present, relatively unexplored but represent relevant subfields of AI. Our results have important implications for innovation policy and research, as they demonstrate how the choice of AI classification approach can impact the outcome of economic and innovation policy analyses that track inventions through the economy. As an example, in 2021 the UK government launched a package of economic, policy, and regulatory reforms, dubbed the National Artificial Intelligence Strategy, to make the UK a `global AI superpower' and plan for the long-term needs of AI technologies \citep{NationalAI}. Given that some classification methods suggest that AI inventions are plateauing (science and USPTO), while others show that AI patenting is increasingly taking off (keywords and WIPO), we recommend using multiple techniques to counteract the dependence of policy conclusions on the choice of the method. The rest of this paper is structured as follows: Section \ref{sec:lit} describes previous attempts to measure GPTs in patent data while Section \ref{sec:GPTs} motivates the GPT measures we use to compare our classification methods. Section \ref{sec:data_method} provides details on our data and methods, while Section \ref{sec:approaches} describes our four AI classification approaches. We present our results in Section \ref{sec:results} and discuss the implications of our findings in Section \ref{sec:discussion}. Section \ref{sec:conclusions} concludes. \section{Literature Review} \label{sec:lit} In this section, we summarise the characteristics of a GPT and how they have been studied in the literature. Then, we review the evidence claiming AI to be a GPT. \subsection{What is a GPT?} \citet{bresnahan1995general} introduce the concept of a GPT as a technology which pervades the economy and spurs innovation, both endogenously and through complementarities.
Well-established GPTs, such as electricity and ICT, drive economic growth and, thus, much research has focused on their early identification. In general, GPTs are said to share three characteristics: pervasiveness throughout the economy, capacity for rapid intrinsic improvement, and the ability to spawn spillover productivity across sectors through complementarities \citep{bresnahan1995general, lipsey2005economic}. The first of these criteria refers to GPTs' ability to engender new methods of production or innovation throughout the economy. Due to their pervasiveness, GPTs inspire a wave of technological innovation, as they open the gateway to new methods of production and invention. Many impactful technologies, such as nuclear power and fMRI, lack the generality required to pervade a significant number of sectors \citep{ agrawal2018finding, brynjolfsson2019artificial}. The second criterion refers to GPTs' inherent capacity to evolve through a series of rapid endogenous improvements. In contrast, the final criterion captures how GPTs spawn spillover gains in productivity across a range of industries through their ability to augment and complement extant products and processes. Many GPTs act as agents of creative destruction by restructuring established processes throughout the economy \citep{lipsey2005economic}. Altogether, these criteria necessitate a longer time period for GPTs to evolve, spread and establish their full impact \citep{lipsey2005economic}. Once they have iterated to their most impactful permutation, GPTs then rely on supporting infrastructure and secondary inventions to restructure production processes throughout the economy. \subsection{AI as a GPT} Our study focuses on the range of technical advances most recently dominated by deep neural networks and, more generally, machine learning. These technologies are rooted in decades of research and inventions on making machines solve complex, human-like, and other tasks through automation. We will loosely refer to these technologies as AI, since AI includes much more than the techniques from the last few years \citep{klinger2020narrowing}. As we will see, several different ways of making the concept of AI more specific are used in the literature. Although still under debate, many economists already model AI as a GPT for improved prediction and pattern recognition across a range of complex and non-linear contexts \citep{agrawal2018finding, crafts2021artificial}. Previous discussions of AI as a GPT have used patent data \citep{cockburn2018impact} and firm-level non-patent data \citep{klinger2018deep}, which we detail below. Given its non-specific field of application, AI can be viewed as a powerful technology for the latest step in the automation process and a novel mechanism for hypothesis formation and data mining \citep{aghion2018artificial}. Indeed, some economists model AI as a ‘general purpose meta-technology’ \citep{Romer2008, agrawal2018finding} that has the capacity to revolutionise the process of discovery and invention, much like the patent. However, the development of modern AI has been characterised by alternating periods of optimism and subsequent `AI winters'. The roots of modern AI stem back to the 1950s, when a series of inventions later led to breakthroughs in the ability of computers to perform reasoning and solve complex problems \citep{stuart2003artificial}.
High expectations coupled with limited computing power and funding withdrawal saw a slowdown in AI innovation during both the late 1970s and the 1990s, forming periods now referred to as `AI winters' \citep{stuart2003artificial}. \section{Measuring GPTs} \label{sec:GPTs} Currently, there exist a number of alternative metrics to capture GPT characteristics. Given the lack of consensus, many believe GPTs are better identified as sophisticated networks of technologies sharing `underlying principles and mutual dependencies' \citep{petralia2020mapping}. Historically, patent growth rates have been used to capture the endogenous elaboration of technologies similar to GPTs \citep{ moser2004electricity, jovanovic2005general, petralia2020mapping}. \citet{petralia2020mapping} uses patent growth rates, co-classifications, and a text-mining algorithm to successfully reproduce the canonical GPTs contained within the broad USPTO categories of electricity and computer communication. However, the author finds great heterogeneity within these pools of innovations, which contain both dynamic and stagnant innovations. Moreover, the author notes that the identification of more diffuse and diverse GPTs, such as AI, may require `bottom-up' classification approaches using lower levels of aggregation that can scan multiple technological classes for common principles. \citet{hall2006uncovering} attempt to capture GPTs by measuring the patent growth rates and unbiased generality measures for the most-cited US patents and the patents which cite them. The authors also find great heterogeneity between patents, which underscores the need for multiple metrics to satisfactorily capture GPTs. In the following subsections, we motivate our selection of patent measures for GPT characteristics and connect each with empirical facts about AI's dissemination and the three canonical GPT features. \subsection{Growth} For more than a decade, AI methods have become more powerful and complex as a result of new technical methods, increased data availability, and improved hardware. Consequently, AI innovation has shifted away from specific application-based methods to more generalised learning-orientated systems \citep{cockburn20194}. With this refinement, many sub-fields of AI, such as image and text recognition, have seen remarkable improvements in performance \citep{brynjolfsson2021productivity}. This is reflected in the exponential growth of patenting activity referencing terms such as machine learning and deep learning (see Supplementary Information, Figure \ref{keywords_study_learning}). Based on these observations, we measure improvements in AI via the growth rates of each group of patents and changes to their share of all patents, from 1990 to 2019 \citep{hall2006uncovering, petralia2020mapping}. We also look at the growth of the patents that cite such technologies: the `GPT hypothesis' in previous work has been that inventions that build on GPT-like technologies should spawn more new inventions \citep{hall2006uncovering}. Let $N_{i,t}$ denote the number of patents in a group $i \in \{ \text{keyword}, \allowbreak \text{science}, \text{WIPO}, \text{USPTO} \}$ at time $t$, indexed by year. We compute the growth rate as \begin{align} \frac{N_{i,t}-N_{i,t-1}}{N_{i,t-1}}.
\label{eq:growth} \end{align} \subsection{Generality} \label{sec:methods_generality} AI has already begun to pervade a myriad of industries as it expands beyond computer science into such diverse fields as structural biology, transport, and imaging \citep{cockburn20194}. In the early 1990s, AI methods remained largely confined to computer science. However, over the past decade, the majority of patents referencing these technologies have appeared in secondary domains \citep{cockburn20194}. Based on the work of \citet{trajtenberg1997university}, we capture this stylised fact through the ‘generality’ of patents, measuring the dissemination of AI across different technology fields. To do so, we build on patent citation data and assume that a forward citation link entails information about the use of a patent in a subsequent innovation \citep{jaffe2019patent}. To operationalise wide usefulness, we rely on a modified version of the generality metric by \citet{trajtenberg1997university} and \citet{hall2006uncovering} given by \begin{align} 1 - \sum_j \left( \frac{\#cites_{ij,t}}{\sum^{N_j}_{j=1} \#cites_{ij,t}} \right)^2 \label{eq:generality} \end{align} where $\#cites_{ij}$ is the sum of citations to patents labelled as AI by classification approach $i$ from technology class $j$, whereby we use the one-digit CPC section level as the class. The number of citations $\#cites_{ij}$ excludes citations within the same class; $N_i$ is the number of patents in approach $i$ and $N_j$ is the number of different CPC classes. Our approach differs from that of \citet{trajtenberg1997university} as we apply the method to each group of AI patents belonging to a variety of CPC sections. For the main analysis, we focus on one-digit CPC sections, as these are more technologically distant than three- or four-digit classes and subclasses, whose results we also report. Our generality measure is calculated for the entire group of patents in $i$ with $N_i$ unique patents. To address concerns that this metric may be affected by differences in the group size, we additionally calculate patent-level metrics given by the average number of citing classes, i.e. \begin{align} \frac{1}{N_{i,t}} \sum^{N_{i,t}}_{p=1} \sum^{N_d}_{j=1} \mathds{1}(\#ncites_{p,j,t} \geq 1) \label{eq:avg_cites} \end{align} where $\mathds{1}(\#ncites_{p,j,t} \geq 1) = 1$ if patent $p$ in $i$ is cited by at least one patent in technology class $j$ out of the total number of classes $N_d$ at level $d$, where $d \in \{1,3,4\}$ denotes the number of digits in the code. Again, we exclude within-class citations and present results at both the one-digit CPC section level ($d=1$) and higher orders of disaggregation ($d=3$ or $d=4$). \subsection{Complementarity} Thirdly, GPTs augment existing products and processes in a range of novel contexts to generate productive complementarities throughout the economy \citep{bresnahan1995general, petralia2020mapping}. AI technologies have been shown to complement and rely on secondary innovations, such as cloud computing and big data, which increase access to larger and more affordable data-sets \citep{brynjolfsson2019artificial}. Furthermore, because diverse AI systems share similar underlying structures and can share information, advances in one application of ML, such as machine vision, can spur innovations in other fields, such as autonomous vehicles. Following the approach of \citet{petralia2020mapping}, we measure the extent to which AI patents enhance and supplement other innovations through the diversity of their technology class co-occurrences.
For our analysis, we calculate the share of three- and four-digit CPC codes ($d=3,4$) assigned to the patents in each group of AI patents. Specifically, we calculate the following \emph{diversity measure} over time: \begin{equation}\label{eq:eq_diversity} \frac{\#CPCs_{i,d,t}}{N_{d}} \end{equation} where $i$ denotes each of the four patent classification approaches, $d$ is the classification level and $t$ is the year. $\#CPCs_{i,d,t}$ is the number of distinct $d$-digit CPC codes found in use for the patents of group $i$ in year $t$, and $N_d$ is the total number of CPC codes with $d$ digits. Note that there are 136 and 674 CPC codes at the three- and four-digit level, respectively (according to the February 2022 version of CPC codes). As the above measure could be biased by patent volume, we also calculate the average number of distinct one-, three-, and four-digit CPC codes per patent per year. The \emph{diversity per patent} over time is \begin{align} \frac{1}{N_{i,t}} \sum_p \#CPCs_{p,i,d,t} \label{eq:diversity_per} \end{align} where $d$ denotes the classification level given by one-, three-, or four-digit CPC codes. The time series graphs for the latter measures depict how an average patent's complementarity across technology sections evolves over time. \section{Data} \label{sec:data_method} We apply our methods to all patents granted by the USPTO from 1990-2019. For the analysis, we create four groups of AI patents, one for each classification method, and complement each with supplementary information. From PATSTAT (Spring 2021 edition, \citet{patstat_data}) we sourced patent grant dates and from the USPTO we downloaded the Master classification file (April 2021 version) which contains CPC classifications of patents.\footnote{\url{https://bulkdata.uspto.gov/data/patent/classification/cpc/}} We added further data on patent-to-patent citations and patent titles from GooglePats obtained in an earlier project \citep{hotte2021rise}. For our analysis, we supplemented the citation data with citation year and the technology classes of both the citing and cited patent. In doing so, we obtained networks which represent citations from technology fields at different levels of aggregation to our four sets of AI patents. We also made use of the Reliance on Science database \citep{marx2020reliance} for citation data between patents and science. \FloatBarrier \section{Measuring AI} \label{sec:approaches} In our analysis, we compare four methodologically and conceptually distinct approaches to identifying AI innovation in patents based on (1) keyword search, (2) science citations, (3) the WIPO, and (4) the USPTO method. Here, we introduce these classification approaches in detail. \subsection{Keyword Search} \label{sec:approaches_keywords} Our first classification technique is a straightforward approach based on keyword search, in which researchers use their discretion to develop a set of terms that reflect the most recent developments in AI. In this paper, we use the set of keywords provided in the appendix of \cite{cockburn2018impact}.\footnote{While we use the keywords from \cite{cockburn2018impact}, we do not fully replicate their approach. They use two subsets of patents: (1) patents classified by the USPC code 706 (Artificial Intelligence) and 901 (Robots); and (2) patents identified by searching titles for the selected keywords. Here we use patents identified by keyword search only, but we extend our search to match keywords also from abstract, claims, and description.
We do not use the USPC classification codes since the WIPO method takes a more comprehensive approach combining keywords with IPC or CPC classifications. Also, with our extensive keyword search, we miss only a few patents which are in the first group (i.e., USPC 706 and 901) but not in the second group of \cite{cockburn2018impact}.} The keywords used in this paper focus on three sub-fields of AI: symbolic systems, learning algorithms, and robotics (see Table \ref{tab:keywords} for the full list of keywords). According to the authors, the symbolic systems represent `complex concepts through logical manipulation of symbolic representations' and include `natural language processing' and `pattern recognition'. Learning algorithms include core analytic techniques such as neural networks, deep learning, and machine learning. The last category, robotics, is related to automation or applications of AI (e.g. computer vision and sensory networks). We search for these keywords in patent titles, abstracts, claims, and descriptions using USPTO data. We match the resulting list with patents granted by the USPTO between 1990 and 2019. The main advantage of the keyword approach is its simplicity and ease of implementation. Moreover, carefully chosen keywords can capture recent changes in the AI field. However, the success of this approach depends on the judgement and familiarity of the researcher to the field of AI. Missing important keywords could lead to under-representation of a sub-field. Our approach yields 67,187 patents. \subsection{Science Citations} \label{sec:approaches_science} This classification approach harnesses the scientific basis of patents. In particular, we classify a patent as an AI patent if it makes at least one citation to a scientific paper in the scientific field of `Computer Science; Artificial Intelligence' (short, AI paper) as categorised by the Web of Science (WoS). Scientific citations are added to patent documents for multiple reasons such as describing the technological content of the invention or distinguishing the legal claim from other publicly available knowledge \citep[see ][]{narin1995linkage, meyer2000does, tijssen2001global,ahmadpoor2017dual, marx2019reliance_working_paper}. A citation link to an AI paper indicates that the patent is technologically related to AI because it builds on scientific advancements in this field. A limitation of this approach is that it only identifies AI patents within the subset of patents that make citations to science. For this method, we use data from the Reliance on Science (RoS) database \citep{marx2019reliance_working_paper, marx2020reliance, marx2021v30} which comprises a mapping from patents to scientific articles indexed in Microsoft Academic Graph (MAG) \citep{sinha2015overview}. Scientific articles are tagged by the WoS fields indicating the field of science into which an article is grouped.\footnote{Note that this assignment was made at the paper level using a probabilistic mapping which is different from the journal-based categorization of Clarivate Analytics (Web of Science).} The citation links in the RoS database cover citations made by both the patent applicant and examiner, as well as citations indicated at both the front page and body of the patent document. \citet{marx2019reliance_working_paper} identified citations through a sequential probabilistic text recognition technique. Each citation link is tagged with a confidence score indicating the reliability of the matching approach. 
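To make the citation-based selection concrete, the following minimal sketch shows how patents with at least one sufficiently reliable citation to an AI paper could be extracted. The table and column names (\texttt{citations} with \texttt{patent\_id}, \texttt{paper\_id} and \texttt{confidence}; \texttt{papers} with \texttt{wos\_field}) are illustrative assumptions rather than the actual RoS schema, and the reliability threshold mirrors the one used in our study (see below).
\begin{verbatim}
# Sketch of the science-citation classification (illustrative schema,
# not the actual Reliance on Science data layout).
import pandas as pd

AI_FIELD = "Computer Science; Artificial Intelligence"

def science_ai_patents(citations: pd.DataFrame,
                       papers: pd.DataFrame,
                       min_score: float = 3) -> set:
    """Patent ids with at least one reliable citation to an AI paper."""
    ai_papers = set(papers.loc[papers["wos_field"] == AI_FIELD, "paper_id"])
    reliable = citations[citations["confidence"] > min_score]
    hits = reliable[reliable["paper_id"].isin(ai_papers)]
    return set(hits["patent_id"])
\end{verbatim}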
In the RoS data, roughly one third (34\%) of all US patents granted in 2019 have at least one citation to science. In our study, we identified AI papers by their WoS categories and extracted all patents with at least one citation link to an AI paper. We kept only citation links with a reliability score greater than three, which corresponds to a precision rate of 99.5\% and a recall of 93\%. This approach yields 178,004 AI patents. \subsection{World Intellectual Property Organization (WIPO) Method} \label{sec:approaches_wipo} The WIPO methodology for classifying AI patents was published in 2019 and validated by a team of patent experts \citep{wipo2019technologybackground,wipo19}. The aim behind the methodology is to capture three aspects of AI innovation: (1) core AI techniques (deep learning, other learning methods, various types of logic, clustering, etc.); (2) functional applications of AI that can be used to simulate human-like cognitive capacities (such as vision, language, or decision-making); and (3) end-user application fields (such as automation in business, health, or military). This methodology is based on both a keyword search of patent texts and the use of patent classification codes (CPC and IPC). In this technique, some patents are classified based on only a subset of the technological codes, or keywords, whilst others are identified by a combination of both. The list of keywords used in this approach covers core AI methods as well as computing and mathematical concepts used in such technologies. These keywords are matched to the text in the patent titles, abstracts, and claims. This approach identifies 158,652 patents. \subsection{United States Patent and Trademark Office (USPTO) Classification} \label{sec:approaches_uspto} The USPTO approach uses a supervised machine learning (ML) classifier to identify AI patents \citep[see ][]{giczy2021identifying}. This ML model is trained to classify eight components of AI technologies, namely: machine learning, evolutionary computation, natural language processing, speech, vision, knowledge processing, planning/control, and AI hardware. The ML model is trained on the abstracts and claims of a seed (positive set) and an anti-seed (negative set). The seeds are chosen carefully for each respective component by taking an intersection of CPC, IPC, and USPC codes, as well as Derwent's World Patents Index\textsuperscript{TM}. The seeds are expanded based on patent families, CPC codes, and citations to identify all patents linked to the seed set. The anti-seed set is selected randomly from all remaining patents. For training, each text is pre-processed and embedded via the Word2Vec algorithm. The ML models also encode backward and forward citations in a citation vector. The predictions from the ML model are further validated using a small subset of patents that are manually examined. Published in August 2021, the resulting dataset contains 13.2 million USPTO patents and pre-grant publications issued or published between 1976 and 2020. For consistency with our other approaches, we only consider patents granted between 1990 and 2019 and exclude pre-grant publications. The remaining data yields 595,047 patents. \section{Results}\label{sec:results} In this section, we report our findings that show to what extent each patent classification approach generates GPT characteristics for AI inventions. We begin by investigating the volume of patents identified by each of the four classification methods and how these evolve over time.
We then examine a number of metrics related to the markers of GPTs, namely: growth, generality, and complementarity. \subsection{Volume and Time Trends}\label{sec:trends} Table \ref{tab:patent_numbers} shows the number of patents identified by each of the four approaches. Summing across all four classification methods, we identify a total of 735,565 patents as AI. Notably, the four methods differ substantially by volume. The science and WIPO approaches identify 178,004 and 158,652 patents, respectively, whilst the keyword-based pool is much smaller with only 67,187 patents. The USPTO approach, in contrast, is vast in scope and identifies 595,047 AI patents. \begin{table}[H] \centering \caption{Number of Patents Identified by Each Approach} \label{tab:patent_numbers} \begin{threeparttable} \begin{tabular}{lcccc} \hline\hline & Keyword & Science & WIPO & USPTO \\[0.5em] \hline No. of patents & 67,187 & 178,004 & 158,652 & 595,047 \\[0.5em] \hline\hline \end{tabular} \begin{tablenotes}[flushleft]\footnotesize \item Notes: The table reports the number of unique utility patents granted by the USPTO in 1990-2019. \end{tablenotes} \end{threeparttable} \end{table} Figure \ref{fig:timeseries} shows the pace of AI innovation as suggested by each of the four approaches. Figure \ref{fig:timeseries_counts} plots the raw count of patents, which again shows the extent to which the volume of USPTO patents dominates. Represented as a share of all granted patents in the US, the USPTO approach identifies roughly 16.6\% of all patents as AI in 2019 (Figure \ref{fig:share_all_patents}). For all approaches, we find that this share increased over time from 1-2\% in the 1990s to 3-17\% in 2019. \begin{figure}[H] \centering \caption{AI Patents by Year (1990-2019)} \label{fig:timeseries} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.95\textwidth]{inputs/overall/ts_counts_all_W2019.pdf} \caption{Number of AI patents} \label{fig:timeseries_counts} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.95\textwidth]{inputs/overall/shares_ofall_W2019.pdf} \caption{Share of AI patents} \label{fig:share_all_patents} \end{subfigure} \footnotesize\justifying Notes: Figure \ref{fig:timeseries_counts} and Figure \ref{fig:share_all_patents} show the evolution of AI patents over time as identified by the four different approaches. Panel (a) reports patent counts by grant year and panel (b) reports AI patents as a share of all US patents granted in the same year. \end{figure} To quantify the overlaps and differences between each pair of approaches, we compute Jaccard similarities and illustrate their evolution over time.\footnote{The Jaccard similarity for two sets of patents is given by $$J(A,B) = \frac{|\mbox{patents in both A and B}|}{|\mbox{patents in union of A and B}|} = \frac{|A \cap B|}{|A \cup B|}$$ with $J(A,B) \in [0,1]$ where $J(A,B) = 0$ if both sets do not overlap and $J(A,B) = 1$ if both sets are identical. In other terms, the overlap between the sets can range from 0\% to 100\%.} In Figure \ref{fig:jaccard_similarities}, we plot the evolution of the Jaccard similarity for pairwise comparisons of different methods. All approaches show a relatively low pairwise overlap, at or below roughly 20\%. The WIPO approach shows the highest agreement with each of the other approaches. The WIPO-keyword pair and the science-USPTO pair experienced the most pronounced increase in similarity over the period of study.
The keyword approach demonstrates the lowest Jaccard similarity to the other classification approaches. Looking across all patents, we find that only 10,859, or 1.36\%, of all unique granted patents are uniformly identified as AI patents by all four approaches. \begin{figure}[H] \centering \caption{Jaccard Similarities by Year} \label{fig:jaccard_similarities} \includegraphics[width=0.8\textwidth]{inputs/overall/jaccard_similarities.pdf} \footnotesize \justifying Notes: This figure shows the evolution of the Jaccard similarities computed for each year in our dataset. We see that the four approaches agree, at best, on 20\% of their overall patents. \end{figure} \subsection{Growth} We next investigate the GPT feature of a high intrinsic growth rate, defined as the rate of change of patents between consecutive years. Figure \ref{fig:growth_rates} shows the growth rates, with $1$ corresponding to $100\%$. We also include lowess \citep{cleveland1979robust} (local regression) smoother fits to the data in each plot to observe the overall pattern. In the figure, we find both similarities and differences. First, each group of patents produces positive growth rates in most years, as anticipated by our earlier discussion of GPTs. Second, each classification approach demonstrates a dip in growth during the early 2000s before taking off again in recent years. However, a comparison with control groups shows that patenting decreased across many sectors during this period (see Figure \ref{fig:growth_rates_bench} in Appendix \ref{appendix_bench_growth}). Third, two of the smaller groups produced by the keyword and WIPO approaches capture an accelerating growth rate in the last few years, in contrast to the larger USPTO pool. Taken together, we see positive growth rates and differences in time trends between the approaches. In the Supplementary Information (Figure \ref{fig:growth_rates_bench}), we study control groups that have been discussed in the literature as potentially demonstrating characteristics of GPTs. Figure \ref{fig:growth_rates_bench} shows that the AI approaches display growth rates at least as high as, or higher than, those of several other control groups. We additionally perform a series of significance tests using a Wilcoxon signed-rank test to indicate whether the observed differences in average growth rates are significant (Appendix \ref{appendix:significance_growth}). We find that the growth patterns of the AI samples cannot be statistically distinguished from our GPT benchmark candidates, although we observe the patterns of growth to be significantly higher than those of average patents.\footnote{Note that these tests rely on a very small number of observations and on growth patterns of the four AI samples that fluctuate and differ across the three decades.} Generally, we find that the differences in the average growth rates are not significant across different AI samples, except for the USPTO approach which shows significantly lower growth rates.
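As a concrete illustration of Equation (\ref{eq:growth}), the annual growth rates underlying Figure \ref{fig:growth_rates} can be computed directly from yearly patent counts. The following minimal sketch uses pandas and purely illustrative numbers rather than our actual data.
\begin{verbatim}
# Sketch of the growth-rate calculation (N_t - N_{t-1}) / N_{t-1}.
import pandas as pd

def growth_rates(counts: pd.Series) -> pd.Series:
    """Annual growth rates from a year-indexed series of patent counts."""
    counts = counts.sort_index()
    return counts.diff() / counts.shift(1)

# Illustrative counts for one classification approach (not actual data):
counts = pd.Series({2016: 4000, 2017: 4600, 2018: 5400, 2019: 6200})
print(growth_rates(counts))  # 2017: 0.15, 2018: ~0.17, 2019: ~0.15
\end{verbatim}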
\begin{figure}[H] \centering \caption{Growth of Patents by Year} \label{fig:growth_rates} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.95\textwidth]{inputs/growth/matrix_aigroups_2x2_rates_key.pdf} \caption{Keyword} \label{fig:growth_rates_key} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.95\textwidth]{inputs/growth/matrix_aigroups_2x2_rates_sci.pdf} \caption{Science} \label{fig:growth_rates_sci} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.95\textwidth]{inputs/growth/matrix_aigroups_2x2_rates_wip.pdf} \caption{WIPO} \label{fig:growth_rates_wip} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.95\textwidth]{inputs/growth/matrix_aigroups_2x2_rates_usp.pdf} \caption{USPTO} \label{fig:growth_rates_uspto} \end{subfigure} \footnotesize\justifying Notes: The four AI approaches have different growth patterns over time. The averages for all are positive, but the keyword and WIPO approach both have increasing growth rates. \end{figure} \begin{table}[H] \centering \caption{\label{table_growth_summary}Average Growth Rates (1990-2019)} \begin{threeparttable} \begin{tabular}{lcccc} \hline\hline & Keyword & Science & WIPO & USPTO \\ \midrule Avg. growth rate & 0.16 & 0.15 & 0.14 & 0.13 \\ \hline\hline \end{tabular} \begin{tablenotes}[flushleft]\footnotesize \item \end{tablenotes} \end{threeparttable} \end{table} Figure \ref{fig:citing_AI} shows that the patents citing AI (excluding those patents that themselves are AI by the respective classification approach) have similar profiles over time. The size ranking also corresponds with the counts and shares in previous plots, with growth rates that have, on average, been positive. This gives support to each approach in capturing a growing number of spillovers. The significance tests in \ref{tab:appendix_growth_citing_AI_significance} indicate that the differences between the triple keyword-science-WIPO cannot be statistically distinguished, except for keyword showing the slowest uptake in the 1990s, but to take off thereafter. However, all three approaches consistently score significantly higher than the group of patents citing to USPTO AI patents. Figure \ref{fig:citing_AI_growth_rates} suggests a high similarity in the time profiles and growth series. This suggests that the AI definitions capture different aspects of a larger group of similar technologies. \begin{figure}[H] \centering \caption{Patents Citing AI} \label{fig:citing_AI} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.95\textwidth]{inputs/growth/cciting_AI_counts_noselfcitations.pdf} \caption{Counts} \label{fig:citing_AI_counts} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.95\textwidth]{inputs/growth/citing_AI_rates_noselfcitations.pdf} \caption{Growth rate (1=100\% growth)} \label{fig:citing_AI_growth_rates} \end{subfigure} \footnotesize\justifying Notes: Panel (a) shows actual number of AI citing patents. Panel (b) shows growth rates plotted from 1995 and onwards. \end{figure} \FloatBarrier \subsection{Generality} \label{sec:results_generality} We use two indicators to evaluate the generality of AI innovations, i.e., the extent to which these inventions are used in diverse technology fields. First, we assess the generality index as defined in Equation (\ref{eq:generality}). 
In Table \ref{tab:generality} we record this index value for all approaches at different levels of aggregation, calculated over the entire time period. The keyword approach shows the highest level of generality across all levels of aggregation. Hence, citations from other technology fields to keyword AI patents are most equally distributed across different technology fields. The WIPO and science produce similar generality indices, with a slightly higher value for science patents. The USPTO group produces the lowest generality index across all CPC levels. \begin{table}[H] \centering \caption{Average Generality Index (1990-2019) } \label{tab:generality} \begin{threeparttable} \begin{tabular}{lcccc} \hline \hline & Keyword & Science & WIPO & USPTO \\[0.25em] \hline 1 digit & 0.76 & 0.73 & 0.72 & 0.68 \\[0.25em] 3 digit & 0.91 & 0.87 & 0.87 & 0.84 \\[0.25em] 4 digit & 0.96 & 0.95 & 0.94 & 0.93 \\[0.25em] \hline\hline \end{tabular} \begin{tablenotes}[flushleft]\footnotesize \item Notes: The generality index is defined as share of citations to patents in different CPC classes at different aggregation levels (see \ref{sec:methods_generality}). Citations within the same class are excluded. \end{tablenotes} \end{threeparttable} \end{table} \begin{table}[H] \centering \caption{Average Number of Citing CPC Classes (1990-2019)} \label{tab:avg_citations_all} \begin{threeparttable} \begin{tabular}{lcccc} \hline \hline & Keyword & Science & WIPO & USPTO \\[0.25em] \hline 1 digit & 2.15 & 1.83 & 1.82 & 1.68 \\[0.25em] 3 digit & 5.18 & 4.14 & 4.39 & 3.91 \\[0.25em] 4 digit & 8.90 & 7.41 & 8.04 & 7.13 \\[0.25em] \hline\hline \end{tabular} \begin{tablenotes}[flushleft]\footnotesize \item Notes: This table shows the numbers of different CPC classes making a citation to an average patent of the respective group. Citations within the same class are excluded. \end{tablenotes} \end{threeparttable} \end{table} In Table \ref{tab:avg_citations_all}, we present the second GPT measure of wide usefulness based on the average number of unique technology classes citing to an AI patent. For this metric, we calculate the mean annual average number of citing classes. This accounts for the different number of annual patents and avoids an over-representation of the generality of patents in more recent years, as the number of patents is steeply increasing over time. At all levels of aggregation, the keyword patents are the most general. At the one-digit level, science scores second, closely followed by WIPO. WIPO scores higher at the three- and four-digit level. The USPTO patents demonstrate the lowest number of unique one-, three-, and four-digit patent citations. In Figure \ref{fig:generality_AI} and \ref{fig:generality_AI_scaled} we show the evolution of the generality index over time, with \ref{fig:generality_AI_scaled} showing the z-scaled version. The figures confirm that the keyword patents are consistently the most general during the whole period, with the science-based and WIPO approach producing similar, but lower, results. Again, the USPTO patents show the lowest generality. Towards the end of the time period, the decline in generality of the keyword and USPTO methods needs to be considered with caution, as the absolute number of citations of recently granted patents is lower when the time window to be cited is small. 
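To illustrate how the generality index of Equation (\ref{eq:generality}) is obtained from citation data, the following minimal sketch computes the index from a tally of citations received by an AI patent group, broken down by citing CPC class. The class labels and counts are illustrative, not our actual data, and within-class citations are assumed to have been removed beforehand.
\begin{verbatim}
# Sketch of the generality index: 1 - sum_j (share of citations from class j)^2.
def generality_index(citing_class_counts: dict) -> float:
    total = sum(citing_class_counts.values())
    if total == 0:
        return 0.0
    shares = (n / total for n in citing_class_counts.values())
    return 1.0 - sum(s * s for s in shares)

# Illustrative citation counts by citing one-digit CPC section (not actual data):
print(generality_index({"A": 10, "B": 30, "C": 40, "G": 20}))  # 0.70
\end{verbatim}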
\begin{figure}[H] \centering \caption{Generality Index at the one-digit CPC-section level} \label{fig:wide_usefulness_AI} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.95\textwidth]{inputs/generality/new_tt/timeseries_generality_by_CPC_remove_within_class_G_1_digit_AI.pdf} \caption{Generality index} \label{fig:generality_AI} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.95\textwidth]{inputs/generality/new_tt/timeseries_generality_by_CPC_remove_within_class_G_1_digit_AI_scaled.pdf} \caption{Z-score scaled generality} \label{fig:generality_AI_scaled} \end{subfigure} \footnotesize\justifying Notes: The z-scored value equals the level of the generality index minus its average across the four approaches divided by the standard deviation for each year. \end{figure} We also calculate the generality index for all patents as well as a set of benchmark patents that have been deemed as GPT in the literature. Table \ref{tab:generality_benchmark} shows the generality index across all patents is higher than AI categories. Among other groups, biochemistry/genetic engineering, nanotechnology, and climate innovations related patents have high generality score. Figure \ref{subfig:generality_generality_benchmark} shows that the time-series pattern of the generality score is quite stable except for some end-of-the-period fluctuations. For our second measure of generality we find that AI patents show more generality than the entire patent universe at any level of CPC codes (Table \ref{tab:generality2_benchmark}). Biochemistry/genetic engineering and climate innovations-related patents show high patent-level generality. These numbers show that our generality measures are working well, at least for some of the known GPT candidates. In the Appendix \ref{appendix:significance_generality}, we show the results of a series of significance tests for the differences in the time series results. These tests confirm that highest score of the keyword and the lowest of the USPTO method are statistically significant, while the differences between generality scores of the science and WIPO approaches are not significant at the one-digit level. Furthermore, we provide additional results on the generality of the `descendants' of AI patents, i.e., those patents that cite AI patents, but are not themselves AI (Appendix \ref{app:add_results_generality}). These patents produce a similar pattern of results between the classification approaches, although the differences are smaller. \begin{table}[H] \centering \caption{Average Number of Citing CPC Classes (1990-2019): Cited Patents} \label{tab:avg_citations_cited} \begin{threeparttable} \begin{tabular}{lcccc} \hline \hline & Keyword & Science & WIPO & USPTO \\[0.25em] \hline 1 digit & 2.71 & 2.42 & 2.41 & 2.26 \\[0.25em] 3 digit & 5.99 & 5.00 & 5.23 & 4.74 \\[0.25em] 4 digit & 9.92 & 8.54 & 9.05 & 8.21 \\[0.25em] \hline\hline \end{tabular} \begin{tablenotes}[flushleft]\footnotesize \item Notes: This table shows the numbers of different CPC classes making a citation to an average patent of the respective group conditional on the patent being cited at least once. Citations within the same class are excluded. \end{tablenotes} \end{threeparttable} \end{table} As an additional check, we also report the number of unique citing technology classes for the set of AI patents that are cited at least once (Table \ref{tab:avg_citations_cited}). 
Interpreting backward citations as a proxy for the technological value of a patent \citep{kogan2017technological}, this alternative measure focuses only on high-value patents. Imposing this restriction, we find the keyword-based patents once again show the highest generality across all levels of aggregation. Science scores slightly higher than WIPO at the one-digit level and vice versa at the three- and four-digit level. The USPTO approach shows the least generality. \begin{figure}[H] \centering \caption{Average Number of Classes Citing AI} \label{fig:avg_classes_citing_AI} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.95\textwidth]{inputs/generality/new_tt/timeseries_avg_cites_all_by_CPC_remove_within_class_avg_all_1_digit_AI.pdf} \caption{All patents} \label{fig:avg_class_cit_AI} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.95\textwidth]{inputs/generality/new_tt/timeseries_avg_cites_all_by_CPC_remove_within_class_avg_all_1_digit_AI_scaled.pdf} \caption{All patents (z-score scaled)} \label{fig:avg_class_cit_AI_scaled} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.95\textwidth]{inputs/generality/new_tt/timeseries_avg_cites_cited_by_CPC_remove_within_class_avg_cited_1_digit_AI.pdf} \caption{Subset of cited patents} \label{fig:avg_class_cit_AI_cited} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.95\textwidth]{inputs/generality/new_tt/timeseries_avg_cites_cited_by_CPC_remove_within_class_avg_cited_1_digit_AI_scaled.pdf} \caption{Subset of cited patents (z-score scaled)} \label{fig:avg_class_cit_AI_scaled_cited} \end{subfigure} \footnotesize\justifying Notes: The z-scored value equals the level of the generality index minus its average across the four approaches divided by the standard deviation for each year. \end{figure} Figure \ref{fig:avg_classes_citing_AI} and the significance tests shown in Appendix \ref{appendix:significance_generality} confirm that the keyword sample of AI patents shows a significantly larger number of different citing CPC classes compared to the other three methods. Notably, time series of citation data are sensitive to truncation towards the end of the period, as more recently granted patents have a shorter period of being cited. Looking at the z-score scaled data \ref{fig:avg_class_cit_AI_scaled} and \ref{fig:avg_class_cit_AI_scaled_cited}, we find that the science approach is least sensitive to this truncation. \begin{table}[H] \centering \caption{Average Citation Lags by Approach} \label{tab:avg_citation_lags} \begin{threeparttable} \begin{tabular}{lcccc} \hline \hline Period & Keyword & Science & WIPO & USPTO \\[0.25em] \hline 1990-1999 & 13.72 & 13.26 & 13.77 & 13.64 \\ [0.25em] 2000-2009 & 9.84 & 9.08 & 9.38 & 9.34 \\[0.25em] 2010-2019 & 4.26 & 4.15 & 4.19 & 4.33 \\ [0.25em] \hline \hline \end{tabular} \begin{tablenotes}[flushleft] \footnotesize \item Notes: This table shows the average number of years taken until a patent in the sample is cited. The average number of years is lower during the more recent decade as the maximal time lag is truncated when our data ends in 2019. \end{tablenotes} \end{threeparttable} \end{table} An analysis of the average citation lags confirms this observation (Table \ref{tab:avg_citation_lags}): keyword patents show the longest citation lag, i.e. patented inventions captured by this approach take more time to be used subsequently. 
Once again, the time series of the average number of citing classes is sensitive to truncation, as the time of being cited is short. Importantly, \citet{hall2006uncovering} argue that long citation lags are a characteristic property of GPTs, when their early `learning and destruction' phase relies on the accretion of complementary innovations and structural changes of products and processes \citep{crafts2021artificial}. Altogether, the keyword classification approach produces a set of AI patents with the highest level of generality across different metrics. \FloatBarrier \subsection{Complementarity} We examine the co-occurrence of technology classes of AI patents to understand how strongly they complement existing or novel products and processes. If AI can be combined with many other fields of technology, we would expect to observe that AI patents are classified across a diverse pool of technological classes. Figure \ref{fig:diverse} shows the percentage of three- and four-digit CPC classes associated with each pool of AI patents. In panel (a), USPTO patents span the most diverse pool of technology classes, with CPC classes ranging from 70-90\% of all possible three-digit codes. This result, however, could be caused by the large number of AI patents identified by the USPTO approach. The other three approaches identify a substantially smaller pool of AI patents (see Table \ref{tab:patent_numbers}). Still, these patents cover a large share (80-90\%) of three-digit codes by 2019. Panel (b) in the same figure shows the share of four-digit CPC classes. Starting around 2010, the share of technology classes embedded in keyword, science, and WIPO patents rapidly increased. Although USPTO patents represent the most diverse portfolio --- at 78\% of all technology classes in 2019 --- the gap to the other three approaches has narrowed in recent years. Our significance tests show that the differences between the triple keyword-science-WIPO are statistically insignificant, but all score significantly lower than USPTO patents (see Table \ref{tab:wilcoxon_complementarity_shareCPC3} and \ref{tab:wilcoxon_complementarity_shareCPC4}). \begin{figure}[H] \centering { \caption{Diversity of Patent Pool -- Share of Technology Classes} \label{fig:diverse} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{inputs/complement/fig_diverse3d.pdf} \caption{\% of all 3-digit CPC} \label{fig:diverse3d} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{inputs/complement/fig_diverse4d.pdf} \caption{\% of all 4-digit CPC} \label{fig:diverse4d} \end{subfigure} } \footnotesize \justifying Note: Panel (a) shows the percentage of three-digit CPC codes and panel (b) shows the percentage of four-digit CPC as a share of all codes in the respective category. Note that the total number of three-digit and four-digit CPC codes are 136 and 674, respectively according to the February 2022 version. \end{figure} To understand whether individual patents from each group are becoming more multidisciplinary, we calculate the annual averages of the number of one-, three- and four-digit CPC codes per patent. Table \ref{tab:cpc_134sum} shows that an average WIPO patent is associated with 1.64 three-digit classes and 2.05 four-digit classes across all years. An average keyword or science patent is associated with slightly fewer classes, whereas the USPTO patents appear to be the least multidisciplinary by this metric. 
The highest score of the WIPO method at the four-digit level and the weak diversity performance of the USPTO approach are statistically significant. At the one-digit level, average keyword and science patents both show similarly high numbers of co-classifications (1.39-1.40) and cannot be statistically distinguished from each other. The other approaches rank significantly lower, with the USPTO approach showing the lowest level of diversity. In Figure \ref{fig:diverse_perpatent}, we check how the average number of one-digit level technology classes per patent varies over time. All panels demonstrate a rising number of technology classes per patent towards the second half of the last decade, with the most profound increase for WIPO patents towards the end of the period. The significance tests in Tables \ref{tab:wilcoxon_complementarity_avg_1}, \ref{tab:wilcoxon_complementarity_avg_3}, and \ref{tab:wilcoxon_complementarity_avg_4} suggest that the keyword and science methods show statistically indistinguishable patterns at the one- and three-digit level. At other levels of aggregation the differences diminish, except for the lower scoring of the USPTO method and the highest score of the WIPO method at the four-digit level. \begin{table}[H] \centering \caption{Average Number of one-, three- and four-digit CPC Classes per Patent}\label{tab:cpc_134sum} \begin{threeparttable} \begin{tabular}{l*{4}{c}} \hline\hline &\multicolumn{1}{c}{Keyword}&\multicolumn{1}{c}{Science}&\multicolumn{1}{c}{WIPO}&\multicolumn{1}{c}{USPTO}\\ \hline 1 digit& 1.39& 1.40& 1.36& 1.27\\ 3 digit& 1.61& 1.64& 1.64& 1.43\\ 4 digit& 1.88& 1.97& 2.05& 1.64\\ \hline\hline \end{tabular} \begin{tablenotes}[flushleft]\footnotesize \item Note: The table shows the average of the annual average number of technology classes per patent, by one-, three- or four-digit CPC. \end{tablenotes} \end{threeparttable} \end{table} \begin{figure}[H] \centering \caption{Patent-Level Diversity - Average Technology Classes} \label{fig:diverse_perpatent} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{inputs/complement/fig_diverse_perpatent_1d.pdf} \caption{Average number of 1-digit CPC} \label{fig:diverse_perpatent_1d} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{inputs/complement/fig_diverse_perpatent_3d.pdf} \caption{Average number of 3-digit CPC} \label{fig:diverse_perpatent_3d} \end{subfigure} \begin{subfigure}[c]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{inputs/complement/fig_diverse_perpatent_4d.pdf} \caption{Average number of 4-digit CPC} \label{fig:diverse_perpatent_4d} \end{subfigure} \end{figure} In summation, although the USPTO patents show a high diversity across many technology classes, the other three approaches also produce increasingly diverse patent pools over time. It is remarkable that the keyword approach, with only 67,186 AI patents, covers almost 90\% of all possible three-digit CPC codes and 67\% of all possible four-digit CPC codes. Examining the diversity of average patents, we find widespread, recent increases in the number of technology classes per patent. In recent years, the WIPO approach seems to capture the most multidisciplinary patents, which could be an artefact of the way the WIPO approach was designed, by considering different aspects of AI, including its techniques, applications, and fields (see Section \ref{sec:approaches_wipo}).
In Appendix \ref{appendix_bench_growth}, we also use these measures for a set of benchmark patent groups to see how well they capture the technological diversity. We find that our measures perform equally well in capturing the increasing diversity of other GPTs suggested in the literature. \section{Discussion} \label{sec:discussion} \subsection{Key Findings} Our analysis finds striking differences along with some similarities between the four approaches to classifying patented AI innovations. As summarised in Table \ref{fig_summary}, the keyword approach produces the patent pool with the highest growth rates and generality, and the second-highest level of complementarity. The WIPO and science pools produce similar levels of each characteristic, with WIPO being stronger when it comes to innovation complementarity. The USPTO patents least profoundly demonstrate each of the three GPT characteristics. \begin{table}[H] \small \centering \caption{\label{fig_summary} Summary of Findings \\} \begin{threeparttable} \begin{tabularx}{1\textwidth}{lccccll} \toprule {} & Keyword & USPTO & WIPO & Science & Metric & Based on \\ \midrule Growth & \ccell{BrickRed}{ } & \ccell{GreenYellow}{ } & \ccell{bronze}{ } & \ccell{bronze}{ } & Growth rate & Counts \\ Generality & \ccell{BrickRed}{ } & \ccell{GreenYellow}{ } & \ccell{bronze}{ } & \ccell{bronze}{ } & Generality index & Citations \\ Complementarity & \ccell{bronze}{ } & \ccell{GreenYellow}{ } & \ccell{BrickRed}{ } & \ccell{bronze}{ } & Avg. \# tech. classes & CPC codes \\ \bottomrule \end{tabularx} \begin{tablenotes} \footnotesize \item \flushleft Notes: This brief summary of our results shows which patent group generates the strongest average estimate of each of the three characteristics over the last 10 years. Red (yellow) colour indicates the strongest (weakest) performance. \end{tablenotes} \end{threeparttable} \end{table} Remarkably, the keyword approach generates the most profound evidence of each of the three GPT characteristics using a much smaller (68k) set of patents than the other approaches (158-959k). Both the WIPO and science-based patent groups demonstrate increasing generality, complementarities, and growth of AI innovation over time, yet these methods rely on more complex classification and face limitations with regard to patent coverage. Despite relying on the most computationally intensive ML approach and covering the largest group of patents, the USPTO patents are the least diverse, have the lowest growth rates, and exhibit the fewest complementarities. Notably, each approach robustly reproduces key time-trends in AI innovation, including a substantial uptick in the number of patents citing an increasingly diverse spread of AI patents after 2010. All methods successfully reproduce the stagnating generality of average AI patents and the decreasing number of distinct citing technological classes since 2000. \subsection{Results in a Wider Context} Our results highlight that the extent to which AI constitutes a GPT is sensitive to the classification technique used. Moreover, our results call into question the literature that takes as given that AI is a GPT that is projected to expand in the ensuing years \citep{cockburn20194,goldfarb2019could, brynjolfsson2021productivity}. Relying on the USPTO classification method, it can be said that average patents now show the lowest generality index and there has been no growth in the number of three-digit CPC codes assigned to an average AI patent since 2000.
Moreover, by this method, the growth of AI patenting has plateaued after 2010 --- a result that may raise questions over new, large-scale initiatives designed to bolster AI research. Recent breakthroughs in the technology, data, and resources that underpin AI and its potential applications have spurred an international race between countries to become the global AI leader. In this context, the UK has recently launched the National Artificial Intelligence Strategy, demonstrating how hefty investments and significant policy decisions are founded on great expectations of future AI innovation. The policy itself is designed to bolster the `long-term needs of AI technologies' to ready the economy for the `Age of AI' \citep{NationalAI}, yet by the USPTO's own classification method, it can be said that AI innovation began to slow back in 2010. From a research efficiency perspective, our results raise interesting questions as to whether human reasoning used to define relevant keywords can outperform more sophisticated and computationally complex methods. The small, simple, and tractable keyword approach may be desirable for researchers aiming to catch emerging GPTs. Small patent groups may indicate a clearer distinction from other patents and, because such technologies are `new', there is greater potential for future growth, compared to the USPTO approach, which suggests that 16.6\% of all US patents today are based on AI. This being said, other classification approaches may be better suited to other questions surrounding the knowledge base of innovations (science) or the trajectory of an innovation through the universe of technologies (USPTO). During our analysis, we discovered that the straightforward keyword method reproduced from \citet{cockburn2018impact} may be further simplified. We found that the majority of patents can be identified using a narrower set of four terms (machine learning, neural network, robot, pattern recognition), rather than the original list of over forty words. Finally, the phenomenon of `AI narrowing', as highlighted in \citet{klinger2020narrowing}, suggests that AI innovation has become significantly less diverse during the time period of our analysis. The authors suggest that the concentration of AI research within a small group of private firms is leading to the premature narrowing of its GPT potential. The authors argue that the intensive focus on DL techniques since 2010, combined with innovation path dependency, has reduced incentives for such leaders to explore alternative methods. They further argue that this `locking in' of AI to a subset of DL techniques is driving under-investment in other applications, including ethical and environmentally conscious AI. Their investigation of arXiv AI research reveals a sharp turn towards computer vision and language AI dominated by Deep Neural Networks (DNNs) as well as a stagnation of various metrics of AI research diversity. The keyword and WIPO approaches both include terms closely related to ML and DL and may be the most sensitive to this premature narrowing. However, this is not visible in our results, which instead show stabilised generality and increasingly diverse co-classifications since 2010. Another possible explanation is that the concentration on a few keywords reflects a hype that unfolds with a time lag and that incentives encourage the use of certain keywords. Descriptions of inventions could be affected by this hype.
On the other hand, this is made more difficult by the fact that approaches using keywords match key sections in patents that are carefully reviewed for inventiveness. A recent focus on certain techniques may simply represent a technological trend whereby existing keywords may soon be supplanted by new inventions. \subsection{Limitations of Our Approach} Our evaluation is subject to two major caveats. First, the performance of the different approaches to capture the GPT-like characteristics of AI may be dependent on the time period chosen. Given that our aim is to inform researchers who are interested in the impact of AI, we have chosen to investigate the period from 1990-2019, when AI began to grow and disseminate more widely \citep{cockburn20194}. Given that the performance of the keyword approach may reflect the popularity of a narrow set of technological buzzwords, our results may differ in other time periods. Second, we also emphasise that the evaluation of different classification methods depends on the intended purpose. Here, we aim to provide guidance for researchers interested in studying the economic impact of AI innovation that they take to be a GPT. Other methods may be more appropriate to study the dissemination of AI-related applications or the knowledge origins of AI. For example, the USPTO approach tells an interesting story, suggesting that 14\% of all patented inventions in the US today already rely on AI; the science approach in these contexts would provide interesting extrapolation to the early origins of AI research in the 1950s. Ultimately, our results suggest that the extent to which AI can be seen as a GPT is sensitive to the chosen classification method. This underscores the importance of balancing multiple classification techniques when motivating political and economic innovations in support of AI’s projected future impact. Furthermore, we raise the potential that those wishing to show AI as a strong GPT could do so by leveraging a more keyword-based classification approach. \section{Concluding Remarks} \label{sec:conclusions} In this paper, we have performed a systematic analysis of four separate approaches to identifying patented AI innovations. Ultimately, we demonstrate how sensitive each key GPT-like feature of AI is to the classification method chosen. Our results provide guidance to policy-makers and innovation scholars on how to identify patented AI innovations. For researchers interested in studying the GPT-like characteristics of AI, a simple keyword-based approach may be more successful at producing a narrowly defined set of patented technologies that demonstrates the canonical GPT features of intrinsic growth, wide usefulness, and innovation complementarity. Our work provides important caveats to those making policy and funding decisions for the future pace of AI innovation, such as the recently released UK National Artificial Intelligence Strategy \citep{NationalAI}. Ultimately, we underscore the need for multiple AI classification techniques in order to counteract the dependence of significant policy conclusions on the choice of classification approach. \FloatBarrier \section*{Acknowledgements} The authors thankfully acknowledge valuable feedback received from the participants of the OMPTEC-FoW and INET complexity group meetings. K.H. and T.T. thankfully acknowledge support from OMPTEC and Citi. V. V. acknowledges support from Intelligence Advanced Research Projects Activity (IARPA), via contract no. 
2019-1902010003, for developing capabilities that this work builds upon. L. B. gratefully acknowledges support from the General Sir John Monash Foundation. \printbibliography \newpage
\section*{Acknowledgments} This work was supported by the U.S.~Department of Energy through grant DE-FG02-93ER-40762. V.K. also acknowledges the support of JSA/Jefferson Lab Graduate Fellowship.
\section{Introduction} \label{sect:intro} Unmanned Aerial Vehicles (UAVs) have an excellent capability to perform autonomous operations both indoors and outdoors. While autonomous UAVs are already in action for transportation tasks, industries have started to exploit the potential of UAVs for indoor and outdoor missions, such as inspection \cite{mansouri2018cooperative}, surveying, maintenance, and manipulation of power plants, pipelines, high-rise buildings, etc. (cf. \cite{6196930, 6820566, 7782757, 6761569, 7989609, 6878116, agha2021nebula, gupta2012survey}). In these operations, UAVs ensure human safety from operational hazards and ease out the operators' manual efforts. Furthermore, the global market is investing in expanding the capacities of UAVs to act as first responders in accidents and emergencies \cite{tomic2012toward} and to aid in the evacuation during disaster mitigation. Establishing autonomy in such critical applications is complex due to the unavailability of an accurate positioning system or a priori knowledge of the environment \cite{kanellakis2016evaluation}. Hence, most of these missions require complex exploration, mapping, state estimation, planning, control, and task allocation algorithms \cite{lindqvist2022compra} along with their intended tasks. However, executing all the tasks onboard increases the payload, reducing the UAV's flight time. An effective way to improve the performance and flight time of the UAV mission is to offload the heavy computational tasks to a remote system and run minimal control onboard. For UAV control applications, several methods have been proposed with unique advantages (cf. \cite{sankaranarayanan2022adaptive, xu2018safe, viswa2020asmc, ha2014passivity, sankaranarayanan2022survey}). In many cases, the UAV controller must incorporate a few constraints on the states \cite{Sankaranarayanan2022Robustifying, ganguly2021efficient} and control inputs \cite{lindqvist2020nonlinear} to include safety and optimality in power usage. Subsequently, Model Predictive Controller (MPC) \cite{lindqvist2020non} stands out as a solid contender to perform the UAV's desired tasks (such as trajectory tracking based on a high-level planner) while optimally ensuring multiple additional constraints. Offloading the MPC reduces the computational latency significantly, enhancing performance \cite{seisa2022edge}. Offloading the control to a remote system introduces the additional challenge of latency in the loop. Edge computing that brings the storage and computational resources closer to the source of data enables the offloading to provide real-time processing of heavy algorithms with reduced latency \cite{shi2016edge}. The Edge-based control loops are further accelerated by the introduction of 5G networking, which offers high reliability with low communication delays \cite{hassan2019edge}. Hence, these systems are of research interest for remote operations that involve safety-critical processes. They have a wide range of applications, such as traffic management, healthcare services, autonomous vehicles, and smart homes \cite{varghese2016challenges, cao2020overview, tian2019adaptive, tian2019fog, tanwani2019fog}. Besides, deploying the individual modules inside independent containers optimizes the process and resources by various levels of security, abstraction, and resource-sharing methodologies \cite{pahl2015containers}. 
Notably, the Kubernetes (K8s) framework bundles sets of containers into groups (called PODs) to improve the real-time reliability, scalability, and security of the processes \cite{seisa2022edge, seisa2022comparison}. Many of the Edge-computing applications extensively use K8s in their architecture \cite{seisa2022edge, seisa2022cnmpc}. Edge computing has received remarkable attention from the robotics community due to its potential use in applications such as Simultaneous Localization and Mapping (SLAM) of multiple agents \cite{sarker2019offloading}, teleoperation \cite{zhang2020toward}, object recognition \cite{barnawi2020intelligent}, exploration \cite{skarin2018towards}, and other industrial robotic applications \cite{chen2018industrial, spartharakis2020switching}. In particular, Edge-based remote offloading has enhanced the performance of MPCs with minimal latency, as presented in \cite{seisa2022anedge, seisa2022comparison, seisa2022cnmpc, seisa2022edge, aarzen2018control, skarin2020cloud, skarin2020cloud2, grafe2022event}. However, in a UAV application, even such a minimal closed-loop latency considerably deteriorates the performance \cite{sankaranarayanan2022adaptive}. Furthermore, none of these MPCs have considered delays in the control problem formulation. Given this premise, the contributions of this work are summarized in the following subsection. \subsection{Contributions} The main contribution of this work, Predictive Autonomous Control Using Edge for Drones over 5G (PACED-5G), is the development of an upgraded Edge-based control architecture for offloading high-level tasks in a remote environment that also handles ROS messaging and compensation for time-varying delays. Differently from \cite{seisa2022anedge, seisa2022cnmpc, seisa2022edge}, the upgraded architecture robustifies the system security over the network using a UDP tunneling approach. The dynamics of a UAV are modeled in the presence of delays in the system. A state estimator is designed to estimate the current state of the UAV using past observations and the available knowledge of the delay. The estimated delay in the network is updated using a moving average method to compensate for the time-varying nature of the delays. A nonlinear MPC is devised to ensure that the UAV follows the desired reference trajectory while avoiding obstacles and optimizing the control inputs. The proposed architecture exploits K8s orchestration for deploying the optimizer, MPC, state estimator, ROS master, trajectory generator, and UDP tunnel. The architecture is tested using a real-time quadrotor UAV, and the controller is validated using two experimental scenarios. The rest of the article is organized as follows: The model of the system is derived along with the state predictor, and an MPC is formulated in Section \ref{sect:cont_dev}; Section \ref{sect:edge_arch} describes the Edge architecture; Section \ref{sect:exp_val} describes the experimental validation of the proposed control architecture, while Section \ref{sect:conc_futu} presents the conclusions and the potential future works. 
\section{Controller Development} \label{sect:cont_dev} \subsection{UAV Modeling} The overall delays in the closed loop can be accounted for in control by introducing a delayed input into the model of the UAV defined in \cite{kamel2017model}, given by, \begin{align} \mathbf{\Ddot{p}}(t) &= \frac{1}{m} \mathbf{U}(t-\tau) + \mathbf{G} - \mathbf{A}\mathbf{\dot{p}}(t), \label{eq:pos_dyn} \\ \dot{\phi}(t) &= \frac{1}{\alpha_\phi}(K_\phi \phi^d(t - \tau) - \phi(t)), \label{eq:phi_dyn} \\ \dot{\theta}(t) &= \frac{1}{\alpha_\theta}(K_\theta \theta^d(t - \tau) - \theta(t)), \label{eq:theta_dyn} \\ \mathbf{U}(t-\tau) &= \mathbf{R}(\mathbf{q}(t)) \begin{bmatrix} 0 \\ 0 \\ F(t-\tau) \end{bmatrix}, \label{eq:control_mapping} \end{align} where $\mathbf{p} \triangleq \begin{bmatrix} x(t) & y(t) & z(t) \end{bmatrix}^T \in \mathcal{R}^3$ and $\mathbf{q} \triangleq \begin{bmatrix} \phi(t) & \theta(t) & \psi(t) \end{bmatrix}^T \in \mathcal{R}^3$ are the position and the orientation of the UAV in the inertial frame, $\mathbf{X_W - Y_W - Z_W}$, respectively, $m$ is the mass of the UAV, $\mathbf{U}(t)\in \mathcal{R}^3$ is the control input, $\mathbf{G} \triangleq \begin{bmatrix} 0 & 0 & -9.81 \end{bmatrix}^T$ is the gravity term, $\mathbf{A}\in \mathcal{R}^{3 \times 3}$ is a diagonal matrix with the drag coefficients on its diagonal, ($\alpha_\phi, \alpha_\theta$) and ($K_{\phi}, K_{\theta}$) are the time constants and gains of the inner-loop roll and pitch behaviors, respectively, ($\phi^d, \theta^d$) are the desired reference values for the roll and pitch angles, $\mathbf{R} \in \mathcal{R}^{3 \times 3}$ is the Euler angle rotation matrix, $F$ is the total thrust, and $\tau$ is the closed-loop delay in the system. The UAV's linear and angular dynamics are only partly decoupled, with the position dynamics dependent on the attitude of the UAV. So, the controller has a dual loop, where the inner loop controls the attitude dynamics, and the outer loop controls the position dynamics. Since the inner loop must run at a much higher frequency than the outer loop to produce the desired moments for the given control inputs, the inner-loop control is placed on the onboard computer. Hence, the latency in the processing and actuation of the inner-loop control is negligible. Since the outer-loop control requires heavy computational power, it is offloaded to the Edge computer. The control inputs from the outer-loop controller are the overall thrust, $F$, and the desired roll and pitch angles $\phi^d, \theta^d$. The yaw of the UAV is assumed to be zero at all times, since the reference input to yaw is always zero. Also, it is observed from the block diagram in Fig. \ref{fig:arch} that the overall delay in the closed loop is given by $\tau = d_1 + d_2 + d_3$. Subsequently, the control problem is defined as the following: \textbf{Control Problem:} Design a controller for a UAV to track a given trajectory in the presence of time-varying delays in the loop while avoiding the obstacles on the way. \subsection{Proposed Control Solution} The solution for the control problem is divided into two steps: State Estimator and Model Predictive Controller, which are further explained in the following subsections. \subsubsection{State Estimator} Since the inputs to the UAV are delayed by a time $\tau$, a state estimator has to be designed to predict a future estimate of the state from the current observation of the states, based on the available information.
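Before detailing the estimator, the following is a minimal numerical sketch of the delayed model \eqref{eq:pos_dyn}-\eqref{eq:control_mapping}; the mass, drag and inner-loop parameters are placeholder values, not those of the experimental platform, and the loop delay is modelled as a FIFO buffer of commands.
\begin{verbatim}
import numpy as np
from collections import deque

m, g = 0.033, 9.81                   # mass [kg] and gravity (placeholder values)
A = np.diag([0.10, 0.10, 0.20])      # drag coefficients (placeholders)
alpha, K = 0.2, 1.0                  # inner-loop time constant and gain (placeholders)
dt, tau = 0.01, 0.07                 # integration step and closed-loop delay [s]
delay_steps = int(round(tau / dt))

def body_z_axis(phi, theta):
    """Third column of the Euler rotation matrix R(q), with yaw assumed zero."""
    return np.array([np.sin(theta) * np.cos(phi),
                     -np.sin(phi),
                     np.cos(theta) * np.cos(phi)])

def step(state, cmd, buffer):
    """One forward-Euler step of the delayed model; `cmd` = (F, phi_d, theta_d)
    issued now, while the input actually applied is the one issued tau seconds ago."""
    p, v, phi, theta = state
    buffer.append(cmd)
    F, phi_d, theta_d = buffer.popleft()      # delayed input U(t - tau)
    acc = body_z_axis(phi, theta) * F / m + np.array([0.0, 0.0, -g]) - A @ v
    phi_next = phi + dt / alpha * (K * phi_d - phi)
    theta_next = theta + dt / alpha * (K * theta_d - theta)
    return (p + v * dt, v + acc * dt, phi_next, theta_next)

# Hover example: pre-fill the buffer so the first delay_steps inputs are hover commands.
state = (np.zeros(3), np.zeros(3), 0.0, 0.0)
buf = deque([(m * g, 0.0, 0.0)] * delay_steps)
for _ in range(100):
    state = step(state, (m * g, 0.0, 0.0), buf)
\end{verbatim}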
The architecture is designed in such a way that the onboard PC loops back the time-stamped control input to the State Estimator along with the odometry of the UAV and the obstacles. The estimator finds an estimate of the closed-loop delay, $\widehat{\tau}$, in the network using the following formulation, \begin{align} \widehat{\tau}(k+1) &= \widehat{\tau}(k) + \frac{(\tau_{new} - \widehat{\tau}(k))}{k+1}, ~ \widehat{\tau}(0) = 0, \label{eq:delay_est} \end{align} where $\tau_{new}$ is the difference between the current time-stamp and the time-stamp of the received control signal (delayed input). The estimates of the position are defined as follows, \begin{align} \mathbf{\widehat{p}}(t) &= \mathbf{p}(t-\tau), \label{eq:pos_est} \\ \implies \mathbf{\dot{\widehat{p}}}(t) &= \mathbf{\dot{p}}(t-\tau). \label{eq:vel_est} \end{align} Since the control inputs are delayed, they have to be designed in such a way that the current inputs ensure that the future state after a delay tracks the future trajectory with minimal error. So, estimates of the future states of the UAV and the obstacle need to be generated for use in the control law. Using \eqref{eq:vel_est}, the future velocity of the UAV is estimated as, \begin{align} \mathbf{\dot{\widehat{p}}}(t + \tau) &= \mathbf{\dot{\widehat{p}}}(t) + \int_{t-\tau}^{t} \mathbf{\ddot{p}}(s)\, ds. \label{eq:vel_est_fut_1} \end{align} The integral term in \eqref{eq:vel_est_fut_1} is simplified using a Taylor series approximation and ignoring the higher order terms (since $\tau^2 \ll \tau$) as \begin{align} \mathbf{\dot{\widehat{p}}}(t + \tau) &= \mathbf{\dot{\widehat{p}}}(t) + \mathbf{\ddot{p}}(t) \tau. \label{eq:vel_est_fut_2} \end{align} Using \eqref{eq:pos_dyn}, the expression \eqref{eq:vel_est_fut_2} can be expanded as \begin{align} \mathbf{\dot{\widehat{p}}}(t + \tau) &= \mathbf{\dot{\widehat{p}}}(t) + \left ( \frac{1}{m} \mathbf{U}(t-\tau) + \mathbf{G} - \mathbf{A}\mathbf{\dot{\widehat{p}}}(t+\tau) \right ) \tau, \nonumber \\ \mathbf{\dot{\widehat{p}}}(t + \tau) &= (\mathbf{I} + \mathbf{A}\tau)^{-1} \left (\mathbf{\dot{\widehat{p}}}(t) + \left ( \frac{1}{m} \mathbf{U}(t-\tau) + \mathbf{G} \right ) \tau \right ). \label{eq:vel_est_fut_3} \end{align} Further, future estimates of the position can be predicted from \eqref{eq:vel_est_fut_3} using the relationship, \begin{align} \mathbf{{\widehat{p}}}(t + \tau) &= \mathbf{\widehat{p}}(t) + \mathbf{\dot{\widehat{p}}}(t+\tau) \tau. \label{eq:pos_est_fut} \end{align} The control input $\mathbf{U}(t-\tau)$ depends on the attitude dynamics through the relationship \eqref{eq:control_mapping}. So, the future estimates for the roll and pitch are derived by following similar steps from \eqref{eq:pos_est}-\eqref{eq:pos_est_fut}, and using the relationships \eqref{eq:phi_dyn}, \eqref{eq:theta_dyn}, as \begin{align} \mathbf{\widehat{\phi}}(t) &= \mathbf{\phi}(t-\tau), \quad \mathbf{\widehat{\theta}}(t) = \mathbf{\theta}(t-\tau), \label{eq:att_est} \\ \mathbf{{\widehat{\phi}}}(t+\tau) &= \left (\mathbf{I} + \frac{\tau}{\alpha_{\phi}} \right )^{-1} \left (\mathbf{{\widehat{\phi}}}(t) + \frac{K_\phi \tau}{\alpha_{\phi}}\phi^{d}(t-\tau) \right ), \label{eq:phi_est_fut} \\ \mathbf{{\widehat{\theta}}}(t+\tau) &= \left (\mathbf{I} + \frac{\tau}{\alpha_{\theta}} \right )^{-1} \left (\mathbf{{\widehat{\theta}}}(t) + \frac{K_\theta \tau}{\alpha_{\theta}}\theta^{d}(t-\tau) \right ).
\label{eq:theta_est_fut} \end{align} Similarly, the future estimate of the obstacle location $\mathbf{p}^o$ is obtained as, \begin{align} \mathbf{\dot{\widehat{p}}}^o(t+\tau) &= \mathbf{\dot{\widehat{p}}}^o (t) + \mathbf{G}\tau, \nonumber \\ \mathbf{{\widehat{p}}}^o(t + \tau) &= \mathbf{\widehat{p}}^o(t) + \mathbf{\dot{p}}^o(t) \tau, \label{eq:obs_est_fut} \end{align} where $\mathbf{\widehat{p}}^o(t) = \mathbf{p}^o(t-\tau)$ is the estimated position of the obstacle. Here, the considered obstacle has an initial velocity, but the only force acting on the obstacle is gravity. \subsubsection{Model Predictive Controller} The objective of the high-level controller is to design the control inputs $\mathbf{u} \triangleq \begin{bmatrix} F(t-\tau) & \phi^d(t-\tau) & \theta^d(t-\tau) \end{bmatrix}^T$ to track the desired reference position trajectory that comes out of the trajectory generator node (cf. Fig. \ref{fig:arch}). Since the control inputs are delayed, they must be designed based on the estimated future states. So, a state vector is formed using the estimated future position, velocity, roll, and pitch, given by $\mathbf{x} \triangleq \begin{bmatrix} \mathbf{\widehat{p}}(t+\tau)^T & \mathbf{\dot{\widehat{p}}}(t+\tau)^T & \widehat{\phi}(t+\tau) & \widehat{\theta}(t+\tau) \end{bmatrix}^T$, as presented in Eqs. \eqref{eq:vel_est_fut_3}-\eqref{eq:theta_est_fut}. The controller considers the state evolution over a specific number of time steps ($N$) with a sampling time, $T_s$, into the future (prediction horizon) to optimize the control inputs. The state evolution through the prediction horizon is obtained using the forward Euler method. The state evolution for the $(k+j)^{th}$ time step, predicted at the $k^{th}$ time step, is denoted by $\mathbf{x}_{k+j|k}$. The reference trajectory, $\mathbf{\Tilde{x}}^d$, is sampled through the prediction horizon for formulating the cost function, $J(\mathbf{\Tilde{x}}, \mathbf{\Tilde{u}}, \mathbf{u}_{k-1})$, as \begin{align} J &= \sum_{j=0}^N \{\left (\mathbf{x}^d_{k+j} - \mathbf{x}_{k+j|k} \right)^T \mathbf{Q}_x \left (\mathbf{x}^d_{k+j} - \mathbf{x}_{k+j|k} \right) \nonumber \\ & + \left (\mathbf{u}_{k+j|k} - \mathbf{u}_{k+j-1|k} \right)^T \mathbf{Q}_{\delta u} \left (\mathbf{u}_{k+j|k} - \mathbf{u}_{k+j-1|k} \right) \nonumber\\ & + \left ( \mathbf{u}_{k+j|k} + \mathbf{G} \right)^T \mathbf{Q}_u \left ( \mathbf{u}_{k+j|k} + \mathbf{G} \right)\}, \label{eq:mpc_cost} \end{align} where $\mathbf{x}^d$ is the reference state at each time step, $\mathbf{\Tilde{x}}, \mathbf{\Tilde{u}}$ are the appended states and inputs over the horizon, and $\mathbf{Q}_x, \mathbf{Q}_u, \mathbf{Q}_{\delta u}$ are the positive definite gain matrices for states, inputs, and input rates, respectively. It is noticeable that the cost function not only minimizes the state errors but also ensures smoothness in the control signal by minimizing the difference between consecutive inputs through $\mathbf{Q}_{\delta u}$, and maintains the magnitude of the overall control inputs close to the hovering mode through $\mathbf{Q}_u$. Obstacle avoidance is imposed as an additional constraint over the cost function. Since the obstacles are considered to be unactuated objects, the Euler method state evolution is considered for their estimated positions over the prediction horizon.
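To make the structure of \eqref{eq:mpc_cost} concrete, the sketch below assembles the running cost over the horizon by a forward-Euler rollout; the discretized dynamics \texttt{f\_discrete}, the weight matrices and the state/input shapes are placeholders, and in the actual NMPC the spherical obstacle constraint introduced next is appended at every step before the problem is handed to the solver.
\begin{verbatim}
import numpy as np

G = np.array([0.0, 0.0, -9.81])   # gravity term used in the hover-penalty of the cost

def horizon_cost(x0, u_seq, x_ref, u_prev, f_discrete, Qx, Qdu, Qu):
    """Quadratic running cost (tracking error, input rate, deviation from hover)
    accumulated over the prediction horizon.
    x0: current delay-compensated state estimate; u_seq: candidate inputs over the
    horizon; x_ref: sampled reference states; f_discrete(x, u): one Euler step."""
    x, cost = np.asarray(x0, dtype=float), 0.0
    for j, u in enumerate(u_seq):
        e = x_ref[j] - x                                 # tracking error
        du = u - (u_seq[j - 1] if j > 0 else u_prev)     # input-rate term
        cost += e @ Qx @ e + du @ Qdu @ du + (u + G) @ Qu @ (u + G)
        x = f_discrete(x, u)                             # forward-Euler state update
    return cost
\end{verbatim}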
The obstacle is confined by a collision sphere of radius $r_d$, and a spherical constraint of safety radius $r_s$ is used to isolate the estimated states of the UAV from the estimated states of the obstacle over the prediction horizon, given by, \begin{align} h^o &= \left (\mathbf{p}^o - \mathbf{p} \right)^T \left (\mathbf{p}^o - \mathbf{p} \right) - (r_s + r_d)^2, \label{eq:obstacle} \end{align} so that $h^o > 0$ whenever the UAV lies outside the combined safety sphere. The obstacle avoidance constraint in \eqref{eq:obstacle} is appended for all time steps of the prediction horizon. Additionally, bounding constraints are laid on the control inputs and the rate of the desired roll and pitch to operate the system in a safe range on every $k^{th}$ time step over the complete horizon, given by, \begin{align} \mathbf{u}^{min} &\leq \mathbf{u}_{k+j} \leq \mathbf{u}^{max}, \label{eq:control_constraint} \\ -\delta \phi^{max} &\leq \phi^d_{k+j} - \phi^d_{k+j-1} \leq \delta \phi^{max}, \label{eq:phi_constraint} \\ -\delta \theta^{max} &\leq \theta^d_{k+j} - \theta^d_{k+j-1} \leq \delta \theta^{max}, ~ \forall j \in \lbrace 1,2,...,N \rbrace \label{eq:theta_constraint} \end{align} where $\mathbf{u}^{min}, \mathbf{u}^{max} \in \mathcal{R}^{3}$ are the minimum and maximum bounds of the control input, and $\delta \phi^{max}, \delta \theta^{max}$ are the maximum bounds on the rate of the roll and pitch of the UAV. Finally, the NMPC problem is solved in PANOC \cite{small2019aerial} using the following optimization objective, \begin{align} \min_{\mathbf{\Tilde{x}}^K, \mathbf{\Tilde{u}}^K} J(\mathbf{\Tilde{x}}, \mathbf{\Tilde{u}}, \mathbf{u}_{k-1}) &\nonumber \\ \quad \text{s.t.} ~ h^o_{k+j} > 0,& \nonumber\\ \qquad \qquad \text{Constraints} ~\text{\eqref{eq:control_constraint}} - \text{\eqref{eq:theta_constraint}}, & ~ \forall j \in \lbrace 0,1,...,N \rbrace. \label{eq:opt_obj} \end{align} \section{EDGE Architecture} \label{sect:edge_arch} \begin{figure*}[!h] \centering \includegraphics[width=\textwidth]{images/icuas_architecture.png} \caption{A schematic of the proposed Edge architecture} \label{fig:arch} \end{figure*} The Edge architecture handles the overall offboard computing required to control the UAV. In the proposed upgraded architecture, the control structure is divided into six ROS nodes: ROS master, MPC, Trajectory Generator, State Estimator, Optimizer, and UDP Tunnel. All the nodes run on their individual PODs, as shown in Figure \ref{fig:arch}. A Kubernetes cluster orchestrates all the PODs to form the remote system. Every POD has a single container that enables its functionality, where a container is a software unit that packages and runs a specific application along with its libraries and dependencies. The containers are created using custom images based on ROS Noetic docker image entrypoints. Compared to the authors' previous work on Edge architectures (cf. \cite{seisa2022anedge, seisa2022edge, seisa2022cnmpc}), where a host network is used to enable the communication to and from the PODs, and thus, establish ROS messaging between the offboard and onboard computers, the upgraded architecture uses ROS messaging and UDP tunneling over the 5G network for communicating between the offboard PODs and the onboard computers. Though the tunneling adds a small latency (of a few ms) in the loop, it eliminates the host network, which is unavailable in commercial Edge servers.
Furthermore, using the host network jeopardizes the system's security as it exposes the local services to the PODs, which an attacker could use to snoop on the network activity of other PODs or bypass the restrictive network policies in the namespace. The UDP tunnel has a server and a client. The UDP server subscribes to a ROS topic and sends its messages over the 5G network as UDP packets, while the client receives the UDP packets from the network and publishes them as ROS messages. The trajectory generator POD is used to run a high-level planner, which takes information from the world and generates a trajectory to be tracked by the UAV over a time horizon. The trajectory generator POD sends the reference trajectory, $\mathbf{\Tilde{x}}^d$, to the MPC POD using the reference topic. The MPC POD receives the estimated future states of the UAV, $\mathbf{\Tilde{x}}$, and of the obstacle, $\mathbf{\Tilde{p}}^o$, from the state estimator using the UAV pose and obstacle pose topics. It formulates the cost function using the reference trajectory. The MPC and optimizer communicate to generate the desired control inputs ($F, \phi^d, \theta^d$) for the low-level onboard controller using the references, estimated UAV and obstacle states, and cost function. The MPC POD generates a time stamp before the optimization and uses the time stamp to publish the control inputs through the command velocity topic. The UDP server on the Edge side subscribes to the command velocity topic and forwards the messages to the network as UDP packets (byte stream). The UDP client on the onboard side receives these packets and publishes them with and without time stamps. The command velocity topic without the time stamp is subscribed by the UAV's internal controller, while the one with the time stamp is looped back with the same time stamp on the old command velocity topic through a UDP server. The UDP server also subscribes to and transfers the odometry topics of the UAV and obstacles. The UDP client on the Edge side receives the UDP packets and publishes the information as odometry topics and the old command velocity topic. The state estimator subscribes to the odometry and old command velocity topics. It uses the difference between the current time stamp and the time stamp in the old command velocity to predict the approximate delay, $\tau$, in the network. The odometry ($\mathbf{p}, \mathbf{\dot{p}}, \mathbf{p}^o, \mathbf{\dot{p}}^o$), the estimated delay ($\widehat{\tau}$), and the old control inputs ($F, \phi^d, \theta^d$) are used to estimate the future state of the UAV, $\mathbf{\widehat{p}}, \mathbf{\dot{\widehat{p}}}$. Similarly, the delay ($\widehat{\tau}$) and the odometry of the obstacles ($\mathbf{p}^o, \mathbf{\dot{p}}^o$) are used to estimate the future state of the obstacle, $\mathbf{\widehat{p}}^o$. The state estimator publishes the estimated future states of the UAV and the obstacles. A ROS master node runs on an independent POD, so any node failure would not affect it. All the nodes in the PODs are registered to the ROS master node.
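As an illustration of the tunnel just described, a minimal Edge-side UDP server could look as follows; the topic name, destination address, and payload layout are illustrative assumptions rather than the deployed implementation, and the onboard client performs the inverse operation (receiving the datagrams and republishing them as ROS messages while looping the stamped copy back).
\begin{verbatim}
#!/usr/bin/env python
import socket
import struct
import time

import rospy
from geometry_msgs.msg import Twist

ONBOARD_ADDR = ("10.0.0.2", 9000)   # placeholder address of the onboard UDP client
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def on_cmd(msg):
    # Pack a time stamp together with the command so that the onboard side can
    # loop the stamped copy back and the state estimator can compute the delay.
    payload = struct.pack("!4d", time.time(),
                          msg.linear.z, msg.angular.x, msg.angular.y)
    sock.sendto(payload, ONBOARD_ADDR)

if __name__ == "__main__":
    rospy.init_node("udp_tunnel_server")
    rospy.Subscriber("/cmd_vel", Twist, on_cmd, queue_size=1)
    rospy.spin()
\end{verbatim}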
The Edge Architecture is deployed on a remote server at the datacenters of RISE Research Institutes of Sweden \cite{rise} in Lule\aa. The specifications of the Kubernetes cluster are presented in Table \ref{tb:cluster}. The Edge datacenters provide significant computational power. For the tracking demonstration, the trajectory generator provides a reference trajectory ($x^d(k) = \sin(k/600) ~m, ~ y^d(k) = \cos(k/600) ~m, ~ z^d(k) = 0.8 ~m$) to the MPC. The states are sampled at 30 Hz ($T_s \approx 0.033 ~s$) inside the MPC with a prediction horizon of $N = 60$ time steps. The experiment is performed in two scenarios, which are explained in the following subsections. \begin{table}[htbp] \centering \caption{Kubernetes Cluster Specifications} \begin{tabular}{ c c } \hline \hline \centering \shortstack{\\Kubernetes Version} & v1.20.15\\ \hline \centering \shortstack{\\Worker Nodes} & 3 nodes\\ \hline \centering \shortstack{\\CPU} & 60 cores (20 per node)\\ \hline \centering \shortstack{\\Memory} & 590 GiB (196 GiB per node)\\ \hline \centering \shortstack{\\PODs for the application} & 6\\ \hline \end{tabular} \label{tb:cluster} \end{table} \subsection{Scenario 1: Comparison with uncompensated control} \begin{figure}[!h] \centering \includegraphics[width=0.48\textwidth]{images/delay_1.png} \caption{Instantaneous and moving average estimate of the delay in the closed loop over the flight duration.} \label{fig:delay} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.48\textwidth]{images/pos_error.png} \caption{Tracking error in UAV's position} \label{fig:pos_err} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.48\textwidth]{images/plot3d_3.png} \caption{Trajectory tracked by the UAV with and without compensation for the delays in the system for the given reference trajectory.} \label{fig:traj} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.48\textwidth]{images/pred_err.png} \caption{Error in the estimation of the states ($x, y, z$) and their Euclidean distance.} \label{fig:pred_err} \end{figure} \begin{table}[!t] \renewcommand{\arraystretch}{1.1} \caption{{Position Tracking Performance}} \label{tb:rms_est_pos} \centering { { \begin{tabular}{c c c c c} \hline \hline Controller & \multicolumn{4}{c}{Root mean squared error} \\ \cline{1-5} & $x(cm)$ & $y(cm)$ & $z(cm)$ & $Euc (cm)$ \\ \hline with estimator & 4.46 & 3.75 & 0.90 & 5.70 \\ \hline without estimator & 6.45 & 32.99 & 2.28 & 33.55 \\ \hline \hline estimation error & 0.49 & 0.50 & 0.11 & 0.69 \\ \hline \end{tabular}}} \end{table} In this scenario, the need for the state estimator is demonstrated by comparing the control of the UAV with and without the state estimator. Figs. \ref{fig:delay}-\ref{fig:pred_err} highlight the experiment's results. The time-varying nature of the delays is observed in Fig. \ref{fig:delay}. The moving average delay estimate lies close to $67$ milliseconds over the entire duration. Though the architecture reduces the overall latency, the minimal delays in the system are sufficient to degrade the tracking performance. Fig. \ref{fig:pos_err} shows that the tracking error in position is significantly reduced using the state estimator. The same is observed in Fig. \ref{fig:traj}, which shows the 3D plot of the trajectories taken by the UAV with and without compensation. Without the estimator, the trajectory of the UAV drifts off to a large extent. However, the estimator noticeably improves the performance and maintains the actual trajectory quite close to the reference.
Further, Fig. \ref{fig:pred_err} shows the effectiveness of the state estimator, where the error in the estimation (the difference between the actual value and the estimated value) is plotted. It is evident that the Euclidean distance between the estimated value and the actual value is less than $2$ millimeters at all the time steps. The numerical analysis of the results is presented in Table \ref{tb:rms_est_pos}. The Root Mean Squared (RMS) value of the errors in position shows that without an estimator, the deviation in performance, especially in the $y$ axis, is drastic, making it unsuitable for practical applications. However, the estimator reduces the tracking error to a small margin, with the mean Euclidean distance between the reference and the actual trajectories being less than $6$ cm. Further, the RMS value of the estimation error is less than or equal to $0.5$ cm in all directions, with the mean Euclidean distance between the estimated value and the actual value being less than $0.7$ cm, hence proving the reliability of the proposed state estimator in the control structure. \subsection{Scenario 2: Obstacle Avoidance} \begin{figure}[!h] \centering \includegraphics[width=0.48\textwidth]{images/obs_plot3d.png} \caption{Trajectory taken by the UAV in the absence and presence of the obstacle while following a reference trajectory, along with the obstacle trajectories during possible collisions.} \label{fig:obs_traj} \end{figure} In this scenario, the UAV is commanded to track the trajectory in the presence of a moving obstacle. The obstacle is thrown at the drone in two instances when the UAV is following the commanded circular trajectory. Since the drone would not avoid the obstacle without the state estimator, which would damage the drone, this scenario is performed only for the case with the estimator. Fig. \ref{fig:obs_traj} shows the trajectory taken by the drone in the absence and presence of the obstacle. The NMPC predicts the possible future collision, immediately moves the UAV away from the point of collision to maintain a safe distance from the obstacle, and brings the UAV back to the trajectory when the obstacle is avoided. Thus, the PACED-5G control architecture proves its dynamic obstacle avoidance capability in a delayed environment. \section{Conclusions and Future Work} \label{sect:conc_futu} An upgraded Edge architecture, PACED-5G, is proposed for UAVs to offload computationally heavy high-level control algorithms to a remote workstation. The utilization of ROS messages for the internode communication and the introduction of UDP tunneling to establish communication between the onboard and offboard computers over any existing 5G network removes the requirement for a host network. Further, it acts as a security layer protecting the Edge server from possible attacks. A state estimator is developed to compensate for the effects of time-varying closed-loop delays in the system. A nonlinear MPC is designed to follow any reference trajectory provided by the trajectory generator in the delayed environment. The architecture is tested on UAV hardware in two experimental scenarios. The results of the experiments are analyzed to show the efficacy of the proposed architecture. The proposed controller can compensate for time-varying latency (moving average $67$ milliseconds). However, an interesting future direction would be to provide more redundancy to the system.
If, for example, the communication is degraded or lost entirely, safety actions should be considered and onboard backup planning implemented. Furthermore, the utilization of edge resources can provide an environment through which multiple robots would be able to communicate and collaborate in order to execute more complex and demanding tasks, while Kubernetes would manage the resources and orchestration of the applications. \bibliographystyle{IEEEtran}
\section{Introduction} In 1970 Anosov and Katok introduced the so-called \textit{approximation by conjugation} method (also known as the \textit{Anosov-Katok} or the \textit{AbC} method) to construct examples of transformations satisfying a pre-specified set of topological and/or measure theoretic properties. In the realm of smooth (or in some cases, real-analytic or even symplectic) zero entropy diffeomorphisms, this technique to date remains one of the rare methods that one can use to explore the possibility of the existence of diffeomorphisms satisfying such a set of properties. Such transformations or diffeomorphisms are often important in their own right. More interestingly, in recent years they have been able to exhibit connections, such as that of the rotation number at the boundary with the dynamical behaviour of a diffeomorphism \cite{FS}. This method has gained further momentum with the body of work produced by Foreman and Weiss \cite{FW04}, \cite{FW1} establishing anti-classification theorems for smooth diffeomorphisms. In this article, we wish to explore the construction of various types of Anosov-Katok diffeomorphisms which support non-ergodic generic measures. For a probability preserving dynamical system $(M,\mathcal{B},\mu,T)$, we define the set of \textit{$\mu$-generic points} \begin{align*} L_\mu=\{x\in M:\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^nf(T^ix)=\int_M f\,d\mu\; \forall\; f\in C_c(M)\} \end{align*} where $C_c(M)$ is the set of all compactly supported real valued continuous functions. The measure $\mu$ is called a \textit{generic measure} if $L_\mu\neq\emptyset$. The celebrated Birkhoff ergodic theorem asserts that $\mu(L_\mu)=1$ for a $\mu$-ergodic transformation. There has been a considerable amount of interest regarding the existence of generic measures, particularly in the realm of interval exchange transformations. Chaika and Masur \cite{Cha} showed that there exists a minimal non-uniquely ergodic interval exchange transformation on $6$ intervals with $2$ ergodic measures, which also has a non-ergodic generic measure. Later, Cyr and Kra \cite{Cy19} found a criterion for establishing upper bounds on the number of distinct non-atomic generic measures for subshifts based on complexity, and as a consequence, they showed that for $k>2$, a minimal exchange of $k$ intervals has at most $k-2$ generic measures. On the other hand, Gelfert and Kwietniak \cite{Gel18} gave an example of a topologically mixing subshift that can have exactly two ergodic measures, no convex combination of which is generic. Anosov and Katok, in \cite{AK}, constructed examples of smooth measure preserving diffeomorphisms which are weakly mixing in the space $\A(M) = \overline{\{h\circ S_t\circ h^{-1}: t\in \T^1, h \in \text{Diff}^{\infty}(M,\mu)\}}^{C^{\infty}}$, on any manifold admitting a non-trivial $\T^1$ action. Later, Fayad and Saprykina produced smooth weakly mixing diffeomorphisms in the restricted space $\A_{\alpha}(M) = \overline{\{h\circ S_{\alpha}\circ h^{-1}: h \in \text{Diff}^{\infty}(M,\mu)\}}^{C^{\infty}}$ for any Liouville number $\alpha$, i.e. a number such that for each $n$ there exist integers $p>0$ and $q>1$ with $0<|\alpha-\frac{p}{q}|<\frac{1}{q^n}$.
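For instance, the classical Liouville constant $\alpha=\sum_{k\geq 1}10^{-k!}$ satisfies this condition: truncating the series after the $(k-1)$-th term gives a rational $\frac{p}{q}$ with $q=10^{(k-1)!}$ and $0<\alpha-\frac{p}{q}\leq 2\cdot 10^{-k!}=2q^{-k}$, which is smaller than $q^{-n}$ as soon as $k>n$.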
Both the above constructions are built using the approximation by conjugation method: the diffeomorphism is obtained as the limit of a sequence $T_n= H_nS_{\alpha_{n+1}}H_n^{-1}$, where $\alpha_{n+1}\in \mathbb{Q}$ and $H_n=h_1\ldots h_n$, with $h_n$ a measure preserving diffeomorphism satisfying $S_{{\alpha}_n}\circ h_n= h_n\circ S_{{\alpha}_n}$. For the diffeomorphisms $T_n$ to converge in the space $\A_{\alpha}(M)$ for a prescribed $\alpha$, one needs more explicit conjugation maps $h_n$ and very precise norm estimates, and this is generally more difficult than convergence in the space $\A(M)$. In general, a uniquely ergodic measure preserving transformation on a compact metric space is minimal on the support of the measure, but the converse is not true. Markov produced the first counterexample. Further, Windsor \cite{Win} constructed a minimal measure preserving diffeomorphism in $\A_{\alpha}(M)$ with a finite number of ergodic measures. Afterwards, Banerjee and Kunde \cite{BaKu} produced a similar result in the real-analytic category on $\T^2$. It is well known that the Anosov-Katok constructions allow great flexibility, and we present several results in this article that explore the existence of non-ergodic generic measures in this setup. We also note that our constructions will be smooth and, in some cases, even real-analytic. Thereby we extend the above results to produce more compelling examples with different measure-theoretical and topological dynamical properties. \begin{maintheorem} For any natural number $r$ and any Liouville number $\a$, there exists a minimal $T\in\mathcal{A}_\a(\T^2)$ such that the Lebesgue measure is a generic measure for $T$, and there exist $r$ measures $\mu_1,\mu_2,\ldots,\mu_r$, absolutely continuous w.r.t. the Lebesgue measure, such that $T$ is weakly mixing w.r.t. each of these measures. \end{maintheorem} In fact, the approximation by conjugation method on $\T^2$ offers enough flexibility to repeat the construction using block-slide type maps (\cite{BaKu}, Theorem E) and obtain the result in the real-analytic set-up. \begin{maintheorem} For any natural number $r$, there exists a minimal real-analytic $T\in\text{Diff }^\omega(\T^2,\mu)$, constructed by the approximation by conjugation method, such that the Lebesgue measure is a generic measure for $T$, and there exist $r$ measures $\mu_1,\mu_2,\ldots,\mu_r$, absolutely continuous w.r.t. the Lebesgue measure, such that $T$ is weakly mixing w.r.t. each of these measures. \end{maintheorem} One of this paper's objectives is to examine the generic points and try to estimate their size. Here, instead of measuring the set of generic points by the Lebesgue measure, we produce more interesting values of their Hausdorff dimension.
\begin{maintheorem} There exists a smooth diffeomorphism $T\in\text{Diff }^\infty({\T}^2,\mu)$, constructed by the approximation by conjugation method, such that the set $\mathrm{B}$ containing all the generic points of $T$ has $$\log_{3}2\leq \text{dim}_H(\mathrm{B})\leq 1+\log_{3}2,$$ and $\mu(\mathrm{B})=0.$ \end{maintheorem} We can generalize the above result by choosing the generalized Cantor set (the $p$-series Cantor set \cite{CC}) instead of the middle third Cantor set in the above setup and construct generic sets of different Hausdorff dimensions, as follows. \begin{maintheorem} For any $1<\alpha<2,$ there exists a smooth diffeomorphism $T\in\text{Diff }^\infty(\mathbb{T}^2,\mu)$, constructed by the approximation by conjugation method, such that the set $\mathrm{B}_{\alpha}$ containing all the generic points of $T$ has $$\alpha-1 \leq \text{dim}_H(\mathrm{B}_{\alpha})\leq \alpha,$$ and $\mu(\mathrm{B}_{\alpha})=0.$ \end{maintheorem} In \cite{ZBR}, Theorem 2.3.1, the author presented a variational-type formula for the full shift on an alphabet of two symbols $(\Omega,\sigma)$. But in our set-up, it appears that this theorem does not hold. For example, if $f:\T^2\to\R$ is any continuous function and $\a=\int f d\mu$ with $1<\alpha<2$ and $\mu$ being the usual Lebesgue measure, then the Hausdorff dimension of $E_f(\a)$ is greater than zero (see Theorem D), where $E_{f}(\alpha)= \{x\in \mathbb{T}^2 \ : \lim\limits_{N\longrightarrow\infty} \frac{1}{N}\sum\limits_{n=1}^{N}f(T^nx)= \alpha \}$. In contrast, according to Theorem A in \cite{FFW} and Theorem 2.3.1 in \cite{ZBR}, this number should be zero, as the topological entropy and all measure-theoretic entropies are always zero in our case. For an ergodic transformation, the set of non-generic points has measure zero but can have more interesting values of its Hausdorff dimension. Precisely, one can obtain the analogous result of Theorem D for the set of non-generic points in the case of an ergodic measure with an appropriate choice of combinatorics. \begin{maintheorem} For any $1<\alpha<2,$ there exists a smooth ergodic diffeomorphism $T\in\text{Diff }^\infty(\mathbb{T}^2,\mu)$, constructed by the approximation by conjugation method, such that the set $\mathrm{B}_{\alpha}$ containing all the non-generic points of $T$ has $$\alpha-1 \leq \text{dim}_H(\mathrm{B}_{\alpha})\leq \alpha,$$ and $\mu(\mathrm{B}_{\alpha})=0.$ \end{maintheorem} \remark The diffeomorphisms produced in the above Theorems C, D, and E could be made minimal by following the same construction as in Theorem A. \section{Preliminaries} This section explains some basic definitions and standard techniques that we use throughout the paper. \subsection{Basics of ergodic theory} Let $(X,d)$ be a $\sigma$-compact metric space, $\mathcal{B}$ a $\sigma$-algebra, $\mu$ a measure and $T:X\longrightarrow X$ a measure preserving transformation ({\it mpt}), i.e. $\mu(T^{-1}(A))= \mu(A) \ \forall A\in \mathcal{B}$. \begin{definition} A {\it mpt} $(X,\mathcal{B},\mu,T)$ is called {\it ergodic} if every invariant set $E\in \mathcal{B}$ satisfies $\mu(E) = 0$ or $\mu(X \backslash E) = 0$. We say $\mu$ is an ergodic measure. \end{definition} \definition \label{def:1a} A point $x\in X$ is a {\it generic point} for $\mu$ if for every continuous compactly supported $\phi:X \longrightarrow \mathbb{R}$, we have $\frac{1}{N}\sum\limits_{i=0}^{N-1} \phi(T^ix)\longrightarrow \int \phi d\mu.$\\ A measure is called a {\it generic measure} if it has a generic point.
It follows from the Birkhoff ergodic theorem that if the system is ergodic, then $\mu$-almost every point is generic. \definition Let $T:X\longrightarrow X$ be a continuous map, where $X$ is a topological space. The map $T$ is said to be minimal if for every $x\in X,$ the orbit $\{T^i(x)\}_{i\in \mathbb{N}}$ is dense in $X.$ Equivalently, in the case of a metric space, the map $T$ is minimal if for every $x\in X$, $\delta>0$ and every $\delta$-ball $B_{\delta}$ there exists $i \in \mathbb{N}$ such that $T^i(x)\in B_{\delta}.$ \definition A measure preserving diffeomorphism $T : X \longrightarrow X$ is said to be weakly mixing on the space $(X,\mathcal{B},\mu, T)$ if there exists a sequence $\{m_n\}\subset \mathbb{N}$ such that for any pair $A,B \in \mathcal{B}:$ $$ {\left\lvert \mu(B\cap T^{-m_n}(A)) - \mu(B)\mu(A) \right\rvert} \longrightarrow 0.$$ \subsection{The middle third Cantor set}\label{sec:2.2a} Consider the middle third Cantor set $C\subset[0,1],$ obtained by removing the open middle third interval and then repeating the same process with each remaining interval. After completing the $n$-th stage of removing middle intervals from $[0,1]$, we have $2^n$ closed intervals enumerated as $I_l^n, \ l=0,1,...,2^n-1$, and $2^{n-1}$ removed open intervals denoted as $J_l^n, \ l=0,1,...,2^{n-1}-1.$ Precisely, the interval $I_l^n$ is of the form $\left[\frac{3k}{3^n},\frac{3k+1}{3^n}\right]$ or $\left[\frac{3k+2}{3^n},\frac{3k+3}{3^n}\right]$, and the interval $J_l^n$ of the form $\left(\frac{3k+1}{3^n},\frac{3k+2}{3^n}\right)$, for $k = 0,1,\ldots,3^{n-1}-1$. The explicit closed form of the Cantor set is defined as \begin{align} C=\bigcap\limits_{n\geq1}\bigcup_{l=0}^{2^{n}-1} I_l^n = \left[0,1\right]\backslash \bigcup_{n=1}^{\infty} \bigcup_{l=0}^{2^{n-1}-1} J_l^n \end{align} \subsection{The Cantor set associated with a sequence}\label{sec:2.3a} For any sequence $\lambda=\{\lambda_k\}_{k\in \mathbb{N}}$ of positive numbers such that $\sum \lambda_k= K,$ there exists a Cantor set $C_{\lambda}$ associated with it, defined on the interval $I_{0,\lambda}=[0,K]$ and also known as a generalised Cantor set. It is constructed in a similar way to the middle third Cantor set and has the same topological and measure-theoretic properties. Precisely, it is a compact, perfect, totally disconnected subset of the real line and has measure zero. \\ The set $C_{\lambda}$ is obtained by the removal of open intervals whose lengths are the terms of the sequence $\lambda$. In the first step, an open interval $J_{0,\lambda}^1$ of length $\lambda_1$ is removed from $I_{0,\lambda}$, obtaining two closed intervals $I_{0,\lambda}^1, I_{1,\lambda}^1$. In the second step, we remove open intervals of lengths $\lambda_2$ and ${\lambda_3}$ from $I_{0,\lambda}^1$ and $I_{1,\lambda}^1$, respectively. After $k$ complete steps, we have $2^{k}$ closed intervals denoted as $\{I_{l,\lambda}^k\}_{l=0}^{2^k-1}$ and $2^{k-1}$ removed open intervals denoted as $\{J_{l,\lambda}^k\}_{l=0}^{2^{k-1}-1}$, of lengths equal to the previously used terms of the sequence. Continuing in this way, removing an open interval $J_{l,\lambda}^{k+1}$ of length $\lambda_{2^k+l}$ from the interval $I_{l,\lambda}^k$, we obtain $I_{2l,\lambda}^{k+1}$ and $I_{2l+1,\lambda}^{k+1}$.
Since $\sum_{k} \lambda_k= K,$ the location of each interval $J_{l,\lambda}^k$ to be removed is determined uniquely, and the Cantor set $C_{\lambda}$ is well defined as \begin{align} C_{\lambda}=\bigcap\limits_{n\geq1}\bigcup_{l=0}^{2^{n}-1} I_{l,\lambda}^{n} = \left[0,K\right]\backslash \bigcup_{n=1}^{\infty} \bigcup_{l=0}^{2^{n-1}-1} J_{l,\lambda}^{n} \end{align} \remark Note that the length of the interval $I_{0,\lambda}$ equals the sum of the lengths of all the intervals removed in the construction, so there is a unique way of carrying out this construction. \remark Clearly, by normalization, we can define $C_{\lambda}$ on $I_0=[0,1]$ for the sequence $\lambda$. In our case, we choose Cantor sets on $[0,1],$ associated with the sequence $\lambda=\{\lambda_k\}_{k\in \mathbb{N}},$ where $\lambda_k= \frac{1}{c_0}(\frac{1}{k})^p$ with $c_0= \sum_{k\in \mathbb{N}}(\frac{1}{k})^p$ (the constant $c_0$ is finite only for $p>1$), so that $\sum_k\lambda_k=1$; its Hausdorff dimension, described in more detail in \cite{CC}, is \begin{align} \text{dim}_H(C_{\lambda})= \frac{1}{p} \label{eq:6.1d} \end{align} \remark If $X$ and $Y$ are metric spaces, then the Hausdorff dimension of their product satisfies \begin{align} \text{dim}_H(X)+ \text{dim}_H(Y)\leq \text{dim}_H(X\times Y)\leq \text{dim}_H(X) + \text{dim}_B(Y) \end{align} where $\text{dim}_B$ is the upper box counting dimension (see \cite{Ma54}). In particular, if $Y$ has equal Hausdorff and upper box-counting dimension (which holds if $Y$ is a compact interval), then \begin{align} \text{dim}_{H}(X\times Y) = \text{dim}_{H}(X)+ \text{dim}_{H}(Y) \label{def:1b} \end{align} \subsection{Smooth and Real-analytic diffeomorphisms} For the description of the standard topology on the space of diffeomorphisms on $M=\mathbb{T}^2$ and, explicitly, of convergence in the space of smooth diffeomorphisms and real-analytic diffeomorphisms on the torus, one can refer to \cite{FS}. \subsection{Approximation by conjugation method} Here, we outline a scheme for constructing a smooth area preserving diffeomorphism with a specific ergodic property via the approximation by conjugation method explained in \cite{AK}. Let $S_t$ denote the measure preserving circle action of $\mathbb{T}^1$ on the torus $\mathbb{T}^2= \mathbb{R}/\mathbb{Z}\times \mathbb{R}/\mathbb{Z}$ defined as translation by $t$ in the first coordinate: $S_{t}(x_1,x_2)= (x_1+t,x_2).$ The required map $T$ is constructed as the limit of a sequence of periodic measure preserving diffeomorphisms $T_n$ in the smooth topology. The sequence $T_n$ is defined iteratively as \begin{align}\label{eq:1d} T_n = H_n\circ S_{\alpha_{n+1}}\circ H_n^{-1}. \end{align} where $\alpha_{n+1}= \frac{p_{n+1}}{q_{n+1}}\in \mathbb{Q}/\mathbb{Z}$ and $H_n\in \text{Diff}^{\infty}(\mathbb{T}^2)$. The diffeomorphism $H_n$ is constructed successively as $H_n = h_1\circ \ldots \circ h_n,$ where $h_n$ is an area preserving diffeomorphism of $\mathbb{T}^2$ that satisfies \begin{align}\label{eq:2d} h_n\circ S_{\alpha_{n}}= S_{\alpha_{n}}\circ h_n. \end{align} The rationals $\alpha_{n+1}=\frac{p_{n+1}}{q_{n+1}}$ are defined iteratively as $p_{n+1} = k_{n}l_{n}q_{n}p_{n}+ 1$ and $q_{n+1} = k_{n}l_{n}q_{n}^2$, where $\{k_{n}\},\{l_{n}\}$ are sequences of natural numbers chosen such that $\alpha_{n+1}$ is close enough to $\alpha_{n}$ to ensure the closeness between $T_n$ and $T_{n-1}$ in the $C^{\infty}$ topology.
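For later reference, we note that the recursion for $p_{n+1}$ and $q_{n+1}$ gives the step size between consecutive rationals explicitly (this is the computation behind the formula for $\alpha_{n+2}$ used in the next paragraph): $$\alpha_{n+1}=\frac{p_{n+1}}{q_{n+1}}=\frac{k_{n}l_{n}q_{n}p_{n}+1}{k_{n}l_{n}q_{n}^2}=\frac{p_n}{q_n}+\frac{1}{k_{n}l_{n}q_{n}^2}=\alpha_n+\frac{1}{k_{n}l_{n}q_{n}^2},$$ so $|\alpha_{n+1}-\alpha_n|$ can be made arbitrarily small by taking $k_n$ or $l_n$ large.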
Given $\alpha_{n+1}$ and $H_n$, at the $(n+1)$-th stage of this iterative process we construct $h_{n+1}$ such that $T_{n+1}$ satisfies a finite version of the specific property we eventually need to achieve for the limiting diffeomorphism. The explicit construction of $h_{n+1}$, which serves our purpose, is carried out in section 3. Then we construct $\alpha_{n+2}= \alpha_{n+1} + \frac{1}{k_{n+1}l_{n+1}q_{n+1}^2}$ by choosing $k_{n+1}\in \mathbb{N}$ and $l_{n+1}\in \mathbb{N}$ large enough so that certain conditions are satisfied and the convergence of the iterative sequence $T_{n+1}$ in the smooth topology is guaranteed. The limit obtained from this inductive sequence is the required smooth diffeomorphism with the specific ergodic and/or topological properties, $T_{n+1} \longrightarrow T \in \text{Diff}^{\infty}(\mathbb{T}^2,\mu).$ \subsection{Preliminary Lemma} \begin{lemma}\label{le:2a} Let $g,h \in \text{Diff}^{\infty}(\mathbb{T}^2)$. For $k \in \mathbb{N},$ the norm estimates of the composition $g \circ h$ satisfy \begin{align} \vertiii{g\circ h}_k \leq C \vertiii{g}_k^k.\vertiii{h}_k^k, \end{align} where $C$ is a constant. \end{lemma} \remark The above can be deduced using the corollary of the Faa di Bruno formula; a similar proof has been done in [\cite{Ku15}, lemma 4.1]. \begin{lemma}\label{lem:01} For any $\e>0$, there is a smooth Lebesgue measure preserving diffeomorphism $\varphi=\varphi(\e)$ of $[0,1]^2$, equal to the identity outside $[\e,1-\e]^2$ and rotating the square $[2\e,1-2\e]^2$ by $\pi/2$ in the clockwise direction. \end{lemma} The proof directly follows from [\cite{FS}, lemma 5.3]. \begin{lemma}\label{lem:3a} For any diffeomorphism $\phi:\Delta \longrightarrow \mathbb{R}^n$ and any compact set $A\subset\Delta$ : $$\text{dim}_H(\phi(A))= \text{dim}_H(A)$$ \end{lemma} \section{Construction of the Conjugacies} We consider the following conjugacies for the approximation by conjugation method, for any $0 < \sigma < \frac{1}{2}$, on the torus: \begin{align} T_n&=H_n\circ S_{\a_{n+1}}\circ H_n^{-1} \ \text{where} \ H_n=h_1\circ\ldots \circ h_n \label{eq:3b}\\ h_n&= g_n\circ \phi_n \circ P_n \label{eq:3:d}\\ g_n(x,y)&=(x+\lfloor nq_n^\sigma\rfloor y,y) \label{eq:3c} \end{align} where the sequence $\alpha_{n+1} = p_{n+1}/q_{n+1}$ converges to $\alpha$ (a Liouville number), and the diffeomorphisms $\phi_n$ and $P_n$, which commute with $S_{\alpha_n}$, are constructed in section \ref{sec:3a} below. \subsection{Outline} In order to prove theorem A, we decompose the torus $\T^2$ into three different parts with distinct aims. On the one hand, we divide $\T^2$ into $r$ disjoint sets $N^t$, where each set naturally supports an absolutely continuous measure $\mu_t$ obtained by normalizing the Lebesgue measure $\mu$. \\ On the other hand, we introduce two further parts inside $\T^2$ on which the two other dynamical properties can be achieved explicitly. These parts are chosen to be measure theoretically insignificant, in the sense that their measure goes to zero. Then, with the appropriate geometrical and combinatorial criteria explained in the next section, the limit diffeomorphism $T$, obtained by $(\ref{eq:3a})$, is minimal, has $r$ distinct weak mixing measures $\mu_t$ on $\T^2$, and has the Lebesgue measure $\mu$ as a generic measure. \subsection{Explicit set-up} This section introduces a couple of fundamental domains on which our explicit construction of conjugation maps exhibits different ergodic properties.
First, define the following subsets of $\mathbb{T}^2$, for $t=0,\ldots,r-1$: \begin{align} \label{eq:3.1b} N^t=\mathbb{T}^1\times \left[\frac{t}{r},\frac{t+1}{r}\right] \end{align} and denote $\mu_t$ be a measure on $N^t$ defined as normalized Lebesgue measure $\mu$ to $N^t,$ i.e. $\mu_{t}(A)= \frac{\mu(A\cap N^t)}{\mu(N^t)}$ for measurable set $A\in \mathcal{B}(\mathbb{T}^2).$ Considering the following fundamental domain of $N^t$ for $t\in \{0,\ldots,r-1\}$ as \begin{itemize} \item The fundamental domain: $D_{n}^t=\Big[0,\frac{1}{q_n}\Big]\times \Big[\frac{t}{r}, \frac{t+1}{r}\Big)$. \item Split the $D_n^t$ into two halves : $D_{n}^{t,1}=\Big[0,\frac{1}{2q_n}\Big)\times \Big[\frac{t}{r}, \frac{t+1}{r}\Big)$ and $D_{n}^{t,2}=\Big[\frac{1}{2q_n},\frac{1}{q_n}\Big)\times \Big[\frac{t}{r}, \frac{t+1}{r}\Big).$ \item $D_{n,j}^t,$ the shift of fundamental domain: $D_{n,j}^{t}= S_{j/q_n}(D_n^t),$ and so $D_{n,j}^{t,i}= S_{j/q_n}(D_n^{t,i})$ \end{itemize} \subsubsection{Construction of the conjugacies}\label{eq:sec1} The aim is to construct the conjugation map $h _n$, which allows the limiting diffeomorphism T, defined by (\ref{eq:3b}), to have $r$ weak mixing measures and have the Lebesgue measure as a generic measure, and be a minimal map. Here, we proceed with the construction of conjugation map $\phi_n$ in the following three steps and combining all together; we define the smooth diffeomorphism $\phi_n:\T^2\longrightarrow\T^2$ as \begin{align} \phi_n= \phi_n^g\circ\phi_n^m\circ\phi_n^w \label{eq:3a} \end{align} {\bb{Step-1:-}} Define the map $\phi_{n}^{w} : \mathbb{T}^2\longrightarrow\mathbb{T}^2$ to achieve $r$ weak mixing measures supported on each $N^t$: \begin{equation} \phi_{n}^w(x) = \begin{cases} \phi_{n,0}(x)\ \ &{\text{if}} \ x\in N^{0}\\ \phi_{n,1}(x)\ \ &{\text{if}} \ x\in N^{1}\\ \vdots\ \ &{} \ \vdots\\ \phi_{n,r-1}(x)\ \ &{\text{if}} \ x \in N^{r-1}\\ x \ \ &{\text{otherwise,}} \end{cases} \end{equation} where $\phi_{n,t}$ is a smooth diffeomorphism defined on $\T^2$ for $t=0,1,\ldots, r-1$ as described in the following paragraph. Consider a map $\phi_{n,t}:\Big[0,\frac{1}{q_n}\Big]\times \Big[\frac{t}{r}, \frac{t+1}{r}\Big) \longrightarrow \Big[0,\frac{1}{q_n}\Big]\times \Big[\frac{t}{r}, \frac{t+1}{r}\Big) :$ \begin{equation}\label{eqn:5.4} \phi_{n,t} = \begin{cases} C_{n,t}^{-1} \circ \varphi_n^{-1}(\e_n^{(1)})\circ C_{n,t} \ \ &{\text{on}} \ D_{n}^{t,1}\\ Id \ \ &{\text{otherwise}} \end{cases} \end{equation} here $C_{n,t}(x,y)=(q_nx,ry-t)$ and $\varphi$ is defined as in lemma \ref{lem:01} with $ \e_n^{(1)} =1/3nr$. In the same way we can extend this map $\phi_{n,t}$ as $\frac{1}{q_n}$ -equivariantly on the whole $N^t$, as done in \cite{FS}. \\ {\bb{Step} 2:-} Here, we construct a smooth diffeomorphism $\phi_{n}^g: \T^2\longrightarrow \T^2$ differently to ensure the existence of a generic point. 
Consider a map $\phi_{n}^g:\Big[0,\frac{1}{q_n}\Big]\times \mathbb{T}^1 \longrightarrow \Big[0,\frac{1}{q_n}\Big]\times \mathbb{T}^1$ defined as \begin{align*} \phi_{n}^g=\tilde{C}_n^{-1}\circ {\varphi}^{-1}(\e_n^{(3)})\circ{{\varphi}}(\e_n^{(2)})\circ\tilde{C}_n \end{align*} where $\tilde{C}_{n}(x,y)=(q_nx,y)$ and $\varphi$ is defined in lemma \ref{lem:01} with the choice of $\e_n^{(2)}= \frac{\e_{n}^{(1)}}{8}$ and $\e_n^{(3)} =\frac{\e_n^{(1)}}{2}.$ As in the above step, we extend $\phi_n^g$ equivariantly to $\T^2.$\\ Let us denote $B_{n,i}=\left[\frac{i}{q_n}+\frac{2\e_n^{(2)}}{q_n},\frac{i+1}{q_n}-\frac{2\e_n^{(2)}}{q_n}\right]\times [2\e_n^{(2)}, \e_n^{(3)}]$ and $Y_{n,i}= \left[\frac{i+1}{q_n}- \frac{\e_n^{(3)}}{q_n},\frac{i+1}{q_n}-\frac{2\e_n^{(2)}}{q_n}\right] \times [2\e_n^{(2)}, 1-2\e_n^{(2)}]$ for $i=0,\ldots, q_{n}-1.$ \remark This scheme is the so-called ``double rotation effect'': $\varphi^{-1}(\e_n^{(3)})\circ\varphi(\e_n^{(2)})$ first rotates the whole square with the error $\e_n^{(2)}$, i.e. rotates inside the square $[2\e_n^{(2)},1-2\e_n^{(2)}]^2$ by $\frac{\pi}{2}$ in the clockwise direction and acts as the identity outside the square $[\e_n^{(2)},1-\e_n^{(2)}]^2$ (see lemma \ref{lem:01}). Then $\varphi^{-1}(\e_n^{(3)})$ rotates the whole square with the error $\e_n^{(3)}$, i.e. $[2\e_n^{(3)},1-2\e_n^{(3)}]^2$, in the anticlockwise direction. Note that with the specific choice of $\e_n^{(2)}$ and $\e_n^{(3)}$, the map $\phi_n^g$ satisfies the following properties: \begin{enumerate} \item $\phi_n^g$ rotates the region $B_{n,i}$ by $\pi/2$ and then transforms $B_{n,i}$ inside $Y_{n,i}$, i.e. $\phi_n^g(B_{n,i})= Y_{n,i}.$ \item $\phi_n^g$ acts as the identity on the region $\Sigma_{1}\cup\Sigma_2,$ where \begin{itemize} \item $\Sigma_1=\bigcup_{i=0}^{q_n-1}\left(\left[\frac{i}{q_n},\frac{i}{q_n}+\frac{\e_n^{(2)}}{q_n}\right]\cup\left[\frac{i+1}{q_n}-\frac{\e_n^{(2)}}{q_n},\frac{i+1}{q_n}\right]\right)\times \left([0,\e_n^{(2)}]\cup[1-\e_n^{(2)},1] \right)$ \item $\Sigma_2=\bigcup_{i=0}^{q_n-1}\left[\frac{i}{q_n}+\frac{2\e_n^{(3)}}{q_n},\frac{i+1}{q_n}-\frac{2\e_n^{(3)}}{q_n}\right]\times [2\e_n^{(3)}, 1-2\e_n^{(3)}]$ \end{itemize} \end{enumerate} \remark\label{re:3.5} The region $\E_n^g \subset \T^2\backslash((\cup_{i=0}^{q_n-1}{B_{n,i}\cup Y_{n,i}})\cup (\Sigma_1\cup \Sigma_2))$, referred to as the error zone, comes from the smoothing of the map $\phi_n^g$.\\ {\bb{Step 3:-}} In the same spirit, we define $R_n = \left[0,\frac{\e_n^{(2)}}{q_n} \right]\times \T^1$ and the map $\phi_{n}^m:\Big[0,\frac{1}{q_n}\Big]\times \mathbb{T}^1 \longrightarrow \Big[0,\frac{1}{q_n}\Big]\times \mathbb{T}^1$ differently to achieve minimality as \begin{equation}\label{eqn:3.1.2} \phi_{n}^m = \begin{cases} \hat{C}_{n}^{-1} \circ \varphi(\e_n^{(4)})\circ \hat{C}_{n} \ \ &{\text{on}} \ R_n \\ Id \ \ &{\text{otherwise}} \end{cases} \end{equation} where $\hat{C}_{n}(x,y)=(\frac{q_n}{\e_n^{(2)}}x,y)$ and $\e_n^{(4)}= \frac{1}{2^nq_n}.$ We extend the map $\phi_n^m$ equivariantly to $\T^2$ such that it acts as the identity outside the regions $R_{n,i} = \left[\frac{i}{q_n}, \frac{i}{q_n} +\frac{\e_n^{(2)}}{q_n} \right]\times \T^1$ (defined as shifts of the domain: $S_{\frac{i}{q_n}}(R_n)= R_{n,i},\ \forall \ i\in \{0,1,\ldots,q_n-1\}$). \remark With the specific choice of $\e_n^{(4)}$, the map $\phi_n^m$ rotates the region $\left[\frac{i}{q_n}+ \frac{2\e_n^{(4)}}{q_n} , \frac{i}{q_n} +\frac{\e_n^{(2)}}{q_n}- \frac{2\e_n^{(4)}}{q_n} \right]\times [2\e_n^{(4)},1-2\e_n^{(4)}]$, inside $R_{n,i}$, by $\pi/2$ and acts as the identity outside the
region $R_{n,i}.$ The region \begin{align} \E_n^m = \bigcup_{i=0}^{q_n-1} \bigg(\left[\frac{i}{q_n}+\frac{\e_n^{(4)}}{q_n} ,\frac{i}{q_n}+ \frac{2\e_n^{(4)}}{q_n} \right] &\bigcup \left[\frac{i}{q_n}+ \frac{\e_n^{(2)}}{q_n}- \frac{2\e_n^{(4)}}{q_n},\frac{i}{q_n} +\frac{\e_n^{(2)}}{q_n}- \frac{\e_n^{(4)}}{q_n} \right]\bigg) \times \nonumber\\ &\qquad \qquad \left([\e_n^{(4)},2\e_n^{(4)}]\cup[1-2\e_n^{(4)},1-\e_n^{(4)}] \right) \end{align} the error zone comes from the smoothing of the map $\phi_n^m$ (see Figure 1). \begin{lemma} The diffeomorphism $\phi_n$ constructed above satisfy: for all $k\in \mathbb{N}$, $\vertiii{\phi_n}_k\leq c_k(n,k) q_n^{2k^3+k}$ where $c_k(n,k)$ is independent of $q_n.$ \end{lemma} \begin{proof} For any $a\in \mathbb{N}^2$ with $|a|=k$, we have $\|(D_a \phi_n^m)_j\|_0\leq c_m .q_n^{k},$ and similarly, $\|(D_a (\phi_n^m)^{-1})_j\|_0\leq c_m.q_n^{k},$ for $j=1,2.$ Hence $\vertiii{\phi_n^m}_k\leq c_m(n,k)q_n^{k}$ , where $c_m$ is a constant and independent of $q_n.$ Analogously, we have $ \vertiii{\phi_n^g}_k\leq c_g(n,k)q_n^{k}$ and $\vertiii{\phi_{n,i}}_k\leq c_i(n,k)q_n^{k}$ for $i \in \{0,\ldots,r-1\}$ where $c_g$ and $c_i$ are constants independent of $q_n$. With triangle inequality on the norm, we have $\vertiii{\phi_n^w}_k \leq c_w(n,k) r. q_n^k.$ Using the above estimate and lemma (\ref{le:2a}), we have \begin{align} \vertiii{\phi_n}_k &\leq c_k(n,k) \vertiii{\phi_n^g}_k^k.\vertiii{\phi_n^m\circ \phi_n^w}_k^k \nonumber\\ &\leq c_k(n,k) \vertiii{\phi_n^g}_k^k.\vertiii{\phi_n^m}_k^{k^2}.\vertiii{ \phi_n^w}_k^{k^2} \nonumber\\ & \leq c_k(n,k) q_n^{2k^3+k} \end{align} where $c_k(n,k)$ is a constant independent of $q_n$. \end{proof} \begin{figure}[t!] \centering \includegraphics[width=1\textwidth]{ThmA_Err.jpg} \caption{An example of action ${\phi}_n$ and $\Phi_n$ on the fundamental domains inside the $\T^2$ for $r=2$. The orange region, $B_{n,j}$, is transformed into the green region, $Y_{n,j},$ under the action of $\phi_n$. In (a), the horizontal line $I$ lying inside $D_{n,j}^{0,1}$ is transformed into vertical by $\phi_n^{-1}$ and then transferred to the right $D_{n,j}^{0,2}$ under the action of $\Phi_n$. Whereas in (b), the horizontal line $I$ lying inside $D_{n,j}^{0,2}$ is transferred to $D_{n,j+1}^{0,1}$ first and then transformed into vertical by $\phi_n$ under the action of $\Phi_n$. The same action of $\Phi_n$ will be followed inside regions $D_{n,j}^{1,1}$ and $D_{n,j}^{1,2}$ in both (a) and (b) respectively. In (c), the region inside $R_{n,j}$ is being rotated by the map $\phi_n$ by $\pi/2$. The blue and grey shaded region represent the error region for $\phi_n$. } \label{fig:1a} \end{figure} \subsection{The conjugation map \texorpdfstring{$h_n$}{Lg}} \label{sec:3a} The final conjugacy map $h_n:\T^2\longrightarrow\T^2$ is defined as a composition of the following maps as \begin{align}\label{eq:3d} h_n= g_n\circ \phi_n \circ P_n \end{align}where the diffeomorphism $P_n: \T^2\longrightarrow\T^2$ is defined by $P_n(x,y)= (x,y+ \kappa_n(x))$ with a smooth map $\kappa_n: \T^1\longrightarrow \T^1$. 
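As a quick sanity check (not needed later), the two factors of $h_n$ other than $\phi_n$ are visibly area preserving, since their Jacobians are triangular with unit diagonal: $$Dg_n=\begin{pmatrix}1 & \lfloor nq_n^{\sigma}\rfloor\\ 0 & 1\end{pmatrix},\qquad DP_n=\begin{pmatrix}1 & 0\\ \kappa_n'(x) & 1\end{pmatrix},\qquad \det Dg_n=\det DP_n=1,$$ while $\phi_n$ is measure preserving by construction.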
For our specific situation, we choose $\tilde{\kappa_n}:\left[0,\frac{1}{q_n}\right]\longrightarrow\T^1$ as follows, and then extend it $\frac{1}{q_n}$-periodically to the whole $\mathbb{T}^1,$ \begin{equation}\label{eqn:3.2}\tilde{\kappa_{n}}(x) = \begin{cases} \frac{2q_n}{n^2\e_n^{(2)}}x \ \ \ &,x\in [0,\frac{\e_n^{(2)}}{2q_n}] \\ -\frac{2q_n}{n^2\e_n^{(2)}}x+\frac{2}{n^2} \ \ \ &,x\in [\frac{\e_n^{(2)}}{2q_n},\frac{\e_n^{(2)}}{q_n}]\\0 \ \ &,x\in [\frac{\e_n^{(2)}}{q_n},\frac{1}{q_n}]. \end{cases} \end{equation} Let $\kappa_n$ be the smooth approximation of $\tilde{\kappa_n}$ on $[0,1]$ obtained by convolving it with a mollifier (see \url{https://en.wikipedia.org/wiki/Mollifier}). Let $\rho$ be the standard mollifier on $\mathbb{R},$ given by $\rho(x)=\begin{cases} c\exp\left({\frac{1}{|x|^2-1}}\right)\ \ \ &, |x|<1 \\ 0 \ \ \ &, \text{otherwise} \end{cases}$, where $c$ is a constant such that $\int_{\mathbb{R}} \rho(x)\,dx= 1$. Then, $$\kappa_n(x)= \lim_{\delta\rightarrow 0} \kappa_n^{\delta}(x)= \lim_{\delta\rightarrow0} \delta^{-1}\int_{\mathbb{T}^1}\rho\left(\frac{x-y}{\delta}\right)\tilde{\kappa}_n(y)dy.$$ \remark The map $\kappa_n(x)= q_n x$ on $\T^1$ is considered in \cite{FSW} to control almost all the orbits of the space. \remark For minimality, the orbit of every point has to be dense. The map $\phi_n^m$ takes care of all the points inside $\T^2$ except the points whose whole orbit gets trapped inside the error zone (where we do not have any control), $\E_n^m$, of $\phi_n^m$. The map $P_n$ acts as a vertical translation so that such an orbit enters the minimality zone, and no whole orbit of a point gets trapped inside the error zone. Also, note that $P_n$ acts as the identity outside the region $\cup_{i=0}^{q_n-1}R_{n,i}.$ \remark\label{re:3.3a} Also note that $\|D^k\kappa_n\|_0 \leq \max\limits_{x\in [-1,1]}|\tilde{\kappa}_n|\cdot\|D^k\rho\|_0 \leq (\frac{2k\sqrt{18}}{\e})^{2k}\cdot k!\cdot q_n^{k}.$ \section{Convergence} There are some standard results on the closeness between maps constructed as conjugations of translations on the torus. The following two lemmas are identical to lemmas 3 and 4 in \cite{FSW} with minor to no modification; hence, we skip the proofs for brevity. \begin{lemma} \label{la:1a} Let $k\in \mathbb{N}$. For all $\alpha,\beta \in \mathbb{R}$ and all $h\in \text{Diff}^{\infty}(\mathbb{T}^2)$, we have the estimate $$d_k(h S_{\alpha}h^{-1},h S_{\beta}h^{-1}) \leq C_k \max\{\vertiii{h}_{k+1},\vertiii{h^{-1}}_{k+1}\}|\alpha- \beta|,$$ where $C_k$ is a constant that depends only on $k$. \end{lemma} \begin{lemma} \label{la:1b} For any $\epsilon>0,$ let $k_n$ be a sequence of natural numbers satisfying $\sum\limits_{n=1}^{\infty}\frac{1}{k_n}<\epsilon$. Suppose that for a Liouville number $\alpha$, there exists a sequence of rationals $\{\alpha_n\}$ that satisfies: \begin{equation}\label{eqn:5.5} |\alpha-\alpha_n|<\frac{1}{2^{n+1}k_nC_{k_n}q_n \vertiii{H_n}_{k_{n+1}}^{k_{n+1}}} \end{equation} where $C_{k_n}$ is the same constant as in lemma \ref{la:1a}. Then the sequence of diffeomorphisms $T_{n} = H_{n} \circ S_{\alpha_{n+1}}\circ H_{n}^{-1}$ converges to $T\in \text{Diff}^{\infty}(\mathbb{T}^2, \mu)$ in the $C^{\infty}$ topology.
Moreover, for any $m\leq q_{n+1},$ we have \begin{equation}\label{eq:4a} d_0(T^m,T_{n}^m) \leq \frac{1}{2^{n+1}}, \end{equation} \end{lemma} \begin{lemma} For any $k\in \mathbb{N},$ the conjugating diffeomorphisms defined in $(\ref{eq:3a})$ and $(\ref{eq:3d})$ satisfy the following norm estimates: \begin{enumerate} \item $\vertiii{h_n}_k\leq c_k(n,k).q_n^{2k_n^4+2k^2}$ , where $c_k(n,k)$ is a constant independent of $q_n.$ \item $\vertiii{H_n}_k \leq \hat{c}_k(n,k). q_n^{2k_n^5+2k^3+ k}$, where $\hat{c}_k(n,k)$ is a constant independent of $q_n.$ \item For $\alpha$ Liouville, there exists a sequence of rationals $\{\alpha_n\}$ satisfying {\eqref{eqn:5.5}}. \end{enumerate} \end{lemma} \begin{proof} The map $h_n$ is defined by \begin{align} h_n(x,y)&= g_n\circ \phi_n \circ P_n(x,y)\nonumber\\ &= ([\phi_n(x,y+\kappa_n(x))]_1 + \lfloor{nq_n}^{\sigma}\rfloor[\phi_n(x,y+\kappa_n(x))]_2 , [\phi_n(x,y+\kappa_n(x))]_2 ) \nonumber \end{align} By lemma \ref{le:2a} and remark \ref{re:3.3a}, we have the estimate: \begin{align} \vertiii{h_n}_k &\leq 2.(nq_n)^{k-1}.\vertiii{\phi_n}_k^k.\vertiii{\kappa_n}_k^k \nonumber\\ &\leq c_k(n,k).q_n^{2k_n^4+2k^2}\nonumber \end{align} Similarly, $\vertiii{H_n}_k= \vertiii{H_{n-1}\circ h_n}_k \leq \vertiii{H_{n-1}}_k^k\vertiii{h_n}_k^k $. Since the $k$-th order derivatives of $H_{n-1}$ are independent of $q_n$, we can conclude $\vertiii{H_{n}}_k \leq \hat{c}_k(n,k)q_n^{2k_n^5+2k^3+ k}.$\\ For $\alpha$ a Liouville number, we can choose a sequence of rationals $\alpha_n=\frac{p_n}{q_n}$ ($p_n, q_n$ coprime) that satisfies the following property: \begin{align} |\alpha-\alpha_n|& \leq \frac{1}{2^{n+1}k_nC_{k_n}q_n^{{2(k_n+1)^5+2(k_n+1)^3+(k_n+1)}}} \nonumber\\ &\leq \frac{1}{2^{n+1}k_nC_{k_n}q_n\vertiii{H_n}_{k_{n+1}}^{k_{n+1}}} \nonumber \end{align} \end{proof} \remark Finally, we have proven the estimate on the norms of the conjugation map $H_n$ as in \cite{FS}. Also, the existence of rationals satisfying (\ref{eqn:5.5}) guarantees the convergence of $T_n$ to $T\in \text{Diff}^{\infty}(\T^2,\mu)$ in lemma {\ref{la:1b}}. \section{Weak mixing, Minimality and Generic points} The proof of theorem A needs a couple of preliminary results. \subsection{A Fubini criterion for weak mixing} Here we state a few definitions and the criterion for weak mixing described in \cite{FS} for $\mathbb{T}^2.$ \begin{definition} A collection of disjoint sets $\eta_n$ on $\T^2$ is called a partial decomposition of $\T^2$. A sequence of partial decompositions $\eta_n$ converges to the decomposition into points (notation: $\eta_n \rightarrow \varepsilon$) if, for any measurable set $A$ and any $n$, there exists a measurable set $A_n$, which is a union of elements of $\eta_n$, such that $\lim_{n\rightarrow \infty} \mu(A \triangle A_n) = 0$ (here $\triangle$ denotes the symmetric difference). \end{definition} Recall the notion of $(\gamma, \delta,\epsilon)$-distribution of a horizontal interval in the vertical direction. \begin{definition} ($(\gamma, \delta,\epsilon)$-distribution):- A diffeomorphism $\Phi: M \longrightarrow M$ $(\gamma, \delta,\epsilon)$-distributes a horizontal interval $I \in \eta,$ where $\eta$ is a partial decomposition of $M$ (equivalently, $\Phi(I)$ is $(\gamma, \delta,\epsilon)$-distributed on $M$), if \begin{itemize} \item $J=\pi_{y}(\Phi(I))$ is an interval with $1-\delta \leq \lambda(J) \leq 1,$ where $\pi_y$ is the projection map onto the $y$ coordinate.
\item $\Phi(I) \subseteq K_{c,\gamma}= {[c,c+\gamma]\times J}$ for some $c$ (i.e. $\Phi(I)$ is almost vertical); \item for any interval $\tilde{J} \subseteq J$ we have: ${\left\lvert \frac{\lambda(I\cap \Phi^{-1}(\mathbb{T}\times \tilde{J}))}{\lambda(I)} - \frac{\lambda(\tilde{J})}{\lambda(J)}\right\rvert} \leq \epsilon \frac{\lambda(\tilde{J})}{\lambda(J)}.$ \end{itemize} \end{definition} \begin{prop}[\cite{FS}, Proposition 3.9] \label{eq:pr1} Assume $T_n = H_n \circ S_{\alpha_{n+1}}\circ H_n^{-1}$ is the sequence of diffeomorphisms constructed by (\ref{eq:3b}), (\ref{eq:3c}) and (\ref{eq:3d}) such that for all $n,$ $\|DH_{n-1}\|_0 < \ln q_n $ holds. Suppose $\lim_{n\to\infty}T_n = T$ exists. If there exists a sequence of natural numbers $\{\mathfrak{m}_n\}$ such that $d_0(T^{\mathfrak{m}_n}, T_n^{\mathfrak{m}_n})<\frac{1}{2^n}$, and a sequence of standard partial decompositions $\eta_n$ of $M$ into horizontal intervals of length less than $\frac{1}{q_n}$ satisfying \begin{itemize} \item[1.] $\eta_n\to \varepsilon$ \item[2.] for $I_n\in \eta_n$, the diffeomorphism $\Phi_n = \phi_n \circ S_{\alpha_{n+1}}^{\mathfrak{m}_n} \circ \phi_n^{-1}$ uniformly $(\frac{1}{nq_n^\sigma},\frac{1}{n},\frac{1}{n})$-distributes the interval $I_n$. \end{itemize} Then the limiting diffeomorphism $T$ is weak mixing. \end{prop} \subsection{Proof for weak mixing}\label{sec:5.1:b} The specific scheme that we describe here builds on the construction in \cite{FS}. First, consider a subset of $\mathbb{T}^2$ given by \begin{align}\label{eq:5a} \E_n^w=\left(\bigcup\limits_{k=0}^{2q_n-1}\left[\frac{k}{2q_n}-\frac{2\e_n^{(1)}}{q_n},\frac{k}{2q_n}+\frac{2\e_n^{(1)}}{q_n}\right] \times \mathbb{T}^1\right)\bigcup\left(\bigcup\limits_{t=0}^{r-1} \mathbb{T}^1 \times \left[\frac{t}{r}-2\e_n^{(1)}, \frac{t}{r}+2\e_n^{(1)} \right]\right).
\end{align} \subsubsection{Action of \texorpdfstring{$\phi_n$}{Lg}} Consider the interval $I_{n,j} \subseteq D_{n,j}^{t,1}$, for some fixed $t$ and $j$, of the form $I_{n,j} = I_{n,j}^0 \times \{s\}$ where $s \in \left[\frac{t}{r},\frac{t+1}{r}\right]$ and \begin{align}\label{eq:5.1a} I_{n,j}^0 = \left[\frac{j}{q_n} + \frac{2}{3nq_nr},\frac{j}{q_n} + \frac{1}{2q_n}- \frac{2}{3nq_nr}\right] \end{align} From our construction of $\phi_n$, the image of $I_{n,j}$ under both $\phi_n$ and $\phi_n^{-1}$ is an interval of the type $\{\theta \} \times\left[\frac{t}{r}+\frac{2}{3nr},\frac{t+1}{r} -\frac{2}{3nr} \right]$ for some $\theta \in I_{n,j}^0.$ \subsubsection{Choice of \texorpdfstring{$\mathfrak{m}_n$}{Lg}- mixing sequence}\label{sec:5.1a} Consider $\mathfrak{m}_n= \min\left\{m\leq q_{n+1} \ | \ \inf_{k\in \Z} \left \lvert m\frac{q_np_{n+1}}{q_{n+1}}-\frac{1}{2}+k \right\rvert \leq \frac{q_{n}}{q_{n+1}}\right\}$ and $\mathfrak{a}_n = (\mathfrak{m}_n\alpha_{n+1}- \frac{1}{2q_n}\mod\frac{1}{q_n})$, as defined in Fayad's paper for the torus case; with the growth assumption $q_{n+1} > 10n^2q_n$, this yields: $$|\mathfrak{a}_n| \leq \frac{1}{q_{n+1}} \leq \frac{1}{10n^2q_n}.$$ Further, if we define the precise domain ${\overline{D}_{n,j}^{t,1}} = I_{n,j}^0\times \left[\frac{t}{r},\frac{t+1}{r}\right] \subset D_{n,j}^{t,1}$ for some $j\in \mathbb{Z}$, then we have $S_{\alpha_{n+1}}^{\mathfrak{m}_n}(\overline{D}_{n,j}^{t,1}) \subset D_{n,j'}^{t,2}$ for some $j'\in \mathbb{Z}.$ \subsubsection{Choice of decomposition \texorpdfstring{$\eta_n^t$}{Lg}}\label{sec:5.1b} For fixed $t\in\{0,1,\ldots,r-1\}$, we consider the partial decomposition $\eta_n^t$ of the set $N^t$, outside $\E_n^w$, which consists of two types of horizontal intervals: $I_{n,j}=I_{n,j}^0 \times \{s\} \subset D_{n,j}^{t,1}$ and $\overline{I}_{n,j}= \overline{I}_{n,j}^0 \times \{s'\} \subset D_{n,j}^{t,2} $ where $s, s' \in \left[\frac{t}{r},\frac{t+1}{r}\right]$, $I_{n,j}^0$ is given by (\ref{eq:5.1a}), and \begin{align} \overline{I}_{n,j}^0 &= \left[\frac{j}{q_n} + \frac{1}{2q_n} - \frac{2}{3nq_nr} -\mathfrak{a}_n,\frac{j+1}{q_n} - \frac{2}{3nq_nr}-\mathfrak{a}_n\right]. \label{eq:02} \end{align} Note that for any element $I _n\in \eta_n^t,$ we have $\pi_y(\phi_n(I_n))\subset \left[\frac{t}{r},\frac{t+1}{r}\right].$ Since the lengths of the intervals go to zero and $\sum_{I_n\in \eta_n}\lambda(I_n)\geq 1 - \lambda(\E_n^w) \geq 1 - \frac{4}{n}\rightarrow 1,$ it follows that $\eta_n^t\rightarrow \varepsilon$ as $n\rightarrow\infty.$ \begin{lemma}\label{le:5.1b} For any $t\in \{0,1,...,r-1\}$, the map $\Phi_n = \phi_n\circ P_n\circ S_{\alpha_{n+1}}^{\mathfrak{m}_n} \circ P_n^{-1}\circ \phi_n^{-1}$ transforms the elements of the partial decomposition, i.e. $I_{n,j}= I_{n,j}^0\times \{s\} \in \eta_n^t$, into vertical intervals of the form $\{\theta\} \times[\frac{t}{r}+\frac{2}{3nr},\frac{t+1}{r}-\frac{2}{3nr}]$ for some $\theta \in I_{n,j}^0$ (see Figure 1).
\end{lemma} \begin{proof} From our construction of $\phi_n\circ P_n$, an interval $I_{n,j}= I_{n,j}^0\times \{s\} \subset D_{n,j}^{t,1}$ where $s\in \left[\frac{t}{r}+\frac{2}{3nr},\frac{t+1}{r} -\frac{2}{3nr} \right]$, we have $P_n^{-1}\circ \phi_n^{-1}(I_{n,j})= \{\theta \} \times\left[\frac{t}{r}+\frac{2}{3nr},\frac{t+1}{r} -\frac{2}{3nr} \right]$ for some $\theta\in I_{n,j}^0.$\\ With the specific choice of sequence $\mathfrak{m}_n$ and the condition mentioned in section (\ref{sec:5.1a}), we get $$ S_{\alpha_{n+1}}^{\mathfrak{m}_n}\circ P_n^{-1}\circ \phi_n^{-1}(I_{n,j}) = \{\theta'\}\times \left[\frac{t}{r}+\frac{2}{3nr},\frac{t+1}{r} -\frac{2}{3nr} \right] \subset D_{n,j'}^{t,2},$$ for some $\theta'\in\mathbb{T}$ and $j' \in \mathbb{Z}$. Since $\kappa_n$ acts as an identity on $[\frac{\e_n^{(2)}}{q_n}, \frac{1}{q_n}]$ and the fact $\phi_n$ acts as an identity on $D_{n,j'}^{t,2},$ concludes the claim. Similarly, for the interval $\overline{I}_{n,j} = \overline{I}_{n,j}^0 \times \{s\} \subset D_{n,j}^{t,2},$ we deduced that $$\phi_n\circ P_n\circ S_{\alpha_{n+1}}^{\mathfrak{m}_n} \circ P_n^{-1}\circ \phi_n^{-1}(\overline{I}_{n,j}) = \{\theta' \} \times \left[\frac{t}{r}+\frac{1}{3nr},\frac{t+1}{r} -\frac{1}{3nr} \right] \subset D_{n,j'}^{t,1}$$ for some $j'\in \mathbb{Z}$ and $\theta'\in \mathbb{T}.$ \end{proof} \subsection{A criterion for minimality} The aim of this section is to deduce a criterion for minimality for our explicit construction. Precisely, it allows us to understand the action $\phi_n$ on the region $R_{n,i}$ explained in step 3, section \ref{eq:sec1}. Here, we define the following partition of set $R_{n,i}$ excluding the set $\E_n^m$, for any natural number $l_n$, as follows \begin{align} A_{i,k}^n &:= \left[\frac{i}{q_n}+\frac{2\e_n^{(4)}}{q_n}+ \frac{k(\e_n^{(2)}-4\e_n^{(4)})}{l_nq_n}, \frac{i}{q_n}+ \frac{2\e_n^{(4)}}{q_n}+ \frac{(k+1)(\e_n^{(2)}-4\e_n^{(4)})}{l_nq_n}\right)\times \left[2\e_n^{(4)},1- 2\e_n^{(4)} \right] \nonumber\\ B_{i,k}^n &:= \left[\frac{i}{q_n}+\frac{2\e_n^{(4)}}{q_n}, \frac{i}{q_n}+ \frac{\e_n^{(2)}}{q_n}-\frac{2\e_n^{(4)}}{q_n}\right)\times \left[2\e_n^{(4)}+ \frac{k(1-4\e_n^{(4)})}{l_n},2\e_n^{(4)}+ \frac{(k+1)(1-4\e_n^{(4)})}{l_n}\right]\nonumber. \end{align} Let's denote the family of these subsets by $\A_n=\{A_{i,k}^n, \ \ i= 0,\ldots,q_n-1, \ k= 0,\ldots ,l_n-1\}$ and $\B_n= \{B_{i,k}^n, \ \ i= 0,\ldots,q_n-1, \ k= 0,\ldots ,l_n-1\}$. \remark Note that under the transformation $\phi_n$, the elements of $\A_n$ map to the elements of $\B_n.$ In particular, by (\ref{eqn:3.1.2}), we get $\phi_n^m(A_{i,k}^n)= B_{i,k}^n$ for all $i,k$ as defined above. Since $R_{n,i}$ lies inside $\Sigma_1$ and the maps $\phi_{n}^w,\phi_{n}^g$ act as an identity on $\Sigma_1$. 
Therefore $\phi_n(A_{i,k}^n)= B_{i,k}^n$ for all such $i,k$. \begin{lemma} Let $x\in \T^2$ be arbitrary and suppose $q_{n+1}>l_nq_n^2$. Then the orbit $\{S_{\alpha_{n+1}}^k(x)\}_{k=0}^{q_{n+1}-1}$ intersects every set $P_n^{-1}(A_{i_1,i_2}^n)$. \end{lemma} \begin{proof} Fix $x=(x_1,x_2)\in \T^2$, $i_1\in\{0,\ldots, q_n-1\}$ and $i_2\in\{0,\ldots, l_n-1\}.$ The map $P_n$ acts as a vertical translation on $\T^2$, and with the choice of the function $\kappa_n$ (see (\ref{eqn:3.2})), the set $A_{i_1,i_2}^n$ under the map $P_n^{{-1}}$ satisfies $c\leq \pi_y(P_n^{-1}(A_{i_1,i_2}^n))\leq c+ \gamma,$ where $c=\frac{2\e_n^{(4)}}{q_n}+ \frac{i_2(\e_n^{(2)}-4\e_n^{(4)})}{l_nq_n}$ and $\gamma= \frac{(\e_n^{(2)}-4\e_n^{(4)})}{n^2\e_n^{(2)}l_n}.$ Since $[2\e_n^{(4)},1-2\e_n^{(4)}]\subseteq \pi_y(A_{i_1,i_2}^n)$, we obtain $\pi_y(P_n^{-1}(A_{i_1,i_2}^n))=\T^1.$\\ Since $\{k\alpha_{n+1}\}_{k=0,1,\ldots,q_{n+1}-1}$ is equidistributed on $\T^1$ and $S_{\alpha_{n+1}}$ acts as a horizontal translation on $\T^2$, there exists $k \in \{0,1,\ldots,q_{n+1}-1\}$ such that $S_{\alpha_{n+1}}^{k}(x)\in P_n^{-1}(A_{i_1,i_2}^n )$; in other words, there exists $k \in \{0,1,\ldots,q_{n+1}-1\}$ such that $x_1+k\alpha_{n+1}\in \pi_x(P_n^{-1}(A_{i_1,i_2}^n))$ and $x_2\in \pi_y(P_n^{-1}(A_{i_1,i_2}^n)).$ \end{proof} \begin{prop} \label{pr:2a} \begin{enumerate} \item For every $z\in \T^2$, the iterates $\{\phi_{n}\circ P_n \circ S_{\alpha_{n+1}}^k\circ H_{n}^{-1}(z)\}_{k=0,1,\ldots,q_{n+1}-1}$ meet every set of the form $\left[\frac{i}{q_n},\frac{i+1}{q_n}\right]\times \left[\frac{j}{l_n},\frac{j+1}{l_n}\right]$, where $l_n\in \N$ satisfies (\ref{eq:l_n}). \item Suppose the sequence of diffeomorphisms $T_{n} = H_{n}\circ S_{\alpha_{n+1}}\circ H_{n}^{-1}$ converges to $T\in \text{Diff}^{\infty}(\mathbb{T}^2,\mu)$ in the $C^{\infty}$ topology and satisfies the proximity condition $d_0(T_{n}^k,T^k)<\frac{1}{2^n} \ \forall k= 0,\ldots,q_{n+1}-1$. Then the limiting diffeomorphism $T$ is minimal. \end{enumerate} \end{prop} \begin{proof} Let $x\in \mathbb{T}^2$, $i\in \{0,1,\ldots,q_n-1\}$ and $j\in \{0,1,\ldots,l_n-1\}$ be arbitrary. Note that $\alpha_{n+1}$ is chosen such that $q_{n+1}>{l_nq_n^2}$, so by the above lemma there exists $k \in \{0,1,\ldots,q_{n+1}-1\}$ such that $S_{\alpha_{n+1}}^{k}(x)\in P_n^{-1}(A_{i,j}^n ).$ Under the conjugation map, we have \begin{align} \phi_{n}\circ P_n \circ S_{\alpha_{n+1}}^k(x) &\in \phi_n(A_{i,j}^n)= B_{i,j}^n ;\nonumber \\ B_{i,j}^n &\subset \left[\frac{i}{q_n},\frac{i+1}{q_n} \right]\times \left[\frac{j}{l_n},\frac{j+1}{l_n}\right]\label{eq:m2} \end{align} This shows that for $x=H_{n}^{-1}(z)$, the orbit $\{\phi_n\circ P_{n}\circ S_{\alpha_{n+1}}^k\circ H_{n}^{-1}(z)\}_{k=0,1,\ldots,q_{n+1}-1}$ meets every set of the type $\left[\frac{i}{q_n},\frac{i+1}{q_n}\right]\times \left[\frac{j}{l_n},\frac{j+1}{l_n}\right].$ Also, note that the collection of such sets $\left[\frac{i}{q_n},\frac{i+1}{q_n}\right]\times \left[\frac{j}{l_n},\frac{j+1}{l_n}\right]$ for $0\leq i< q_n,\ 0\leq j<l_n$ covers the whole space $\mathbb{T}^2$ and $$ diam\left(H_{n-1}\circ g_n\left(\left[\frac{i}{q_n},\frac{i+1}{q_n}\right]\times \left[\frac{j}{l_n},\frac{j+1}{l_n}\right]\right)\right)\leq \|DH_{n-1}\|_0.\|Dg_n\|.\frac{2}{l_n}$$ which goes to $0$ as $n \rightarrow\infty$ (by condition (\ref{eq:l_n})).
Hence, for $\varepsilon>0$ and $y\in \mathbb{T}^2$ there is $n_1\in \mathbb{N}:$ there exist a set $$H_{n-1}\circ g_n \left(\left[\frac{i}{q_n},\frac{i+1}{q_n}\right]\times \left[\frac{j}{l_n},\frac{j+1}{l_n}\right]\right) \subset B_{\frac{\varepsilon}{2}}(y) \ \forall \ n> n_1$$ For $H_n= H_{n-1}\circ g_n\circ \phi_n \circ P_n$, we use the condition of convergence for diffeomorphism $\{T_n\}$ and $d_0(T_{n}^k,T^k)<\frac{1}{2^n}$. Hence, we can conclude that for arbitrary $x, y\in \mathbb{T}^2$ and $\varepsilon>0,$ there exist $n_2\in \mathbb{N}$ such that $d_0(T_{n}^k,T^k)<\frac{\varepsilon}{2} \ \forall \ k=0,...q_{n}-1; n> n_2.$ Assuming $n> \max\{n_1,n_2\},$ there is a set $H_{n-1}\circ g_n\left(\left[\frac{i}{q_n},\frac{i+1}{q_n}\right]\times \left[\frac{j}{l_n},\frac{j+1}{l_n}\right]\right) \subset B_{\frac{\varepsilon}{2}}(y)$ and $T_{n}^k(x)\in H_{n-1}\circ g_n\left(\left[\frac{i}{q_n},\frac{i+1}{q_n}\right]\times \left[\frac{j}{l_n},\frac{j+1}{l_n}\right]\right) \subset B_{\frac{\varepsilon}{2}}(y)$ for some $k<q_{n+1}$. With the triangle inequality, we have \begin{align} d(T^k(x),y) &\leq d(T^k(x),T_{n}^{k}(x))+ d(T_{n}^{k}(x),y)\nonumber \\ &\leq d_0(T^k,T_{n}^k) + \frac{\varepsilon}{2} < \varepsilon. \nonumber \end{align} i.e. $T^k(x)\in B_{\varepsilon}(y)$ and which implies T is minimal. \end{proof} \subsection{ A Generic Measure} The following results allow us to show the existence of the generic points residing inside the region ${\mathcal{G}}_{n}= \cup_{i=0}^{q_n-1}B_{n,i}$. Denote $\mathcal{Y}_n=\cup_{i=0}^{q_n-1}Y_{n,i}$ (defined in section \ref{eq:sec1}, step 2) and $\mathcal{D}_{n}=\T^2$. First, we introduce the following partitions of the sets $\mathcal{G}_{n}, \mathcal{Y}_n$ and $\mathcal{D}_{n}$ for any natural number sequence $s_n>q_n$, by the family of subsets ${G}_{i,j}^n$, ${Y}_{i,j}^n$, and ${\Delta}_{i,j}^n$ respectively, for $0\leq i< q_n,\ 0\leq j< s_n$: \begin{align} {G}_{i,j}^n &:=\left [\frac{i}{q_n} + \frac{2\e_n^{(2)}}{q_n}+\frac{j(1-4\e_n^{(2)})}{s_nq_n} , \frac{i}{q_n}+\frac{2\e_n^{(2)}}{q_n}+\frac{(j+1)(1-4\e_n^{(2)})}{s_nq_n}\right] \times \left[2\e_n^{(2)},\e_n^{(3)} \right]\nonumber\\ {Y}_{i,j}^n &:=\left [\frac{i}{q_n} + \frac{1-\e_n^{(3)}}{q_n}, \frac{i}{q_n} + \frac{1-2\e_n^{(2)}}{q_n}\right] \times \left[2\e_n^{(2)} + \frac{j(1-4\e_n^{(2)})}{s_n}, 2\e_n^{(2)} + \frac{(j+1)(1-4\e_n^{(2)})}{s_n}\right]\nonumber\\ {\Delta}_{i,j}^n &:=\left [\frac{i}{q_n} ,\frac{i+1}{q_n}\right] \times \left[\frac{j}{s_n}, \frac{j+1}{s_n} \right]. \label{eq:5.3a} \end{align} \remark\label{re:5.3b}For $x\in \T^1\times(2\e_n^{(2)},\e_n^{(3)})$, since the sequence $\{{k\alpha_{n+1}}\}$ equidistributed over $\T^1$, the orbit of $x$ (say $\mathcal{O}^x$) under the $S_{\alpha_{n+1}}$ equidistributed among the element of $\mathcal{G}_{n}$. There are at most $(4\e_n^{(2)}\frac{q_{n+1}}{q_n})$ exceptional points that are trapped inside the error region $\E_n^g$ (see remark \ref{re:3.5}). Therefore, any element $G_{i,j}^n\in \mathcal{G}_n$ captures at least $\left(1-4\e_n^{(2)}\right)\frac{q_{n+1}}{s_nq_n}$ points of the orbit $\mathcal{O}^x$. \remark \label{re:5.3a} Note that under the transformation $\phi_n,$ the elements of $\mathcal{G}_{n}$ map to the elements of $\mathcal{Y}_n.$ In particular, $\phi_n^g(G_{i,j}^n)= Y_{i,s_n-j}^n$ and conversely, $\phi_n^g(Y_{i,j}^n)= G_{i,s_n-j}^n$ for all $i,j$. By construction, the maps $\phi_{n}^w,\phi_{n}^m$ and $P_n$ act as an identity on the set $\mathcal{G}_{n}$. 
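Before stating the next proposition, we record the elementary diameter bound that it uses: each cell $\Delta_{i,j}^n$ is a rectangle with sides $\frac{1}{q_n}$ and $\frac{1}{s_n}$, so, since $s_n>q_n$, $$\mathrm{diam}(\Delta_{i,j}^n)=\sqrt{\frac{1}{q_n^2}+\frac{1}{s_n^2}}\leq \frac{\sqrt{2}}{q_n}.$$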
\begin{prop}\label{pr:1a} For $\epsilon>0$, consider a $(\frac{\sqrt{2}}{q_{n}}, \epsilon)$-uniformly continuous function $\psi : \mathbb{T}^2 \longrightarrow \mathbb{R}$, i.e. $\psi(B_\frac{\sqrt{2}}{q_{n}}{(x)})\subset B_{\epsilon}(\psi(x))$. Then any point $x \in \T^1\times (2\e_n^{(2)},\e_n^{(3)})$ satisfies the following estimate: \begin{align} \left|\frac{1}{q_{n+1}} \sum_{i=0}^{q_{n+1}-1} \psi(\phi_n\circ P_n \circ S_{\alpha_{n+1}}^{i}(x)) - \int_{\T^2} \psi d\mu \right| \leq 4\epsilon + \frac{2}{nr}\|\psi\|_0 \end{align} \end{prop} \begin{proof} Fix $x\in \T^1 \times (2\e_n^{(2)},\e_n^{(3)}) $. Since the orbit of $x$ under $S_{\alpha_{n+1}}$ is almost entirely trapped inside the elements of $\mathcal{G}_{n},$ there exists an $i_0\in \mathbb{N}$ such that $S_{\alpha_{n+1}}^{i_0}(x) \in G_{i,s_n-j}^n$ for some $i,j \in \mathbb{N}.$ Under the action of $\phi_n$, by Remark (\ref{re:5.3a}) and (\ref{eq:3a}), we have $$\phi_n\circ P_n\circ S_{\alpha_{n+1}}^{i_0}(x) \in Y_{i,j}^n \subset \Delta_{i,j}^n$$ Therefore for any $y\in \Delta_{i,j}^n$, we have $$d(\phi_n\circ P_n \circ S_{\alpha_{n+1}}^{i_0}(x),y )\leq diam(\Delta_{i,j}^n)\leq \sqrt{2}/q_n.$$ Using the hypothesis on $\psi$, we have $|\psi(\phi_n\circ P_n \circ S_{\alpha_{n+1}}^{i_0}(x))- \psi(y)|< 2\epsilon.$ Taking the average over all $y \in \Delta_{i,j}^n $ in the above inequality, we get $$|\psi(\phi_n\circ P_n \circ S_{\alpha_{n+1}}^{i_0}(x) )-\frac{1}{\mu(\Delta_{i,j}^n)} \int_{\Delta_{i,j}^n} \psi(y) d\mu|< 2\epsilon$$ Let us denote $J_{\Delta} = \{ k\in {0,1,\ldots,q_{n+1}-1}\ : \phi_n\circ P_n \circ S_{\alpha_{n+1}}^{k}(x) \in \Delta \}$ for all $\Delta\in \mathcal{D}_n.$ By remark (\ref{re:5.3b}), we have $|J_{\Delta}|>\left(1- \frac{2}{nr}\right)\frac{q_{n+1}}{s_nq_n}$ (using $4\e_n^{(2)}<\frac{2}{nr}$). Now using the count on $|J_{\Delta}|$ and the triangle inequality in the above inequality, we get \begin{align} \left|\frac{1}{q_{n+1}} \sum_{i \in J_{\Delta}} \psi(\phi_n\circ P_n \circ S_{\alpha_{n+1}}^{i}(x) ) - \int_{\Delta_{i,j}^n} \psi d\mu\right| &< 2\epsilon.\mu(\Delta_{i,j}^n) + \frac{2}{nr}(\|\psi\|_0 + 2\epsilon)\mu(\Delta_{i,j}^n) \nonumber\\ &< \left(4\epsilon + \frac{2}{nr}\|\psi\|_0\right)\mu(\Delta_{i,j}^n) \end{align} Since the last inequality holds for arbitrary $\Delta \in \mathcal{D}_n,$ we conclude \begin{align} \left|\frac{1}{q_{n+1}} \sum_{i=0}^{q_{n+1}-1} \psi(\phi_n\circ P_n \circ S_{\alpha_{n+1}}^{i}(x)) - \int_{\T^2} \psi d\mu \right| &\leq \left| \sum\limits_{\Delta\in \mathcal{D}_n}\left( \frac{1}{q_{n+1}} \sum_{i \in J_{\Delta}} \psi(\phi_n\circ P_n \circ S_{\alpha_{n+1}}^{i}(x)) - \int_{\Delta} \psi d\mu \right)\right| \nonumber \\ & \ \ \ + \frac{q_n}{q_{n+1}}||\psi||_0 \nonumber\\ & \leq 4\epsilon + \frac{2}{nr}\|\psi\|_0\nonumber \end{align} \end{proof} \noindent\textbf{Proof of Theorem A:} We will show that the map $T \in \text{Diff}^{\infty}(\mathbb{T}^2,\mu)$, obtained by (\ref{eq:3a}), (\ref{eq:3b}), and (\ref{eq:3c}) for any Liouville $\alpha$ satisfying (\ref{eqn:5.5}), is minimal, has $r$ distinct weak mixing measures $\mu_t$, and has the Lebesgue measure $\mu$ as a generic measure. Let us fix a countable set of Lipschitz functions $\Psi = \{ \psi_i\}_{i\in \mathbb{N}}$ which is dense in $C^0(\mathbb{T}^2,\mathbb{R}).$ Denote by $L_n$ a uniform Lipschitz constant for $\psi_1,\psi_2,...,\psi_n.$ Choose $q_{n+1}= l_{n}k_nq_n^2$ large enough by choosing $l_n$ arbitrarily large such that it satisfies: \begin{align} l_n> n^2\,
\|DH_{n-1}\|_{n-1}\,\|Dg_n\|_0 \max_{0\leq i\leq n} L_n. \label{eq:l_n} \end{align} This assumption implies that $\psi_1H_{n-1}g_n,\psi_2H_{n-1}g_n,...,\psi_nH_{n-1}g_n$ are $(\frac{\sqrt{2}}{q_n}, \frac{2}{nr})$-uniformly continuous. {\it{ claim 1: The point $x=(0,\frac{\e_n^{(3)}-2\e_n^{(2)}}{2})$ is a generic point for the Lebesgue measure $\mu$ on $\T^2.$}}\\ Using the fact that $h_{n}$ is measure preserving and acts as the identity on the boundary of the unit square, precisely $h_{n}(x)= x$ for all $n$, and that the $g_n$'s act as horizontal translations on $\T^2$, we get $H_n^{-1}(x)= x'\in \T^1 \times (2\e_n^{(2)},\e_n^{(3)})$. Now applying proposition \ref{pr:1a} with $\epsilon = \frac{2}{nr},$ $1\leq k\leq n$, and for $x'\in \mathbb{T}^1\times (2\e_n^{(2)},\e_n^{(3)}) $, we get \begin{align} \left|\frac{1}{q_{n+1}}\sum_{i=0}^{q_{n+1}-1} \psi_k ({H}_{n}S_{{\alpha}_{n+1}}^i x') - \int_{\mathbb{T}^2} \psi_k {H_n} d\mu \right| < \frac{2}{nr} \| \psi_k\|_0 + \frac{8}{nr}.\label{eq:5b} \end{align} Using relation (\ref{eq:3b}) and the convergence estimate (\ref{eq:4a}), this implies that for every $\psi_k\in \Psi:$ $$\left|\frac{1}{q_{n+1}}\sum_{i=0}^{q_{n+1}-1} \psi_k(T^i x) - \int_{\mathbb{T}^2} \psi_k d\mu \right| < \frac{2}{nr} \|\psi_k\|_0 + \frac{8}{nr}+ \frac{1}{2^{n+1}}. $$ Using the triangle inequality, we obtain the claim, as $x$ is a generic point for $\mu$: $$\lim_{N\longrightarrow \infty}\frac{1}{N} \sum_{i=0}^{N-1} \psi_k(T^ix)\longrightarrow \int_{\mathbb{T}^2} \psi_k d\mu. $$ In order to prove that the map $T$ is weak mixing w.r.t. an invariant measure $\mu_t$, we will apply proposition \ref{eq:pr1} on each set $N^t,(t=0,\ldots, r-1)$ which supports $\mu_t$ (see (\ref{eq:3.1b})). For that, consider the sequence $(\mathfrak{m}_n)$ and the decomposition $\eta_n^t$ described in sections $(\ref{sec:5.1a}-\ref{sec:5.1b})$; it is enough to show that $\eta_n^t\rightarrow \varepsilon$ and that the diffeomorphism $\Phi_n(I_n) = \phi_n\circ P_n\circ S_{\alpha_{n+1}}^{\mathfrak{m}_n} \circ P_n^{-1}\circ \phi_n^{-1}(I_n)$ is $(0,2/3q_n,0)$-distributed for any $I_n \in \eta_n.$ Clearly, $\eta_n\rightarrow \varepsilon$, since $\eta_n$ consists of intervals each of length less than $1/q_n.$ By lemma (\ref{le:5.1b}), for any $I_n\in \eta_n^t$, $J = \pi_{y}(\Phi_n(I_n)) = \left[ \frac{t}{r}+\frac{2}{3nr},\frac{t+1}{r}-\frac{2}{3nr}\right]$ and $\Phi_n(I_n)$ is a vertical interval. Hence we take $\delta = 2/3n$ and $\gamma = 0$. Finally, the restriction of $\Phi_n$ to $I_n$ being an affine map verifies the condition for $\epsilon = 0.$ Therefore the map $T$ is weak mixing w.r.t. the measure $\mu_t (t=0,\ldots, r-1)$. One can refer to \cite{FS} for a more detailed proof. \\ The minimality of the map $T$ has been proved in proposition \ref{pr:2a}, and this completes the proof. \remark The measure $\mu =\frac{1}{r}(\mu_0 +\mu_1+\ldots+\mu_{r-1})$ is a nonergodic Lebesgue measure but a generic measure on $\T^2$. \section{Construction of the Generic sets} In order to prove theorem C and theorem D, we construct a $T\in \text{Diff}^{\infty}(\T^2, \mu)$ using the approximation by conjugation scheme as in the last section, but we modify the combinatorics in the above setup to get the desired result. First, we define the combinatorics such that the set $\mathrm{B}\supseteq \{0\} \times C$, where $C$ is the middle third Cantor set, consists of all the generic points of the system and the set $\mathrm{NB}\supseteq \{0\} \times C^c$, where $C^c= [0,1]\backslash C$, contains all the non-generic points.
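For orientation, recall that $\text{dim}_H(C)=\frac{\log 2}{\log 3}=\log_3 2\approx 0.631$, so by the product rule (\ref{def:1b}) the set $\mathrm{G}=\T^1\times C$ introduced below has $\text{dim}_H(\mathrm{G})=1+\log_3 2$; these are exactly the two bounds appearing in theorem C.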
\subsection{Explicit set-up}\label{sec:6a} Consider the following collection of disjoint subsets of $\T^2:$ $\T^2 = (\mathrm{G} \cup \mathrm{NG})$ such that \begin{align} \mathrm{G}&= \bigcap_{n\geq 1} \mathrm{G}_n = \T^1 \times C ,\ \ \text{where} \ \mathrm{G}_{n} = \T^1 \times \bigcup_{l=0}^{2^{n}-1} I_{l}^n , \label{eq:6a} \ \ \\ \mathrm{NG} &= \bigcup_{n\geq 1} \mathrm{NG}_n = \T^1 \times ([0,1]\backslash C), \ \ \text{where} \ \ \mathrm{NG}_{n} = \T^1 \times\bigcup_{k=0}^{n-1} \bigcup_{l=0}^{2^{k-1}-1} J_{l}^k , \label{eq:6.1.b} \ \ \end{align} where $I_{l}^n$ and $J_l^n$ are intervals of $[0,1]$ as defined in section \ref{sec:2.2a}. We split the interval $J_0^1$ into two halves as $J_0^1= \hat{J}_{0}^1\cup \hat{J}_{1}^1$, where $\hat{J}_{0}^1= \left(\frac{1}{3},\frac{1}{2}\right)$ and $\hat{J}_{1}^1= \left(\frac{1}{2},\frac{2}{3}\right)$.\\ Additionally, we introduce the following partition of $\T^2$ for any natural number sequence $q_n$ and $s_n> q_n$ as follows: \begin{align} \mathrm{G}_{n} &:=\left\{\mathcal{I}_{i_1,i_2}^n=\left[\frac{i_1}{s_nq_n},\frac{i_1+1}{s_nq_n}\right) \times I_{i_2}^n \ \ : 0\leq i_1< s_nq_n, 0\leq i_2< 2^n-1 \right\}, \\ {\mathrm{NG}}_{n} &:= \left.\begin{cases} \mathcal{J}_{i_1,i_2}^{n,k}=\left[\frac{i_1}{s_nq_n},\frac{i_1+1}{s_nq_n}\right) \times J_{i_2}^k; \ \ \ 2\leq k\leq n, \ 0< i_2< 2^{n-1}-1, \\ \mathcal{J}_{i_1,i_2'}^{n,1}=\left[\frac{i_1}{s_nq_n},\frac{i_1+1}{s_nq_n}\right) \times \hat{J}_{i_2'}^1 \ \ : \ 0\leq i_1< s_nq_n, \ i_2'= 0,1 \end{cases} \right\},\\ \mathrm{V}_{n} &:=\bigg\{{\mathcal{V}}_{i_1,i_2,i_3}^n =\left [\frac{i_1}{q_n} +\frac{i_2}{3^nq_n} ,\frac{i_1}{q_n}+\frac{i_2+1}{3^nq_n}\right) \times \left[\frac{i_3}{s_n} , \frac{i_3+1}{s_n} \right) \ \ : 0\leq i_1< q_n, \nonumber \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad \qquad \qquad \qquad 0\leq i_2< 2^n-1,\ 0\leq i_3< s_n\bigg\}, \label{eq:6h} \\ \mathrm{W}_{n} &:=\left. \begin{cases} \mathcal{W}_{i_1,i_2}^{n,k} =\left [\frac{i_1}{q_n} +\frac{2^k}{3^kq_n} ,\frac{i_1}{q_n}+\frac{2^k}{3^kq_n}+\frac{2^{k-1}}{3^{k}q_n}\right) \times \left[\frac{i_2}{s_n2^{k-1}} , \frac{i_2+1}{s_n2^{k-1}} \right); \ 2\leq k \leq n; \\ \mathcal{W}_{i_1,i_2'}^{n,1} =\left [\frac{i_1}{q_n} +\frac{2}{3q_n} ,\frac{i_1+1}{q_n}\right) \times \left[\frac{i_2'}{2s_n} , \frac{i_2'+1}{2s_n} \right) : 0\leq i_1< q_n, \ 0\leq i_2< s_n, i_2'=0, 1 \end{cases}\right\}. \end{align} \subsubsection{The Conjugation map \texorpdfstring{$\overline{\phi}_n$}{Lg}} Now we define the following permutation maps $\widetilde{\phi}_n:\T^2\longrightarrow\T^2$ of the above partition $\mathrm{G}_n\cup\mathrm{NG}_n$ which maps to the elements of partition $\mathrm{V}_n\cup\mathrm{W}_n.$ Consider the map $\widetilde{\phi}_{n}: \left[\frac{i}{q_n},\frac{i+1}{q_n}\right) \times \T^1 \longrightarrow \left[\frac{i}{q_n},\frac{i+1}{q_n}\right) \times \T^1$ as following and extend it to the whole $\T^2$ as $\frac{1}{q_n}$-equivariantly. 
\begin{align} \widetilde{\phi}_n(\mathcal{I}_{i_1,i_2}^n)= \mathcal{V}_{j_1,j_2,j_3}^n \ \ \ &\text{where} \ \ \ j_1=\left\lfloor{\frac{i_1}{s_n}}\right\rfloor, \ j_2=i_2,\ j_3= i_1\mod s_n, \\ \widetilde{\phi}_n(\mathcal{J}_{i_1',i_2'}^{n,k})= \mathcal{W}_{j_1',j_2'}^{n,k}\ \ \ &\text{where}\ \ \ j_1'= \left\lfloor{\frac{i_1'}{s_n}}\right\rfloor, j_2' = \begin{cases} i_2'.s_n+i_1'\mod{s_n} \ &{\text{for}} \ 2\leq k\leq n\\ i_1'\mod{s_n}\ &{\text{for}}\ k=1 \ \& \ i_2'=0\\ s_n + i_1'\mod{s_n}\ &{\text{for}}\ k=1 \ \& \ i_2'=1 \end{cases} \end{align} \begin{figure}[ht]\label{fig:02} \centering \includegraphics[width=.7\textwidth]{ThmC_curlvariable.png} \caption{An example of action $\widetilde{\phi}_n$ on the elements of $\mathrm{G}_n\cup\mathrm{NG}_n$ for $n=2.$ } \end{figure} Indeed, the map $\widetilde{\phi}_n$ is a measure preserving map on the $\T^2$ and can be better understood by following rectangles as \begin{align} &\widetilde{\phi}_n\left(\left[ \frac{i}{q_n},\frac{i+1}{q_n}\right) \times I_l^n \right) = \left[ \frac{i}{q_n} + \frac{l}{3^nq_n},\frac{i}{q_n}+ \frac{l+1}{3^nq_n}\right) \times \T^1 \ \ \label{eqn:6a}\\ &\widetilde{\phi}_n\left(\left[ \frac{i}{q_n},\frac{i+1}{q_n}\right) \times J_l^k \right) = \left[ \frac{i}{q_n} + \frac{2^k}{3^kq_n},\frac{i}{q_n}+ \frac{2^k}{3^kq_n}+ \frac{2^{k-1}}{3^{k}q_n}\right) \times \left(\frac{l}{2^{k-1}} , \frac{l+1}{2^{k-1}}\right) \ ; \ 2\leq k\leq n \label{eq:6b}\\ &\widetilde{\phi}_n\left(\left[ \frac{i}{q_n},\frac{i+1}{q_n}\right) \times \left(\frac{1}{3},\frac{1}{2}\right) \right) = \left[ \frac{i}{q_n} + \frac{2}{3q_n},\frac{i}{q_n}+ \frac{2}{3q_n}+ \frac{1}{3q_n}\right) \times \left(0 , \frac{1}{2}\right) \label{eq:6c} \\ &\widetilde{\phi}_n\left(\left[ \frac{i}{q_n},\frac{i+1}{q_n}\right) \times \left(\frac{1}{2},\frac{2}{3}\right) \right) = \left[ \frac{i}{q_n} + \frac{2}{3q_n},\frac{i}{q_n}+ \frac{2}{3q_n}+ \frac{1}{3q_n}\right) \times \left(\frac{1}{2} , 1 \right)\label{eq:6d} \end{align} \remark Observe that, in (\ref{eqn:6a}), $\widetilde{\phi}_n$ takes very thin horizontal strip $\mathcal{I}_{l}^n= \T^1\times I_l^n$ and distributes it in the vertical direction all over the torus periodically, which will allow us to obtain generic points whose orbits are uniformly distributed all over the torus. Also, note that the measure of such a set ,containing generic points, is zero. Whereas in (\ref{eq:6b}), (\ref{eq:6c}) and (\ref{eq:6d}), $\widetilde{\phi}_n$ take $\mathcal{J}_l^k= \T^1\times J_l^k$ and distributes it such that it remain within the region $\left(\frac{l}{2^{k-1}} , \frac{l+1}{2^{k-1}}\right)$, which produces the non-generic points, see Figure {\ref{fig:02}}. We can extend this map to a smooth map $\widetilde{\phi}_n:\T^2\longrightarrow\T^2$ as $\frac{1}{q_n}$ equivariantly. Using the fact that any permutation map defined on the torus can be well approximated by a smooth map that preserves the same combinatorics of the permutation inside the torus and acts as an identity on the boundary of $\T^2$. This assertion builds upon the lemma ({\ref{lem:01}}) that there is $C^{\infty}$ measure-preserving map that rotates the disc of radius $R-\delta$ inside $[0,1]\times[0,1]$ by an angle $\pi$ and which is identically equal to zero in an arbitrarily small neighbourhood of the disc of radius $R$, and acts as an identity on the boundary of $[0,1]\times[0,1]$. Hence any permutation $\sigma$ can be written as a composition of transposition(rotation). 
Therefore the smooth maps can closely approximate each transposition by choosing a small enough $\delta$ in the above lemma. The analogous result has been used in \cite{FW1}, \cite{FS} and \cite{FSW}. Let's denote $\overline{\phi}_{n}$ to be the smooth diffeomorphism obtained by the permutation map $\widetilde{\phi}_{n}$ on $\mathbb{T}^2$. \subsubsection{The conjugation map \texorpdfstring{$h_n$}{Lg}} Here we define our final conjugation diffeomorphism as \begin{align} h_n= \overline{\phi}_n\circ P_n,\label{eq:6i} \end{align} where $\overline{\phi}_n$ is the smooth approximation of the map $\widetilde{\phi}_n$ and the diffeomorphism $P_n$ from section{\ref{sec:3a}} with the smooth map $\kappa_n: \T^1\longrightarrow [0,1]$. In this specific situation, we choose $\tilde{\kappa_n}:\left[0,\frac{1}{s_nq_n}\right]\longrightarrow\T^1$ defined as \begin{equation}\label{eqn:1.2} \tilde{\kappa_{n}}(x) = \begin{cases} \frac{\delta_n2q_ns_n}{n^2}(x) \ \ \ &,x\in [0,\frac{1}{2s_nq_n}) \\ -\frac{\delta_n2q_ns_n}{n^2}(x)+\frac{2\delta_n}{n^2} \ \ \ &,x\in [\frac{1}{2s_nq_n},\frac{1}{s_nq_n}] \end{cases} \end{equation} where $\delta_n= \frac{1}{e^{3^n}}$. Now, extend this map $\tilde{\kappa_n}$ periodically with period $\frac{1}{s_nq_n}$ on $\T^1$ and choose $\kappa_n$ to be the smooth approximation of $\tilde{\kappa_n}$ on $\T$ by Weierstrass Approximation Theorem. \remark The map $P_n$ ensures control of all the orbits, such that no whole orbit of a point is trapped inside the error set, which would guarantee that there are no other generic points w.r.t. to $\mu$ measure outside the set $\mathrm{B}$ and no other non-generic points outside the set $\mathrm{NB}$. But, this is not the case in theorem A, where we don't care about the number of generic points. \remark Note that $h_n\circ S_{\alpha_n}= S_{\alpha_n} \circ h_n,$ since both the maps $\overline{\phi}_n$ and $P_n$ commute with $S_{\alpha_n}$ by construction. \subsubsection{Convergence and Estimates} To exclude the region where we don't have control over the combinatorics, we consider a subset $E_n$ of $\T^1$ as \begin{align} E_n= \left(\bigcup_{i=0}^{s_nq_n-1}\left[\frac{i}{s_nq_n}-\frac{\epsilon_n'}{2},\frac{i}{s_nq_n}+\frac{\epsilon_n'}{2}\right] \times \T^1\right) \bigcup \left(\bigcup_{l=0}^{3^n-1} \T^1 \times \left[\frac{l}{3^n}-\frac{\epsilon_n'}{2}, \frac{l}{3^n}+\frac{\epsilon_n'}{2}\right]\right), \end{align} where $\epsilon_n'$ is chosen such that $\mu(E_n)< \frac{1}{e^{3^n}}.$ Denote the set $F_n= \T^2\backslash E_n$ such that $\mu(F_n)> 1-\frac{1}{e^{3n}}.$\\ Hereby we introduce the following collection of sets that corresponds to ``trapping generic zones" and ``trapping nongeneric zones" respectively (for $i_1=0,1,\ldots,q_ns_n-1$), \begin{align} \mathcal{X}_{i_1,t_1}^n &= P_n^{-1}\left(\mathcal{I}_{i_1,t_1}^n\bigcap F_n \right), \ \ \ \ t_1= 0,1,\ldots,2^{n}-1 \label{eq:6f}\\ \mathcal{Y}_{i_1,t_2}^{n,k} &= P_n^{-1}\left(\mathcal{J}_{i_1,t_2}^{n,k} \bigcap F_n \right),\ \ \ \ t_2= 0,1,\ldots,2^{n-1}-1, \ \ 1\leq k\leq n. 
\label{eq:6g} \end{align} \begin{lemma} For any $x\in \T^1\times I_{t_1}^n,$ for $t_1=0,1,\ldots,2^{n}-1,$ the orbit $\{S_{\alpha_{n+1}}^k(x)\}_{k=0}^{q_{n+1}-1}$ meets every set $\mathcal{X}_{i_1,t_1}^n,$ for any $ i_1= 0,1,\ldots,s_nq_n-1.$ Moreover, the number of iterates of orbit lie in every set $\mathcal{X}_{i_1,t_1}^n$ is at least $\left(1- \frac{2}{n^2}\right)\frac{q_{n+1}}{3^ns_nq_n}.$ \end{lemma} \begin{proof} Fix any $x\in \T^1\times I_{t_1}^n,$ the orbit of $x$ under the circle action $S_{\alpha_{n+1}}^k$, say $\mathcal{O}^x$, is equidistributed along $\T^1\times I_{t_1}^n$ because the sequence $\{k\alpha_{n+1}\}_{k=0}^{q_{n+1}-1}$ is equidistributed along $\T^1.$ In particular, $\mathcal{O}^x$ is equidistributed along the elements $ \mathcal{I}_{i_1,t_1}^n= \left[\frac{i_1}{s_nq_n},\frac{i_1+1}{s_nq_n}\right) \times I_{t_1}^n$ for every $i_1=0,1,\ldots,s_nq_n-1.$ Note that $$\left[\frac{i_1}{s_nq_n}+\frac{\epsilon_n'}{2},\frac{i_1+1}{s_nq_n}-\frac{\epsilon_n'}{2}\right] \times \left[\frac{t_1}{3^n}+\frac{\epsilon_n'}{2}, \frac{t_1+1}{3^n}-\frac{\epsilon_n'}{2}\right]\subset \mathcal{I}_{i_1,t_1}^n\bigcap F_n.$$ The map $P_n$ acts as vertical translation on $\T^2,$ and with the choice of $\kappa_n$ function, the net translation caused by the section $\left[\frac{i_1}{s_nq_n}+\frac{\epsilon_n'}{2},\frac{i_1+1}{s_nq_n}-\frac{\epsilon_n'}{2}\right]$ inside the section $\left[\frac{t_1}{3^n}+\frac{\epsilon_n'}{2}, \frac{t_1+1}{3^n}-\frac{\epsilon_n'}{2}\right]$ is almost $\frac{\delta_n}{n^2s_n}$. Due to $\frac{\delta_n}{n^2} <\frac{1}{n^23^n}$, we can estimate \begin{align} \mu(\mathcal{X}_{i_1,t_1}^n\cap \mathcal{I}_{i_1,t_1}^n) &\geq (1-2\epsilon_n') \frac{\left|\left[\frac{t_1}{3^n}+\frac{\delta_n}{n^2}, \frac{t_1}{3^n}+\frac{1}{3^n}\right]\right|}{s_nq_n} \nonumber \\ &\geq (1-2\epsilon_n')\left(1-\frac{3^n\delta_n}{n^2}\right)\frac{1}{3^ns_nq_n} \nonumber\\ &\geq \left(1-\frac{2.3^n\delta_n}{n^2}\right)\frac{1}{3^ns_nq_n} \geq \left(1-\frac{2}{n^2}\right)\frac{1}{3^ns_nq_n} \end{align} Hence, at least $\left(1-\frac{2}{n^2}\right)\frac{q_{n+1}}{3^ns_nq_n}$ number of elements are trapped inside the orbit $\mathcal{O}^x$. \end{proof} \remark {\label{rem:6a}}Recall that the image of $\mathcal{X}_{i_1,i_2}^n$, under the conjugation map $h_{n},$ contained inside $\mathcal{V}_{\lfloor{\frac{i_1}{s_n}}\rfloor, i_2, i_1\mod{s_n}}^n$ and conversely, $\mathcal{V}_{i_1,i_2,i_3}^n$ is uniquely mapped onto $\mathcal{X}_{i_1.s_n+i_3,i_2}^n.$ By the above estimate, the number of iterates $ k\in \{0,1,\ldots,q_{n+1}-1\}$ such that $h_n \circ S_{\alpha_{n+1}}^{k}(x) \in \mathcal{V}_{i_1,i_2,i_3}$ for $x\in \T^1\times I_{t_1}^n$ is at least $\left(1-\frac{2}{n^2}\right)\frac{q_{n+1}}{3^ns_nq_n}. $ \remark {\label{rem:6b}} Note that under the action of $h_n$, every element from $\mathrm{NG}_n$ transform as $(\text{for} \ i_2= 0,1,\ldots,2^{n-1}-1)$, \begin{align} h_n\left(\bigcup\limits_{i_1=0}^{s_nq_n-1}\mathcal{Y}_{i_1,i_2}^{n,k}\right)&= \bigcup\limits_{i_1=0}^{s_nq_n-1}\overline{\phi}_n(\mathcal{J}_{i_1,i_2}^{n,k}\cap F_n)\subseteq \T^1 \times \left[\frac{i_2}{2^{k-1}},\frac{i_2+1}{2^{k-1}}\right); \ \ 2\leq k \leq n \\ h_n\left(\bigcup\limits_{i_1=0}^{s_nq_n-1}\mathcal{Y}_{i_1,t}^{n,1}\right)&= \bigcup\limits_{i_1=0}^{s_nq_n-1}\overline{\phi}_n(\mathcal{J}_{i_1,t}^{n,1}\cap F_n) \subseteq \T^1 \times \left[\frac{t}{2},\frac{t+1}{2}\right); \ t=0,1 . 
\end{align} \begin{prop}\label{pr:2a} For $\epsilon>0$, consider a $(\frac{\sqrt{2}}{q_{n}}, \epsilon)$-uniformly continuous function $\psi : \mathbb{T}^2 \longrightarrow \mathbb{R}$, i.e. $\psi(B_\frac{\sqrt{2}}{q_{n}}{(x)})\subset B_{\epsilon}(\psi(x))$. Then any $x\in \mathrm{G}_n$ satisfies the following estimate: \begin{align} \left|\frac{1}{q_{n+1}} \sum_{i=0}^{q_{n+1}-1} \psi(h_n \circ S_{\alpha_{n+1}}^{i}(x)) - \int_{\T^2} \psi d\mu \right| \leq 4\epsilon + \frac{2}{n^2}\|\psi\|_0 \label{eq:6e} \end{align} \end{prop} \begin{proof} Fix any $x\in \mathrm{G}_n$ and any element $\Delta_{i_1,i_2}^n \in \mathcal{D}_n$ (see (\ref{eq:5.3a})). Precisely, $x\in \T^1 \times I_{l}^n$ for some $l$. Since the orbit of $x$ under $S_{\alpha_{n+1}}$ is almost trapped by the domains $\{\mathcal{X}_{t_1,t_2}^n\}$, there exists $i_0\in \mathbb{N}$ such that $S_{\alpha_{n+1}}^{i_0}(x) \in \mathcal{X}_{i_1 s_n+i_2,l}^n.$ Applying $h_n$, by (\ref{eqn:1.2}) and Remark \ref{rem:6a}, we have $$h_n\circ S_{\alpha_{n+1}}^{i_0}(x) \in \mathcal{V}_{i_1,l,i_2}^n \subset \Delta_{i_1,i_2}^n.$$ Therefore for any $y\in \Delta_{i_1,i_2}^n$, we conclude $$d(h_n \circ S_{\alpha_{n+1}}^{i_0}(x),y )\leq \mathrm{diam}(\Delta_{i_1,i_2}^n)\leq \sqrt{2}/q_n.$$ Now using the hypothesis on $\psi$, we have $|\psi(h_n \circ S_{\alpha_{n+1}}^{i_0}(x))- \psi(y)|< 2\epsilon.$ Taking the average over all $y \in \Delta_{i_1,i_2}^n$ in the last inequality, we get $$\left|\psi(h_n \circ S_{\alpha_{n+1}}^{i_0}(x) )-\frac{1}{\mu(\Delta_{i_1,i_2}^n)} \int_{\Delta_{i_1,i_2}^n} \psi(y) d\mu\right|< 2\epsilon.$$ Let us denote $J_{\Delta} = \{ k\in \{0,1,\ldots,q_{n+1}-1\}\ : h_n \circ S_{\alpha_{n+1}}^{k}(x) \in \Delta \}$ for all $\Delta\in \mathcal{D}_n,$ where $\mathcal{D}_n$ is defined by (\ref{eq:5.3a}). Using the count estimate described in Remark \ref{rem:6a} and the triangle inequality in the last inequality, we have \begin{align} \left|\frac{1}{q_{n+1}} \sum_{i \in J_{\Delta}} \psi(h_n \circ S_{\alpha_{n+1}}^{i}(x) ) - \int_{\Delta_{i_1,i_2}^n} \psi d\mu\right| < \left(4\epsilon+ \frac{2}{n^2}\|\psi\|_0\right)\mu(\Delta_{i_1,i_2}^n) \end{align} Finally, following the analogous estimation as in Proposition \ref{pr:1a}, we obtain the estimate (\ref{eq:6e}) as required. \end{proof} \begin{lemma}\label{le:6a} The sequence of diffeomorphisms $T_{n} = H_{n} \circ S_{\alpha_{n+1}}\circ H_{n}^{-1}$, where $H_n=h_1\circ h_2\circ\ldots\circ h_n$ with $h_n$ defined by (\ref{eq:6i}) and $\alpha_{n+1}$ converging to a Liouvillian number, converges to some $T\in \text{Diff}^{\infty}(\mathbb{T}^2, \mu)$ in the $C^{\infty}$ topology. Moreover, for any $m\leq q_{n+1},$ we have \begin{equation}\label{eq:6.24} d_0(T^m,T_{n}^m) \leq \frac{1}{2^{n+1}}. \end{equation} \end{lemma} \subsubsection{Proof of Theorem C} \begin{proof} Let us fix a countable set of Lipschitz functions $\Psi = \{ \psi_i\}_{i\in \mathbb{N}}$ which is dense in $C^0(\mathbb{T}^2,\mathbb{R}).$ Denote by $L_n$ a uniform Lipschitz constant for $\psi_1,\psi_2,\ldots,\psi_n.$ Choose $q_{n+1}= l_{n}k_nq_n^2$ large enough by choosing $l_n$ arbitrarily large such that it satisfies: \begin{align} l_n> n^2\cdot ||DH_{n-1}||_{n-1} \max_{0\leq i\leq n} L_n.
\label{eq:6.1c} \end{align} The latter assumption guarantees the convergence of the sequence of diffeomorphisms $\{T_n\}$ and implies that $\psi_1H_{n-1},\psi_2H_{n-1},\ldots,\psi_nH_{n-1}$ are $(\frac{\sqrt{2}}{q_n}, \frac{1}{n^2})$-uniformly continuous.\\ {\it{ Claim 1: Every point inside the set $\mathrm{B}= \liminf\limits_{n\rightarrow\infty} \mathrm{B}_n$ is a generic point, where $\mathrm{B}_n= H_n(\mathrm{G})$.}}\\ Let $y\in \mathrm{B}$, i.e. $y\in \mathrm{B}_n$ for all but finitely many $n.$ Set $x_n=H_n^{-1}(y)\in \mathrm{G}\subset \mathrm{G}_n.$\\ Applying Proposition \ref{pr:2a} with $\epsilon = \frac{1}{n^2},$ $1\leq k\leq n$, to $x_n\in \mathrm{G}_n$ (see \ref{eq:6a}), we get \begin{align} \left|\frac{1}{q_{n+1}}\sum_{i=0}^{q_{n+1}-1} \psi_k(H_{n}S_{{\alpha}_{n+1}}^i x_n) - \int_{\mathbb{T}^2} \psi_kH_n d\mu \right| < \frac{2}{n^2} ||\psi_k||_0 + \frac{4}{n^2}. \end{align} Using the fact that $H_{n}$ is an area-preserving smooth diffeomorphism with $H_{n}(x_n)= y,$ together with the convergence estimate (\ref{eq:6.24}) in the last inequality, we obtain for every $\psi_k\in \Psi$ $$\left|\frac{1}{q_{n+1}}\sum_{i=0}^{q_{n+1}-1} \psi_k(T^i y) - \int_{\mathbb{T}^2} \psi_k d\mu \right| < \frac{2}{n^2}||\psi_k||_0 + \frac{4}{n^2}+ \frac{1}{2^{n+1}}. $$ Using the triangle inequality we obtain that $y$ is a generic point for $\mu$ in the sense of (\ref{def:1a}), i.e. $$\lim_{N\longrightarrow \infty}\frac{1}{N} \sum_{i=0}^{N-1} \psi_k(T^iy)\longrightarrow \int_{\mathbb{T}^2} \psi_k d\mu. $$ Since $y\in \mathrm{B}$ was chosen arbitrarily, every point $ y\in \mathrm{B}$ is a generic point. \\ {\it{Claim 2: $\text{dim}_H(C)\leq \text{dim}_H(\mathrm{B}) \leq \text{dim}_H(\mathrm{G})= 1+ \frac{\log 2}{\log 3}$}}.\\ By construction, $H_n$ acts as the identity near the boundary of $\T^2$, implying that $\{0\}\times C \subseteq \mathrm{B}_n$ for all $n$. Hence $\{0\}\times C \subseteq \mathrm{B}$ and $\text{dim}_H(C)\leq \text{dim}_H(\mathrm{B})$. \\ The right-hand inequality holds by the chain $\text{dim}_H(\mathrm{B})\leq \text{dim}_H(\mathrm{B}_n)= \text{dim}_H(\mathrm{G})$, where the inequality holds by the containment $\mathrm{B}\subseteq \mathrm{B}_n$ and the equality holds by Lemma \ref{lem:3a}, since $H_n$ is a smooth diffeomorphism and $\mathrm{G}$ is a compact set. With the product rule of Hausdorff dimension (\ref{def:1b}), the fact $\text{dim}_H(C)= \frac{\log 2}{\log 3}$ and (\ref{eq:6a}), we have $\text{dim}_H(\mathrm{G})= 1+ \frac{\log 2}{\log 3}$.\\ {\it{Claim 3: Every point inside the set $\mathrm{NB}=\T^2\backslash \mathrm{B}=\limsup\limits_{n\rightarrow\infty} \mathrm{B}_n^c $ is a non-generic point.}}\\ With the convergence estimate (\ref{eq:6.24}) and the triangle inequality, it is enough to show for $y\in \mathrm{NB}$, $$\lim_{N\longrightarrow \infty}\frac{1}{N} \sum_{i=0}^{N-1} \phi(T_n^iy) \not\longrightarrow \int_{\mathbb{T}^2} \phi \, d\mu \ \ \text{for infinitely many } n \ \text{ and for some} \ \phi\in C^0(\mathbb{T}^2,[0,1]).$$ If $y\in \mathrm{NB}$, then for every $n_0\in \mathbb{N}$ there exists $n_1>n_0$ with $y\in \mathrm{B}_{n_1}^c$, where $\mathrm{B}_{n_1}^c=\T^2\backslash \mathrm{B}_{n_1}$. Set $x_{n_1}= H_{n_1}^{-1}(y)$. Therefore $x_{n_1}\in \mathrm{NG},$ i.e.
${x_{n_1}}\in \mathcal{J}_l^k \ \text{for some} \ l, k\in \mathbb{N} \ (\text{because} \ \mathrm{NG}=\sqcup_{k}\sqcup_{l} \mathcal{J}_l^k).$ Let us consider the continuous function $\phi_n = \pi_2\circ H_{n-1}^{-1}$ on $\T^2$; by Remark \ref{rem:6b} we deduce \begin{align} &\phi_{n_1}(T_{n_1}^i(y))= \pi_2\circ h_{n_1}\circ S_{\alpha_{n_1+1}}^i(x_{n_1}) \in \left[\frac{l}{2^{k-1}},\frac{l+1}{2^{k-1}}\right) \ \ \forall \ i \in \mathbb{N},\nonumber\\ \text{i.e.} \ &\left|\lim_{N\longrightarrow \infty}\frac{1}{N} \sum_{i=0}^{N-1} \phi_{n_1}(T_{n_1}^iy) - \int_{\mathbb{T}^2} \phi_{n_1} d\mu\right| \geq 1/2.\nonumber \end{align} $$\implies \forall n_0\in \mathbb{N},\ \text{there exists } n_1>n_0 : \lim_{N\longrightarrow \infty}\frac{1}{N} \sum_{i=0}^{N-1} \phi_{n_1}(T_{n_1}^iy) \not\longrightarrow \int_{\mathbb{T}^2} \phi_{n_1} d\mu.$$ This shows that for infinitely many $n$ the orbit $\{T^i_n(y)\}_{i=0}^{q_n-1}$ is not uniformly distributed along the whole torus. Since $y\in \mathrm{NB}$ was arbitrary, this completes the claim. \end{proof} \subsection{Proof of Theorem D}\label{sec:6.2a} Here we construct sets containing the generic points and realizing the prescribed values of the Hausdorff dimension. The sets can be constructed in a similar manner to the set $\mathrm{G}$ constructed in the last subsection (see \ref{eq:6a}). Therefore we only mention the relevant changes that need to be made.\\ For any $1<\alpha<2,$ consider a Cantor set $C_{\lambda}$ associated with the sequence $\lambda=\{\lambda_k\}_{k\in \mathbb{N}},$ where $\lambda_k= \frac{1}{c_0}(\frac{1}{k})^{\frac{1}{\alpha-1}},$ the constant $c_0= \sum_{k\in \mathbb{N}}\lambda_k,$ as explained in Section {\ref{sec:2.3a}}. First, replace the Cantor set $C$ with $C_{\lambda}$, $I_l^n$ with $I_{l,\lambda}^n$, and $J_l^n$ with $J_{l,\lambda}^n$ in $(\ref{eq:6a}),(\ref{eq:6.1.b}), (\ref{eqn:6a})$ and $(\ref{eq:6b})$ to get the following collection of disjoint subsets of $\T^2:$ $\T^2 = (\mathrm{G}_{\lambda} \cup \mathrm{NG}_{\lambda})$ where \begin{align} \mathrm{G}_{\lambda}&= \bigcap_{n\geq 1} \mathrm{G}_{n,\lambda} = \T^1 \times C_{\lambda} ,\ \ \text{where} \ \mathrm{G}_{n,\lambda} = \T^1 \times \bigcup_{l=0}^{2^{n}-1} I_{l,\lambda}^n , \label{eq:6.2a} \ \ \\ \mathrm{NG}_{\lambda} &= \bigcup_{n\geq 1} \mathrm{NG}_{n,\lambda} = \T^1 \times ([0,1]\backslash C_{\lambda}), \ \ \text{where} \ \ \mathrm{NG}_{n,\lambda} = \T^1 \times\bigcup_{k=0}^{n-1} \bigcup_{l=0}^{2^{k-1}-1} J_{l,\lambda}^k ,\label{eq:6.2b} \ \ \end{align} where $I_{l,\lambda}^n$ and $J_{l,\lambda}^n$ are intervals of $[0,1]$ as defined in Section {\ref{sec:2.3a}}. We split the interval $J_{0,\lambda}^1$ into two equal halves as $J_{0,\lambda}^1= \hat{J}_{0,\lambda}^1\cup \hat{J}_{1,\lambda}^1$.\\ Consider the following permutation map $\widetilde{\phi}_{n,\lambda}: \T^2 \longrightarrow \T^2$ which follows the same combinatorics as $\widetilde{\phi}_n$ from Section $\ref{sec:6a}$.
\begin{align} \widetilde{\phi}_{n,\lambda}\left(\left[ \frac{i}{q_n},\frac{i+1}{q_n}\right) \times I_{l,\lambda}^{n} \right) &= \left[ \frac{i}{q_n} + \sum_{k=0}^{l-1}\frac{|I_{k,\lambda}^n|}{q_n},\frac{i}{q_n}+ \sum_{k=0}^{l}\frac{|I_{k,\lambda}^{n}|}{q_n}\right) \times \T^1 \ \ \forall \ 0\leq l< 2^n \\ \widetilde{\phi}_{n,\lambda}\left(\left[ \frac{i}{q_n},\frac{i+1}{q_n}\right) \times J_{l,\lambda}^{k} \right) &= \left[ \frac{i}{q_n} + \sum_{j=0}^{2^k-1}\frac{|I_{j,\lambda}^{k}|}{q_n},\frac{i}{q_n}+ \sum_{j=0}^{2^k-1}\frac{|I_{j,\lambda}^{k}|}{q_n} + \sum_{j=0}^{2^{k-1}-1}\frac{ 2^{n-1}|J_{j,\lambda}^k|}{q_n}\right) \times \left(\frac{l}{2^{n-1}} , \frac{l+1}{2^{n-1}}\right); \nonumber \\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \forall \ 0\leq l< 2^{k-1}, \ \ 2\leq k\leq n, \\ \widetilde{\phi}_{n,\lambda}\left(\left[ \frac{i}{q_n},\frac{i+1}{q_n}\right) \times \hat{J}_{l,\lambda}^{1} \right) &= \left[ \frac{i}{q_n} + \sum_{j=0}^{1}\frac{|I_{j,\lambda}^{1}|}{q_n},\frac{i}{q_n}+ \sum_{j=0}^{1}\frac{|I_{j,\lambda}^{1}|}{q_n} + \frac{ 2|\hat{J}_{l,\lambda}^1|}{q_n}\right) \times \left(\frac{l}{2} , \frac{l+1}{2}\right) \ \ \forall \ l=0,1 \nonumber \end{align} Then the final conjugation map $h_n:\T^2\longrightarrow\T^2$ can be described as \begin{align} h_n= \overline{\phi}_{n,\lambda}\circ P_n \end{align} where $\overline{\phi}_{n,\lambda}$ is a smooth approximation of the map $\widetilde{\phi}_{n,\lambda}$ and $P_n$ is the diffeomorphism as before, with the same smooth map $\kappa_n:\T^1\longrightarrow[0,1]$ from $(\ref{eqn:1.2})$, now with $\delta_n=\lambda_{2^{n+1}}$. To exclude the region where we do not have control over the combinatorics, we consider a subset $E_n$ of $\T^2$ as \begin{align} E_n= \left(\bigcup_{i=0}^{s_nq_n-1}\left[\frac{i}{s_nq_n}-\frac{\epsilon_n'}{2},\frac{i}{s_nq_n}+\frac{\epsilon_n'}{2}\right] \times \T^1\right)\bigcup \left( \bigcup_{l=0}^{2^n-1} \T^1 \times \left[I_{l,\lambda}^n-\frac{\epsilon_n'}{2},I_{l,\lambda}^n +\frac{\epsilon_n'}{2}\right]\right) \end{align} where $\epsilon_n'$ is chosen such that $\mu(E_n)< \frac{1}{e^{3^n}}.$ Denote the set $F_n= \T^2\backslash E_n$, so that $\mu(F_n)> 1-\frac{1}{e^{3^n}}.$\\ Analogously, we consider the specific domains as in (\ref{eq:6f}). Using $\frac{\delta_n}{n^2}\leq |I_{l,\lambda}^n|,$ for all $l=0,1,\ldots,2^{n}-1,$ we obtain the following result, analogous to Lemma \ref{le:6a} and Proposition \ref{pr:2a}. \begin{prop}\label{pr:6.2a} For $\epsilon>0$, consider a $(\frac{\sqrt{2}}{q_{n}}, \epsilon)$-uniformly continuous function $\psi : \mathbb{T}^2 \longrightarrow \mathbb{R}$, i.e. $\psi(B_\frac{\sqrt{2}}{q_{n}}{(x)})\subset B_{\epsilon}(\psi(x))$. Then any $x\in \mathrm{G}_{n,\lambda}$ satisfies the following estimate: \begin{align} \left|\frac{1}{q_{n+1}} \sum_{i=0}^{q_{n+1}-1} \psi(h_n \circ S_{\alpha_{n+1}}^{i}(x)) - \int_{\T^2} \psi d\mu \right| \leq 4\epsilon + \frac{2}{n^4}\|\psi\|_0 \end{align} \end{prop} The proof of Theorem D follows along the same lines as the proof of Theorem C. We start by choosing $L_n$ to be a uniform Lipschitz constant and $q_{n+1} = l_nq_n^2$, where $l_n$ satisfies (\ref{eq:6.1c}). Now it is enough to show that every point inside $\mathrm{B}_{\lambda}= \liminf_{n\rightarrow\infty}\mathrm{B}_{n,\lambda}$, where $\mathrm{B}_{n,\lambda}= H_n(\mathrm{G}_{\lambda})$, is a generic point, and its Hausdorff dimension lies between $\alpha-1$ and $\alpha$.
The latter fact is followed by using proposition (\ref{pr:6.2a}) as done in claim 2, and $\text{dim}_H(C_{\lambda})= \alpha-1 $ and $\text{dim}_H(\mathrm{G}_{\lambda})= \alpha$ followed by (\ref{eq:6.1d}) and (\ref{def:1b}). \\ In our specific case, the same relations as mentioned in remark {\ref{rem:6b}} are satisfied, and hence, it shows that every point inside the $\mathrm{NB}_{\lambda}=\mathbb{T}^2\backslash{\mathrm{B}_{\lambda}}$ is a non-generic point. This completes the proof. \subsection{Proof of Theorem E:} To prove the theorem, we divide $\T^2$ into two disjoint subsets where one subset supports an ergodic measure, and the other subset has measure zero, and its Hausdorff dimension is less than $\alpha$, which contains all non-generic points. For that, we follow a similar construction for the map $T\in \text{Diff}^{\infty}(\T^2,\mu)$ as done in the proof of theorem D. Hereby, we present the modification in the combinatorics of the elements of $\T^2= \mathrm{G}_{\lambda}\cup\mathrm{NG}_{\lambda,}$ which allows us to prove set $\mathrm{G}_{\lambda}$ by (\ref{eq:6.2a}) and set $\mathrm{NG}_{\lambda}$ by (\ref{eq:6.2b}) traps only non-generic points and generic points, respectively.\\ Consider the following permutation map $\widetilde{\phi}_{n,\lambda}: \T^2 \longrightarrow \T^2$, in place of $\widetilde{\phi}_{n,\lambda}$ from section $\ref{sec:6.2a}$, which follows the required combinatorics as (for $i= 0,1,\ldots,q_n-1$, \ $l= 0,1,\ldots,2^{n}-1$ and $k\leq n$), \begin{align} \widetilde{\phi}_{n,\lambda}\left(\left[ \frac{i}{q_n},\frac{i+1}{q_n}\right) \times I_{l,\lambda}^{n} \right) &= \left[ \frac{i}{q_n} + \sum_{k=1}^{n}\sum_{j=0}^{2^{k-1}-1}\frac{ |J_{j,\lambda}^k|}{q_n}, \ \frac{i}{q_n}+ \sum_{k=1}^{n}\sum_{j=0}^{2^{k-1}-1}\frac{ |J_{j,\lambda}^k|}{q_n}+ \sum_{k=0}^{2^{n}-1}\frac{2^{n}|I_{k,\lambda}^{n}|}{q_n}\right) \times \left(\frac{l}{2^{n}}, \frac{l+1}{2^n}\right) \nonumber\\ \widetilde{\phi}_{n,\lambda}\left(\left[ \frac{i}{q_n},\frac{i+1}{q_n}\right) \times J_{l,\lambda}^{k} \right) &= \left[ \frac{i}{q_n} + \sum_{k'=1}^{k-1}\sum_{j=0}^{2^{k'-1}-1}\frac{ |J_{j,\lambda}^{k'}|}{q_n}+\sum_{j=0}^{l-1}\frac{ |J_{j,\lambda}^{k}|}{q_n}, \ \frac{i}{q_n} + \sum_{k'=1}^{k-1}\sum_{j=0}^{2^{k'-1}-1}\frac{ |J_{j,\lambda}^{k'}|}{q_n}+\sum_{j=0}^{l}\frac{ |J_{j,\lambda}^k|}{q_n}\right) \times \T^1 \nonumber \end{align} \remark Recall that $|J_{l,\lambda}^k|= \lambda_{2^{k-1}+l-1}$ for all $k\leq n$ and $|I_{l,\lambda}^{n}|= \sum_{n=k}^{\infty}\sum_{j=l2^{n-k}}^{(l+1)2^{n-k}-1}\lambda_{2^{n}+j}.$ Refer to Figure (\ref{fig:03}) for an illustration of the combinatorics. \begin{figure}[ht] \centering \includegraphics[width=0.75\textwidth]{ThmE_curlvariable.png} \caption{An example of action $\widetilde{\phi}_{n,\lambda}$ on the elements of $\mathrm{G}_n\cup\mathrm{NG}_n$ for $n=2.$} \label{fig:03} \end{figure} Following the analogous construction from section {\ref{sec:6.2a}}, we reduce to the following proposition for the elements of $\mathrm{NG}_{\lambda}$ and $\mathrm{G}_{\lambda},$ which is sufficient to prove the required property. \begin{prop}\label{pr:6.3a} \begin{enumerate} \item For $\epsilon>0$, consider $(\frac{\sqrt{2}}{q_{n}}, \epsilon)$-uniformly continuous function $\psi : \mathbb{T}^2 \longrightarrow \mathbb{R}$, i.e. $\psi(B_\frac{\sqrt{2}}{q_{n}}{(x)})\subset B_{\epsilon}(\psi(x))$. 
Then any $x\in \mathrm{NG}_{n,\lambda}$ satisfies the following estimate: \begin{align} \left|\frac{1}{q_{n+1}} \sum_{i=0}^{q_{n+1}-1} \psi(h_n \circ S_{\alpha_{n+1}}^{i}(x)) - \int_{\T^2} \psi d\mu \right| \leq 4\epsilon + \frac{1}{2^{n(\alpha-1)}}\|\psi\|_0 \end{align} \item Every element $\T^1\times I_{l,\lambda}^n \in \mathrm{G}_{n,\lambda}$ satisfies \begin{align} h_n(\T^1\times I_{l,\lambda}^n)&\subset \T^1 \times \left[\frac{l}{2^n},\frac{l+1}{2^n}\right) \end{align} \end{enumerate} \end{prop} \remark Here the set $\mathrm{B}_{\lambda}= \liminf_{n\rightarrow\infty}H_n({\mathrm{G}_{\lambda}})$ contains the non-generic points of the map $T$ and satisfies $\alpha-1 \leq \dim_H(\mathrm{B}_{\lambda})\leq \alpha$ (see Theorem D) for the chosen sequence $\lambda=\{\lambda_k\}_{k\in \N}$ defined by $\lambda_k= \frac{1}{c_0}(\frac{1}{k})^{\frac{1}{\alpha-1}},$ with the constant $c_0= \sum_{k\in \mathbb{N}}\lambda_k$. \subsection{Future Directions} \begin{enumerate} \item Can we choose a set $\mathrm{B}$ containing all the generic points such that $\text{dim}_H(\mathrm{B})= \alpha$ for all $0<\alpha<2$? \item Can we choose a generic set $\mathrm{B}$ of type $C\times C$, where $C$ is a Cantor set on the unit interval, in the above setup of Theorem C? \item Can we generalize Theorem C to the 3-dimensional torus with a choice of generic set of type \begin{itemize} \item $\mathrm{B}= \mathbb{T}^1\times C\times C.$ If this is true, the result generalizes to the $n$-dimensional torus. \item In fact, can we choose the set $A= \mathbb{T}^1\times$ ``2D fractal'', where the 2D fractal is not necessarily a product of two sets of the type $C \times C$? \end{itemize} \end{enumerate} \begin{center} \large{Acknowledgement} \end{center} The author would like to thank P.~Kunde and S.~Banerjee for suggesting the problem and for valuable discussions which helped to develop the ideas put forward. The work was supported by the University Grants Commission (UGC-JRF), India. \bibliographystyle{plain}
\section{Introduction}\ \\[-3mm] The refined analytic torsion, defined by M. Braverman and T. Kappeler in [BK1] and [BK2] on closed manifolds, can be viewed as a refinement of the Ray-Singer torsion, since it is a canonical choice of an element with Ray-Singer norm one, in case of unitary representations. \\[3mm] The complex phase of the refinement is given by the rho-invariant of the odd-signature operator. Hence one can expect the refined analytic torsion to give more geometric information than the Ray-Singer torsion. \\[3mm] This is indeed the case in the setup of lens spaces with explicit formulas for the associated Ray-Singer torsion and eta-invariants, see [RH, Section 5] and the references therein. There it is easy to find explicit examples of lens spaces which are not distinguished by the Ray-Singer torsion, however have different rho-invariants of the associated odd-signature operators. \\[3mm] An important property of the Ray-Singer torsion norm is its gluing property, as established by W. L\"{u}ck in [L\"{u}] and S. Vishik in [V]. It is natural to expect a refinement of the Ray-Singer torsion to admit an analogous gluing property. \\[3mm] Unfortunately there seems to be no canonical way to extend the construction of Braverman and Kappeler to compact manifolds with boundary. In particular a gluing formula seems to be out of reach. \\[3mm] We propose a different refinement of analytic torsion, similar to Braverman and Kappeler, which does apply to compact manifolds with and without boundary. In the subsequent publication [BV4] we establish a gluing formula for our construction, which in fact can also be viewed as a gluing law for the original definition of refined analytic torsion by Braverman and Kappeler. \\[3mm] The presented construction is analogous to the definition in [BK1] and [BK2], but applies to any smooth compact Riemannian manifold, with or without boundary. For closed manifolds the construction differs from the original definition in [BK2]. Nevertheless we still refer to our concept as "refined analytic torsion" within the present discussion. \\[3mm] {\bf Acknowledgements.} The results of this article were obtained during the author's Ph.D. studies at Bonn University, Germany. The author would like to thank his thesis advisor Prof. Matthias Lesch for his support and useful discussions. The author was supported by the German Research Foundation as a scholar of the Graduiertenkolleg 1269 "Global Structures in Geometry and Analysis". \section{Motivation for the generalized construction}\ \\[-3mm] The essential ingredient in the definition of the refined analytic torsion in [BK2] is the twisted de Rham complex with a chirality operator and the elliptic odd-signature operator associated to the complex, viewed as a map between the even forms. Hence in the case of a manifold with boundary we are left with the task of finding elliptic boundary conditions for the odd-signature operator which preserve the complex structure and provide a Fredholm complex, in the sense of [BL1]. \\[3mm] The notions of a Hilbert and a Fredholm complex were studied systematically in [BL1] and will be provided for convenience in the forthcoming section. The boundary conditions, that give rise to a Hilbert complex are referred to as "ideal boundary conditions". It is important to note that the most common self-adjoint extensions of the odd-signature operator between the even forms do not come from ideal boundary conditions. 
\\[3mm] The existence and explicit determination of elliptic boundary conditions for the odd-signature operator between the even forms, arising from ideal boundary conditions, is an open question. However, it is clear that the absolute and relative boundary conditions do not satisfy these requirements. \\[3mm] On the other hand the gluing formula in [V] and [L\"{u}] for the Ray-Singer torsion makes essential use of the relative and absolute boundary conditions. Since the establishment of a corresponding gluing formula for the refined analytic torsion is a motivation for our discussion, these boundary conditions seem to be natural choices. \\[3mm] We are left with a dilemma, since neither the relative nor the absolute boundary conditions are invariant under the Hodge operator. We resolve this dilemma by combining the relative and absolute boundary conditions. This allows us to apply the concepts of [BK2] in a new setting and to establish the desired gluing formula. \section{Definition of Refined analytic torsion}\label{explicit-unitary}\ \\[-3mm] Let $(M^m, g^M)$ be a smooth compact connected odd-dimensional oriented Riemannian manifold with boundary $\partial M$, which may be empty. Let $(E, \nabla, h^E)$ be a flat complex vector bundle with any fixed Hermitian metric $h^E$, which need not to be flat with respect to $\nabla$. \\[3mm] The flat covariant derivative $\nabla$ is a first order differential operator $$\nabla: \Gamma (E) \rightarrow \Gamma (T^*M\otimes E),$$ satisfying the Leibniz rule $$\nabla_X(fs)=(Xf)s+f\nabla_Xs, \quad s \in \Gamma (E), X \in \Gamma (TM), f \in C^{\infty}(M).$$ The covariant derivative $\nabla$ extends by the Leibniz rule to the twisted exterior differential $\nabla: \Omega^k_0(M, E)\to \Omega^{k+1}_0(M, E)$ on $E-$valued differential forms with compact support in the interior of the manifold $\Omega^k_0(M,E)$. The exterior differential satisfies the (generalized) Leibniz rule \begin{align*} \nabla_X(w\wedge \eta)=(\nabla_X w)\wedge \eta +(-1)^pw\wedge \nabla_X\eta, \end{align*} for any $w \in \Omega^p_0(M), \eta \in \Omega^q_0(M,E), X\in \Gamma (TM)$. \\[3mm] Due to flatness of $(E,\nabla)$ the twisted exterior differential gives rise to the twisted de Rham complex $(\Omega^*_0(M,E), \nabla)$. The metrics $g^M, h^E$ induce an $L^2-$inner product on $\Omega^*_0(M,E)$. We denote the $L^2-$completion of $\Omega^*_0(M,E)$ by $L^2_*(M,E)$. \\[3mm] Next we introduce the notion of the dual covariant derivative $\nabla'$. It is defined by requiring: \begin{align}\label{dual-connection} dh^E(u,v)[X]=h^E(\nabla_Xu,v)+h^E(u,\nabla'_Xv), \end{align} to hold for all $u,v \in C^{\infty}(M,E)$ and $X \in \Gamma(TM)$. In the special case that the Hermitian metric $h^E$ is flat with respect to $\nabla$, the dual $\nabla'$ and the original covariant derivative $\nabla$ coincide. More precisely the Hermitian metric $h^E$ can be viewed as a section of $E^*\otimes E^*$. The covariant derivative $\nabla$ on $E$ gives rise to a covariant derivative on the tensor bundle $E^*\otimes E^*$, also denoted by $\nabla$ by a minor abuse of notation. \\[3mm] For $u,v,X$ as above one has: $$\nabla h^E(u,v)[X]=dh^E(u,v)[X]-h^E(\nabla_{X}u,v)-h^E(u,\nabla_{X}v).$$ In view of \eqref{dual-connection} we find $$\nabla h^E=0 \ \Leftrightarrow \nabla=\nabla'.$$ As before, the dual $\nabla'$ gives rise to a twisted de Rham complex. Consider the differential operators $\nabla, \nabla'$ and their formal adjoint differential operators $\nabla^t, \nabla'^t$. 
The associated minimal closed extensions $\nabla_{\min}, \nabla_{\min}'$ and $\nabla_{\min}^t, \nabla_{\min}'^t$ are defined as the graph-closures in $L^2_*(M,E)$ of the respective differential operators. The maximal closed extensions are defined by $$\nabla_{\max}:=(\nabla^t_{\min})^*, \quad \nabla_{\max}':=(\nabla'^t_{\min})^*.$$ These extensions define Hilbert complexes in the following sense, as introduced in [BL1]. \begin{defn} \textup{[BL1]} Let the Hilbert spaces $H_i,i=0,..,m,H_{m+1}=\{0\}$ be mutually orthogonal. For each $i=0,..,m$ let $D_i\in C(H_i,H_{i+1})$ be a closed operator with domain $\mathcal{D} (D_i)$ dense in $H_i$ and range in $H_{i+1}$. Put $\mathcal{D}_i:=\mathcal{D}(D_i)$ and $R_i:=D_i(\mathcal{D}_i)$ and assume $$R_i\subseteq \mathcal{D}_{i+1}, \quad D_{i+1}\circ D_i=0.$$ This defines a complex $(\mathcal{D}, D)$ $$0 \rightarrow \mathcal{D}_0 \xrightarrow{D_0}\mathcal{D}_1\xrightarrow{D_1}\cdots \xrightarrow{D_{m-1}}\mathcal{D}_m\rightarrow 0.$$ Such a complex is called a Hilbert complex. If the homology of the complex is finite, i.e. if $R_i$ is closed and $\ker D_i / \textup{im} D_{i-1}$ is finite-dimensional for all $i=0,...,m$, the complex is referred to as a Fredholm complex. \end{defn}\ \\ \\[-7mm] Indeed, by [BL1, Lemma 3.1] the extensions define Hilbert complexes as follows \begin{align*}&(\mathcal{D}_{\min}, \nabla_{\min}), \ \textup{where} \ \mathcal{D}_{\min}:=\mathcal{D} (\nabla_{\min}), \\ &(\mathcal{D}_{\max}, \nabla_{\max}), \ \textup{where} \ \mathcal{D}_{\max}:=\mathcal{D} (\nabla_{\max}) \\[3mm] &\hspace{20mm} (\mathcal{D}_{\min}', \nabla_{\min}'), \ \textup{where} \ \mathcal{D}_{\min}':=\mathcal{D} (\nabla_{\min}'), \\ &\hspace{20mm} (\mathcal{D}_{\max}', \nabla_{\max}'), \ \textup{where} \ \mathcal{D}_{\max}':=\mathcal{D} (\nabla_{\max}') . \end{align*} Note the following well-known central result on these complexes. \begin{thm}\label{thm41} The Hilbert complexes $(\mathcal{D}_{\min}, \nabla_{\min})$ and $(\mathcal{D}_{\max}, \nabla_{\max})$ are Fredholm with the associated Laplacians $\triangle_{\textup{rel}}$ and $\triangle_{\textup{abs}}$ being strongly elliptic in the sense of [Gi]. The de Rham isomorphism identifies the homology of the complexes with the relative and absolute cohomology with coefficients: \begin{align*} H^*(\mathcal{D}_{\min}, \nabla_{\min})&\cong H^*(M,\partial M, E), \\ H^*(\mathcal{D}_{\max}, \nabla_{\max})&\cong H^*(M,E). \end{align*} Furthermore the cohomology of the Fredholm complexes $(\mathcal{D}_{\min}, \nabla_{\min})$ and $(\mathcal{D}_{\max}, \nabla_{\max})$ can be computed from the following smooth subcomplexes, \begin{align*} (\Omega^*_{\min}(M,E), \nabla), \quad &\Omega_{\min}^*(M,E):=\{\mathrm{\omega} \in \Omega^*(M,E)|\iota^*(\mathrm{\omega})=0\}, \\ (\Omega^*_{\max}(M,E), \nabla), \quad &\Omega_{\max}^*(M,E):=\Omega^*(M,E), \end{align*} respectively, where we denote by $\iota: \partial M \hookrightarrow M$ the natural inclusion of the boundary. \end{thm} \ \\ [-5mm] In the untwisted setup this theorem is essentially the statement of [BL1, Theorem 4.1]. The theorem remains true in the general setup. An analogue of the trace theorem [P, Theorem 1.9], in case of flat vector bundles, allows an explicit computation of the boundary conditions for $\triangle_{\textup{rel}}$ and $\triangle_{\textup{abs}}$. Then [Gi, Lemma 1.11.1] implies strong ellipticity of the Laplacians. Note that this result in the reference [Gi] is proved explicitly, even though other aspects of [Gi, Section 1.11] are rather expository. 
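\\[3mm] For orientation we recall, only as an illustrative sketch in the untwisted case $E=M\times \mathbb{C}$, $\nabla=d$ (the twisted case is completely analogous), the familiar form of these boundary conditions. Writing $\delta=d^t$ for the formal adjoint and $i_{\nu}$ for contraction with the unit normal vector field $\nu$ along $\partial M$ (notation used only in this paragraph), a form $\mathrm{\omega} \in \Omega^k(M)$ lies in the domain of $\triangle_{\textup{rel}}$, respectively $\triangle_{\textup{abs}}$, precisely when \begin{align*} \triangle_{\textup{rel}}:& \quad \iota^*(\mathrm{\omega})=0, \quad \iota^*(\delta \mathrm{\omega})=0, \\ \triangle_{\textup{abs}}:& \quad i_{\nu}\mathrm{\omega}|_{\partial M}=0, \quad i_{\nu}(d\mathrm{\omega})|_{\partial M}=0, \end{align*} i.e. the relative conditions annihilate the tangential components and the absolute conditions annihilate the normal components at the boundary.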
\\[3mm] By strong ellipticity the Laplacians $\triangle_{\textup{rel}}$ and $\triangle_{\textup{abs}}$ are Fredholm and by [BL1, Theorem 2.4] the complexes $(\mathcal{D}_{\min}, \nabla_{\min})$ and $(\mathcal{D}_{\max}, \nabla_{\max})$ are Fredholm as well. By [BL1, Theorem 3.5] their cohomology indeed can be computed from the smooth subcomplexes $(\Omega^*_{\min}(M,E), \nabla)$ and $(\Omega^*_{\max}(M,E), \nabla)$, respectively. \\[3mm] Finally, the relation to the relative and absolute cohomolgy (the twisted de Rham theorem) is proved in [RS, Section 4] for flat Hermitian metrics, but an analogous proof works in the general case. Corresponding results hold also for the complexes associated to the dual connection $\nabla'$. \\[3mm] Furthermore, the Riemannian metric $g^M$ and the fixed orientation on $M$ give rise to the Hodge-star operator for any $k=0,..,m=\dim M$: $$*:\Omega^k(M,E)\to \Omega^{m-k}(M,E).$$ Define $$\Gamma :=i^r(-1)^{\frac{k(k+1)}{2}}*:\Omega^k(M,E)\to \Omega^{m-k}(M,E), \quad r:= (\dim M+1)/2.$$ This operator extends to a well-defined self-adjoint involution on $L^2_*(M,E)$, which we also denote by $\Gamma$. The following properties of $\Gamma$ are essential for the later construction. \begin{lemma}\label{G-Lemma} The self-adjoint involution $\Gamma$ relates the minimal and maximal closed extensions of $\nabla$ and $\nabla'$ as follows $$\Gamma\nabla_{\min} \Gamma=(\nabla_{\max}')^*, \quad \Gamma\nabla_{\max} \Gamma=(\nabla_{\min}')^*.$$ \end{lemma} \begin{proof} One first checks explicitly, cf. [BGV, Proposition 3.58] $$\Gamma \nabla \Gamma =(\nabla')^t, \quad \Gamma\nabla' \Gamma=\nabla^t.$$ Recall that the maximal domain of $\nabla, \nabla'$ can also be characterized as a subspace of $L^2_*(M,E)$ with its image under $\nabla, \nabla'$ being again in $L^2_*(M,E)$. Since $\Gamma$ gives an involution on $L^2_*(M,E)$, we obtain: \begin{align*} \Gamma \nabla_{\max} \Gamma =(\nabla')^t_{\textup{max}}, \quad \Gamma\nabla_{\max}' \Gamma=\nabla_{\max}^t&, \\ \textup{i.e.} \quad \Gamma \nabla_{\max} \Gamma =(\nabla_{\min}')^*, \quad \Gamma\nabla_{\max}' \Gamma=\nabla_{\min}^*&. \end{align*} Taking adjoints on both sides of the last relation, we obtain the full statement of the lemma, since $\Gamma$ is self-adjoint. \end{proof} \ \\ \\[-7mm] Now we can introduce the following central concepts. \begin{defn}\label{domain} $(\widetilde{\mathcal{D}}, \widetilde{\nabla}):=(\mathcal{D}_{\min}, \nabla_{\min})\oplus (\mathcal{D}_{\max}, \nabla_{\max}).$ The chirality operator $\widetilde{\Gamma}$ on $(\widetilde{\mathcal{D}}, \widetilde{\nabla})$ by definition acts anti-diagonally with respect to the direct sum of the components \begin{align}\label{chirality} \widetilde{\Gamma} :=\left(\begin{array}{rr} 0 & \Gamma \\ \Gamma & 0 \end{array}\right). \end{align} \end{defn}\ \\ \\[-7mm] The Fredholm complex $(\widetilde{\mathcal{D}}, \widetilde{\nabla})$ with the chirality operator $\widetilde{\Gamma}$ is in case of a flat Hermitian metric a complex with Poincare duality, in the sense of [BL1, Lemma 2.16], i.e. $$\nabla h^E=0 \ \Rightarrow \ \widetilde{\Gamma}\widetilde{\nabla}=\widetilde{\nabla}^*\widetilde{\Gamma},$$ which follows directly from Lemma \ref{G-Lemma}. We now apply the concepts of Braverman and Kappeler to our new setup. 
\begin{defn}\label{odd-signature} The odd-signature operator of the Hilbert complex $(\widetilde{\mathcal{D}}, \widetilde{\nabla})$ is defined as follows $$\mathcal{B}:=\widetilde{\Gamma}\widetilde{\nabla}+\widetilde{\nabla}\widetilde{\Gamma}.$$ \end{defn}\ \\ \\[-7mm] Before we can state some basic properties of the odd signature operator, let us recall the notions of the Gauss-Bonnet operator and its relative and absolute self-adjoint extensions. The Gauss-Bonnet operator $$D^{GB}:=\nabla+\nabla^t,$$ admits two natural self-adjoint extensions \begin{align}\label{gauss-bonnet-rel-abs} D^{GB}_{\textup{rel}}=\nabla_{\min}+\nabla_{\min}^*, \ D^{GB}_{\textup{abs}}=\nabla_{\max}+\nabla_{\max}^*, \end{align} respectively called the relative and the absolute self-adjoint extensions. Their squares are correspondingly the relative and the absolute Laplace operators: $$\triangle_{\textup{rel}}=(D^{GB}_{\textup{rel}})^*D^{GB}_{\textup{rel}}, \quad \triangle_{\textup{abs}}=(D^{GB}_{\textup{abs}})^*D^{GB}_{\textup{abs}}.$$ Similar definitions, of course, hold for the Gauss-Bonnet Operator associated to the dual covariant derivative $\nabla'$. Now we can state the following basic result. \begin{lemma}\label{odd-signature-laplacian} The leading symbols of $\mathcal{B}$ and $\widetilde{\Gamma} \left(D^{GB}_{\textup{rel}}\oplus D'^{GB}_{\textup{abs}}\right)$ coincide and moreover $$\mathcal{D}(\mathcal{B})= \mathcal{D} \left(D^{GB}_{\textup{rel}}\oplus D'^{GB}_{\textup{abs}}\right).$$ \end{lemma} \begin{proof} First recall the relations $$\Gamma\nabla\Gamma=(\nabla')^t, \quad \Gamma\nabla^t\Gamma=\nabla'.$$ All connections differ by an endomorphism-valued differential form of degree one, which can be viewed as a differential operator of order zero. This implies the statement on the leading symbol of $\mathcal{B}$ and $\widetilde{\Gamma} \left(D^{GB}_{\textup{rel}}\oplus D'^{GB}_{\textup{abs}}\right)$ \\[3mm] A differential operator of zero order naturally extends to a bounded operator on the $L^2$-Hilbert space, and hence does not pose additional restrictions on the domain, in particular we obtain (compare Lemma \ref{G-Lemma}) $$\mathcal{D} (\nabla_{\min}^*)=\mathcal{D} (\Gamma \nabla_{\max} \Gamma), \quad \mathcal{D} (\nabla_{\max}^*)=\mathcal{D} (\Gamma \nabla_{\min} \Gamma).$$ Using these domain relations we find: \begin{align*} \mathcal{D}(\mathcal{B})= \mathcal{D} \left(\widetilde{\Gamma}(D^{GB}_{\textup{rel}}\oplus D'^{GB}_{\textup{abs}})\right)=\mathcal{D} \left(D^{GB}_{\textup{rel}}\oplus D'^{GB}_{\textup{abs}}\right). \end{align*} \end{proof} \ \\ \\[-7mm] Note by the arguments of the lemma above that $\mathcal{B}$ is a bounded perturbation of a closed operator $\widetilde{\Gamma} \left(D^{GB}_{\textup{rel}}\oplus D'^{GB}_{\textup{abs}}\right)$ and hence is closed, as well. Before we continue analyzing the spectral properties of the odd-signature operator $\mathcal{B}$, let us introduce some concepts and notation. \begin{defn} Let $D$ be a closed operator in a separable Hilbert space. An angle $\theta\in [0,2\pi)$ is called an "Agmon angle" for $D$, if for $R_{\theta}\subset \mathbb{C}$ being the cut in $\mathbb{C}$ corresponding to $\theta$ $$R_{\theta}:=\{z \in \mathbb{C} | z=|z|\cdot e^{i\theta}\}$$ we have the following spectral relation $$R_{\theta}\cap \textup{Spec}(D)\backslash \{0\}=\emptyset.$$ \end{defn} \begin{thm}\label{Freddy} \textup{[S. Agmon, R. Seeley]} Let $(K,g^K)$ be a smooth compact oriented Riemannian manifold with boundary $\partial K$. 
Let $(F,h^F)$ be a Hermitian vector bundle over $K$. The metric structures $(g^K,h^F)$ define an $L^2$-inner product. Let $$D:C^{\infty}(K,F)\to C^{\infty}(K,F)$$ be a differential operator of order $\mathrm{\omega}$ such that $\mathrm{\omega} \cdot \textup{rank}F$ is even. Consider a boundary value problem $(D,B)$ strongly elliptic with respect to $\mathbb{C} \backslash \mathbb{R}^*$ in the sense of [Gi]. Then \begin{enumerate} \item $D_B$ is a Fredholm operator with compact resolvent and discrete spectrum of eigenvalues of finite (algebraic) multiplicity, accumulating only at infinity. \item The operator $D_B$ admits an Agmon angle $\theta \in (-\pi, 0)$ and the associated zeta-function \begin{align*} &\zeta(s, D_B):=\sum\limits_{\lambda \in \textup{Spec}(D_B)\backslash \{0\}}m(\lambda)\cdot \lambda_{\theta}^{-s}, \quad \textup{Re}(s) > \frac{\dim K}{\mathrm{\omega}}, \end{align*} where $\lambda_{\theta}^{-s}:=\textup{exp}(-s\cdot \log_{\theta}\lambda)$ and $m(\lambda)$ denotes the multiplicity of the eigenvalue $\lambda$, is holomorphic for $\textup{Re}(s) > \dim K / \mathrm{\omega}$ and admits a meromorphic extension to the whole complex plane $\mathbb{C}$ with $s=0$ being a regular point. \end{enumerate} \end{thm}\ \\ \\[-7mm] For the proof of the theorem note that the notion of strong ellipticity in the sense of [Gi] in fact combines ellipticity with Agmon's conditions, as in the treatment of elliptic boundary conditions by R.T. Seeley in [Se1, Se2]. The statement of the theorem above then follows from [Ag] and [Se1, Se2]. \begin{remark} The definition of a zeta-function, as in Theorem \ref{Freddy} (ii), also applies to any operator $D$ with finite spectrum $\{\lambda_1,..,\lambda_n\}$ and finite respective multiplicities $\{m_1,..,m_n\}$. For a given Agmon angle $\theta \in [0,2\pi)$ the associated zeta-function $$\zeta_{\theta}(s,D):=\sum_{i=1, \lambda_i\neq 0}^nm_i\cdot (\lambda_i)^{-s}_{\theta}$$ is holomorphic for all $s\in \mathbb{C}$, since the sum is finite and the eigenvalue zero is excluded. \end{remark}\ \\ \\[-7mm] Now we return to our specific setup. The following result is important in view of the relation between $\mathcal{B}$ and the Gauss-Bonnet operators with relative and absolute boundary conditions, as established in Lemma \ref{odd-signature-laplacian}. \begin{prop}\label{strongly-elliptic0} The operators $$D=\widetilde{\Gamma} (D^{GB}_{\textup{rel}}\oplus D'^{GB}_{\textup{abs}}), \quad D^2=\triangle_{\textup{rel}}\oplus \triangle'_{\textup{abs}}$$ are strongly elliptic with respect to $\mathbb{C}\backslash \mathbb{R}^*$ and $\mathbb{C}\backslash \mathbb{R}^+$, respectively, in the sense of P. Gilkey [Gi]. \end{prop}\ \\ \\[-7mm] The fact that $D^2=\triangle_{\textup{rel}}\oplus \triangle'_{\textup{abs}}$ is strongly elliptic with respect to $\mathbb{C}\backslash \mathbb{R}^+$ is already encountered in Theorem \ref{thm41}. The strong ellipticity of $D$ now follows from [Gi, Lemma 1.11.2]. Note that this result in the reference [Gi] is proved explicitly, even though other aspects of [Gi, Section 1.11] are rather expository. \\[3mm] Since Lemma \ref{odd-signature-laplacian} asserts the equality between the leading symbols of the differential operators $\mathcal{B},D$ and moreover the equality of the associated boundary conditions, the odd signature operator $\mathcal{B}$ and its square $\mathcal{B}^2$ are strongly elliptic as well. This proves together with Theorem \ref{Freddy} the next proposition.
\begin{prop}\label{strongly-elliptic} The operators $\mathcal{B}$ and $\mathcal{B}^2$ are strongly elliptic with respect to $\mathbb{C}\backslash \mathbb{R}^*$ and $\mathbb{C}\backslash \mathbb{R}^+$, respectively, in the sense of P. Gilkey [Gi]. The operators $\mathcal{B}, \mathcal{B}^2$ are discrete with their spectrum accumulating only at infinity. \end{prop} \ \\ \\[-7mm] Let now $\lambda \geq 0$ be any non-negative real number. Denote by $\Pi_{\mathcal{B}^2, [0,\lambda]}$ the spectral projection of $\mathcal{B}^2$ onto eigenspaces with eigenvalues of absolute value in the interval $[0,\lambda]$: $$\Pi_{\mathcal{B}^2, [0,\lambda]}:=\frac{i}{2\pi}\int_{C(\lambda)}(\mathcal{B}^2-x)^{-1}dx,$$ with $C(\lambda)$ being any closed counterclockwise circle surrounding eigenvalues of absolute value in $[0,\lambda]$ with no other eigenvalue inside. One finds using the analytic Fredholm theorem that the range of the projection lies in the domain of $\mathcal{B}^2$ and that the projection commutes with $\mathcal{B}^2$. \\[3mm] Since $\mathcal{B}^2$ is discrete, the spectral projection $\Pi_{\mathcal{B}^2, [0,\lambda]}$ is of finite rank, i.e. with a finite-dimensional image. In particular $\Pi_{\mathcal{B}^2,[0,\lambda]}$ is a bounded operator in $L^2_*(M,E\oplus E)$. Hence with [K, Section 4, p.155] the decomposition \begin{align}\label{decomp-L-2} L^2_*(M,E\oplus E)=\textup{Image}\Pi_{\mathcal{B}^2, [0,\lambda]}\oplus \textup{Image}(\mathbf{1} - \Pi_{\mathcal{B}^2, [0,\lambda]}), \end{align} is a direct sum decomposition into closed subspaces of the Hilbert space $L^2_*(M,E\oplus E)$. \\[3mm] Note that if $\mathcal{B}^2$ is self-adjoint, the decomposition is orthogonal with respect to the fixed $L^2-$Hilbert structure, i.e. the projection $\Pi_{\mathcal{B}^2,[0,\lambda]}$ is an orthogonal projection, which is the case only if the Hermitian metric $h^E$ is flat with respect to $\nabla$. \\[3mm] The decomposition induces by restriction a decomposition of $\widetilde{\mathcal{D}}$, which was introduced in Definition \ref{domain}: $$\widetilde{\mathcal{D}}=\widetilde{\mathcal{D}}_{[0,\lambda]}\oplus \widetilde{\mathcal{D}}_{(\lambda, \infty)}.$$ Since $\widetilde{\nabla}$ commutes with $\mathcal{B}, \mathcal{B}^2$ and hence also with $\Pi_{\mathcal{B}^2, [0,\lambda]}$, we find that the decomposition above is in fact a decomposition into subcomplexes: \begin{align}\nonumber (\widetilde{\mathcal{D}}, \widetilde{\nabla})=(\widetilde{\mathcal{D}}_{[0,\lambda]}, \widetilde{\nabla}_{[0,\lambda]})\oplus (\widetilde{\mathcal{D}}_{(\lambda, \infty)}, \widetilde{\nabla}_{(\lambda, \infty)}) \\ \label{decomposition} \textup{where} \ \widetilde{\nabla}_{\mathcal{I}}:=\widetilde{\nabla}|_{\widetilde{\mathcal{D}}_{\mathcal{I}}} \ \textup{for} \ \mathcal{I}=[0,\lambda] \ \textup{or} \ (\lambda, \infty). \end{align} Further $\widetilde{\Gamma}$ also commutes with $\mathcal{B}, \mathcal{B}^2$ and hence also with $\Pi_{\mathcal{B}^2, [0,\lambda]}$. 
Thus as above we obtain $$\widetilde{\Gamma} = \widetilde{\Gamma}_{[0,\lambda]} \oplus \widetilde{\Gamma}_{(\lambda, \infty)}.$$ Consequently the odd-signature operator of the complex $(\widetilde{\mathcal{D}}, \widetilde{\nabla})$ decomposes correspondingly \begin{align}\nonumber &\mathcal{B}=\mathcal{B}^{[0,\lambda]}\oplus \mathcal{B}^{(\lambda, \infty)}\\ \textup{where} \quad &\mathcal{B}^{\mathcal{I}}:=\widetilde{\Gamma}_{\mathcal{I}}\widetilde{\nabla}_{\mathcal{I}}+\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}} \ \textup{for} \ \mathcal{I}=[0,\lambda] \ \textup{or} \ (\lambda, \infty). \label{66} \end{align} The closedness of the subspace Image$(1-\Pi_{\mathcal{B}^2,[0,\lambda]})$ implies that the domain of $\mathcal{B}^{(\lambda, \infty)}$ $$\mathcal{D} (\mathcal{B}^{(\lambda, \infty)}):=\mathcal{D} (\mathcal{B})\cap \textup{Image}(1-\Pi_{\mathcal{B}^2,[0,\lambda]})$$ is closed under the graph-norm, hence the operator $\mathcal{B}^{(\lambda, \infty)}$ is a closed operator in the Hilbert space $\textup{Image}(1-\Pi_{\mathcal{B}^2,[0,\lambda]})$. \\[3mm] We need to analyze the direct sum component $\mathcal{B}^{(\lambda, \infty)}$. For this we proceed with the following general functional analytic observations. \begin{prop}\label{compact} Let $D$ be a closed operator in a separable Hilbert space $(H, \langle \cdot ,\cdot \rangle)$. The domain $\mathcal{D} (D)$ is a Hilbert space with the graph-norm $$\langle x,y\rangle_D=\langle x,y\rangle+\langle Dx,Dy\rangle$$ for any $x,y \in \mathcal{D} (D)$. Let Res$D \neq \emptyset$. Then the following statements are equivalent \\ 1) \ The inclusion $\iota : \mathcal{D} (D) \hookrightarrow H$ is a compact operator \\ 2) \ $D$ has a compact resolvent, i.e. for some (and thus for all) $z \in$ Res$(D)$ the resolvent operator $(D-z)^{-1}$ is a compact operator on $H$. \end{prop} \begin{proof} Assume first that the inclusion $\iota : \mathcal{D} (D) \hookrightarrow H$ is a compact operator. Since Spec$D \neq \mathbb{C}$ the resolvent set Res$(D)$ is not empty. For any $z \in$ Res$(D)$ the resolvent operator $$(D-z)^{-1}: H \to \mathcal{D} (D)$$ exists and is bounded, by definition of the resolvent set. With the inclusion $\iota$ being a compact operator we find directly that $(D-z)^{-1}$ is compact as an operator from $H$ to $H$. Finally, if $(D-z)^{-1}$ is compact for some $z \in$ Res$(D)$, then by the second resolvent identity it is compact for all $z \in$ Res$(D)$, see also [K, p.187]. \\[3mm] Conversely assume that for some (and therefore for all) $z \in$ Res$(D)$ the resolvent operator $(D-z)^{-1}$ is compact as an operator from $H$ into $H$. Observe $$\iota = (D-z)^{-1}\circ (D-z):\mathcal{D} (D) \hookrightarrow H. $$ By compactness of the resolvent operator, $\iota$ is compact as an operator between the Hilbert spaces $\mathcal{D} (D)$ and $H$. \end{proof} \begin{prop}\label{index-zero} Let $D$ be a closed operator in a separable Hilbert space $H$ with Res$(D) \neq \emptyset$ and compact resolvent. Then $D$ is a Fredholm operator with $$\textup{index} \, D=0.$$ \end{prop} \begin{proof} By closedness of $D$ the domain $\mathcal{D}(D)$ turns into a Hilbert space equipped with the graph norm. By Proposition \ref{compact} the natural inclusion $$\iota : \mathcal{D} (D) \hookrightarrow H$$ is a compact operator. Therefore, viewing $\mathcal{D} (D)$ as a subspace of $H$, i.e. 
endowed with the inner-product of $H$, the inclusion $$\iota : \mathcal{D} (D) \subset H \hookrightarrow H$$ is relatively $D$-compact in the sense of [K, Section 4.3, p.194]. More precisely this means that if for a sequence $\{u_n\}\subset \mathcal{D} (D)$ both $\{u_n\}$ and $\{Du_n\}$ are bounded sequences in $H$, then $\{\iota (u_n)\}\subset H$ has a convergent subsequence. \\[3mm] Now for any $\lambda \in \mathbb{C}\backslash \textup{Spec}(D)$ the operator $$(D-\lambda \iota):\mathcal{D} (D) \subset H \rightarrow H$$ is invertible and hence trivially a Fredholm operator with trivial kernel and closed range $H$. In particular $$\textup{index}(D-\lambda \iota)=0.$$ Now, from the stability of the Fredholm index under relatively compact perturbations (see [K, Theorem 5.26] and the references therein) we infer, since the inclusion $\iota$ is relatively compact, that $D$ is a Fredholm operator of zero index: $$\textup{index}\, D=\textup{index}(D-\lambda\iota)=0.$$ \end{proof} \begin{cor}\label{bijective} The operator $\mathcal{B}^{(\lambda, \infty)}: \mathcal{D} (\mathcal{B}^{(\lambda, \infty)})\to \textup{Image}(1-\Pi_{\mathcal{B}^2,[0,\lambda]})$ of the complex $(\widetilde{\mathcal{D}}_{(\lambda, \infty)}, \widetilde{\nabla}_{(\lambda, \infty)})$ with $\lambda \geq 0$ is bijective. \end{cor} \begin{proof} Consider any $z \in \mathbb{C} \backslash \textup{Spec}(\mathcal{B})$. By the strong ellipticity of $\mathcal{B}$, the operator $$(\mathcal{B}-z):\mathcal{D} (\mathcal{B})\rightarrow L^2_*(M,E\oplus E)$$ is bijective with compact inverse. Hence we immediately find that the restriction \begin{align*} (\mathcal{B}^{(\lambda, \infty)}-z)\equiv (\mathcal{B}-z)\restriction \textup{Im}(1-\Pi_{\mathcal{B}^2,[0,\lambda]}): \mathcal{D} (\mathcal{B}^{(\lambda, \infty)})\rightarrow \textup{Im}(1-\Pi_{\mathcal{B}^2,[0,\lambda]}) \end{align*} is bijective with compact inverse, as well. Now we deduce from Proposition \ref{index-zero} that $\mathcal{B}^{(\lambda, \infty)}$ is Fredholm with $$\textup{index}\, \mathcal{B}^{(\lambda, \infty)}=0.$$ The operator $\mathcal{B}^{(\lambda, \infty)}$ is injective, by definition. Combining injectivity with the vanishing of the index, we derive surjectivity of $\mathcal{B}^{(\lambda, \infty)}$. This proves the statement. \end{proof}\ \\ \\[-7mm] Note that in the case of a flat Hermitian metric the assertion of the previous corollary is simply the general fact that a self-adjoint Fredholm operator is invertible if and only if its kernel is trivial. \begin{cor}\label{cohomology} The subcomplex $(\widetilde{\mathcal{D}}_{(\lambda, \infty)}, \widetilde{\nabla}_{(\lambda, \infty)})$ is acyclic and $$H^*((\widetilde{\mathcal{D}}_{[0,\lambda]}, \widetilde{\nabla}_{[0,\lambda]}))\cong H^*(\widetilde{\mathcal{D}}, \widetilde{\nabla}).$$ \end{cor} \begin{proof} Corollary \ref{bijective} allows us to apply the purely algebraic result [BK2, Lemma 5.8]. Consequently $(\widetilde{\mathcal{D}}_{(\lambda, \infty)}, \widetilde{\nabla}_{(\lambda, \infty)})$ is an acyclic complex. Together with the decomposition \eqref{decomposition} this proves the assertion.
\end{proof} \ \\ \\[-7mm] Observe that since the spectrum of $\mathcal{B}^2$ is discrete accumulating only at infinity, $(\widetilde{\mathcal{D}}_{[0,\lambda]}, \widetilde{\nabla}_{[0,\lambda]})$ is a complex of finite-dimensional complex vector spaces with $\widetilde{\Gamma}_{[0,\lambda]}:\widetilde{\mathcal{D}}^k_{[0,\lambda]}\to \widetilde{\mathcal{D}}^{m-k}_{[0,\lambda]}$ being the chirality operator on the complex in the sense of [BK2, Section 1.1]. \\[3mm] We also use the notion of determinant lines of finite dimensional complexes in [BK2, Section 1.1], which are given for any finite complex of finite-dimensional vector spaces $(C^*,\partial_*)$ as follows: $$\textup{Det}H^*(C^*,\partial_*)=\bigotimes\limits_k \det H^k(C^*,\partial_*)^{(-1)^k}, $$ where $\det H^k(C^*,\partial_*)$ is the top exterior power of $H^k(C^*,\partial_*)$ and $\det H^k(C^*,\partial_*)^{-1}\equiv\det H^k(C^*,\partial_*)^*$. We follow [BK2, Section 1.1] and form the "refined torsion" (note the difference to "refined analytic torsion") of the complex $(\widetilde{\mathcal{D}}_{[0,\lambda]}, \widetilde{\nabla}_{[0,\lambda]})$ \begin{align}\label{finite-torsion} \rho_{[0,\lambda]}:=c_0\otimes (c_1)^{-1}\otimes \cdots \otimes (c_r)^{(-1)^r} \otimes (\widetilde{\Gamma}_{[0,\lambda]}c_r)^{(-1)^{r+1}}\otimes \cdots \\ \cdots \otimes (\widetilde{\Gamma}_{[0,\lambda]}c_1) \otimes (\widetilde{\Gamma}_{[0,\lambda]}c_0)^{(-1)}\in \textup{Det}(H^*(\widetilde{\mathcal{D}}_{[0,\lambda]}, \widetilde{\nabla}_{[0,\lambda]})), \nonumber \end{align} where $c_k\in \det H^k(\widetilde{\mathcal{D}}_{[0,\lambda]}, \widetilde{\nabla}_{[0,\lambda]})$ are arbitrary elements of the determinant lines, $\widetilde{\Gamma}_{[0,\lambda]}$ denotes the chirality operator $\widetilde{\Gamma}_{[0,\lambda]}:\widetilde{\mathcal{D}}^{\bullet}_{[0,\lambda]}\to \widetilde{\mathcal{D}}^{m-\bullet}_{[0,\lambda]}$ extended to determinant lines and for any $v\in \det H^k(\widetilde{\mathcal{D}}_{[0,\lambda]}, \widetilde{\nabla}_{[0,\lambda]})$ the dual $v^{-1}\in \det H^k(\widetilde{\mathcal{D}}_{[0,\lambda]}, \widetilde{\nabla}_{[0,\lambda]})^{-1}\equiv \det H^k(\widetilde{\mathcal{D}}_{[0,\lambda]}, \widetilde{\nabla}_{[0,\lambda]})^*$ is the unique element such that $v^{-1}(v)=1$. \\[3mm] By Corollary \ref{cohomology} we can view $\rho_{[0,\lambda]}$ canonically as an element of $\textup{Det}(H^*(\widetilde{\mathcal{D}}, \widetilde{\nabla}))$, which we do henceforth. \\[3mm] The second part of the construction is the graded determinant. The operator $\mathcal{B}^{(\lambda, \infty)},\lambda \geq 0$ is bijective by Corollary \ref{bijective} and hence by injectivity (put $\mathcal{I}=(\lambda, \infty)$ to simplify the notation) \begin{align}\label{kern} \textup{ker}(\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}})\cap\textup{ker}(\widetilde{\Gamma}_{\mathcal{I}}\widetilde{\nabla}_{\mathcal{I}})=\{0\}. 
\end{align} Further the complex $(\widetilde{\mathcal{D}}_{\mathcal{I}}, \widetilde{\nabla}_{\mathcal{I}})$ is acyclic by Corollary \ref{cohomology} and due to $\widetilde{\Gamma}_{\mathcal{I}}$ being an involution on $\textup{Im}(1-\Pi_{\mathcal{B}^2,[0,\lambda]})$ we have \begin{align}\label{image1} \textup{ker}(\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}})=\widetilde{\Gamma}_{\mathcal{I}}\textup{ker}(\widetilde{\nabla}_{\mathcal{I}})=\widetilde{\Gamma}_{\mathcal{I}}\textup{Im}(\widetilde{\nabla}_{\mathcal{I}})=\textup{Im}(\widetilde{\Gamma}_{\mathcal{I}}\widetilde{\nabla}_{\mathcal{I}}), \\ \label{image2} \textup{ker}(\widetilde{\Gamma}_{\mathcal{I}}\widetilde{\nabla}_{\mathcal{I}})=\textup{ker}(\widetilde{\nabla}_{\mathcal{I}})=\textup{Im}(\widetilde{\nabla}_{\mathcal{I}})=\textup{Im}(\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}}). \end{align} We have $\textup{Im}(\widetilde{\Gamma}_{\mathcal{I}}\widetilde{\nabla}_{\mathcal{I}})+\textup{Im}(\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}})=\textup{Im}(\mathcal{B}^{\mathcal{I}})$ and by surjectivity of $\mathcal{B}^{\mathcal{I}}$ we obtain from the last three relations above \begin{align}\label{hilbert-decomposition} \textup{Im}(1-\Pi_{\mathcal{B}^2, [0,\lambda]})=\textup{ker}(\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}})\oplus\textup{ker}(\widetilde{\Gamma}_{\mathcal{I}}\widetilde{\nabla}_{\mathcal{I}}). \end{align} Note that $\mathcal{B}$ leaves $\ker (\widetilde{\nabla}\widetilde{\Gamma})$ and $\ker (\widetilde{\Gamma}\widetilde{\nabla})$ invariant. Put \begin{align*} \mathcal{B}^{+,(\lambda,\infty)}_{\textup{even}}:=\mathcal{B}^{(\lambda,\infty)}\restriction \widetilde{\mathcal{D}}^{\textup{even}}\cap \ker (\widetilde{\nabla}\widetilde{\Gamma}), \\ \mathcal{B}^{-,(\lambda,\infty)}_{\textup{even}}:=\mathcal{B}^{(\lambda,\infty)}\restriction \widetilde{\mathcal{D}}^{\textup{even}}\cap \ker (\widetilde{\Gamma}\widetilde{\nabla}). \end{align*} We obtain a direct sum decomposition $$\mathcal{B}^{(\lambda,\infty)}_{\textup{even}}=\mathcal{B}^{+,(\lambda,\infty)}_{\textup{even}} \oplus \mathcal{B}^{-,(\lambda,\infty)}_{\textup{even}}.$$ As a consequence of Theorem \ref{Freddy} (ii) and Proposition \ref{strongly-elliptic} there exists an Agmon angle $\theta\in (-\pi, 0)$ for $\mathcal{B}$, which is clearly an Agmon angle for the restrictions above, as well. \\[3mm] By Theorem \ref{Freddy} and Proposition \ref{strongly-elliptic} the zeta function $\zeta_{\theta}(s,\mathcal{B})$ is holomorphic for Re$(s)$ sufficiently large. The zeta-functions $\zeta_{\theta}(s,\mathcal{B}^{\pm,(\lambda,\infty)}_{\textup{even}})$ of $\mathcal{B}^{\pm,(\lambda,\infty)}_{\textup{even}}$, defined with respect to the given Agmon angle $\theta$, are holomorphic for Re$(s)$ large as well, since the restricted operators have the same spectrum as $\mathcal{B}$ but in general with lower or at most the same multiplicities. \\[3mm] We define the \emph{graded zeta-function} $$\zeta_{gr,\theta}(s,\mathcal{B}^{(\lambda,\infty)}_{\textup{even}}):=\zeta_{\theta}(s,\mathcal{B}^{+,(\lambda,\infty)}_{\textup{even}})-\zeta_{\theta}(s,-\mathcal{B}^{-,(\lambda,\infty)}_{\textup{even}}), \ Re(s)\gg 0.$$ \\[3mm] In the next subsection we prove in Theorem \ref{log-det-gr} that the graded zeta-function extends meromorphically to $\mathbb{C}$ and is regular at $s=0$. For the time being we shall assume regularity at zero and define the graded determinant. 
\begin{defn}\label{graded-determinant}[Graded determinant] Let $\theta \in (-\pi, 0)$ be an Agmon angle for $\mathcal{B}^{(\lambda, \infty)}$. Then the "graded determinant" associated to $\mathcal{B}^{(\lambda, \infty)}$ and its Agmon angle $\theta$ is defined as follows: $$\det\nolimits_{gr,\theta}(\mathcal{B}^{(\lambda, \infty)}_{\textup{even}}):= \textup{exp}(-\left.\frac{d}{ds}\right|_{s=0} \zeta_{gr,\theta}(s,\mathcal{B}^{(\lambda,\infty)}_{\textup{even}})).$$ \end{defn} \begin{prop}\label{rho-element} The element $$\rho(\nabla, g^M, h^E):=\det\nolimits_{gr,\theta}(\mathcal{B}^{(\lambda, \infty)}_{\textup{even}})\cdot \rho_{[0,\lambda]}\in \textup{Det}(H^*(\widetilde{\mathcal{D}}, \widetilde{\nabla}))$$ is independent of the choice of $\lambda \geq 0$ and choice of Agmon angle $\theta \in (-\pi, 0)$ for the odd-signature operator $\mathcal{B}^{(\lambda, \infty)}$. \end{prop} \begin{proof} Let $0 \leq \lambda < \mu < \infty$. We obtain $\widetilde{\mathcal{D}}_{[0,\mu]}=\widetilde{\mathcal{D}}_{[0,\lambda]}\oplus \widetilde{\mathcal{D}}_{(\lambda, \mu]}$ and also $\widetilde{\mathcal{D}}_{(\lambda, \infty)}\!=\widetilde{\mathcal{D}}_{(\lambda, \mu]}\oplus \widetilde{\mathcal{D}}_{(\mu, \infty)}$. Since the odd-signature operator respects this spectral direct sum decomposition (see \eqref{66}), we obtain $$\det\nolimits_{gr}(\mathcal{B}^{(\lambda, \infty)}_{\textup{even}})=\det\nolimits_{gr}(\mathcal{B}^{(\mu, \infty)}_{\textup{even}})\cdot \det\nolimits_{gr}(\mathcal{B}^{(\lambda, \mu]}_{\textup{even}}).$$ Further the purely algebraic discussion behind [BK2, Proposition 5.10] implies $$\rho_{[0,\mu]}=\det\nolimits_{gr}(\mathcal{B}^{(\lambda, \mu]}_{\textup{even}})\cdot \rho_{[0,\lambda]}.$$ This proves the following equality $$\det\nolimits_{gr}(\mathcal{B}^{(\lambda, \infty)}_{\textup{even}})\cdot \rho_{[0,\lambda]}=\det\nolimits_{gr}(\mathcal{B}^{(\mu, \infty)}_{\textup{even}})\cdot \rho_{[0,\mu]}.$$ To see independence of $\theta \in (-\pi, 0)$ note that the strongly elliptic operator (cf. Lemma \ref{odd-signature-laplacian}) $$D:=\widetilde{\Gamma} (D^{GB}_{rel}\oplus D'^{GB}_{abs})$$ is self-adjoint and $\mathcal{B}$ differs from $D$ by a bounded perturbation. By a Neumann-series argument and the asymptotics of the resolvent for $D$ (see [Se1, Lemma 15]) we get: \begin{align}\label{spectral-cut} \forall \theta \in (-\pi, 0): \quad \textup{Spec}(\mathcal{B})\cap R_{\theta} \quad \textup{is finite.} \end{align} By discreteness of $\mathcal{B}$ we deduce that if $\theta, \theta' \in (-\pi,0)$ are both Agmon angles for $\mathcal{B}^{(\lambda, \infty)}$, there are only finitely many eigenvalues of $\mathcal{B}^{(\lambda, \infty)}$ in the solid angle between $\theta$ and $\theta'$. Hence \begin{align*} \left.\frac{d}{ds}\right|_{s=0}\zeta_{gr,\theta}(s,\mathcal{B}^{(\lambda,\infty)}_{\textup{even}}))\equiv \left.\frac{d}{ds}\right|_{s=0}\zeta_{gr,\theta'}(s,\mathcal{B}^{(\lambda,\infty)}_{\textup{even}})) \quad \textup{mod} \ 2\pi i, \\ \textup{and therefore } \ \det\nolimits_{gr,\theta}(\mathcal{B}^{(\lambda, \infty)}_{\textup{even}})=\det\nolimits_{gr,\theta'}(\mathcal{B}^{(\lambda, \infty)}_{\textup{even}}). \end{align*} This proves independence of the choice of $\theta \in (-\pi, 0)$ and completes the proof. \end{proof}\ \\ \\[-7mm] The element $\rho(\nabla, g^M,h^E)$ is well-defined but a priori not independent of the choice of metrics $g^M, h^E$ and so does not provide a differential invariant. 
In the next subsection we determine the metric anomaly of $\rho(\nabla, g^M,h^E)$ in order to construct a differential invariant, which will be called the refined analytic torsion. \section{Metric Anomaly and Refined Analytic Torsion}\label{anomaly} We introduce the notion of the eta-function leading to the notion of the eta-invariant of an elliptic operator. The eta-invariant was first introduced by Atiyah-Patodi-Singer in [APS] as the boundary correction term in their index formula. \begin{thm}\label{eta-regular-original}[P.B. Gilkey, L. Smith] Let $(K,g^K)$ be a smooth compact oriented Riemannian manifold with boundary $\partial K$. Let $(F,h^F)$ be a Hermitian vector bundle and let the metric structures $(g^K,h^F)$ define an $L^2-$scalar product. Let $$D: C^{\infty}(K,F)\rightarrow C^{\infty}(K,F)$$ be a differential operator of order $\mathrm{\omega}$ such that $\mathrm{\omega} \cdot \textup{rank}F$ is even. Let a boundary value problem $(D,B)$ be strongly elliptic with respect to $\mathbb{C} \backslash \mathbb{R}^*$ and an Agmon angle $\theta \in (-\pi, 0)$. Then we have \begin{enumerate} \item $D_B$ is a discrete Fredholm operator in the Hilbert space $L^2(K,F)$ and its eta-function $$\eta_{\theta}(s,D_B):=\sum\limits_{\textup{Re}(\lambda)>0}m(\lambda)\cdot \lambda_{\theta}^{-s}-\sum\limits_{\textup{Re}(\lambda)<0}m(\lambda)\cdot(-\lambda)_{\theta}^{-s},$$ where $m(\lambda)$ denotes the finite (algebraic) multiplicity of the eigenvalue $\lambda$ , is holomorphic for Re$(s)$ large and extends meromorphically to $\mathbb{C}$ with at most simple poles. \\ \item If $D$ is of order one with the leading symbol $\sigma_D(x,\xi), x \in K, \xi \in T^*_xK$ satisfying $$\sigma_D(x,\xi)^2=|\xi|^2\cdot I,$$ where $I$ is $\textup{rank}F\times \textup{rank}F$ identity matrix, and the boundary condition $B$ is of order zero, then the meromorphic extension of $\eta_{\theta}(s,D_B)$ is regular at $s=0$. \end{enumerate} \end{thm} \ \\ \\[-7mm] The proof of the theorem follows from the results in [GS1] and [GS2] on the eta-function of strongly elliptic boundary value problems. The fact that $\eta_{\theta}(s,D_B)$ is holomorphic for Re$(s)$ sufficiently large is asserted in [GS1, Lemma 2.3 (c)]. The meromorphic continuation with at most isolated simple poles is asserted in [GS1, Theorem 2.7]. \\[3mm] The fact that $s=0$ is a regular point of the eta-function is highly non-trivial and cannot be proved by local arguments. Using homotopy invariance of the residue at zero for the eta-function, P. Gilkey and L. Smith [GS2] reduced the discussion to a certain class of operators with constant coefficients in the collar neighborhood of the boundary and applied the closed double manifold argument. The reduction works for differential operators of order one with 0-th order boundary conditions under the assumption on the leading symbol of the operator as in the second statement of the theorem. The regularity statement of Theorem \ref{eta-regular-original} follows directly from [GS2, Theorem 2.3.5] and [GS2, Lemma 2.3.4]. \begin{remark} The definition of an eta-function, as in Theorem \ref{eta-regular-original} $(i)$, also applies to any operator $D$ with finite spectrum $\{\lambda_1,..,\lambda_n\}$ and finite respective multiplicities $\{m_1,..,m_n\}$. 
For a given Agmon angle $\theta \in [0,2\pi)$ the associated eta-function $$\eta_{\theta}(s,D):=\sum\limits_{\textup{Re}(\lambda)>0}m(\lambda)\cdot \lambda_{\theta}^{-s}-\sum\limits_{\textup{Re}(\lambda)<0}m(\lambda)\cdot(-\lambda)_{\theta}^{-s},$$ is holomorphic for all $s\in \mathbb{C}$, since the sum is finite and the zero-eigenvalue is excluded. \end{remark} \begin{prop}\label{eta-regular} The eta-function $\eta_{\theta}(s,\mathcal{B}_{\textup{even}})$ associated to the even part $\mathcal{B}_{\textup{even}}$ of the odd-signature operator and its Agmon angle $\theta \in (-\pi,0)$, is holomorphic for Re$(s)$ large and extends meromorphically to $\mathbb{C}$ with $s=0$ being a regular point. \end{prop}\ \\ \\[-7mm] The statement of the proposition on the meromorphic extension of the eta-function is a direct consequence of Theorem \ref{eta-regular-original} (i) and Proposition \ref{strongly-elliptic}. The regularity statement follows from Theorem \ref{eta-regular-original} (ii) and an explicit computation of the leading symbol of the odd-signature operator, compare also [GS2, Example 2.2.4]. \\[3mm] Using Proposition \ref{eta-regular} we can define the eta-invariant in the manner of [BK2] for $\mathcal{B}_{\textup{even}}$: \begin{align}\label{eta-BK} \eta(\mathcal{B}_{\textup{even}}):=\frac{1}{2}\left(\eta_{\theta}(s=0,\mathcal{B}_{\textup{even}})+m_+-m_-+m_0\right), \end{align} where $m_{\pm}$ is the number of $\mathcal{B}_{\textup{even}}-$eigenvalues on the positive, respectively the negative part of the imaginary axis and $m_0$ is the dimension of the generalized zero-eigenspace of $\mathcal{B}_{\textup{even}}$. \\[3mm] Implicit in the notation is also the fact, that $\eta(\mathcal{B}_{\textup{even}})$ does not depend on the Agmon angle $\theta \in (-\pi, 0)$. This is due to the fact that, given a different Agmon angle $\theta'\in (-\pi, 0)$, there are by \eqref{spectral-cut} and discreteness of $\mathcal{B}$ only finitely many eigenvalues of $\mathcal{B}_{\textup{even}}$ in the acute angle between $\theta$ and $\theta'$. \\[3mm] Similarly we define the eta-invariants of $\mathcal{B}^{(\lambda, \infty)}_{\textup{even}}$ and $\mathcal{B}^{[0,\lambda]}_{\textup{even}}$ and in particular we get $$\eta(\mathcal{B}_{\textup{even}})=\eta(\mathcal{B}^{(\lambda, \infty)}_{\textup{even}})+ \eta(\mathcal{B}^{[0,\lambda]}_{\textup{even}}).$$ Before we prove the next central result, let us make the following observation. \\[3mm] Consider the imaginary axis $i\mathbb{R}\subset \mathbb{C}$. By \eqref{spectral-cut} there are only finitely many eigenvalues of $\mathcal{B}$ on $i\mathbb{R}$. Further by the discreteness of $\mathcal{B}$ small rotation of the imaginary axis does not hit any further eigenvalue of $\mathcal{B}$ and in particular of $\mathcal{B}^{(\lambda,\infty)}_{\textup{even}}, \lambda \geq 0$. More precisely this means that there exists an $\epsilon > 0$ sufficiently small such that the angle $$\theta:=-\frac{\pi}{2}+\epsilon $$ is an Agmon angle for $\mathcal{B}^{(\lambda,\infty)}_{\textup{even}}$ and the solid angles \begin{align*} L_{(-\pi /2, \theta]}&:=\{z \in \mathbb{C} | z=|z|\cdot e^{i\phi}, \phi \in (-\pi /2, \theta]\}, \\ L_{(\pi /2, \theta+\pi]}&:=\{z \in \mathbb{C} | z=|z|\cdot e^{i\phi}, \phi \in (\pi /2, \theta+\pi]\} \end{align*} do not contain eigenvalues of $\mathcal{B}^{(\lambda,\infty)}_{\textup{even}}$. 
With this observation we can state the following central result: \begin{thm}\label{log-det-gr} Let $\theta \in (-\pi /2 , 0)$ be an Agmon angle for $\mathcal{B}^{(\lambda, \infty)}_{\textup{even}}$ such that there are no eigenvalues of $\mathcal{B}^{(\lambda, \infty)}_{\textup{even}}$ in the solid angles $L_{(-\pi /2, \theta]}$ and $L_{(-\pi /2 , \theta + \pi]}$. Then $2\theta$ is an Agmon angle for $(\mathcal{B}^{(\lambda,\infty)}_{\textup{even}})^2$. Then the graded zeta-function $\zeta_{gr,\theta}(s, \mathcal{B}^{(\lambda, \infty)}_{\textup{even}}), Re(s)\gg0$ extends meromorphically to $\mathbb{C}$ and is regular at $s=0$ with the following derivative at zero: \begin{align*} \left.\frac{d}{ds}\right|_{s=0}\zeta_{gr,\theta}(s,\mathcal{B}^{(\lambda,\infty)}_{\textup{even}}))=\frac{1}{2}\sum_{k=0}^m(-1)^{k+1}\cdot k\cdot \left.\frac{d}{ds}\right|_{s=0}\zeta_{2\theta}(s, \mathcal{B}^2\restriction \widetilde{\mathcal{D}}^k_{(\lambda, \infty)}) + \\ + \frac{i \pi}{2}\sum_{k=0}^m(-1)^k\cdot k\cdot \zeta_{2\theta}(0, \mathcal{B}^2\restriction \widetilde{\mathcal{D}}^k_{(\lambda, \infty)}) + i\pi \eta(\mathcal{B}^{(\lambda, \infty)}_{\textup{even}}). \end{align*} \end{thm} \begin{proof} For $Re(s)\gg0$ the general identities [BK1 (4.10), (4.11)] imply the following relation between holomorphic functions: \begin{align*} \zeta_{gr,\theta}(s,\mathcal{B}^{(\lambda,\infty)}_{\textup{even}}))=\frac{1+e^{-i\pi s}}{2}\left[ \zeta_{2\theta}\left(\frac{s}{2}, \left(\mathcal{B}^{+,(\lambda, \infty)}_{\textup{even}}\right)^2\right)- \zeta_{2\theta}\left(\frac{s}{2}, \left(\mathcal{B}^{-,(\lambda, \infty)}_{\textup{even}}\right)^2\right)\right] + \\ +\frac{1}{2}(1-e^{-i\pi s}) \left[\eta(s, \mathcal{B}^{(\lambda, \infty)}_{\textup{even}})+f(s)\right], \end{align*} where $f(s)$ is a holomorphic function (combination of zeta-functions associated to finite-dimensional operators) with $$f(0)=m_+(\mathcal{B}^{(\lambda, \infty)}_{\textup{even}})-m_-(\mathcal{B}^{(\lambda, \infty)}_{\textup{even}}),$$ where $m_{\pm}(\cdot)$ denotes the number of eigenvalues of the operator in brackets, lying on the positive, respectively the negative part of the imaginary axis. \\[3mm] Put $\mathcal{I}=(\lambda, \infty)$ to simplify notation. Recall \eqref{image2} and show that \begin{align}\label{bijective-two} \widetilde{\nabla}_{\mathcal{I}}: \textup{ker}(\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}})\rightarrow \textup{ker}(\widetilde{\Gamma}_{\mathcal{I}}\widetilde{\nabla}_{\mathcal{I}})= \textup{Im}(\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}}) \end{align} is bijective. Indeed, injectivity is clear by \eqref{kern}. For surjectivity let $x=\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}}v\in \textup{Im}(\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}})$ with (recall \eqref{hilbert-decomposition}) $$v=v'\oplus v'' \in \textup{Im}(\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}}) \oplus \textup{Im}(\widetilde{\Gamma}_{\mathcal{I}}\widetilde{\nabla}_{\mathcal{I}})=\textup{Im}(1-\Pi_{\mathcal{B}^2,[0,\lambda]}).$$ In particular $v''\in \textup{Im}(\widetilde{\Gamma}_{\mathcal{I}}\widetilde{\nabla}_{\mathcal{I}})=\ker \widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}}$ and $v'=\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}} \mathrm{\omega}$ for some $\mathrm{\omega}$. 
Hence we obtain \begin{align*} x=\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}}v=\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}}v'=\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}}\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}}\mathrm{\omega}, \\ \textup{and} \quad \widetilde{\Gamma}_{\mathcal{I}}\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}}\mathrm{\omega}\in \ker\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}}. \end{align*} In other words we have found a preimage of any $x\in \textup{Im}(\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}})$ under $\widetilde{\nabla}_{\mathcal{I}}$. This proves bijectivity of the map in \eqref{bijective-two} and consequently, since $\widetilde{\nabla}_{\mathcal{I}}$ commutes with $\mathcal{B}^{\mathcal{I}}$ and $(\mathcal{B}^{\mathcal{I}})^2$, we obtain in any degree $k=0,..,m$ \begin{align}\label{zwei} \zeta_{2\theta}(s,(\mathcal{B}^{+,\mathcal{I}})^2 \restriction \widetilde{\mathcal{D}}^k)=\zeta_{2\theta}(s,(\mathcal{B}^{-, \mathcal{I}})^2 \restriction \widetilde{\mathcal{D}}^{k+1}). \end{align} Using this relation we compute straightforwardly for $Re(s)$ sufficiently large: \begin{align} \zeta_{2 \theta}(s,(\mathcal{B}^{+,\mathcal{I}}_{\textup{even}})^2)-\zeta_{2 \theta}(s,(\mathcal{B}^{-,\mathcal{I}}_{\textup{even}})^2)=\sum_{k=0}^m (-1)^{k+1} \cdot k\cdot \zeta_{2 \theta}(s,(\mathcal{B}^{\mathcal{I}})^2 \restriction \widetilde{\mathcal{D}}^k). \end{align} We arrive at the following preliminary result for $Re(s)\gg0$ \begin{align}\label{graded-formula} \zeta_{gr,\theta}(s,\mathcal{B}^{\mathcal{I}}_{\textup{even}}))=\frac{1}{2}(1+e^{-i\pi s})\sum_{k=0}^m(-1)^{k+1}\cdot k\cdot \zeta_{2\theta}(s,(\mathcal{B}^{\mathcal{I}})^2\restriction \widetilde{\mathcal{D}}^k) + \\ +\frac{1}{2}(1-e^{-i\pi s}) \left[\eta(s,\mathcal{B}^{\mathcal{I}}_{\textup{even}})+f(s)\right]. \nonumber \end{align} We find with Theorem \ref{Freddy} and Proposition \ref{eta-regular} that the right hand side of the equality above is a meromorphic function on the entire complex plane and is regular at $s=0$. Hence the left hand side of the equality, the graded zeta-function, is meromorphic on $\mathbb{C}$ and regular at $s=0$, as claimed and as anticipated in Definition \ref{graded-determinant}. Computing the derivative at zero, we obtain the statement of the theorem. \end{proof}\ \\ \\[-7mm] As a consequence of the theorem above, we obtain for the element $\rho(\nabla, g^M,h^E)$ defined in Proposition \ref{rho-element} the following relation \begin{align}\label{drei} \rho(\nabla, g^M,h^E)&=e^{\xi_{\lambda}(\nabla, g^M)}e^{-i\pi \xi'_{\lambda}(\nabla, g^M)}e^{-i\pi \eta(\mathcal{B}^{(\lambda, \infty)}_{\textup{even}}(g^M))}\cdot \rho_{[0,\lambda]}, \\ \label{xi1} \xi_{\lambda}(\nabla, g^M)&=\frac{1}{2}\sum_{k=0}^m(-1)^{k}\cdot k\cdot \left. \frac{d}{ds}\right|_{s=0}\zeta_{2\theta}(s,(\mathcal{B}^2\restriction \widetilde{\mathcal{D}}^k_{(\lambda, \infty)})) \\ \label{xi2} \xi'_{\lambda}(\nabla, g^M)&=\frac{1}{2}\sum_{k=0}^m(-1)^{k}\cdot k\cdot \zeta_{2\theta}(s=0,(\mathcal{B}^2\restriction \widetilde{\mathcal{D}}^k_{(\lambda, \infty)})). \end{align} Now we can identify explicitly the metric dependence of $\rho(\nabla, g^M,h^E)$ using the formula \eqref{drei}. \\[3mm] First note that the construction is in fact independent of the choice of a Hermitian metric $h^E$. Indeed, a variation of $h^E$ does not change the odd-signature operator $\mathcal{B}$ as a differential operator.
However it enters a priori the definition of $\mathcal{D} (\mathcal{B})$, since $h^E$ defines the $L^2-$Hilbert space. \\[3mm] Recall that different Hermitian metrics give rise to equivalent $L^2-$norms over compact manifolds. Hence a posteriori the domain $\mathcal{D} (\mathcal{B})$ is indeed independent of the particular choice of $h^E$. \\[3mm] Independence of the choice of a Hermitian metric $h^E$ is essential, since for non-unitary flat vector bundles there is no canonical choice of $h^E$ and the Hermitian metric is fixed arbitrarily. \\[3mm] Consider a smooth family $g^M(t), t\in \mathbb{R}$ of Riemannian metrics on $M$. Denote by $\widetilde{\Gamma}_t$ the corresponding chirality operator in the sense of Definition \ref{chirality} and denote the associated refined torsion (recall \eqref{finite-torsion}) of the complex $(\widetilde{\mathcal{D}}_{t,[0,\lambda]},\widetilde{\nabla}_{t,[0,\lambda]})$ by $\rho_{t,[0,\lambda]}$. \\[3mm] Let $\mathcal{B}(t)=\mathcal{B}(\nabla, g^M(t))$ be the odd-signature operator corresponding to the Riemannian metric $g^M(t)$. Fix $t_0\in \mathbb{R}$ and choose $\lambda \geq 0$ such that there are no eigenvalues of $\mathcal{B}(t_0)^2$ of absolute value $\lambda$. Then there exists $\delta>0$ small enough such that the same holds for the spectrum of $\mathcal{B}(t)^2$ for $|t-t_0|<\delta$. Under this setup we obtain: \begin{prop}\label{anomaly1} Let the family $g^M(t)$ vary only in a compact subset of the interior of $M$. Then $\exp(\xi_{\lambda}(\nabla, g^M(t)))\cdot \rho_{t,[0,\lambda]}$ is independent of $t\in (t_0-\delta,t_0+\delta)$. \end{prop} \begin{proof} The arguments of [BK2, Lemma 9.2] are of local nature and transfer verbatim to the present situation for metric variations in the interior of the manifold. Hence the assertion follows for Riemannian metrics remaining fixed in an open neighborhood of the boundary. \end{proof} \begin{prop}\label{anomaly2} Denote the trivial connection on the trivial line bundle $M\times \mathbb{C}$ by $\nabla_{\textup{trivial}}$. Consider the even part of the associated odd-signature operator (recall Definition \ref{odd-signature}) $$\mathcal{B}_{\textup{trivial}}=\mathcal{B}_{\textup{even}}(\nabla_{\textup{trivial}}).$$ Indicate the metric dependence by $\mathcal{B}_{\textup{trivial}}(t):=\mathcal{B}_{\textup{trivial}}(g^M(t))$. Then $$\eta(\mathcal{B}^{(\lambda,\infty)}_{\textup{even}}(t))-\textup{rank}(E)\eta(\mathcal{B}_{\textup{trivial}}(t))\quad \textup{mod} \, \mathbb{Z}$$ is independent of $t\in (t_0-\delta,t_0+\delta)$. \end{prop} \begin{proof} Indicate the dependence of $\widetilde{\mathcal{D}}^*_{[0,\lambda]}$ on $g^M(t)$ by $$\widetilde{\mathcal{D}}^k_{[0,\lambda]}(t):=\textup{Image}\, \Pi_{\mathcal{B}(t)^2, [0,\lambda]}\cap \widetilde{\mathcal{D}}^k.$$ Note first that by the choice of $\delta >0$ $$\dim \widetilde{\mathcal{D}}^k_{[0,\lambda]}(t)=\textup{const}, \quad t\in (t_0-\delta,t_0+\delta).$$ Since $\mathcal{B}^{[0,\lambda]}_{\textup{even}}(t)$ is finite-dimensional, we infer from the definition of the eta-invariant (cf. [BK2, (9.11)]) \begin{align}\label{konstantin} \eta (\mathcal{B}^{[0,\lambda]}_{\textup{even}}(t))\equiv \frac{1}{2}\dim \widetilde{\mathcal{D}}^k_{[0,\lambda]}(t) \equiv \textup{const} \ \textup{mod} \, \mathbb{Z}, \quad t\in (t_0-\delta,t_0+\delta).
\end{align} By construction $$\eta(\mathcal{B}_{\textup{even}}(t))=\eta(\mathcal{B}^{(\lambda,\infty)}_{\textup{even}}(t))+\eta(\mathcal{B}^{[0,\lambda]}_{\textup{even}}(t)).$$ Hence, in view of \eqref{konstantin}, it suffices (modulo $\mathbb{Z}$) to study the metric dependence of the eta-invariant $\eta(\mathcal{B}_{\textup{even}}(t))$. \\[3mm] View $\mathcal{B}_{\textup{even}}(t)$ as a pair of a differential operator $P_E(t)$ with its boundary conditions $Q_E(t)$. Similarly view $\mathcal{B}_{\textup{trivial}}(t)$ as a pair $(P_{\mathbb{C}}(t), Q_{\mathbb{C}}(t))$. Note that by construction the pair $(P_E(t),Q_E(t))$ is locally isomorphic to $(P_{\mathbb{C}}(t), Q_{\mathbb{C}}(t))\times \mathbf{1}^k$, since the flat connection $\nabla$ is locally trivial in appropriate local trivializations. \\[3mm] Since the variation of the eta-invariants is computed from the local information of the symbols (cf. [GS1, Theorem 2.8, Lemma 2.9]), we find that the difference \begin{align*} \eta(\mathcal{B}_{\textup{even}}(t))-\textup{rank}(E)\eta(\mathcal{B}_{\textup{trivial}}(t))=\\=\eta(P_E(t),Q_E(t))-\textup{rank}(E)\eta(P_{\mathbb{C}}(t), Q_{\mathbb{C}}(t)) \end{align*} is independent of $t\in \mathbb{R}$ modulo $\mathbb{Z}$. The modulo $\mathbb{Z}$ reduction is needed to annihilate discontinuity jumps arising from eigenvalues crossing the imaginary axis. This proves the statement of the proposition. \end{proof} \begin{prop}\label{anomaly3} Let $\mathcal{B}(\nabla_{\textup{trivial}})$ denote the odd-signature operator (Definition \ref{odd-signature}) associated to the trivial line bundle $M\times \mathbb{C}$ with the trivial connection $\nabla_{\textup{trivial}}$. Consider in correspondence to \eqref{xi2} the expression \begin{align*} \xi'(\nabla_{\textup{trivial}}, g^M(t))=\frac{1}{2}\sum_{k=0}^m(-1)^{k}\cdot k\cdot \zeta_{2\theta}(s=0,(\mathcal{B}(\nabla_{\textup{trivial}}, g^M(t))^2\restriction \widetilde{\mathcal{D}}^k). \end{align*} Then $$\xi'_{\lambda}(\nabla,g^M(t))-\textup{rank}(E)\cdot \xi'(\nabla_{\textup{trivial}},g^M(t))\quad \textup{mod}\ \mathbb{Z}$$ is independent of $t\in \mathbb{R}$. \end{prop} \begin{proof} We show first that modulo $\mathbb{Z}$ it suffices to study the metric dependence of \begin{align*} \xi'(\nabla, g^M(t)):=\frac{1}{2}\sum_{k=0}^m(-1)^{k}\cdot k\cdot \zeta_{2\theta}(s=0,(\mathcal{B}(\nabla, g^M(t))^2\restriction \widetilde{\mathcal{D}}^k). \end{align*} Indeed, by construction we have $$\xi'(\nabla, g^M(t))=\xi'_{\lambda}(\nabla, g^M(t))+\frac{1}{2}\sum_{k=0}^m(-1)^k\cdot k\cdot \dim \widetilde{\mathcal{D}}^k_{(0,\lambda]}(t).$$ Anticipating the auxiliary result of Lemma \ref{modulo-2z} (iii) below, we obtain $$\xi'(\nabla, g^M(t))\equiv \xi'_{\lambda}(\nabla, g^M(t))\quad \textup{mod}\ \mathbb{Z}.$$ Recall that $\mathcal{B}(\nabla_{\textup{trivial}},g^M)\times \mathbf{1}^{\textup{rk}E}$ and $\mathcal{B}(\nabla, g^M)$ are locally isomorphic, as already encountered in the proof of Proposition \ref{anomaly2}. Now the statement of the proposition follows from the fact that the value of a zeta function at zero is given, modulo $\mathbb{Z}$ in order to avoid $\dim \ker \mathcal{B}(t)\in \mathbb{Z}$, by integrands of local invariants of the operator and its boundary conditions. \end{proof} \begin{lemma}\label{modulo-2z} Let $\mathcal{I}\subset \mathbb{R}$ denote any bounded interval.
Then \begin{enumerate} \item $\frac{1}{2}\sum_{k=0}^m(-1)^{k+1}\cdot k\cdot \dim \widetilde{\mathcal{D}}^k_{\mathcal{I}}\equiv \frac{\dim M}{2}\dim \widetilde{\mathcal{D}}^{\textup{even}}_{\mathcal{I}}\ \textup{mod}\ 2\mathbb{Z}.$ \item If $0\notin \mathcal{I}$, then $\dim \widetilde{\mathcal{D}}^{\textup{even}}_{\mathcal{I}}\equiv 0 \ \textup{mod}\ 2\mathbb{Z},$ \item If $0\notin \mathcal{I}$, then $\frac{1}{2}\sum_{k=0}^m(-1)^{k+1}\cdot k\cdot \dim \widetilde{\mathcal{D}}^k_{\mathcal{I}}\equiv 0 \ \textup{mod}\ \mathbb{Z}.$ \end{enumerate} \end{lemma} \begin{proof} Note first the following relation $$\mathcal{B}^2_k=\widetilde{\Gamma} \circ \mathcal{B}^2_{m-k}\circ \widetilde{\Gamma}.$$ Hence with $r=(m+1)/2$ we obtain: \begin{align}\label{modulo-z} \frac{1}{2}\sum_{k=0}^m(-1)^{k+1}\cdot k\cdot \dim \widetilde{\mathcal{D}}^k_{\mathcal{I}}= \frac{1}{2}\sum_{k=0}^{r-1}(m-4k)\cdot \dim \widetilde{\mathcal{D}}^{2k}_{\mathcal{I}}=\\ =\frac{m}{2}\dim \widetilde{\mathcal{D}}^{\textup{even}}_{\mathcal{I}}-2\sum_{k=0}^{r-1}k \cdot \dim \widetilde{\mathcal{D}}^{2k}_{\mathcal{I}}. \end{align} This proves the first statement. For the second statement assume $0 \notin \mathcal{I}$ till the end of the proof. Consider the operators \begin{align}\label{stern1} \mathcal{B}^{+,\mathcal{I}}_k=\widetilde{\Gamma}_{\mathcal{I}}\widetilde{\nabla}_{\mathcal{I}}:\widetilde{\mathcal{D}}^k_{\mathcal{I}}\cap \ker (\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}})\rightarrow \widetilde{\mathcal{D}}^{m-k-1}_{\mathcal{I}}\cap \ker (\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}}), \\ \label{stern2} \mathcal{B}^{-,\mathcal{I}}_k=\widetilde{\nabla}_{\mathcal{I}}\widetilde{\Gamma}_{\mathcal{I}}:\widetilde{\mathcal{D}}^k_{\mathcal{I}}\cap \ker (\widetilde{\Gamma}_{\mathcal{I}}\widetilde{\nabla}_{\mathcal{I}})\rightarrow \widetilde{\mathcal{D}}^{m-k+1}_{\mathcal{I}}\cap \ker (\widetilde{\Gamma}_{\mathcal{I}}\widetilde{\nabla}_{\mathcal{I}}). \end{align} Since $0 \notin \mathcal{I}$, the maps $\mathcal{B}^{\pm, \mathcal{I}}_k$ are isomorphisms by bijectivity of the map in \eqref{bijective-two}. Furthermore they commute with $(\mathcal{B}^{\pm, \mathcal{I}})^2$ in the following way \begin{align}\label{BB-BB} \mathcal{B}^{\pm, \mathcal{I}}_k \circ [(\mathcal{B}^{\pm, \mathcal{I}})^2\restriction \widetilde{\mathcal{D}}^k]=[(\mathcal{B}^{\pm, \mathcal{I}})^2\restriction \widetilde{\mathcal{D}}^{m-k\mp 1}]\circ \mathcal{B}^{\pm, \mathcal{I}}_k. \end{align} Hence we obtain with $\widetilde{\mathcal{D}}^{\pm,k}_{\mathcal{I}}$ denoting the span of generalized eigenforms of $(\mathcal{B}^{\pm,\mathcal{I}})^2 \restriction \widetilde{\mathcal{D}}^k$ the following relations \begin{align*} \dim \widetilde{\mathcal{D}}^{+,k}_{\mathcal{I}}=\dim \widetilde{\mathcal{D}}^{+,m-k-1}_{\mathcal{I}}, \\ \dim \widetilde{\mathcal{D}}^{-,k}_{\mathcal{I}}=\dim \widetilde{\mathcal{D}}^{-,m-k+1}_{\mathcal{I}}. 
\end{align*} Due to $\dim \widetilde{\mathcal{D}}^{\textup{even}}_{\mathcal{I}}=\dim \widetilde{\mathcal{D}}^{+,\textup{even}}_{\mathcal{I}}+\dim \widetilde{\mathcal{D}}^{-,\textup{even}}_{\mathcal{I}}$ this implies (recall $M$ is odd-dimensional) \begin{align}\label{stern3} \dim \widetilde{\mathcal{D}}^{\textup{even}}_{\mathcal{I}}\equiv \dim \widetilde{\mathcal{D}}^{+,2p}_{\mathcal{I}} \ \textup{mod} \ 2\mathbb{Z}, \textup{if} \ \dim M=4p+1, \\ \label{stern4} \dim \widetilde{\mathcal{D}}^{\textup{even}}_{\mathcal{I}}\equiv \dim \widetilde{\mathcal{D}}^{-,2p}_{\mathcal{I}} \ \textup{mod} \ 2\mathbb{Z}, \textup{if} \ \dim M=4p-1. \end{align} Finally recall the explicit form of $(\mathcal{B}^{\pm})^2$: \begin{align*} (\mathcal{B}^+)^2=\left( \begin{array}{cc} \Gamma \nabla_{\max} \Gamma \nabla_{\min} & 0 \\ 0 & \Gamma \nabla_{\min} \Gamma \nabla_{\max} \end{array}\right)=:\left(\begin{array}{cc} D^+_1 & 0 \\ 0 & D^+_2 \end{array}\right), \\ (\mathcal{B}^-)^2=\left( \begin{array}{cc} \nabla_{\min} \Gamma \nabla_{\max} \Gamma & 0 \\ 0 & \nabla_{\max} \Gamma \nabla_{\min} \Gamma \end{array}\right)=:\left(\begin{array}{cc} D^-_1 & 0 \\ 0 & D^-_2 \end{array}\right). \end{align*} Moreover we put $$(\mathcal{B}^{\pm, \mathcal{I}})^2\restriction \widetilde{\mathcal{D}}^k= D^{\pm , \mathcal{I}}_{1,k}\oplus D^{\pm , \mathcal{I}}_{2,k}.$$ Note the following relations \begin{align*} &(\Gamma \nabla_{\min})\circ D^+_1=D^+_2\circ (\Gamma \nabla_{\min}), \\ &D^+_1\circ (\Gamma \nabla_{\max})=(\Gamma \nabla_{\max})\circ D^+_2; \\ &\hspace{30mm} (\nabla_{\max} \Gamma)\circ D^-_1=D^-_2\circ (\nabla_{\max} \Gamma), \\ &\hspace{30mm} D^-_1\circ (\nabla_{\min} \Gamma)= (\nabla_{\min} \Gamma)\circ D^-_2. \end{align*} Due to $0 \notin \mathcal{I}$ these relations imply, similarly to \eqref{BB-BB}, spectral equivalence of $D^{\pm, \mathcal{I}}_{1,k}$ and $D^{\pm, \mathcal{I}}_{2,k}$ in the middle degree $k=2p$ for $\dim M=4p \pm 1$, respectively. This finally yields the desired relations \begin{align*} \dim \widetilde{\mathcal{D}}^{\textup{even}}_{\mathcal{I}}\equiv \dim \widetilde{\mathcal{D}}^{+,2p}_{\mathcal{I}}\equiv 0 \ \textup{mod} \ 2\mathbb{Z}, \textup{if} \ \dim M=4p+1, \\ \dim \widetilde{\mathcal{D}}^{\textup{even}}_{\mathcal{I}}\equiv \dim \widetilde{\mathcal{D}}^{-,2p}_{\mathcal{I}}\equiv 0 \ \textup{mod} \ 2\mathbb{Z}, \textup{if} \ \dim M=4p-1. \end{align*} \end{proof}\ \\ \\[-7mm] Propositions \ref{anomaly1}, \ref{anomaly2} and \ref{anomaly3} determine together the metric anomaly of $\rho(\nabla, g^M, h^E)$ up to a sign and we deduce the following central corollary. \begin{cor}\label{RAT-sign} Let $M$ be an odd-dimensional oriented compact Riemannian manifold. Let $(E, \nabla, h^E)$ be a flat complex vector bundle over $M$. Denote by $\nabla_{\textup{trivial}}$ the trivial connection on $M\times \mathbb{C}$ and let $\mathcal{B}_{\textup{trivial}}$ denote the even part of the associated odd-signature operator. Then \begin{align*} \rho_{\textup{an}}(\nabla):=\rho(\nabla, g^M, h^E)\cdot \exp\left[i\pi \, \textup{rk}(E)(\eta(\mathcal{B}_{\textup{trivial}}(g^M)) + \xi'(\nabla_{\textup{trivial}}, g^M))\right] \end{align*} is modulo sign independent of the choice of $g^M$ in the interior of $M$. \end{cor}\ \\ \\[-7mm] In view of the corollary above we can now define the "refined analytic torsion". It will be a differential invariant in the sense, that even though defined by geometric data in form of the metric structures, it is shown to be independent of their form in the interior of the manifold. 
\begin{defn}\label{rho-def} Let $M$ be an odd-dimensional oriented Riemannian manifold. Let $(E,\nabla)$ be a flat complex vector bundle over $M$. Then the refined analytic torsion is defined as the equivalence class of $\rho_{\textup{an}}(\nabla)$ modulo multiplication by $\exp [i \pi]$: $$\rho_{\textup{an}}(M, E):=\rho_{\textup{an}}(\nabla) /_{e^{i\pi}}.$$ \end{defn} \ \\ \\[-7mm] Note that the sign indeterminacy is also present in the original construction by Braverman and Kappeler, see [BK2, Remark 9.9 and Remark 9.10]. In the presentation below, we refer to the representative $\rho_{\textup{an}}(\nabla)$ of the class $\rho_{\textup{an}}(M, E)$ as refined analytic torsion, as well. \section{Ray-Singer norm of Refined analytic torsion}\label{RS} \ \\[-3mm] Recall first the construction of the Ray-Singer torsion as a norm on the determinant line bundle for compact oriented Riemannian manifolds. Let $(M,g^M)$ and $(E,\nabla,h^E)$ be as in Subsection \ref{explicit-unitary}. \\[3mm] Let $\triangle_{\textup{rel}}$ be the Laplacian associated to the Fredholm complex $(\mathcal{D}_{\min},\nabla_{\min})$ defined at the beginning of Section \ref{explicit-unitary}. As in \eqref{decomposition} in case of the squared odd-signature operator $\mathcal{B}^2$, it induces a spectral decomposition into a direct sum of subcomplexes for any $\lambda \geq 0$. $$(\mathcal{D}_{\min}, \nabla_{\min})=(\mathcal{D}_{\min}^{[0,\lambda]}, \nabla_{\min}^{[0,\lambda]})\oplus (\mathcal{D}_{\min}^{(\lambda, \infty)}, \nabla_{\min}^{(\lambda, \infty)}).$$ The scalar product on $\mathcal{D}_{\min}^{[0,\lambda]}$ induced by $g^M$ and $h^E$, induces a norm on the determinant line $\textup{Det}(\mathcal{D}_{\min}^{[0,\lambda]}, \nabla_{\min}^{[0,\lambda]})$ (we use the notation of determinant lines of finite dimensional complexes in [BK2, Section 1.1]). There is a canonical isomorphism $$\phi_{\lambda}:\textup{Det}(\mathcal{D}_{\min}^{[0,\lambda]}, \nabla_{\min}^{[0,\lambda]})\to \textup{Det}H^*(\mathcal{D}_{\min}, \nabla_{\min}),$$ induced by the Hodge-decomposition in finite-dimensional complexes. Choose on $\textup{Det}H^*(\mathcal{D}_{\min}, \nabla_{\min})$ the norm $\|\cdot\|^{\textup{rel}}_{\lambda}$ such that $\phi_{\lambda}$ becomes an isometry. Further denote by $T^{RS}_{(\lambda, \infty)}(\nabla_{\min})$ the scalar analytic torsion associated to the complex $(\mathcal{D}_{\min}^{(\lambda, \infty)}, \nabla_{\min}^{(\lambda, \infty)})$: $$T^{RS}_{(\lambda, \infty)}(\nabla_{\min}):=\exp \left(\frac{1}{2}\sum_{k=1}^m(-1)^{k+1}\cdot k\cdot \zeta'(s=0, \triangle^{(\lambda, \infty)}_{k, \textup{rel}})\right),$$ where $\triangle^{(\lambda, \infty)}_{\textup{rel}}$ is the Laplacian associated to the complex $(\mathcal{D}_{\min}^{(\lambda, \infty)}, \nabla_{\min}^{(\lambda, \infty)})$. Note the difference to the sign convention of [RS]. However we are consistent with [BK2]. \\[3mm]The Ray-Singer norm on $\textup{Det}H^*(\mathcal{D}_{\min}, \nabla_{\min})$ is then defined by \begin{align}\label{norm-rel} \|\cdot\|^{RS}_{\textup{Det}H^*(\mathcal{D}_{\min}, \nabla_{\min})}:=\|\cdot\|^{\textup{rel}}_{\lambda}\cdot T^{RS}_{(\lambda, \infty)}(\nabla_{\min}). \end{align} With a completely analogous construction we obtain the Ray-Singer norm on the determinant line $\textup{Det}H^*(\mathcal{D}_{\max}, \nabla_{\max})$ \begin{align}\label{norm-abs} \|\cdot\|^{RS}_{\textup{Det}H^*(\mathcal{D}_{\max}, \nabla_{\max})}:=\|\cdot\|^{\textup{abs}}_{\lambda}\cdot T^{RS}_{(\lambda, \infty)}(\nabla_{\max}). 
\end{align} Both constructions turn out to be independent of the choice of $\lambda \geq 0$, which follows from arguments analogous to those in the proof of Proposition \ref{rho-element}. In fact we get for $0 \leq \lambda < \mu$: \begin{align*} \|\cdot \|^{\textup{rel/abs}}_{\mu}=\|\cdot \|^{\textup{rel/abs}}_{\lambda} \cdot T^{RS}_{(\lambda, \mu ]}(\nabla_{\textup{min/max}}), \end{align*} which implies that the Ray-Singer norms are well-defined. Furthermore by the arguments in [Mu, Theorem 2.6] the norms do not depend on the metric structures in the interior of the manifold. \begin{remark} Note that the Ray-Singer analytic torsion considered in [V] and [L\"{u}] differs from our setup in the sign convention and by the absence of factor $1/2$. \end{remark} \ \\ \\[-7mm] We can apply the same construction to the Laplacian of the complex $(\widetilde{\mathcal{D}}, \widetilde{\nabla})$ introduced in Definition \ref{domain} $$(\widetilde{\mathcal{D}}, \widetilde{\nabla})=(\mathcal{D}_{\min}, \nabla_{\min})\oplus (\mathcal{D}_{\max}, \nabla_{\max}).$$ Similarly we obtain \begin{align}\label{norm} \|\cdot\|^{RS}_{\textup{Det}H^*(\widetilde{\mathcal{D}}, \widetilde{\nabla})}:=\|\cdot\|_{\lambda}\cdot T^{RS}_{(\lambda, \infty)}(\widetilde{\nabla}). \end{align} This "doubled" Ray-Singer norm is naturally related to the previous two norms in \eqref{norm-rel} and \eqref{norm-abs}. There is a canonical "fusion isomorphism", cf. [BK2, (2.18)] for general complexes of finite dimensional vector spaces \begin{align}\nonumber \mu: \textup{Det}H^*(\mathcal{D}_{\min}, \nabla_{\min})\oplus \textup{Det}H^*(\mathcal{D}_{\max}, \nabla_{\max}) \to \textup{Det}H^*(\widetilde{\mathcal{D}}, \widetilde{\nabla}), \\ \label{fusion}\textup{such that}\ \|\mu(h_1\otimes h_2)\|_{\lambda}=\|h_1\|^{\textup{rel}}_{\lambda}\cdot \|h_2\|^{\textup{abs}}_{\lambda}, \end{align} where we recall $(\widetilde{\mathcal{D}}, \widetilde{\nabla})=(\mathcal{D}_{\min}, \nabla_{\min})\oplus (\mathcal{D}_{\max}, \nabla_{\max})$ by definition. Further we have by the definition of $(\widetilde{\mathcal{D}}, \widetilde{\nabla})$ following relation between the scalar analytic torsions: \begin{align}\label{scalar} T^{RS}_{(\lambda, \infty)}(\widetilde{\nabla})=T^{RS}_{(\lambda, \infty)}(\nabla_{\min})\cdot T^{RS}_{(\lambda, \infty)}(\nabla_{\max}). \end{align} Combining \eqref{fusion} and \eqref{scalar} we end up with a relation between norms \begin{align} \|\mu(h_1\otimes h_2)\|^{RS}_{\textup{Det}H^*(\widetilde{\mathcal{D}}, \widetilde{\nabla})}=\|h_1\|^{RS}_{\textup{Det}H^*(\mathcal{D}_{\min}, \nabla_{\min})}\cdot \|h_2\|^{RS}_{\textup{Det}H^*(\mathcal{D}_{\max}, \nabla_{\max})}. \end{align} The next theorem provides a motivation for viewing $\rho_{\textup{an}}(\nabla)$ as a refinement of the Ray-Singer torsion. \begin{thm}\label{rho-norm} Let $M$ be a smooth compact odd-dimensional oriented Riemannian manifold. Let $(E,\nabla, h^E)$ be a flat complex vector bundle over $M$ with a flat Hermitian metric $h^E$. 
Then $$\|\rho_{\textup{an}}(\nabla)\|^{RS}_{\textup{Det}H^*(\widetilde{\mathcal{D}}, \widetilde{\nabla})}=1.$$ \end{thm} \begin{proof} Recall from the assertion of Theorem \ref{log-det-gr} \begin{align*} \det\nolimits_{gr} (\mathcal{B}^{(\lambda, \infty)}_{\textup{even}})=e^{\xi_{\lambda}(\nabla, g^M)}\cdot e^{-i\pi \xi'_{\lambda}(\nabla,g^M)}\cdot e^{-i\pi\eta(\mathcal{B}_{\textup{even}})}, \end{align*} Flatness of $h^E$ implies by construction that $\mathcal{B}^2=\triangle_{\textup{rel}}\oplus \triangle_{\textup{abs}}$ and hence \begin{align*} \xi_{\lambda}(\nabla, g^M)=-\log T^{RS}_{(\lambda, \infty)}(\widetilde{\nabla}). \end{align*} Further $\mathcal{B}_{\textup{even}}$ is self-adjoint and thus has a real spectrum. Hence $\eta(\mathcal{B}_{\textup{even}})$ and $\xi'_{\lambda}(\nabla,g^M)$ are real-valued, as well. Thus we derive \begin{align}\label{vier} \left|\det\nolimits_{gr} (\mathcal{B}^{(\lambda, \infty)}_{\textup{even}})\right|=\frac{1}{T^{RS}_{(\lambda, \infty)}(\widetilde{\nabla})}. \end{align} Furthermore we know from [BK2, Lemma 4.5], which is a general result for complexes of finite-dimensional vector spaces, \begin{align}\label{5} \|\rho_{[0,\lambda]}\|_{\lambda}=1. \end{align} Now the assertion follows by combining the definition of the refined analytic torsion with \eqref{vier}, \eqref{5} and the fact that the additional terms annihilating the metric anomaly are all of norm one. In fact we have: \begin{align*} \|\rho_{\textup{an}}(\nabla)\|^{RS}_{\textup{Det}H^*(\widetilde{\mathcal{D}}, \widetilde{\nabla})}=\left|\det\nolimits_{gr} (\mathcal{B}^{(\lambda, \infty)}_{\textup{even}})\right| \cdot T^{RS}_{(\lambda, \infty)}(\widetilde{\nabla}) \cdot \|\rho_{[0,\lambda]}\|_{\lambda} = 1. \end{align*} \end{proof}\ \\ \\[-7mm] If the Hermitian metric is not flat, the situation becomes harder. In the setup of closed manifolds M. Braverman and T. Kappeler performed a deformation procedure in [BK2, Section 11] and proved in this way the relation between the Ray-Singer norm and the refined analytic torsion in [BK2, Theorem 11.3]. \\[3mm] Unfortunately the deformation argument is not local and the arguments in [BK2] do not apply in the setup of manifolds with boundary. Nevertheless we can derive appropriate result by relating our discussion to the closed double manifold. \\[3mm] Assume the metric structures $(g^M, h^E)$ to be product near the boundary $\partial M$. The issues related to the product structures are discussed in detail in [BLZ, Section 2]. More precisely, we identify using the inward geodesic flow a collar neighborhood $U\subset M$ of the boundary $\partial M$ diffeomorphically with $[0,\epsilon)\times \partial M, \epsilon > 0$. Explicitly we have the diffeomorphism \begin{align*} \phi^{-1}:[0,\epsilon)\times \partial M &\rightarrow U, \\ (t,p)& \mapsto \gamma_p(t), \end{align*} where $\gamma_p$ is the geodesic flow starting at $p \in \partial M$ and $\gamma_p(t)$ is the geodesics from $p$ of length $t \in [0,\epsilon)$. The metric $g^M$ is product near the boundary, if over $U$ it is given under the diffeomorphism $\phi: U \to [0,\epsilon)\times \partial M$ by \begin{align} \phi_*g^M|_U=dx^2\oplus g^M|_{\partial M}. \end{align} The diffeomorphism $U \cong [0,\epsilon)\times \partial M$ shall be covered by a bundle isomorphism $\widetilde{\phi}: E|_U \to [0,\epsilon)\times E|_{\partial M}$. The fiber metric $h^E$ is product near the boundary, if it is preserved by the bundle isomorphism, i.e. 
\begin{align} \widetilde{\phi}_*h^E|_{\{x\}\times \partial M}=h^E|_{\partial M}. \end{align} The assumption of product structures guarantees that the closed double manifold $$\mathbb{M}=M\cup_{\partial M}M$$ is a smooth closed Riemannian manifold and the Hermitian vector bundle $(E,h^E)$ extends to a smooth Hermitian vector bundle $(\mathbb{E},h^{\mathbb{E}})$ over the manifold $\mathbb{M}$. \\[3mm] Moreover we assume the flat connection $\nabla$ on $E$ to be in \emph{temporal gauge}. The precise definition of a connection in temporal gauge and the proof of the fact that each flat connection is gauge-equivalent to a flat connection in temporal gauge, are provided in [BV4, ]. \\[3mm] The assumption on $\nabla$ to be a flat connection in temporal gauge is required in the present context to guarantee that $\nabla$ extends to a smooth flat connection $\mathbb{D}$ on $\mathbb{E}$, with $$\mathbb{D}|_{M}=\nabla.$$ \begin{thm}\label{double} Let $(M^m,g^M)$ be an odd-dimensional oriented and compact smooth Riemannian manifold with boundary $\partial M$. Let $(E,\nabla,h^E)$ be a flat Hermitian vector bundle with the Hermitian metric $h^E$, not necessarily flat. \\[3mm] Assume the metric structures $(g^M,h^E)$ to be product and the flat connection $\nabla$ to be in temporal gauge near the boundary $\partial M$. Then $$ \|\rho_{\textup{an}}(\nabla)\|^{RS}_{\det H^*(\widetilde{\mathcal{D}},\widetilde{\nabla})}=\textup{exp}[\pi \textup{Im}\, \eta (\mathcal{B}_{\textup{even}}(g^M))].$$ \end{thm} \begin{proof} By assumption we obtain a closed Riemannian double manifold $(\mathbb{M},g^{\mathbb{M}})$ and a flat Hermitian vector bundle $(\mathbb{E}, \mathbb{D}, h^{\mathbb{E}})$ over $\mathbb{M}$ with a flat Hermitian metric $h^{\mathbb{E}}$. Denote by $(\mathcal{D}, \mathbb{D})$ the unique boundary conditions (see [BL1]) of the twisted de Rham complex $(\Omega^*(\mathbb{M}, \mathbb{E}),\mathbb{D})$. Denote the closure of $\Omega^*(\mathbb{M}, \mathbb{E})$ with respect to the $L^2-$scalar product defined by $g^{\mathbb{M}}$ and $h^{\mathbb{E}}$, by $L^2_*(\mathbb{M}, \mathbb{E})$. \\[3mm] The Riemannian metric $g^{\mathbb{M}}$ gives rise to the Hodge star operator $*$ and we set $$\mathbb{G}:=i^r(-1)^{\frac{k(k+1)}{2}}*:\Omega^k(\mathbb{M}, \mathbb{E})\rightarrow \Omega^{k-1}(\mathbb{M}, \mathbb{E}), \quad r:=(\dim M +1)/2$$ which extends to a self-adjoint involution on $L^2_*(\mathbb{M}, \mathbb{E})$. We define the odd signature operator $\mathbb{B}$ of the Hilbert complex $(\mathcal{D}, \mathbb{D})$: $$\mathbb{B}:=\mathbb{G}\mathbb{D}+\mathbb{D}\mathbb{G}.$$ This is precisely the odd-signature operator associated to the closed manifold $\mathbb{M}$, as used in the construction of [BK1, BK2]. \\[3mm] Note that we now have two triples: the triple $(\mathbb{D}, \mathbb{G}, \mathbb{B})$ associated to the closed manifold $\mathbb{M}$ and the triple $(\widetilde{\nabla}, \widetilde{\Gamma}, \mathcal{B})$ associated to $(M, \partial M)$, as defined in Subsection \ref{explicit-unitary}. \\[3mm] Consider now the diffeomorphic involution on the closed double $$\mathrm{\alpha}: \mathbb{M}\rightarrow \mathbb{M},$$ interchanging the two copies of $M$. It gives rise to an isomorphism of Hilbert complexes $$\mathrm{\alpha}^*: (\mathcal{D}, \mathbb{D})\rightarrow (\mathcal{D}, \mathbb{D}),$$ which is an involution as well. 
We get a decomposition of $(\mathcal{D} , \mathbb{D})$ into the $(\pm 1)$-eigenspaces of $\mathrm{\alpha}^*$, which form subcomplexes of the total complex: \begin{align}\label{involution-decomp} (\mathcal{D} , \mathbb{D})=(\mathcal{D}^+ , \mathbb{D}^+)\oplus (\mathcal{D}^- , \mathbb{D}^-), \end{align} where the upper-indices $\pm$ refer to the $(\pm 1)$-eigenspaces of $\mathrm{\alpha}^*$, respectively. \\[3mm] The central property of the decomposition, by similar arguments as in [BL1, Theorem 4.1], lies in the following observation \begin{align} \mathcal{D}^+|_M=\mathcal{D}_{\max}, \quad \mathcal{D}^-|_M=\mathcal{D}_{\min}. \end{align} By the symmetry of the elements in $\mathcal{D}^{\pm}$ we obtain the following natural isomorphism of complexes: \begin{align*} \Phi:(\mathcal{D} , \mathbb{D})=(\mathcal{D}^+ , \mathbb{D}^+)\oplus (\mathcal{D}^- , \mathbb{D}^-)&\rightarrow (\mathcal{D}_{\textup{max}}, \nabla_{\max})\oplus (\mathcal{D}_{\textup{min}}, \nabla_{\min}), \\ \mathrm{\omega}=\mathrm{\omega}^+\oplus\mathrm{\omega}^-&\mapsto 2\mathrm{\omega}^+|_M \oplus 2\mathrm{\omega}^-|_M, \end{align*} which extends to an isometry with respect to the natural $L^2-$structures. Using the relations \begin{align}\label{G-double} \Phi\circ \mathbb{D}\circ \Phi^{-1}=\widetilde{\nabla}, \quad \Phi\circ \mathbb{G}\circ \Phi^{-1}=\widetilde{\Gamma}, \end{align} we obtain with $\Delta$ and $\widetilde{\triangle}$, denoting respectively the Laplacians of the complexes $(\mathcal{D}, \mathbb{D})$ and $(\widetilde{\mathcal{D}}, \widetilde{\nabla})\equiv (\mathcal{D}_{\min}, \nabla_{\min})\oplus (\mathcal{D}_{\max}, \nabla_{\max})$: \begin{align*} \Phi \mathcal{D} (\mathbb{B})=\mathcal{D} (\mathcal{B}), \quad \Phi \circ \mathbb{B} \circ \Phi^{-1}=\mathcal{B}, \\ \Phi \mathcal{D} (\Delta)=\mathcal{D} (\widetilde{\triangle}), \quad \Phi \circ \Delta \circ \Phi^{-1}=\widetilde{\triangle}. \end{align*} Hence the odd-signature operators $\mathbb{B}, \mathcal{B}$ as well as the Laplacians $\Delta, \widetilde{\triangle}$ are spectrally equivalent. Consider the spectral projections $\Pi_{\mathbb{B}^2,[0,\lambda]}$ and $\Pi_{\mathcal{B}^2,[0,\lambda]}, \lambda \geq 0$ of $\mathbb{B}$ and $\mathcal{B}$ respectively, associated to eigenvalues of absolute value in $[0,\lambda]$. By the spectral equivalence $\mathbb{B}$ and $\mathcal{B}$ we find $$\Phi \circ \Pi_{\mathbb{B}^2,[0,\lambda]}=\Pi_{\mathcal{B}^2,[0,\lambda]}\circ \Phi.$$ Hence the isomorphism $\Phi$ reduces to an isomorphism of finite-dimensional complexes: \begin{align*} \Phi_{\lambda}:(&\mathcal{D}_{[0,\lambda]}, \mathbb{D}_{[0,\lambda]})\xrightarrow{\sim} (\widetilde{\mathcal{D}}_{[0,\lambda]}, \widetilde{\nabla}_{[0,\lambda]}), \\ \textup{where} \quad &\mathcal{D}_{[0,\lambda]}:=\mathcal{D} \cap \textup{Image}\Pi_{\mathbb{B}^2,[0,\lambda]}, \\ &\widetilde{\mathcal{D}}_{[0,\lambda]}:=\widetilde{\mathcal{D}} \cap \textup{Image}\Pi_{\mathcal{B}^2,[0,\lambda]}. \end{align*} Moreover $\Phi_{\lambda}$ induces an isometric identification of the corresponding determinant lines, which we denote again by $\Phi_{\lambda}$, by a minor abuse of notation $$\Phi_{\lambda}:\det (\mathcal{D}_{[0,\lambda]}, \mathbb{D}_{[0,\lambda]})\xrightarrow{\sim} \det (\widetilde{\mathcal{D}}_{[0,\lambda]}, \widetilde{\nabla}_{[0,\lambda]}),$$ where we use the notation for determinant lines of finite-dimensional complexes in [BK2, Section 1.1]. 
By Corollary \ref{cohomology} we have the canonical identifications of determinant lines \begin{align}\label{di1} \det (\mathcal{D}_{[0,\lambda]}, \mathbb{D}_{[0,\lambda]})\cong &\det H^*(\mathcal{D} , \mathbb{D}), \\ \label{di2} \det (\widetilde{\mathcal{D}}_{[0,\lambda]}, \widetilde{\nabla}_{[0,\lambda]})\cong &\det H^*(\widetilde{\mathcal{D}}, \widetilde{\nabla}), \end{align} The determinant lines on the left hand side of both identifications carry the natural $L^2-$Hilbert structure. Denote the norms on $\det H^*(\mathcal{D} , \mathbb{D})$ and $\det H^*(\widetilde{\mathcal{D}}, \widetilde{\nabla})$ which turn both identifications into isometries, by $\|\cdot \|_{\lambda}$ and $\|\cdot \|_{\lambda}^{\sim}$, respectively. Then we can view $\Phi_{\lambda}$ as $$\Phi_{\lambda}:\det H^*(\mathcal{D} , \mathbb{D})\xrightarrow{\sim} \det H^*(\widetilde{\mathcal{D}}, \widetilde{\nabla}),$$ isometric with respect to the Hilbert structures induced by $\|\cdot \|_{\lambda}$ and $\|\cdot \|_{\lambda}^{\sim}$. \\[3mm] Finally, consider the refined torsion elements (not the refined analytic torsion) of the determinant lines, as defined in [BK2, Section 1.1], see also \eqref{finite-torsion} \begin{align*} \rho^{\mathbb{G}}_{[0,\lambda]}\in \det (\mathcal{D}_{[0,\lambda]}, \mathbb{D}_{[0,\lambda]})\cong \det H^*(\mathcal{D}, \mathbb{D}), \\ \rho^{\widetilde{\Gamma}}_{[0,\lambda]}\in \det (\widetilde{\mathcal{D}}_{[0,\lambda]}, \widetilde{\nabla}_{[0,\lambda]}) \cong \det H^*(\widetilde{\mathcal{D}}, \widetilde{\nabla}). \end{align*} We infer from \eqref{G-double} the following relation: \begin{align*} \Phi_{\lambda}\left( \rho^{\mathbb{G}}_{[0,\lambda]} \right) = \rho^{\widetilde{\Gamma}}_{[0,\lambda]}, \quad \textup{hence:} \ \| \rho^{\mathbb{G}}_{[0,\lambda]} \|_{\lambda} = \| \rho^{\widetilde{\Gamma}}_{[0,\lambda]} \|_{\lambda}^{\sim}. \end{align*} Together with spectral equivalence of $\Delta$ and $\widetilde{\triangle}$, as well as of $\mathbb{B}$ and $\mathcal{B}$, with similar statements for constructions on trivial line bundles $M\times \mathbb{C}$ and $\mathbb{M}\times \mathbb{C}$, we finally obtain \begin{align} \|\rho_{\textup{an}}(\mathbb{D})\|^{RS}_{\det H^*(\mathcal{D}, \mathbb{D})}= \|\rho_{\textup{an}}(\nabla)\|^{RS}_{\det H^*(\widetilde{\mathcal{D}}, \widetilde{\nabla})}, \end{align} where $\rho_{\textup{an}}(\mathbb{D})$ denotes the refined analytic torsion as defined by M. Braverman and T. Kappeler in [BK2] and $\rho_{\textup{an}}(\nabla)$ denotes the refined analytic torsion in the sense of the present discussion. \\[3mm] The statement now follows from [BK2, Theorem 11.3]. \end{proof}\ \\ \\[-7mm] In the setup of the previous theorem we can improve the sign indeterminacy of $\rho_{\textup{an}}(\nabla)$ as follows: \begin{prop}\label{RAT-sign2} Let $M$ be an odd-dimensional oriented compact Riemannian manifold. Let $(E, \nabla, h^E)$ be a flat complex vector bundle over $M$. Denote by $\nabla_{\textup{trivial}}$ the trivial connection on $M\times \mathbb{C}$ and let $\mathcal{B}_{\textup{trivial}}$ denote the even part of the associated odd-signature operator. \\[3mm] Assume the metric structures $(g^M,h^E)$ to be product and the flat connection $\nabla$ to be in temporal gauge near the boundary $\partial M$. 
Then \begin{align*} \rho_{\textup{an}}(\nabla)=\rho(\nabla, g^M, h^E)\cdot \exp\left[i\pi \, \textup{rk}(E)(\eta(\mathcal{B}_{\textup{trivial}}(g^M)) + \xi'(\nabla_{\textup{trivial}}, g^M))\right] \end{align*} is independent of the choice of $g^M$ in the interior of $M$, up to multiplication by $$\exp [i \pi \textup{rank}(E)].$$ In particular it is independent of $g^M$ in the interior of $M$ for $E$ being a complex vector bundle of even rank. \end{prop} \begin{proof} Consider a smooth family $g^M(t),t\in \mathbb{R}$ of Riemannian metrics, varying only in the interior of $M$ and being of fixed product structure near $\partial M$. By the arguments in Theorem \ref{double} we can relate $\mathcal{B}(g^M(t))$ to operators on the closed double $\mathbb{M}$ and deduce from [BK1, Theorem 5.7] that $\rho(\nabla, g^M(t),h^E)$ is continuous in $t$. However \begin{align*} \exp\left[i\pi \, \textup{rk}(E)\eta(\mathcal{B}_{\textup{trivial}}(g^M(t)))\right] \end{align*} is continuous in $t\in \mathbb{R}$ only up to multiplication by $e^{i\pi \textup{rk}E}$. Hence the element $\rho_{\textup{an}}(\nabla)$, where we denote the a priori metric dependence by $\rho_{\textup{an}}(\nabla, g^M(t))$, is continuous in $t$ only modulo multiplication by $e^{i \pi \textup{rk}(E)}$. For $g^M(t)$ varying only in the interior of $M$ and any $t_0, t_1 \in \mathbb{R}$ we infer from the mod $\mathbb{Z}$ metric anomaly considerations in Propositions \ref{anomaly2} and \ref{anomaly3}: $$\rho_{\textup{an}}(\nabla, g^M(t_0))=\pm \rho_{\textup{an}}(\nabla, g^M(t_1)).$$ For rk$(E)$ odd this is already the desired statement, since $\exp (i\pi \textup{rk}(E))=-1$. For rk$(E)$ even, $\rho_{\textup{an}}(\nabla, g^M(t))$ is continuous in $t$ and nowhere vanishing, so the sign in the last relation must be positive. This proves the statement. \end{proof}\ \\ \\[-7mm] In view of the proposition above we can re-define the refined analytic torsion in the setup of product metric structures and flat connection in temporal gauge as follows: \begin{align}\label{RAT-sign3} \rho_{\textup{an}}(M, E):=\rho_{\textup{an}}(\nabla) /_{e^{i\pi \textup{rank}(E)}}. \end{align} \begin{remark} The indeterminacy of $\rho_{\textup{an}}(\nabla)$ modulo multiplication by the factor $e^{i\pi \textup{rk}E}$ in fact corresponds to, and is even finer than, the general indeterminacy in the construction of M. Braverman and T. Kappeler on closed manifolds, see [BK2, Remark 9.9 and Remark 9.10]. \end{remark} \section{Open Problems}\label{open-refined} \ \\[-3mm] \emph{Ideal Boundary Conditions} \\[3mm] As explained in the introduction, the approach of Braverman and Kappeler in [BK1, BK2] requires ideal boundary conditions for the twisted de Rham complex, which turn it into a Fredholm complex with Poincare duality and further provide elliptic boundary conditions for the associated odd-signature operator, viewed as a map between the even forms. In our construction we pursued a different strategy; however, the question about the existence of such boundary conditions remains. \\[3mm] This question was partly discussed in [BL1]. In view of [BL1, Lemma 4.3] it is not even clear whether ideal boundary conditions exist, satisfying Poincare duality and providing a Fredholm complex. For the approach of Braverman and Kappeler we need even more: the ideal boundary conditions need to provide elliptic boundary conditions for the odd-signature operator. We arrive at the natural open question of whether such boundary conditions exist.
\\[4mm] \emph{Conical Singularities} \\[3mm] Another possible direction for the discussion of refined analytic torsion is the setup of compact manifolds with conical singularities. At the conical singularity the question of appropriate boundary conditions is discussed in [Ch2], as well as in [BL2]. \\[3mm] It turns out that on odd-dimensional manifolds with conical singularities the topological obstruction is given by $H^{\nu}(N)$, where $N$ is the base of the cone and $\nu=\dim N /2$. If $$H^{\nu}(N)=0$$ then all ideal boundary conditions coincide and the construction of Braverman and Kappeler [BK1, BK2] goes through. Otherwise, see [Ch2, p.580] for the choice of ideal boundary conditions satisfying Poincare duality. \\[4mm] \emph{Combinatorial Counterpart} \\[3mm] Let us recall that the definition of the refined analytic torsion in [BK1, BK2] was partly motivated by providing analytic counterpart of the refined combinatorial torsion, introduced by V. Turaev in [Tu1]. \\[3mm] In his work V. Turaev introduced the notion of Euler structures and showed how it is applied to refine the concept of Reidemeister torsion by removing the ambiguities in choosing bases needed for construction. Moreover, Turaev observed in [Tu2] that on three-manifolds a choice of an Euler structure is equivalent to a choice of a Spin$^c$-structure. \\[3mm] Both, the Turaev-torsion and the Braverman-Kappeler refined torsion are holomorphic functions on the space of representations of the fundamental group on $GL(n,\mathbb{C})$, which is a finite-dimensional algebraic variety. Using methods of complex analysis, Braverman and Kappeler computed the quotient between their and Turaev's construction. \\[3mm] A natural question is whether this procedure has an appropriate equivalent for our proposed refined analytic torsion on manifolds with boundary. In our view this question can be answered affirmatively. \\[3mm] Indeed, by similar arguments as in [BK1, BK2] the proposed refined analytic torsion on manifolds with boundary can also be viewed as an analytic function on the finite-dimensional variety of representations of the fundamental group. \\[3mm] For the combinatorial counterpart note that M. Farber introduced in [Fa] the concept of Poincare-Reidemeister metric, where using Poincare-duality in the similar spirit as in our construction, he constructed an invariantly defined Reidemeister torsion norm for non-unimodular representations. Further M. Farber and V. Turaev elaborated jointly in [FaTu] the relation between their concepts and introduced the refinement of the Poincare-Reidemeister scalar product. \\[3mm] The construction in [Fa] extends naturally to manifolds with boundary by similar means as in our definition of refined analytic torsion. This provides a combinatorial torsion norm on compact manifolds, well-defined without unimodularity assumption. It can then be refined in the spirit of [FaTu]. This would naturally provide the combinatorial counterpart for the presented refined analytic torsion. \section{References}\ \\[-1mm] [Ag] S. Agmon \emph{"On the eigenfunctions and on the eigenvalues of general elliptic boundary value problems"} Comm. Pure Appl. Math., vol. 15, 119-147 (1962) \\[3mm] [APS] M. F. Atiyah, V.K. Patodi, I.M. Singer \emph{"Spectral asymmetry and Riemannian geometry I"}, Math. Proc. Camb. Phil. Soc. 77, 43-69 (1975) \\[3mm] [BFK] D. Burghelea, L. Friedlander, T. Kappeler \emph{"Mayer-Vietoris type formula for determinants of elliptic differential operators"}, Journal of Funct. Anal. 
107, 34-65 (1992) \\[3mm] [BGV] N. Berline, E. Getzler, M. Vergne \emph{"Heat kernels and Dirac operators"}, Springer-Verlag, New York (1992) \\[3mm] [BK1] M. Braverman and T. Kappeler \emph{"Refined analytic torsion"}, arXiv:math.DG/0505537v2, to appear in J. Diff. Geom. \\[3mm] [BK2] M. Braverman and T. Kappeler \emph{"Refined Analytic Torsion as an Element of the Determinant Line"}, arXiv:math.GT/0510532v4, to appear in Geometry \& Topology \\[3mm] [BL1] J. Br\"{u}ning, M. Lesch \emph{"Hilbert complexes"}, J. Funct. Anal. 108, 88-132 (1992) \\[3mm] [BL3] J. Br\"{u}ning, M. Lesch \emph{"On boundary value problems for Dirac type operators. I. Regularity and self-adjointness"}, arXiv:math/9905181v2 [math.FA] (1999) \\[3mm] [BLZ] B. Booss, M. Lesch, C. Zhu \emph{"The Calderon Projection: New Definition and Applications"}, arXiv:math.DG/0803.4160v1 (2008) \\[3mm] [BS] J. Br\"{u}ning, R. Seeley \emph{"An index theorem for first order regular singular operators"}, Amer. J. Math. 110, 659-714 (1988) \\[3mm] [BW] B. Booss, K. Wojciechowski \emph{"Elliptic boundary problems for Dirac Operators"}, Birkh\"{a}user, Basel (1993) \\[3mm] [BV4] B. Vertman \emph{"Gluing Formula for Refined Analytic Torsion"}, preprint, arXiv:0808.0451 (2008) \\[3mm] [BZ] J.-M. Bismut and W. Zhang \emph{"Milnor and Ray-Singer metrics on the equivariant determinant of a flat vector bundle"}, Geom. and Funct. Analysis 4, No. 2, 136-212 (1994) \\[3mm] [BZ1] J.-M. Bismut and W. Zhang \emph{"An extension of a Theorem by Cheeger and M\"{u}ller"}, Asterisque 205, SMF, Paris (1992) \\[3mm] [Ch] J. Cheeger \emph{"Analytic Torsion and Reidemeister Torsion"}, Proc. Nat. Acad. Sci. USA 74, 2651-2654 (1977) \\[3mm] [Fa] M. Farber \emph{"Combinatorial invariants computing the Ray-Singer analytic torsion"}, arXiv:dg-ga/9606014v1 (1996) \\[3mm] [Gi] P.B. Gilkey \emph{"Invariance Theory, the Heat-equation and the Atiyah-Singer Index Theorem"}, Second Edition, CRC Press (1995) \\[3mm] [Gi2] P.B. Gilkey \emph{"The eta-invariant and secondary characteristic classes of locally flat bundles"}, Algebraic and Differential topology $-$ global differential geometry, Teubner-Texte zur Math., vol. 70, Teubner, Leipzig, 49-87 (1984) \\[3mm] [GS1] P. Gilkey, L. Smith \emph{"The eta-invariant for a class of elliptic boundary value problems"}, Comm. Pure Appl. Math. Vol. 36, 85-131 (1983) \\[3mm] [GS2] P. Gilkey, L. Smith \emph{"The twisted index problem for manifolds with boundary"}, J. Diff. Geom. 18, 393-444 (1983) \\[3mm] [K] T. Kato \emph{"Perturbation Theory for Linear Operators"}, Die Grundlehren der math. Wiss. Volume 132, Springer (1966) \\[3mm] [KL] P. Kirk and M. Lesch \emph{"The $\eta$-invariant, Maslov index and spectral flow for Dirac type operators on manifolds with boundary"}, Forum Math. 16, 553-629 (2004) \\[3mm] [KM] F. F. Knudsen, D. Mumford \emph{"The projectivity of the moduli space of stable curves. I. Preliminaries on 'det' and 'Div'"}, Math. Scand. 39, no. 1, 19-55 (1976) \\[3mm] [KN] S. Kobayashi, K. Nomizu \emph{"Foundations of differential geometry"}, Volume I, Interscience Publishers (1963) \\[3mm] [L2] M. Lesch \emph{"Gluing formula in cohomological algebra"}, unpublished notes. \\[3mm] [Lee] Y. Lee \emph{"Burghelea-Friedlander-Kappeler's gluing formula for the zeta-determinant and its application to the adiabatic decomposition of the zeta-determinant and the analytic torsion"}, Trans. Amer. Math. Soc., Vol. 355, 10, 4093-4110 (2003) \\[3mm] [LR] J. Lott, M. Rothenberg \emph{"Analytic torsion for group actions"} J.
Diff. Geom. 34, 431-481 (1991) \\[3mm] [L\"{u}] W. L\"{u}ck \emph{"Analytic and topological torsion for manifolds with boundary and symmetry"}, J. Diff. Geom. 37, 263-322 (1993) \\[3mm] [Mi] J. Milnor \emph{"Whitehead torsion"}, Bull. Amer. Math. Soc. 72, 358-426 (1966) \\[3mm] [Mu] W. M\"{u}ller \emph{"Analytic torsion and R-torsion for unimodular representations"} J. Amer. Math. Soc., Volume 6, Number 3, 721-753 (1993) \\[3mm] [Mu1] W. M\"{u}ller \emph{"Analytic Torsion and R-Torsion of Riemannian manifolds"} Adv. Math. 28, 233-305 (1978) \\[3mm] [Mun] J. Munkres \emph{"Elementary differential topology"} Ann. of Math. Stud. vol. 54, Princeton Univ. Press, Princeton, NJ (1961) \\[3mm] [MZ1] X. Ma, W. Zhang \emph{"$\eta -$invariant and flat vector bundles I"}, Chinese Ann. Math. 27B, 67-72 (2006) \\[3mm] [MZ2] X. Ma, W. Zhang \emph{"$\eta -$invariant and flat vector bundles II"}, Nankai Tracts in Mathematics. Vol. 11. World Scientific, 335-350 (2006) \\[3mm] [Nic] L.I. Nicolaescu \emph{"The Reidemeister torsion of 3-manifolds"}, de Gruyter Studies in Mathematics, vol. 30, Berlin (2003) \\[3mm] [Re1] K. Reidemeister \emph{"Die Klassifikation der Linsenr\"{a}ume"}, Abhandl. Math. Sem. Hamburg 11, 102-109 (1935) \\[3mm] [Re2] K. Reidemeister \emph{"\"{U}berdeckungen von Komplexen"}, J. reine angew. Math. 173, 164-173 (1935) \\[3mm] [ReS] M. Reed, B. Simon \emph{"Methods of Mathematical Physics"}, Vol. II, Academic Press, New York (1979) \\[3mm] [Rh] G. de Rham \emph{"Complexes \`{a} automorphismes et hom\'{e}omorphie diff\'{e}rentiable"}, Ann. Inst. Fourier 2, 51-67 (1950) \\[3mm] [RS] D.B. Ray and I.M. Singer \emph{"R-Torsion and the Laplacian on Riemannian manifolds"}, Adv. Math. 7, 145-210 (1971) \\[3mm] [Ru] W. Rudin \emph{"Functional Analysis"}, Second Edition, McGraw-Hill, Inc. Intern. Series in pure and appl. math. (1991) \\[3mm] [RH] Rung-Tzung Huang \emph{"Refined Analytic Torsion: Comparison theorems and examples"}, math.DG/0602231v2 \\[3mm] [Se1] R. Seeley \emph{"The resolvent of an elliptic boundary problem"} Amer. J. Math. 91, 889-920 (1969) \\[3mm] [Se2] R. Seeley \emph{"An extension of the trace associated with elliptic boundary problem"} Amer. J. Math. 91, 963-983 (1969) \\[3mm] [Sh] M.A. Shubin \emph{"Pseudodifferential operators and Spectral Theory"}, English translation: Springer, Berlin (1987) \\[3mm] [Tu1] V. G. Turaev \emph{"Euler structures, non-singular vector-fields and torsion of Reidemeister type"}, English Translation: Math. USSR Izvestia 34:3 627-662 (1990) \\[3mm] [Tu2] V. G. Turaev \emph{"Torsion invariants of Spin$^c$ structures on three-manifolds"}, Math. Research Letters 4:5 679-695 (1997) \\[3mm] [V] S. Vishik \emph{"Generalized Ray-Singer Conjecture I. A manifold with smooth boundary"}, Comm. Math. Phys. 167, 1-102 (1995) \\[3mm] [Wh] J. H. Whitehead \emph{"Simple homotopy types"}, Amer. J. Math. 72, 1-57 (1950) \end{document}
\section{INTRODUCTION} Screening of nuclear reactions in stellar interiors is a hotly debated issue. The standard Salpeter treatment \citep{Salpeter_1954} was a necessary improvement in early equation-of-state work. Many of its faithful adherents regard this as evidence that the traditional screening formulation must be used for solar models. However, the incompatibility between solar models generated with the recently revised solar abundances and helioseismic results highlights the need to re-examine the physics used to develop and analyze solar models \citep{Asplund_2009}. In this paper, we re-examine screening in the solar core and present evidence that dynamic effects must be considered when examining screening in stars. \subsection{Electrostatic Screening} \label{sect:static} Under the extreme temperatures and densities of the solar core, the plasma is fully ionized. The free electrons and ions interact with a Coulomb potential energy \begin{equation} \label{eq1} U(r)=\frac{e^2}{r} \;. \end{equation} In this Coulomb system, nearby plasma is polarized by each ion. When two ions approach with the possibility of engaging in a nuclear reaction, each ion is surrounded by a screening cloud. Each ion is attracted to the electrons and repelled by the protons in its partner's cloud. The combined effect of the particles in the screening clouds on the potential energy of the pair of ions is referred to as screening. This electrostatic screening effect reduces the standard Coulomb potential between approaching ions to a screened potential which includes the contribution to the potential from the surrounding plasma. The reduced potential enables the ions to tunnel through the potential barrier more easily, thus enhancing fusion rates. As illustrated by other authors at this meeting \citep{Dappen_2009,Baturin_2009,Straniero_2009,Yusof_2009}, understanding screening is important in solar and stellar modeling. \citet{Salpeter_1954} derived an expression for the enhancement of nuclear reaction rates due to electron screening. By solving the Poisson-Boltzmann equation for electrons and ions in a plasma under the condition of weak screening ($\phi_{\rm{interaction}}\ll k_BT$), Salpeter arrived at an expression for the screening energy that is equivalent to that of the Debye-H\"uckel theory of dilute solutions of electrolytes \citep{Debye_1923}, \begin{equation} \label{eq2} U_{\rm{screen}} = \frac{e^2}{\lambda_D} \end{equation} where the Debye length, $\lambda_D$, is the characteristic screening length of a plasma at temperature $T$ with number density $n$, \begin{equation} \label{eq3} \lambda_D^2 = \frac{\epsilon_0 k_B T}{ne^2}. \end{equation} \subsection{Dynamic Screening} \label{sect:dynamic} Although Salpeter's approximation for screening is widely accepted, several papers over the last few decades \citep[e.g.][]{Shaviv_1996, Carraro_1988, Weiss_2001} have questioned either the derivation itself or the validity of applying the approximation to hot, dense, Coulomb systems like the plasma of the solar core. Various works deriving alternative formulae for \emph{electrostatic} screening \citep{Carraro_1988, Opher_2000, Shaviv_1996, Savchenko_1999, Lavagno_2000, Tsytovich_2000} were refuted in subsequent papers \citep[see][for a summary of arguments in Salpeter's defense]{Bahcall_2002}. However, the question of \emph{dynamic} screening remains open. Dynamic screening arises because the protons in a plasma are much slower than the electrons.
They are therefore not able to rearrange themselves as quickly around individual faster moving ions. Since nuclear reactions require energies several times the average thermal energy, the ions that are able to engage in nuclear reactions in the Sun are such faster moving ions, which therefore may not be accompanied by their full screening cloud. Salpeter uses the mean-field approach in which the many-body interactions are reduced to an average interaction that simplifies calculations. This technique is quite useful in calculations that rely on the average behavior of the plasma. However, dynamic effects for the fast-moving, interacting ions lead to a screened potential that deviates from the average value. The nuclear reaction rates will therefore differ from those computed with the mean-field approximation. \citet{Shaviv_1996} used the method of molecular dynamics to model the motion of charges in a plasma under solar conditions in order to investigate dynamic screening. The advantage of the molecular-dynamics method is that it does not assume a mean field. Nor does it assume a long-time average potential for the scattering of any two charges, which is necessary in the statistical way to solve Poisson's equation to obtain the mean potential in a plasma. Shaviv and Shaviv attribute the differences between their simulations and Salpeter's theory to dynamic effects. Since their claims have been met with skepticism, we have conducted independent molecular-dynamics simulations to confirm the existence of dynamic effects. The viewpoint presented by Bahcall et al., 2002 can be summarized with their statement ``There is only one right answer, but there are many wrong answers.'' Although we agree that equation (\ref{eq2}) is the right answer to the question of static screening, we contend that this is not the right question to ask. All arguments in favor of Salpeter's formulation rely on a mean-field treatment, an assumption that must be tested before it is implemented. The work presented in this paper addresses the more appropriate question ``is the mean-field approach applicable in stellar plasma?'' Our work will show that there are deviations from the mean field in the case of p-p reactions in the solar core and that dynamic screening should be considered in order to obtain a more accurate representation. \section{METHOD} \label{sect:method} How can we test the question of mean-field theory's relevance in solar plasma? \citet{Shaviv_1996} developed a method that relies on the techniques of molecular dynamics to model the behavior of solar plasma without using mean-field assumptions. Their simulations show deviations from mean-field theory that would lead to changes in nuclear reaction rate calculations. Their claims have been met with skepticism, so we replicated and analyzed their work in order to resolve the issue. In our previous work \citep{Mao_2004, Mao_2009, Mussack_2006, Mussack_2007}, we examined the methods and assumptions used in Shaviv and Shaviv's work, including their treatment of the long-range Coulomb force, the effective quantum potentials, and the system size. We did not find any problems with their techniques. Furthermore, we confirmed that the mean-field theory does not adequately describe the behavior of the plasma. Here we show that the screening energy of two interacting protons in our simulation depends on the relative kinetic energy of the pair. We also determine the dynamically screened interaction potential energy and discuss the significance of this result. 
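Before describing the simulation itself, it is useful to fix the mean-field scales that the simulation is meant to test. The short Python sketch below is purely illustrative and is not part of the simulation code: it evaluates the Debye length of equation (\ref{eq3}) and the corresponding screening energy of equation (\ref{eq2}) (written in SI form) for representative solar-core conditions, taking as an assumption that electrons and protons together provide the screening charge, and it confirms that the weak-screening condition $U_{\rm{screen}} \ll k_BT$ is satisfied.
\begin{verbatim}
# Illustrative mean-field (Debye-Hueckel/Salpeter) scales at representative
# solar-core conditions.  Inputs are order-of-magnitude values, not outputs
# of the molecular-dynamics simulation described below.
import numpy as np

k_B  = 1.380649e-23      # J/K
eps0 = 8.8541878128e-12  # F/m
e    = 1.602176634e-19   # C
m_p  = 1.67262192e-27    # kg

T   = 1.6e7              # K, solar-core temperature
rho = 1.6e5              # kg/m^3, solar-core density (pure hydrogen assumed)
n_p = rho / m_p          # proton number density
n   = 2.0 * n_p          # assumption: electrons + protons both screen

lambda_D = np.sqrt(eps0 * k_B * T / (n * e**2))     # Eq. (3)
U_screen = e**2 / (4.0 * np.pi * eps0 * lambda_D)   # SI form of Eq. (2)

print("Debye length [m]   :", lambda_D)                 # ~ 2e-11 m
print("U_screen / (k_B T) :", U_screen / (k_B * T))     # << 1: weak screening
\end{verbatim}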
In order to numerically determine the effect of dynamic screening on p-p reactions, we modeled a 3-dimensional box of 500 protons and 500 electrons interacting via the Coulomb potential. The effective pair potentials derived for a hydrogen plasma by \cite{Barker_1971, Deutsch_1977, Deutsch_1978, Deutsch_1979} were employed to include quantum corrections. The temperature and density of the solar core ($T=1.6 \rm{x} 10^7 \;\rm{K}$, $\rho = 1.6 \rm{x} 10^5 \; \rm{kg/m^3}$) were used to determine the velocities and density of the particles in the box. A thermostat was implemented to maintain constant temperature. Periodic boundary conditions and the minimum-image convention were applied. The velocity verlet algorithm followed the time evolution of the system. See \cite{Mao_2009} for more details. The screening energy was calculated for pairs of approaching protons in the following way. For each proton, we designated the nearest approaching proton as its partner. Then we tracked each pair of protons through their approach and subsequent retreat. At the point of closest approach, we recorded the separation, $r_c$, and the kinetic energy of the pair, $E_{kinetic}(r_c)$. When the pair was separated by a sufficiently great distance, $R_f$, we recorded the kinetic energy of the pair, $E_{kinetic}(R_f)$, and stopped tracking that pair (for our simulations, $R_f = 2 \; a_B$ where $a_B$ is the Bohr radius). At this point, we designated a new partner and repeated the tracking process. The screening energy of each pair was computed from the difference in energy at $r_c$ and at $R_f$ \begin{equation} \label{eq4} E_{\rm{screen}} =E_{\rm{pair}}(r_c) - E_{\rm{pair}}(R_f) . \end{equation} This represents the energy exchanged between a pair and the surrounding plasma. Equation \ref{eq4} can be expanded as \begin{equation} \label{eq5} E_{\rm{screen}}= \left( E_{\rm{kinetic}}(r_c) + \frac{e^2}{r_c} \right) - \left( E_{\rm{kinetic}}(R_f) + \frac{e^2}{R_f} \right). \end{equation} \section{RESULTS} \label{sect:results} A key ingredient for the screening energy in equation \ref{eq5} is the difference in the kinetic energy of a pair when partner protons are far apart and when they are at their closest separation. \begin{equation} \label{eq6} \Delta E_{\rm{kinetic}} = E_{\rm{kinetic}}(R_f) - E_{\rm{kinetic}}(r_c) . \end{equation} In figure \ref{plot1}, we show the average change in kinetic energy of approaching pairs for each distance of closest approach. For comparison, we also plot the bare Coulomb potential and the statically screened Coulomb potential as a function of separation. \begin{figure} \includegraphics[width=82mm]{mussack_f1} \vskip -0.4cm \caption{Dependence of the average kinetic energy change $<\!\!\triangle E_{\rm{kinetic}}\!\!>$ on the closest distance $r_c$, compared with the Coulomb potential and screened Coulomb potential.} \label{plot1} \vskip 0.3cm \end{figure} We see that at the distance of closest approach, the average kinetic energy exchanged between a pair of protons and the plasma closely matches the statically screened potential. This confirms that Salpeter's static screening can successfully describe \emph{average} properties of the system. However, we can see the dynamic effect on screening when we sort the pairs of particles by relative kinetic energy. Figure \ref{plot2} shows the average energy gained from the plasma by pairs of protons with a given far-apart kinetic energy. 
This is quite different from the Debye-H\"uckel screening energy calculated for the average closest-approach distance of pairs in each kinetic energy bin. We see that pairs of protons with a kinetic energy less than the thermal energy of the plasma gain more energy from the surrounding plasma than the mean-field result, while pairs with a kinetic energy greater than the thermal energy gain less energy than the mean-field result and even tend to lose energy to the plasma. From this plot, we can estimate the screening energy of the p-p reaction at the Gamow energy of $4.8 \; \rm{kT}$. \begin{figure} \includegraphics[width=82mm]{mussack_f2} \vskip -0.4cm \caption{Dependence of the screening energy from the simulation (squares) on the far-apart kinetic energy $E_{kinetic}^f$. The Debye-H\"uckel screening energy computed at the averege closest-approach distance of pairs of protons with a given far-apart kinetic energy is shown (circles) for comparison.} \label{plot2} \vskip .3cm \end{figure} In order to quantify the dynamic effect on screening in the plasma of the solar core, we have split the total interaction energy into the Coulomb contribution and the interaction energy from the plasma, \begin{equation} \label{eq7} U(r) = \frac{e^2}{r} - E_{\rm{screen}}(r) . \end{equation} For the mean-field treatment, \begin{equation} \label{eq8} E_{\rm{screen,mf}}(r) = \frac{e^2}{r} \left( 1- {\rm{exp}} \left(-r / \lambda_D \right) \right). \end{equation} Because there is no formalism to compute the dynamic effect analytically, we use the simulation results to determine $E_{\rm{screen}}$ for dynamic screening. One key difference from the static screening expression is that the dynamic screening energy is a function of both pair separation and relative velocity. As before, we split the total interaction energy into the Coulomb and screening cloud contributions. \begin{equation} \label{eq9} U(r,v) = U_{\rm{Coulomb}}(r) - E_{\rm{screen,dyn}}(r,v) \end{equation} For comparison, the dynamic screening energy at the distance of closest approach can be written in a form similar to the static screening energy \begin{equation} \label{eq10} E_{\rm{screen,dyn}}(r_c,v) = \frac{e^2}{r_c} \left( 1- {\rm{exp}}(-r_c/{\Lambda_D(v)}) \right) \end{equation} by including a new velocity dependent Debye-like radius, $\Lambda_D(v)$. Figure \ref{plot3} shows the form of $\Lambda_D(v)$ determined from the simulations. \begin{figure} \includegraphics[width=82mm]{mussack_f3} \vskip -0.4cm \caption{Modified Debye length for dynamic screening at the distance of closest approach obtained from simulations.} \label{plot3} \vskip .3cm \end{figure} \section{DISCUSSION} \label{sect:discussion} This work confirms that screening in the hot, dense plasma of stellar cores depends on the relative velocities of the interacting ions. The Debye-H\"uckel screening energy is only valid for describing average properties of the plasma. Since faster ions are more likely to engage in nuclear reactions than thermal ions, the mean-field treatment does not provide an accurate representation of this velocity-skewed phenomenon. In fact, the fast pairs of ions tend to lose energy to the plasma instead of gaining energy from it which would reduce nuclear reaction rates instead of enhancing them. Solar and stellar models should be adjusted to account for this dynamic screening effect. Currently, there is no formalism to compute dynamic screening analytically. 
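In the meantime, the velocity-resolved screening energy must be extracted directly from the simulation data. As a concrete illustration of the pair-tracking bookkeeping of Section \ref{sect:method}, equations (\ref{eq4})-(\ref{eq5}), the following Python sketch shows the intended computation; the $(r, E_{\rm kin})$ arrays standing in for molecular-dynamics trajectories are hypothetical placeholders, and the production code used for the figures above differs in detail.
\begin{verbatim}
# Sketch of the pair-tracking bookkeeping of Eqs. (4)-(5).  'pairs' is a
# hypothetical stand-in for the simulation output: a list of (r, E_kin)
# arrays recorded while a proton pair is tracked, ending when the
# separation returns to R_f.  SI units; e^2/r is written with the Coulomb
# constant k_e.
import numpy as np

k_e = 8.9875517923e9      # N m^2 / C^2
e   = 1.602176634e-19     # C

def pair_screening_energy(r, E_kin):
    """Screening energy of one tracked pair, Eq. (5)."""
    i_c = int(np.argmin(r))                        # closest approach r_c
    E_close = E_kin[i_c] + k_e * e**2 / r[i_c]
    E_far   = E_kin[-1]  + k_e * e**2 / r[-1]      # last sample, r ~ R_f
    return E_close - E_far

def mean_screening_vs_far_energy(pairs, kT, edges):
    """Average E_screen binned by the far-apart kinetic energy (cf. Fig. 2)."""
    E_far = np.array([E_kin[-1] for _, E_kin in pairs]) / kT
    E_scr = np.array([pair_screening_energy(r, E_kin) for r, E_kin in pairs])
    idx   = np.digitize(E_far, edges)
    return [E_scr[idx == i].mean() if np.any(idx == i) else np.nan
            for i in range(1, len(edges))]
\end{verbatim}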
This paper is intended to provide insight into the difference between the Salpeter formalism and the numerically determined dynamic screening. A detailed calculation of the dynamic screening correction to the p-p reaction rate in the solar core based on this numerical work is underway for a future publication. However, these numerical calculations will not eliminate the need for an analytical formalism for dynamic screening in order to generalize the results to other temperatures, densities, compositions, and reactions. Only then can dynamic screening be incorporated consistently in solar and stellar models. \acknowledgments We thank Dan Mao for helpful discussions about the simulations. This work was supported in part by grant AST-0708568 of the National Science Foundation.
\section{\label{sec:level1}First-level heading} Topological insulators were brought to the forefront of theoretical physics research by the seminal work of Kane and Mele\cite{Kane2005a,Kane2005b}, where it was shown that two-dimensional periodic systems can have unusual physical properties due to the topology of their band structures, namely the quantization of Spin Hall conductivity, and the presence of spin-momentum locked gapless states at the border between insulators of distinct topological classes. The metallic edge states can be understood in a variety of ways, such as the chiral zero mode first studied by Jackiw and Rebbi\cite{Jackiw1976,Hasan2010} or within the formalism of a four-band insulator model where the interaction between two distinct spin polarizations of the electrons put them in two different topological phases of Haldane's model\cite{Haldane1988} (see also Ref.~\onlinecite{Fruchart2013} and references therein), each giving rise to metallic edge states propagating in opposing directions at the border. Despite the tremendous interest in two-dimensional topological insulators (TIs), the study of the physics at the interface of a TI and a regular insulator has been so far hampered by the lack of simple phenomenological models to describe this system. In this letter, I present such a model, inspired from field theory, valid when both the TI and the regular insulator have the same gap, and show that it can be solved analytically, deriving the full spectrum of midgap massive surface states, the chiral metallic states, and the reflection and transmission coefficients for scattering states. Working with units $\hbar=v_F=1$, where $v_F$ is the Fermi velocity of the gapless states at the border, the Lagrangian we are interested in is, in 2+1 dimensions, \begin{equation} \mathcal{L} = \frac{1}{2}(\partial_\mu\phi)^2 - \frac{\lambda}{4}(\phi^2-\eta^2)^2 + i\bar{\psi}\gamma^\mu\partial_\mu\psi -g\phi\bar{\psi}\psi, \end{equation} where $\phi$ is a massless real scalar field, which acquires mass through spontaneous symmetry breaking, and $\psi$ is a four-component spinor, associated with the electron and hole states of different spin, which acquires a mass by coupling with the scalar field by means of the Yukawa term with coupling constant $g>0$. We are interested in the fermionic states propagating through a fixed scalar field $\phi$. The self interaction term places the energy minimum at $\phi=\pm\eta$, and we look for static ($\partial_t\phi=0$) solutions, which are homogeneous in the $x$ coordinate ($\partial_x\phi=0$) and satisfy the boundary conditions $\phi(y=\pm\infty) = \pm\eta$. Under these assumptions, the scalar and fermionic fields each acquire a mass at $|y|\to\infty$, given by $m_s=\sqrt{2\lambda}\eta$ and $m_f = g\eta$. Neglecting the Yukawa term, the equation for the scalar field has the well-known $\mathbb{Z}_2$ kink solution\cite{Vachaspati2006} \begin{equation} \phi = \eta\tanh(m_sy/2) = \eta\tanh(y/\Delta), \end{equation} where I have introduced the parameter $\Delta$, related to the thickness of the domain wall. The bound states for this system in 1+1 dimensions have been studied in Ref.~\onlinecite{Chu2008}, while scattering states have been considered in the limit $\Delta\to0$ in Ref.~\onlinecite{Campanelli2004}. In this work, I generalize both results to (2+1)D and also provide a more elementary solution to the problem. The Dirac equation in the fixed scalar field background reads \begin{equation} (i\gamma^\mu\partial_\mu - g\phi)\psi = 0. 
\end{equation} To make the connection to topological insulators, we choose the Dirac matrices \begin{equation} \gamma^t = \left[ \begin{matrix} \sigma_z & 0 \\ 0 & \sigma_z \end{matrix} \right], \qquad \gamma^x = \left[ \begin{matrix} i\sigma_y & 0 \\ 0 & -i\sigma_y \end{matrix} \right], \qquad \gamma^y = \left[ \begin{matrix} i\sigma_x & 0 \\ 0 & i\sigma_x \end{matrix} \right], \end{equation} where $\sigma_i$ denote the Pauli spin matrices. The gamma matrices are block diagonal, and the upper and lower components represent the spin up and spin down bands, respectively. The equation reduces to two analogous equations for two-component spinors $\psi_1$ and $\psi_2$. We write the solution for these spinors as \begin{equation} \psi_i(x,y;t) = e^{i(q_xx-Et)}\left\{u_+(y)\left[ \begin{matrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{matrix} \right] +u_-(y)\left[ \begin{matrix} \frac{-1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{matrix} \right]\right\}. \end{equation} In the context of TIs, the quantity $q_x$ represents the $x$-component of the momentum around a Time-Reversal Invariant Momentum (TRIM)\cite{Fu2007}, given by $\mathbf{q}=\mathbf{k}-\pmb{\lambda}$, since it is the gap closing at this point that causes the appearance of metallic edge states\cite{Fruchart2013}. With the previous ansatz, we obtain \begin{equation} \left[ \begin{matrix} E - g\phi & \mp q_x - \partial_y \\ \pm q_x -\partial_y & -E - g\phi \end{matrix} \right]\left[ \begin{matrix} u_+ - u_- \\ u_+ + u_- \end{matrix} \right] = \left[ \begin{matrix} 0 \\ 0 \end{matrix} \right], \end{equation} where the upper and lower signs appear in the spin up and down components, respectively. Rearranging the equations, we get \begin{align} (\partial_y + g\phi)u_+ &= -(E\pm q_x)u_-, \\ (\partial_y - g\phi)u_- &= (E\mp q_x)u_+. \label{eq:chiral} \end{align} Applying $(\partial_y\mp g\phi)$ on the left, the equations assume the same form for spin up and spin down: \begin{equation} -\partial_y^2 u_\pm + g(g\phi^2\mp\partial_y\phi)u_\pm = (E^2-q_x^2)u_\pm. \label{eq:sch} \end{equation} From now on, we will work with the equation for $u_-$, as the one for $u_+$ can be solved with trivial modifications. Substituting $\phi$, defining $\alpha=2m_f/m_s$ and writing $q_y^2 = E^2-m_f^2-q_x^2$, we get a Schr\"odinger equation for the Modified P\"oschl-Teller potential\cite{Flugge1994}: \begin{equation} u'' +\left[q_y^2 +\frac{\alpha(\alpha-1)}{\Delta^2}\sech^2\left(\frac{y}{\Delta}\right)\right]u =0. \end{equation} It is well-known that, by performing the change of variables \begin{equation} z = \cosh^2(y/\Delta), \qquad u(z) = z^{\alpha/2} v(z), \end{equation} one obtains the hypergeometric equation \begin{equation} z(1-z)v'' + [c-(a+b+1)z]v'-abv = 0, \end{equation} where \begin{equation} a = \frac{1}{2}\left(\alpha+iq_y\Delta\right), \qquad b = \frac{1}{2}\left(\alpha-iq_y\Delta\right), \qquad c = \alpha + \frac{1}{2}. \end{equation} We want the solutions around the singular point $z=1$, given by \begin{align} _2&F_1(a,b,1+a+b-c;1-z), \\ (1-z)^{c-a-b} \ _2&F_1(c-a,c-b,1+c-a-b;1-z), \end{align} where the hypergeometric function $_2F_1$ is defined in terms of the Pochhammer symbol $(q)_n=\Gamma(q+n)/\Gamma(q)$ as \begin{equation} _2F_1(a,b,c;z) = \sum_{n=0}^\infty \frac{(a)_n(b)_n}{(c)_n} \frac{z^n}{n!}.
\end{equation} These two solutions are even and odd, and we can write the wave function $u$ in terms of the spatial coordinate $y$ as \begin{align} u_A(y) &= \cosh^\alpha(y/\Delta)_2F_1(a,b,1/2;-\sinh^2y/\Delta), \\ u_B(y) &= \cosh^\alpha(y/\Delta)\sinh(y/\Delta) \times \\ \nonumber &_2F_1\left(a+\frac{1}{2},b+\frac{1}{2},\frac{3}{2},-\sinh^2(y/\Delta)\right). \end{align} To analyze the asymptotic behavior of the solutions at $|y|\to\infty$, we work with the hypergeometric function in the limit $z\to-\infty$ with the identity\cite{Abramowitz2012} \begin{align} _2F_1(a,b,c;&z\to-\infty) = (1-z)^{-a} \frac{\Gamma(c)\Gamma(b-a)}{\Gamma(b)\Gamma(c-a)} \nonumber \\ &+(1-z)^{-b}\frac{\Gamma(c)\Gamma(a-b)}{\Gamma(a)\Gamma(c-b)} \end{align} For scattering states, $q_y$ is real. Omitting unimportant normalization factors, we have \begin{align} u_A(y) &\to \left[ \frac{\Gamma(-iq_y\Delta)e^{iq_y(\Delta\ln 2 - |y|)}}{\Gamma\left(\frac{\alpha-iq_y\Delta}{2}\right)\Gamma\left(\frac{1-\alpha-iq_y\Delta}{2}\right)} + \mathrm{c.c.} \right] \label{eq:asym1} \\ u_B(y) &\to \pm \left[ \frac{\Gamma(-iq_y\Delta)e^{iq_y(\Delta\ln 2 - |y|)}}{\Gamma\left(\frac{\alpha+1-iq_y\Delta}{2}\right)\Gamma\left(\frac{2-\alpha-iq_y\Delta}{2}\right)} + \mathrm{c.c.}\right] \label{eq:asym2} \end{align} Therefore, at $|y|\to\infty$, we have the asymptotic behavior \begin{equation} u_A(y) \to \cos(q_y|y|+\phi_A), \qquad u_B(y) \to \pm \cos(q_y|y|+\phi_B). \end{equation} To study scattering, we first notice that, since the fermion masses at $|y|\to\infty$ are the same, there is no refraction at the interface. The incoming wave either passes through or gets reflected. To understand the influence of spin, we need to compute the reflection and transmission matrices $R$ and $T$, seeking a solution of the form \begin{equation} \psi(y) = \left\{ \begin{array}{lr} (e^{iq_yy} + Re^{-iq_yy})\psi_0, & y\to-\infty \\ Te^{iq_yy}\psi_0, & y \to+\infty \end{array} \right. \end{equation} We are about to see that the matrices $R$ and $T$ are proportional to the identity. For that, it suffices to study the scattering of the wave function given by $u_-(y)$. If we compose a solution of the form \begin{equation} u_- = Au_A + Bu_B, \end{equation} it is possible to show\cite{Flugge1994} that the reflection and transmission coefficients are given by \begin{equation} |R|^2 = \cos^2(\phi_A-\phi_B), \qquad |T|^2=\sin^2(\phi_A-\phi_B). \label{eq:reftrans} \end{equation} The difference of the phases in the Gamma functions in \eqref{eq:asym1} and \eqref{eq:asym2} can be computed by applying $\Gamma(\bar{z})=\overline{\Gamma(z)}$, the Euler reflection formula \begin{equation} \Gamma(z)\Gamma(1-z) = \pi/\sin\pi z, \end{equation} and the identity \begin{equation} \arg\sin(a+bi) = \tan^{-1}\left(\cot a \tanh b\right). \end{equation} After elementary trigonometric manipulations, we get \begin{equation} \tan(\phi_A-\phi_B) = \frac{\sinh(\pi q_y\Delta)}{\sin(\alpha\pi)}. \end{equation} Substituting this value in \eqref{eq:reftrans}, we conclude that the scattering is periodic in $\alpha$, with period 1. Since our solution, valid for $u_-$, can be applied to $u_+$ by the transformation $\alpha\to\alpha+1$, both $u_-$ and $u_+$ waves scatter identically, and the reflection and transmission matrices should be proportional to the identity, as claimed before. Besides that, another interesting physical phenomenon happens. 
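Before stating it, it is convenient to evaluate the scattering probabilities numerically; the short Python snippet below is purely illustrative and simply tabulates $|R|^2$ and $|T|^2$ from \eqref{eq:reftrans} together with the phase-difference formula above, for a few values of $\alpha$ and of $q_y\Delta$.
\begin{verbatim}
# Illustrative evaluation of the reflection/transmission probabilities:
#   tan(phi_A - phi_B) = sinh(pi q_y Delta) / sin(alpha pi)
#   |T|^2 = sin^2(phi_A - phi_B),  |R|^2 = cos^2(phi_A - phi_B)
import numpy as np

def transmission(alpha, qyD):
    s = np.sinh(np.pi * qyD) ** 2
    c = np.sin(np.pi * alpha) ** 2
    T2 = s / (s + c)          # sin^2 of the phase difference
    return T2, 1.0 - T2

for alpha in (0.5, 1.0, 1.52, 2.0):
    for qyD in (0.05, 0.5, 2.0):
        T2, R2 = transmission(alpha, qyD)
        print(f"alpha={alpha:4.2f} qy*Delta={qyD:4.2f} "
              f"|T|^2={T2:.3f} |R|^2={R2:.3f}")
\end{verbatim}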
When $\alpha=2m_f/m_s$ (or, in standard units, $\alpha=E_g\Delta/2\hbar v_F)$ is an integer, the wave is transmitted entirely ($|T|^2=1$) and the barrier is transparent. Meanwhile, when $\alpha$ is not an integer and the particle runs almost parallel to the wall (small $q_y$), $|R|^2\to 1$ and the interface is reflective. To analyse the bound states, it suffices to set $q_y = i\kappa$, with $\kappa>0$. Under this condition, the parameters $a$ and $b$ become real, and the first terms in \eqref{eq:asym1} and \eqref{eq:asym2} grow like $e^{\kappa|y|}$. Therefore, the coefficients multiplying them must vanish in order to obtain a normalizable state. That happens when the arguments of the second Gamma function in the denominator are at the poles, located at nonpositive integers. Grouping all the conditions, we must have \begin{equation} E_n^2 - q_x^2 - m_F^2 = -\frac{(\alpha-1-n^-)^2}{\Delta^2} = -\frac{(\alpha-n^+)^2}{\Delta^2}, \end{equation} and it follows that $n^+-n^- = 1$. The states with energy $E_n$ correspond to particle and hole states within the gap, with mass given by \begin{equation} m_n = m_f\sqrt{\frac{2n}{\alpha}-\frac{n^2}{\alpha^2}}, \qquad 1\leq n\leq \alpha \end{equation} Thus, the integral part of $\alpha$ gives the number of massive bound states trapped at the wall. The state with $n^+=0$ is special. The only way to satisfy \eqref{eq:sch} for $u_-$ is with $u_-=0$. Substituting this into \eqref{eq:chiral}, we obtain $E=q_x$ for spin-up and $E=-q_x$ for spin-down. These are, then, massless states, where the sign of the energy dependence gives the group velocity of the wave. It is readily seen that spin-up states propagate to the right, and spin-down ones to the left, as expected in TIs. \begin{figure}[h!] \includegraphics{fig1.eps} \caption{\label{fig:disp} Dispersion relation for the system with parameters $v_F=c/300$, $\Delta=10$ nm and $E_g=0.2$ eV. The calculated value for $\alpha$ is approximately 1.52, giving rise to exactly one massive particle (and hole) state inside the gap.} \end{figure} To observe the massive bound states, it is necessary to have a material with a large band-gap, large wall thickness and small Fermi velocity. A plot with the full spectrum of states is presented in Fig.~\ref{fig:disp}, with parameters chosen in the typical range for condensed matter systems. It is readily seen that it should be possible to engineer materials so that $\alpha>1$. In summary, I have presented a phenomenological model for the interface between trivial and topological 2D insulators with the same band gap $E_g$, that depends on the thickness of the interface $\Delta$ and the Fermi velocity $v_F$. The physics of the system is controlled by the adimensional parameter $\alpha=E_g\Delta/2\hbar v_F$, which controls the mass of the bound states trapped at the interface and also their number. I have shown that, when $\alpha$ is an integer, the interface is transparent to scattering states, and incoming particles are transmitted entirely, and also provided results for typical values of the measurable parameters. \begin{acknowledgements} The author acknowledges financial support from CAPES (PVE grant no. 88887.116797/2016-00). \end{acknowledgements}
\section{Introduction: pseudo-spinor Bose-Einstein condensation}\label{sec:intro} It is customary to refer to pseudo-spinor condensates as gases of ultra-cold atoms that exhibit a macroscopic occupation of the same one-body state (Bose-Einstein condensation) and possess internal spin degrees of freedom which are often coupled to an external resonant micro-wave or radio-frequency radiation field, however, with no significant spin-spin internal interaction (whence the \emph{pseudo}-spinor terminology). The order parameter of the condensation is therefore a multi-component vector, unlike scalar condensates such as liquid $^4\mathrm{He}$, and the dynamical evolution of these quantum fluids observed in the experiments shows an excellent matching with a non-linear effective dynamics for the order parameter. In this work we want to present a rigorous derivation of such non-linear equations from the `first principle' many-body linear Schr\"{o}dinger dynamics. The study of multi-component Bose-Einstein condensates (henceforth also BEC) was spurred on in 1997-1998 by experiments on ultra-cold Rubidium, with condensation coexisting in two different hyperfine states of $^{87}\mathrm{Rb}$ \cite{MBGCW-1997,Matthews_HJEWC_DMStringari_PRL1998,HMEWC-1998,Hall-Matthews-Wieman-Cornell_PRL81-1543} and soon extended to multi-BEC for heteronuclear mixtures such as $^{41}\mathrm{K}$-$^{87}\mathrm{Rb}$ \cite{Modugno-Ferrari-Inguscio-etal-Science2001_multicompBEC}, $^{41}\mathrm{K}$-$^{85}\mathrm{Rb}$ \cite{Modugno-PRL-2002}, $^{39}\mathrm{K}$-$^{85}\mathrm{Rb}$ \cite{MTCBM-PRL2004_BEC_heteronuclear}, $^{85}\mathrm{Rb}$-$^{87}\mathrm{Rb}$ \cite{Papp-Wieman_PRL2006_heteronuclear_RbRb}. In the last two decades the field has expanded through a huge amount of experimental and theoretical studies, for a survey of which we refer to the comprehensive reviews \cite{Ketterle_StamperKurn_SpinorBEC_LesHouches2001, Malomed2008_multicompBECtheory, Hall2008_multicompBEC_experiments,StamperKurn-Ueda_SpinorBose_Gases_2012} (see also \cite[Chapter 21]{pita-stringa-2016}). In order to place the present work into the appropriate mathematical setting, it is instructive to revisit, within the general formalism of many-body quantum mechanics, the essential steps of a typical experiment -- for concreteness we refer to the 1998 pioneering experiment \cite{Matthews_HJEWC_DMStringari_PRL1998,HMEWC-1998,Hall-Matthews-Wieman-Cornell_PRL81-1543}. First and foremost, the experiment involves only a few hyperfine levels of the considered atomic species -- for $^{87}\mathrm{Rb}$ these are the $5S_{1/2}|F=1,m_f=-1\rangle$ and $5S_{1/2}|F=2,m_f=1\rangle$ states: this results in the effective one-body Hilbert space \begin{equation}\label{eq:h-one-body} \mathfrak{h}\;:=\;L^2(\mathbb{R}^3)\otimes\mathbb{C}^2\;\cong\;L^2(\mathbb{R}^3)\oplus L^2(\mathbb{R}^3) \end{equation} as for one spin-$\frac{1}{2}$ particle in three spatial dimensions. (However, for the final measurement process the effective Hilbert space to consider is a larger one, as we shall explain later.) The corresponding many-body bosonic Hilbert space is \begin{equation}\label{eq:H-many-body} \mathcal{H}_N\;:=\;\mathfrak{h}^{\,\otimes_\mathrm{sym} N}\,, \end{equation} the \emph{symmetric} $N$-fold tensor product of $\mathfrak{h}$. 
Elements of $\mathfrak{h}$ are spinors $\begin{pmatrix} u_\uparrow \\ u_\downarrow\end{pmatrix}$ with $u_\uparrow,u_\downarrow\in L^2(\mathbb{R}^3)$, equivalently, $u\cdot\begin{pmatrix} c_1 \\ c_2\end{pmatrix}$ with $u\in L^2(\mathbb{R}^3)$, $c_1,c_2\in\mathbb{C}$. With reference to the two actual hyperfine levels entering the experiment, we denote $|1,-1\rangle\equiv|\uparrow\rangle\equiv\begin{pmatrix} 1 \\ 0\end{pmatrix}$ and $|2,1\rangle\equiv|\downarrow\rangle\equiv\begin{pmatrix} 0 \\ 1\end{pmatrix}$. Through a very ingenious confining and cooling procedure, $N\sim 10^5$ atoms are prepared inside an optical trap and brought to complete condensation onto the one-body state $\begin{pmatrix} u_0 \\ 0\end{pmatrix}$. The experimental evidence is that \emph{no noticeable non-condensed fraction remains}, thus implying that the \emph{c}onfined and \emph{c}ooled many-body state $\Psi_{N,\mathrm{cc}}\in\mathcal{H}_N$ displays a 100\% macroscopic occupation of the orbital $\begin{pmatrix} u_0 \\ 0\end{pmatrix}$. Ideally this would mean that \begin{equation}\label{eq:initial0u} \Psi_{N,\mathrm{cc}}\;\sim\;\begin{pmatrix} u_0 \\ 0\end{pmatrix}^{\!\!\otimes N}\qquad (\,\|u_0\|_2\;=\;1\,) \end{equation} holds as an identity in $\mathcal{H}_N$, or at least in a thermodynamic limit $N\to \infty$. However, as customary in the mathematical formalisation of complete BEC \cite{LSeSY-ober,am_equivalentBEC}, it is more appropriate to infer the meaning of occupation numbers from the eigenvalues of the one-body reduced density matrix associated with the many-body state -- for otherwise even the negligible change of one single one-body orbital out of $N$ in the state $\begin{pmatrix} u_0 \\ 0\end{pmatrix}^{\!\!\!\!\otimes N}$ would result in a new state that would be essentially orthogonal to the original one. Let us recall that, associated to each $\Psi_{N}\in\mathcal{H}_N$, or more generally to each many-body density matrix $\gamma_N$ on $\mathcal{H}_N$, is the so-called one-body marginal (or one-body reduced density matrix) \begin{equation}\label{eq:partial_trace1} \gamma_N^{(1)}\;=\;\mathrm{Tr}_{N-1}\,\gamma_N\,, \end{equation} where the map $\mathrm{Tr}_{N-1}:\mathcal{B}_1(\mathcal{H}_N)\to\mathcal{B}_1(\mathfrak{h})$ is the \emph{partial trace} from trace class operators acting on $\mathcal{H}_N$ to trace class operators acting on $\mathfrak{h}$. $\mathrm{Tr}_{N-1}\,\gamma_N$ is defined, by duality, by \begin{equation}\label{eq:def_partial_trace_without_basis} \mathrm{Tr}_\mathfrak{h}(A\cdot\mathrm{Tr}_{N-1}\,\gamma_N)\;=\;\mathrm{Tr}_{\mathcal{H}_N}\big((A\otimes\mathbbm{1}_{N-1})\cdot \gamma_N\big)\qquad\forall A\in\mathcal{B}(\mathfrak{h}) \end{equation} (here $\mathcal{B}$ denotes the bounded linear operators on $\mathfrak{h}$ and $\mathcal{B}_1$ the corresponding trace class). In terms of an arbitrary orthonormal basis $(\Xi_k)_k$ of $\mathcal{H}_{N-1,\mathrm{sym}}$ one then has \begin{equation}\label{eq:def_partial_trace_with_basis} \langle \varphi,(\mathrm{Tr}_{N-1}\,\gamma_N)\psi\rangle_\mathfrak{h}\;=\;\sum_{k}\langle\varphi\otimes\Xi_k,\gamma_N\,\psi\otimes\Xi_k\rangle_{\mathcal{H}_{N-1,\mathrm{sym}}}\qquad\forall\varphi,\psi\in\mathfrak{h}\,, \end{equation} and the l.h.s.~of \eqref{eq:def_partial_trace_with_basis} is independent of the choice of the basis. Thus, $\gamma_N^{(1)}$ is obtained by ``tracing out'' $N-1$ degrees of freedom from $\gamma_N$.
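The following toy computation (a schematic numpy illustration, restricted to the spin degrees of freedom and to a small $N$; it is not part of the analysis) makes the preceding remark concrete: replacing a single one-body orbital in a fully factorised state produces a many-body vector orthogonal to the original one, while the eigenvalues of its one-body marginal, i.e.~the occupation numbers, are still $(N-1)/N$ and $1/N$.
\begin{verbatim}
# Toy illustration (spin degrees of freedom only, small N): the one-body
# marginal, not the many-body overlap, is the object that detects complete
# condensation.  All states and dimensions below are illustrative.
import numpy as np
from functools import reduce

N = 6
u = np.array([1.0, 0.0])     # one-body state, e.g. |up>
v = np.array([0.0, 1.0])     # an orthogonal one-body state

def product_state(orbitals):
    return reduce(np.kron, orbitals)

Psi_cond = product_state([u] * N)        # u (x) u (x) ... (x) u

# Symmetrised vector with exactly one particle moved from u to v:
Psi_one = sum(product_state([u] * j + [v] + [u] * (N - 1 - j))
              for j in range(N))
Psi_one = Psi_one / np.linalg.norm(Psi_one)

print("|<u^N, Psi_one>|^2 =", abs(Psi_cond @ Psi_one) ** 2)   # exactly 0

def one_body_marginal(Psi):
    A = Psi.reshape(2, 2 ** (N - 1))   # first tensor factor vs. the rest
    return A @ A.conj().T              # Tr_{N-1} |Psi><Psi|

for label, Psi in (("u^N            ", Psi_cond), ("one orbital off", Psi_one)):
    occ = np.sort(np.linalg.eigvalsh(one_body_marginal(Psi)))[::-1]
    print(label, "occupation numbers:", np.round(occ, 3))
\end{verbatim}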
As a non-negative, bounded, and self-adjoint operator on $\mathfrak{h}$, $\gamma_N^{(1)}$ has a complete set of real non-negative eigenvalues that sum up to 1, that is, there is an orthonormal basis $(\varphi_j^{(N)})_{j=0}^\infty$ of $\mathfrak{h}$ consisting of eigenvectors of $\gamma_N^{(1)}$ with eigenvalues $(n_j^{(N)})_{j=0}^\infty$\, so that \begin{equation} \begin{array}{c} \gamma_N^{(1)}\;=\;\sum_{j=0}^\infty\,n_j^{(N)}\,|\varphi_j^{(N)}\rangle\langle\varphi_j^{(N)}|\,, \\ 1\geqslant n_0^{(N)}\geqslant n_1^{(N)}\geqslant\cdots\geqslant 0\,,\qquad\sum_{j=0}^\infty n_j^{(N)}=1\,. \end{array} \end{equation} Thanks to the bosonic symmetry, each such eigenvalue has the natural interpretation of \emph{occupation number}: indeed, since the one-body observable (given by the global symmetrization of) $\mathcal{O}_j\equiv|\varphi_j^{(N)}\rangle\langle\varphi_j^{(N)}|\otimes\mathbbm{1}_{N-1}$ has expectation $n_j^{(N)}$ in the many-body state $\gamma_N$, as follows from (\ref{eq:def_partial_trace_without_basis}), then $n_j^{(N)}$ expresses \emph{in the sense of the reduced density matrix}, the fraction of particles of the many-body state $\gamma_N$ which occupy the one-body state $\varphi_j^{(N)}$. Complete BEC of the many-body state $\Psi_N$ onto the one-body orbital $\varphi_0\in\mathfrak{h}$ is by definition the occurrence $n_0^{(N)}\sim 1$, $\varphi_0^{(N)}\sim\varphi_0$ and $\gamma_N^{(1)}\sim |\varphi_0\rangle\langle\varphi_0|$, where an underlying thermodynamic limit $N\to\infty$ is tacitly assumed. This is so in the ideal case of a completely factorised $\Psi_N=\varphi_0^{\otimes N}$, however $\gamma_N^{(1)}\sim |\varphi_0\rangle\langle\varphi_0|$ is a much weaker statement, since an amount of correlations that are not negligible in the many-body norm may be present in $\Psi_N$. Thus, the many-body state $\Psi_{N,\mathrm{cc}}$ after the initial confinement and cooling is prepared as in \eqref{eq:initial0u} in the sense of the reduced density matrix, that is, \begin{equation} \gamma_{N,\mathrm{cc}}^{(1)}\;\approx\;\Big|\!\begin{pmatrix} u_0 \\ 0\end{pmatrix}\!\Big\rangle\Big\langle\!\begin{pmatrix} u_0 \\ 0\end{pmatrix}\!\Big|\;=\;\begin{pmatrix} |u_0\rangle\langle u_0|\;\; & \mathbb{O} \\ \mathbb{O}\;\; & \mathbb{O}\end{pmatrix}. \end{equation} Here $u_0$ is the minimiser of a suitable energy functional, which corresponds to the fact that $\Psi_{N,\mathrm{cc}}$ is a ground state, in the sector of `all spin up particles', of the (effective) many-body Hamiltonian \begin{equation}\label{eq:HNprep} \sum_{j=1}^N\big(-{\textstyle\frac{\:\hbar^2}{\,2m\,}}\,\Delta_j+U^\mathrm{trap}(x_j)\big)\;+\!\sum_{1\leqslant j<k\leqslant N}V(x_j-x_k)\, \end{equation} where $V$ is the potential for the two-body interaction that depends only on the particle spatial configuration, and \begin{equation} U^\mathrm{trap}(x)\;=\;\;\begin{pmatrix} U^{\mathrm{trap}}_\uparrow(x) & 0 \\ 0 & U^{\mathrm{trap}}_\downarrow(x)\end{pmatrix \end{equation} models the (typically harmonic) external trapping potential. Since $\Psi_{N,\mathrm{cc}}$ consists (essentially) only of `spin up' particles, it is only subject to the confining potential $U^{\mathrm{trap}}_\uparrow$. In fact, in the experiment $U^{\mathrm{trap}}_\uparrow(x)\approx U^{\mathrm{trap}}_\downarrow(x)$ to within $\sim$0.3\%. 
The next step of the experiment is a `two-photon transition', consisting of a very quick ($\sim400\mu\mathrm{s}$) pulse of an external oscillating radiofrequency field that couples with the spin of the particles and is tuned close to the hyperfine splitting energy $2V_{\mathrm{hf}}$ between the two levels, so as to connect the $|1,-1\rangle$ state to the $|2,1\rangle$, with an action on each spinor which in the `rotating wave approximation' is generated by \begin{equation}\label{eq:2ph} \mathrm{i}\hbar\partial_t\begin{pmatrix} \psi_1 \\ \psi_2\end{pmatrix}\;=\;\begin{pmatrix} -V_{\mathrm{hf}} & \hbar\Omega \,e^{\mathrm{i}\hbar\omega t} \\ \hbar\Omega \,e^{-\mathrm{i}\hbar\omega t} & V_{\mathrm{hf}}\end{pmatrix} \begin{pmatrix} \psi_1 \\ \psi_2\end{pmatrix},\quad V_{\mathrm{hf}}\;=\;{\textstyle\frac{1}{2}}\hbar\omega\,,\qquad \begin{pmatrix} \psi_1(0) \\ \psi_2(0)\end{pmatrix}\;=\;\begin{pmatrix} u_0 \\ 0\end{pmatrix}, \end{equation} with $\Omega\sim 2\pi\cdot 625\,\mathrm{Hz}$ and $\omega\sim 2\pi\cdot 6.8\,\mathrm{GHz}$. (Thus, $\omega\gg\Omega$, as appropriate for the rotating wave approximation.) The evolution governed by \eqref{eq:2ph} involves only the spin degrees of freedom, based on the fact that the duration of the applied pulse is much shorter than the characteristic time for the spatial wave function of each spinor to change appreciably. That this is actually so as a consequence of first principles must of course be demonstrated. With the transformation \begin{equation} \begin{pmatrix} \widetilde{\psi}_1 \\ \widetilde{\psi}_2\end{pmatrix}\;:=\; \begin{pmatrix} e^{-\frac{1}{2}\mathrm{i}\hbar\omega t} & 0 \\ 0 & e^{\frac{1}{2}\mathrm{i}\hbar\omega t}\end{pmatrix} \begin{pmatrix} \psi_1 \\ \psi_2\end{pmatrix} \end{equation} \eqref{eq:2ph} reads \begin{equation}\label{eq:2ph2} \mathrm{i}\hbar\partial_t\begin{pmatrix} \widetilde{\psi}_1 \\ \widetilde{\psi}_2\end{pmatrix}\;=\;\begin{pmatrix} 0 & \hbar\Omega\\ \hbar\Omega & 0\end{pmatrix} \begin{pmatrix} \widetilde{\psi}_1 \\ \widetilde{\psi}_2\end{pmatrix}\,,\qquad \begin{pmatrix} \widetilde{\psi}_1(0) \\ \widetilde{\psi}_2(0)\end{pmatrix}\;=\;\begin{pmatrix} u_0 \\ 0\end{pmatrix}, \end{equation} whence \begin{equation} \begin{pmatrix} e^{-\frac{1}{2}\mathrm{i}\hbar\omega t}\psi_1(t) \\ e^{\frac{1}{2}\mathrm{i}\hbar\omega t}\psi_2(t)\end{pmatrix}\;=\; \begin{pmatrix} \widetilde{\psi}_1(t) \\ \widetilde{\psi}_2(t)\end{pmatrix}\;=\; \begin{pmatrix} \cos\Omega t & -\mathrm{i}\,\sin\Omega t \\ -\mathrm{i}\,\sin\Omega t & \cos\Omega t\end{pmatrix}\begin{pmatrix} \psi_1(0) \\ \psi_2(0)\end{pmatrix}\;=\;\begin{pmatrix} u_0\,\cos\Omega t \\ -\mathrm{i}\,u_0\,\sin\Omega t\end{pmatrix}. \end{equation} It is worth underlying that in the course of the two-photon transition the external field couples simultaneously with the spin of \emph{each} particle and yields then a many-body Schr\"{o}dinger equation that is the product of $N$ copies of \eqref{eq:2ph}). The particles in the (almost) factorised state $\Psi_{N,\mathrm{cc}}$ are therefore all rotated the same way and at the end of this phase, say, at time $t_0$, the many-body state is then transformed into a `\emph{r}otated' one as \begin{equation}\label{eq:psicc-psir} \Psi_{N,\mathrm{cc}}\;\longmapsto\;\Psi_{N,\mathrm{r}}\;\sim\;\begin{pmatrix} u_0\,e^{\frac{1}{2}\mathrm{i}\hbar\omega t_0}\,\cos\Omega t_0 \\ -u_0\,\mathrm{i}\,e^{-\frac{1}{2}\mathrm{i}\hbar\omega t_0}\,\sin\Omega t_0\end{pmatrix}^{\!\!\!\otimes N}\,. 
\end{equation} $\Psi_{N,\mathrm{r}}$ still exhibits complete BEC, however on a one-body orbital with the same spatial wave-function and rotated spin. Whereas the measurement process, that we shall describe later below and is a destructive procedure, may take place already at this stage for the state $\Psi_{N,\mathrm{r}}$, other steps may be also performed in the experiment, before the final measurement. One possible further step is to let $\Psi_{N,\mathrm{r}}$ relax in the trap until it reaches the ground state of the Hamiltonian \eqref{eq:HNprep} -- having been rotated, now $\Psi_{N,\mathrm{r}}$ is not an eigenstate any more for \eqref{eq:HNprep}. It is expected and observed (and need be proved also from first principles) that this relaxation does not alter substantially the almost factorised structure and produces a \emph{r}otated and \emph{r}elaxed many-body state \begin{equation}\label{eq:psirr} \Psi_{N,\mathrm{cc}}\;\longmapsto\;\Psi_{N,\mathrm{r}}\;\longmapsto\;\Psi_{N,\mathrm{rr}}\;\sim\;\begin{pmatrix} u_0' \\ v_0'\end{pmatrix}^{\!\!\!\otimes N},\qquad\|u_0'\|_2^2+\|v_0'\|_2^2=1\,. \end{equation} Because of the actual experimental values of the trapping and interaction potentials in \eqref{eq:HNprep}, typically $u_0'$ and $v_0'$ are essentially supported on almost disjoint regions of space -- a phenomenon customarily referred to as \emph{phase separation} -- which in particular makes them orthogonal: $u_0'\perp v_0'$. Another possible further step in the experiment, right after the rotation or the relaxation, consists of switching off the confinement too ($U^{\mathrm{trap}}\equiv 0$) and letting the gas expand hydrodynamically, subject only to the mutual inter-particle interaction, for a period of some $\sim20\,\mathrm{ms}$. In the course of this expansion spatial correlations are developed between the $N$ initially factorised spinors: it is an experimental evidence, that one expects to demonstrate also from first principles, that for times that are much smaller then the time scale at which the condensate deteriorates completely the (almost) factorised structure of the many-body state is essentially preserved. This part of the experiment then produces an \emph{e}xpanded condensate with many-body state \begin{equation}\label{eq:psie} \Psi_{N,\mathrm{cc}}\;\longmapsto\;\Psi_{N,\mathrm{r}}\;(\textrm{or }\Psi_{N,\mathrm{r}})\;\longmapsto\;\Psi_{N,\mathrm{e}}\;\sim\;\begin{pmatrix} u \\ v \end{pmatrix}^{\!\!\!\otimes N},\qquad\|u\|_2^2+\|v\|_2^2=1\,. \end{equation} At the end of the rotation, or the relaxation, or the expansion, information is read out of the many-body state ($\Psi_{N,\mathrm{r}}$, or $\Psi_{N,\mathrm{rr}}$, or $\Psi_{N,\mathrm{e}}$) with a procedure that is destructive, however the excellent reproducibility of the condensate $\Psi_{N,\mathrm{cc}}$ allows one to repeat the measurement for various different times. This process can be effectively described in a \emph{larger} one-body Hilbert space than $\mathfrak{h}$ given in \eqref{eq:h-one-body}, for a \emph{third} hyperfine level in the $5 P_{3/2}\;F=3$ manifold is allowed to be reached. 
One then considers the space \begin{equation \mathfrak{h}'\;:=\;L^2(\mathbb{R}^3)\otimes\mathbb{C}^3\;\cong\;L^2(\mathbb{R}^3)\oplus L^2(\mathbb{R}^3)\oplus L^2(\mathbb{R}^3) \end{equation} where the previous two spinors $|\uparrow\rangle$ and $|\downarrow\rangle$ and the third one $|\mathfrak{m}\rangle$ used for the measurement are identified as \begin{equation} |\uparrow\rangle\equiv\begin{pmatrix} 1 \\ 0 \\ 0\end{pmatrix}\,,\qquad |\downarrow\rangle\equiv\begin{pmatrix} 0 \\ 1 \\ 0\end{pmatrix}\,,\qquad |\mathfrak{m}\rangle\equiv\begin{pmatrix} 0 \\ 0 \\ 1\end{pmatrix}\,. \end{equation} At this effective level, the possible experimental manipulations in this process are: \begin{itemize} \item `pumping' action on $\mathfrak{h}'$ (which is implemented by a short pulse of `repump' light), the effect of which is to produce the change \[ \begin{pmatrix} u \\ v \\ 0\end{pmatrix}\;\stackrel{P}{\longmapsto}\;\begin{pmatrix} 0 \\ u+v \\ 0\end{pmatrix}; \] \item `blowing' action on $\mathfrak{h}'$ (which is implemented by a $\sim2\,\mathrm{ms}$, $\sim60\,\mu\mathrm{W}/\mathrm{cm}^2$ pulse of light that brings $|\downarrow\rangle\mapsto|\mathfrak{m}\rangle$ and has no effect on the $|\uparrow\rangle$ atoms, and `blows away' particles in the hyperfine level $|\mathfrak{m}\rangle$ to a region far from the imaging region, hence practically out of the system), the effect of which is to produce the change \[ \begin{pmatrix} u \\ v \\ 0\end{pmatrix}\;\stackrel{B}{\longmapsto}\;\begin{pmatrix} u \\ 0 \\ 0\end{pmatrix}; \] \item `probing' action on $\mathfrak{h}'$ (which is implemented with a $\sigma^+$ circularly polarised probe beam at $\sim17$ MHz that brings $|\downarrow\rangle\mapsto|\mathfrak{m}\rangle$, while atoms in the $|\uparrow\rangle$ state are far (6.8 GHz) from resonance and invisible to the probe beam), the effect of which is to produce the change \[ \begin{pmatrix} u \\ v \\ 0\end{pmatrix}\;\stackrel{C}{\longmapsto}\;\begin{pmatrix} u \\ 0 \\ v\end{pmatrix}; \] \item and the actual measurement process, that consists of imaging the shadow of the above-mentioned circularly polarised probe beam onto a charged-coupled device camera (CCD) and hence corresponds first to projecting each spinor orthogonally onto the level $|\mathfrak{m}\rangle$, and then to performing one-body position observations (this is indeed the set of data that can read out from the CCD) tracing out the spin degrees of freedom: symbolically, \[ \begin{pmatrix} u \\ v \\ w\end{pmatrix}\;\stackrel{S}{\longmapsto}\;\begin{pmatrix} 0 \\ 0 \\ w\end{pmatrix}\;\stackrel{}{\longmapsto}\;|w\rangle\langle w|\,. \] \end{itemize} This way, spatial measurements for the level $|\uparrow\rangle$ are done via the sequence \begin{equation} \begin{pmatrix} u \\ v \\ 0\end{pmatrix}\;\stackrel{B}{\longmapsto}\;\begin{pmatrix} u \\ 0 \\ 0\end{pmatrix}\;\stackrel{P}{\longmapsto}\;\begin{pmatrix} 0 \\ u \\ 0\end{pmatrix}\;\stackrel{C}{\longmapsto}\;\begin{pmatrix} 0 \\ 0 \\ u\end{pmatrix}\;\stackrel{S+CCD}{\longmapsto}\;\;|u\rangle\langle u|\,, \end{equation} and spatial measurements for the level $|\downarrow\rangle$ are done via a similar sequence that does not include the repump procedure, namely \begin{equation} \begin{pmatrix} u \\ v \\ 0\end{pmatrix}\;\stackrel{C}{\longmapsto}\;\begin{pmatrix} u \\ 0 \\ v\end{pmatrix} \;\stackrel{S+CCD}{\longmapsto}\;|v\rangle\langle v|\,. 
\end{equation} Since, as we remark once again, all the above-mentioned preparation and measurement procedures involve \emph{simultaneously each spinor} (i.e., it is experimentally impossible to act selectively on some spinors, no multi-body observable of that sort is available), only states of the form $\begin{pmatrix} u \\ v \end{pmatrix}^{\!\!\!\otimes N}$ are manipulated throughout and are accessed to, in the sense of one-body reduced density matrices. For example, no initial state of the form \begin{equation} \Big(\begin{pmatrix} w_1 \\ 0 \end{pmatrix}^{\!\!\!\otimes N_1}\!\!\otimes\begin{pmatrix} 0 \\ w_2 \end{pmatrix}^{\!\!\!\otimes N_2}\Big)_{\mathrm{sym}}\qquad (\|w_1\|_2=\|w_2\|_2=1,\,N_1+N_2=N) \end{equation} is preparable in this context. Let us then survey what one-body interpretation is possible for the $N$-body states of interest. First, as discussed already, a generic (pure) state $\Psi_N\in\mathcal{H}_{N,\mathrm{sym}}$ which is accessible only by one-body observables is effectively described by the one-body marginal \eqref{eq:partial_trace1}, where $N-1$ one-body degrees of freedom have been traced out: $\gamma_N^{(1)}\;=\;\mathrm{Tr}_{N-1}\,|\Psi_N\rangle\langle\Psi_N|$. Thus, when we deal with pseudo-spinor condensates we shall be only concerned with the class of marginals of the form \begin{equation} \gamma_N^{(1)}\;\approx\;\Big|\!\begin{pmatrix} u \\ v\end{pmatrix}\!\Big\rangle\Big\langle\!\begin{pmatrix} u \\ v\end{pmatrix}\!\Big|\;=\;\begin{pmatrix} |u\rangle\langle u| & |u\rangle\langle v| \\ |v\rangle\langle u| & |v\rangle\langle v|\end{pmatrix},\qquad \|u\|_2^2+\|v\|_2^2=1 \end{equation} (asymptotically in $N$). As $\gamma^{(1)}_N$ is a density matrix acting on the one-body Hilbert space $\mathfrak{h}=L^2(\mathbb{R}^3)\otimes\mathbb{C}^2$, natural observables to be evaluated in $\gamma_N^{(1)}$ are orbital-only (i.e., trivial on the spin sector $\mathbb{C}^2$) or spin-only (i.e., trivial on the orbital sector $L^2(\mathbb{R}^3)$). This is precisely what discussed above concerning the experiments: in the measurement process, the $P,B,C,S$ operations are spin-only, whereas the imaging onto the CCD camera is orbital-only. Thus, the expectation on $\gamma_N^{(1)}$ of the spin-only observable of `spin-up' particle is \begin{equation} \mathrm{Tr}_{L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\Big(\gamma_N^{(1)}\cdot\big(\mathbbm{1}\otimes|\uparrow\rangle\langle\uparrow|\big)\Big)\;=\;\mathrm{Tr}_{\mathbb{C}^2}\big(\gamma_{N,\mathrm{spin}}^{(1)}\,|\uparrow\rangle\langle\uparrow|\big)\;=\;\|u\|_2^2 \end{equation} and the expectation on $\gamma_N^{(1)}$ of the spin-only observable of `spin-down' particle is \begin{equation} \mathrm{Tr}_{L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\Big(\gamma_N^{(1)}\cdot\big(\mathbbm{1}\otimes|\downarrow\rangle\langle\downarrow|\big)\Big)\;=\;\mathrm{Tr}_{\mathbb{C}^2}\big(\gamma_{N,\mathrm{spin}}^{(1)}\,|\downarrow\rangle\langle\downarrow|\big)\;=\;\|v\|_2^2\,, \end{equation} where \begin{equation} \gamma_{N,\mathrm{spin}}^{(1)}\;:=\;\mathrm{Tr}_{\mathrm{orb}}\gamma_{N}^{(1)}\;=\;\begin{pmatrix} \|u\|_2^2 & \langle v,u\rangle \\ \langle u,v\rangle & \|v\|_2^2\end{pmatrix} \end{equation} is the one-body spin-only reduced density matrix acting on $\mathbb{C}^2$ and the partial trace $\mathrm{Tr}_{\mathrm{orb}}$ traces out the orbital degrees of freedom. 
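As a schematic numerical check of these two expectations (with illustrative one-dimensional toy orbitals standing in for genuine elements of $L^2(\mathbb{R}^3)$, and a quadrature grid chosen arbitrarily), one may proceed as follows.
\begin{verbatim}
# Schematic check of the spin-only marginal gamma_spin; the 1D grid and the
# Gaussian profiles below are illustrative assumptions, not experimental data.
import numpy as np

x  = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

u = np.exp(-(x - 2.0) ** 2)            # orbital attached to |up>
v = 0.5 * np.exp(-(x + 2.0) ** 2)      # orbital attached to |down>

norm = np.sqrt(np.sum(np.abs(u) ** 2 + np.abs(v) ** 2) * dx)
u, v = u / norm, v / norm              # now ||u||^2 + ||v||^2 = 1

def inner(f, g):
    return np.sum(np.conj(f) * g) * dx

gamma_spin = np.array([[inner(u, u), inner(v, u)],
                       [inner(u, v), inner(v, v)]])

P_up, P_down = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
print("Tr gamma_spin      =", np.trace(gamma_spin).real)           # 1
print("spin-up  fraction  =", np.trace(gamma_spin @ P_up).real)    # ||u||^2
print("spin-down fraction =", np.trace(gamma_spin @ P_down).real)  # ||v||^2
\end{verbatim}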
Therefore, the many-body state (pseudo-spinor condensate) $\begin{pmatrix} u \\ v \end{pmatrix}^{\!\!\!\otimes N}$ is to be interpreted as an assembly of $N$ identical bosons for each of which the probability of occupying the level $|\uparrow\rangle$ is $\|u\|_2^2$ and the probability of occupying the level $|\downarrow\rangle$ is $\|v\|_2^2$. By combining this with the above discussion on the actual experiments, and in the sense discussed so far, we are to think of the state $\begin{pmatrix} u \\ v \end{pmatrix}^{\!\!\!\otimes N}$ as a many-body state of identical spin-$\frac{1}{2}$ bosons, of which $N\cdot\|u\|_2^2$ are the fraction of particles in the level $|\uparrow\rangle$ and (normalised) spatial orbital $\|u\|_2^{-1}u$, and $N\cdot\|v\|_2^2$ are those in the level $|\downarrow\rangle$ and (normalised) spatial orbital $\|v\|_2^{-1}v$. Joint spatial measurements for both the $|\uparrow\rangle$ and the $|\downarrow\rangle$ level can be performed too, through the sequence \begin{equation} \begin{pmatrix} u \\ v \\ 0\end{pmatrix}\;\stackrel{P}{\longmapsto}\;\begin{pmatrix} 0 \\ u+v \\ 0\end{pmatrix}\;\stackrel{C}{\longmapsto}\;\begin{pmatrix} 0 \\ 0 \\ u+v\end{pmatrix}\;\stackrel{S+CCD}{\longmapsto}\;\;|u+v\rangle\langle u+v|\,. \end{equation} This is a particularly informative measurement when the orbitals $u$ and $v$ are orthogonal, and hence $\|u+v\|_2^2=\|u\|_2^2+\|v\|_2^2=1$, as happens when each spinor relaxes with spatial separation: in this case $|u(x)|^2$ and $|v(x)|^2$ are the spatial density of each spinorial component, whereas $|u(x)|^2+|v(x)|^2$ gives the combined profile of the two. It is worth stressing the deliberately `weak' formulation of the preceding interpretation -- which is, at any rate, all what can be said concerning the class of preparable states of interest. If one was to measure the probability that the state $\gamma_N^{(1)}$ is precisely of the form $\begin{pmatrix} u \\ 0 \end{pmatrix}$ or of the form $\begin{pmatrix} 0 \\ v \end{pmatrix}$, this would be given by the numbers \[ \left\langle \begin{pmatrix} u \\ 0 \end{pmatrix},\,\gamma_N^{(1)}\begin{pmatrix} u \\ 0 \end{pmatrix}\right\rangle_{\!\!L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}=\;\|u\|_2^4\,,\qquad \left\langle \begin{pmatrix} 0 \\ v \end{pmatrix},\,\gamma_N^{(1)}\begin{pmatrix} 0 \\ v \end{pmatrix}\right\rangle_{\!\!L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}=\;\|v\|_2^4\,. \] With this analysis in mind, we are now ready to state the mathematical problem for the many-body Schr\"{o}dinger evolution of a state initially prepared with a one-body reduced marginal \begin{equation} \gamma_N^{(1)}\;\approx\;\Big|\!\begin{pmatrix} u_0 \\ v_0\end{pmatrix}\!\Big\rangle\Big\langle\!\begin{pmatrix} u_0 \\ v_0\end{pmatrix}\!\Big|\;=\;\begin{pmatrix} |u\rangle\langle u_0| & |u_0\rangle\langle v_0| \\ |v_0\rangle\langle u_0| & |v_0\rangle\langle v_0|\end{pmatrix},\qquad \|u_0\|_2^2+\|v_0\|_2^2=1 \end{equation} and to formulate our main results. This will be the object of the next Section. \section{Setting of the problem and main result}\label{sect:main} We consider a system of $N$ identical spin-$\frac{1}{2}$ bosons in three dimensions. The Hilbert space for this system is \begin{equation}\label{eq:HspaceHN} \mathcal{H}_N\;:=\;\left(L^2(\mathbb{R}^3)\otimes\mathbb{C}^2\right)^{\otimes_{\mathrm{sym}} N}\,, \end{equation} as already defined in \eqref{eq:h-one-body}-\eqref{eq:H-many-body}. 
The system is governed by a Hamiltonian $H$ consisting of a potential part, made of two-body spatial interaction potentials, plus the sum of $N$ one-body Hamiltonians containing a kinetic part, an external spatial trapping potential, and an interaction between the spin of each particle and an external magnetic field. Thus, in self-explanatory notation, and in suitable units, \begin{equation}\label{eq:unscaled_H} H\;=\;\sum_{j=1}^N\Big(-\Delta_{x_j}+U^{\mathrm{trap}}(x_j)+\mathbf{B}(x_j,t)\cdot\mathbf{\sigma}_j\Big)+\sum_{j<k}^NV(x_j-x_k)\,. \end{equation} Clearly, the part $\sum_{j=1}^N\big(-\Delta_{x_j}\big)+\sum_{j<k}^NV(x_j-x_k)$ only acts non-trivially on the spatial degrees of freedom of an element in $\mathcal{H}_N$. The external potential is matrix-valued, of the form \begin{equation} U^{\mathrm{trap}}(x)\;:=\;\begin{pmatrix} U^{\mathrm{trap}}_\uparrow(x) & 0 \\ 0 & U^{\mathrm{trap}}_\downarrow(x)\end{pmatrix}, \end{equation} so as to possibly act in a different manner on the spatial parts of each spinor, while inducing no spin flipping. For the $j$-th particle, $\mathbf{\sigma}_j$ denotes the vector $\mathbf{\sigma}=(\sigma^x,\sigma^y,\sigma^z)$ of the Pauli matrices \[ \sigma^x=\begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix}, \quad \sigma^y=\begin{pmatrix} 0 & -\mathrm{i} \\ \mathrm{i} & 0\end{pmatrix}, \quad \sigma^z=\begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix} \] relative to the $j$-th spin degree of freedom, thus acting as the identity on all other spin degrees of freedom. The external magnetic field is the real-valued vector field \begin{equation} \mathbf{B}(x,t)\;:=\;(B_1(x,t),B_2(x,t),-V_{\mathrm{hf}}(x,t)) \end{equation} for suitable functions depending on space and time. The notation is chosen consistently with the experiments -- see \eqref{eq:2ph} above -- where $V_{\mathrm{hf}}(x)\equiv V_{\mathrm{hf}}$ is a uniform field inducing the splitting between the two hyperfine levels, and $B_1(x,t)-\mathrm{i} B_2(x,t)\equiv \Omega e^{\mathrm{i}\omega t}$ is the so-called Rabi field for the spin flipping. No confusion should occur between the hyper-fine coupling potential $V_{\mathrm{hf}}$ and the pair interaction potential $V$. The exact control of the dynamics generated by the Hamiltonian $H$ at finite (large) $N$ is clearly out of reach, both analytically and numerically; rigorous conclusions are therefore sought in the thermodynamic limit $N\to\infty$. This is part of a long-standing major mathematical problem for the dynamics of a Bose gas \cite{S-2007,S-2008,Benedikter-Porta-Schlein-2015}, and in fact no rigorous control of the thermodynamic limit is known so far. What is mathematically doable and physically still meaningful is to mimic the actual thermodynamic limit with some caricature of it realised by scaling the Hamiltonian with $N$ in such a way as to retain at any $N$ an amount of relevant physical features of the system \cite{am_GPlim}. To this aim, we re-scale, as customary, the pair interaction potential $V$ in \eqref{eq:unscaled_H} through the so-called `Gross-Pitaevskii scaling' \cite[Chapter 5]{LSeSY-ober}, thus replacing $V$ with the function \begin{equation}\label{eq:GPscaling} V_N(x)\;:=\;N^2 V(Nx)\,. \end{equation} This produces a realistic model for a Bose gas that is very dilute (the effective range and the scattering length of $V_N$ scale as $N^{-1}$, thus much smaller than the mean inter-particle distance $N^{-1/3}$) and with a strong interaction ($\|V_N\|_{\infty}\sim N^2$).
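For the reader's convenience, let us sketch the elementary scaling computation behind the last claim (the notions of zero-energy scattering solution and scattering length are recalled more systematically in Section \ref{sect:scattering}): if $f$ solves the zero-energy scattering problem $\big(-\Delta+\frac{1}{2}V\big)f=0$ with $f(x)\to1$ as $|x|\to\infty$, then $f(N\,\cdot)$ solves the analogous problem for $V_N$, since \[ \Big(\big(-\Delta+{\textstyle\frac{1}{2}}V_N\big)f(N\,\cdot)\Big)(x)\;=\;N^2\Big((-\Delta f)(Nx)+{\textstyle\frac{1}{2}}V(Nx)f(Nx)\Big)\;=\;0\,, \] whence, by the change of variable $y=Nx$, the scattering length of $V_N$ is \[ \frac{1}{8\pi}\int_{\mathbb{R}^3}\mathrm{d} x\,V_N(x)f(Nx)\;=\;\frac{1}{8\pi N}\int_{\mathbb{R}^3}\mathrm{d} y\,V(y)f(y)\;=\;\frac{a}{N}\,, \] $a$ being the scattering length of the unscaled potential $V$; likewise, the support of $V_N$, and with it the effective range, is that of $V$ contracted by a factor $N$.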
Moreover, by regarding $V_N$ as an approximate delta-distribution with a mean-field pre-factor, namely, $V_N(x)\sim N^{-1}\big(\int V\big)\,\delta(x)$, one concludes that the contributions of the kinetic part of $H$ (made of $N$ terms) and of the potential part of $H$ ($\sim N^2$ terms) are both of order $\mathcal{O}(N)$, which makes the dynamical problem non-trivial also in the limit. Analogous energetic considerations can be made for the typical ground state energy of a dilute Bose gas, which for fairly general interactions is well-known to be asymptotically given by density $\times$ scattering length, and hence in this scaling is a $\mathcal{O}(N)\times\mathcal{O}(N^{-1})=\mathcal{O}(1)$ quantity. We therefore re-write the Hamiltonian as the $N$-dependent operator \begin{equation}\label{eq:scaled_H} H_N\;:=\;\sum_{j=1}^N(-\Delta_{x_j}+S(x_j,t))+N^2\sum_{j<k}^NV(N(x_j-x_k)) \end{equation} acting self-adjointly on $\mathcal{H}_N$, having set for convenience \begin{equation}\label{eq:matrixS} S(x,t)\;:=\;\begin{pmatrix} U^{\mathrm{trap}}_\uparrow(x)-V_{\mathrm{hf}}(x,t)& & B_1(x,t)-\mathrm{i} B_2(x,t) \\\\ B_1(x,t)+\mathrm{i} B_2(x,t) && U^{\mathrm{trap}}_\downarrow(x)+V_{\mathrm{hf}}(x,t)\end{pmatrix} \end{equation} (observe that $S$ coincides formally with its adjoint), and we consider the Cauchy problem for the associated (linear) Schr\"odinger equation \begin{equation}\label{eq:Cauchy_problem} \begin{cases} \;\;\mathrm{i} \partial_t\Psi_N(t)\;=\;H_N\Psi_N(t)\\ \;\;\Psi_N(0)\;=\;\Psi_{N,0} \end{cases} \end{equation} for a given initial datum $\Psi_{N,0}$. Since $H_N$ may depend on time, suitable conditions on the potential $S(x,t)$ will be assumed so as to ensure that the solution to \eqref{eq:Cauchy_problem} exists and is unique in the strong sense for any time. Following the discussion of Section \ref{sec:intro}, we are concerned with the class of initial data of the form \eqref{eq:initial0u}, \eqref{eq:psicc-psir}, \eqref{eq:psirr}, or \eqref{eq:psie}, that is, $N$-body states whose associated one-body reduced density matrix $\gamma_{N,0}^{(1)}$, defined as in \eqref{eq:partial_trace1}-\eqref{eq:def_partial_trace_with_basis} above, is asymptotically a rank-one projection, more precisely \begin{equation}\label{eq:initialgamma} \lim_{N\to\infty}\gamma_{N,0}^{(1)}\;=\;\Big|\!\begin{pmatrix} u_0 \\ v_0\end{pmatrix}\!\Big\rangle\Big\langle\!\begin{pmatrix} u_0 \\ v_0\end{pmatrix}\!\Big|\,, \qquad \|u_0\|_2^2+\|v_0\|_2^2=1\,, \end{equation} for given one-body orbitals $u_0$ and $v_0$ in $L^2(\mathbb{R}^3)$. Even if a priori the limit in \eqref{eq:initialgamma} can be stated in several inequivalent operator topologies, from the trace norm to the weak operator topology, the bounds \begin{equation}\label{eq:equivalent-BEC-control} 1-\langle\varphi,\gamma_N^{(1)}\varphi\rangle_{\!L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\;\leqslant\;\mathrm{Tr}_{\!L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\big|\,\gamma_{N}^{(1)}-|\varphi\rangle\langle\varphi|\,\big|\;\leqslant\;2\sqrt{1-\langle\varphi,\gamma_N^{(1)}\varphi\rangle_{\!L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}} \end{equation} (see, e.g., \cite[Eq.~(1.8)]{M-Olg-2016_2mixtureMF}) show that such a convergence can be monitored equivalently in any of them. While \eqref{eq:initialgamma} encodes as desired the assumption of complete occupation of the one-body spinor $\begin{pmatrix} u_0 \\ v_0\end{pmatrix}$, it does not yet select the appropriate energy scale for the initial datum, compatibly with the adopted scaling limit.
Indeed, \eqref{eq:initialgamma} also includes completely factorised (uncorrelated) many-body states $\begin{pmatrix} u_0 \\ v_0\end{pmatrix}^{\!\!\!\otimes N}\!\!\in \mathcal{H}_N$, for which however the scaling \eqref{eq:GPscaling} yields anomalously large asymptotics for the expectation of powers $(H_N/N)^k$ ($k\in\mathbb{N}$) of the energy per particle operator. For example, one finds a linear-in-$N$ energy expectation $\langle H_N\rangle\sim N$ with a constant of proportionality macroscopically different from the expected one (due to the emergence in the limit of the first Born approximation $\frac{1}{8\pi}\int_{\mathbb{R}^3} V\,\mathrm{d} x$ of the scattering length $a$ of $V$); analogously, one finds an anomalously large cubic-in-$N$ expectation $\langle H_N^2\rangle\sim N^3$ (it is the potential part in the Hamiltonian \eqref{eq:scaled_H} that gives such a contribution). These behaviours are not typical of the ground state of a Bose gas and are due to the lack of short-scale correlations in the factorised $N$-body state, the presence of which would instead compensate the singular short scale behaviour of $V_N$ as $N\to\infty$. In fact, short-scale correlations are shown to form dynamically in a very short transient of time \cite{EMS-2008}: in full analogy to the one-component condensation \cite[Chapter 6]{LSeSY-ober}, it is rather to be expected, and we shall include that in the assumptions on the initial states, that in terms of the many-body energy per particle \begin{equation}\label{eq:manybody_en_part} \mathcal{E}_N[\Psi_N]\;:=\;\frac{1}{N}\langle\Psi_N,H_N\Psi_N\rangle\,,\qquad\Psi_N\in\mathcal{H}_N\,, \end{equation} and of the \emph{two-component Gross-Pitaevskii energy functional} \begin{equation}\label{eq:GPfunctional} \begin{split} \mathcal{E}^{\mathrm{GP}}[u,v]\;:=\;& \int_{\mathbb{R}^3}\!\mathrm{d} x\,\Big(|\nabla u|^2 +|\nabla v|^2 + 4\pi a\big( |u|^4+2 |u|^2|v|^2+ |v|^4\big)\Big)\\ &\quad +\int_{\mathbb{R}^3}\!\mathrm{d} x\, \Big\langle\!\!\begin{pmatrix} u \\ v \end{pmatrix}\!,\,S(t)\! \begin{pmatrix} u\\ v \end{pmatrix}\!\!\Big\rangle_{\!\mathbb{C}^2}\qquad u,v\in L^2(\mathbb{R}^3)\,, \end{split} \end{equation} where $a$ is the ($s$-wave) scattering length associated to the potential $V$, the initial state at $t=0$ satisfies the asymptotics \eqref{eq:initialgamma} \emph{and} \begin{equation}\label{eq:asymptotic_energy} \mathcal{E}_N[\Psi_{N,0}]\;\xrightarrow[]{\;\;\;\;N\to\infty\;\;}\;\mathcal{E}^{\mathrm{GP}}[u_0,v_0]\,. \end{equation} This is inspired by the intuition that the ground state energy of $H_N$ is indeed captured asymptotically by the minimum of the functional $\mathcal{E}^{\mathrm{GP}}$, which is a theorem for BEC in one component \cite{LSeSY-ober}, and by the intention to explore the many-body dynamics of initial states that are prepared close to the ground state. For the time being, \eqref{eq:initialgamma} and \eqref{eq:asymptotic_energy} form a working hypothesis that, at a high level of confidence, is expected to cover precisely the class of initial states of the experiments.
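Before moving on, let us substantiate the above claim on the anomalous energy of purely factorised states with an elementary computation (a side remark, not needed in the proofs). For $\Psi_N=\begin{pmatrix} u_0 \\ v_0\end{pmatrix}^{\!\!\otimes N}$ and $\rho_0:=|u_0|^2+|v_0|^2$, the interaction energy per particle reads \[ \frac{1}{N}\sum_{j<k}^N\langle\Psi_N,V_N(x_j-x_k)\Psi_N\rangle\;=\;\frac{N-1}{2}\iint_{\mathbb{R}^3\times\mathbb{R}^3}\mathrm{d} x\,\mathrm{d} y\;V_N(x-y)\,\rho_0(x)\,\rho_0(y)\,, \] and since $\int_{\mathbb{R}^3}V_N=N^{-1}\int_{\mathbb{R}^3}V$, the right-hand side converges, as $N\to\infty$ and for sufficiently regular $u_0,v_0$, to \[ \frac{1}{2}\Big(\int_{\mathbb{R}^3}V\Big)\int_{\mathbb{R}^3}\rho_0^2\;=\;4\pi\Big(\frac{1}{8\pi}\int_{\mathbb{R}^3}V\Big)\int_{\mathbb{R}^3}\rho_0^2\,, \] i.e., to the interaction term of $\mathcal{E}^{\mathrm{GP}}[u_0,v_0]$ with the scattering length $a$ replaced by its first Born approximation $\frac{1}{8\pi}\int_{\mathbb{R}^3}V$, which for a repulsive $V$ overestimates $a$, consistently with the discussion above.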
At later times $t>0$ the many-body evolution $\Psi_{N,t}$, i.e., the solution to \eqref{eq:Cauchy_problem} with initial datum $\Psi_{N,0}$, is observed in the experiments to preserve complete BEC, in the sense of one-body marginals, onto a time-dependent one-body spinor $\begin{pmatrix} u_t\\ v_t \end{pmatrix}$ whose behaviour is governed by the following system of coupled non-linear cubic Schr\"{o}dinger equations \cite{Ketterle_StamperKurn_SpinorBEC_LesHouches2001, Malomed2008_multicompBECtheory, Hall2008_multicompBEC_experiments,StamperKurn-Ueda_SpinorBose_Gases_2012, pita-stringa-2016} \begin{equation}\label{eq:GPsystem_extended} \begin{split} i\partial_t u_t\;&=\;(-\Delta +U^{\mathrm{trap}}_\uparrow)u_t+8\pi a(|u_t|^2+|v_t|^2)u_t-V_{\mathrm{hf}}\,u_t+(B_1-\mathrm{i} B_2)v_t \\ i\partial_t v_t\;&=\;(-\Delta +U^{\mathrm{trap}}_\downarrow) v_t+8\pi a(|u_t|^2+|v_t|^2)v_t+V_{\mathrm{hf}}\,v_t+(B_1+\mathrm{i} B_2)u_t \end{split} \end{equation} with initial data $u_{t=0}\equiv u_0$ and $v_{t=0}\equiv v_0$. Here, again, $a$ is the ($s$-wave) scattering length associated to the non-scaled pair potential $V$. The explicit dependence of $U^{\mathrm{trap}}$ and of $\mathbf{B}\equiv(B_1,B_2,-V_{\mathrm{hf}})$ on space and time is omitted for short. We have already commented in Section \ref{sec:intro} that the experiment may well have the trapping potential switched off. In terms of the matrix-valued potential $S(x,t)$ introduced in \eqref{eq:matrixS} and of the `one-body non-linear Hamiltonians' \begin{equation}\label{eq:onebody-nonlin-Hamilt} \begin{split} h^{(u,v)}_{11}\;&:=\;-\Delta +U^{\mathrm{trap}}_\uparrow+8\pi a(|u_t|^2+|v_t|^2)-V_{\mathrm{hf}}\;=\;-\Delta+S_{11}+8\pi a(|u_t|^2+|v_t|^2) \\ h^{(u,v)}_{22}\;&:=\;-\Delta +U^{\mathrm{trap}}_\downarrow+8\pi a(|u_t|^2+|v_t|^2)+V_{\mathrm{hf}}\;=\;-\Delta+S_{22}+8\pi a(|u_t|^2+|v_t|^2) \end{split} \end{equation} we re-write \eqref{eq:GPsystem_extended} in the compact form \begin{equation} \label{eq:coupled} \begin{split} i\partial_t u_t\;&=\;h^{(u,v)}_{11} u_t+ S_{12}v_t \\ i\partial_t v_t\;&=\;h^{(u,v)}_{22} v_t+ S_{21}u_t\,. \end{split} \end{equation} In order to prove this picture from first principles, and hence to provide a rigorous derivation of the system \eqref{eq:coupled} as the effective non-linear evolution emerging from the many-body linear Schr\"{o}dinger dynamics \eqref{eq:Cauchy_problem}, one must establish at any time $t>0$ the convergence \begin{equation} \gamma^{(1)}_{N,t}\;\xrightarrow[]{\;\;\;\;N\to\infty\;\;}\;\Big|\!\begin{pmatrix} u_t \\ v_t\end{pmatrix}\!\Big\rangle\Big\langle\!\begin{pmatrix} u_t \\ v_t\end{pmatrix}\!\Big| \end{equation} of the one-body density matrix $\gamma^{(1)}_{N,t}$ associated with $\Psi_{N,t}$ onto the solution $(u_t,v_t)$ to \eqref{eq:coupled} with initial datum $(u_0,v_0)$, thus closing the diagram \begin{equation}\label{scheme_for_marginals} \begin{CD} \Psi_N @>\scriptsize\textrm{partial trace}>>\gamma_N^{(1)} @>N\to\infty>> \Big| \!\!\begin{pmatrix} u_0 \\ v_0\end{pmatrix}\!\!\Big\rangle\Big\langle\!\! \begin{pmatrix} u_0 \\ v_0\end{pmatrix}\!\!\Big| \\ @ V\scriptsize\begin{array}{c} \textrm{many-body} \\ \textrm{\textbf{linear} dynamics} \end{array} VV @V VV @VV\scriptsize\begin{array}{c}\textrm{\textbf{non-linear}} \\ \textrm{Schr\"{o}dinger eq.} \end{array}V \\ \Psi_{\! N,t} @>\scriptsize\textrm{\qquad\qquad\qquad\;}>>\gamma_{\! N,t}^{(1)} @>N\to\infty>> \Big| \!\!\begin{pmatrix} u_t \\ v_t\end{pmatrix}\!\!\Big\rangle\Big\langle\!\! \begin{pmatrix} u_t \\ v_t\end{pmatrix}\!\!\Big|\,. 
\end{CD} \end{equation} To this aim, we impose the following set of assumptions: \begin{itemize} \item[(A1)] Let the matrix potential $S\equiv(S_{jk})_{j,k\in\{1,2\}}$ be given with \begin{equation*} S_{jk}\in C^1 ( \mathbb{R}_t , L^\infty_x (\mathbb{R}^3))\cap W^{1,\infty}( \mathbb{R}_t , L^\infty_x (\mathbb{R}^3)) \end{equation*} and $S=S^*$. \item[(A2)] Let the real-valued interaction potential $V$ be given such that $V\in L^\infty(\mathbb{R}^3)$, $V$ has compact support, $V$ is spherically symmetric, and $V(x)\geqslant 0$ for almost every $x\in\mathbb{R}^3$. Let $a$ denote the $s$-wave scattering length associated to $V$ (see, e.g., \cite[Appendix C]{LSeSY-ober}). Correspondingly, let $V_N$ be the re-scaled potential \eqref{eq:GPscaling} associated to $V$. \item[(A3)] Associated to the potentials fixed in (A1)-(A2), let $H_N$ be the many-body Hamiltonian \eqref{eq:scaled_H} acting at each time $t$ on the $N$-body Hilbert space $\mathcal{H}_N$ fixed in \eqref{eq:HspaceHN}, let $\mathcal{E}_N$ be the many-body energy-per-particle functional \eqref{eq:manybody_en_part}, and let $\mathcal{E}^{\mathrm{GP}}$ be the two-component Gross-Pitaevskii energy functional \eqref{eq:GPfunctional}. \item[(A4)] Let two functions $u_0,v_0\in H^2(\mathbb{R}^3)$ be given with $\|u_0\|_2^2+\|v_0\|_2^2=1$ and such that the Cauchy problem associated to the non-linear system \eqref{eq:coupled} with initial datum $(u_0,v_0)$ admits a unique solution $(u,v)$ with $u\equiv u_t(x)$, $v\equiv v_t(x)$, and \begin{equation}\label{eq:well_posedness} \begin{pmatrix} u \\ v\end{pmatrix}\in C\big(\mathbb{R}_t,H^2_x(\mathbb{R}^3)\otimes\mathbb{C}^2\big) \,. \end{equation} \item[(A5)] Associated to the spinor $\begin{pmatrix} u_0 \\ v_0\end{pmatrix}$ fixed in (A4), let a sequence $(\Psi_{N,0})_{N\in\mathbb{N}}$ of initial $N$-body states be given with $\Psi_{N,0}\in\mathcal{H}_N$ and $\|\Psi_{N,0}\|=1$, such that the corresponding sequence $(\gamma_{N,0}^{(1)})_{N\in\mathbb{N}}$ of one-body reduced density matrices satisfies the BEC asymptotics \eqref{eq:initialgamma} in the quantitative form \begin{equation} \label{eq:hypconvergence} \mathrm{Tr}\,\Big|\gamma_{N,0}^{(1)}-\Big| \!\!\begin{pmatrix} u_0 \\ v_0\end{pmatrix}\!\!\Big\rangle\Big\langle\!\! \begin{pmatrix} u_0 \\ v_0\end{pmatrix}\!\!\Big|\,\Big|\;\leqslant\;\frac{\;\mathrm{const.}}{N^{\eta_1}} \end{equation} \emph{and} the energy asymptotics \eqref{eq:asymptotic_energy} in the quantitative form \begin{equation}\label{eq:hypconvergence_energy} \big|\,\mathcal{E}_N[\Psi_{N,0}]- \mathcal{E}^{\mathrm{GP}}[u_0,v_0]\,\big|\;\leqslant\;\frac{\;\mathrm{const.}}{N^{\eta_2}} \end{equation} for some constants $\eta_1,\eta_2>0$. \end{itemize} Some remarks are in order. First, we have already argued that assumption (A5) is expected to select the class of initial states relevant in the experiments. We underline, in particular, that assumption (A1) includes precisely the experimental potentials $V_{\mathrm{hf}}(x)\equiv V_{\mathrm{hf}}$ and $S_{12}(x,t)=B_1(x,t)-\mathrm{i} B_2(x,t)\equiv \Omega e^{\mathrm{i}\omega t}$ for suitable constants $V_{\mathrm{hf}},\Omega,\omega\geqslant 0$. Also the repulsive inter-particle pair interaction is consistent with what is observed in the experiments. As a second important remark, we observe that both the dynamical evolutions we deal with in our assumptions, namely the linear many-body Schr\"{o}dinger dynamics and the non-linear Gross-Pitaevskii dynamics, are well posed.
Concerning the former, it can be deduced from (A3) by means of standard arguments (see, e.g., \cite{Aiba-Yajima-2013,Jochen-Griesemer-2014} for a recent discussion) that $H_N$ has a time-independent (dense) domain $\mathcal{D}_N\subset \mathcal{H}_N$ of self-adjointness and there exists a unique unitary propagator for \eqref{eq:Cauchy_problem} on $\mathcal{H}_N$, that is, a family $\{U_N(t,s)\,|\,t,s\in\mathbb{R}\}$ of unitaries on $\mathcal{H}_N$, strongly continuous on $\mathcal{H}_N$ with respect to $(t,s)$, satisfying $U_N(t,s)U_N(s,r)=U_N(t,r)$ and $U_N(t,t)=\mathbbm{1}$ for any $t,s,r\in\mathbb{R}$, and with the additional properties that, equipping $\mathcal{D}_N$ with the graph norm of $H_N|_{t=0}$, each $U_N(t,s)$ is bounded on $\mathcal{D}_N$, and for each $\Phi_N\in\mathcal{D}_N$ the function $U_N(t,s)\Phi_N$ is continuous in $\mathcal{D}_N$ with respect to $(t,s)$, it is of class $C^1$ in $\mathcal{H}_N$, and \begin{equation} \mathrm{i}\partial_t U_N(t,s)\Phi_N\;=\;H_N U_N(t,s)\Phi_N\,,\qquad \mathrm{i}\partial_s U_N(t,s)\Phi_N\;=\;- U_N(t,s) H_N \Phi_N\,. \end{equation} The non-linear Cauchy problem associated to \eqref{eq:coupled} is well-posed too (in fact, it is defocusing and energy sub-critical), which is seen by exploiting an amount of standard analysis that can be found in the closely related works \cite{Jungel-Weishaupl2013_2compNLS_blowup,Antonelli-Weishaupl-2013,Bunoiu-Precup-2016} and which we do not aim to develop explicitly here. Under the above assumptions we are able to prove our main result, stated here below. It is a result of persistence in time of pseudo-spinorial BEC and of rigorous derivation of the non-linear effective dynamics. It is formulated as follows. \begin{theorem}\label{theorem:main} Consider, for each $N\in\mathbb{N}$, $N\geqslant 2$, the system consisting of $N$ spin-$\frac{1}{2}$ identical bosons in three dimensions, subject to the Hamiltonian $H_N$ and initialised at time $t=0$ in the state $\Psi_{N,0}$ of complete BEC onto the one-body spinor $\begin{pmatrix} u_0 \\ v_0\end{pmatrix}$, according to the assumptions (A1)-(A5) above. For each $t>0$ let $\Psi_{N,t}$ be the solution to the many-body Schr\"{o}dinger equation \eqref{eq:Cauchy_problem} with initial datum $\Psi_{N,0}$, let $\gamma_{N,t}^{(1)}$ be the associated one-body reduced density matrix, and let $(u_t,v_t)$ be the solution to the non-linear Gross-Pitaevskii system \eqref{eq:coupled} with initial datum $(u_0,v_0)$. Then, at any $t$, \begin{equation}\label{eq:thesis} \lim_{N\to\infty}\gamma_{N,t}^{(1)}\;=\;\Big|\!\begin{pmatrix} u_t \\ v_t\end{pmatrix}\!\Big\rangle\Big\langle\!\begin{pmatrix} u_t \\ v_t\end{pmatrix}\!\Big| \end{equation} in trace norm, and \begin{equation}\label{eq:thm_thesis} \lim_{N\to\infty}\mathcal{E}_N[\Psi_{N,t}]\;=\;\mathcal{E}^{\mathrm{GP}}[u_t,v_t]\,. \end{equation} \end{theorem} We shall present the proof of Theorem \ref{theorem:main} in Section \ref{sect:strategy}, after completing a number of preparatory steps in Sections \ref{sec:preliminaries} and \ref{sect:scattering}. For the time being, let us complete the discussion of our result by highlighting a couple of relevant aspects.
For the technique we use in the proof, an adaptation of the `counting' projection method developed by Pickl \cite{kp-2009-cmp2010,Pickl-JSP-2010,Pickl-LMP-2011,Pickl-RMP-2015}, the precise rate of convergence of the limit \eqref{eq:thesis} remains somewhat implicit: it could well be tracked down through the many inequalities occurring in the proof, but it would turn out to be given by a surely non-optimal inverse power $N^{-\eta}$ for some small $\eta>0$ that depends on $u_0$, $v_0$, and on the potentials chosen in $H_N$. For this reason, even if \eqref{eq:thesis} is quantitative, we omit any reference to the rate of convergence in $N$. For a sharper and more explicit rate it would be of interest to adapt to spinors a different technique for the control of the leading one-body effective dynamics for bosons, which has been developed in the Gross-Pitaevskii scaling in the recent works \cite{B-DO-S-gp-2015,BS-2017}. Furthermore, Theorem \ref{theorem:main} can be generalised to suitable modifications of the many-body Hamiltonian $H_N$, in particular to the case where particles are charged and hence coupled to the external magnetic field through a minimal coupling, which results in replacing the one-particle kinetic operator $-\Delta$ with the magnetic Laplacian $-\Delta_A:=-(\nabla+\mathrm{i} A)^2$. For milder scalings in $V_N$ than the Gross-Pitaevskii one this would be a fairly easy application of Pickl's method, and in fact the magnetic Laplacian can be included too with the Gross-Pitaevskii scaling \eqref{eq:GPscaling}, but in this latter case a more careful analysis is needed: to this purpose a number of previously missing details in the literature have been recently worked out by one of us in \cite{AO-GP_magnetic_lapl-2016volume}. Last, we remark that when $\mathbf{B}=\mathbf{0}$ in \eqref{eq:unscaled_H} and hence $S_{12}=S_{21}=\mathbb{O}$ in \eqref{eq:matrixS}, no spin flipping is induced any longer by the Hamiltonian and the model becomes the same as that of a mixture of two interacting Bose gases with no interconversion (i.e., no population change) among the species. In this case, as our Theorem \ref{theorem:main} would also give, persistence of BEC in each component emerges as $N\to\infty$, governed by a non-linear system completely analogous to \eqref{eq:GPsystem_extended} but without the Rabi terms: this picture was already proved recently in \cite{M-Olg-2016_2mixtureMF} (and subsequently in \cite{Anap-Hott-Hundertmark-2017}) in the mean-field scaling, and in \cite{AO-GPmixture-2016volume} in the Gross-Pitaevskii scaling. \section{Preparatory material for the proof}\label{sec:preliminaries} We begin in this Section the preparation for the proof of our main result, Theorem \ref{theorem:main}, introducing the needed algebraic tools and an amount of technical estimates. This requires an adaptation to the spinor case of the `counting' projection method developed by Pickl \cite{kp-2009-cmp2010,Pickl-JSP-2010,Pickl-LMP-2011,Pickl-RMP-2015}: this results in a number of straightforward generalisations, plus some non-trivial steps that need to be performed in the spinor case and are absent in the scalar case. We start by introducing two key operators that are central in our analysis, namely the time-dependent rank-one orthogonal projection $p_t$ and its complement $q_t$ given by \begin{equation} p_t\;:=\;\Big| \!\!\begin{pmatrix} u_t \\ v_t\end{pmatrix}\!\!\Big\rangle\Big\langle\!\!
\begin{pmatrix} u_t \\ v_t\end{pmatrix}\!\!\Big|\,,\qquad q_t\;:=\;\mathbbm{1}-p_t\,, \end{equation} where $\begin{pmatrix} u_t \\ v_t\end{pmatrix}\in L^2(\mathbb{R}^3)\otimes\mathbb{C}^2$ is the spinor of functions that solve the non-linear Gross-Pitaevskii system \eqref{eq:coupled} with initial datum $(u_0,v_0)$. To make the notation lighter, we shall omit the explicit reference to the time dependence of $p_t$ and $q_t$, and simply write $p$ and $q$. The operators $p$ and $q$ are naturally lifted onto $\mathcal{H}_N=\left(L^2(\mathbb{R}^3)\otimes\mathbb{C}^2\right)^{\otimes_{\mathrm{sym}} N}$ in the form of the $2N$ operators \begin{equation}\label{eq:def_pj_qj} \begin{array}{l} p_j:=\underbrace{\mathbbm{1}\otimes\dots\otimes\mathbbm{1}}_{j-1}\otimes \,p \otimes \underbrace{\mathbbm{1}\otimes\dots\otimes\mathbbm{1}}_{N-j} \\ q_j:=\underbrace{\mathbbm{1}\otimes\dots\otimes\mathbbm{1}}_{j-1}\otimes \,q \otimes \underbrace{\mathbbm{1}\otimes\dots\otimes\mathbbm{1}}_{N-j} \end{array}\quad\qquad j\in\{1,\dots, N\}\,. \end{equation} Thus, $p_j$ acts on $\Psi_N\in \mathcal{H}_N$ as \[ (p_j\Psi_N)(x_1,\dots,x_N)\;=\;\begin{pmatrix} u_t(x_j)\\v_t(x_j) \end{pmatrix}\int_{\mathbb{R}^3} \mathrm{d} y\:\Big\langle\!\!\begin{pmatrix} \overline{u_t}(y)\\\overline{v_t}(y) \end{pmatrix},\Psi_N(x_1,\dots,x_{j-1},y,x_{j+1},\dots,x_N)\Big\rangle_{\!\mathbb{C}_y^{2}}\,. \] Next, we introduce the orthogonal projections \begin{equation}\label{eq:defPk} P_k\;:=\sum_{a\in\{0,1\}^N \atop \sum_{i=1}^N a_i=k}\:\bigotimes_{i=1}^N\,p_i^{1-a_i}q_i^{a_i}\;=\;(q_1\otimes\cdots\otimes q_k\otimes p_{k+1}\otimes\cdots\otimes p_N)_{\textrm{`sym'}}\qquad k\in\{0,\dots,N\}\,, \end{equation} and then set $P_k=\mathbbm{O}$ if $k<0$ or $k>N$. In \eqref{eq:defPk} the symbol `sym' denotes the mere sum (without normalisation factor) of all possible permuted versions of the considered string of $N$ one-body projections. The Hilbert subspace that $P_k$ projects onto is naturally interpreted as the space of $N$-body states with exactly $k$ particles `out of the condensate', in the sense of orthogonality with respect to the spinor $\begin{pmatrix} u_t \\ v_t\end{pmatrix}$. It is also simple to check that \begin{equation}\label{eq:properties_of_Pk} P_k\,P_\ell\;=\;\delta_{k,\ell}P_k\,,\qquad\sum_{k=0}^N P_k\;=\;\mathbbm{1}\,. \end{equation} In the following, whenever no notational confusion arises, we shall omit the tensor product sign $\otimes$ and simply write, for instance, $p_1\cdots p_k q_{k+1}\cdots q_N$ in place of $p_1\otimes\cdots\otimes p_k\otimes q_{k+1}\otimes\cdots\otimes q_N$. With the $P_k$'s at hand, and given a weight function $f:\mathbb{N}\rightarrow\mathbb{R}$, we form the operators \begin{equation} \widehat f\;:=\;\sum_{k=0}^N f(k)P_k \end{equation} and, for fixed $d\in\mathbb{N}$, the `shifted' version \begin{equation}\label{eq:shifted_fd} \widehat{f_d}\;:=\;\sum_{k=-d}^{N-d} f(k+d)P_k\,. \end{equation} Some special choices of the weight $f$ will be useful in the proof. One is \begin{equation} n(k)\;:=\;\sqrt{\dfrac{k}{N}}\,,\label{eq:weightn} \end{equation} in terms of which one has \begin{equation}\label{eq:weights} \widehat{n}^{\,2}\;=\;\frac{1}{N}\sum_{k=0}^Nk\,P_k\;=\;\frac{1}{N}\sum_{k=0}^N\,\sum_{i=1}^N\,q_i\,P_k\;=\;\frac{1}{N}\sum_{i=1}^{N}q_i\,.
\end{equation} Thanks to \eqref{eq:weights} and to symmetry, one has \begin{equation}\label{eq:relation_q_n} \langle\Psi_N,q_1\Psi_N\rangle\;=\;\langle\Psi_N,\widehat{n}^{\,2}\Psi_N\rangle\;\leqslant\; \langle\Psi_N,\widehat{n}\,\Psi_N\rangle\,,\qquad \Psi_N\in\mathcal{H}_N\,. \end{equation} Another useful weight is going to be \begin{equation}\label{eq:weightm} m(k)\;:=\;\begin{cases} \sqrt{k/N},\qquad \qquad \qquad \,\,\,\,\,\,\, k\ge N^{1-2\xi}\\\\ \frac{1}{2}(N^{-1+\xi}k+N^{-\xi}),\quad \quad\text{else}, \end{cases} \end{equation} for some $\xi>0$ to be chosen sufficiently small. The choice \eqref{eq:weightm} makes the function $\mathbb{R}\ni k\mapsto m(k)$ differentiable. Let us collect some useful properties of the operators defined above. \begin{lemma}\label{lemma:tools} Let $f,g:\mathbb{N}\rightarrow\mathbb{R}$ be given, together with an operator $A_{ij}$ on $\mathcal{H}_N$ that acts non-trivially only on the $i$-th and $j$-th particle, for given $i,j\in\{1,\dots,N\}$. One has the following properties. \begin{itemize} \item[(i)] Commutativity: \begin{equation}\label{eq:commutativity_fhat_ghat} \widehat{f}\,\widehat{g}\;=\;\widehat{g}\,\widehat{f}\;=\;\widehat{f\,g}\,, \end{equation} \begin{equation}\label{eq:commutativity_fhat_P} [\widehat{f},p_\ell]\;=\;[\widehat{f},q_\ell]\;=\;[\widehat{f},P_k]\;=\;\mathbbm{O}\qquad\forall \ell\in\{1,\dots,N\}\,,\quad\forall k\in\{0,\dots,N\}\,. \end{equation} \item[(ii)] Shift: \begin{equation} \widehat{f}\,Q_1\,A_{ij}\,Q_2\;=\;Q_1\,A_{ij}\,Q_2\,\widehat{f}_{z-s}\,, \label{eq:commutation} \end{equation} where $Q_1,Q_2\in\{p_ip_j,p_iq_j,q_ip_j,q_iq_j\}$, $z$ is the number of $q$'s inside $Q_1$ and $s$ is the number of $q$'s inside $Q_2$. \item[(iii)] For $m$ defined in \eqref{eq:weightm}, define \begin{equation}\label{eq:defn_ma_mb} \begin{split} &\widehat{m}^a\;:=\;\widehat{m}-\widehat{m_1}\\ &\widehat{m}^b\;:=\;\widehat{m}-\widehat{m_2}\\ &\widehat{m}^c\;:=\;\widehat{m}-2\widehat{m_2}+\widehat{m_4}\\ &\widehat{m}^d\;:=\;\widehat{m}-\widehat{m_1}-\widehat{m_2}+\widehat{m_3}\\ &\widehat{m}^e\;:=\;\widehat{m}-2\widehat{m_1}+\widehat{m_2} \end{split} \end{equation} and \begin{equation} R_{(ij)}\;:=\;p_ip_j\,\widehat{m}^b+(p_iq_j+q_ip_j)\widehat{m}^a. \label{eq:defnr} \end{equation} Then one has \begin{equation}\label{eq:commut_r} [A_{ij},\widehat{m}]\;=\;[A_{ij},R_{(ij)}]\,. \end{equation} Moreover, if $K_{hr}$ is a bounded operator on $\mathcal{H}_N$ acting non-trivially only on the $h$-th and $r$-th particle, with $h\notin\{i,j\}$ and $r\notin\{i,j\}$, one has (see also Remark \ref{rem:R} below) \begin{equation}\label{eq:commut_r2} \begin{split} [K_{hr},R_{(ij)}]\;=\;[K_{hr}\,,\,& p_ip_jp_hp_r\,\widehat{m}^c\\ +&p_ip_j(p_hq_r+q_hp_r)\,\widehat{m}^d\\ +&(p_iq_j+q_ip_j)p_hp_r\,\widehat{m}^d\\ +&(p_iq_j+q_ip_j)(p_hq_r+q_hp_r)\,\widehat{m}^e]. \end{split} \end{equation} \item[(iv)] Operator bounds: there exist constants $C,\widetilde{C}>0$ such that \begin{align} \|\widehat{m}^a\|_{\mathrm{op}}\;&\leqslant\; CN^{-1+\xi} \label{eq:ma}\\ \|\widehat{m}^b\|_{\mathrm{op}}\;&\leqslant\; CN^{-1+\xi} \label{eq:mb}\\ \|R_{(ij)}\|_{\mathrm{op}}\;&\leqslant\; CN^{-1+\xi} \label{eq:r}\\ \|\widehat{m}^z\|_{\mathrm{op}}\;&\leqslant\; \widetilde{C}N^{-2+3\xi}\quad\forall z\in\{c,d,e\} \label{eq:mcde}, \end{align} where $\|\;\|_{\mathrm{op}}$ denotes the operator norm. \end{itemize} \end{lemma} \begin{proof} Part (i) is an immediate consequence of the mutual orthogonality of the $P_\ell$'s, and of the $p_\ell$'s with the $q_\ell$'s. 
To establish part (ii) one observes that \[ P_\ell \,Q_1\,A_{ij}\,Q_2\;=\;Q_1\,A_{ij}\,Q_2\,P_{\ell+s-z}\qquad\forall\ell\in\{0,\dots,N\}\,, \] since all the $p_r$'s and the $q_r$'s with $r\notin\{i,j\}$ commute with $A_{ij}$, and the identity above in turn implies the thesis. For part (iii) we compute the difference (recall the notation \eqref{eq:shifted_fd} for the shifted operators) \[ \begin{split} [A_{ij},\widehat{m}]-[A_{ij},R_{(ij)}]\;&=\;[A_{ij},\widehat{m}]-[A_{ij},p_ip_j(\widehat{m}-\widehat{m_2})+(p_iq_j+q_ip_j)(\widehat{m}-\widehat{m_1})] \\ &=\;[A_{ij},q_i\,q_j\,\widehat{m}]+[A_{ij},p_i\,p_j\,\widehat{m_2}+(p_i\,q_j+q_i\,p_j)\widehat{m_1}] \end{split} \] and we multiply by $\mathbbm{1}=p_ip_j+p_iq_j+q_ip_j+q_iq_j$ from the left: using the shift property \eqref{eq:commutation} and $p_iq_i=\mathbbm{O}$ we find \[ \begin{split} p_ip_j&\big( [A_{ij},\widehat{m}]-[A_{ij},R_{(ij)}]\big)\;= \\ &=\;p_ip_jA_{ij}q_iq_j\widehat{m}+p_ip_jA_{ij}p_ip_j\widehat{m}_2-p_ip_j\widehat{m}_2A_{ij}+p_ip_jA_{ij}(p_iq_j+q_ip_j)\widehat{m_1} \\ &=\;p_ip_j\widehat{m}_2A_{ij}q_iq_j+p_ip_j\widehat{m}_2A_{ij}p_ip_j-p_ip_j\widehat{m}_2A_{ij}+p_ip_j\widehat{m}_2A_{ij}(p_iq_j+q_ip_j) \\ &=\;\mathbbm{O}\,, \end{split} \] \[ \begin{split} q_iq_j&\big( [A_{ij},\widehat{m}]-[A_{ij},R_{(ij)}]\big)\;= \\ &=\; q_jA_{ij}q_iq_j\widehat{m}-q_iq_j\widehat{m}\,A_{ij}+q_iq_jA_{ij}p_ip_j\widehat{m_2}+q_iq_jA_{ij}(p_iq_j+q_ip_j)\widehat{m_1} \\ &=\;q_iq_j\widehat{m}A_{ij}q_iq_j-q_iq_j\widehat{m}\,A_{ij}+q_iq_j\widehat{m}A_{ij}p_ip_j+q_iq_j\widehat{m}\,A_{ij}(p_iq_j+q_ip_j)\\ &=\;\mathbbm{O}\,, \end{split} \] and analogously $q_ip_j\big( [A_{ij},\widehat{m}]-[A_{ij},R_{(ij)}]\big)=\mathbbm{O}$. Thus, \eqref{eq:commut_r} follows. To prove \eqref{eq:commut_r2} we use \eqref{eq:defnr} and \eqref{eq:defn_ma_mb} and write \begin{equation} \label{eq:second_commutator} \begin{split} [K_{hr},R_{(ij)}]\;=\;&p_ip_j[K_{hr} ,\,\widehat{m}^b]+(p_iq_j+q_ip_j)[K_{hr},\widehat{m}^a]\\ \;=\;&p_ip_j[K_{hr} ,\,\widehat{m}]-p_ip_j[K_{hr} ,\,\widehat{m_2}]+(p_iq_j+q_ip_j)[K_{hr},\widehat{m}]-(p_iq_j+q_ip_j)[K_{hr},\widehat{m_1}]. \end{split} \end{equation} Now, we observe that the analogue of \eqref{eq:commut_r} is valid too when $\widehat{m}$ is replaced by a shifted $\widehat{m_k}$, i.e., \[ [K_{hr} ,\,\widehat{m_k}]=[K_{hr} ,\,p_hp_r(\widehat{m_k}-\widehat{m_{k+2}})+(p_hq_r+q_hp_r)(\widehat{m_k}-\widehat{m_{k+1}})] \] (which is precisely \eqref{eq:commut_r} for $k=0$). This is proven exactly the same way as \eqref{eq:commut_r}. By applying last identity to \eqref{eq:second_commutator}, one gets \[ \begin{split} [K_{hr},R_{(ij)}]\;=&\;p_ip_j[K_{hr} ,\,\,p_hp_r(\widehat{m}-\widehat{m_{2}})+(p_hq_r+q_hp_r)(\widehat{m}-\widehat{m_{1}})]\\ &\; -p_ip_j[K_{hr} ,\,\,p_hp_r(\widehat{m_2}-\widehat{m_{4}})+(p_hq_r+q_hp_r)(\widehat{m_2}-\widehat{m_{3}})]\\ &\;+(p_iq_j+q_ip_j)[K_{hr},\,\,p_hp_r(\widehat{m}-\widehat{m_{2}})+(p_hq_r+q_hp_r)(\widehat{m}-\widehat{m_{1}})]\\ &\;-(p_iq_j+q_ip_j)[K_{hr},\,p_hp_r(\widehat{m_1}-\widehat{m_{3}})+(p_hq_r+q_hp_r)(\widehat{m_1}-\widehat{m_{2}})], \end{split} \] which, upon rearrangement, is exactly \eqref{eq:commut_r2}. As for part (iv), it follows from \eqref{eq:properties_of_Pk} that the $P_\ell$'s produce a direct orthogonal decomposition of $\mathcal{H}_N$ and hence \[ \|\widehat{f}\|_{\mathrm{op}}\;=\;\mathop{\text{sup}}_{k\in\{0,\dots, N\}}|f(k)|\,. 
\] As a consequence, treating the function $k\mapsto m(k)$ as continuously defined on $k\in\mathbb{R}$, \[ \|\widehat{m}^a\|_{\mathrm{op}}\;=\;\sup_{k\in\{0,\dots, N\}}|m(k)-m(k+1)|\;\leqslant\;C\sup_{k\in\{0,\dots, N\}}\big|m'(k)\big|\;\leqslant\; C \,N^{-1+\xi}\,. \] A similar reasoning shows that the same bound holds for $\|\widehat{m}^b\|_{\mathrm{op}}$ and hence also for $\|R_{(ij)}\|_{\mathrm{op}}$. Last, we establish \eqref{eq:mcde} starting with $\|\widehat{m}^c\|_{\mathrm{op}}$. By the mean value theorem, \[ \begin{split} \|\widehat{m}^c\|_{\mathrm{op}}\;=&\;\sup_{k\in\{0,\dots, N\}}|m(k)-m(k+2)+m(k+4)-m(k+2)|\;=\;\sup_{k\in\{0,\dots, N\}}|-2m'(\beta_k)+2m'(\gamma_k)|\\ \;\leqslant&\;\;\;C\sup_{k\ge N^{1-2\xi}}|m''(\theta_k)| \end{split} \] for some $\beta_k\in(k,k+2)$, $\gamma_k\in(k+2,k+4)$, and $\theta_k\in(k,k+4)$. Since \[ m''(k)\;=\;\begin{cases}\quad 0\qquad\qquad\qquad\text { for }\,k\;<N^{1-2\xi}\\ \,\,\,-\dfrac{1}{4\sqrt{N\,k^3}}\,\quad\qquad\text { for }k\;>\;N^{1-2\xi}, \end{cases} \] $\|\widehat{m}^c\|_{\mathrm{op}}$ is bounded by the value of $C\,(Nk^3)^{-1/2}$ at $k=N^{1-2\xi}$, that is, by $C\,N^{-2+3\xi}$, whence \eqref{eq:mcde}. $\|\widehat{m}^d\|_\mathrm{op}$ and $\|\widehat{m}^e\|_\mathrm{op}$ are treated analogously. \end{proof} \begin{remark}\label{rem:R} The notation for the operator $R_{(ij)}$ is \emph{not} meant to indicate that the only non-trivial action is on the $i$-th and $j$-th variables; it simply indicates that $R_{(ij)}$ depends on $x_i$ and $x_j$ in a more complicated (non-symmetric) way than on all the other variables. \end{remark} When dealing with $N$-body wave functions with only partial bosonic symmetry, the following bounds become useful and replace the identity \eqref{eq:weights}. \begin{lemma} \label{lemma:nq} Let $\Psi_N\in (L^2(\mathbb{R}^3)\otimes\mathbb{C}^2)^{\otimes N}$ be symmetric with respect to permutations of the $b$ variables $x_{i_1},\dots, x_{i_b}$ for some integer $b$ with $2\leqslant b\leqslant N$, and let $f:\mathbb{N}\rightarrow\mathbb{R}$. Then, for any pair of indices $i,j\in\{i_1,\dots,i_b\}$ with $i\neq j$, one has \begin{eqnarray} \|\widehat{f}\:q_i\Psi_N\|^2\;&\leqslant\;&\frac{N}{b}\|\widehat{f}\;\widehat{n}\;\Psi_N\|^2\,. \label{eq:nq1} \end{eqnarray} \end{lemma} \begin{proof} The bound \eqref{eq:nq1} follows from \[ \begin{split} \|\widehat{f}\;\widehat{n}\;\Psi_N\|^2\;&=\;\langle\Psi_N,\widehat{n}\;\widehat{f}^{\;2}\;\widehat{n}\;\Psi_N\rangle\;=\;\langle\Psi_N,\widehat{f}^{\;2}\;\widehat{n}^{\,2}\;\Psi_N\rangle\;=\;\frac{1}{N}\sum_{\ell=1}^N\langle\Psi_N,\widehat{f}^{\;2} q_\ell\,\Psi_N\rangle\\ &\geqslant\;\frac{1}{N}\sum_{k\in\{i_1,\dots, i_b\}}\langle\Psi_N,\widehat{f}^{\;2}\,q_k\,\Psi_N\rangle\;=\;\frac{b}{N}\,\|\widehat{f}\:q_i\Psi_N\|^2\,, \end{split} \] where we used \eqref{eq:commutativity_fhat_ghat} in the second step, \eqref{eq:weights} in the third, \eqref{eq:commutativity_fhat_P} in the fourth and fifth. \end{proof} Further bounds will turn out to be needed on the operator norm of multiplication operators `dressed' with one or two projections $p$. Such norms are estimated in terms of $\|u_t\|_\infty$ and $\|v_t\|_\infty$, which by assumption (A4) are uniformly bounded quantities in time. \begin{lemma} \label{lemma:dressed} Let $h\in L^1(\mathbb{R}^3)$ and $g\in L^2(\mathbb{R}^3)$, and let $i,j\in\{1,\dots,N\}$.
One has \begin{align} \|g(x_i-x_j)p_j\|_{\mathrm{op}}\;\leqslant\;C(t)\,\|g\|_2\label{eq:dressed2}\\ \|p_ig(x_i-x_j)\|_{\mathrm{op}}\;\leqslant\;C(t)\,\|g\|_2\label{eq:dressed3} \end{align} for some function $C(t)>0$ depending on $\|u_t\|_\infty$ and $\|v_t\|_\infty$\,, and not on $N$. \end{lemma} \begin{proof} We observe that \[ p_j h(x_i-x_j)p_j\;=\;p_j\big(h*|u_t|^2(x_i)+h*|v_t|^2(x_i)\big)\,. \] Using this fact and Young's inequality we get \[ \|p_j h(x_i-x_j)p_j\|_{\mathrm{op}}\;\leqslant\;\|p_j\|_{\mathrm{op}}\big\|h*|u_t|^2+h*|v_t|^2\big\|_\infty\;\leqslant\; C\,\|h\|_1(\|u_t\|_\infty^2+\|v_t\|_\infty^2)\,. \] Then \eqref{eq:dressed2} follows from the above inequality (for $h=g^2$) through \[ \begin{split} \|g(x_i-x_j)p_j\|_{\mathrm{op}}^2\;&=\;\sup_{\substack{\Psi_N\in\mathcal{H}_N \\ \|\Psi_N\|=1}}\langle\Psi_N,p_j\,g^2(x_i-x_j)p_j\Psi_N\rangle\;\leqslant\; \|p_jg^2(x_i-x_j)p_j\|_{\mathrm{op}} \\ &\leqslant\; \|g^2\|_1\,(\|u_t\|_\infty^2+\|v_t\|_\infty^2)\;\leqslant\; 2\,\|g\|_2^2\,(\|u_t\|_\infty+\|v_t\|_\infty)^2\,. \end{split} \] The bound \eqref{eq:dressed3} follows by taking the adjoint of \eqref{eq:dressed2}. \end{proof} In particular, the bounds of Lemma \ref{lemma:dressed} above, when applied to the re-scaled potential $V_N$, provide useful $N$-dependent estimates that we collect for convenience here below. \begin{lemma}\label{lemma:potential} Let $\Psi_N\in\mathcal{D}(H_N)\subset\mathcal{H}_N$ with $\|\Psi_N\|=1$ and $\mathcal{E}_N[\Psi_N]\leqslant \kappa$ uniformly in $N$ for some $\kappa>0$, and consider the potential $V_N$ defined in \eqref{eq:GPscaling}. Then \begin{eqnarray} \|V_N(x_1-x_2)\Psi_N\|\;&\leqslant&\; C \,N^{1/2}\label{eq:potential} \\ \|p_1V_N(x_1-x_2)\Psi_N\|\;&\leqslant&\; C \,N^{-1}\label{eq:dressedpotential} \end{eqnarray} for some constant $C>0$ that in \eqref{eq:potential} depends on $\kappa$, on $\|V\|_\infty$\,, and on $\|S\|_{L^\infty_t L^\infty_x}$\,, and in \eqref{eq:dressedpotential} depends additionally on $\mathrm{supp}(V)$ and on the (uniform in time) bound on $\|u_t\|_\infty$ and $\|v_t\|_\infty$\,. \end{lemma} \begin{proof} To prove \eqref{eq:potential} we combine the estimate \[ \begin{split} \|V_N(x_1-x_2)\Psi_N\|^2\;&=\;\|\sqrt{V_N(x_1-x_2)}\sqrt{V_N(x_1-x_2)}\Psi_N\|^2\\ &\leqslant\; \|\sqrt{V_N(x_1-x_2)}\|_\infty^2\;\|\sqrt{V_N(x_1-x_2)}\;\Psi_N\|^2\\ &\leqslant \|V\|_\infty \;N^2\,\langle\Psi_N,V_N(x_1-x_2)\Psi_N\rangle \end{split} \] with the estimate, obtained by dropping the non-negative kinetic term and exploiting the bosonic symmetry of $\Psi_N$, \[ \begin{split} N\,\mathcal{E}_N[\Psi_N]\;\geqslant\;-N\|S\|_{L^\infty_t L^\infty_x}+\sum_{i<j}^N\langle\Psi_N,V_N(x_i-x_j)\Psi_N\rangle\;=\;-N\|S\|_{L^\infty_t L^\infty_x}+\binom{N}{2}\,\langle\Psi_N,V_N(x_1-x_2)\Psi_N\rangle\,, \end{split} \] thus finding \[ \|V_N(x_1-x_2)\Psi_N\|^2\;\leqslant\;\frac{2N^2}{N-1}\,\|V\|_\infty \,\big(\mathcal{E}_N[\Psi_N]+\|S\|_{L^\infty_t L^\infty_x}\big)\;\leqslant\;4\,\|V\|_\infty \,(\kappa+\|S\|_{L^\infty_t L^\infty_x})\:N\,
\] To prove \eqref{eq:dressedpotential} we estimate \[ \begin{split} \|p_1V_N(x_1-x_2)\Psi_N\|\;\leqslant\;\|p_1\mathbbm{1}_{\mathrm{supp}(V_N)}(x_1-x_2)\|_{\mathrm{op}}\;\|V_N(x_1-x_2)\Psi_N\|\,, \end{split} \] where $\mathbbm{1}_{\mathrm{supp}(V_N)}$ is the characteristic function of the support of $V_N$; the first factor in the r.h.s.~above is estimated, using \eqref{eq:dressed3}, as \[ \|p_1\mathbbm{1}_{\mathrm{supp}(V_N)}(x_1-x_2)\|_{\mathrm{op}}\;\leqslant\;C\,\|\mathbbm{1}_{\mathrm{supp}(V_N)}\|_2\;\leqslant\;C\,N^{-3/2} \] for some constant $C$ depending on (the support of) the non-scaled potential $V$ and on the (uniform in time) bound on $\|u_t\|_\infty$ and $\|v_t\|_\infty$\,, whereas the second factor is estimated as $C\,N^{1/2}$ by \eqref{eq:potential}. The product of these two bounds yields \eqref{eq:dressedpotential}. \end{proof} \section{Zero-energy scattering problem and short scale structure}\label{sect:scattering} As well known (see \cite{LSeSY-ober,EMS-2008,S-2007,S-2008,Benedikter-Porta-Schlein-2015} and the references therein), the understanding of both the ground state and the dynamics of a dilute Bose gas analysed in the Gross-Pitaevskii scaling limit is intimately related to the two-body scattering problem at zero energy. The latter determines the short scale structure of the typical many-body state under consideration, which is crucial to identify the correlation pattern at the leading order in the energy and in the evolution dynamics of the state. In this Section we collect the main facts from the two-body scattering problem at zero energy needed for the specific technique that we make use of in the present work. To this aim we follow closely the recent works \cite{Pickl-RMP-2015,Jeblick-Leopold-Pickl-2DGP-2016}. We start by recalling (e.g., from \cite[Appendix C]{LSeSY-ober}) that given a potential $V$ satisfying our assumption (A2), the scattering length of $V$ is the quantity \begin{equation} a\;:=\;\frac{1}{8\pi}\int_{\mathbb{R}^3} \mathrm{d} x\, V(x) f(x)\,, \end{equation} where $f$ is the so-called zero-energy scattering solution, that is, the solution to the problem \begin{equation} \big(-\Delta+{\textstyle\frac{1}{2}}V\big)f\;=\;0\,,\qquad f(x)\xrightarrow[]{\;\;|x|\rightarrow\infty\;\;}1\,. \end{equation} By scaling, one sees that the scattering length $a_N$ and the zero-energy scattering solution $f_N$ relative to the re-scaled potential $V_N(x)=N^2 V(Nx)$ are given by \begin{equation}\label{eq:aN} a_N\;=\;\frac{a}{N}\,,\qquad\qquad f_N(x)\;=\;f(Nx)\,. \end{equation} In particular, $f_N$ has the peculiar structure at the spatial scale $|x|\sim N^{-1}$: in fact, \begin{equation}\label{eq:fN_2} f_N(x)\;\underset{|x|\to\infty}{\approx}\;1-\frac{a}{N|x|}\,,\qquad\textrm{and}\qquad 1-\frac{a}{N|x|}\;\leqslant\;f_N(x)\;\leqslant 1\quad\forall x\neq 0\,. \end{equation} Along the main proof it is going to be technically convenient to replace the actual potential $V_N$ with a surrogate repulsive potential with a milder scaling and an easier controllability, supported on a spherical shell surrounding, disjointly, the ball of the support of $V_N$. For suitable $\beta\in(0,1)$, let \begin{equation} \label{eq:def_W_beta} W_\beta(x)\;:=\;\begin{cases} \:4\pi \,a_N \,N^{3\beta} & N^{-\beta}<|x|<R_\beta\\ \;\;0 & \textrm{otherwise}\,. \end{cases} \end{equation} Thus, by construction, for $N$ large enough one has \begin{equation} \text{supp}(V_N)\cap\text{supp}(W_\beta)\;=\;\emptyset\,. 
\end{equation} The spatial scale $R_\beta$ is fixed by the scattering properties of the \emph{difference} potential $V_N-W_\beta$: denoting by $f_\beta$ the zero-energy scattering solution relative to such potential, namely the solution to the problem \begin{equation}\label{eq:zesp-beta} \big(-\Delta+{\textstyle\frac{1}{2}}(V_N-W_\beta)\big)f_\beta\;=\;0\,,\qquad f_\beta(x)\xrightarrow[]{\;\;|x|\rightarrow\infty\;\;}1\,, \end{equation} it can be easily argued that the internal, repulsive potential $V_N$ and the external-shell, attractive potential $-W_\beta$ conspire in \eqref{eq:zesp-beta} so as to yield for $V_N-W_\beta$ a \emph{smaller} scattering length as compared to that of $V_N$; then $R_\beta$ is set precisely to the minimum value above $N^{-\beta}$ which makes the scattering length of $V_N-W_\beta$ \emph{vanish} and hence makes $f_\beta(x)$ \emph{constant} for $|x|>R_\beta$. It can also be proved \cite[Lemma 5.1]{Pickl-RMP-2015} that \begin{equation} \label{eq:R_beta} R_\beta\;=\;\mathcal{O}(N^{-\beta})\qquad\textrm{as}\qquad N\to\infty \end{equation} and hence the spherical shell in which $W_\beta$ is supported is entirely of order $N^{-\beta}$. Crucial when replacing $V_N$ with $W_\beta$ is the function \begin{equation} g_\beta\;:=\;1-f_\beta\,, \label{eq:defng} \end{equation} whose relevant properties are collected in the following Lemma. \begin{lemma}\label{prop:g} In terms of $a_N$, $f_\beta$, and $g_\beta$ given, respectively, in \eqref{eq:aN}, \eqref{eq:zesp-beta}, and \eqref{eq:defng}, one has \begin{equation}\label{eq:properties_fb_gb} 0\;\leqslant\;f_N(x)\;\leqslant\;f_\beta(x)\,,\qquad\textrm{whence also}\qquad |g_\beta(x)|\;\leqslant\;\frac{a_N}{|x|}\,\mathbbm{1}_{\{|x|\leqslant R_\beta\}}\qquad\forall x\neq 0\,, \end{equation} and \begin{align} \|g_\beta\|_1\;&\leqslant\; C \,N^{-(1+2\beta)} \label{eq:g1}\\ \|g_\beta\|_{3/2}\;&\leqslant\; C\, N^{-(1+\beta)}\label{eq:g32} \\ \|g_\beta\|_2\;&\leqslant\; C \,N^{-(1+\frac{1}{2}\beta)}\label{eq:g2} \end{align} for some constant $C>0$ depending on $V$ and $\beta$\,. \end{lemma} \begin{proof} The inequalities \eqref{eq:properties_fb_gb} are taken from \cite[Lemma 5.1]{Pickl-RMP-2015}. As for the estimates \eqref{eq:g1}-\eqref{eq:g2}, they follow from \eqref{eq:properties_fb_gb}, for \[ \|g_\beta\|_1\;\leqslant\; a_N\int_{|x|\leqslant R_\beta}\frac{\mathrm{d} x}{|x|}\;=\;2\pi\, a_N R_\beta^2\;\leqslant\; C\, N^{-1-2\beta} \] and \[ \|g_\beta\|_2^2\;\leqslant\; a^2_N\int_{|x|\leqslant R_\beta}\frac{\mathrm{d} x}{\,|x|^2}\;=\;4\pi\, a^2_N R_\beta\;\leqslant\;C\, N^{-2-\beta}\,, \] plus $L^1$-$L^2$ interpolation for the $L^{3/2}$-estimate; the constant $C>0$ depends on $V$ and $\beta$. \end{proof} \section{Proof of Theorem \ref{theorem:main}}\label{sect:strategy} We come now to the actual proof of Theorem \ref{theorem:main}.
As customary in this scheme, what exactly is going to be controlled is the quantity \begin{equation} \label{eq:alpha_tilde} \begin{split} \widetilde{\alpha}_N(t)\;&:=\;1-\Big\langle\!\!\begin{pmatrix} u_t \\ v_t\end{pmatrix}\!,\gamma_{N,t}^{(1)}\begin{pmatrix} u_t \\ v_t\end{pmatrix}\!\!\Big\rangle_{\!\!L^2(\mathbb{R}^3)\otimes\mathbb{C}^2} \\ &=\;\frac{1}{N}\sum_{j=1}^N\big\langle\Psi_{N,t},(q_t)_{\!j}\Psi_{N,t}\big\rangle_{\!\mathcal{H}_N}\;=\;\big\langle\Psi_{N,t},\widehat{n}_t^{\,2}\,\Psi_{N,t}\big\rangle_{\!\mathcal{H}_N}\,, \end{split} \end{equation} where $\Psi_{N,t}$ is the solution at time $t$ to the many-body Schr\"{o}dinger equation \eqref{eq:Cauchy_problem} with initial datum $\Psi_{N,0}$, $\gamma_{N,t}^{(1)}$ is the associated one-body reduced density matrix, $(u_t,v_t)$ is the solution to the non-linear Gross-Pitaevskii system \eqref{eq:coupled} with initial datum $(u_0,v_0)$, the projections $(q_t)_{\!j}$ with $j\in\{1,\dots,N\}$ are defined in \eqref{eq:def_pj_qj}, and the operator $\widehat{n}_t$ is defined in \eqref{eq:weightn}. It is natural to regard $\widetilde{\alpha}_N(t)$ as the deviation from complete (100\%) macroscopic occupation, in the many-body state $\Psi_{N,t}$, of the one-body spinor $\begin{pmatrix} u_t \\ v_t\end{pmatrix}$; thus the vanishing of $\widetilde{\alpha}_N(t)$ has a clear meaning of condensation. In fact, as a consequence of the bounds \eqref{eq:equivalent-BEC-control}, the condition \begin{equation}\label{eq:convergence} \lim_{N\rightarrow\infty}\widetilde{\alpha}_N(t)\;=\;0 \end{equation} and the condition \begin{equation}\label{eq:thesis_again} \lim_{N\to\infty}\mathrm{Tr}_{L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\Big|\gamma_{N,t}^{(1)}-\Big|\!\begin{pmatrix} u_t \\ v_t\end{pmatrix}\!\Big\rangle\Big\langle\!\begin{pmatrix} u_t \\ v_t\end{pmatrix}\!\Big|\;\Big|\;=\;0 \end{equation} appearing in the actual thesis of Theorem \ref{theorem:main}, are \emph{equivalent}. We shall then prove \eqref{eq:convergence}. The typical way in which the smallness of $\widetilde{\alpha}_N(t)$ is controlled at $t>0$, given its smallness at $t=0$, is a Gr\"onwall-type estimate of the form \begin{equation}\label{eq:gronwalltilde} \frac{\mathrm{d}}{\mathrm{d} t}\widetilde\alpha_N(t)\;\leqslant\; C\,(\widetilde\alpha_N(t)+N^{-\eta}) \end{equation} for some constants $C,\eta>0$. In the mean-field scaling, when $V_N(x)$ is instead re-scaled as $N^{-1}V(x)$ and the emerging effective dynamics is governed by the non-linear Hartree equation, differentiating $\widetilde\alpha_N(t)$ directly in time produces terms that are as small as $\widetilde\alpha_N(t)$ itself or as some negative power of $N$, whence the desired estimate \cite{kp-2009-cmp2010,Pickl-LMP-2011}. However, in the Gross-Pitaevskii scaling, the short-distance behaviour of $V_N$ is so singular in $N$ that a direct differentiation in time yields terms that are not controllable directly by $\widetilde\alpha_N(t)$ and $N^{-\eta}$.
Indeed, the scheme of \cite{kp-2009-cmp2010,Pickl-LMP-2011} would rather give, as argued in \cite[Section 6.1.1]{Pickl-RMP-2015}, a bound of the form \begin{equation}\label{eq:temp_bound} \frac{\mathrm{d}}{\mathrm{d} t}\widetilde\alpha_N(t)\;\leqslant\; C\,\big(\widetilde\alpha_N(t)+\langle\Psi_{N,t},\widehat{n}_t\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N}+|\mathcal{E}_N[\Psi_{N,t}]-\mathcal{E}^{\mathrm{GP}}[u_t,v_t]\,|+o(1)\big) \end{equation} as $N\to\infty$, showing that in order to control the variation of $\widetilde\alpha_N(t)$ one also needs a control on the larger quantity $\langle\Psi_{N,t},\widehat{n}_t\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\geqslant\langle\Psi_{N,t},\widehat{n}_t^{\,2}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N}= \widetilde\alpha_N(t)$. In turn, \eqref{eq:temp_bound} suggests that the Gr\"onwall Lemma should rather be applied to the quantity \[ \langle\Psi_{N,t},\widehat{n}_t\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N}+|\mathcal{E}_N[\Psi_{N,t}]-\mathcal{E}^{\mathrm{GP}}[u_t,v_t]\,|\,, \] except that differentiating it in time would now produce expectations of $\widehat{n}-\widehat{n}_1$ and $\widehat{n}-\widehat{n}_2$ (the shifted operator $\widehat{n}_d$ being defined in \eqref{eq:shifted_fd}), for which the only manageable control would be in terms of the expectation of $N^{-1}\widehat{\partial_k n}$, but the derivative of the weight function $k\mapsto n(k)$ turns out to be too singular at $k=0$ to produce good estimates. Following these considerations, in analogy to the discussion in \cite[Section 6.1.1]{Pickl-RMP-2015}, one is led to select \begin{equation}\label{eq:def_a<} \alpha_N^<(t)\;:=\;\langle\Psi_{N,t},\widehat{m}_t\Psi_{N,t}\rangle_{\!\mathcal{H}_N}+|\mathcal{E}_N[\Psi_{N,t}]-\mathcal{E}^{\mathrm{GP}}[u_t,v_t]\,| \end{equation} as a convenient quantity to Gr\"onwall-control in time, where $m(k)$ is the smoothed weight function \eqref{eq:weightm} obtained by regularising $n(k)$ at small $k$. Let us recall that by construction \begin{equation}\label{eq:m-n-n2} \max\{n(k),N^{-\xi}\}\;\geqslant\; m(k)\;\geqslant\;n(k)\;\geqslant\;n^2(k)\,,\qquad k\in[0,N]\,. \end{equation} The choice of the control \eqref{eq:def_a<} with the weight \eqref{eq:weightm} turns out to provide an efficient Gr\"onwall-like bound to prove the limits \eqref{eq:thesis}-\eqref{eq:thm_thesis} of Theorem \ref{theorem:main} \emph{provided that} the potential $V_N(x)=N^2V(N x)$ is replaced with a potential with a softer scaling, \begin{equation}\label{eq:def_Vtilde} \widetilde{V}_N(x)\;:=\;N^{3\gamma-1}V(N^\gamma x) \end{equation} defined for some $\gamma\in(0,1)$, as can be seen by reasoning as in \cite[Sections 6.1.1 and 6.1.2]{Pickl-RMP-2015}. However, in the actual Gross-Pitaevskii scaling (i.e., $\gamma=1$ in \eqref{eq:def_Vtilde}) a further modification of $\alpha_N^<(t)$ is necessary, for otherwise the peculiar short scale structure induced by the zero-energy scattering problem associated with $V_N$ -- see \eqref{eq:aN}-\eqref{eq:fN_2} above -- would prevent us from closing a Gr\"onwall-type argument for $\alpha_N^<(t)$. In \cite{Pickl-RMP-2015} this difficulty for the one-component condensate is cleverly circumvented by `dressing' the projections $p_j$ (that `count' the particles in the condensate) with a typical Jastrow factor built upon the zero-energy scattering solution $f_\beta$ defined in \eqref{eq:zesp-beta}, relative to the smoothed potential $V_N-W_\beta$ defined in \eqref{eq:def_W_beta}.
In analogy to that, also in the spinor case we shall make the replacement \[ p_1\;\mapsto\;\prod_{j=2}^N\,f_\beta(x_1-x_j)p_1\prod_{k=2}^Nf_\beta(x_1-x_k)\,, \] the precise value of $\beta$ to be fixed conveniently. Let us recall from Section \ref{sect:scattering} that $f_\beta$ is actually constant in the outer region $|x|>\mathcal{O}(N^{-\beta})$ and has a smoothed behaviour as $|x|\to 0$. If now one was to re-do the projection-based analysis of \cite{kp-2009-cmp2010,Pickl-LMP-2011} with the insertion in $\frac{\mathrm{d}}{\mathrm{d} t}\widetilde{\alpha}_N(t)$ of such dressed projections, then, as shown in \cite[Section 6.2.1]{Pickl-RMP-2015}, one would get terms of the form \[ \langle\Psi_{N,t},(q_t)_1\Psi_{N,t}\rangle_{\!\mathcal{H}_N}+2(N-1)\,\mathfrak{Re}\langle\Psi_{N,t},g_\beta(x_1-x_2)(p_t)_1\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \] up to three-body re-collision terms. This finally motivates the following: \begin{definition} For $\beta\in(0,1)$ and $\Psi_{N,t}$ as in the assumptions of Theorem \ref{theorem:main}, we define at each time $t$ \begin{equation}\label{eq:def_alphaN} \begin{split} \alpha_N(t)\;&:=\;\alpha^<_N(t)-N(N-1)\,\mathfrak{Re}\langle\Psi_{N,t},g_\beta(x_1-x_2)\,R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \\ &=\;\langle\Psi_{N,t},\widehat{m}_t\Psi_{N,t}\rangle_{\!\mathcal{H}_N}+|\mathcal{E}_N[\Psi_{N,t}]-\mathcal{E}^{\mathrm{GP}}[u_t,v_t]\,| \\ &\qquad\quad -N(N-1)\,\mathfrak{Re}\langle\Psi_{N,t},g_\beta(x_1-x_2)\,R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \,, \end{split} \end{equation} where $\alpha^<_N(t)$ is defined in \eqref{eq:def_a<}, $m(k)$ is the weight function \eqref{eq:weightm}, $g_\beta$ is the cut-off function defined in \eqref{eq:defng}, and $R_{(ij),t}$ is the operator in \eqref{eq:defnr}. \end{definition} We have thus \emph{three} indicators, $\widetilde{\alpha}_N(t)$, $\alpha^<_N(t)$, and $\alpha_N(t)$. First, we see that $\alpha_N(t)$ and $\alpha^<_N(t)$ are close and coincide asymptotically as $N\to\infty$. \begin{lemma}\label{lem:aa<} Under the assumptions of Theorem \ref{theorem:main}, and for any $\beta\in(0,1)$ chosen in the definition \eqref{eq:def_alphaN} of $\alpha_N(t)$, there exist a constant $\eta>0$ and a function $C(t)>0$, that depends only on $\|u_t\|_\infty$\,, $\|v_t\|_\infty$\,, $V$, and $\beta$ and is independent of $N$, such that, for $N$ large enough, one has \begin{equation} \label{eq:apriori} |\alpha_N(t)-\alpha^<_N(t)|\;\leqslant\;C(t)\,N^{-\eta}\,. \end{equation} \end{lemma} \begin{proof} From \eqref{eq:def_alphaN} one has \begin{equation*} \begin{split} |\alpha_N(t)-\alpha^<_N(t)|\;&\leqslant\;N^2\,\big| \langle\Psi_{N,t},g_\beta(x_1-x_2)\,R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \big|\,, \end{split} \end{equation*} and from \eqref{eq:defnr} one has \[ \begin{split} \big| \langle\Psi_{N,t},g_\beta(x_1-x_2)\,R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \big| \;&\leqslant\;\big| \langle\Psi_{N,t},g_\beta(x_1-x_2)\,(p_t)_1(p_t)_2\,\widehat{m}^b_t\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \big| \\ &\quad + \big| \langle\Psi_{N,t},g_\beta(x_1-x_2)\,(p_t)_1(q_t)_2\,\widehat{m}^a_t\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\big| \\ &\quad + \big| \langle\Psi_{N,t},g_\beta(x_1-x_2)\,(q_t)_1(p_t)_2\,\widehat{m}^a_t\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \big| \\ &\leqslant\;\|g_\beta(x_1-x_2)\,(p_t)_1\|_{\mathrm{op}}\:\big(\|\widehat{m}^b_t\|_{\mathrm{op}}+\|\widehat{m}^a_t\|_{\mathrm{op}}\big) \\ &\qquad + \|(p_t)_2\: g_\beta(x_1-x_2)\|_{\mathrm{op}}\:\|\widehat{m}^a_t\|_{\mathrm{op}}\,. 
\end{split} \] Therefore, \[ \begin{split} |\alpha_N(t)-\alpha^<_N(t)|\;&\leqslant\;N^2\big(\|g_\beta(x_1-x_2)\,(p_t)_1\|_{\mathrm{op}}+ \|(p_t)_2\: g_\beta(x_1-x_2)\|_{\mathrm{op}}\big)\:\big(\|\widehat{m}^b_t\|_{\mathrm{op}}+\|\widehat{m}^a_t\|_{\mathrm{op}}\big) \\ &\leqslant\;N^2\,C(t)\,\|g_\beta\|_2\,N^{-1+\xi}\;\leqslant\;N^2\,C(t)\,N^{-1-\frac{1}{2}\beta}\,N^{-1+\xi}\;=\;C(t)\,N^{-\frac{1}{2}\beta+\xi}\,, \end{split} \] where we used \eqref{eq:ma}-\eqref{eq:mb} and \eqref{eq:dressed2}-\eqref{eq:dressed3} in the second inequality, and \eqref{eq:g2} in the third inequality, for some function $C(t)>0$ depending only on $\|u_t\|_\infty$, $\|v_t\|_\infty$, $V$, and $\beta$. Here $\xi$ is the constant used in the definition \eqref{eq:weightm} of the weight $m(k)$. By taking it small enough, i.e., $0<\xi<\frac{1}{2}\beta$, one obtains the constant $\eta:=\frac{1}{2}\beta-\xi>0$ of the thesis. \end{proof} Next, we can prove the following estimate. \begin{proposition} \label{prop:bound} Under the assumptions of Theorem \ref{theorem:main}, there exist $\beta\in(0,1)$, constants $\eta>0$ and $N_0\in\mathbb{N}$, and a function $C(t)>0$ depending on $\|u_t\|_{H^2}$\,, $\|v_t\|_{H^2}$\,, and $V$, but not on $N$, such that the bound \begin{equation}\label{eq:gronwall} |\alpha_N(t)|\;\leqslant\;C(t)\big(\alpha_N^<(0)+|\mathcal{E}_N[\Psi_{N,0}]-\mathcal{E}^{\mathrm{GP}}[u_0,v_0]|+N^{-\eta}\big)+\int_0^tC(s)\,\alpha_N^<(s)\,\mathrm{d} s \end{equation} holds for every $N\geqslant N_0$. \end{proposition} The proof of Proposition \ref{prop:bound} is the subject of Section \ref{sect:estimate}. With Lemma \ref{lem:aa<} and Proposition \ref{prop:bound} at hand, we are now ready to prove Theorem \ref{theorem:main}. \begin{proof}[Proof of Theorem \ref{theorem:main}] By means of the comparison \eqref{eq:apriori}, the bound \eqref{eq:gronwall} can be turned into \[\tag{*} \alpha_N^<(t)\;\leqslant\;\widehat{C}(t)\big(\alpha_N^<(0)+|\mathcal{E}_N[\Psi_{N,0}]-\mathcal{E}^{\mathrm{GP}}[u_0,v_0]|+N^{-\eta'}\big)+\int_0^t\widehat{C}(s)\,\alpha_N^<(s)\,\mathrm{d} s \] for suitable $\widehat{C}(t)>0$ and $\eta'>0$. The assumption (A5) now guarantees that the terms in the r.h.s.~of (*) above which are evaluated at $t=0$ are small in $N$. This is clear for the energy difference, because of the bound \eqref{eq:hypconvergence_energy}, whereas concerning $\alpha_N^<(0)$ we argue as follows. First we estimate \[ \begin{split} \langle\Psi_{N,0},\widehat{m}\:\Psi_{N,0}\rangle_{\!\mathcal{H}_N}\;&\leqslant\;N^{-\xi}+\langle\Psi_{N,0},\widehat{n}\:\Psi_{N,0}\rangle_{\!\mathcal{H}_N}\;\leqslant\;N^{-\xi}+\langle\Psi_{N,0},\widehat{n}^{\,2}\:\Psi_{N,0}\rangle_{\!\mathcal{H}_N}^{\!1/2} \\ &=\;N^{-\xi}+\sqrt{\widetilde{\alpha}_N(0)}\;\leqslant\;N^{-\xi}+\sqrt{\mathrm{Tr}_{L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\Big|\gamma_{N,0}^{(1)}-\Big| \!\!\begin{pmatrix} u_0 \\ v_0\end{pmatrix}\!\!\Big\rangle\Big\langle\!\! \begin{pmatrix} u_0 \\ v_0\end{pmatrix}\!\!\Big|\,\Big|}\;, \end{split} \] where we used \eqref{eq:m-n-n2} in the first inequality, then a Schwarz inequality, then \eqref{eq:alpha_tilde} in the third step, and finally \eqref{eq:equivalent-BEC-control} in the last inequality, and where $\xi>0$ is the constant used in the definition \eqref{eq:weightm} of the weight $m(k)$.
Then by \eqref{eq:hypconvergence} and \eqref{eq:hypconvergence_energy} of assumption (A5), \[ \begin{split} \alpha^<_N(0)\;&=\;\langle\Psi_{N,0},\widehat{m}\:\Psi_{N,0}\rangle_{\!\mathcal{H}_N}+|\mathcal{E}_N[\Psi_{N,0}]-\mathcal{E}^{\mathrm{GP}}[u_0,v_0]\,| \\ &\leqslant\;C \,(N^{-\xi}+N^{-\frac{1}{2}\eta_1}+N^{-\eta_2})\;\leqslant\; C\,N^{-\eta''} \end{split} \] for some $\eta''>0$. With these bounds at $t=0$ the inequality (*) takes the form \[ \alpha_N^<(t)\;\leqslant\; \widetilde{C}(t)\,N^{-\eta}+\int_0^t\widetilde{C}(s)|\alpha_N^<(s)|\,\mathrm{d} s \] for suitable $\widetilde{C}(t)>0$ and $\eta>0$. A Gr\"onwall-like estimate \cite[Theorem 1.3.2]{Pachpatte-ineq} then gives \[ \alpha_N^<(t)\;\leqslant\; \widetilde{C}(t)\,N^{-\eta}+\int_0^t\mathrm{d} s\:\widetilde{C}(s)^2\,N^{-\eta}e^{\int_s^t\widetilde{C}(r)\mathrm{d} r}\;\equiv\; C(t)\,N^{-\eta}\,, \] having set $C(t)$ accordingly. Owing to \eqref{eq:def_a<}, the latter estimate implies at once both \[ C(t)\,N^{-\eta}\;\geqslant\;|\mathcal{E}_N[\Psi_{N,t}]-\mathcal{E}^{\mathrm{GP}}[u_t,v_t]\,|\,, \] which yields the limit \eqref{eq:thm_thesis} of the statement of Theorem \ref{theorem:main}, and \[ \begin{split} C(t)\,N^{-\eta}\;&\geqslant\;\langle\Psi_{N,t},\widehat{m}_t\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\;\geqslant\;\langle\Psi_{N,t},\widehat{n}_t^2\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\;=\;\widetilde{\alpha}_N(t) \\ &\geqslant\;\Big(\mathrm{Tr}_{L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\Big|\gamma_{N,t}^{(1)}-\Big| \!\!\begin{pmatrix} u_t \\ v_t\end{pmatrix}\!\!\Big\rangle\Big\langle\!\! \begin{pmatrix} u_t \\ v_t\end{pmatrix}\!\!\Big|\,\Big)^{\!2} \end{split} \] (where we used \eqref{eq:m-n-n2} in the second step, \eqref{eq:alpha_tilde} in the third, and \eqref{eq:equivalent-BEC-control} in the fourth), which yields the limit \eqref{eq:convergence} and hence also the limit \eqref{eq:thesis} of the statement of Theorem \ref{theorem:main}. \end{proof} \section{Proof of Proposition \ref{prop:bound}}\label{sect:estimate} This Section is devoted to the proof of Proposition \ref{prop:bound}, which is the missing step to complete the proof of Theorem \ref{theorem:main}. In order to produce the estimate \eqref{eq:gronwall}, it is convenient to re-express the quantity $\alpha_N(t)$ defined in \eqref{eq:def_alphaN} as \begin{equation}\label{eq:alpha_beta_energy} \alpha_N(t)\;=\;|\mathcal{E}_N[\Psi_{N,t}]-\mathcal{E}^{\mathrm{GP}}[u_t,v_t]\,|+\delta_N(t)\,, \end{equation} where \begin{equation}\label{eq:def_deltaN} \delta_N(t)\;:=\; \langle\Psi_{N,t},\widehat{m}_t\Psi_{N,t}\rangle_{\!\mathcal{H}_N}-N(N-1)\,\mathfrak{Re}\langle\Psi_{N,t},g_\beta(x_1-x_2)\,R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \,, \end{equation} and to analyse the two summands in the r.h.s.~of \eqref{eq:alpha_beta_energy} separately. In analogy to \cite{Pickl-RMP-2015}, we introduce the following $(N,t)$-dependent quantities. \begin{itemize} \item[(a)] A quantity that, as shown later, controls the energy difference in \eqref{eq:alpha_beta_energy}, and precisely \begin{equation}\label{eq:def_dN_a} \delta^{(a)}_{N}\!(t)\;:=\;\langle\Psi_{N,t}, \dot S(x_1,t)\Psi_{N,t}\rangle_{\!\mathcal{H}_N}-\Big\langle\!\! \begin{pmatrix}u_t\\v_t\end{pmatrix}\! ,\,\dot S(t)\begin{pmatrix}u_t\\v_t\end{pmatrix}\!\!\Big\rangle_{\!L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\,. 
\end{equation} Here, consistently with the notation of Sections \ref{sec:intro} and \ref{sect:main}, $S(x_j,t)$ denotes the operator-valued matrix $S$ (defined in \eqref{eq:matrixS}) acting on the spatial and spin degrees of freedom of the $j$-th particle. \item[(b)] A `modified interaction term', containing the new potential $W_\beta$ defined in \eqref{eq:def_W_beta} as well as the function $f_\beta$ introduced in \eqref{eq:zesp-beta}: \begin{equation}\label{eq:def_dN_b} \begin{split} \delta_N^{(b)}\!(t)\;:=&\;-N(N-1)\,\mathfrak{Im}\Big(\langle\Psi_{N,t},Z^{(\beta)}_{N,t}(x_1,x_2)R_{(12),t}\Psi_{N,t}\rangle_{\!\mathcal{H}_N}+ \\ &\qquad\qquad\qquad\qquad+ \langle\Psi_{N,t},g_\beta(x_1-x_2)R_{(12),t}\,Z_{N,t}(x_1,x_2)\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\Big)\,, \end{split} \end{equation} where \begin{equation} \begin{split}\label{eq:defz_beta} Z^{(\beta)}_{N,t}&(x_1,x_2)\;:=\;f_\beta(x_1-x_2)\times \\ & \; \times \Big(W_\beta(x_1-x_2)-{\textstyle\frac{8\pi a}{N-1}} (|u_t(x_1)|^2+|v_t(x_1)|^2+|u_t(x_2)|^2+|v_t(x_2)|^2)\Big) \end{split} \end{equation} and \begin{equation} \label{eq:defz} Z_{N,t}(x_1,x_2)\;:=\;V_N(x_1-x_2)-{\textstyle\frac{8\pi a}{N-1}}\Big(|u_t(x_1)|^2+|v_t(x_1)|^2+|u_t(x_2)|^2+|v_t(x_2)|^2\Big)\,. \end{equation} \item[(c)] A term containing mixed spatial derivatives of $g_\beta$ and $R_{(12),t}$\,: \begin{equation}\label{eq:def_dN_c} \delta_N^{(c)}\!(t)\;:=\;-4N(N-1)\mathfrak{Im}\langle\Psi_{N,t},\nabla_1g_\beta(x_1-x_2)\nabla_1R_{(12),t}\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\,. \end{equation} \item[(d)] A `three particle correction' term: \begin{equation}\label{eq:def_dN_d} \begin{split} \delta_N^{(d)}\!(t)\;:=\;N(N-1)&(N-2)\, \mathfrak{Im}\,\langle\Psi_{N,t},g_\beta(x_1-x_2)\\ &\times[V_N(x_1-x_3)+8\pi a(|u_t(x_3)|^2+|v_t(x_3)|^2),R_{(12),t} ]\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\,. \end{split} \end{equation} \item[(e)] A `four particle correction' term: \begin{equation}\label{eq:def_dN_e} \begin{split} \delta_N^{(e)}\!(t)\;&:=\;{\textstyle\frac{1}{2}}\,N(N-1)(N-2)(N-3)\,\times \\ &\qquad\times\,\mathfrak{Im}\,\langle\Psi_{N,t},g_\beta(x_1-x_2)[V_N(x_3-x_4),R_{(12),t} ]\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\,. \end{split} \end{equation} \item[(f)] A `correction term' for the mean-field potential: \begin{equation}\label{eq:def_dN_f} \begin{split} &\!\!\!\!\!\!\!\!\!\!\delta_N^{(f)}\!(t)\;:=\;N(N-2)\,\times \\ &\!\!\!\!\!\!\!\!\!\!\times\,\mathfrak{Im}\langle\Psi_{N,t},g_\beta(x_1-x_2)\big[|u_t(x_1)|^2+|u_t(x_2)|^2+|v_t(x_1)|^2+|v_t(x_2)|^2,R_{(12),t} \big]\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\,. \end{split} \end{equation} \end{itemize} Concerning the quantities above, we shall establish the following three results. \begin{lemma}\label{lem:enest} Under the assumptions of Theorem \ref{theorem:main}, one has \begin{equation}\label{eq:enest} \frac{\mathrm{d}}{\mathrm{d} t}\big(\mathcal{E}_N[\Psi_{N,t}]-\mathcal{E}^{\mathrm{GP}}[u_t,v_t]\big)\;=\;\delta^{(a)}_{N}\!(t)\,,\qquad t\geqslant 0\,. \end{equation} \end{lemma} \begin{proposition} \label{prop:alphaestimate} Under the hypothesis of Theorem \ref{theorem:main}, one has \begin{equation}\label{eq:ddelta_abcdef} \frac{\mathrm{d}}{\mathrm{d} t}\delta_N(t)\;=\; \delta_N^{(b)}\!(t)+\delta_N^{(c)}\!(t)+\delta_N^{(d)}\!(t)+\delta_N^{(e)}\!(t)+\delta_N^{(f)}\!(t)\,,\qquad t\geqslant 0\,.
\end{equation} \end{proposition} \begin{proposition}\label{prop:each_g_abcdef} Under the hypothesis of Theorem \ref{theorem:main}, for any $j\in\{a,b,c,d,e,f\}$ one has \begin{equation}\label{eq:bound_dN} |\delta_N^{(j)}\!(t)|\;\leqslant\;C(t)(\alpha^<_N(t)+N^{-\eta})\,,\qquad t\geqslant 0,\quad N\geqslant N_0\,, \end{equation} for some constants $\eta>0$ and $N_0\in\mathbb{N}$, and some function $C(t)>0$ depending on $\|u_t\|_{H^2}$\,, $\|v_t\|_{H^2}$\,, and $V$, but not on $N$. \end{proposition} With Lemma \ref{lem:enest}, Proposition \ref{prop:alphaestimate}, and Proposition \ref{prop:each_g_abcdef} at hand, we are able to obtain \eqref{eq:gronwall}. \begin{proof}[Proof of Proposition \ref{prop:bound}] Integrating \eqref{eq:ddelta_abcdef} in time and using the bounds \eqref{eq:bound_dN} yields \[ |\delta_N(t)|\;\leqslant \;C(0)(\alpha^<_N(0)+N^{-\eta})+\int_0^t C(s)(\alpha^<_N(s)+N^{-\eta})\,\mathrm{d} s\,. \] In turn, integrating \eqref{eq:enest} in time and using the bound \eqref{eq:bound_dN} yields \[ |\mathcal{E}_N[\Psi_{N,t}]-\mathcal{E}^{\mathrm{GP}}[u_t,v_t]\,|\;\leqslant\;|\mathcal{E}_N[\Psi_{N,0}]-\mathcal{E}^{\mathrm{GP}}[u_0,v_0]\,|+\int_0^t C(s)(\alpha^<_N(s)+N^{-\eta})\,\mathrm{d} s\,. \] Combining the last two inequalities above, and using \eqref{eq:alpha_beta_energy}, one then obtains \[ |\alpha_N(t)|\;\leqslant\;\widetilde{C}(t)\big(\alpha_N^<(0)+|\mathcal{E}_N[\Psi_{N,0}]-\mathcal{E}^{\mathrm{GP}}[u_0,v_0]|+N^{-\eta}\big)+\int_0^t\widetilde{C}(s)\,\alpha_N^<(s)\,\mathrm{d} s \] for suitable $\widetilde{C}(t)>0$, thus concluding the proof. \end{proof} To complete our programme, we pass now to the proof of Lemma \ref{lem:enest}, Proposition \ref{prop:alphaestimate}, and Proposition \ref{prop:each_g_abcdef}. \begin{proof}[Proof of Lemma \ref{lem:enest}] Let us consider the time derivative of each energy functional separately. For $\mathcal{E}_N[\Psi_{N,t}]=N^{-1}\langle\Psi_{N,t},H_N\Psi_{N,t}\rangle_{\!\mathcal{H}_N}$, the action of the time derivative on the two vectors $\Psi_{N,t}$ produces a null term, due to \eqref{eq:Cauchy_problem} and to the self-adjointness of $H_N$, so what remains is the expectation of the time derivative of the time-dependent part of $H_N$ itself. Owing to the bosonic symmetry, this is precisely \begin{equation}\label{eq:derivative_linear} \frac{\mathrm{d}}{\mathrm{d} t}\mathcal{E}_N[\Psi_{N,t}]\;=\;\langle\Psi_{N,t}, \dot S(x_1,t)\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\,. \end{equation} As for $ \mathcal{E}^{\mathrm{GP}}$, let us introduce the spinorial Hamiltonian \begin{equation} \label{eq:spin_hamiltonian} h^{\mathrm{GP}}\;:=\;\begin{pmatrix} h^{(u,v)}_{11}&S_{12}\\S_{21}&h^{(u,v)}_{22} \end{pmatrix} \end{equation} with entries defined in \eqref{eq:matrixS} and \eqref{eq:onebody-nonlin-Hamilt}. At each $t\geqslant 0$ the operator $h^{\mathrm{GP}}(t)$ acts on the one-body Hilbert space $L^2(\mathbb{R}^3)\otimes\mathbb{C}^2$. In terms of $h^{\mathrm{GP}}$, \[ \mathcal{E}^{\mathrm{GP}}[u_t,v_t]\;:=\;\Big\langle\!\!\begin{pmatrix} u_t \\ v_t \end{pmatrix}\!,\bigg[\,(h^{\mathrm{GP}})(t)-\begin{pmatrix} 4\pi a\big( |u_t|^2+|v_t|^2\big) &0\\0&4\pi a\big( |u_t|^2+|v_t|^2\big) \end{pmatrix}\bigg]\! \begin{pmatrix} u_t\\ v_t \end{pmatrix}\!\!\Big\rangle_{\!L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\,. 
\] Then, using \eqref{eq:coupled}, \begin{equation}\label{eq:time_derivative_partial} \begin{split} &\frac{\mathrm{d}}{\mathrm{d} t}\mathcal{E}^{\mathrm{GP}}[u_t,v_t]\;=\; \\ &\quad=\; -\mathrm{i}\,\Big\langle\!\!\begin{pmatrix} u_t \\ v_t \end{pmatrix}\!,\bigg[\,h^{\mathrm{GP}}-\begin{pmatrix} 4\pi a\big( |u_t|^2+|v_t|^2\big) &0\\0&4\pi a\big( |u_t|^2+|v_t|^2\big) \end{pmatrix}, h^{\mathrm{GP}} \bigg]\! \begin{pmatrix} u_t\\ v_t \end{pmatrix}\!\!\Big\rangle_{\!L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\\ &\qquad\quad +\Big\langle\!\!\begin{pmatrix} u_t \\ v_t \end{pmatrix}\!,\bigg(\,\frac{\mathrm{d}}{\mathrm{d} t}\begin{pmatrix} 4\pi a\big( |u_t|^2+|v_t|^2\big) &0\\0&4\pi a\big( |u_t|^2+|v_t|^2\big) \end{pmatrix} \!\!\bigg)\! \begin{pmatrix} u_t\\ v_t \end{pmatrix}\!\!\Big\rangle_{\!L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\\ &\qquad\quad+\Big\langle\!\!\begin{pmatrix} u_t \\ v_t \end{pmatrix}\!,\dot S(t)\! \begin{pmatrix} u_t\\ v_t \end{pmatrix}\!\!\Big\rangle_{\!L^2(\mathbb{R}^3)\otimes\mathbb{C}^2} \,. \end{split} \end{equation} For the second summand of \eqref{eq:time_derivative_partial} we compute, by the Leibniz rule, \[ \begin{split} \Big\langle&\!\!\begin{pmatrix} u_t \\ v_t \end{pmatrix}\!,\bigg(\,\frac{\mathrm{d}}{\mathrm{d} t}\begin{pmatrix} |u_t|^2+|v_t|^2 &0\\0& |u_t|^2+|v_t|^2 \end{pmatrix} \!\!\bigg)\! \begin{pmatrix} u_t\\ v_t \end{pmatrix}\!\!\Big\rangle_{\!L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\\ =&\;\big\langle u_t,\Big[\big(\partial_t\overline u_t\big)u_t+\overline u_t\big(\partial_tu_t\big)\Big] u_t\big\rangle_{L^2(\mathbb{R}^3)}\;+\;\big\langle u_t,\Big[\big(\partial_t\overline v_t\big)v_t+\overline v_t\big(\partial_tv_t\big)\Big] u_t\big\rangle_{L^2(\mathbb{R}^3)}\\ &+\;\big\langle v_t,\Big[\big(\partial_t\overline u_t\big)u_t+\overline u_t\big(\partial_tu_t\big)\Big] v_t\big\rangle_{L^2(\mathbb{R}^3)}\;+\;\big\langle v_t,\Big[\big(\partial_t\overline v_t\big)v_t+\overline v_t\big(\partial_tv_t\big)\Big] v_t\big\rangle_{L^2(\mathbb{R}^3)}\\ \;=&\;\big\langle \partial_t u_t,|u_t|^2u_t\big\rangle_{L^2(\mathbb{R}^3)}\;+\;\big\langle u_t,|u_t|^2\big(\partial_tu_t\big)\big\rangle_{L^2(\mathbb{R}^3)}+\big\langle \partial_t v_t,|u_t|^2v_t\big\rangle_{L^2(\mathbb{R}^3)}\;+\;\big\langle v_t,|u_t|^2\big(\partial_tv_t\big)\big\rangle_{L^2(\mathbb{R}^3)}\\ &+\;\big\langle \partial_t u_t,|v_t|^2u_t\big\rangle_{L^2(\mathbb{R}^3)}\;+\;\big\langle u_t,|v_t|^2\big(\partial_tu_t\big)\big\rangle_{L^2(\mathbb{R}^3)}+\big\langle \partial_t v_t,|v_t|^2v_t\big\rangle_{L^2(\mathbb{R}^3)}\;+\;\big\langle v_t,|v_t|^2\big(\partial_tv_t\big)\big\rangle_{L^2(\mathbb{R}^3)}. \end{split} \] Bringing the latter expression back to spinorial form gives \[ \begin{split} \Big\langle\!\!\begin{pmatrix} u_t \\ v_t \end{pmatrix}\!,\bigg(\,\frac{\mathrm{d}}{\mathrm{d} t}&\begin{pmatrix} |u_t|^2+|v_t|^2 &0\\0& |u_t|^2+|v_t|^2 \end{pmatrix} \!\!\bigg)\! \begin{pmatrix} u_t\\ v_t \end{pmatrix}\!\!\Big\rangle_{\!L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\\ =&\;\,\Big\langle\frac{\mathrm{d}}{\mathrm{d} t}\begin{pmatrix} u_t \\ v_t \end{pmatrix},\begin{pmatrix} |u_t|^2+|v_t|^2 &0\\0& |u_t|^2+|v_t|^2 \end{pmatrix} \! 
\begin{pmatrix} u_t\\ v_t \end{pmatrix}\!\!\Big\rangle_{\!L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\\ &\;+\,\Big\langle\!\!\begin{pmatrix} u_t\\ v_t \end{pmatrix},\begin{pmatrix} |u_t|^2+|v_t|^2 &0\\0& |u_t|^2+|v_t|^2 \end{pmatrix} \frac{\mathrm{d}}{\mathrm{d} t}\begin{pmatrix} u_t \\ v_t \end{pmatrix}\!\!\Big\rangle_{\!L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}, \end{split} \] which, since by \eqref{eq:coupled} and \eqref{eq:spin_hamiltonian} the time derivatives produce $-\mathrm{i} h^{\mathrm{GP}}$, yields \[ \begin{split} \Big\langle\!\!\begin{pmatrix} u_t \\ v_t \end{pmatrix}\!,\bigg(\,\frac{\mathrm{d}}{\mathrm{d} t}&\begin{pmatrix} |u_t|^2+|v_t|^2 &0\\0& |u_t|^2+|v_t|^2 \end{pmatrix} \!\!\bigg)\! \begin{pmatrix} u_t\\ v_t \end{pmatrix}\!\!\Big\rangle_{\!L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\\ & \quad=\;-\,\mathrm{i}\,\Big\langle\!\!\begin{pmatrix} u_t \\ v_t \end{pmatrix}\!,\bigg[\,\begin{pmatrix} |u_t|^2+|v_t|^2 &0\\0& |u_t|^2+|v_t|^2 \end{pmatrix}, h^{\mathrm{GP}} \bigg]\! \begin{pmatrix} u_t\\ v_t \end{pmatrix}\!\!\Big\rangle_{\!L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}. \end{split} \] This identity shows that an exact cancellation takes place between the first two summands of \eqref{eq:time_derivative_partial}, and using also $[h^{\mathrm{GP}},h^{\mathrm{GP}}]=\mathbb{O}$, one gets \[ \frac{\mathrm{d}}{\mathrm{d} t}\mathcal{E}^{\mathrm{GP}}[u_t,v_t]\;=\;\Big\langle\!\!\begin{pmatrix} u_t \\ v_t \end{pmatrix}\!,\dot S(t)\! \begin{pmatrix} u_t\\ v_t \end{pmatrix}\!\!\Big\rangle_{\!L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}. \] Comparing the quantities $\frac{\mathrm{d}}{\mathrm{d} t}\mathcal{E}_N[\Psi_{N,t}]$ and $\frac{\mathrm{d}}{\mathrm{d} t}\mathcal{E}^{\mathrm{GP}}[u_t,v_t]$ computed above with \eqref{eq:def_dN_a} finally yields the conclusion \eqref{eq:enest}. \end{proof} Next, in order to establish the identity \eqref{eq:ddelta_abcdef} of Proposition \ref{prop:alphaestimate}, let us single out the following fact. \begin{lemma} \label{lemma:ddt} Under the assumptions of Theorem \ref{theorem:main}, one has \[ \frac{\mathrm{d}}{\mathrm{d} t}\langle\Psi_{N,t},\widehat{m}_t \Psi_{N,t}\rangle_{\!\mathcal{H}_N}\;=\;-N(N-1)\,{\mathfrak{Im}}\,\langle\Psi_{N,t},Z_{N,t}(x_1,x_2)\,R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\,,\qquad t\geqslant 0\,, \] with $Z_{N,t}(x_1,x_2)$ defined in \eqref{eq:defz}. \end{lemma} \begin{proof} First, owing to \eqref{eq:coupled} and to the definition \eqref{eq:spin_hamiltonian} of $h^{\mathrm{GP}}$, \begin{equation}\label{eq:derivative_projector_p} \begin{split} \frac{\mathrm{d}}{\mathrm{d} t}\,p_t\;&=\;\frac{\mathrm{d}}{\mathrm{d} t}\bigg(\:\Big| \!\!\begin{pmatrix} u_t \\ v_t\end{pmatrix}\!\!\Big\rangle\Big\langle\!\! \begin{pmatrix} u_t \\ v_t\end{pmatrix}\!\!\Big|\:\bigg)\;=\; -\mathrm{i}\,[h^{\mathrm{GP}}(t),p_t] \\ \frac{\mathrm{d}}{\mathrm{d} t}\,q_t\;&=\;-\frac{\mathrm{d}}{\mathrm{d} t}\,p_t\;=\;\mathrm{i}\,[h^{\mathrm{GP}}(t),p_t] \;=\;-\mathrm{i}\,[h^{\mathrm{GP}}(t),q_t]\,, \end{split} \end{equation} whence, differentiating in time in \eqref{eq:defPk}, \begin{equation}\label{eq:derivative_projector_P} \frac{\mathrm{d}}{\mathrm{d} t}\,P_k\;=\;-\mathrm{i}\Big[\sum_{j=1}^Nh^{\mathrm{GP}}_j,P_k\Big],\qquad k\in\{0,\dots,N\}\,.
\end{equation} When the time derivative of $\langle\Psi_{N,t},\widehat{m}_t \Psi_{N,t}\rangle_{\!\mathcal{H}_N}$ hits the $\Psi_{N,t}$'s, this produces a commutator term $[H_N,\widehat{m}_t]$, owing to \eqref{eq:Cauchy_problem}, whereas when the time derivative hits each $P_k$ in $\widehat{m}_t$, this produces a commutator term of the form \eqref{eq:derivative_projector_P}. Thus, \[ \frac{\mathrm{d}}{\mathrm{d} t}\,\langle\Psi_{N,t},\widehat{m}_t \Psi_{N,t}\rangle_{\!\mathcal{H}_N}\;=\;\mathrm{i} \,\big\langle\Psi_{N,t},\big[H_N-\sum_{j=1}^N h^{\mathrm{GP}}_j,\,\widehat{m}_t\big]\Psi_{N,t}\big\rangle_{\!\mathcal{H}_N}\,. \] In the r.h.s.~above an exact cancellation occurs between the terms $\sum_{j=1}^N(-\Delta_{x_j})+\sum_{j=1}^N S(x_j,t)$ given by $H_N$ and the same terms given by $\sum_{j=1}^N h^{\mathrm{GP}}_j$, and what remains is \[ \frac{\mathrm{d}}{\mathrm{d} t}\,\langle\Psi_{N,t},\widehat{m}_t \Psi_{N,t}\rangle_{\!\mathcal{H}_N}\;=\;\mathrm{i} \,\big\langle\Psi_{N,t},\big[\sum_{i<j}^NV_N(x_i-x_j)-\sum_{i=1}^N8\pi a(|u_t(x_i)|^2+|v_t(x_i)|^2),\,\widehat{m}_t\big]\Psi_{N,t}\big\rangle_{\!\mathcal{H}_N}\,. \] Because of the bosonic symmetry of $\Psi_{N,t}$ and $\widehat{m}_t$, the identity above reads also \[ \begin{split} &\frac{\mathrm{d}}{\mathrm{d} t}\,\langle\Psi_{N,t},\widehat{m}_t \Psi_{N,t}\rangle_{\!\mathcal{H}_N}\;= \\ &\;\; =\;{\textstyle\frac{1}{2}}\,\mathrm{i}\,N(N-1)\big\langle\Psi_{N,t},\big[V_N(x_1-x_2)\\ &\qquad\qquad\qquad\qquad-{\textstyle\frac{8\pi a}{N-1}}(|u_t(x_1)|^2+|v_t(x_1)|^2+|u_t(x_2)|^2+|v_t(x_2)|^2),\,\widehat{m}_t\big]\Psi_{N,t}\big\rangle_{\!\mathcal{H}_N}\\ &\;\;=\;{\textstyle\frac{1}{2}}\,\mathrm{i}\,N(N-1)\langle\Psi_{N,t},[Z_{N,t}(x_1,x_2),\,\widehat{m}_t]\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\,, \end{split} \] where $Z_{N,t}(x_1,x_2)$ is defined in \eqref{eq:defz}. Last, \[\begin{split} \frac{\mathrm{d}}{\mathrm{d} t}\,\langle\Psi_{N,t},\widehat{m}_t \Psi_{N,t}\rangle_{\!\mathcal{H}_N}\;&=\;{\textstyle\frac{1}{2}}\,\mathrm{i}\,N(N-1)\,\langle\Psi_{N,t},[Z_{N,t}(x_1,x_2),R_{(12),t}]\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\\ &=\;-N(N-1)\,\mathfrak{Im}\,\langle\Psi_{N,t},Z_{N,t}(x_1,x_2)R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\,, \end{split} \] where in the first identity we used \eqref{eq:commut_r} of Lemma \ref{lemma:tools}(iii) and in the second identity we used the property $\langle\varphi,[A,B]\varphi\rangle\;=\;2\,\mathrm{i}\,\mathfrak{Im}\langle\varphi, AB\varphi\rangle$ of bounded symmetric operators $A$ and $B$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:alphaestimate}] It follows at once from the definition \eqref{eq:def_deltaN} of $\delta_N$ and from Lemma \ref{lemma:ddt} above that \[ \begin{split} \frac{\mathrm{d}}{\mathrm{d} t}\delta_N(t)\;&=\;-N(N-1)\,{\mathfrak{Im}}\,\langle\Psi_{N,t},Z_{N,t}(x_1,x_2)\,R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \\ &\qquad -N(N-1)\,\frac{\mathrm{d}}{\mathrm{d} t}\mathfrak{Re}\langle\Psi_{N,t},g_\beta(x_1-x_2)\,R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \,. \end{split} \] In the r.h.s.~above the time derivative can either hit the $\Psi_{N,t}$'s, thus producing $H_N\Psi_{N,t}$ via \eqref{eq:Cauchy_problem}, or the operator $R_{(12),t}$: in the latter case, we see from the definition \eqref{eq:defnr} of $R_{(ij)}$ and from \eqref{eq:derivative_projector_p} that \[ \frac{\mathrm{d}}{\mathrm{d} t}\,R_{(k\ell)}\;=\;-\mathrm{i}\Big[\sum_{j=1}^Nh^{\mathrm{GP}}_j,R_{(k\ell)}\Big]\,.
\] Therefore, \[ \begin{split} \frac{\mathrm{d}}{\mathrm{d} t}\delta_N(t)\;&=\;-N(N-1)\,{\mathfrak{Im}}\,\langle\Psi_{N,t},Z_{N,t}(x_1,x_2)\,R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\\ &\qquad-\,N(N-1)\,\mathfrak{Im}\,\big\langle\Psi_{N,t},g_\beta(x_1-x_2)\big[\sum_{j=1}^N h^{\mathrm{GP}}_j,R_{(12),t}\big]\Psi_{N,t}\big\rangle_{\!\mathcal{H}_N}\\ &\qquad-\,N(N-1)\,\mathfrak{Im}\,\langle\Psi_{N,t},g_\beta(x_1-x_2)R_{(12),t}\,H_N\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\\ &\qquad+\,N(N-1)\,\mathfrak{Im}\,\langle\Psi_{N,t},H_N \,g_\beta(x_1-x_2)R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\\ &=\;-N(N-1)\,{\mathfrak{Im}}\,\langle\Psi_{N,t},Z_{N,t}(x_1,x_2)\,R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\\ &\qquad +N(N-1)\,\mathfrak{Im}\,\langle\Psi_{N,t},[\,H_N,g_\beta(x_1-x_2)]\,R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\\ &\qquad+N(N-1)\,\mathfrak{Im}\,\langle\Psi_{N,t},g_\beta(x_1-x_2)\big[H_N-\sum_{j=1}^Nh^{\mathrm{GP}}_j,R_{(12),t}\big]\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\,. \end{split} \] In the last summand above, both $g_\beta(x_1-x_2)$ and $R_{(12)}$ break the full bosonic symmetry: as a consequence, $H_N-\sum_{j=1}^Nh^{\mathrm{GP}}_j$ produces several terms, depending on the presence or absence of the variables $x_1$ and $x_2$. We find \begin{align*} &\frac{\mathrm{d}}{\mathrm{d} t}\delta_N(t)\;=\;-N(N-1)\,{\mathfrak{Im}}\,\langle\Psi_{N,t},Z_{N,t}(x_1,x_2)\,R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \\ &\;+N(N-1)\,\mathfrak{Im}\,\langle\Psi_{N,t},[\,H_N,g_\beta(x_1-x_2)]\,R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \\ &\;+N(N-1)\mathfrak{Im}\langle\Psi_{N,t},g_\beta(x_1-x_2)[Z_{N,t}(x_1,x_2),R_{(12),t}]\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \\ &\;+N(N-2)\mathfrak{Im}\langle\Psi_{N,t},g_\beta(x_1-x_2)\big[|u_t(x_1)|^2+|u_t(x_2)|^2+|v_t(x_1)|^2+|v_t(x_2)|^2,R_{(12),t} \big]\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \\ &\;+N(N-1)(N-2)\mathfrak{Im}\,\langle\Psi_{N,t},g_\beta(x_1-x_2)[V_N(x_1-x_3)+8\pi a(|u_t(x_3)|^2+|v_t(x_3)|^2),R_{(12),t} ]\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \\ &\;+{\textstyle\frac{1}{2}}\,N(N-1)(N-2)(N-3)\mathfrak{Im}\,\langle\Psi_{N,t},g_\beta(x_1-x_2)[V_N(x_3-x_4),R_{(12),t} ]\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\,. \end{align*} The last \emph{three} summands in the r.h.s.~above are recognised to be, respectively, $\delta_N^{(f)}(t)$, $\delta_N^{(d)}(t)$, and $\delta_N^{(e)}(t)$, whence \[ \begin{split} \frac{\mathrm{d}}{\mathrm{d} t}\delta_N(t)\;=&\;\delta_N^{(d)}(t) + \delta_N^{(e)}(t)+\delta_N^{(f)}(t)+N(N-1)\,\mathfrak{Im}\,\langle\Psi_{N,t},[\,H_N,g_\beta(x_1-x_2)]\,R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \\ &\;-N(N-1)\,\mathfrak{Im}\,\langle\Psi_{N,t},(1-g_\beta(x_1-x_2))\, Z_{N,t}(x_1,x_2)\,R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \\ &\;-N(N-1)\,\mathfrak{Im}\,\langle\Psi_{N,t},g_\beta(x_1-x_2)\,R_{(12),t}\, Z_{N,t}(x_1,x_2)\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \,.
\end{split} \] By means of the identity \begin{equation*} (1-g_\beta(x_1-x_2))Z_{N,t}(x_1,x_2)=Z_{N,t}^{(\beta)}(x_1,x_2)+(V_N(x_1-x_2)-W_\beta(x_1-x_2))f_\beta(x_1-x_2), \end{equation*} that follows from \eqref{eq:defng}, \eqref{eq:defz_beta}, and \eqref{eq:defz}, the above expression for $\frac{\mathrm{d}}{\mathrm{d} t}\delta_N(t)$ takes the form \[ \begin{split} \frac{\mathrm{d}}{\mathrm{d} t}\delta_N(t)\;&=\;\delta_N^{(d)}(t) + \delta_N^{(e)}(t)+\delta_N^{(f)}(t) \\ &\qquad -N(N-1)\,\mathfrak{Im}\Big(\langle\Psi_{N,t},Z^{(\beta)}_{N,t}(x_1,x_2)R_{(12),t}\Psi_{N,t}\rangle_{\!\mathcal{H}_N}+ \\ &\qquad \qquad\qquad\qquad\qquad+ \langle\Psi_{N,t},g_\beta(x_1-x_2)R_{(12),t}\,Z_{N,t}(x_1,x_2)\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\Big)\,, \\ &\qquad -N(N-1)\,\mathfrak{Im}\,\langle\Psi_{N,t},(V_N(x_1-x_2)-W_\beta(x_1-x_2))f_\beta(x_1-x_2)R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\\ &\qquad +N(N-1)\,\mathfrak{Im}\,\langle\Psi_{N,t},[\,H_N,g_\beta(x_1-x_2)]\,R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \\ & = \;\delta_N^{(b)}(t) + \delta_N^{(d)}(t) + \delta_N^{(e)}(t)+\delta_N^{(f)}(t) \\ &\qquad -N(N-1)\,\mathfrak{Im}\,\langle\Psi_{N,t},(V_N(x_1-x_2)-W_\beta(x_1-x_2))f_\beta(x_1-x_2)R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\\ &\qquad +N(N-1)\,\mathfrak{Im}\,\langle\Psi_{N,t},[\,H_N,g_\beta(x_1-x_2)]\,R_{(12),t}\,\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\,. \end{split} \] Last, let us focus on the last two summands in the r.h.s.~above. Precisely at this level a cancellation occurs in which the difference $V_N-W_\beta$ is controlled by the commutator $[H_N,g_\beta]$, at the cost of the further term $\delta_N^{(c)}$ that is going to appear in a moment. We compute \[ \begin{split} [H_N&,g_\beta(x_1-x_2)]\;=\;-[H_N,f_\beta(x_1-x_2)]\;=\;[\Delta_{x_1}+\Delta_{x_2},f_\beta(x_1-x_2)] \\ &=\;(\Delta_{x_1}+\Delta_{x_2})f_\beta(x_1-x_2)+2(\nabla_{x_1}f_\beta(x_1-x_2))\nabla_{x_1}+2(\nabla_{x_2}f_\beta(x_1-x_2))\nabla_{x_2} \\ &=\;(V_N(x_1-x_2)-W_\beta(x_1-x_2))f_\beta(x_1-x_2)\\ &\quad\,\,\,\,-2(\nabla_{x_1}g_\beta(x_1-x_2))\nabla_{x_1}-2(\nabla_{x_2}g_\beta(x_1-x_2))\nabla_{x_2} \end{split} \] having used \eqref{eq:defng} in the first identity and the zero-energy scattering equation \eqref{eq:zesp-beta} in the last one. We thus see that the $(V_N-W_\beta)f_\beta$-term gets cancelled out in the above expression for $\frac{\mathrm{d}}{\mathrm{d} t}\delta_N(t)$, whereas the $(\nabla g)\nabla$-terms produce precisely the expression \eqref{eq:def_dN_c} for $\delta_N^{(c)}$. The conclusion is \[ \frac{\mathrm{d}}{\mathrm{d} t}\delta_N(t)\;=\;\delta_N^{(b)}(t) +\delta_N^{(c)}(t) +\delta_N^{(d)}(t) + \delta_N^{(e)}(t)+\delta_N^{(f)}(t)\,, \] which completes the proof. \end{proof} Last, we establish the bounds \eqref{eq:bound_dN}. \begin{proof}[Proof of Proposition \ref{prop:each_g_abcdef}] Let us discuss each case $\delta_N^{(j)}$, $j\in\{a,b,c,d,e,f\}$ separately. \medskip \noindent\textbf{Term $\delta_N^{(a)}$.} We recall that \[ \delta_N^{(a)}(t)\;=\;\langle\Psi_{N,t}, \dot S(x_1,t)\Psi_{N,t}\rangle_{\!\mathcal{H}_N}-\Big\langle \begin{pmatrix}u_t\\v_t\end{pmatrix} ,\,\dot S(t)\begin{pmatrix}u_t\\v_t\end{pmatrix}\Big\rangle_{L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\,. 
\] Inserting $\mathbbm{1}=p_t+q_t$ into the first summand yields \[ \begin{split} \delta_N^{(a)}(t)\;=\;&\langle\Psi_{N,t}, (p_t)_1\dot S(x_1,t)(p_t)_1\Psi_{N,t}\rangle_{\!\mathcal{H}_N}+\langle\Psi_{N,t}, (q_t)_1\dot S(x_1,t)(q_t)_1\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\\ &\;+2 \mathfrak{Re} \langle\Psi_{N,t}, (q_t)_1\dot S(x_1,t)(p_t)_1\Psi_{N,t}\rangle_{\!\mathcal{H}_N}-\Big\langle \begin{pmatrix}u_t\\v_t\end{pmatrix} ,\,\dot S(x,t)\begin{pmatrix}u_t\\v_t\end{pmatrix}\Big\rangle_{L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\,. \end{split} \] We then use the identity \begin{equation}\label{eq:starid} p_1A(x_1)p_1=p_1\Big\langle\begin{pmatrix}u\\v\end{pmatrix} ,\,A(x)\begin{pmatrix}u\\v\end{pmatrix}\Big\rangle_{L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\,, \end{equation} which is valid for any 2x2 operator-valued matrix $A(x)$, the $L^\infty$-boundedness of $\dot S$ (see assumption (A1)), and the invertibility of $\widehat{n}_t^{1/2}$ on the range of $(q_t)_1$ (i.e., $\mathbbm{1}_{\operatorname{Ran}(q_t)_1}=\widehat{n}_t^{-1/2}\widehat{n}_t^{1/2}$), so as to obtain \begin{align} \label{eq:delta_a_partial1} \delta_N^{(a)}(t)\;\leqslant\;|\delta_N^{(a)}(t)|\;\leqslant\;&\Big(1-\|(p_t)_1\Psi_{N,t}\|^2\Big)\,\bigg|\Big\langle \begin{pmatrix}u_t\\v_t\end{pmatrix} ,\,\dot S(t)\begin{pmatrix}u_t\\v_t\end{pmatrix}\Big\rangle_{L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\bigg|\\ &+\|\dot S\|_{L^\infty_t L^\infty_x}\|(q_t)_1\Psi_{N,t}\|^2 \label{eq:delta_a_partial2}\\ &+2 \big| \langle\Psi_{N,t}, \hat n^{-1/2}_t\widehat n^{1/2}_t\,(q_t)_1\dot S(x_1,t)(p_t)_1\Psi_{N,t}\rangle_{\mathcal{H}_N}\big|.\label{eq:delta_a_partial3} \end{align} The term \eqref{eq:delta_a_partial1} is controlled by $\|\dot S\|_{L^\infty_tL^\infty_x}\,\|(q_t)_1\Psi_{N,t}\|^2$ (indeed $1-\|(p_t)_1\Psi_{N,t}\|^2=\|(q_t)_1\Psi_{N,t}\|^2$). In the term \eqref{eq:delta_a_partial3} we shift $\widehat{n}_t^{1/2}$ to $\widehat{n}_{1,t}^{1/2}$ by means of \eqref{eq:commutation}. This and a Schwarz inequality yield \begin{equation} \label{eq:delta_a_partial4} \begin{split} &|\delta_N^{(a)}(t)|\;\leqslant\;2\,\|\dot S\|_{L^\infty_tL^\infty_x}\,\bigg(\|(q_t)_1\Psi_{N,t}\|^2+\Big|\langle\Psi_{N,t}, \widehat n^{-1/2}_t\,(q_t)_1\dot S(x_1,t)\widehat n^{1/2}_{1,t}(p_t)_1\Psi_{N,t}\rangle_{\mathcal{H}_N}\Big|\bigg)\\ &\leqslant\;\widetilde C\;\bigg(\|(q_t)_1\Psi_{N,t}\|^2+\|\widehat n^{-1/2}_t(q_t)_1\Psi_{N,t}\|\sqrt{\langle\Psi_{N,t},\widehat n_{1,t}^{1/2}(p_t)_1\dot S(x_1,t)^2(p_t)_1\widehat n_{1,t}^{1/2}\Psi_{N,t}\rangle_{\mathcal{H}_N}}\bigg), \end{split} \end{equation} for some constant $\widetilde C>0$. Moreover, owing to \eqref{eq:starid}, \[ \|p_1\dot S(x_1,t)^2p_1\|_{\mathrm{op}}\;=\;\|p_1\|_{\mathrm{op}}\;\Big|\Big\langle \begin{pmatrix}u\\v\end{pmatrix} ,\,\dot S(x,t)^2\begin{pmatrix}u\\v\end{pmatrix}\Big\rangle_{L^2(\mathbb{R}^3)\otimes\mathbb{C}^2}\Big|\;\leqslant\;\|\dot S\|_{L^\infty_t L^\infty_x}^2\,, \] and hence \[ \sqrt{\langle\Psi_{N,t},\widehat n_{1,t}^{1/2}(p_t)_1\dot S(x_1,t)^2(p_t)_1\widehat n_{1,t}^{1/2}\Psi_{N,t}\rangle_{\mathcal{H}_N}}\;\leqslant\;\|\dot S\|_{L^\infty_t L^\infty_x}\|\widehat n_{1,t}^{1/2}\Psi_{N,t}\|\,; \] also, \[ \|(q_t)_1\Psi_{N,t}\|^2\;=\;\|\widehat n^{1/2}_t\Psi_{N,t}\|^2 \] due to \eqref{eq:relation_q_n}, and \[ \|\widehat n^{-1/2}_t(q_t)_1\Psi_{N,t}\|\;\leqslant\;\|\widehat n^{1/2}_t\Psi_{N,t}\| \] due to Lemma \ref{lemma:nq}. 
These facts, together with the operator bound $\widehat{n}_1\;\leqslant\;\widehat{n}+N^{-1/2}\mathbbm{1}$, give \[ \begin{split} |\delta_N^{(a)}(t)|\;&\leqslant\;\widehat{C}\big(\|\widehat n^{1/2}_t\Psi_{N,t}\|^2+\|\widehat n^{1/2}_t\Psi_{N,t}\|\|\widehat n^{1/2}_{1,t}\Psi_{N,t}\|\big)\\ &\leqslant\;\widehat{C}\bigg(\|\widehat n^{1/2}_t\Psi_{N,t}\|^2+\|\widehat n^{1/2}_t\Psi_{N,t}\|\sqrt{\|\widehat{n}^{1/2}_t\Psi_{N,t}\|^2+\frac{1}{\sqrt{N}}}\bigg)\\ &\leqslant\;\widehat{C}\Big(\|\widehat n^{1/2}_t\Psi_{N,t}\|^2+\|\widehat n^{1/2}_t\Psi_{N,t}\|^2+\frac{1}{\,N^{1/4}}\|\widehat{n}^{1/2}_t\Psi_{N,t}\|\Big)\;\leqslant\;C\Big(\|\widehat n^{1/2}_t\Psi_{N,t}\|^2+\frac{1}{\sqrt{N}}\Big) \end{split} \] for some constants $\widehat C,C>0$. Last, applying \eqref{eq:m-n-n2}, we conclude \[ |\delta_N^{(a)}(t)|\;\leqslant\;C\Big(\alpha_N^<(t)+\frac{1}{\sqrt{N}}\Big). \] \bigskip \noindent\textbf{Term $\delta_N^{(b)}$.} This term is crucial, for it is the only one containing, through $Z_{N,t}^{(\beta)}$, the actual difference between $W_\beta$ and the effective non-linear potential -- and \emph{this} difference is controllable, unlike the analogous difference with $V_N$ in place of $W_\beta$. Concerning the $Z_{N,t}$-term in $\delta_N^{(b)}$, which alone would not be controllable either, the nearby $g_\beta$ allows for an efficient estimate too. We start with splitting \begin{align} \delta_N^{(b)}(t)\;=&\;-N(N-1)\mathfrak{Im}\langle\Psi_{N,t},g_\beta(x_1-x_2)R_{(12),t}\;Z_{N,t}(x_1,x_2)\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \label{eq:gamma_b2}\\ &\;-N(N-1)\mathfrak{Im}\langle\Psi_{N,t},\,Z_{N,t}^{(\beta)}(x_1,x_2)R_{(12),t}\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \label{eq:gamma_b1}\;. \end{align} In order to bound \eqref{eq:gamma_b2} we observe that from \eqref{eq:defnr} each summand of $R_{(12),t}$ contains at least one $p_t$, either in the variable $x_1$ or $x_2$. Since \eqref{eq:gamma_b2} is symmetric under exchange of $x_1\leftrightarrow x_2$, it follows that $(p_t)_1(q_t)_2$ and $(q_t)_1(p_t)_2$ give the same contribution. Then \[ \begin{split} |\eqref{eq:gamma_b2}|\;\leqslant\;2\,N^2\|g_\beta(x_1-x_2)(p_t)_1\|_{\mathrm{op}}\big(\|\widehat{m}^a_t\|_{\mathrm{op}}+\|\widehat{m}^b_t\|_{\mathrm{op}}\big)\|(p_t)_1 Z_{N,t}(x_1,x_2)\Psi_{N,t}\|\,, \end{split} \] having used the $p$'s coming from $R_{(12)}$ to multiply both $g_\beta$ and $Z_{N,t}(x_1,x_2)$. By means of the bounds \eqref{eq:ma}, \eqref{eq:mb}, \eqref{eq:dressed2}, \eqref{eq:dressedpotential}, and \eqref{eq:g2}, and the fact that the most singular contribution to $Z_{N,t}$ is given by $V_N$, we obtain \begin{equation} \label{eq:gamma_b2_final} |\eqref{eq:gamma_b2}|\;\leqslant\; \widehat{C}(t)\, N^{1+\xi}\|g_\beta\|_2\|(p_t)_1V_N(x_1,x_2)\Psi_{N,t}\|\;\leqslant\; \widetilde{C}(t)\,N^{-1-\frac{\beta}{2}+\xi}, \end{equation} for suitable $\widehat{C}(t),\widetilde{C}(t)>0$ that depend on $\|u_t\|_{\infty}$ and $\|v_t\|_{\infty}$ but not on $N$. $Z_{N,t}$ contains also terms depending on $|u_t|^2$ and $|v_t|^2$ that are bounded the same way. 
The summand \eqref{eq:gamma_b1}, in turn, is split as \begin{align} &|\eqref{eq:gamma_b1}|\;\leqslant\;N^2\,\big|\langle\Psi_{N,t},\big(W_\beta(x_1-x_2)f_\beta(x_1-x_2)\nonumber\\ &\qquad\qquad-{\textstyle \frac{8\pi a}{N-1}}\,(|u_t(x_1)|^2+|v_t(x_1)|^2+|u_t(x_2)|^2+|v_t(x_2)|^2)\big)R_{(12),t}\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\big| \label{eq:gamma_b3}\\ &+ N \,\big|\langle\Psi_{N,t},8\pi a (|u_t(x_1)|^2+|v_t(x_1)|^2+|u_t(x_2)|^2+|v_t(x_2)|^2)g_\beta(x_1-x_2)R_{(12),t}\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\big|, \label{eq:gamma_b4} \end{align} having used $f_\beta=1-g_\beta$ and the definition \eqref{eq:defz_beta} of $Z_{N,t}^{(\beta)}$. We now recognise that the summand \eqref{eq:gamma_b3} can be estimated by means of a general result from \cite{Pickl-RMP-2015} which for convenience we state in Lemma \ref{lemma:appendix_2}. Indeed, the potential $W_\beta f_\beta$ does satisfy the conditions \eqref{eq:Wtilde_1}, \eqref{eq:Wtilde_2}, \eqref{eq:Wtilde_3} of Lemma \ref{lemma:appendix_2}: condition \eqref{eq:Wtilde_1} is obvious from \eqref{eq:def_W_beta} and \eqref{eq:R_beta}; condition \eqref{eq:Wtilde_2} follows from \eqref{eq:def_W_beta} and the uniform boundedness of $f_\beta$; condition \eqref{eq:Wtilde_3} is explicitly checked in \cite[Lemma 5.1]{Pickl-RMP-2015}. Condition \eqref{eq:Phi_N} is satisfied too, where the vector $\Phi_N$ of Lemma \ref{lemma:appendix_2} is, in our case, precisely $\Psi_{N,t}$. Indeed, due to the positivity of $V_N$, \[ \begin{split} \mathcal{E}_N[\Psi_{N,t}]&\;=\;\|\nabla_1\Psi_{N,t}\|^2+\langle \Psi_{N,t}, S (x_1,t) \Psi_{N,t}\rangle_{\mathcal{H}_N}+\frac{1}{2}(N-1)\langle\Psi_{N,t}, V_N(x_1-x_2)\Psi_{N,t}\rangle_{\mathcal{H}_N}\\ &\;\geqslant\;\|\nabla_1\Psi_{N,t}\|^2+\langle \Psi_{N,t}, S (x_1,t) \Psi_{N,t}\rangle_{\mathcal{H}_N}\,, \end{split} \] whence \begin{equation} \label{eq:estimate_H1} \|\nabla_1\Psi_{N,t}\|^2\;\leqslant\;\big|\mathcal{E}_N[\Psi_{N,t}]\big|+\|S\|_{L^\infty_t\,L^\infty_x}\,. \end{equation} On the other hand, integrating the bound \[ \frac{\mathrm{d}}{\mathrm{d} t}\mathcal{E}_N[\Psi_{N,t}]\;\leqslant\; \|\dot S\|_{L^\infty_t\,L^\infty_x} \] (see \eqref{eq:derivative_linear} above), yields \begin{equation} \label{eq:estimate_energy} \mathcal{E}_N[\Psi_{N,t}]\;\leqslant\;G(t) \end{equation} for some positive and $N$-\emph{independent} function $G(t)$. Thus, \eqref{eq:estimate_H1}-\eqref{eq:estimate_energy} prove condition \eqref{eq:Phi_N}. Therefore, Lemma \ref{lemma:appendix_2} applies and \begin{equation}\label{eq:gamma_b3_final} |\eqref{eq:gamma_b3}|\;\leqslant\;{c}(t)\,(\alpha^<_N(t)+N^{-\eta'})\, \end{equation} for some ${c}(t)>0$. The summand \eqref{eq:gamma_b4} is estimated straightforwardly as \begin{equation} \begin{split}\label{eq:gamma_b4_final} |\eqref{eq:gamma_b4}|&\;\leqslant\;C\,N\|g_\beta(x_1-x_2)p_1\|_{\mathrm{op}}\big(\|u_t\|_\infty^2+\|v_t\|_\infty^2\big)\big(\|\widehat{m}^a\|_{\mathrm{op}}+\|\widehat{m}^b\|_{\mathrm{op}}\big)\\ &\;\leqslant\; \widetilde{c}(t) N^{-1-\beta/2+\xi}, \end{split} \end{equation} thanks to \eqref{eq:ma}, \eqref{eq:mb}, \eqref{eq:dressed2}, and \eqref{eq:g2}, where $C$ is a positive constant and $\widetilde{ c}(t)>0$ depends on $\|u_t\|_{\infty}$, $\|v_t\|_{\infty}$\,. 
Choosing $\xi$ small enough, \eqref{eq:gamma_b3_final} and \eqref{eq:gamma_b4_final} yield \[ |\eqref{eq:gamma_b1}|\;\leqslant\;\text {max}\{c(t),\widetilde{c}(t)\}\,(\alpha^<_N(t)+N^{-\eta''}) \] for some $\eta''>0$, which, combined with \eqref{eq:gamma_b2_final}, again with $\xi$ small enough, finally gives \[ |\delta_N^{(b)}(t)|\;\leqslant\; C(t)\,(\alpha^<_N(t)+N^{-\eta}) \] for some $\eta>0$ and $C(t):=\text {max}\{\widetilde{C}(t),c(t), \widetilde{c}(t)\}$. \bigskip \noindent\textbf{Term $\delta_N^{(c)}$.} The term \[ \delta_N^{(c)}(t)\;=\;-4N(N-1)\mathfrak{Im}\langle\Psi_{N,t},\nabla_1g_\beta(x_1-x_2)\nabla_1R_{(12),t}\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \] has the very same structure as the term $\gamma_c$ discussed in \cite[page 38]{Pickl-RMP-2015}: therefore, re-doing the same computations therein we obtain \[ |\delta_N^{(c)}(t)|\;\leqslant\; C(t)\,(\alpha^<_N(t)+N^{-\eta}) \] for some $\eta>0$ and some $C(t)>0$ depending on $\|u_t\|_{H^2}$ and $\|v_t\|_{H^2}$ but not on $N$. \bigskip \noindent\textbf{Term $\delta_N^{(d)}$.} Let us split \begin{align} &\delta_N^{(d)}(t)\;=\;N(N-1)(N-2)\mathfrak{Im}\langle\Psi_{N,t},g_\beta(x_1-x_2)\Big[V_N(x_1-x_3),R_{(12),t}\Big]\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \label{eq:gamma_d_1}\\ &+N(N-1)(N-2)\mathfrak{Im}\langle\Psi_{N,t},g_\beta(x_1-x_2)\Big[8\pi a(|u_t(x_3)|^2+|v_t(x_3)|^2),R_{(12),t}\Big]\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\,. \label{eq:gamma_d_2} \end{align} Since the summand \eqref{eq:gamma_d_1} has the very same structure as the quantity $\gamma_d$ defined in \cite[Definition 6.3]{Pickl-RMP-2015}, one can merely repeat the analysis of \cite[Appendix A.2]{Pickl-RMP-2015} in order to bound it and to obtain \begin{equation} \label{eq:gamma_d_final1} |\eqref{eq:gamma_d_1}|\;\leqslant\; \widetilde{C}(t)\,(\alpha^<_N+N^{-\eta'}) \end{equation} for some $\eta'>0$ and $\widetilde{C}(t)>0$ depending on $\|u_t\|_{\infty}$ and $\|v_t\|_{\infty}$ but not on $N$. In turn, we bound \eqref{eq:gamma_d_2} by means of \eqref{eq:commut_r2} in the form $[K_{34},R_{(12)}]$, and collecting all terms this produces \[ \begin{split} |&\eqref{eq:gamma_d_2}| \;\leqslant \;8\pi\,a\,N^3\big|\langle\Psi_{N,t},g_\beta(x_1-x_2)\big[|u_t(x_3)|^2+|v_t(x_3)|^2\,,\, (p_t)_1(p_t)_2(p_t)_3(p_t)_4\,\widehat{m}^c_t\big]\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\big|\\ +&8\pi\,a\,N^3\big|\langle\Psi_{N,t},g_\beta(x_1-x_2)\big[|u_t(x_3)|^2+|v_t(x_3)|^2\,,\,(p_t)_1(p_t)_2((p_t)_3(q_t)_4+(q_t)_3(p_t)_4)\,\widehat{m}^d_t\big]\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\big|\\ +&8\pi\,a\,N^3\big|\langle\Psi_{N,t},g_\beta(x_1-x_2)\big[|u_t(x_3)|^2+|v_t(x_3)|^2\,,\,((p_t)_1(q_t)_2+(q_t)_1(p_t)_2)(p_t)_3(p_t)_4\,\widehat{m}^d_t\big]\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\big|\\ +&8\pi\,a\,N^3\big|\langle\Psi_{N,t},g_\beta(x_1-x_2)\times\\ &\qquad\times\big[|u_t(x_3)|^2+|v_t(x_3)|^2\,,\,((p_t)_1(q_t)_2+(q_t)_1(p_t)_2)((p_t)_3(q_t)_4+(q_t)_3(p_t)_4)\,\widehat{m}^e_t\big]\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\big|\,. \end{split} \] In each summand above there is at least one among $p_1$ and $p_2$ which we commute through $|u_t(x_3)|^2+|v_t(x_3)|^2$ until it hits $g_\beta$, and using $\|g_{12}p_1\|_{\mathrm{op}}=\|g_{12}p_2\|_{\mathrm{op}}$ we get \[ |\eqref{eq:gamma_d_2}| \;\leqslant \;16\pi\,a\,N^3\|g_\beta(x_1-x_2)(p_t)_1\|_{\mathrm{op}}\,\, \big\||u_t|^2+|v_t|^2\big\|_\infty \big(\|\widehat{m}_t^c\|_{\mathrm{op}}+4\,\|\widehat{m}_t^d\|_{\mathrm{op}}+4\,\|\widehat{m}_t^e\|_{\mathrm{op}}\big)\,.
\] Further, using the bounds \eqref{eq:mcde}, \eqref{eq:dressed2}, and \eqref{eq:g2}, we obtain \begin{equation}\label{eq:gamma_d_final2} |\eqref{eq:gamma_d_2}|\;\leqslant\; \widehat{C}(t)\,N^{-\beta/2+3\xi} \end{equation} for some $\widehat{C}(t)>0$ depending on $\|u_t\|_\infty$ and $\|v_t\|_\infty$ but not on $N$. Finally, for $\xi$ small enough, \eqref{eq:gamma_d_final1} and \eqref{eq:gamma_d_final2} give \[ |\delta_N^{(d)}(t)|\;\leqslant \;C(t)\,(\alpha^<_N+N^{-\eta}) \] for some $\eta>0$ and for $C(t):=\text {max}\{\widetilde{C}(t),\widehat C(t)\}$. \bigskip \noindent\textbf{Term $\delta_N^{(e)}$.} Also for the term \[ \delta_N^{(e)}(t)\;=\;\dfrac{1}{2}N(N-1)(N-2)(N-3)\mathfrak{Im}\langle\Psi_{N,t},g_\beta(x_1-x_2)\Big[V_N(x_3-x_4),R_{(12),t}\Big]\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \] we use \eqref{eq:commut_r2} in the form $[K_{34},R_{(12)}]$, and we get \[ \begin{split} |\delta_N^{(e)}(t)| &\;\leqslant \;\,N^4\,\big|\langle\Psi_{N,t},g_\beta(x_1-x_2)\big[V_N(x_3-x_4)\,,\, (p_t)_1(p_t)_2(p_t)_3(p_t)_4\,\widehat{m}^c_t\big]\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\big|\\ +&\,N^4\,\big|\langle\Psi_{N,t},g_\beta(x_1-x_2)\big[V_N(x_3-x_4)\,,\,(p_t)_1(p_t)_2((p_t)_3(q_t)_4+(q_t)_3(p_t)_4)\,\widehat{m}^d_t\big]\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\big|\\ +&\,N^4\,\big|\langle\Psi_{N,t},g_\beta(x_1-x_2)\big[V_N(x_3-x_4)\,,\,((p_t)_1(q_t)_2+(q_t)_1(p_t)_2)(p_t)_3(p_t)_4\,\widehat{m}^d_t\big]\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\big|\\ +&\,N^4\,\big|\langle\Psi_{N,t},g_\beta(x_1-x_2)\times\\ &\qquad\times\big[V_N(x_3-x_4)\,,\,((p_t)_1(q_t)_2+(q_t)_1(p_t)_2)((p_t)_3(q_t)_4+(q_t)_3(p_t)_4)\,\widehat{m}^e_t\big]\Psi_{N,t}\rangle_{\!\mathcal{H}_N}\big|\,. \end{split} \] As done for $\delta_N^{(d)}$, we commute one $p_1$ or $p_2$ through, until when it hits $g_\beta$. Moreover, we write $V_N=\mathbbm{1}_{\operatorname{supp}V_N}V_N$, and either $V_N(x_3-x_4)$ can be commuted through so as to multiply $\Psi_{N,t}$ in the left entry of the scalar product, or it already multiplies $\Psi_{N,t}$ in the right entry. The $\mathbbm{1}_{\operatorname{supp}V_N}$'s will then be used to provide extra $N$-decay. We thus find \[ \begin{split} |\delta_N^{(e)}(t)|\;\leqslant\;& C\,N^4\, \|g_\beta(x_1-x_2)(p_t)_1\|_{\mathrm{op}}\big(\|\mathbbm{1}_{\mathrm{supp}V_N}(x_3-x_4)(p_t)_3\|_{\mathrm{op}}+\|(p_t)_3\mathbbm{1}_{\mathrm{supp}V_N}(x_3-x_4)\|_{\mathrm{op}}\big)\\ &\quad\times\big(\|\widehat{m}^c_t\|_{\mathrm{op}}+\|\widehat{m}^d_t\|_{\mathrm{op}} + \|\widehat{m}^e_t\|_{\mathrm{op}} \big) \|V_N(x_3-x_4)\Psi_{N,t}\|\,, \end{split} \] for some constant $C>0$. We complete the estimate using the bounds \eqref{eq:mcde}, \eqref{eq:dressed2}, \eqref{eq:dressed3}, \eqref{eq:potential}, \eqref{eq:dressedpotential}, and \eqref{eq:g2}, together with $\|\mathbbm{1}_{\mathrm{supp}V_N}\|_2\;= \;C'\,N^{-3/2}$, and we get \[ |\delta_N^{(e)}(t)|\;\leqslant\; C(t) N^{-\frac{\beta}{2}+3\xi} \] for $C(t)>0$ that depends on $\|u_t\|_{\infty}$ and $\|v_t\|_{\infty}$ but not on $N$. Taking $\xi$ small enough we obtain the desired estimate. \bigskip \noindent\textbf{Term $\delta_N^{(f)}$.} In \[ \delta_N^{(f)}(t)\;=\;N(N-2)\mathfrak{Im}\langle\Psi_{N,t},g_\beta(x_1-x_2)\big[|u_t(x_1)|^2+|u_t(x_2)|^2+|v_t(x_1)|^2+|v_t(x_2)|^2,R_{(12),t}\big]\Psi_{N,t}\rangle_{\!\mathcal{H}_N} \] the function $g_\beta$ can be always commuted so as to become adjacent to one of the $p$'s contained in $R_{(12),t}$. 
Thus, in the usual way, we get \[ \begin{split} |\delta_N^{(f)}(t)|&\;\leqslant\; \widetilde{C}(t)\, N^2\|g_\beta(x_1-x_2)(p_t)_1\|_{\mathrm{op}}\big(\|\widehat{m}^a_t\|_{\mathrm{op}}+\|\widehat{m}^b_t\|_{\mathrm{op}}\big)\\ &\;\leqslant\; \widehat{C}(t)\, N^{2}\|g_\beta\|_2\big(\|\widehat{m}^a_t\|_{\mathrm{op}}+\|\widehat{m}^b_t\|_{\mathrm{op}}\big)\\ &\;\leqslant\; C(t) N^{-\beta/2+\xi} \end{split} \] for suitable functions $\widetilde{C}(t),\widehat{C}(t),C(t)>0$ depending on $\|u_t\|_{\infty}$ and $\|v_t\|_{\infty}$, having used \eqref{eq:ma}, \eqref{eq:mb}, \eqref{eq:dressed2}, and \eqref{eq:g2}. Taking $\xi$ small enough we obtain the desired estimate. \end{proof} \section*{Acknowledgements} For this work both authors had the pleasure to benefit from instructive discussions with N.~Benedikter, S.~Cenatiempo, N.~Leopold, P.~T.~Nam, P.~Pickl, L.~Pitaevskii, M.~Porta, G.~Roati, C.~Saffirio, and S.~Stringari, which took place on the occasion of their recent visits to SISSA, as well as from many exchanges with G.~Dell'Antonio and A.~Trombettoni at SISSA. We also warmly acknowledge the kind hospitality of N.~Benedikter at the University of Copenhagen, of S.~Cenatiempo at the GSSI L'Aquila, and of M.~Porta and B.~Schlein at the University of Zurich. This work is partially supported by the 2014-2017 MIUR-FIR grant ``\emph{Cond-Math: Condensed Matter and Mathematical Physics}'' code RBFR13WAET and by the mobility financial support of INdAM-GNFM.
\section{Introduction} Let $\{X_N\}$ be a finite-state stationary Markov process over the alphabet $\Sigma = \{1,\ldots,s\}$. Let $\{Y_N\}$ be its noisy observation (on the same alphabet). Let $M = M_{s \times s} = \{m_{ij}\}$ be the Markov transition matrix and $R = R_{s \times s}$ be the emission matrix, i.e. $P(X_{N+1} = j | X_N = i) = m_{ij}$ and $P(Y_N = j | X_N = i) = r_{ij}$. We assume that the Markov matrix $M$ is strictly positive ($m_{ij} > 0$), and denote its stationary distribution by the (column) vector $\pi$, satisfying $\pi^t M = \pi^t$. \noindent The process $Y$ can thus be viewed as an observation of $X$ through a noisy channel. It is known as a {\it Hidden Markov Process (HMP)}, and is determined by the parameters $M$ and $R$. More generally, {\it HMPs} have a rich and developed theory, and numerous applications in various fields (see \cite{Merhav,Rabiner}). \noindent An important property of the process $Y$ is its entropy rate. The Shannon entropy rate of a stochastic process (\cite{Shannon}) measures the amount of 'uncertainty per symbol'. More formally, for $i \leq j$, let $[X]_i^j$ denote the vector $(X_i,\ldots,X_j)$. Then the entropy rate $\bar{H}(Y)$ is defined as : \begin{equation} \bar{H}(Y) = \lim_{N \to \infty} \frac{H([Y]_1^N)}{N} \label{entropy_rate_def}\end{equation} \noindent where $H(X) = -\sum_X P(X) \log P(X)$. Here and throughout the paper we use natural logarithms, so the entropy is measured in {\it NATS}; we also adopt the convention $0 \log 0 \equiv 0$. We sometimes omit the realization $x$ of the variable $X$, so $P(X)$ should be understood as $P(X = x)$. The entropy rate can also be computed via the conditional entropy as: $\bar{H}(Y) = \lim_{N \to \infty} H(Y_N | [Y]_1^{N-1})$, since for a stationary process the two limits exist and coincide (\cite{Cover}). The conditional entropy $H(Y|X)$ (where $X,Y$ are sets of r.v.s.) represents the average uncertainty of $Y$, assuming that we know $X$, that is $H(Y|X) = \sum_{x} P(X = x) H(Y | X=x) $. By the chain rule for entropy, it can also be viewed as a difference of entropies, $H(Y|X) = H(X,Y) - H(X)$, which will be used later. \noindent There is at present no explicit expression for the entropy rate of an {\it HMP} (\cite{Merhav,Jacquet}). A few recent works (\cite{Jacquet, Weissman:02,ZKD1}) have dealt with finding the asymptotic behavior of $\bar{H}$ in several parameter regimes. However, they concentrated only on the binary alphabet, and rigorously proved only bounds or, at most, second-order behavior (\cite{ZKD1}). \noindent Here we generalize and prove a conjecture posed in \cite{ZKD1}, which justifies (under some mild assumptions) the computation of $\bar{H}$ as a series expansion in the High Signal-to-Noise-Ratio ('High-SNR') regime. The expansion coefficients were given in \cite{ZKD1} for the symmetric binary case. In this case, the matrices $M$ and $R$ are given by : $$ M = \begin{pmatrix} 1-p & p \\ p & 1-p \end{pmatrix} \: \:, \: \: R = \begin{pmatrix} 1-\epsilon & \epsilon \\ \epsilon & 1-\epsilon \end{pmatrix} $$ \noindent and the process is characterized by the two parameters $p,\epsilon$. The High-SNR expansion in this case is an expansion in $\epsilon$ around zero. \noindent In section \ref{two_thms_sec}, we present and prove our two main theorems; Thm. \ref{high_SNR_thm} is a generalization of a conjecture raised in \cite{ZKD1}, which connects the coefficients of entropies using finite histories to the entropy rate. Proving it justifies the High-SNR expansion of \cite{ZKD1}.
We also give Thm. \ref{almost_memoryless_thm}, which is the analogue of Thm. \ref{high_SNR_thm} in a different regime, termed 'Almost-Memoryless' ('A-M'). \noindent In section \ref{series_coeff_sec} we use our two new theorems to compute the first coefficients in the series expansions for the two regimes. We give the first-order asymptotics for a general alphabet, as well as higher-order coefficients for the symmetric binary case. \noindent In section \ref{analytic_sec} we estimate the radius of convergence of our expansions using a finite number of terms, and compare our results for the two regimes. We end with conclusions and future directions. \section{From Finite system entropy to entropy rate} \label{two_thms_sec} In this section we prove our main results, namely Thms. \ref{high_SNR_thm} and \ref{almost_memoryless_thm}, which relate the coefficients of the finite bounds $C_N$ to those of the entropy rate $\bar{H}$ in two different regimes. \subsection{The High SNR Regime} \label{high_SNR_sec} \noindent This regime was dealt with in further detail in \cite{ZKD1,ZKD2}, albeit with no rigorous justification for the obtained series expansion. In the High-SNR regime the observations are likely to be equal to the states, or in other words, the emission matrix $R$ is close to the identity matrix $I$. We therefore write $R = I + \epsilon T$, where $\epsilon > 0$ is a small constant and $T = \{t_{ij}\}$ is a matrix satisfying $t_{ii} < 0, \: t_{ij} \geq 0, \: \forall i \neq j$ and $\sum_{j=1}^{s}{t_{ij}} = 0$. The entropy rate in this regime can be given as an expansion in $\epsilon$ around zero. We state here our new theorem, connecting the entropy of finite systems to the entropy rate in this regime. \begin{theorem} Let $H_N \equiv H_N(M,T,\epsilon) = H([Y]_1^N)$ be the entropy of a finite system of length $N$, and let $C_N = H_N - H_{N-1}$. Assume\footnote{It is easy to show that the functions $C_N$ are differentiable to all orders in $\epsilon$, at $\epsilon =0$. The assumption which is not proven here is that they are in fact analytic with a radius of analyticity which is uniform in $N$, and are uniformly bounded within some common neighborhood of $\epsilon =0$.} that there is some (complex) neighborhood $B_{\rho}(0) = \{\epsilon : |\epsilon| < \rho \} \subset \mathbb{C}$ of $\epsilon=0$, in which the (one-variable) functions $\{ C_N \}, \bar{H}$ are analytic in $\epsilon$, with a Taylor expansion given by : \begin{equation} C_N(M,T,\epsilon) = \sum_{k=0}^{\infty} C_N^{(k)} \epsilon^k, \quad \bar{H}(M,T,\epsilon) = \sum_{k=0}^{\infty} C^{(k)} \epsilon^k \end{equation} \noindent (The coefficients $C_N^{(k)}$ are functions of the parameters $M$ and $T$; from now on we omit this dependence.) Then: \begin{equation} N \geq \lceil \frac{k+3}{2} \rceil \Rightarrow C_N^{(k)} = C^{(k)} \end{equation} \label{high_SNR_thm} \end{theorem} \noindent The recent result (\cite{Marcus}) on analyticity of $\bar{H}$ is not applicable near $\epsilon=0$; therefore the analytic domain of $C_N$ and, more importantly, of $\bar{H}$ will be discussed elsewhere. \\ \noindent $C_N$ is actually an upper bound (\cite{Cover}) for $\bar{H}$. The behavior stated in Thm. \ref{high_SNR_thm} was discovered previously using symbolic computations, but was proven only for $k \leq 2$, and only for the symmetric binary case (see \cite{ZKD1}). \noindent Although technically involved, the proof of Thm. \ref{high_SNR_thm} is based on the following two simple ideas. First, we distinguish between the noise parameters at different sites.
This is done by considering a more general process $\{Z_N\}$, where $Z_i$'s emission matrix is $R_i = I + \epsilon_iT$. The joint distribution of $[Z]_1^N$ is thus determined by $M$,$T$ and $[\epsilon]_{1}^N$. We define the following functions : \begin{equation} F_N(M,T,[\epsilon]_1^{N}) = H([Z]_1^N) - H([Z]_1^{N-1}) \end{equation} \noindent Setting all the $\epsilon_i$'s equal, reduces us back to the $Y$ process, so in particular $F_N(M,T, (\epsilon,\ldots,\epsilon)) = C_N(\epsilon)$. \noindent Second, we observe that if a particular $\epsilon_i$ is set to zero, the corresponding observation $Z_i$ must equal the state $X_i$. Thus, conditioning back to the past is 'blocked'. This can be used to prove the following : \begin{lemma} Assume $\epsilon_j=0$ for some $1 < j < N$. Then : $$ F_N([\epsilon]_1^N) = F_{N-j+1}([\epsilon]_{j+1}^N) $$ \begin{proof} \noindent $F$ can be written as a sum of conditional entropies : \begin{equation} F_N = -\sum_{[Z]_1^N} P([Z]_1^{N-1}) P(Z_N | [Z]_1^{N-1}) \log P(Z_N | [Z]_1^{N-1}) \label{F_cond_eq}\end{equation} \noindent Where the dependence on $[\epsilon]_1^N$ and $M,T$ comes through the probabilities $P(..)$. Since $\epsilon_j=0$, we must have $X_j = Z_j$, and therefore (since the $X_i$'s form a Markov chain), conditioning further to the past is 'blocked', that is : \begin{equation} \epsilon_j = 0 \Rightarrow P(Z_N | [Z]_1^{N-1}) = P(Z_N | [Z]_j^{N-1}) \label{blocking_cond} \end{equation} \noindent (Note that eq. (\ref{blocking_cond}) is true for $j < N$, but not for $j=N$). Substituting in eq. (\ref{F_cond_eq}) gives : $$ F_N = -\sum_{[Z]_1^N} P([Z]_1^{N-1}) P(Z_N | [Z]_j^{N-1}) \log P(Z_N | [Z]_j^{N-1}) = $$ $$ -\sum_{Z_{j}^N} P([Z]_j^{N-1}) P(Z_N | [Z]_j^{N-1}) \log P(Z_N | [Z]_j^{N-1}) $$ \begin{equation} = F_{N-j+1} \end{equation} \end{proof} \label{F_cond_lemma} \end{lemma} \noindent Let $\vec{k} = [k]_1^N$ be a vector with $k_i \in \{\mathbb{N} \cup 0\}$. Define its 'weight' as $\omega(\vec{k}) = \sum_{i=1}^N k_i$. Define also : \begin{equation} F_N^{\vec{k}} \equiv \left. \frac{\partial^{\omega(\vec{k})} F_N}{\partial \epsilon_1^{k_1},\ldots,\partial \epsilon_N^{k_N}} \right|_{\vec{\epsilon} = 0} \end{equation} \noindent The next lemma shows that adding zeros to the left of $\vec{k}$ leaves $F_N^{\vec{k}}$ unchanged : \begin{lemma} Let $\vec{k} = [k]_1^N$ with $k_1 \leq 1$. Denote $\vec{k}^{(r)}$ the concatenation of $\vec{k}$ with $r$ zeros : $\vec{k}^{(r)} = (\underbrace{0,\ldots,0}_r,k_1,\ldots,k_N)$. Then : $$ F_N^{\vec{k}} = F_{r+N}^{\vec{k}^{(r)}} \quad, \forall r \in \mathbb{N} $$ \begin{proof} \noindent Assume first $k_1 = 0$. Using lemma \ref{F_cond_lemma}, we get : $$ F_{N+r}^{\vec{k}^{(r)}}([\epsilon]_1^{N+r}) = \left. \frac{\partial^{\omega(\vec{k}^{(r)})} F_{r+N}([\epsilon]_1^{N+r})}{\partial \epsilon_{r+2}^{k_2},\ldots,\partial \epsilon_{r+N}^{k_N}} \right|_{\vec{\epsilon} = 0} = $$ \begin{equation} \left. \frac{\partial^{\omega(\vec{k})} F_{N}([\epsilon]_{r+1}^{N+r})}{\partial \epsilon_{r+2}^{k_2},\ldots,\partial \epsilon_{r+N}^{k_N}} \right|_{\vec{\epsilon} = 0} = F_N^{\vec{k}}([\epsilon]_{r+1}^{r+N}) \label{k_is_zero_eq} \end{equation} \noindent The case $k_1 = 1$ is reduced back to the case $k_1=0$ by taking the derivative. We denote by ${[Z]_1^N}^{(j \to r)}$ the vector which is equal to $[Z]_1^N$ in all coordinates except on coordinate $j$, where $Z_j = r$. Using eq. (\ref{k_is_zero_eq}), we get : $$ F_{N+1}^{\vec{k}^{(1)}}([\epsilon]_1^{N+1}) = \left. 
\frac{\partial^{\omega(\vec{k})-1}}{\partial \epsilon_3^{k_2} \dots \partial \epsilon_{N+1}^{k_N}} \left[ \left. \frac{\partial F_{N+1}}{\partial \epsilon_2} \right|_{\epsilon_2 = 0} \right] \right|_{\vec{\epsilon} = 0} = $$ $$ \frac{\partial^{\omega(\vec{k})-1}}{\partial \epsilon_3^{k_2} \dots \partial \epsilon_{N+1}^{k_N}} \Biggl\{ - \sum_{r=1}^{s} t_{X_i r} \sum_{[Z]_1^{N+1}} $$ $$ \left[ P({[Z]_1^{N+1}}^{(2 \to r)}) \log P(Z_{N+1} | [Z]_1^{N}) - \right. $$ $$ \left. \left. \left. P(Z_{N+1} | [Z]_1^{N}) P({[Z]_1^{N}}^{(2 \to r)}) \right] \right|_{\epsilon_2 = 0} \Biggr\} \right|_{[\epsilon]_1^{N+1} = 0} = $$ $$ \frac{\partial^{\omega(\vec{k})-1}}{\partial \epsilon_2^{k_2} \dots \partial \epsilon_{N}^{k_N}} \Biggl\{ - \sum_{r=1}^{s} t_{X_i r} \sum_{[Z]_1^{N}} $$ $$ \left[ P({[Z]_1^{N}}^{(1 \to r)}) \log P(Z_{N} | [Z]_1^{N-1}) - \right. $$ \begin{equation} \left. \left. \left. P(Z_{N} | [Z]_1^{N-1}) P({[Z]_1^{N}}^{(1 \to r)}) \right] \right|_{\epsilon_1 = 0} \Biggr\} \right|_{[\epsilon]_1^{N} = 0} = F_{N}^{\vec{k}}([\epsilon]_1^{N})\end{equation} \end{proof} \label{zero_tail_lemma} \end{lemma} \noindent $C_N^{(k)}$ is obtained by summing $F_N^{\vec{k}}$ on all $\vec{k}$'s with weight $k$ : \begin{equation} C_N^{(k)} = \sum_{\vec{k},\omega(\vec{k})=k} F_N^{\vec{k}} \end{equation} \noindent We now show that one does not need to sum on all such $\vec{k}$'s, as many of them give zero contribution : \begin{lemma} Let $\vec{k} = (k_1,\ldots,k_N)$. If $\exists i < j < N$, with $k_i \geq 1, k_j \leq 1$, then $F_N^{\vec{k}} = 0$. \begin{proof} \noindent Assume first $k_j = 0$. Using lemma \ref{F_cond_lemma} we get $$ F_N^{\vec{k}} \equiv \left. \frac{\partial^{\omega(\vec{k})} F_N(\vec{\epsilon})}{\partial \epsilon_1^{k_1},\ldots,\partial \epsilon_N^{k_N}} \right|_{\vec{\epsilon} = 0} = \left. \frac{\partial^{\omega(\vec{k})} F_{N-j+1}([\epsilon]_j^N)}{\partial \epsilon_1^{k_1},\ldots,\partial \epsilon_N^{k_N}} \right|_{\vec{\epsilon} = 0} = $$ \begin{equation} \frac{\partial^{\omega(\vec{k})-1}}{\partial \epsilon_1^{k_1},\ldots, \partial \epsilon_i^{k_i-1},\ldots, \partial \epsilon_N^{k_N}} \left[\left. \frac{\partial F_{N-j+1}([\epsilon]_j^N)}{\partial \epsilon_i} \right] \right|_{\vec{\epsilon} = 0} = 0 \end{equation} \noindent The case $k_j = 1$ is more difficult, but follows the same principles. Write the probability of $Z$ : $$ P([Z]_1^N) = \sum_{[X]_1^N} P([X]_1^N) P([Z]_1^N | [X]_1^N) = $$ \begin{equation} \sum_{[X]_1^N} P([X]_1^N) \prod_{i=1}^N (\delta_{X_i Z_i} + \epsilon_i t_{X_i Z_i})\end{equation} \noindent where $\delta_{ij}$ is Kronecker delta. Write now the derivative with respect to $\epsilon_j$: $$ \left. \frac{\partial P([Z]_1^N)}{\partial \epsilon_j} \right|_{\epsilon_j = 0} = $$ $$ \left. \sum_{[X]_1^N} \left[ P([X]_1^N) t_{X_j Z_j} \prod_{i \neq j} (\delta_{X_i Z_i} + \epsilon_i t_{X_i Z_i}) \right] \right|_{\epsilon_j = 0} = $$ \begin{equation} \left. \left\{ \sum_{r=1}^{s} t_{X_i r} P({[Z]_1^N}^{(j \to r)}) \right\} \right|_{\epsilon_j = 0} \end{equation} \noindent Using Bayes' rule $P(Z_N | [Z]_1^{N-1}) = \frac{P([Z]_1^N)}{P([Z]_1^{N-1})}$, we get : $$ \left. \frac{\partial P(Z_N | [Z]_1^{N-1})}{\partial \epsilon_j} \right|_{\epsilon_j = 0} = $$ $$ \frac{1}{P([Z]_1^{N-1})} \sum_{r=1}^{s} t_{X_i r} \left[ P({[Z]_1^N}^{(j \to r)}) - \right. $$ \begin{equation} \left. \left. P(Z_N | [Z]_1^{N-1} ) P({[Z]_1^{N-1}}^{(j \to r)}) \right] \right|_{\epsilon_j = 0} \end{equation} \noindent This gives : $$ \left. 
\frac{\partial [P([Z]_1^N) \log P(Z_N | [Z]_1^{N-1})]}{\partial \epsilon_j} \right|_{\epsilon_j = 0} = $$ $$ \sum_{r=1}^{s} t_{X_i r} \left\{ P({[Z]_1^N}^{(j \to r)}) \log P(Z_N | [Z]_1^{N-1}) + \right. $$ \begin{equation} \left. \left. P({[Z]_1^N}^{(j \to r)}) - P(Z_N | [Z]_1^{N-1}) P({[Z]_1^{N-1}}^{(j \to r)})\right\} \right|_{\epsilon_j = 0} \end{equation} And therefore : $$ \left. \frac{\partial F_N}{\partial \epsilon_j} \right|_{\epsilon_j = 0} = $$ $$ -\sum_{r=1}^{s} t_{X_i r} \Biggl\{ \sum_{[Z]_1^N} \left[ P({[Z]_1^N}^{(j \to r)}) \log P(Z_N | [Z]_1^{N-1}) - \right. $$ $$ \left. \left. P(Z_N | [Z]_1^{N-1}) P({[Z]_1^{N-1}}^{(j \to r)}) \right] \Biggr\} \right|_{\epsilon_j = 0} = $$ $$ \Biggl\{ -\sum_{r=1}^{s} t_{X_i r} \sum_{[Z]_j^N} \left[ P({[Z]_j^N}^{(1 \to r)}) \log P(Z_N | [Z]_j^{N-1}) - \right. $$ \begin{equation} \left. \left. P(Z_N | [Z]_j^{N-1}) P({[Z]_j^{N-1}}^{(1 \to r)}) \right] \Biggr\} \right|_{\epsilon_1 = 0} \label{F_N_deriv_eq} \end{equation} \noindent Where the latter equality comes from using eq. (\ref{blocking_cond}), which 'blocks' the dependence backwards. Eq. \ref{F_N_deriv_eq} shows that $\left. \frac{\partial F_N}{\partial \epsilon_j} \right|_{\epsilon_j = 0}$ does not depend on $\epsilon_i$ for $i < j$, therefore $\frac{\partial^{k_i+1} F_N}{\partial \epsilon_i^{k_i} \partial \epsilon_j} = 0$ and $F_N^{\vec{k}} = 0$. \end{proof} \label{No_hole_strong_lemma} \end{lemma} \noindent We are now ready to prove Thm. \ref{high_SNR_thm}, which follows directly from lemmas \ref{zero_tail_lemma} and \ref{No_hole_strong_lemma} : \begin{proof} \noindent Let $\vec{k} = [k]_1^N$ with $\omega(\vec{k}) = k$. Define its 'length' (from right, considering only entries larger than one) as $l(\vec{k}) = N+1-\min_{k_i > 1} \{i \}$. It easily follows from lemma \ref{No_hole_strong_lemma} that if $F_N^{\vec{k}} \neq 0$, we must have $l(\vec{k}) \leq \lceil \frac{k+3}{2} \rceil - 1$. Therefore, according to lemma \ref{zero_tail_lemma} we have : \begin{equation} F_N^{\vec{k}} = F_{\lceil \frac{k+3}{2} \rceil}^{(k_{N-\lceil \frac{k+3}{2} \rceil+1},\ldots,{k_N})} \end{equation} \noindent for all $\vec{k}$'s in the sum. Summing on all $F_N^{\vec{k}}$ with the same 'weight', we get $C_N^{(k)} = C_{\lceil \frac{k+3}{2} \rceil}^{(k)}, \quad \forall N > \lceil \frac{k+3}{2} \rceil$. From the analyticity of $C_N$ and $\bar{H}$ around $\epsilon=0$, one can show by induction that $\lim_{N \to \infty} C_N^{(k)} = C^{(k)}$, therefore we must have $C_N^{(k)} = C^{(k)}, \quad \forall N \geq \lceil \frac{k+3}{2} \rceil $. \\ \end{proof} \label{strong_thm} \subsection{The Almost Memoryless Regime} \label{almost_memoryless_sec} \noindent In the A-M regime, the Markov transition matrix is close to uniform. Thus, throughout this section, we assume that $M$ is given by $M = U + \delta T$, such that $U$ is a constant (uniform) matrix, $u_{ij} = s^{-1}$, $\delta > 0$ is a small constant and $T$ satisfies $\sum_{j=1}^{s}{t_{ij}} = 0$. Thus the process is entirely characterized by the set of parameters $(R,T,\delta)$, where $R$ again denotes the emission matrix. \noindent Interestingly, similarly to the High-SNR regime, the conditional entropy given a finite history gives the correct entropy rate up to a certain order which depends on the finite history taken. In the A-M regime we can also prove analyticity of $\{C_N\}$ and $\bar{H}$ in $\delta$ near $\delta=0$. 
This is stated as : \begin{theorem} Let $H_N \equiv H_N(R,T,\delta) = H([Y]_1^N)$ be the entropy of a finite system of length $N$, and let $C_N = H_N - H_{N-1}$. Then : \begin{enumerate} \item There is some (complex) neighborhood $B_{\rho}(0) = \{\delta : |\delta| < \rho \} \subset \mathbb{C}$ of $\delta=0$, in which the (one-variable) functions $\{ C_N \}, \bar{H}$ are analytic in $\delta$, with a Taylor expansion denoted by : \begin{equation} C_N(R,T,\delta) = \sum_{k=0}^{\infty} C_N^{(k)} \delta^k, \quad \bar{H}(R,T,\delta) = \sum_{k=0}^{\infty} C^{(k)} \delta^k \end{equation} (The coefficients $C_N^{(k)}$ are functions of the parameters $R$ and $T$.) \item With the above notations : \begin{equation} N \geq \lceil \frac{k+3}{2} \rceil \Rightarrow C_N^{(k)} = C^{(k)} \end{equation} \end{enumerate} \label{almost_memoryless_thm} \end{theorem} \begin{proof} \begin{enumerate} \item The proof of analyticity relies on a recent result, namely Thm. 1.1 in \cite{Marcus}. In order to use this result, we need to present the {\it HMP} $Y$ in the following way : We introduce the new alphabet $\Gamma \subset \Sigma \times \Sigma $ defined by : $$ \Gamma = \Big\{ w = (w_x,w_y) : w_x,w_y \in \Sigma, r_{w_x w_y} > 0 \Big\} $$ We also introduce the function $\Phi : \Gamma \to \Sigma$, defined by $\Phi(w) \equiv \Phi(w_x,w_y) = w_y$. Let $w,v \in \Gamma$ with $w = (w_x, w_y), v = (v_x,v_y)$. One can look at the new Markov process $W = (X,Y)$, defined on $\Gamma$ by the transition matrix $\Delta_{|\Gamma| \times |\Gamma|}$, which is given by $\Delta_{wv} \equiv P(W_{N+1} = v | W_N = w) = m_{w_x v_x} r_{v_x v_y}$. Then the process $Y$ can be defined as $Y_N = \Phi(W_N)$. Using the above representation, clearly $\Delta$ is analytically parameterized by $\delta$. Moreover, there is some (real) neighborhood $B_{\rho'}(0) \subset \mathbb{R}$ in which all of $\Delta$'s entries are positive. Therefore, Thm. 1.1 from \cite{Marcus} applies here, and according to its proof, $\{C_N\}$ and $\bar{H}$ are analytic (as functions of $\delta$) in some complex neighborhood $B_{\rho}(0) \subset \mathbb{C}$ of zero. \item The proof of part 2 is very similar to that of Thm. \ref{high_SNR_thm}. Distinguishing between the sites by setting $M_i = U + \delta_i T$ in site $i$, we notice that if one sets $\delta_i = 0$ for some $i$, then $M_i$ becomes uniform, and thus knowing $Z_i$ 'blocks' the dependence of $Z_N$ on previous $Z_j$'s ($\forall j < i$). The rest of the proof continues in an analogous way to the proof of Thm. \ref{high_SNR_thm} (including the three lemmas therein), and its details are thus omitted here. \end{enumerate} \end{proof} \section{Computation of the series coefficients} \label{series_coeff_sec} An immediate application of Thms. \ref{high_SNR_thm} and \ref{almost_memoryless_thm} is the computation of the first terms in the series expansion for $\bar{H}$ (assuming its existence), by simply computing these terms for $C_N$ for $N$ large enough. In this section we compute, for both regimes, the first order for the general alphabet case, and also give a few higher-order terms for the simple symmetric binary case. Our method for computing $C^{(k)}$ is straightforward. We compute $C_N^{(k)}$ for $N = \lceil \frac{k+3}{2} \rceil$ by simply enumerating all sequences $[Y]_1^N$, computing the $k$-th coefficient in $P([Y]_1^N) \log P([Y]_1^N)$ for each one, and summing their contribution.
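\noindent As an illustration (not part of the derivation above), the following Python/sympy sketch carries out this enumeration for the symmetric binary case, expanding $C_N$ symbolically in $\epsilon$; the function names are ours, entropies are computed in nats, and the sketch assumes the analyticity discussed in the theorems.
\begin{verbatim}
import itertools
import sympy as sp

def joint_prob(y, p, eps):
    # P([Y]_1^N) for the symmetric binary HMP: X is a symmetric Markov
    # chain with flip probability p, observed through a BSC(eps).
    total = 0
    for x in itertools.product((0, 1), repeat=len(y)):
        pr = sp.Rational(1, 2)                      # stationary distribution
        for i in range(1, len(x)):
            pr *= (1 - p) if x[i] == x[i - 1] else p
        for xi, yi in zip(x, y):
            pr *= (1 - eps) if xi == yi else eps
        total += pr
    return total

def H(N, p, eps):
    # entropy (in nats) of a finite chain of length N
    return -sum(joint_prob(y, p, eps) * sp.log(joint_prob(y, p, eps))
                for y in itertools.product((0, 1), repeat=N))

def C_coeff(k, p):
    # k-th High-SNR coefficient: expand C_N = H_N - H_{N-1} around eps = 0,
    # taking N = ceil((k+3)/2), the smallest N for which C_N^(k) = C^(k)
    eps = sp.symbols('epsilon')
    N = int(sp.ceiling(sp.Rational(k + 3, 2)))
    C_N = H(N, p, eps) - H(N - 1, p, eps)
    return sp.series(C_N, eps, 0, k + 1).removeO().coeff(eps, k)

print(C_coeff(1, sp.Rational(3, 10)))   # first-order coefficient at p = 0.3
\end{verbatim}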
This computation is, however, exponential in $k$, and thus raises the challenge of designing more efficient algorithms, in order to compute further orders and for larger alphabets. \noindent Before giving the calculated coefficients, we need some new notations. For a vector $\alpha$, $diag(\alpha)$ denotes the square matrix with $\alpha$'s elements on the diagonal. We use Matlab-like notation to denote element-by-element operations on matrices. Thus, for matrices $A$ and $B$, $log A$ is a matrix whose elements are $\{\log a_{ij} \}$, and $[A .* B]$ is a matrix whose elements are $\{a_{ij} b_{ij} \}$. $\xi$ denotes the (column) vector of $N$ ones. \subsection{The High-SNR expansion} According to Thm. \ref{high_SNR_thm}, computing $C_2$ enables us to extract $\bar{H}^{(k)}$. This is used to show the following : \begin{proposition} Let $R = I + \epsilon T$. Assume that the entropy rate $\bar{H}$ is analytic in some neighborhood of $\delta = 0$. Then $\bar{H}$ satisfies : $$ \bar{H} = -\pi^t [M .* \log M] \xi + \xi^t \Bigl\{ diag(\log (\pi)) T^t diag(\pi) M - $$ \begin{equation} [diag(\pi) M T + T^t diag(\pi) M] .* [\log(diag(\pi) M)] \Bigr\} \xi \epsilon + O(\epsilon^2) \label{general_1st_order_final} \end{equation} \begin{proof} Noting that according to Thm. \ref{high_SNR_thm}, $\bar{H} = C_2 + O(\epsilon^2)$, we first compute (exactly) $C_2$, and then expand it by substituting $R = I + \epsilon T$. Write $C_2$ as : $$ C_2 = H(Y_N | Y_{N-1}) = $$ \begin{equation} -\sum_{i,j} P(Y_N = j, Y_{N-1} = i) \log \frac{P(Y_N = j, Y_{N-1} = i)}{P(Y_{N-1}=i)} \label{C_2_eq}\end{equation} \noindent We can express the above probabilities as : $$ P(Y_{N-1}=i) = [\pi R]_i $$ \begin{equation} P(Y_N = j, Y_{N-1} = i) = [R^t diag(\pi) M R]_{ij} \equiv F_{ij} \label{prob_ys_eqs}\end{equation} \noindent Substituting eq. (\ref{prob_ys_eqs}) in eq. (\ref{C_2_eq}), and writing in matrix form, we get : \begin{equation} C_2 = \Bigl\{ [log(\pi R)] F - \xi^T [F .* log F] \Bigr\} \xi \label{general_alphabet_ hmm_entropy} \end{equation} \noindent Substituting $R = I + \epsilon T$ gives : $$ F = diag(\pi) M + [diag(\pi) M T + T^t diag(\pi) M] \epsilon + O(\epsilon^2), $$ $$ F .* \log F = [diag(\pi) M] .* \log(diag(\pi) M) + $$ $$ \Bigl\{[diag(\pi) M T + T^t diag(\pi) M] .* [I + \log(diag(\pi) M)]\Bigr\} \epsilon + $$ \begin{equation} O(\epsilon^2) \label{F_log_F_first_order} \end{equation} \noindent Substituting these in eq. (\ref{general_alphabet_ hmm_entropy}) gives, after simplification, the result (\ref{general_1st_order_final}). \end{proof} \label{high_SNR_first_order} \end{proposition} \noindent We note that prop. \ref{high_SNR_first_order} above is a generalization of the result obtained by \cite{Jacquet} for a binary alphabet. \noindent Turning now into the symmetric binary case, the first eleven orders of the series expansion were given in \cite{ZKD1}, but only the first two were proved to be correct. Thm. \ref{high_SNR_thm} proves the correctness of the entire expansion from \cite{ZKD1} (under the analyticity assumption on $\bar{H}$), which is not repeated here. \subsection{The almost memoryless expansion} \noindent By Thm. \ref{almost_memoryless_thm}, one can expand the entropy rate around $M=U$ by simply computing the coefficients $C_N^{(k)}$ for $N$ large enough. For example, by computing $C_2$ we have established, in analogous to prop. \ref{high_SNR_first_order}, the first order : \begin{proposition} Let $M = U + \delta T$. 
Then $\bar{H}$ satisfies : $$ \bar{H} = \log s - s^{-1} \xi^t R [\log (R^t \xi)] - $$ \begin{equation} \xi^t \Big[(s^{-1} R^t T R) .* log(s^{-1} R^t U R) \Big] \xi \delta + O(\delta^2) \label{prop_weak_memory} \end{equation} \begin{proof} Since $\bar{H} = C_2 + O(\delta^2)$, we expand $C_2$ (as given in eq. (\ref{general_alphabet_ hmm_entropy})) in $\delta$. $M$ is simply replaced by $U + \delta T$. Dealing with $\pi$ is more problematic. Note that the stationary distribution of $U$ is $s^{-1} \xi$. We write $\pi = s^{-1} \xi + \delta \psi + O(\delta^2)$, and solve : \begin{equation} (s^{-1} \xi + \delta \psi) (U + \delta T) = (s^{-1} \xi + \delta \psi) + O(\delta^2) \end{equation} \noindent It follows that $\psi$ should satisfy $\psi (I - U) = \xi T$, where $I$ is the identity matrix. We cannot invert $I-U$ since it is of rank $s-1$. The extra equation needed for determining $\psi$ uniquely comes from the requirement $\sum_{i=1}^s \psi_i = 0$. Substituting $M =U + \delta T$ and $\pi = s^{-1} \xi + \psi \delta + O(\delta^2)$ in eq. (\ref{general_alphabet_ hmm_entropy}), one gets : $$ C_2 = \Big\{log(s^{-1} \xi R) s^{-1} R^t U R - $$ $$ \xi^t [(s^{-1} R^t U R) .* log (s^{-1} R^t U R)] \Big\} \xi + $$ $$ \bigg\{\log (s^{-1} \xi R) R^t [s^{-1} diag(\xi) T + diag(\psi) U] R - $$ $$ \xi^t \Big[ \Big(R^t (s^{-1} diag(\xi) T + diag(\psi) U) R \Big) .* $$ \begin{equation} \Big( s U + \log(s^{-1} R^t U R)\Big) \Big] \bigg \} \xi \delta + O(\delta^2) \label{almost_memoryless_inter_eq}\end{equation} \noindent After further simplification, most terms in eq. \ref{almost_memoryless_inter_eq} cancel out, and we are left with the result (\ref{prop_weak_memory}). \end{proof} \label{almost_memoryless_first_order} \end{proposition} \noindent In \cite{Weissman:01} it was shown that the first order term vanishes for the symmetric binary case, which is consistent with eq. \ref{prop_weak_memory}. Our result holds for general alphabets and process parameters. Looking at the symmetric binary case might be misleading here, since by doing so one fails to see the linear behavior in $\delta$ for the general case. \noindent We have computed higher orders for the symmetric binary case by expanding $C_N$ for $N=8$, which gives us $C^{(k)}$ for $k \leq 13$. In this case the expansion is in the parameter $\delta = \frac{1}{2}-p$, and gives (for better readability the dependency on $\epsilon$ is represented here via $\mu = 1-2\epsilon$) : $$ \bar{H} = \log(2) - \mu^4 \bigg[2 \delta^2 + \frac{4}{3} (7 \mu^4-12 \mu^2+6) \delta^4+ $$ $$ \frac{32}{15}(46 \mu^8-120 \mu^6+120 \mu^4-60 \mu^2+15) \delta^6 + $$ $$ \frac{32}{21}(1137 \mu^{12}-4088\mu^{10}+5964 \mu^8-4536 \mu^6+1946 \mu^4- $$ $$ 504 \mu^2+84) \delta^8 + \frac{512}{45} (3346 \mu^{16}-15120 \mu^{14}+28800 \mu^{12}- $$ $$ 30120 \mu^{10}+18990 \mu^8 - 7560 \mu^6+1980 \mu^4-360 \mu^2+45) \delta^{10} + $$ $$ \frac{1024}{165} (159230 \mu^{20}-874632 \mu^{18}+ 2091100 \mu^{16}-2857360 \mu^{14}+ $$ $$ 2465100 \mu^{12}-1400960 \mu^{10}+532312 \mu^8-135960 \mu^6+ $$ \begin{equation} 24145 \mu^4- 3300 \mu^2+330) \delta^{12} \bigg] + O(\delta^{14}) ; \label{series_almost_memoryless} \end{equation} \noindent The above expansion generalizes a result from \cite{Weissman:01}, who proved $\bar{H} = \log(2) - 2 \mu^4 \delta^2 + o(\delta^2)$. Note that for the first few coefficients, all odd powers of $\delta$ vanish, and the coefficients are all polynomials of $\mu^2$, which makes this series simpler than the one obtained in the High-SNR regime (\cite{ZKD1}). 
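\noindent As a simple numerical check (illustrative only, with our own function names and natural logarithms), the truncated expansion (\ref{series_almost_memoryless}) can be compared against a directly computed $C_N$, which upper-bounds $\bar{H}$; the sketch below keeps only the terms up to $\delta^6$.
\begin{verbatim}
import itertools
import numpy as np

def C_N(N, p, eps):
    # conditional entropy C_N = H_N - H_{N-1} (nats), by brute-force
    # summation over output sequences with a forward recursion over states
    M = np.array([[1 - p, p], [p, 1 - p]])
    R = np.array([[1 - eps, eps], [eps, 1 - eps]])
    def H(n):
        h = 0.0
        for y in itertools.product((0, 1), repeat=n):
            alpha = 0.5 * R[:, y[0]]                # P(Y_1, X_1 = x)
            for yi in y[1:]:
                alpha = (alpha @ M) * R[:, yi]
            pr = alpha.sum()
            h -= pr * np.log(pr)
        return h
    return H(N) - H(N - 1)

def hbar_series(p, eps):
    # eq. (series_almost_memoryless), truncated at delta^6
    d, m = 0.5 - p, 1.0 - 2.0 * eps
    return np.log(2) - m**4 * (
        2 * d**2
        + (4 / 3) * (7 * m**4 - 12 * m**2 + 6) * d**4
        + (32 / 15) * (46 * m**8 - 120 * m**6 + 120 * m**4
                       - 60 * m**2 + 15) * d**6)

print(hbar_series(0.4, 0.2), C_N(9, 0.4, 0.2))
\end{verbatim}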
\section{Radius of Convergence} \label{analytic_sec} The usefulness of series expansions such as the ones derived in eq. (\ref{series_almost_memoryless}) and in \cite{ZKD1} for practical purposes depends strongly on the radius of convergence. Determining the radius is a difficult problem, as it relates to the domain of analyticity of $\bar{H}$. In Thm. \ref{almost_memoryless_thm} we proved that the radius for the A-M expansion is positive. \begin{figure} \centerline{ \psfig{figure=TwoRadiusesComparison.eps,height=12.5cm,width=8.5cm} } \caption{Approximations for $\bar{H}$ using the first few terms in its series expansion. a. The High-SNR expansions using $9,10$ and $11$ terms for $p=0.2$ deviate from the bounds for large values of $\epsilon$. The first few terms of the expansion have alternating signs, therefore the direction of the deviation is determined by the parity of the number of terms taken. b. The A-M expansions using $8,10$ and $12$ terms for $\epsilon=0.2$ remain within the bounds for any value of $p$. \label{TwoExpansionsComparison_fig}} \vspace{1cm} \end{figure} \noindent For the High-SNR case, we gave a numerical estimation of the radius of convergence $\rho(p)$ as a function of $p$ (\cite{ZKD2}), based on the first few known terms. When one applies the same procedure to the coefficients of the A-M expansion, the numerical values of the estimated radius are much higher. The difference is demonstrated in fig. \ref{TwoExpansionsComparison_fig}. In this figure, the (finite) series expansions up to twelfth order are compared to two known bounds on $\bar{H}$ from \cite{Cover}. The upper bound is simply $C_N = H(Y_N | [Y]_1^{N-1})$ and the lower bound is $c_N \equiv H(Y_N | X_1, [Y]_1^{N-1})$, for $N=2$. As can be seen from the figure, for the High-SNR case at $p=0.2$, the finite-order expansions are not within the bounds for large values of $\epsilon$. For the A-M case, for $\epsilon=0.2$, the finite-order expansions remain within the bounds for any $0<p<\frac{1}{2}$. \\ \noindent The estimated radius $\rho(p)$ for the High-SNR expansion is plotted as a function of $p$ in fig. \ref{AnalyticDomain_fig}.a. In our context, the result of \cite{Marcus} proves that $\bar{H}(p,\epsilon)$ is real analytic in the domain $\Omega \subset \mathbb{R}^2, \quad \Omega = \{(p,\epsilon): 0<p,\epsilon<1 \}$ (it is not known whether $\Omega$ is maximal in this respect). This domain is shown in fig. \ref{AnalyticDomain_fig}.b. For any $0<\epsilon<1$, the A-M expansion is near the point $(\frac{1}{2}, \epsilon)$, which is an interior point of $\Omega$. The High-SNR expansion is near some point $(p,0)$, which lies on the boundary of $\Omega$. \begin{figure} \centerline{\psfig{figure=AnalyticDomain.eps,height=12.5cm,width=8.5cm} } \caption{a. The estimated radius of convergence $\rho(p)$ for the High-SNR expansion as a function of $p$. b. The domain $\Omega$ (shaded gray area) in the $\mathbb{R}^2$ plane for which it is known \cite{Marcus} that $\bar{H}$ is real analytic in $(p,\epsilon)$. The A-M expansion is near the vertical line $p=\frac{1}{2}$. The High-SNR expansion is near the horizontal boundaries at $\epsilon=0$ and $\epsilon=1$. \label{AnalyticDomain_fig}} \vspace{1cm} \end{figure} \section{Conclusion} We presented a generalization and proof of the conjecture introduced in \cite{ZKD1}, relating the expansion coefficients of finite system entropies to those of the entropy rate for {\it HMPs}.
Our new theorems shed light on the connection between finite and infinite chains, and give a practical and straightforward way to compute the entropy rate as a series expansion up to an arbitrary power. \noindent The surprising 'settling' of the expansion coefficients, $C_N^{(k)} = C^{(k)}$ for $N \geq \lceil \frac{k+3}{2} \rceil$, holds for the entropy. For other functions involving only conditional probabilities (e.g. the relative entropy between two {\it HMPs}) a weaker result holds: the coefficients 'settle' for $N \geq k$. We note that this is still a highly non-trivial result, as it is known that for other regimes (e.g. 'rare transitions' \cite{Weissman:03}), a finite chain of any length does not give the correct asymptotic behavior even to first order. We also estimated the radius of convergence for the expansion in the two regimes, 'High-SNR' and 'A-M', and demonstrated their quantitatively different behavior. Further research in this direction, which closely relates to the domain of analyticity of the entropy rate, is still required. \section*{Acknowledgment} M.A. is grateful for the hospitality shown him at the Weizmann Institute, where his work was supported by the Einstein Center for Theoretical Physics and the Minerva Center for Nonlinear Physics. The work of I.K. at the Weizmann Institute was supported by the Einstein Center for Theoretical Physics. E.D. and O.Z. were partially supported by the Minerva Foundation and by the European Community's Human Potential Programme under contract HPRN-CT-2002-00319, STIPCO.
\section{Introduction} One of the first science observations carried out with the {\it Spitzer Space Telescope} (Werner et al. 2004) was the non-proprietary extragalactic First Look Survey (xFLS) which was designed to characterize the infrared sky at previously unexplored sensitivities. The {\it IRAS} mission first uncovered the presence of infrared luminous galaxies in the local universe (Neugebauer et al. 1984; Soifer et al. 1987), and the {\it ISO} infrared (Elbaz et al. 1999, 2002; Rodighiero et al. 2005) and ground-based submillimeter and millimeter observations (Blain et al. 2002 and references therein) have highlighted the importance that infrared luminous galaxies have on the general understanding of galaxy evolution. Since the cosmic infrared background (CIB) peaks in the far-infrared (FIR) (Hauser \& Dwek 2001), studying the properties of galaxies that are bright in the FIR is crucial for constraining models of galaxy evolution. In this paper, we present 70$\mu$m and 160$\mu$m observations of the xFLS field using the Multiband Imaging Photometer for {\it Spitzer} (MIPS, Rieke et al. 2004). The xFLS is a 4\,deg$^2$ survey. The SWIRE {\it Spitzer} survey (Londsdale et al. 2004) covers a wider area (49\,deg$^2$) to similar depths, and deeper observations covering smaller areas are being taken by the MIPS Instrument Team as part of the Guaranteed Time Observers (GTO) program (e.g., Dole et al. 2004a) and other groups. Although the xFLS is not unique in terms of depth or area coverage, the field has a large number ($\sim 3000$) of spectroscopic redshifts (P. Choi et al. 2005, in preparation; F. Marleau et al. 2005, in preparation), ancillary radio (Condon et al. 2003), and optical imaging data (Fadda et al. 2004) that permit detailed multi-wavelength studies over a relatively large area. In this paper, we measure the source counts and use the available redshifts to constrain the rest-frame spectral energy distributions (SEDs) and derive the average infrared properties for the xFLS 70$\mu$m and 160$\mu$m populations. A cosmology of H$_0=70\,\rm{km~s}^{-1}\,{\rm Mpc}^{-1}$, $\Omega_{\rm M}=0.3$, and $\Omega_{\Lambda}=0.7$ is assumed throughout this paper. \section{Observations} The xFLS survey covers a 4\,deg$^2$ region ($17^{\rm h}18^{\rm m}00^{\rm s}$, $+59^{\circ}30^{\prime}00^{\prime\prime}$) within the northern continuous viewing zone of {\it Spitzer}\footnote{The extragalactic FLS data can be retrieved from the {\it Spitzer} Science Center at http://ssc.spitzer.caltech.edu/fls/}. Inside the xFLS main field a smaller verification strip of 0.25\,deg$^{2}$ centered at $17^{\rm h}17^{\rm m}00^{\rm s}$, $+59^{\circ}45^{\prime}00^{\prime\prime}$ was observed with an integration time of 4 times that of the main survey to characterize the completeness and source reliability of the main survey. In total, 27.7 hours of xFLS observations were taken in 2003 December with the MIPS instrument. An additional 16.8 hours of observations within the xFLS main field were taken in 2005 May to characterize the performance of the 70$\mu$m array at warmer telescope temperatures ($T_{mirror}\simeq 9.5$\,K). These observations produced useful data at 70$\mu$m, but the 160$\mu$m data were not usable. Figure~1 shows the layout of the observations. All of the MIPS observations were taken using the medium scan-rate mapping mode (4.2\,s data collection events [DCEs]). 
The main-survey data were taken with adjacent 2\,deg scan legs that were offset in the cross-scan direction by $276^{\prime\prime}$ (nearly the full field of view of the arrays). The main-survey was covered twice to identify potential asteroids and to increase the redundancy of the data set. Unfortunately, half of the MIPS-70 array was rendered useless after launch due to a cable failure outside the instrument (Rieke et al. 2004). Therefore, at 70$\mu$m each position of the main survey was covered by only one scan leg. Since the data were not scheduled exactly as originally planned, slight rotations between the 8 astronomical observation requests (AORs) yielded small gaps with zero coverage in the south-west corner of the main-survey 70$\mu$m map. The verification strip was observed using 4 AORs with 0.5\,deg scan legs at the medium scan rate. Each AOR covered the 0.5\,deg$\times$0.5\,deg field using cross-scan steps of $148^{\prime\prime}$ (slightly less than half an array field of view). The warm test data taken in 2005 May were centered on the 70$\mu$m main-field. These data consisted of 6 AORs with 1.75\,deg scan legs. Cross-scan steps of $148^{\prime\prime}$ were used to map the 1.75\,deg$\times$1.75\,deg field once. Table~1 shows the average integration times, sensitivities, and area coverage for the 70$\mu$m and 160$\mu$m observations. \section{Data Reduction} \subsection{BCD Pipeline Processing} The raw 70$\mu$m and 160$\mu$m (MIPS-Germanium [Ge]) data were downloaded from the {\it Spitzer} Science Center (SSC) archive and were reduced using the offline Ge Reprocessing Tools (GeRT), available from the public SSC website. The GeRT uses an offline version of the SSC pipeline to produce the basic calibrated data products (BCDs), following the algorithms derived by the MIPS Instrument Team and the MIPS Instrument Support Team (Gordon et al. 2005). The processing was done using the latest software (SSC pipeline version S12) to take advantage of improvements not currently available for the online xFLS data products (made with pipeline version S11). BCD processing has two main steps: (1) calculation of the slope of the data ramp, and (2) calibration of the slope image. For the xFLS MIPS-Ge data, the raw 4.2\,s data ramps are comprised of 32 non-destructive reads per pixel (one DCE). After correcting for the electronic nonlinearity, cosmic ray events and other discontinuities are identified in the data ramps. The SSC pipeline identifies discontinuities using a maximum likelihood technique (Hesselroth et al. 2000). Linear slopes are calculated for the ramp segments between discontinuities and checked for consistency. The slopes from the segments are combined based on the empirical errors estimated from the scatter of the data within the ramp segments. Average slopes for each pixel are calculated to produce an uncalibrated slope image. The second step of BCD processing is the calibration of the slope image. The calibration of MIPS-Ge data is based on frequent measurements of internal stimulator flashes (stims) which are used to track the responsivity of the detectors as a function of time. For the xFLS medium scan-rate data, the stim flashes are observed every 25 DCEs (118\,s). The stim flash signal is measured by subtracting the previous ``background'' DCE from the stim frame which is taken at the same position on the sky. For each AOR, a stim response function ($SR[t]$) is calculated from interpolating between the stim minus background measurements. 
After the determination of the stim response as a function of time ($SR[t]$), the BCD data are calibrated using the following equation: \begin{equation} BCD(t)=FC[U(t)/SR(t) - DARK]/IC, \end{equation} where U(t) is the uncalibrated slope image, DARK is the dark calibration file, and IC is the illumination correction calibration file which corrects for the combined illumination pattern from the telescope and the stim flash signal. The DARK and IC calibration files are stable and are generated by combining data from several different campaigns to improve the signal-to-noise. The flux conversion factor ($FC$) converts the instrument units into physical surface brightness units of MJy\,sr$^{-1}$ and is derived from observations of standard calibrators (Sec. 3.4). \subsection{BCD Filtering} Before the 70$\mu$m bias level was lowered in 2004 March, the 70$\mu$m data showed significant data artifacts. Examples and discussion of MIPS-Ge artifacts are shown in the MIPS Data Handbook\footnote{http://ssc.spitzer.caltech.edu/mips/dh/}. The two main artifacts impacting the xFLS data are the stim flash latents and the variations of the slow response as a function of time. The 160$\mu$m data are affected by these issues to a lesser degree due to the faster time constants of the 160$\mu$m stressed-Germanium detectors. For point sources, the stim latent and slow response residuals are additive effects. The filtering of the 160$\mu$m data is straight-forward. We used the filtered BCDs products, which remove a running median per pixel as a function of time by subtracting the median value for the surrounding DCEs closest in time (ignoring the current DCE, stim DCEs, and bad data). This high-pass time median filter removes data artifacts as well as the extended cirrus emission. To remove the stim latent artifacts at 160$\mu$m, we simply ignored the first DCE after the stim flash since the stim latents decay away within one DCE. The filtering process for the 70$\mu$m data is slightly more complicated. The time median filter by itself does not remove all of the data artifacts. Stim latents remain for many DCEs and are correlated by column. Since the scan map direction is nearly along the columns of the array (along the y-axis), the column artifacts are amplified. We remove the column residuals by subtracting the median of the values along each column for every BCD. The combination of the high-pass time median filter per pixel and the column median filter removes the bulk of the data artifacts at 70$\mu$m. The resultant rms is lower for narrow high-pass median filter widths ($<15$ DCEs), but narrow filter windows can yield significant negative side-lobes around bright sources. In addition, column filtering does not maintain point source calibration for the brightest sources ($\ga 0.5$\,Jy). To avoid significant negative side-lobes and to preserve calibration, the filtering of the MIPS-Ge data was done in two passes. The data from the first filtering pass were co-added, and sources were extracted (Sec. 3.3) to find the location of the bright sources. The source positions within the original BCDs were masked and new filtering corrections were calculated in the second pass, ignoring the pixels containing sources. This second-pass filtering technique minimizes the data artifacts while preserving point-source calibration. Different filtering methods were tested and optimized using the GeRT. 
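To make these two operations concrete, the following Python sketch illustrates the column-median and high-pass time-median corrections with optional source masking (the second-pass step described above). It is only a schematic outline under assumed array shapes and our own function names; the actual processing was performed with the GeRT, which in addition excludes stim frames and flagged samples.
\begin{verbatim}
import numpy as np

def filter_bcd_stack(stack, src_mask=None, window=12):
    # stack    : (n_dce, ny, nx) array of calibrated BCD frames
    # src_mask : boolean array (same shape), True where bright sources fall;
    #            those pixels are ignored when deriving the corrections
    # window   : width of the high-pass time median filter, in DCEs
    data = np.where(src_mask, np.nan, stack) if src_mask is not None \
           else stack.astype(float).copy()
    out = stack.astype(float).copy()

    # (1) column filtering: subtract the median of each column, per BCD
    col_med = np.nanmedian(data, axis=1)            # shape (n_dce, nx)
    out -= col_med[:, None, :]
    data -= col_med[:, None, :]

    # (2) high-pass time median: subtract, per pixel, the median of the
    #     neighbouring DCEs (the current DCE itself is excluded)
    n_dce = stack.shape[0]
    for i in range(n_dce):
        lo, hi = max(0, i - window // 2), min(n_dce, i + window // 2 + 1)
        neighbours = [j for j in range(lo, hi) if j != i]
        out[i] -= np.nanmedian(data[neighbours], axis=0)
    return out
\end{verbatim}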
At 70$\mu$m, we applied column filtering followed by a high-pass time median filter with a width of 12 DCEs to yield the deepest image. The final sensitivity of the filtered 70$\mu$m image was improved by more than a factor of two in comparison to data with no filtering. Filtering has less of an impact at 160$\mu$m, but is useful for deep observations of faint point sources. We adopted a high-pass time median filter width of 12 DCEs for the 160$\mu$m data using the second-pass filtering technique as done for 70$\mu$m. \subsection{Data Coaddition and Source Extraction} The filtered BCDs were coadded using the SSC mosaicing and source extraction software (MOPEX) (Makovoz \& Marleau 2005). The data were combined ignoring bad data flagged in the BCD mask files (bmask) and the bad pixels defined by the static pixel mask (pmask). Using the redundancy of the MIPS data, additional spurious data values were removed via outlier rejection. The remaining data were corrected for array distortions and projected onto a sky grid with square pixels that were more than 2 times smaller than the original pixels. The output mosaic pixels are $4\arcsec$ at 70$\mu$m and $8\arcsec$ at 160$\mu$m. The data were averaged using weights proportional to the fractional area subtended by the original pixels as projected onto the output pixel grid. Since the calibration of the main, verification, and the warm 70$\mu$m data sets are consistent within measurement errors, all data were combined to yield the deepest images. Sources were extracted from the final images using MOPEX. The MOPEX software uses a Point-source Response Function (PRF) image to optimize source detection and point source fitting. A PRF calibration image was made for each MIPS band by coadding isolated bright point sources within the xFLS field. A point source probability (PSP) image was made from non-linear matched filtering of the input image with the PRF. The PSP image represents the probability of having a point source above the noise at each pixel. The PSP image was multiplied by the input image to yield the filtered image used for source detection. The filtered image enhances the presence of point sources in the mosaic while smoothing out noise features that do not match the input PRF, providing a more robust image for source detection. After source detection, the original images were fitted with the PRF image via $\chi^2$ minimization to extract source positions and flux densities. Regions containing multiple peaks were de-blended. For extended sources and/or close blends, not well fitted by the PRF, large aperture measurements were used to derive the total flux density for the source. The noise across the images is non-uniform. For optimal source detection and extraction, an accurate representation of the uncertainties is needed. The MOPEX uncertainty image based on the input BCD pipeline uncertainties is useful for identifying the relative uncertainties across the image, but the pipeline errors typically underestimate the absolute uncertainty level for low background regions. The median level of the MOPEX uncertainty image was scaled to match the average empirical noise in the mosaics. By using a properly scaled uncertainty image, we find an average $\chi^2$ for the fitted point sources of $\chi^2\simeq 1$, suggesting that the uncertainty image is self-consistent with the average point source errors in the data. Several tests were done to validate the photometric results from MOPEX. 
No calibration differences were found for different sub-sets of the data. We also compared the photometry done with the MOPEX software with that from the program StarFinder (Diolaiti et al. 2000). Both programs use an empirical PRF to fit source flux density. The results from MOPEX and StarFinder agree to better than 5\% for bright point sources ($\ga 50$\,mJy at 70$\mu$m and $\ga$\,200\,mJy at 160$\mu$m) and better than 10\% rms down to the limits of the catalog, consistent with the uncertainties in the data. For faint sources detected in the MIPS-Ge bands, PRF fitting techniques yield more reliable results than aperture measurements. By comparing results between the verification data and the main survey, we find a consistency of $\pm10$\% for PRF matching techniques and a dispersion for aperture measurements of larger than $\pm25$\% at faint flux density levels. \subsection{Calibration} The absolute calibration is derived using observations of stars at 70$\mu$m and observations of asteroids and well-studied luminous infrared galaxies at 160$\mu$m. The data were processed assuming flux conversion factors of 634 MJy\,sr$^{-1}$ per MIPS-70 data unit and 42 MJy\,sr$^{-1}$ per MIPS-160 data unit. The flux densities in the source catalogs were multiplied by the correction factors of 1.16 and 1.17 for the MIPS-70 and MIPS-160 bands respectively, which includes the color corrections associated with a constant $\nu f_{\nu}$ SED and updated calibration conversion factors based on the latest measurements. Additional color corrections for sources with non-constant $\nu f_{\nu}$ SEDs are negligible since the color corrections are similar for the range of SEDs appropriate for galaxies; $f_{\nu}\propto \nu^{-\alpha}$, where $\alpha \sim 0$\,--\,3. For flux density ranges of 50\,mJy\,--\,2\,Jy, the absolute calibration uncertainty is estimated to be about 15\% and 25\% for the 70$\mu$m and 160$\mu$m bands respectively. No corrections have been made for possible flux nonlinearities, which may be significant for sources brighter than this nominal flux density range. Observers are recommended to check the latest information on MIPS calibration at the SSC web site. We confirmed that the absolute 70$\mu$m flux density scale for the xFLS data is consistent with previous measurements made by {\it IRAS} to within 10\%. For comparison with {\it IRAS}, we interpolated between the flux densities in {\it IRAS}-60 and {\it IRAS}-100 bands (Beichman et al. 1988) to the effective wavelength of the MIPS-70 band (71.4$\mu$m)\footnote{The effective wavelengths of the MIPS-24, 70, and 160 bands are 23.7$\mu$m, 71.4$\mu$m, and 155.9$\mu$m, respectively. Throughout this paper, the flux densities for the MIPS-bands are defined as S24$\equiv$ S$_\nu$(23.7$\mu$m), S70$\equiv$ S$_\nu$(71.4$\mu$m), and S160$\equiv$ S$_\nu$(155.9$\mu$m).} applying the appropriate color corrections. For the five brightest sources in the field detected by {\it IRAS}, we derive a flux density ratio of {\it Spitzer/IRAS} $=1.0\pm0.1$ at 71.4$\mu$m. The calibration at 160$\mu$m is more uncertain given that xFLS sources have not been observed previously at wavelengths longer than 100$\mu$m. Simple SED model fits to the data indicate that the calibration at 160$\mu$m is consistent with {\it IRAS} to within 30\%. \subsection{Mosaics and Catalogs} The MIPS-Ge mosaics and catalogs are available online at the SSC website. 
For both of the MIPS-70 and 160$\mu$m bands, we provide coadded mosaics of the entire xFLS data sets along with the corresponding coverage maps and uncertainty images. The mosaics are from the filtered products and have been background subtracted. The absolute level of the uncertainty image has been scaled to match the empirical noise within the mosaics. The coverage map represents the effective number of input BCDs for each pixel of the mosaic after outlier rejection. There are regions within the 70$\mu$m and 160$\mu$m mosaics with zero coverage. The 70$\mu$m image has gaps in the south-west corner of the mosaic due to non-overlapping AORs. The 160$\mu$m mosaics have a regular pattern of low coverage due to the unusable block of five detectors and the low redundancy of these observations. Point-source catalogs were made for sources with S/N$>7$ at each wavelength to insure high reliability. Sources near the edges or within regions of very low coverage were deleted from the public catalogs to avoid potentially spurious sources. In total, 687 70$\mu$m sources (Table~2) and 207 160$\mu$m sources (Table~3) are cataloged. The 70$\mu$m and 160$\mu$m catalogs are independent. The catalogs can be combined together or with xFLS data at other wavelengths, depending on the specific application. The average point source rms ($1\sigma$) is 2.8\,mJy and 20\,mJy for the 70$\mu$m warm+main data and 160$\mu$m main-survey data, respectively. For the deeper verification regions, the point source rms is 1.6\,mJy at 70$\mu$m and 10\,mJy at 160$\mu$m. The images were examined visually to validate source detection and to determine the appropriate method for deriving the flux densities on a source by source basis. About 98\% of the derived flux densities are based on PRF fitting (Sec.\,3.3). The remaining sources mostly consist of bright extended galaxies and/or close blends that are not well fitted by the PRF. In these cases, large aperture measurements were made to derive the total flux densities. For bright extended sources that have one or more neighboring point sources within the aperture, the flux density of the bright galaxy was derived by subtracting the PRF measurement(s) of the faint point source(s) from the total aperture measurement. Table~2\&3 show the format for an example portion of the xFLS MIPS 70$\mu$m and 160$\mu$m catalogs published in the online edition of the Journal. The average radial positional errors are $2\farcs6$ and $5\farcs2$ ($1\sigma$) for the S/N$>7$ MIPS-70 and 160$\mu$m sources respectively. No systematic differences are found for the 70$\mu$m source coordinates ($<0\farcs2$) in comparison to the more accurate 24$\mu$m positions. For consistency with the 24$\mu$m and 70$\mu$m data, we correct the 160$\mu$m positions by $4\farcs7$ to compensate for the systematic positional offset measured for the 160$\mu$m sources (a known issue for pre-S12 versions of the pointing pipeline). The flux densities have been color corrected assuming a constant $\nu f_{\nu}$ SED. The tabulated flux density errors include the absolute flux density uncertainties of 15\% and 25\% at 70$\mu$m and 160$\mu$m, respectively. \subsection{Reliability and Completeness} The verification data were used to test the reliability of the source detection and extraction techniques. For a high level of reliability at $S/N < 10$, it is important that the uncertainty image accurately reflects the small-scale spatial variations in the noise across the images. 
Using the uncertainties based on the BCDs, properly scaled to represent the average empirical noise in the data (Sec.\,3.3), we obtained good results. Based on the deeper verification data, we do not find spurious sources with S/N $>5$ in the main survey within the verification field. Sources detected at S/N $>4$ are 80-85\% reliable, and detections at the S/N $>3$ level are only 50-60\% reliable. Based on these results, we adopted a S/N $>5$ cut for deriving the source counts in Sec.\,4.1. A more conservative S/N$>7$ criterion was adopted for the public catalogs (Sec. 3.5). At 70$\mu$m the completeness and source counts were measured within the overlapping region of the warm and main fields (Fig. 1), while the entire main-field xFLS area was used at 160$\mu$m. Both empirical completeness estimates based on the verification data and simulated completeness measurements were made. The simulations of completeness were carried out by adding point sources with different flux densities into the mosaics at random locations, and then extracting sources using the same techniques adopted for the source catalogs. The simulations and empirical methods for estimating completeness gave consistent results. Using the adopted S/N$>5$ criterion, the completeness level falls rapidly below 60\% for S70$<14$\,mJy (Fig. 2) and S160$<100$\,mJy (Fig. 3). Similar completeness simulations were done for the deeper verification field, and we find 60\% completeness levels of approximately 9\,mJy and 60\,mJy at 70$\mu$m and 160$\mu$m, respectively. \section{Results and Discussion} \subsection{Source Counts} The source counts are derived for the main xFLS and verification fields based on S/N$>5$ catalogs corrected for completeness. At 70$\mu$m we use the 3.3 deg$^2$ region containing both the main and warm survey data. The 160$\mu$m counts are based on the entire 4.5 deg$^2$ area of the main survey. Figures 4\&5 show the differential source counts ($dN/dS\times S^{2.5}$) at 70$\mu$m and 160$\mu$m respectively. Over the entire observed range of flux density, both the 70$\mu$m and 160$\mu$m counts increase at super-Euclidean rates with decreasing flux density. In total 845 sources at 70$\mu$m were detected ($>5\sigma$ at S70$<440$\,mJy) within the main$+$warm field area and 186 sources were detected (S70$<70$\,mJy) within the verification field (Table~4). In comparison at 160$\mu$m, 227 sources were detected in the main-field (S160$<880$\,mJy) and 45 sources (S160$<140$\,mJy) were found in the verification field (Table~5). The source counts are consistent between the main and verification data for the overlapping range of flux densities. The xFLS counts are also consistent within errors with the previous MIPS-Ge counts published by Dole et al. (2004a). The error bars for the source counts include the Poisson noise, the completeness errors, and the absolute flux density calibration uncertainty. The data were binned such that the errors are not dominated by Poisson statistics, except for the highest flux density bins. In the lowest flux density bins, the errors on completeness dominate the uncertainties. Dole et al. (2004a) found a difference in the 70$\mu$m counts between the CDF-S and Marano observations, which they attributed to field variations. The measured 70$\mu$m xFLS counts are within the range of values previously measured, and closer to the CDF-S counts at faint flux densities. The xFLS-Ge counts are also consistent within uncertainties with the evolutionary models of Lagache et al. 
(2004), which were updated to match the {\it Spitzer} 24$\mu$m counts. However, the xFLS data points are systematically slightly lower than the predictions from current models. Counts from the wide-area SWIRE survey would be needed to check for possible field variations and to test whether the models require slight modifications. Although the xFLS 70$\mu$m counts are at fainter levels than those previously published (Dole et al. 2004a), they are not yet deep enough to measure the location of the expected turn over of the differential counts (Fig. 4). The flux density at which the differential counts turn over provides important constraints on evolutionary models (Lagache et al. 2004; Chary \& Elbaz 2001; King \& Rowan-Robinson 2003; Xu et al. 2001), as shown by the 24$\mu$m data (Marleau et al. 2004; Papovich et al. 2004; Chary et al. 2004). At 160$\mu$m {\it Spitzer} is not expected to measure the turn over in the counts (Fig. 5), since the 160$\mu$m data are expected to be limited by confusion below 40\,mJy (Dole et al. 2004b). The differential counts at 70$\mu$m are expected to turn over at around 10\,mJy (Lagache et al. 2004), which is well above the confusion limit, suggesting that deeper 70$\mu$m counts would provide useful constraints for galaxy evolution models. \subsection{Infrared Colors} For high reliability, the source counts were derived using a S/N$>5$ criterion for the individual bands. However, it is possible to maintain a high level of reliability at lower S/N by using the MIPS-24$\mu$m and/or radio data of the field. For studying the infrared colors, we used S/N$>4$ source lists to increase the number of sources detected at high-redshift in the MIPS-Ge bands. The 70$\mu$m source list was bandmerged with the 24$\mu$m catalog (D. Fadda et al., in preparation). At S/N$>4$, 1779 xFLS-70 sources have counterparts at 24$\mu$m within $6\arcsec$. The different resolutions of the 24$\mu$m, 70$\mu$m, and 160$\mu$m data sets ($6\arcsec$, $18\farcs5$, and $40\arcsec$, respectively) complicate the identification of the counterparts in the MIPS bands. Out of the 1779 70$\mu$m sources, 9\% have multiple candidate 24$\mu$m counterparts. To avoid potential spurious matches, we only used sources with one-to-one matches for studying the infrared colors. The $4\sigma$ xFLS-160 source list was matched to the 70$\mu$m data set using a positional matching radius of $13\arcsec$. The resulting matched MIPS-160+70 source list was matched to the MIPS-70+24 source list. We find 301 160$\mu$m sources with one-to-one matches between all three MIPS-bands. Out of the 1618 xFLS-70 sources with one 24$\mu$m positional match, 427 currently have redshifts. Approximately 57\% of these sources come from the NOAO WIYN-Hydra radio-selected redshift survey (F. Marleau et al. in preparation), 16\% of the redshifts are from the Keck-DEIMOS 24$\mu$m-selected sample (P. Choi et al., in preparation), and the remaining 27\% of redshifts are from the SLOAN survey (Strauss et al. 2002). The largest redshift to date is for a quasar at $z=3.56$. For the 160$\mu$m-selected sources with detections in all three MIPS-bands, about half (146) have known redshifts. Although the redshift survey of xFLS-Ge sources is not nearly complete (only 26\% of the 70$\mu$m sources and 49\% of the 160$\mu$m sources currently have redshifts), there are a sufficient number of sources to study the general trends in the infrared colors as a function of redshift for the sample. We adopt two sets of SED models for studying the infrared colors. 
The first set of SED models is based on a simple modified blackbody (Blain, Barnard, \& Chapman 2003). The SED is expressed as $f_{\nu} = \epsilon_{\nu}B_{\nu}$, where $B_{\nu}(T_{d},\nu)$ is the blackbody function for a dust temperature $T_{d}$, and $\epsilon_{\nu} \propto \nu^{\beta}$ is the dust emissivity. In the mid-IR, we substitute a power law of the form $f_{\nu} \propto \nu^{-\alpha}$, smoothly matching $\epsilon_{\nu}B_{\nu}$ at longer wavelengths (Blain et al. 2003). For this simple SED model, there are three free parameters: $T_{d}$, $\beta$, and $\alpha$. The S160/S70 ratio constrains $T_{d}$, while the S70/S24 ratio measures $\alpha$. The dust emissivity index $\beta$ is not very well constrained by the {\it Spitzer} data alone. For simplicity, we adopt a constant value of $\beta=1.5$, which is consistent with the results for low-redshift {\it IRAS} galaxies (Dunne et al. 2000). The second set of SED models uses a physically more realistic approach by assuming a power-law distribution of dust masses as a function of radiation field intensity (Dale et al. 2001). The dust mass ($M_d$) as a function of the radiation field ($U$) is given by $dM_{d}(U) \propto U^{-\gamma} dU$ (Dale et al. 2001; Dale \& Helou 2002)\footnote{We use $\gamma$ to represent the ``$\alpha$'' parameter of Dale et al. (2001).}. Quiescent, cirrus-like dust regions are expected to have $\gamma\simeq 2.5$, while environments near active H{\sc ii} regions are expected to be approximated by $\gamma\simeq1$. For a mixture of active and quiescent regions, the average effective $\gamma$ of star-forming galaxies should lie in the range $1\la \gamma \la 2.5$. Figure 6 shows the observed 70$\mu$m/24$\mu$m flux density ratio (S70/S24) plotted as a function of redshift with several predictions based on the SEDs of local galaxies. The majority of the xFLS sources have S70/S24 ratios within the range of values between the extreme ultraluminous starburst Arp\,220 and the more typical starburst M82. At moderate and high redshifts, infrared-cool ultraluminous infrared galaxies (ULIRGs, $L>10^{12}\,L_{\odot}$), e.g., Arp\,220, are expected to have the highest S70/S24 ratios, while warm-infrared ULIRGs, e.g., Mrk 231, and AGN-dominated sources are expected to have lower S70/S24 ratios. Spiral galaxies without a strong infrared excess in their SED (e.g., M100) show decreasing S70/S24 ratios as a function of redshift; however, these galaxies are not detected at high redshifts ($z>0.5$) in the xFLS-Ge survey. In general, the S70/S24 ratio can be used to help distinguish AGN-dominated sources from star-forming galaxies, except in cases for which strong polycyclic aromatic hydrocarbon (PAH) emission features are redshifted into the 24$\mu$m band (e.g., M82 at $z\sim2$ in Fig. 6) and for AGN with strong silicate absorption features (e.g., Spoon et al. 2004) at $z\sim1.5$. For the xFLS sample currently with redshifts, only 5\% show low S70/S24 ratios consistent with AGN SEDs. The AGN population is fitted with $T_d=90\pm30$\,K and $\alpha=1.1\pm0.3$ (Fig. 6), which is consistent with the {\it IRAS} observations of Seyfert galaxies (Miley, Neugebauer, \& Soifer 1985). The majority of the xFLS sample with redshifts are starbursts at $z<0.5$. Figure 6 shows a large concentration of sources at $z\sim 0.2$. The S70/S24 ratios based on the SEDs of quiescent and active galaxies (Dale \& Helou 2002) intersect at $z\sim0.2$, which may contribute to this concentration of data points.
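To illustrate the first (modified blackbody) model, the following sketch computes observed-frame MIPS flux density ratios as a function of redshift. It is a simplified illustration with our own function name and grid choices; the normalization cancels in the ratios, and the matching of the mid-IR power law onto the Wien side is implemented only schematically.
\begin{verbatim}
import numpy as np

H_P, K_B, C_LIGHT = 6.626e-34, 1.381e-23, 2.998e8

def mips_color(z, lam1, lam2, Td=30.0, beta=1.5, alpha=2.4):
    # Observed-frame flux-density ratio S(lam1)/S(lam2) at redshift z for
    # f_nu = eps_nu * B_nu (eps_nu ~ nu^beta), replaced on the Wien side by
    # a power law f_nu ~ nu^-alpha joined where the log-log slopes match.
    nu = np.logspace(11.5, 14.5, 4000)               # rest-frame grid (Hz)
    x = H_P * nu / (K_B * Td)
    log_f = (3.0 + beta) * np.log(nu) - np.log(np.expm1(x))
    slope = np.gradient(log_f, np.log(nu))
    i_c = np.searchsorted(-slope, alpha)             # slope drops below -alpha
    log_f[i_c:] = log_f[i_c] - alpha * (np.log(nu[i_c:]) - np.log(nu[i_c]))

    def s_obs(lam_obs):                              # arbitrary normalization
        nu_rest = (1.0 + z) * C_LIGHT / lam_obs
        return np.exp(np.interp(np.log(nu_rest), np.log(nu), log_f))

    return s_obs(lam1) / s_obs(lam2)

# e.g. modelled S70/S24 and S160/S70 at z = 0.2 for the average parameters
print(mips_color(0.2, 71.4e-6, 23.7e-6), mips_color(0.2, 155.9e-6, 71.4e-6))
\end{verbatim}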
For the galaxies with $z\simeq0.2$, the average observed infrared color is S70/S24$=14\pm5$. In comparison with the {\it IRAS} population (Soifer et al. 1989; Sanders et al. 2003), the {\it IRAS}-60/{\it IRAS}-25 ratios correspond to a predicted ratio of S70/S24$=14\pm3$, consistent with the measured xFLS infrared colors. For the simple SED model, the average S70/S24 ratio corresponds to an infrared index of $\alpha=2.4$. Using SEDs of Dale \& Helou (2002), values of $\gamma \sim 2$ give the best agreement with the average S70/S24 ratios. Although the majority of the 70$\mu$m sources do not have redshifts, the distribution of the S70/S24 ratios is the same for galaxies with and without redshifts (Fig. 7). The average mid-IR spectral index is $\alpha=2.4\pm0.4$. For a simple power-law representation of the mid-IR SED, the modeled S70/S24 ratio is constant with redshift (e.g., Starburst solid line in Fig. 6). However, galaxies are expected to show significant variations in the S70/S24 ratio as strong PAH emission and mid-IR absorption features (e.g., Armus et al. 2004) are redshifted through the 24$\mu$m band (as seen for Arp 220 in Fig. 6). Using simple modified blackbody SEDs, we calculate S160/S70 as a function of redshift and dust temperature for a constant value of $\alpha=2.4$ (Fig. 8). The average 160$\mu$m/70$\mu$m color temperature for the dust temperature is 30$\pm$5\,K. This temperature is slightly lower than the average temperature of $T_{d} = 38\pm3$\,K (Dunne et al. 2000) and $T_d\simeq 35$--40\,K (Soifer et al. 1989) derived for {\it IRAS} galaxies, but the longer wavelength 160$\mu$m band is more sensitive to regions with cooler dust temperatures than those observed by {\it IRAS}. The blackbody temperature of 30\,K derived here agrees well with the long-wavelength {\it ISO} observations of infrared-bright spiral galaxies (Bendo et al. 2003). Bendo et al. (2003) found that $T_{d}\simeq 30$\,K provided the best fit to the {\it ISO} data for $\beta\sim$1--2, consistent with the results in Figure 8. In the context of the SED models of Dale \& Helou (2002), values of $\gamma \sim 2$--2.5 provide the best agreement with the observed S160/S70 ratios. The distribution of the S160/S70 ratios is measurably different for sources with and without redshifts (Fig. 9). Unlike the S70/S24 ratio which is on average roughly constant with redshift, the S160/S70 ratio is expected to increase with redshift (Fig. 8). Figure~9 shows an excess of 160$\mu$m sources without redshifts having high S160/S70 ratios (Log[S160/S70]$\ga0.8$). These results may suggest that the excess sources are at high redshift ($z\ga 0.5$) and that the current redshift sample of 160$\mu$m sources may be biased toward lower redshifts. \subsection{Infrared Luminosity} For the 70$\mu$m selected sources, we can estimate the average infrared luminosity for the xFLS population. The rest-frame wavelength of the MIPS-70 band corresponds to 60$\mu$m at a redshift of $z\simeq0.2$. The FIR luminosity (42.5--122.5$\mu$m) can be derived from the rest-frame 60$\mu$m and 100$\mu$m flux densities (Helou, Soifer, \& Rowan-Robinson 1985). By using the SED associated with the average model parameters of $T_d\simeq30$\,K and $\alpha\simeq2.4$ for the xFLS sources, we estimate a rest-frame S100/S60 ratio of 2.3. This is consistent with the values of S100/S60$=2\pm1$ observed for the bulk of the {\it IRAS} galaxies (Sanders et al. 2003). 
Based on the rest-frame S100/S60 ratio, the corresponding FIR flux is FIR(W\,m$^{-2})=6.1\times 10^{-14}$\,(S70/Jy) for the observed S70 flux density at $z\simeq0.2$. For galaxies at $z\simeq0.2$, the average value of S70 is 33\,mJy. This corresponds to a FIR luminosity of $L$(FIR)$=6.0\times 10^{10}\,L_{\odot}$. Using the models of Dale et al. (2001) and the estimated S100/S60 ratio, the total infrared luminosity (3--1100$\mu$m) $L$(TIR)$=2.3L$(FIR). Hence, the average total infrared luminosity of the 70$\mu$m-selected xFLS galaxies at $z\simeq 0.2$ is about $L$(TIR)$=1.4\times 10^{11}\,L_{\odot}$. Although the redshift survey for the 70$\mu$m-selected sources is not currently complete, the infrared luminosity function can be estimated at low redshift for bright flux densities. At S70$>50$\,mJy, 72\% (65/90) of sources have redshifts. The majority of sources without redshifts are suspected to be at high redshift given that the observed redshift distribution declines significantly at $z>0.3$ and the models predict that the majority of sources at these flux density levels are at $z>0.3$. Based on the models of Lagache et al. (2004), about 40--45\% of galaxies with S70$>50$\,mJy are predicted to be at $z<0.3$. Assuming $\sim100$\% completeness at $z<0.3$ (i.e., assume all sources with S70$>50$\,mJy currently without redshifts are at $z>0.3$), we find 64$\pm9$\% (58/90) of galaxies with S70$>50$\,mJy and $z<0.3$. If the current xFLS redshift surveys are not complete at $z<0.3$ (with S70$>50$\,mJy), then the models would be even more discrepant with the observations. Hence, the models may under predict the percentage of low redshift 70$\mu$m sources at bright flux densities. There are two potential caveats on this result. First, a small percentage ($<1$\%) of 70$\mu$m sources do not have 24$\mu$m counterparts, and these sources are likely to be at high redshift (e.g., at $z\sim1.5$ where the silicate absorption feature is redshifted into the 24$\mu$m band). However, there is only one 70$\mu$m source without a 24$\mu$m counterpart with S70$>50$\,mJy, so this population, by itself, cannot account for the apparent discrepancy with the models. The second caveat is that we have thrown out 9\% of the 70$\mu$m sources with multiple possible 24$\mu$m matches (Sec. 4.2). The 70$\mu$m sources with multiple candidate 24$\mu$m counterparts tend to be brighter on average and typically have low redshifts. In fact, we find only one such source with $z>0.3$ and with S70$>50$\,mJy (1/70 with redshifts). Additional redshift surveys are needed to confirm the findings here which suggest that the current models slightly over predict the percentage of bright 70$\mu$m sources (S70$>50$\,mJy) at high-redshift ($z>0.3$) and their contribution to the total CIB. To estimate the infrared luminosity function, we use the sample of 58 galaxies with S70$>50$\,mJy and $z<0.3$. We adopt the $1/V_{max}$ method following the calculation of the {\it IRAS} $\nu L_{\nu}(60\mu$m) luminosity function (Soifer et al. 1987). For each source, the maximum volume ($V_{max}$) at which the source would be included in the sample is computed, using $V_{max}(z=0.3)$ as an upper bound for the survey. The 70$\mu$m flux density is converted to a rest-frame 60$\mu$m flux density using a power-law interpolation between the observed 24$\mu$m and 70$\mu$m measurements. The calculated xFLS 60$\mu$m luminosity function ($\rho$) shown in Figure~10 represents the space density of galaxies per Mpc$^3$ per magnitude in luminosity. 
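The bookkeeping of this calculation can be outlined in the following simplified Python sketch (our own function names, a crude trapezoidal distance integral, no K-correction in the $z_{max}$ estimate, and binning per dex rather than per magnitude); it is meant only to illustrate the $1/V_{max}$ procedure, not to reproduce the exact calculation used for Figure~10.
\begin{verbatim}
import numpy as np

H0, OM, OL = 70.0, 0.3, 0.7                    # adopted cosmology
C_KMS, MPC_M, L_SUN = 2.998e5, 3.086e22, 3.83e26

def d_com(z):
    # comoving distance (Mpc) for flat LCDM, simple trapezoidal integration
    zz = np.linspace(0.0, max(z, 1e-6), 1024)
    ez = np.sqrt(OM * (1.0 + zz)**3 + OL)
    return (C_KMS / H0) * np.sum(0.5 * (1/ez[1:] + 1/ez[:-1]) * np.diff(zz))

def nuL60(z, s24_jy, s70_jy):
    # rest-frame nu*L_nu(60um) in L_sun; the 70um flux is K-corrected to
    # rest 60um with a power law f_nu ~ nu^-a fitted between 24 and 70um
    a = np.log(s70_jy / s24_jy) / np.log(71.4 / 23.7)
    d_l = d_com(z) * (1 + z) * MPC_M               # luminosity distance (m)
    lnu_70 = 4 * np.pi * d_l**2 * s70_jy * 1e-26 / (1 + z)
    lnu_60 = lnu_70 * (60.0 * (1 + z) / 71.4) ** a
    return (2.998e8 / 60e-6) * lnu_60 / L_SUN

def lf_1_over_vmax(cat, s70_lim=0.05, z_lim=0.3, area_deg2=3.3):
    # cat: iterable of (z, S24 [Jy], S70 [Jy]); returns log-L bin edges and
    # the space density per Mpc^3 per dex from the 1/Vmax estimator
    sky_frac = area_deg2 * (np.pi / 180.0) ** 2 / (4 * np.pi)
    volume = lambda z: sky_frac * (4.0 / 3.0) * np.pi * d_com(z) ** 3
    bins = np.arange(9.0, 12.6, 0.5)
    phi = np.zeros(len(bins) - 1)
    for z, s24, s70 in cat:
        zgrid = np.linspace(z, z_lim, 200)
        dl = np.array([d_com(zz) * (1 + zz) for zz in zgrid])
        s70_dimmed = s70 * (dl[0] / dl) ** 2       # inverse-square dimming only
        z_max = zgrid[s70_dimmed >= s70_lim][-1]
        i = np.digitize(np.log10(nuL60(z, s24, s70)), bins) - 1
        if 0 <= i < len(phi):
            phi[i] += 1.0 / volume(z_max)
    return bins, phi / 0.5
\end{verbatim}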
Even though the current xFLS sample is a factor of 100 deeper in flux density than the {\it IRAS} 60$\mu$m bright galaxy sample (Soifer et al. 1987; Sanders et al. 2003), the xFLS sample does not yet probe the high luminosity end of the luminosity function (Log($\nu L_{\nu}[60\mu$m$]/\,L_{\odot})>11.5$), given the low redshift range (median redshift of only $z=0.096$) and small area of the current sample. The derived xFLS luminosity function agrees well with the {\it IRAS} 60$\mu$m luminosity function (Soifer et al. 1987). Below Log($\nu L_{\nu}[60\mu$m$]/\,L_{\odot})<10.5$, $\rho \propto (\nu\,L_{\nu})^{-0.8}$ (Soifer et al. 1987), while at higher luminosities $\rho \propto (\nu\,L_{\nu})^{-2.2}$ (Sanders et al. 2003). Additional redshifts for the faint S70$\sim$10\,mJy sources are needed to constrain the evolution of the luminosity function out to $z\sim1$. \subsection{Infrared to Radio Correlation} In the local universe, galaxies over a wide range of luminosity and Hubble types follow the empirical IR/radio correlation (Helou et al. 1985; Condon 1992). Observations of {\it ISO} sources (Gruppioni et al. 2003) indicate that the mid-IR to radio relationship holds out to $z\sim 0.6$, and the data for the SCUBA sources suggest that the FIR to radio relationship may be applicable even at $z\sim 2$--3 (e.g., Chapman et al. 2005). With {\it Spitzer} we can measure the IR/radio relationship out to $z\sim 1$ by direct observations near the peak of the SED, instead of estimating the IR luminosity from the shorter mid-IR or longer sub-mm wavelengths. Appleton et al. (2004) presented the first estimate of the IR/radio relationship based on {\it Spitzer} data. Their results were based on the early analysis of the xFLS data. Here we present updated results using more sensitive 70$\mu$m data. The sample of 427 xFLS-70 sources with redshifts and with only one candidate 24$\mu$m counterpart (Sec.\,4.2) was matched to the $4\sigma$ radio catalog (Condon et al. 2003). Within $3\arcsec$, 325 of these xFLS-70 sources (76\%) have radio counterparts. Figure 11 shows the observed S70/S(20\,cm) ratio as a function of redshift for the matched galaxies. At low redshift, the FIR to radio $q$-parameter is defined as $q\equiv{\rm Log}(FIR/(3.75\times10^{12} {\rm W\,m}^{-2})) - {\rm Log}(S(20{\rm cm})/({\rm W\,m}^{-2}{\rm Hz}^{-1}))=2.3\pm0.2$ (Helou et al. 1985). Based on the typical SED of the xFLS sample of galaxies (Sec.\,4.2), the corresponding predicted $q$ parameter for the 70$\mu$m to radio flux density ratio is $q_{70}\equiv {\rm Log}(S70/S[20{\rm cm}]) =2.09$ at $z\simeq 0.2$. We measure an average value of $q_{70}=2.10\pm0.16$ for xFLS galaxies at $z\simeq 0.2$. By comparison, Appleton et al. (2004) derived a similar value of $q_{70}=2.16\pm0.17$, but unlike the previous results we observe the expected decrease of observed $q_{70}$ as a function of redshift (Fig. 11). Based on the SEDs of local galaxies, we expect the observed $q_{70}$ parameter to decrease with redshift since the average IR spectral index is significantly steeper than the radio spectral index. Assuming an IR spectral index of $\alpha=2.4$ based on the observed average ratio of S70/S24$=14$ and the typical non-thermal radio spectral index of $\alpha=0.8$ ($f_{\nu} \propto \nu^{-\alpha}$, Condon et al. 1992), we expect $q_{70} \propto (1+z)^{-1.6}$. From a least-squares fit to the data, we estimate $q_{70} \propto (1+z)^{-1.4\pm0.6}$. 
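The K-correction argument behind the expected trend can be made explicit with a short sketch: for pure power-law infrared and radio SEDs, the observed S70/S(20\,cm) ratio scales as $(1+z)^{-(\alpha_{\rm IR}-\alpha_{\rm radio})}$, so the observed $q_{70}$ declines logarithmically with redshift. In the illustrative Python snippet below, only the normalization $q_{70}=2.09$ at $z\simeq0.2$ and the two spectral indices are taken from the text; the function name and the sample redshifts are assumptions.
\begin{verbatim}
import numpy as np

def q70_predicted(z, q70_ref=2.09, z_ref=0.2, alpha_ir=2.4, alpha_radio=0.8):
    """Predicted observed q70 = Log10(S70/S20cm) versus redshift for
    power-law SEDs f_nu ~ nu^-alpha, anchored to q70_ref at z_ref."""
    k_corr = ((1.0 + np.asarray(z)) / (1.0 + z_ref)) ** -(alpha_ir - alpha_radio)
    return q70_ref + np.log10(k_corr)

print(q70_predicted([0.2, 0.5, 1.0]))   # roughly 2.09, 1.94, 1.74
\end{verbatim}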
Hence, the observed $q_{70}$ for the xFLS galaxies follows the expected trend out to at least $z\sim 1$ (Fig. 11). \section{Conclusions} The {\it Spitzer} 70$\mu$m and 160$\mu$m observations of the xFLS are presented. For the deeper verification field data, we measure number counts down to 8\,mJy and 50\,mJy ($5\sigma$) in the 70$\mu$m and 160$\mu$m bands, respectively. The observed xFLS counts are consistent with previous measurements (Dole et al. 2004a) and are consistent within the uncertainties with the evolutionary models (Lagache et al. 2004). Based on the models of Lagache et al. (2004), approximately 35\% of the CIB is resolved at 70$\mu$m and 15\% is resolved at 160$\mu$m at the depth of the verification data. The observed fraction of low redshift galaxies at bright 70$\mu$m flux densities is larger than model predictions, and the total counts appear to be systematically slightly lower than predicted. These results may suggest the models overestimate the contribution of high-redshift galaxies at bright 70$\mu$m flux densities to the total CIB. Deeper 70$\mu$m observations are needed to measure the expected turnover in the differential source counts and to provide better constraints on the evolutionary models. The observed xFLS infrared colors S70/S24 and S160/S70 are consistent with the results from the {\it IRAS} and {\it ISO} missions. Modeled SED fits suggest an average 160$\mu$m/70$\mu$m color temperature for the dust of $T_{d}\simeq 30$\,K for the 160$\mu$m-selected sample of galaxies. This temperature is consistent with {\it ISO} observations of spirals (Bendo et al. 2003). The average S70/S24 ratio implies an infrared spectral index of $\alpha\simeq 2.4$, which agrees with expectations from the average {\it IRAS} S60/S25 ratio (Soifer et al. 1989; Sanders et al. 2003). The observed 70$\mu$m infrared-to-radio correlation of the xFLS sources also agrees well with the FIR-to-radio correlation found for local star-forming galaxies (Helou et al. 1985; Condon et al. 1992). We observe a trend of decreasing S70/S(20\,cm) as a function of redshift, consistent with expectations based on the SEDs of local galaxies. Given the lack of sufficient redshift measurements for the high-redshift faint xFLS-Ge sources, we can only derive an infrared luminosity function at low redshift for the brightest 70$\mu$m galaxies. With these data, we calculate a rest-frame 60$\mu$m luminosity function for the xFLS that agrees well with the {\it IRAS} luminosity function (Soifer et al. 1987). In the future, it may be possible to measure the evolution of the luminosity function out to moderate redshifts with the xFLS data when additional redshift surveys of faint 70$\mu$m sources become available. We thank all of our colleagues associated with the {\it Spitzer} mission who have made these observations possible. This work is based on observations made with the {\it Spitzer Space Telescope}, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under NASA contract 1407.
\section{Introduction} \label{sec:Intro} Most of the currently known exoplanets orbit relatively cool, low-mass (FGKM) stars, with $\ensuremath{T_{\rm eff}}<6500$ K. The reason for this is likely observational bias rather than an actual paucity of planets around hotter stars: traditionally, precise radial velocity observations have been the dominant method to find exoplanets, and, more recently, confirm transiting giant planets. Hot stars, however, typically rotate rapidly. Above the Kraft break \citep{Kraft:1967}, at $\ensuremath{T_{\rm eff}}\sim6250$~K, stars no longer have the thick surface convective zones necessary to maintain a strong magnetic dynamo which can efficiently transport angular momentum to the outgoing stellar wind. More massive stars therefore tend to retain their initial rapid rotation throughout their main sequence lifetimes. Typical $\ensuremath{v\sin{I_*}}$ values are in excess of 100 \ensuremath{\rm km\ s^{-1}}\ for the entire B9-F2 spectral type range \citep{Royer:2007}. The rotational broadening of these stars' lines and the paucity of absorption lines from their hot atmospheres makes it difficult or impossible to measure the stellar reflex motion due to even giant planets. For this reason, stars hotter than mid-F have typically been ignored by planet surveys. Planets around these stars were too difficult to find with radial velocities or confirm as transiting planets, and massive stars are too rare to cause a significant number of microlensing events. There are, however, other ways to discover planets around early-type stars. Direct imaging observations are largely insensitive to the stellar properties. Indeed, early-type stars are {\it more} amenable to direct imaging observations than are solar-type stars because they are on average younger and their planets therefore hotter and brighter. Many of the known directly-imaged planets are around such stars \citep[e.g.,][]{Marois:2008,Marois:2010,Lagrange:2010}. Planets can also be found around these stars by observing them after they have left the main sequence. As they evolve the stars cool and expand, slowing their rotation and increasing the number of spectral lines, thus becoming amenable to precise radial velocity observations. Radial velocity surveys have discovered a large number of giant planets around intermediate-mass subgiant and giant stars \citep[e.g.,][]{Johnson:2011,Reffert:2015}, which suggest that the frequency of giant planets around A stars may be higher than that around FGK stars. The issue of whether these ``retired A stars'' are actually intermediate-mass stars, or rather lower-mass interlopers, is, however, controversial \citep[e.g.,][]{Lloyd:2011,Schlaufman:2013,Johnson:2013,Stello:2017}. Despite the success of direct imaging and radial velocity surveys, there are still limitations to these methods as far as probing the overall planetary populations of A and early F stars. Principally, neither method can probe the close-in, short-period population of planets, similar to those that {\it Kepler} has discovered around lower-mass stars. The angular separations between these planets and their host stars are much too small to be resolved with current or future direct imaging facilities, and such short-period planets will have been engulfed and destroyed as the stellar radii expand during the evolution off of the main sequence. Although early-type stars have typically been ignored by transit surveys, efforts to discover planets around these stars have been increasingly successful. 
The first transiting planet to be discovered around a hot, rapidly rotating star was WASP-33b \citep{CollierCameron2010}. It was confirmed using a combination of Doppler tomography (where the perturbation to the rotationally broadened stellar line profile during the transit due to the Rossiter-McLaughlin effect is spectroscopically resolved, showing that the planet candidate orbits the target star and is not a background eclipsing binary), and low-precision radial velocity observations (showing that the transiting object has a mass below the substellar limit). Since then, a growing number of transiting hot Jupiters around rapidly rotating A and early F stars have been discovered and (in most cases) confirmed using Doppler tomography, such as CoRoT-11b \citep{Gandolfi:2010,Gandolfi:2012}, Kepler-13Ab \citep{Szabo:2011,Johnson:2014}, HAT-P-57b \citep{Hartman:2015}, MASCARA-1b \citep{Talens:2017b}, and XO-6b \citep{Crouzet:2017}. The Kilodegree Extremely Little Telescope \citep[KELT;][]{Pepper:2003,Pepper:2007,Pepper:2012} project has been particularly effective in finding hot Jupiters around A and early F stars. Indeed, more than half of the KELT planets discovered to date orbit stars above the Kraft break \citep[][]{Siverd:2012,Pepper:2013,Collins:2014,Bieryla:2015,Stevens:2016,Zhou:2017,McLeod:2017,Temple:2017,Gaudi:2017,Lund:2017,Siverd:2017}, or, in the case of KELT-11, was above the Kraft break when it was on the main sequence \citep{Pepper:2016}. As discussed in \cite{Lund:2017}, this is partially by design and Malmquist bias, and partially by accident. As KELT was designed to target brighter stars ($8<V<11$) than other transit surveys, hot, bright stars are overrepresented in its sample due to the larger volume probed. Additionally, many of the transiting hot Jupiters around cooler stars in this magnitude range had already been discovered by the time KELT had obtained a sufficient quantity of data to find planets. Together, these two forces drove us towards the discovery of planets around hot stars. Another region of parameter space that has not been fully explored is hot Jupiter occurrence as a function of stellar multiplicity. While there are a large number of known hot Jupiters in binary stellar systems--indeed, \cite{Ngo:2016} found a higher occurrence rate of stellar binary companions for hot Jupiter hosts than for field stars--only a handful of hot Jupiters are known in higher-order stellar systems. Four transiting hot Jupiters are known in hierarchical triple systems: KELT-4 \citep{Eastman:2016}, HAT-P-8, WASP-12 \citep{Bechter:2014}, and Kepler-13 \citep{Santerne:2012}, and none in higher-order multiples. In the first three of these the hot Jupiter orbits the primary star, the mass ratios between the primary and secondary/tertiary stars are large, and the companions were resolved with direct imaging. In Kepler-13, the primary and secondary stars are of a similar mass, while the tertiary star is much less massive, and the tertiary was discovered through radial velocity observations of the secondary \citep{Santerne:2012,Johnson:2014}. 
Transit survey follow-up is typically biased against the latter type of system; visual binaries are often excluded from survey target lists, and the presence of multiple lines in a spectrum (especially if some of them are moving) is typically taken as reason to conclude that a planet candidate is a false positive, even if such a system could in fact host a planet \citep[see][for a recent case of a confirmed planet discovered in a binary stellar system with two sets of lines in the spectra]{Siverd:2017}. Planets in hierarchical triple systems are also of interest because their dynamics are richer than those of hot Jupiters in stellar binaries \citep[e.g.,][]{Hamers:2017,Fang:2017}; certain configurations could enhance the efficiency of hot Jupiter formation by high-eccentricity (Kozai-Lidov) migration \citep{Hamers:2017}. Such extreme systems help us test the limits of planet formation and migration. Here we present the discovery of a new transiting hot Jupiter, KELT-21b, confirmed using Doppler tomography. KELT-21 is not only the most rapidly rotating star known to host a transiting giant planet (with $\ensuremath{v\sin{I_*}}=146$ \ensuremath{\rm km\ s^{-1}}), but also is likely one of the few planet host stars known to be part of a hierarchical triple system. \section{Discovery and Follow-Up Observations} \label{sec:Obs} \subsection{Discovery} \label{sec:Discovery} The star HD 332124 was observed by the KELT-North telescope in KELT North Field 11 (KN11; center coordinates RA=$19^h27^m00\fs0$, Dec=+31\arcdeg39\arcmin56\farcs2 J2000.0). This is the same field that also yielded our other two most rapidly rotating planet hosts, KELT-9b \citep{Gaudi:2017} and KELT-20b/MASCARA-2b \citep{Lund:2017,Talens:2017}, as well as KELT-8b \citep{Fulton:2015}. In the analysis of 6783 observations of HD 332124 obtained between 2007 May 29 and 2014 Nov 25 UT, we identified a candidate transit signal with a period of 3.612782 days and a depth of $\sim1\%$; the target was given the KELT candidate ID of KC11C039077. Our data reduction and analysis and candidate selection procedures are described in \cite{Siverd:2012}. We show the KELT photometry of KELT-21 in Fig.~\ref{fig:DiscoveryLC}, and list the literature parameters of the system in Table~\ref{tab:LitProps}. \begin{figure} \centering \includegraphics[width=1.1\columnwidth, angle=0, trim = 0 5.2in 0 2.6in]{f1.pdf} \caption{\footnotesize Discovery light curve for KELT-21b based on 6783 observations from the KELT-North telescope. The data have been phase-folded on the preliminary value for the period, 3.612782 days. The black points depict the data, while the red points are the data binned on a 5-minute time scale.} \label{fig:DiscoveryLC} \end{figure} \begin{table} \footnotesize \centering \caption{System Properties of KELT-21 } \begin{tabular}{llcc} \hline \hline Other identifiers\dotfill & \multicolumn{3}{l}{HD 332124} \\ & \multicolumn{3}{l}{TYC 2676-1274-1} \\ & \multicolumn{3}{l}{2MASS J20191200+3234517} \\ & \multicolumn{3}{l}{TIC 203189770} \\ \hline Parameter & Description & Value & Ref. 
\\ \hline $\alpha_{\rm J2000}$\dotfill &Right Ascension (RA)\dotfill & $20^h19^m12\fs004$ & 1 \\ $\delta_{\rm J2000}$\dotfill &Declination (Dec)\dotfill & +32\arcdeg34\arcmin51\farcs77 & 1 \\ \\ $B_{\rm T}$\dotfill &Tycho $B_{\rm T}$ mag.\dotfill & 10.76 $\pm$ 0.04 & 1 \\ $V_{\rm T}$\dotfill &Tycho $V_{\rm T}$ mag.\dotfill & 10.48 $\pm$ 0.04 & 1 \\ \\ $J$\dotfill & 2MASS $J$ mag.\dotfill & 10.149 $\pm$ 0.022 & 2 \\ $H$\dotfill & 2MASS $H$ mag.\dotfill & 10.121 $\pm$ 0.021 & 2 \\ $K_{\rm S}$\dotfill & 2MASS $K_{\rm S}$ mag.\dotfill & 10.090 $\pm$ 0.014 & 2 \\ \\ \textit{W1}\dotfill & \textit{WISE1} mag.\dotfill & $10.064 \pm 0.022$ & 3 \\ \textit{W2}\dotfill & \textit{WISE2} mag.\dotfill & $10.106 \pm 0.021$ & 3 \\ \textit{W3}\dotfill & \textit{WISE3} mag.\dotfill & $10.252 \pm 0.069$ & 3 \\ \textit{W4}\dotfill & \textit{WISE4} mag.\dotfill & $>9.158$ & 3 \\ \\ $\mu_{\alpha}$\dotfill & Gaia DR1 proper motion\dotfill & 1.933 $\pm$ 0.791 & 4 \\ & \hspace{3pt} in RA (mas yr$^{-1}$) & & \\ $\mu_{\delta}$\dotfill & Gaia DR1 proper motion\dotfill & -1.317 $\pm$ 0.804 & 4 \\ & \hspace{3pt} in DEC (mas yr$^{-1}$) & & \\ \\ $RV^{*}$\dotfill & Systemic radial \hspace{9pt}\dotfill & $-13.0 \pm 1.0$ & \S\ref{sec:Spectra} \\ & \hspace{3pt} velocity (\ensuremath{\rm km\ s^{-1}}) & & \\ $\ensuremath{v\sin{I_*}}$\dotfill & Stellar rotational \hspace{7pt}\dotfill & $146.03 \pm 0.48$ & \S\ref{sec:GlobalFit} \\ & \hspace{3pt} velocity (\ensuremath{\rm km\ s^{-1}}) & & \\ Spec. Type\dotfill & Spectral Type\dotfill & A8V & \S\ref{sec:GlobalFit} \\ Age\dotfill & Age (Gyr)\dotfill & $1.6 \pm 0.1$ & \S\ref{sec:Evol} \\ $d$\dotfill & Distance (pc)\dotfill & $415 \pm 49$ & 4 \\ $\Pi$\dotfill & Parallax (mas) \dotfill & $2.41 \pm 0.28$ & 4 \\ $A_V$\dotfill & Visual extinction (mag) & $0.00_{-0.00}^{+0.34}$ & \S\ref{sec:SED} \\ $U^{\dagger}$\dotfill & Space motion (\ensuremath{\rm km\ s^{-1}})\dotfill & $9.2 \pm 1.9$ & \S\ref{sec:UVW} \\ $V$\dotfill & Space motion (\ensuremath{\rm km\ s^{-1}})\dotfill & $-0.5 \pm 1.1$ & \S\ref{sec:UVW} \\ $W$\dotfill & Space motion (\ensuremath{\rm km\ s^{-1}})\dotfill & $8.9 \pm 1.9$ & \S\ref{sec:UVW} \\ \hline \hline \end{tabular} \begin{flushleft} \footnotesize{ \textbf{\textsc{NOTES:}} $^{*}RV$ is defined on the IAU system. $^{\dagger}U$ is positive in the direction of the Galactic Center. References are: $^1$\citet{Hog:2000}, $^2$\citet{Cutri:2003}, $^3$\citet{Cutri:2014},$^4$\citet{Brown:2016} Gaia DR1 (http://gea.esac.esa.int/archive/), corrected for the position-dependent systematic offset found by \cite{Stassun:2016}. } \end{flushleft} \label{tab:LitProps} \end{table} \subsection{Photometric Follow-up from KELT-FUN} \label{sec:Photom} After identification of the candidate transit signal in the KELT-North photometry, we obtained follow-up transit photometry of this target using the KELT Follow-Up Network (KFUN; Collins et al.\ in prep). The purpose of these observations was to first confirm the existence of a transit event for this star, and then check that the transit shape was consistent with that of a planet and that the transit was achromatic. We describe the facilities and observations obtained briefly in the following sections; KELT-21 passed all of these tests, and so we then pursued spectroscopic observations to confirm the planetary nature of the candidate (\S\ref{sec:Spectra}). 
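As an aside, the distance and its uncertainty quoted in Table~\ref{tab:LitProps} follow, to first order, from inverting the offset-corrected Gaia DR1 parallax; a minimal sketch of that conversion (hypothetical helper name, simple linearized error propagation that ignores the asymmetry of the $1/\Pi$ transformation) is:
\begin{verbatim}
def parallax_to_distance(plx_mas, plx_err_mas):
    """Distance [pc] and first-order uncertainty from a parallax [mas]:
    d = 1000 / plx and sigma_d ~ d * sigma_plx / plx."""
    d = 1000.0 / plx_mas
    return d, d * plx_err_mas / plx_mas

print(parallax_to_distance(2.41, 0.28))   # ~ (415 pc, 48 pc)
\end{verbatim}
This reproduces the tabulated $415 \pm 49$~pc to within rounding.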
We scheduled observations using a modified version of the TAPIR observation planning software \citep{Jensen:2013}, and all observations were reduced using the AstroImageJ package\footnote{http://www.astro.louisville.edu/software/astroimagej/} \citep{Collins:2013,Collins:2017}. We observed portions of seven transits of KELT-21b between 2014 Aug 7 and 2017 May 29 UT; two of these were observed simultaneously in two different filters, for a total of nine transit light curves. Details of these are listed in Table~\ref{tab:Photom}, and the data are shown in Fig.~\ref{fig:All_light curve}. \subsubsection{Salerno University Observatory} We observed the transit of 2014 Aug 28 at the Salerno University Observatory, located in Fisciano, Italy. At the time of the observations it hosted a Celestron C14 telescope (0.35~m aperture) on a German equatorial mount, with an SBIG ST2000XM CCD and standard Bessell filters; the observations were conducted in the $I$ band. The field of view was 11'x14' with a plate scale of 0\farcs59 pix$^{-1}$. \subsubsection{Roberto Zambelli's Observatory} We observed the ingress of the transit of 2014 Oct 25 at Zambelli's Robotic Observatory (ZRO) in Sarzana, Italy. We used a Meade 12'' (0.3048 m) f/10 telescope with a focal reducer giving a final value of f/6.3. The images were captured using an SBIG ST8XME CCD. The CCD has $1530\times1020$ pixels and a $23.'52\times15.'68$ field of view, giving a plate scale of 0\farcs92 pix$^{-1}$. The observations were obtained in the $V$ band with an exposure length of 200 seconds. Strong winds halted the observations about an hour before mid-transit. \subsubsection{Canela's Robotic Observatory} We observed a transit of KELT-21b on 2016 Jul 18 from Canela's Robotic Observatory (CROW) in Portalegre, Portugal. Located at an altitude of 600 m, the observatory is equipped with a Meade SCT 30 cm telescope with f/5.56 and an SBIG ST-10XME CCD camera with a KAF3200ME detector, giving a plate scale of 0\farcs82 pix$^{-1}$. We observed with a Johnson $V$ filter manufactured by Custom Scientific. \subsubsection{Kutztown University Observatory} We observed KELT-21 from the Kutztown University Observatory (KUO) for seven consecutive hours on 2016 Aug 24 in alternating $V$ and $I$ filters, covering the full $\sim4$-hour transit of KELT-21b and an additional 3 hours of pre-ingress and post-egress baseline. A total of 332 images were collected (166 in each band) with exposure times of 60 s each. We used KUO's 0.6 m f/8 Ritchey-Chr\'{e}tien optical telescope and an SBIG STXL-6303E CCD camera. The detector's array of $3072\times2048$ 9\,$\mu$m pixels provides a $19.'5\times13.'0$ field of view, and with $2\times2$ binning, the effective plate scale is $0\farcs76$ pix$^{-1}$. The CCD was kept at an operating temperature of $-25^{\circ}$ C. KUO is located on the campus of Kutztown University in Kutztown, Pennsylvania, USA. \subsubsection{Westminster College Observatory} We observed a full transit of KELT-21b from the Westminster College Observatory (WCO), Pennsylvania, USA, on 2016 Aug 24 UT in the $r'$ filter. The observations employed a 0.35 m $f/11$ Celestron C14 Schmidt-Cassegrain telescope and an SBIG STL-6303E CCD camera with a $\sim$ 3k $\times$ 2k array of 9 $\mu$m pixels, yielding a $24\arcmin \times 16\arcmin$ field of view and $1\farcs44$ pix$^{-1}$ plate scale at $3 \times 3$ pixel binning. 
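The plate scales quoted for these facilities follow from the detector pixel size and the effective focal length; a schematic Python sketch (with the WCO setup above as the worked example) is given below. The helper name is hypothetical, and the only assumptions are the small-angle constant and that the quoted focal ratio applies at the detector.
\begin{verbatim}
def plate_scale_arcsec_per_pix(pixel_size_m, aperture_m, f_ratio, binning=1):
    """Approximate plate scale: 206265'' x pixel size / focal length,
    with focal length = aperture x f-ratio, scaled by the binning factor."""
    focal_length_m = aperture_m * f_ratio
    return 206265.0 * pixel_size_m / focal_length_m * binning

# WCO: 0.35 m f/11 telescope, 9 micron pixels, 3x3 binning
print(plate_scale_arcsec_per_pix(9e-6, 0.35, 11, binning=3))  # ~1.45 ''/pix
\end{verbatim}
This reproduces the quoted $1\farcs44$ pix$^{-1}$ to within rounding.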
\subsubsection{University of Louisville Manner Telescope} We observed a full transit of KELT-21b using the University of Louisville Manner Telescope (ULMT) located at the Mt.\ Lemmon summit of Steward Observatory, Arizona, USA, on 2017 May 29 UT. Exposures were taken in alternating $g$ and $i$ filters yielding pseudo-simultaneous observations in the two bands. The observations employed a 0.6 m f/8 RC Optical Systems Ritchey-Chr\'{e}tien telescope and an SBIG STX-16803 CCD camera with a 4k$\times$4k array of 9\,$\mu$m pixels, yielding a $26\farcm6 \times 26\farcm6$ field of view and $0\farcs39$ pix$^{-1}$ plate scale. These photometric observations were obtained simultaneously with the spectroscopic observations with LBT/PEPSI and TRES described below. \begin{figure} \vspace{.0in} \includegraphics[width=1\linewidth,height=5in]{f2a.pdf} \vspace{-.25in} \includegraphics[width=1\linewidth, trim = 0 0.9in 0 5.5in]{f2b.pdf} \caption{(Top) Transit light curves of KELT-21b from the KELT Follow-Up Network. The red line represents the best fit model from the global fit in that photometric band. Each light curve is offset vertically by an arbitrary amount for clarity. (Bottom) All follow-up transits combined into one light curve (grey) and a 5 minute binned light curve (black). The red line is the combined and binned models for each transit. We emphasize that the combined light curve is only for display purposes; the individual transit light curves were used in our analysis. } \label{fig:All_light curve} \end{figure} \begin{table*} \footnotesize \centering \setlength\tabcolsep{1.5pt} \caption{Photometric follow-up observations of KELT-21\MakeLowercase{b}} \begin{tabular}{lcccccccc} \hline \hline Observatory & Location & Aperture & Plate scale& Date & Filter & Exposure & Detrending parameters$^a$ \\ & & (m) & ($\rm \arcsec~pix^{-1}$)& (UT) & & Time (s) & & \\ \hline SUO & Fisciano Salerno, Italy & 0.3556 & 0.59 & 2014 Aug 28 & $I$ & 60 & Airmass \\%full ZRO & Sarzana, Italy & 0.3048 & 0.92 & 2014 Oct 25 & $V$ & 200 & Airmass\\ CROW & Portalegre, Portugal & 0.3048 & 0.82 & 2016 Jul 18 & $V$ & 120 & Airmass, Meridian Flip\\ KUO & PA, USA & 0.6096 & 0.76 & 2016 Aug 24 & $V$ & 60 & Airmass\\ KUO & PA, USA & 0.6096 & 0.76 & 2016 Aug 24 & $I$ & 60 & Airmass\\ WCO & PA, USA & 0.35 & 1.44 & 2016 Aug 24 & $r'$ & 15 & Airmass\\ ULMT & AZ, USA & 0.6096 & 0.39 & 2017 May 29 & $g'$ & 50 & Airmass\\ ULMT & AZ, USA & 0.6096 & 0.39 & 2017 May 29 & $i'$ & 100 & Airmass\\ \hline \hline \end{tabular} \begin{flushleft} \footnotesize{ \textbf{\textsc{NOTES:}} $^a$Photometric parameters allowed to vary in global fits and described in the text. Abbreviations: SUO: Salerno University Observatory; ZRO: Zambelli's Robotic Observatory; CROW: Canela's Robotic Observatory; KUO: Kutztown Observatory, Kutztown University; WCO: Westminster College Observatory; ULMT: University of Louisville-Manner Telescope.} \end{flushleft} \label{tab:Photom} \end{table*} \subsection{Spectroscopic Follow-up} \label{sec:Spectra} As the photometric follow-up campaign confirmed the existence of transits for KELT-21 and showed that the transits were both achromatic and with a shape appropriate for a transiting planet, we began spectroscopic follow-up observations. 
These consisted of three steps: first, reconnaissance spectroscopy to measure the stellar parameters; second, low-precision radial velocity (RV) observations to exclude large radial velocity variations that would have indicated that the transiting object was an M dwarf or a brown dwarf; and, finally, Doppler tomographic observations to confirm that the transiting object transits the star KELT-21. In the following sections we describe these observations. \begin{figure} \includegraphics[width=1\linewidth, trim = 0 1.5in 0 5.5in]{f3a.pdf} \vspace{-.3in} \includegraphics[width=1\linewidth, trim = 0 1.5in 0 6.0in]{f3b.pdf} \vspace{.1in} \caption{(Top) The TS23 (black) and TRES (blue) RV measurements of KELT-21 with the best-fit model shown in red. The systemic radial velocity has been subtracted from each dataset. The residuals to the fit are shown below. (Bottom) The RV measurements phase-folded to the global fit-determined ephemeris. The predicted radial velocity Rossiter-McLaughlin signal is shown at 0.25 phase. The residuals are shown below.} \label{fig:RVs} \end{figure} \subsubsection{TRES Spectra} \label{sec:TRES} We obtained 20 spectra of KELT-21 with the Tillinghast Reflector \'Echelle Spectrograph \citep[TRES; e.g.,][]{FureszThesis:2008}, on the 1.5\,m telescope at Fred Lawrence Whipple Observatory, Mt.\ Hopkins, Arizona, USA. TRES is a fibre fed \'echelle spectrograph with a spectral coverage of 3900--9100\,\AA{} over 51 \'echelle orders, with a spectral resolving power of $R=44,000$. The spectra are recorded by a $2048\times4608$ CCD. We obtained 11 of the observations to constrain the mass of the planet, over the full range of out-of-transit phases, although we excluded two of these which happened to be obtained in-transit from the analysis. We also obtained nine spectra during the transit on 2017 May 29 UT to measure the Doppler tomographic signal of the transiting planet. These latter data are described further in \S\ref{sec:SpinOrbit}. Most of the observations had an exposure time of 1800 seconds, but a few had lengths of as short as 240 seconds or as long as 3000 seconds. The best TRES spectra have a per-pixel signal-to-noise ratio of $\sim65$ near 5200 \AA. We reduced these data using version 2.55 of the TRES pipeline, a custom pipeline written in IDL by L.\ Buchhave. We analyzed the TRES spectra using two different methods, one in order to measure the absolute radial velocity of KELT-21, and another in order to measure the relative RVs of the star to constrain the planetary mass. In order to determine the absolute velocity of KELT-21, we used only the six strongest out-of-transit TRES spectra. We cross-correlated these with a rotating model template spectrum with parameters similar to that of KELT-21 (see \S\S\ref{sec:SpecPars} and \ref{sec:GlobalFit}). The weighted average and standard deviation derived from these six spectra \citep[following][]{Buchave:2010,Quinn:2012} is $-12.4 \pm 1.0$ \ensuremath{\rm km\ s^{-1}}\, on the TRES native system; converting to the IAU velocity system, this gives an absolute RV of $-13.0 \pm 1.0$ \ensuremath{\rm km\ s^{-1}}\, for KELT-21. The relative radial velocities were obtained by fitting the line broadening profiles of each spectrum. 
The line broadening profiles are derived via a least-squares deconvolution approach, as per \citet{Donati1997}, against a non-rotating synthetic template derived using the SPECTRUM\footnote{http://www1.appstate.edu/dept/physics/spectrum/spectrum.html} code with the ATLAS9 atmosphere models \citep{Castelli:2004}, over the wavelength range of 3900--6250\,\AA{}. The broadening kernel is fitted with a function accounting for rotation, macroturbulence, and instrumental broadening, and shifted in velocity to match the observation. The uncertainties were determined from the order-to-order velocity scatter, divided by the square root of the number of orders used. We show these data in Fig.~\ref{fig:RVs}, and list the RV measurements in Table~\ref{tab:Spectra}. \subsubsection{TS23 Spectra} We obtained 12 spectra of KELT-21 covering a wide range of orbital phases using the 2.7 m Harlan J. Smith Telescope at McDonald Observatory, Texas, USA, and its Robert G. Tull coud\'e spectrograph \citep{Tull:1995}. We conducted these observations between 2016 Oct 25 and 2017 Jun 19 UT. Observing conditions ranged from clear to thin or scattered clouds, with seeing typically between 1\farcs0 and 1\farcs5 but occasionally as poor as 2\farcs0. We used the spectrograph in its TS23 configuration, with a 1\farcs2$\times$8\farcs2 slit providing a resolving power of $R=60,000$ and coverage from 3570 \AA\ to 10200 \AA; the spectral coverage is complete below 5691~\AA. A $2048\times2048$ Tektronix CCD captures 58 spectral orders. Exposure times ranged from 120 to 1200 seconds, giving per-pixel signal-to-noise ratios as high as 120 (more typically $\sim60$) near 5500 \AA. We reduced the spectra using standard IRAF tasks, and measured relative radial velocities from the spectra using the same methodology as we used for the TRES data. The resulting measurements are shown in Fig.~\ref{fig:RVs} and tabulated in Table~\ref{tab:Spectra}. \subsubsection{PEPSI Spectra} We obtained high-resolution spectra of KELT-21 with $R=\lambda/\Delta\lambda=120,000$ with the Potsdam \'Echelle Polarimetric and Spectroscopic Instrument \citep[PEPSI;][]{Strassmeier:2015} spectrograph at the 2$\times$8.4\,m Large Binocular Telescope (LBT) on Mt.\ Graham, Arizona, USA. PEPSI is a fiber-fed white-pupil \'echelle spectrograph with two arms (blue and red optimized). We employed two of the six cross dispersers (CD\,II and CD\,IV), which covered the wavelength ranges 4265--4800\,\AA\ and 5441--6278\,\AA\ simultaneously. The instrument is stabilized in a pressure- and temperature-controlled chamber and is fed by three pairs of octagonal fibers per LBT unit telescope \citep[for overall performance characterization we refer to][]{Strassmeier:2017}. For the present observations, we used the 200\,$\mu$m core fibers and the five-slice image slicer to achieve the spectral resolution of 120,000. The spectra are sampled with 4.2 pixels per resolution element. The fiber core projection on the sky is 1\farcs5. The spectrum is recorded by two 10.3k$\times$10.3k STA1600LN CCDs with 9 $\mu$m pixels. The observations occurred on 2017 May 29 UT and lasted for 4 hours, concluding when we had to close for sunrise. We set the integration time to 1200~s. CCD read-out and overhead summed to 90~s, enabling a sequence of 13 back-to-back spectra. The target altitude was very low at the beginning of the observing sequence, with an airmass of $\sim2.1$. The sky was clear and seeing at the start of the sequence was 1\farcs0 but deteriorated to 1\farcs3 at the end of the sequence.
Peak signal-to-noise ratios were 140 and 100 per pixel in CD\,IV and CD\,II, respectively. Data reduction was done with the software package SDS4PEPSI (``Spectroscopic Data Systems for PEPSI''), based on \cite{Ilyin:2000} and described in more detail in \cite{Strassmeier:2017}. It relies on adaptive selection of parameters by using statistical inference and robust estimators. The standard reduction steps include bias overscan detection and subtraction, scattered light extraction from the inter-order space and subtraction, definition of \'echelle orders, optimal extraction of spectral orders, wavelength calibration, and a self-consistent continuum fit to the full 2D image of extracted orders. Our Doppler tomographic analysis of these spectra is described in \S\ref{sec:SpinOrbit}. \begin{table} \centering \caption{Radial Velocities of KELT-21} \label{tab:Spectra} \begin{tabular}{lrrrr} \hline \hline \multicolumn{1}{l}{\bjdtdb} & \multicolumn{1}{c}{RV} & \multicolumn{1}{c}{$\sigma_{\rm RV}$} & \multicolumn{1}{c}{Phase} & \multicolumn{1}{c}{Instrument}\\ & \multicolumn{1}{c}{(\ensuremath{\rm km\ s^{-1}})} &\multicolumn{1}{c}{(\ensuremath{\rm km\ s^{-1}})} & & \\ \hline 2456902.88217 & -8.88 & 1.31 & 0.20 & TRES \\ 2457642.70515$^{\dagger}$ & -9.87 & 0.50 & 0.98 & TRES \\ 2457642.73542$^{\dagger}$ & -8.96 & 0.94 & 0.99 & TRES \\ 2457902.80358$^{\dagger}$ & -9.67 & 0.71 & 0.98 & TRES \\ 2457902.82614$^{\dagger}$ & -8.61 & 1.36 & 0.99 & TRES \\ 2457902.84897$^{\dagger}$ & -10.26 & 2.28 & 0.99 & TRES \\ 2457902.87114$^{\dagger}$ & -10.80 & 1.00 & 1.00 & TRES \\ 2457902.89397$^{\dagger}$ & -11.51 & 0.62 & 0.00 & TRES \\ 2457902.91618$^{\dagger}$ & -11.03 & 0.62 & 0.01 & TRES \\ 2457902.93846$^{\dagger}$ & -12.27 & 0.91 & 0.02 & TRES \\ 2457902.96073$^{\dagger}$ & -11.17 & 0.61 & 0.02 & TRES \\ 2457903.94587 & -11.25 & 0.75 & 0.30 & TRES \\ 2457905.81193 & -11.11 & 0.75 & 0.81 & TRES \\ 2457910.86090 & -10.84 & 0.58 & 0.21 & TRES \\ 2457913.82452 & -10.52 & 0.53 & 0.03 & TRES \\ 2457914.86709 & -9.77 & 0.53 & 0.32 & TRES \\ 2457915.95596 & -11.01 & 0.50 & 0.62 & TRES \\ 2457916.89286 & -10.86 & 0.50 & 0.88 & TRES \\ 2457917.95251 & -11.23 & 1.21 & 0.17 & TRES \\ 2457919.91707 & -10.92 & 0.52 & 0.72 & TRES \\ 2457686.57439 & -9.75 & 1.62 & 0.13 & TS23 \\ 2457686.71975 & -10.86 & 0.93 & 0.17 & TS23 \\ 2457687.58795 & -9.83 & 0.81 & 0.41 & TS23 \\ 2457732.56483 & -9.48 & 1.60 & 0.86 & TS23 \\ 2457734.59627 & -10.45 & 0.68 & 0.42 & TS23 \\ 2457735.58946 & -10.71 & 1.23 & 0.69 & TS23 \\ 2457736.57659 & -15.99 & 1.96 & 0.97 & TS23 \\ 2457919.93596 & -10.01 & 2.78 & 0.72 & TS23 \\ 2457920.92942 & -10.71 & 1.48 & 1.00 & TS23 \\ 2457921.91076 & -10.59 & 1.27 & 0.27 & TS23 \\ 2457922.93250 & -9.49 & 1.89 & 0.55 & TS23 \\ 2457923.91227 & -10.44 & 1.86 & 0.82 & TS23 \\ \hline \end{tabular} \begin{flushleft} \footnotesize{ \textbf{\textsc{NOTES:}} See Table~\ref{tbl:KELT-21b} for the RV zero point values. We have assumed a minimum uncertainty of 0.5 \ensuremath{\rm km\ s^{-1}}\ on the radial velocities measurements. $^{\dagger}$: denotes that the observation takes place during transit and was excluded from our RV analysis (although the spectra from BJD 2457902 were used in our Doppler tomographic analysis: \S\ref{sec:SpinOrbit}).} \end{flushleft} \end{table} \subsection{High Contrast AO Imaging} \label{sec:AO} We obtained adaptive optics (AO) images of KELT-21 on 2017 June 12 UT using the NIRC2 instrument on Keck II, Maunakea, Hawaii, USA. The observing conditions were excellent. 
KELT-21 was observed at an airmass of $\sec{z}=1.07$ with seeing estimated to be $\sim$0\farcs3. The narrow camera mode was used to provide a plate scale of $9.942 \pm 0.05$ mas pix$^{-1}$. A three-point dither pattern was implemented to avoid the noisy quadrant of the NIRC2 detector. A total of thirty frames were recorded using position angle mode in the $K_s$ band; the sequence resulted in a total integration time of thirty seconds. No off-axis sources were identified in raw frames, so we did not obtain images in complementary filters. Later inspection of processed frames, however, revealed two faint companions to the south of KELT-21 (top panel of Fig.~\ref{fig:AO}). Standard AO imaging reduction methods were used to flat-field the images, correct for bad pixels, and subtract the sky background \citep{Crepp:2012}. Upon noticing the companions from the NIRC2 automated pipeline, we performed several experiments to confirm their nature, including studying how the signal-to-noise ratio improved with increasing number of processed frames from different detector quadrants. We also carefully assessed image registration internal to the pipeline to ensure that virtual copies of the primary star or other effects were not creating multiple off-axis signals. Finally, a contrast curve was generated to serve as a self-consistency check for the detected companions' flux levels and limit the existence of other sources near KELT-21 (bottom panel of Fig~\ref{fig:AO}). In order to construct the contrast curve, we divide the image into a grid with a cell size set to the FWHM of the PSF, and calculate the RMS over each 3$\times$3 set of cells. The contrast curve denotes the 5$\sigma$ median-combined RMS value azimuthally averaged over the $3\times3$ cells centered at a given radial separation from KELT-21. The companions are more than a magnitude above the contrast curve, indicating that their detection is secure, and we do not detect any other sources of comparable or greater brightness near KELT-21. Although well-separated from the primary star by just over an arcsecond, the two sources themselves have comparable flux and are only marginally spatially resolved. Utilizing brute-force modeling of the companions as a combined source using aperture photometry, we find a combined magnitude difference of $\Delta K_s = 6.39 \pm 0.06$ compared to the primary star. We then employed Markov Chain Monte Carlo (MCMC) methods to compute the relative brightness and astrometric positions of the closely separated binary pair. To disentangle the contribution of each source, we use the technique described in \citet{Bechter:2014} to self-consistently fit the AO data. Specifically, we model the core and halo of each source using a modified Moffat function that includes nuisance parameters such as the residual sky background pedestal. After numerically identifying the multi-dimensional parameter space global minimum, we ran 1 million iterations to generate posterior distributions that quantify the relative position and brightness of the putative companions and their uncertainties. Results are shown in Table~\ref{tab:astr_phot}. We will consider the candidate companions in more detail in \S\ref{sec:Triple}. \begin{figure} \centering \includegraphics[width=1.05\columnwidth, trim = 0 0.75in 3.5in 1.0in]{f4a.pdf}\\ \vspace{-36pt} \includegraphics[width=1.05\columnwidth]{f4b.pdf} \caption{\footnotesize Top: Keck NIRC2 AO image of the KELT-21 system, showing the primary star and the two faint companions B and C. 
Middle: zoom-in on the companions B and C. Bottom: contrast curve for these observations.} \label{fig:AO} \end{figure} \begin{table} \centering \caption{Properties of the Likely Companions KELT-21 B and C} \label{tab:astr_phot} \begin{tabular}{l l c} \hline \hline \multicolumn{1}{l}{Parameter} & \multicolumn{1}{l}{Description (Units)} & \multicolumn{1}{c}{Value} \\ \hline \multicolumn{3}{l}{KELT-21 B} \\ $\rho_{AB}$ & Separation (mas) & $1261 \pm 12$ \\ $PA_{AB}$ & Position Angle ($^{\circ}$, east of north) & $185.3 \pm 0.1$ \\ $\Delta K_s$ & Contrast (mag) & $7.00 \pm 0.06$ \\ $a_{\perp,AB}$ & Projected Separation (AU) & $523 \pm 62$ \\ $M_B$ & Estimated Mass (\ensuremath{\,M_\Sun}) & $0.13_{-0.01}^{+0.02}$ \\ \hline \multicolumn{3}{l}{KELT-21 C} \\ $\rho_{AC}$ & Separation (mas) & $1214 \pm 14$ \\ $PA_{AC}$ & Position Angle ($^{\circ}$, east of north) & $186.6 \pm 0.1$ \\ $\Delta K_s$ & Contrast (mag) & $7.30 \pm 0.06$ \\ $a_{\perp,AC}$ & Projected Separation (AU) & $504 \pm 60$ \\ $M_C$ & Estimated Mass (\ensuremath{\,M_\Sun}) & $0.11 \pm 0.01$ \\ \hline \multicolumn{3}{l}{KELT-21 BC} \\ $\rho_{BC}$ & Separation (mas) & $55 \pm 16$ \\ $PA_{BC}$ & Position Angle ($^{\circ}$, east of north) & $324 \pm 10$ \\ $\Delta K_s$ & Contrast (mag) & $0.30 \pm 0.08$ \\ $a_{\perp,BC}$ & Projected Separation (AU) & $22.9 \pm 7.1$ \\ \hline \hline \end{tabular} \begin{flushleft} \footnotesize{Note: all parameters are quoted at the epoch of the observations, 2017 Jun 12 UT. Subscripts AB, AC, and BC refer to mutual parameters between stars A (the planet host star KELT-21) and B, A and C, and B and C, respectively. The physical parameters of KELT-21 B and C are calculated assuming that they are physically associated with KELT-21; see \S\ref{sec:Triple}. The quoted uncertainties on these parameters are calculated given the formal uncertainties on the parameters of KELT-21 and the photometric measurements but neglect sources of systematics such as uncertainties in stellar isochrones.} \end{flushleft} \end{table} \section{Host Star Characterization} \label{sec:Star} \subsection{SED Analysis} \label{sec:SED} We followed the approach of previous KELT discoveries and fitted the broadband spectral energy distribution (SED) of KELT-21 using a \citet{Kurucz:1992} atmosphere model. We adopted the broadband fluxes from the available all-sky photometric catalogs, in particular $B_T$ and $V_T$ from {\it Tycho-2}, $JHK_S$ from {\it 2MASS}, and WISE1--3 from {\it AllWISE}. We also adopted the {\it Gaia\/} DR1 parallax, $\pi = 2.41 \pm 0.28$~mas, corrected for the systematic offset determined by \citet{Stassun:2016}. These data are listed in Table~\ref{tab:LitProps}. The free parameters of the fit were \ensuremath{T_{\rm eff}}\ and the extinction, $A_V$, limited by the maximum line-of-sight $A_V$ from the \citet{Schlegel:1998} dust maps. Since \ensuremath{\log{g_*}}\ and \feh\ are of secondary importance to the SED, we assumed a main-sequence \ensuremath{\log{g_*}}\ = 4 and solar metallicity. We neglected the presence of the candidate companions discussed in \S\ref{sec:AO} as they are much fainter than KELT-21 (combined $\Delta K_S=6.39 \pm 0.06$) and so should have a negligible effect on the SED. The best-fit SED, has $\chi_\nu^2 = 4.2$ for 5 degrees of freedom. The best-fit parameters are \ensuremath{T_{\rm eff}}\ = $8000_{-250}^{+1000}$~K and $A_V = 0.00_{-0.00}^{+0.34}$. 
Integrating the SED gives an extinction-corrected bolometric flux at Earth of $F_{\rm bol} = 1.61_{-0.13}^{+0.78} \times 10^{-9}$ ergs s$^{-1}$ cm$^{-2}$. \subsection{Spectroscopic Analysis} \label{sec:SpecPars} We determined the properties of KELT-21 using The Payne, a newly developed approach for determining stellar parameters from simultaneously fitting the observed spectrum and spectral energy distribution self-consistently with {\it ab initio} models. The basic framework of The Payne algorithm is given in \cite{Ting:2016} and \cite{Rix:2016}, and full details of the code will be given in Cargile et al.\ (in prep). Using The Payne, we model five of the individual TRES spectra of KELT-21. In the inference, we fit a wavelength region of $\sim$200 \AA\ around the Mg \textsc{i} (Mg b) triplet at $\sim$5200 \AA, and all available photometry from the Tycho-2, 2MASS, and AllWISE catalogs (Table~\ref{tab:LitProps}); we did not use the AllWISE $W4$ band as there is only a limit available in $W4$. For each TRES spectrum, we infer the most probable \ensuremath{T_{\rm eff}}, \ensuremath{\log{g_*}}, \feh, [$\alpha$/Fe], radial velocity, intrinsic stellar broadening, instrumental broadening profile, stellar radius, distance, and extinction in the $V$ band (A$_{V}$). We apply priors on the known instrumental profile for the TRES instrument ($R=44,000$), the Gaia DR1 parallax distance ($\Pi =2.41 \pm 0.28$ mas), and the surface gravity inferred from the planetary transit model (\ensuremath{\log{g_*}}$=$4.16$\pm$0.1; see \S\ref{sec:GlobalFit}). In order to determine the overall best-fit stellar parameters for KELT-21, we take the median and standard deviation of the results from the modeling of the five individual spectra. We note that the standard deviation of the five TRES spectra is very similar to the measurement errors on the most probable fit for the individual spectra, suggesting that it is a good representation of the formal measurement uncertainties from The Payne. We show the best-fit SED in Fig.~\ref{fig:SED}. \begin{figure} \includegraphics[width=1.1\columnwidth]{f5.pdf} \caption{\footnotesize Spectral energy distribution of KELT-21. The purple line shows the best-fit model from The Payne and 50 random draws from the posteriors (only visible as a finite width to the line), and the black points show the literature photometry used. The error bars on each point represent the width of each photometric band.} \label{fig:SED} \end{figure} \begin{figure} \includegraphics[width=1.1\columnwidth]{f6.pdf} \caption{\footnotesize A section of one TRES spectrum of KELT-21, showing the data in black, the best-fit model from The Payne (with \feh=$-0.410 \pm 0.032$, [$\alpha$/Fe]=$0.145 \pm 0.053$) in red, and a model with the same stellar parameters except \feh=0.0, [$\alpha$/Fe]=0.0 in blue. It is visually apparent that the low-metallicity, $\alpha$-enhanced model is a better fit to the data--the solar-metallicity model generally overpredicts the line depths--and so we conclude that the low metallicity of KELT-21 is robust despite the rapid stellar rotation.} \label{fig:FeHcomp} \end{figure} Our analysis with The Payne produced stellar parameters of \ensuremath{T_{\rm eff}}=$7587 \pm 82$ K, \feh=$-0.410 \pm 0.032$, [$\alpha$/Fe]=$0.145 \pm 0.053$, and \ensuremath{v\sin{I_*}}$=144.3 \pm 1.2$ \ensuremath{\rm km\ s^{-1}}.
The uncertainties on our metallicity and $\alpha$-enhancement measurements are small, which is somewhat surprising given the difficulty of spectral analysis for hot, rapidly rotating stars like KELT-21. Nonetheless, we argue the sub-solar metallicity is robust. In Fig.~\ref{fig:FeHcomp} we show part of one of our TRES spectra, along with the best-fit model and a model with solar metallicity. The solar-metallicity model consistently overpredicts the depths of the spectral lines. Additionally, solutions with \feh=0, [$\alpha$/Fe]=0 have posterior probabilities of $\sim10^{-8}-10^{-10}$ (as do solutions with either \feh=0 or [$\alpha$/Fe]=0 individually) in most of our fits with The Payne. We thus conclude that KELT-21 is indeed metal-poor, which is unusual for relatively young ($<2$ Gyr), hot stars; we explore the implications of this measurement in more detail in \S\ref{sec:metals}. Additionally, the \ensuremath{T_{\rm eff}}\ value found here is 1.6$\sigma$ discrepant from that found in our SED analysis in \S\ref{sec:SED}. This is likely due to the fact that our SED-only analysis assumed \feh$=0$, which is not the case; our analysis with The Payne provides an equally good fit to the literature photometry, and so we proceed using these stellar parameters. The Payne value of \ensuremath{v\sin{I_*}} is mostly consistent ($1.3\sigma$ difference) with that measured independently from the PEPSI spectra (\S\ref{sec:GlobalFit}). A \ensuremath{T_{\rm eff}}\ of 7587 K corresponds to a spectral type of A8, per the \ensuremath{T_{\rm eff}}-spectral type calibration of \cite{Pecaut:2013}, while the surface gravity value of \ensuremath{\log{g_*}}=$4.173_{-0.015}^{+0.016}$ from the global fit (\S\ref{sec:GlobalFit}) indicates that KELT-21 is on the main sequence. We therefore find a spectral type of A8V for KELT-21; this is one of only a handful of A stars known to host a transiting planet. \subsection{Evolutionary Analysis} \label{sec:Evol} In Fig.~\ref{fig:hrd} we show KELT-21 in the \ensuremath{T_{\rm eff}}-\ensuremath{\log{g_*}} plane, i.e., the Kiel diagram, along with a Yonsei-Yale \citep[YY;][]{Demarque:2004} evolutionary track for the mass and \feh\ of KELT-21. The evolutionary track implies an age of $\sim1.6 \pm 0.1$ Gyr for KELT-21. KELT-21 will start evolving off the main sequence within the next few hundred million years, and within $\sim1$ Gyr it will begin ascending the red giant branch. \begin{figure} \includegraphics[width=1.0\columnwidth, trim = 1.0in 1.0in 1.0in 1.0in]{f7.pdf} \caption{The location of KELT-21 in the Kiel diagram. The best-fit \ensuremath{T_{\rm eff}}\ and \ensuremath{\log{g_*}}\ from the global model fit is shown as the red point, while the gray swath shows a Yonsei-Yale evolutionary track for a star with the best-fit values of \ensuremath{M_{*}}\ and \feh; the locations on the best-fit model corresponding to several values of the stellar age are shown as the blue points, with ages quoted in Gyr.} \label{fig:hrd} \end{figure} \subsection{UVW Space Motion and Galactic Location} \label{sec:UVW} In order to put the KELT-21 system into a broader Galactic context, we calculated its three-dimensional Galactic space motion ($U,V,W$). We obtained the proper motion and parallax of KELT-21 from Gaia DR1 \citep{Brown:2016}, and corrected the parallax for systematic biases per the formulation of \cite{Stassun:2016}. We also used the absolute radial velocity of the system on the IAU scale as determined from our TRES spectra (\S\ref{sec:TRES}). 
All of these parameters are listed in Table~\ref{tab:LitProps}. We calculated the Galactic space motion using the IDL routine \texttt{GAL\_UVW}\footnote{https://idlastro.gsfc.nasa.gov/ftp/pro/astro/gal\_uvw.pro}, which is based upon the methodology of \cite{Johnson:1987}, and we used the value of the Solar velocity with respect to the local standard of rest found by \cite{Coskunoglu:2011}. We find that KELT-21 has a space motion of ($U$,$V$,$W$) = ($9.2 \pm 1.9$, $-0.5 \pm 1.1$, $8.9 \pm 1.9$) km s$^{-1}$; we use the coordinate system such that the Galactic center is in the direction of positive $U$. Using the criteria of \cite{Bensby:2003}, this corresponds to a 99.4\% probability that KELT-21 is a member of the Milky Way's thin disk. This is expected given the relatively high mass, and therefore relatively low age, of KELT-21. We note that the \cite{Bensby:2003} criteria are not strictly applicable to KELT-21, as they were derived for the solar neighborhood and KELT-21 is located at a distance of 0.4 kpc. KELT-21 is located close to the solar circle, however (as $l=71.4814^{\circ}$, $b=-1.9865^{\circ}$), suggesting that the \cite{Bensby:2003} criteria are likely not unreasonable. Indeed, our integration of the orbit of KELT-21 (\S\ref{sec:metals}) confirms its thin-disk kinematics. Given that KELT-21 is located very close to the Galactic plane, significant extinction and reddening might be expected. The Pan-STARRS 1 dust map\footnote{http://argonaut.skymaps.info/} \citep{Green:2015} predicts a reddening of $E(B-V)=0.09^{+0.02}_{-0.03}$ at the distance and position of KELT-21, although this is close enough that there are not enough stars for the dust map to be fully reliable. Using a standard value of $R_V=3.1$, this corresponds to an expected extinction of $A_V=0.28_{-0.09}^{+0.06}$, which is consistent to within $1\sigma$ with the value of $A_V=0.00_{-0.00}^{+0.34}$ found in our SED analysis (\S\ref{sec:SED}). The PEPSI spectra also show significant interstellar absorption with a complex velocity structure in the Na \textsc{i} D lines, as is expected for a star in this direction and distance. \section{Planet Characterization} \label{sec:Planet} \subsection{Doppler tomographic characterization} \label{sec:SpinOrbit} We analyzed the PEPSI and in-transit TRES data using Doppler tomographic methodology. When a planet transits a rotating star, the obscured regions of the stellar disk do not contribute to the formation of the rotationally-broadened stellar absorption line profile. The subtracted light results in a perturbation to the line profile at velocities corresponding to the radial velocities of the obscured surface elements. This is known as the Rossiter-McLaughlin effect \citep{Rossiter:1924,McLaughlin:1924}. For slowly-rotating stars, this is typically interpreted as an anomalous radial velocity shift during the transit due to the changing line centroids \citep[e.g.,][]{Triaud:2010}. For sufficiently rapid rotation and/or sufficiently high spectral resolution, however, we can spectroscopically resolve the rotationally-broadened line profile and the line profile perturbation. This is Doppler tomography \citep[e.g.,][]{CollierCameron2010,Johnson:2014}. Detection of the line profile perturbation confirms that the planet candidate does indeed transit the target rapidly rotating star. 
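To make this picture concrete, the toy Python sketch below (which is not the line-profile model used in our fits) represents the transit signature as a Gaussian ``bump'' removed from the rotationally broadened profile, centered at the sub-planet velocity $\ensuremath{v\sin{I_*}}\,x_p$, where $x_p$ is the planet's projected position along the stellar equator in units of the stellar radius. The depth (roughly the $\approx1\%$ transit depth) and the local profile width are illustrative placeholders, as is the function name.
\begin{verbatim}
import numpy as np

def transit_line_perturbation(v_grid, x_p, vsini=146.0, rp_rstar=0.1,
                              local_width=5.0):
    """Toy Doppler-tomographic transit signature: the light blocked by the
    planet removes a Gaussian bump from the broadened line profile, centered
    at the sub-planet velocity vsini * x_p (km/s).  rp_rstar and local_width
    are illustrative placeholders, not fitted values."""
    v_sub = vsini * x_p            # sub-planet velocity [km/s]
    depth = rp_rstar ** 2          # fraction of starlight blocked
    return depth * np.exp(-0.5 * ((v_grid - v_sub) / local_width) ** 2)

# perturbation near mid-transit (x_p = 0) and near egress (x_p = +0.9)
v = np.linspace(-160.0, 160.0, 641)
mid = transit_line_perturbation(v, 0.0)
egress = transit_line_perturbation(v, 0.9)
\end{verbatim}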
Furthermore, the motion of the line profile perturbation across the line profile during the transit is diagnostic of the spin-orbit misalignment $\lambda$, which is the angle between the stellar spin and planetary orbital angular momentum vectors projected onto the plane of the sky. Our procedure to extract the time series line profiles from the PEPSI data is essentially the same as used by \cite{Johnson:2014,Johnson:2015,Johnson:2017}. In short, we use least squares deconvolution \citep{Donati1997} to extract the average line profile from each spectrum. We use a line mask with initial guesses for the line depths taken from a Vienna Atomic Line Database \citep[VALD;][]{Ryabchikova:2015} stellar model with the stellar parameters of KELT-21; best-fit line depths are then found using the stacked PEPSI spectra before the final line profiles are extracted. One modification from earlier methodology is necessary because of the narrower bandwidth of PEPSI with respect to the spectrographs used by \cite{Johnson:2014,Johnson:2015,Johnson:2017}. A significant fraction of the PEPSI blue arm spectrum is occupied by the H$\gamma$ line and its wings. Rather than entirely excluding this region of the spectrum, as was done in the vicinity of strong lines by \cite{Johnson:2014,Johnson:2015,Johnson:2017}, we instead subtract off a model of the H$\gamma$ line profile from \cite{Kurucz:1979}\footnote{As tabulated at http://kurucz.harvard.edu/grids/gridm01/bm01k2.datcd}. The PEPSI red arm spectrum has very few lines strong enough to use for the Doppler tomographic analysis; we only utilize a few lines blueward of 5680 \AA, while the Na \textsc{i} D lines are not usable as they suffer from strong interstellar contamination. We show the extracted time series line profile residuals, displaying the planetary transit, in the top left panel of Fig.~\ref{fig:DT}. \begin{figure*} \centering \includegraphics[scale=0.52, trim = 1.0in 4.95in 0.5in 4.25in]{f8a.pdf} \includegraphics[scale=0.52, trim = 1.0in 4.95in 1.0in 4.25in]{f8b.pdf} \\ \includegraphics[scale=0.52, trim = 1.0in 4.25in 0.5in 4.25in]{f8c.pdf} \includegraphics[scale=0.52, trim = 1.0in 4.25in 1.0in 4.25in]{f8d.pdf} \\ \includegraphics[scale=0.52, trim = 1.0in 5.25in 0.5in 4.25in]{f8e.pdf} \includegraphics[scale=0.52, trim = 1.0in 5.25in 1.0in 4.25in]{f8f.pdf} \\ \caption{\footnotesize The Doppler tomographic data for KELT-21. The PEPSI observations are shown in the left column, and the TRES observations in the right column. The top row shows the data, the middle row the best-fit model of the line profile perturbation due to the transit, and the bottom row the residuals after subtraction of the best-fit model. In all panels time increases from bottom to top, and each colorscale row shows the deviation of the line profile at that time from the average line profile. The transit is the bright streak moving from lower left to upper right. Vertical lines mark the center of the line profile at $v=0$ and the edges at $v=\pm\ensuremath{v\sin{I_*}}$, a horizontal line shows the time of mid-transit, and the four small crosses depict the times of first through fourth contacts. The data have been binned down by a factor of 4 in the velocity axis as compared to the raw data for display purposes to better show the transit; the full-resolution data were used in the global fits. All panels showing data use the same color-scale range for better comparison; all pixels outside this range have been set to these extreme values. 
} \label{fig:DT} \end{figure*} For the TRES Doppler tomographic observations, we calculate a line broadening kernel from each transit spectrum as per the least-squares deconvolution process described in Section~\ref{sec:TRES}. To detect the tomographic shadow of the transiting planet, we subtract the out-of-transit broadening kernel from the in-transit kernels \citep{CollierCameron2010,Zhou:2016}. The residuals, showing the transit signal, are shown in the upper right panel of Fig.~\ref{fig:DT}. By inspection it is apparent that the planetary orbit is prograde and well-aligned, as the line profile perturbation moves from the blueshifted to the redshifted limb during the course of the transit, and spans nearly the full velocity range of $\pm\ensuremath{v\sin{I_*}}$. Our global fit (\S\ref{sec:GlobalFit}) confirms this qualitative assessment: we obtain $\lambda=-5.6_{-1.9}^{+1.7 \circ}$. The implications of this measurement are discussed in \S\ref{sec:lambda} and \S\ref{sec:compimplications}. \subsection{EXOFAST Global Fit} \label{sec:GlobalFit} To measure the system parameters for KELT-21, we simultaneously fit all photometric and spectroscopic (including Doppler tomographic) data using a heavily modified version of EXOFAST \citep{Eastman:2013}. To determine the radius and mass of KELT-21, we use the YY stellar evolutionary tracks \citep{Demarque:2004} or the Torres empirical relations \citep{Torres:2010}. We conduct two separate global fits (YY and Torres) with the eccentricity of the planet's orbit set to zero. We assume a circular orbit because our radial velocity measurements are unable to measure the planetary mass, let alone provide meaningful constraints on the orbital eccentricity, and because hot Jupiters typically have circular or nearly circular orbits due to strong tidal damping. For a detailed description of the global modeling process, see \citet{Siverd:2012}. We use the \ensuremath{T_{\rm eff}}\ of $7587 \pm 82$ K and \feh\ of $-0.410 \pm 0.032$ determined from the combined SED and spectroscopic fit (\S\ref{sec:SpecPars}) as priors. From an analysis of the KELT-North light curve, we add a prior on the period and transit center time. We also include priors of \ensuremath{v\sin{I_*}}$=146.0 \pm 0.5$ \ensuremath{\rm km\ s^{-1}}\ and of the width of the Gaussian non-rotating line profile, $5.2 \pm 0.8$ \ensuremath{\rm km\ s^{-1}}, derived from a preliminary fit to the PEPSI line profiles (\S\ref{sec:SpinOrbit}) using the line profile model described in \cite{Johnson:2014}. After the global fit (both YY and Torres), we measure an independent ephemeris from a linear fit to the transit center times determined for each follow-up light curve. We then rerun both fits using this new $T_c$ and period and their uncertainties as priors to obtain the final results. We do not include the KELT-North light curve in any of the global fits due to difficult-to-quantify blending. We also place a prior on \ensuremath{R_{*}}\ of 1.53$\pm$0.43 \ensuremath{\,R_\Sun}\ using the {\it Gaia} parallax (see Table \ref{tab:LitProps}) and the measured bolometric flux from our SED analysis (\S\ref{sec:SED}). We adopt the YY fit for the discussion of the KELT-21 system. See Tables \ref{tbl:KELT-21b} and \ref{tbl:KELT-21b_part2} for the results of both global fits. We neglect any contribution of the candidate companions described in \S\ref{sec:AO} to the photometric data or the spectra.
Based upon the $\Delta K_S$ contrast values from the AO data and assuming physical association of the companions with KELT-21 (see \S\ref{sec:Triple}), we estimate flux ratios between the exoplanet host star and the combined light of the companions of $1.5\times10^{-4}$ in the $V$ band and $3.3\times10^{-4}$ in the $I_C$ band. Not accounting for this flux would cause a systematic underestimate in the transit depth much less than the uncertainty due to photometric noise, and so we can safely neglect this contaminating flux. We also note that this implies that the companions are too faint to cause the transit signal if one of them were to be an eclipsing binary (a possibility which is also excluded by our detection of the Doppler tomographic transit signal). \begin{table*} \scriptsize \centering \setlength\tabcolsep{1.5pt} \caption{Median values and 68\% confidence interval for the physical and orbital parameters of the KELT-21 system} \label{tbl:KELT-21b} \begin{tabular}{lccccc} \hline \hline Parameter & Description (Units) & \textbf{Adopted Value} & Value \\ & & \textbf{(YY circular)} & (Torres circular) \\ \hline Stellar Parameters & & & \\ ~~~$M_{*}$\dotfill &Mass (\ensuremath{\,M_\Sun})\dotfill & $1.458_{-0.028}^{+0.029}$&$1.526_{-0.067}^{+0.070}$\\ ~~~$R_{*}$\dotfill &Radius (\ensuremath{\,R_\Sun})\dotfill & $1.638\pm0.034$&$1.663\pm0.041$\\ ~~~$L_{*}$\dotfill &Luminosity (\ensuremath{\,L_\Sun})\dotfill & $8.03_{-0.53}^{+0.54}$&$8.28_{-0.58}^{+0.60}$\\ ~~~$\rho_*$\dotfill &Density (cgs)\dotfill & $0.468_{-0.024}^{+0.026}$&$0.468_{-0.024}^{+0.026}$\\ ~~~$\log{g_*}$\dotfill &Surface gravity (cgs)\dotfill & $4.173_{-0.014}^{+0.015}$&$4.180\pm0.016$\\ ~~~$\ensuremath{T_{\rm eff}}$\dotfill &Effective temperature (K)\dotfill & $7598_{-84}^{+81}$&$7600_{-84}^{+81}$\\ ~~~$\feh$\dotfill &Metallicity\dotfill & $-0.405_{-0.033}^{+0.032}$&$-0.405\pm0.032$\\ ~~~$v\sin{I_*}$\dotfill &Rotational velocity (\ensuremath{\rm km\ s^{-1}})\dotfill & $146.03\pm0.48$&$146.03_{-0.50}^{+0.49}$\\ ~~~$\lambda$\dotfill &Spin-orbit alignment (degrees)\dotfill & $-5.6_{-1.9}^{+1.7}$&$-5.6_{-1.9}^{+1.7}$\\ ~~~$NR Vel. 
W.$\dotfill &Non-rotating line width (\ensuremath{\rm km\ s^{-1}})\dotfill & $5.17\pm0.48$&$5.09\pm0.74$\\ \hline Planet Parameters & & & \\ ~~~$P$\dotfill &Period (days)\dotfill & $3.6127647\pm0.0000033$&$3.6127647\pm0.0000033$\\ ~~~$a$\dotfill &Semi-major axis (AU)\dotfill & $0.05224_{-0.00034}^{+0.00035}$&$0.05304_{-0.00079}^{+0.00080}$\\ ~~~$M_{P}$\dotfill &Mass (\ensuremath{\,M_{\rm J}})\dotfill & $\color{red}(<3.91)$&$\color{red}(<4.07)$\\ ~~~$R_{P}$\dotfill &Radius (\ensuremath{\,R_{\rm J}})\dotfill & $1.586_{-0.040}^{+0.039}$&$1.610\pm0.045$\\ ~~~$\rho_{P}$\dotfill &Density (cgs)\dotfill & $\color{red}(<1.24)$&$\color{red}(<1.23)$\\ ~~~$\log{g_{P}}$\dotfill &Surface gravity\dotfill & $\color{red}(<3.59)$&$\color{red}(<3.59)$\\ ~~~$T_{eq}$\dotfill &Equilibrium temperature (K)\dotfill & $2051_{-30}^{+29}$&$2051\pm29$\\ ~~~$\Theta$\dotfill &Safronov number\dotfill & $0.0048_{-0.0039}^{+0.025}$&$0.0050_{-0.0041}^{+0.026}$\\ ~~~$\langle F \rangle$\dotfill &Incident flux (\ensuremath{\rm 10^9 erg~s^{-1} cm^{-2}})\dotfill & $4.01\pm0.23$&$4.02_{-0.22}^{+0.23}$\\ \hline Radial Velocity Parameters & & & \\ ~~~$T_C$\dotfill &Time of inferior conjunction (\bjdtdb)\dotfill & $2457295.93434_{-0.00042}^{+0.00041}$&$2457295.93435\pm0.00042$\\ ~~~$K$\dotfill &RV semi-amplitude (m/s)\dotfill & $\color{red}(<399.6)$&$\color{red}(<406.3)$\\ ~~~$M_P\sin{i}$\dotfill &Minimum mass (\ensuremath{\,M_{\rm J}})\dotfill & $\color{red}(<3.91)$&$\color{red}(<4.07)$\\ ~~~$M_{P}/M_{*}$\dotfill &Mass ratio\dotfill & $\color{red}(<0.0025)$&$\color{red}(<0.0026)$\\ ~~~$u$\dotfill &RM linear limb darkening\dotfill & $0.5413_{-0.0033}^{+0.0058}$&$0.5412_{-0.0032}^{+0.0054}$\\ ~~~$\gamma_{McDonald}$\dotfill &\ensuremath{\rm km\ s^{-1}}\dotfill & $-10.50\pm0.36$&$-10.51_{-0.36}^{+0.37}$\\ ~~~$\gamma_{TRES}$\dotfill &\ensuremath{\rm km\ s^{-1}}\dotfill & $-10.72_{-0.18}^{+0.17}$&$-10.71\pm0.18$\\ \hline \multicolumn{4}{l}{Linear Ephemeris from Follow-up Transits} \\ ~~~$P_{Trans}$\dotfill &Period (days)\dotfill & 3.6127628 $\pm$ 0.0000038 &---\\ ~~~$T_0$\dotfill &Linear ephemeris from transits (\bjdtdb)\dotfill & 2457382.640727 $\pm$ 0.00041 &---\\ \hline \hline \hline \end{tabular} \begin{flushleft} \footnotesize \textbf{\textsc{NOTES}} \\ \vspace{.1in} \footnotesize 3$\sigma$ limits are reported for KELT-21b's mass and parameters dependent on the planetary mass, and are shown in red. \footnotesize The gamma velocity reported here uses an arbitrary zero point for the multi-order relative velocities. Uncertainties are based strictly on formal uncertaintes on the input data and do not take into account systematic effects. The quoted stellar parameters are for the planet host star KELT-21. 
\end{flushleft} \end{table*} \begin{table*} \scriptsize \centering \setlength\tabcolsep{1.5pt} \caption{Median values and 68\% confidence intervals for the physical and orbital parameters for the KELT-21 System} \label{tbl:KELT-21b_part2} \begin{tabular}{lccccc} \hline \hline Parameter & Description (Units) & \textbf{Adopted Value} & Value \\ & & \textbf{(YY circular)} & (Torres circular) \\ \hline \hline Primary Transit & & & \\ ~~~$R_{P}/R_{*}$\dotfill &Radius of the planet in stellar radii\dotfill & $0.09952_{-0.00073}^{+0.00071}$&$0.09951_{-0.00071}^{+0.00070}$\\ ~~~$a/R_*$\dotfill &Semi-major axis in stellar radii\dotfill & $6.86_{-0.12}^{+0.13}$&$6.86_{-0.12}^{+0.13}$\\ ~~~$i$\dotfill &Inclination (degrees)\dotfill & $86.46_{-0.34}^{+0.38}$&$86.46_{-0.34}^{+0.38}$\\ ~~~$b$\dotfill &Impact parameter\dotfill & $0.423_{-0.039}^{+0.033}$&$0.423_{-0.039}^{+0.033}$\\ ~~~$\delta$\dotfill &Transit depth\dotfill & $0.00990\pm0.00014$&$0.00990\pm0.00014$\\ ~~~$T_{FWHM}$\dotfill &FWHM duration (days)\dotfill & $0.15242_{-0.00061}^{+0.00062}$&$0.15241_{-0.00063}^{+0.00062}$\\ ~~~$\tau$\dotfill &Ingress/egress duration (days)\dotfill & $0.01864\pm0.00076$&$0.01863_{-0.00075}^{+0.00076}$\\ ~~~$T_{14}$\dotfill &Total duration (days)\dotfill & $0.17106_{-0.00092}^{+0.00091}$&$0.17104_{-0.00091}^{+0.00093}$\\ ~~~$P_{T}$\dotfill &A priori non-grazing transit probability\dotfill & $0.1313_{-0.0023}^{+0.0022}$&$0.1313\pm0.0023$\\ ~~~$P_{T,G}$\dotfill &A priori transit probability\dotfill & $0.1603_{-0.0030}^{+0.0028}$&$0.1603\pm0.0029$\\ ~~~$u_{1I}$\dotfill &Linear Limb-darkening\dotfill & $0.1486_{-0.0070}^{+0.012}$&$0.1487_{-0.0069}^{+0.011}$\\ ~~~$u_{2I}$\dotfill &Quadratic Limb-darkening\dotfill & $0.3067_{-0.012}^{+0.0099}$&$0.3065_{-0.012}^{+0.0098}$\\ ~~~$u_{1Sloang}$\dotfill &Linear Limb-darkening\dotfill & $0.3475_{-0.0040}^{+0.0068}$&$0.3472_{-0.0039}^{+0.0063}$\\ ~~~$u_{2Sloang}$\dotfill &Quadratic Limb-darkening\dotfill & $0.3442_{-0.0023}^{+0.0014}$&$0.3444_{-0.0021}^{+0.0013}$\\ ~~~$u_{1Sloani}$\dotfill &Linear Limb-darkening\dotfill & $0.1662_{-0.0078}^{+0.013}$&$0.1663_{-0.0077}^{+0.012}$\\ ~~~$u_{2Sloani}$\dotfill &Quadratic Limb-darkening\dotfill & $0.3123_{-0.012}^{+0.0100}$&$0.3122_{-0.012}^{+0.0099}$\\ ~~~$u_{1Sloanr}$\dotfill &Linear Limb-darkening\dotfill & $0.2277_{-0.0076}^{+0.012}$&$0.2277_{-0.0076}^{+0.011}$\\ ~~~$u_{2Sloanr}$\dotfill &Quadratic Limb-darkening\dotfill & $0.3370_{-0.0100}^{+0.0079}$&$0.3369_{-0.0095}^{+0.0079}$\\ ~~~$u_{1V}$\dotfill &Linear Limb-darkening\dotfill & $0.2901_{-0.0072}^{+0.011}$&$0.2900_{-0.0071}^{+0.011}$\\ ~~~$u_{2V}$\dotfill &Quadratic Limb-darkening\dotfill & $0.3365_{-0.0074}^{+0.0053}$&$0.3365_{-0.0070}^{+0.0053}$\\ \hline Secondary Eclipse & & &\\ ~~~$T_{S}$\dotfill &Predicted time of eclipse (\bjdtdb)\dotfill & $2457294.12796_{-0.00042}^{+0.00041}$&$2457294.12796\pm0.00042$\\ \hline \end{tabular} \begin{flushleft} \footnotesize \textbf{\textsc{NOTES}} \\ \vspace{.1in} \footnotesize Uncertainties are based strictly on formal uncertaintes on the input data and do not take into account systematic effects. \end{flushleft} \end{table*} \subsection{Transit Timing Variation Analysis} \label{sec:TTVs} Using the fiducial global model determined transit center times (see Table \ref{tab:TTVs}), we searched for transit timing variations in the KELT-21 system. We ensure that all follow-up lightcurves are using BJD$_{\rm TDB}$ time stamps \citep{Eastman:2010}. 
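For reference, converting a geocentric UTC time stamp to BJD$_{\rm TDB}$ requires only a barycentric light-travel-time correction and a change of time scale; a minimal Python sketch using \texttt{astropy} is shown below. The target coordinates, observing site, and time stamp are placeholders for illustration, not the values used in our reductions.
\begin{verbatim}
from astropy.time import Time
from astropy.coordinates import SkyCoord, EarthLocation
import astropy.units as u

# Placeholder target coordinates and observing site (illustrative only).
target = SkyCoord(ra=301.0*u.deg, dec=32.0*u.deg)
site = EarthLocation.from_geodetic(lon=-111.6*u.deg, lat=31.96*u.deg,
                                   height=2096.0*u.m)

# A JD_UTC time stamp from a follow-up light curve (illustrative value).
t = Time(2457624.6967, format='jd', scale='utc', location=site)

# Barycentric light-travel-time correction, then switch the time scale to TDB.
ltt = t.light_travel_time(target, kind='barycentric')
bjd_tdb = t.tdb + ltt
print(bjd_tdb.jd)
\end{verbatim}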
Additionally, all follow-up members synchronize their telescope control computers to a standard clock prior to observing. This is typically done periodically throughout an observing night. We perform a linear fit to the determined transit mid times, obtaining a linear ephemeris of T$_0$ = 2457382.640727 $\pm$ 0.00041 (BJD$_{\rm TDB}$) and P = 3.6127628 $\pm$ 0.0000038 days, with $\chi^2$ = 27.2 and 6 degrees of freedom. Although several of the data points lie more than $1\sigma$ from the zero $O-C$ line, two of the greatest outliers are the observations from SUO (large scatter) and ZRO (ingress only) which we would expect to have lower timing precision. Furthermore, the WCO observations are of the same transit as that observed by KUO, which show no deviations from the linear ephemeris. These deviations are likely due to the systematics inherent to ground-based transit timing observations \citep{Carter:2009}. We thus find no conclusive evidence for any astrophysical TTVs in our data and adopt this as the best ephemeris for predicting future transit times. \begin{table} \centering \caption{Transit times from KELT-21\MakeLowercase{b} Photometric Observations.} \label{tab:TTVs} \begin{tabular}{r@{\hspace{12pt}} l r r r c} \hline \hline \multicolumn{1}{c}{Epoch} & \multicolumn{1}{c}{$T_\textrm{C}$} & \multicolumn{1}{l}{$\sigma_{T_\textrm{C}}$} & \multicolumn{1}{c}{O-C} & \multicolumn{1}{c}{O-C} & Telescope \\ & \multicolumn{1}{c}{(\bjdtdb)} & \multicolumn{1}{c}{(s)} & \multicolumn{1}{c}{(s)} & \multicolumn{1}{c}{($\sigma_{T_\textrm{C}}$)} & \\ \hline -134 & 2456898.527802 & 169 & -233.74 & -1.38 & SUO\\ -118 & 2456956.337374 & 91 & 229.94 & 2.50 & ZRO\\ 57 & 2457588.567597 & 67 & -52.87 & -0.79 & CROW\\ 67 & 2457624.696694 & 69 & 74.02 & 1.07 & KUO\\ 67 & 2457624.695694 & 72 & -12.38 & -0.17 & KUO\\ 67 & 2457624.693194 & 60 & -228.38 & -3.80 & WCO\\ 144 & 2457902.879452 & 67 & 75.71 & 1.13 & ULMT\\ 144 & 2457902.879181 & 45 & 52.30 & 1.16 & ULMT\\ \hline \hline \end{tabular} \begin{flushleft} \footnotesize{Epochs are given in orbital periods relative to the value of the inferior conjunction time from the global fit.} \end{flushleft} \end{table} \begin{figure} \includegraphics[width=1\linewidth,trim = 0.25in 0.2in 5.0in 9.0in]{f9.pdf} \caption{The transit time residuals for KELT-21b using the inferior conjunction time from the global fit to define the epoch. The data are listed in Table \ref{tab:TTVs}.} \label{fig:TTVs} \end{figure} \subsection{Tidal Evolution and Insolation History} \label{sec:Insolation} Using the POET code \citep{Penev:2014}, we followed the past and future tidal orbital evolution of the KELT-21 system under the constant phase lag (constant tidal quality factor) assumption. We incorporated the evolution of the stellar radius and luminosity, and followed the transfer of angular momentum from the star to the orbit. Note that in this case, the minimum equatorial velocity of the star (i.e., \ensuremath{v\sin{I_*}}) implies that the star is spinning super-synchronously; given the stellar radius and \ensuremath{v\sin{I_*}}\ found in \S\ref{sec:GlobalFit}, $P_{\mathrm{rot}}<0.57$ days, compared to $P=3.61$ days for the planet. Thus tidal dissipation causes the planet to move outward and the star to spin down. Calculations were performed with three different assumptions for the modified tidal quality factor of KELT-21: $Q_*' = 10^{6.03}$, $10^7$, and $10^8$ to demonstrate the range of plausible evolutions. 
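To illustrate the constant-phase-lag prescription, the sketch below integrates one common leading-order expression for the change of the semi-major axis due to tides raised on the star by the planet, assuming a circular orbit and a super-synchronously spinning star. POET additionally evolves the stellar radius, luminosity, and spin, which this sketch omits, so the output is illustrative only.
\begin{verbatim}
import numpy as np

G = 6.674e-11                                  # m^3 kg^-1 s^-2
Msun, Rsun, Mjup, AU = 1.989e30, 6.957e8, 1.898e27, 1.496e11
yr = 3.156e7                                   # s

Mstar, Rstar = 1.458*Msun, 1.638*Rsun          # global-fit values
Mp = 3.91*Mjup                                 # 3-sigma upper limit on the planet mass
Qstar = 10**6.03                               # modified tidal quality factor

def dadt(a):
    # One common constant-Q expression for tides raised on the star
    # (circular orbit); the + sign applies because KELT-21 spins
    # super-synchronously, so the planet is pushed outward.
    n = np.sqrt(G*Mstar/a**3)                  # orbital mean motion
    return +4.5*(Mp/Mstar)*(Rstar/a)**5*n*a/Qstar

a, dt = 0.05224*AU, 1.0e5*yr                   # start at the current separation
for _ in range(10000):                         # forward Euler over 1 Gyr
    a += dadt(a)*dt
print("semi-major axis after 1 Gyr: %.4f AU" % (a/AU))
\end{verbatim}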
The value of $Q_*' = 10^{6.03}$ was chosen as any smaller value of $Q_*'$ would require the planet to be inside the star at an age of 200 Myr. While tidal quality factors $>10^7$ are plausible, especially for ``hot'' stars like KELT-21, the orbital evolution on the main sequence is already small for $Q_*' = 10^7$, and differences in orbital evolution for $Q_*' = 10^7$ and $10^8$ are thus minimal. We show the evolution of the semi-major axis of KELT-21b in the top panel Fig.~\ref{fig:semimajoraxis}, and of the stellar insolation received by KELT-21b in the bottom panel. The changes in semi-major axis and insolation are small for $Q_*'\geq10^7$. For $Q_*'=10^{6.03}$ the evolution is more rapid and KELT-21b would have needed to begin its life at the stellar surface, and so the actual value of $Q_*'$ is likely to be larger than $10^{6.03}$. Since we used the $3\sigma$ upper mass limit of $3.91$ \ensuremath{\,M_{\rm J}}\ for these calculations, the value of $Q_*'$ could be smaller if the planetary mass is smaller, and the orbital evolution would be smaller for lower planetary mass at fixed $Q_*'$. Regardless of the value of $Q_*'$, larger changes will begin occurring in a few hundred million years, when KELT-21 begins evolving off of the main sequence. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{f10a.pdf}\\ \includegraphics[width=1.0\columnwidth]{f10b.pdf} \caption{\footnotesize Predicted evolution of the semi-major axis of (top) and stellar insolation experienced by (bottom) KELT-21b under the influence of stellar tides. Teal, magenta, and dark yellow lines depict the evolution assuming a stellar tidal quality factor of $Q_*' = 10^{6.03}$, $10^7$ and $10^8$, respectively.} \label{fig:semimajoraxis} \end{figure} \section{Discussion} \label{sec:Discussion} \subsection{Spin-Orbit Misalignment} \label{sec:lambda} With our Doppler tomographic observations we measured the spin-orbit misalignment, $\lambda$, obtaining $\lambda=-5.6_{-1.9}^{+1.7 \circ}$. This, however, is not the true obliquity of the planetary orbit, but rather this value projected onto the plane of the sky. The full three-dimensional spin-orbit misalignment $\psi$ is a more physically meaningful quantity than $\lambda$, but its calculation requires knowledge of not only $\lambda$ and the planetary orbital inclination $i$, but also the inclination of the stellar rotation axis $I_*$. The first two of these are straightforward to measure; the latter is not, and we cannot measure this quantity for KELT-21 using our existing data. We can, however, set limits upon $\psi$ by making reasonable assumptions to limit $I_*$, as we did in \cite{Siverd:2017} and \cite{Lund:2017}. We follow \cite{Iorio:2011} by assuming that KELT-21 must be rotating at less than the break-up velocity, which must limit $I_*$, since the equatorial velocity is $v_{\mathrm{eq}}=\ensuremath{v\sin{I_*}}/\sin I_*$. Doing so with our best-fit values for the system parameters, we obtain a break-up velocity of $v_{\mathrm{eq,max}}=232.5 \pm 3.3$ \ensuremath{\rm km\ s^{-1}}, stellar inclination of $38.2^{\circ}<I_*<141.8^{\circ}$, and true orbital obliquity of $3.7^{\circ}<\psi<55.9^{\circ}$, the latter two ranges at $1\sigma$ confidence. While KELT-21b certainly has a prograde orbit, we cannot exclude the possibility that it is significantly misaligned with respect to the stellar rotation despite the alignment along the line of sight. 
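The geometry behind these limits is simple enough to evaluate directly. Writing $i$ for the orbital inclination and $I_*$ for the stellar inclination, the true obliquity follows from the standard spherical-trigonometry relation $\cos\psi = \cos I_*\cos i + \sin I_*\sin i\cos\lambda$. The sketch below scans the range of $I_*$ allowed by the break-up limit quoted above using central values only; it does not propagate the measurement uncertainties, which is why the quoted $1\sigma$ range on $\psi$ is somewhat wider.
\begin{verbatim}
import numpy as np

lam = np.radians(-5.6)          # sky-projected obliquity (central value)
inc = np.radians(86.46)         # orbital inclination (central value)
vsini, veq_max = 146.0, 232.5   # km/s: v sin I* and the break-up limit

# Requiring v_eq = vsini / sin(I*) < veq_max bounds the stellar inclination.
sinI_min = vsini / veq_max
I_lo, I_hi = np.arcsin(sinI_min), np.pi - np.arcsin(sinI_min)

# True obliquity over the allowed range of I*.
Istar = np.linspace(I_lo, I_hi, 1001)
cospsi = np.cos(Istar)*np.cos(inc) + np.sin(Istar)*np.sin(inc)*np.cos(lam)
psi = np.degrees(np.arccos(cospsi))
print("I* in [%.1f, %.1f] deg  ->  psi in [%.1f, %.1f] deg"
      % (np.degrees(I_lo), np.degrees(I_hi), psi.min(), psi.max()))
\end{verbatim}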
In Fig.~\ref{fig:lambdavsTeff} we show the spin-orbit misalignments of all hot Jupiters for which this has been measured, along with KELT-21b. KELT-21 is well above the Kraft break, where hot Jupiters tend to have a wide range of misalignments \citep{Winn:2010}. Although its well-aligned orbit is somewhat unusual in this context, some other hot Jupiters around hot stars are also aligned, such as KELT-7b \citep{Zhou:2016} and KELT-20b/MASCARA-2b \citep{Lund:2017,Talens:2017}. Even for an isotropic distribution of misalignments, some systems should be aligned by chance. Indeed, it is perhaps unsurprising that KELT-21, as the most rapidly rotating hot Jupiter host star known to date (bottom panel of Fig.~\ref{fig:lambdavsTeff}), possesses an apparently well-aligned planet. In order for an aligned planet to transit, its host star must have $I_*\sim90^{\circ}$, and so on average the \ensuremath{v\sin{I_*}} values for aligned planets should be higher than those for misaligned planets \citep[e.g.,][]{Schlaufman:2010,Winn:2017}. The aligned orbit of KELT-21b suggests that it could have migrated to its current position through its protoplanetary disk, and that a dynamically hot migration mechanism, like planet-planet scattering \citep[e.g.,][]{Lin:1996} or the Kozai-Lidov mechanism \citep[e.g.,][]{Fabrycky:2007,Naoz:2012}, is not required to explain KELT-21b. The value of \feh$=-0.405_{-0.033}^{+0.032}$ that we have found is also interesting in the context of planet migration. \cite{Dawson:2013} found that hot and warm Jupiters orbiting lower-metallicity (\feh$<0$) stars typically have lower orbital eccentricities than those around higher-metallicity stars, and attributed this to disk migration for lower-metallicity stars and planet-planet scattering for higher-metallicity stars, which due to the planet-metallicity correlation \citep[e.g.,][]{Fischer&Valenti:2005} should form more giant planets. Planet-planet scattering should result in misaligned orbits, while disk migration should produce aligned orbits. That KELT-21, with \feh$<0$, has an aligned planet, could fit into this picture. On the other hand, the presence of possible stellar companions (\S\ref{sec:AO}) suggests that the Kozai-Lidov mechanism could still be responsible for the migration of KELT-21b; we discuss this possibility in more detail in \S\ref{sec:compimplications}. \begin{figure*} \centering \includegraphics[scale=0.7]{f11a.pdf}\\ \vspace{-10pt} \includegraphics[scale=0.7]{f11b.pdf} \caption{\footnotesize Top: distribution of the spin-orbit misalignments of hot Jupiters from the literature as a function of $T_{\mathrm{eff}}$, after \cite{Winn:2010}. Planets with host stars with $T_{\mathrm{eff}}<6250$ K are shown in blue, those with $T_{\mathrm{eff}}>6250$ K in red, and those with uncertainties of $>20^{\circ}$ on their reported values of $\lambda$ in gray. A vertical dashed line denotes the approximate location of the Kraft break at 6250 K. Large cyan circles highlight planets discovered by the KELT survey, and KELT-21b is shown by the large dark red star; uncertainties on its parameters are smaller than the plot symbol size. HAT-P-57 is shown with the two sets of error bars without an associated plot point, denoting the two degenerate solutions found by \cite{Hartman:2015}. Bottom: $v\sin I_*$ values for these stars. The steady increase in average rotational velocity above the Kraft break is apparent. 
We assembled the literature sample using John Southworth's TEPCat Rossiter-McLaughlin Catalogue\protect\footnote{http://www.astro.keele.ac.uk/jkt/tepcat/}.} \label{fig:lambdavsTeff} \end{figure*} \subsection{KELT-21 as a Hierarchical Triple System} \label{sec:Triple} \subsubsection{Are the Companions Bound?} Candidate stellar companions discovered through high-resolution imaging are typically confirmed to be bound to the primary star through detection of common proper motion \citep[e.g.,][]{Ngo:2016}. Alternatively, if only a single epoch of photometry is available, multi-color photometry can be used to check that the secondary is consistent with being at the same distance as the primary \citep[e.g.,][]{Evans:2016}. Finally, spectra can be used to determine whether the companion shares the systemic velocity of the primary \citep[e.g.,][]{Siverd:2017}. We, however, have only a single epoch of single-color photometry of the candidate companions found in \S\ref{sec:AO}, and the candidate companions are too faint to be detected in our spectra. We therefore cannot apply any of these methods to assess whether the companions are bound. Instead, we follow a similar methodology to \cite{Oberst:2016}, who used source counts from 2MASS to argue that the probability of a chance alignment of KELT-16 with a star at least as bright as their candidate companion was small. The companions to KELT-21, however, are more than two magnitudes fainter in $K_S$ than the companion to KELT-16, and are below the 2MASS completeness limit. We therefore used the same method as \cite{Oberst:2016}, but with star counts from a Galactic model rather than from actual data. We generated Galactic models for the KELT-21 field using v1.6 of the TRILEGAL code\footnote{http://stev.oapd.inaf.it/cgi-bin/trilegal\_1.6} \citep{Girardi:2005} and with the Besan\c{c}on code\footnote{http://model2016.obs-besancon.fr/} \citep{Robin:2003}. We generated a model for a one square degree field centered on the location of KELT-21 ($l=71.4814^{\circ}$, $b=-1.9865^{\circ}$). For TRILEGAL we used the extinction from the \cite{Schlafly:2011} reddening maps at this location\footnote{https://irsa.ipac.caltech.edu/applications/DUST/}; for the Besan\c{c}on model we used the dust map of \cite{Marshall:2006}. We assumed a binary fraction of 0.33 and a flat mass ratio distribution between 0.2 and 1.0, after \cite{Raghavan:2010}, and otherwise used the default TRILEGAL and Besan\c{c}on parameters. We neglect the contribution of background galaxies as their source density is much smaller than that of stars near the Galactic plane. The source density of galaxies with $K<17$ is $\mathcal{O}(10^3)$ deg$^{-2}$ \citep[e.g.,][]{Smith:2009}, while both TRILEGAL and Besan\c{c}on predict a stellar source density of $\mathcal{O}(10^5)$ deg$^{-2}$ in the same magnitude range. We counted the number of model sources brighter than the combined $K_S$ magnitude of the companions ($K_S=16.47$), and approximated the probability of a chance superposition as this total times the fraction of the 1 square degree model area that is less than 1\farcs2 from KELT-21 (this being the separation of the actual companions). This resulted in a probability of 3.8\% (TRILEGAL) or 5.0\% (Besan\c{c}on) of a chance superposition. This suggests that the companions are likely to be bound, but the chance that they are background sources is too high to claim physical association with any certainty. We can, however, further leverage the binary nature of the companion.
First, we can assess the probability that the candidate companions are bound {\it to each other} using similar methodology. Here the probability of a chance superposition of an object brighter than $K_S=17.38$ within 55 mas of the brighter candidate companion is 0.013\% (TRILEGAL) or 0.019\% (Besan\c{c}on). We conclude that the companions are very likely to be bound to each other. We can now assess the probability of the chance superposition of a {\it bound binary} close to KELT-21. To do so, we use the same TRILEGAL model as earlier, which accounts for the presence of binaries but does not contain any information on the separation of the binaries. For each binary in the TRILEGAL model, we drew a random orbital period from the log-normal distribution found by \cite{Raghavan:2010} to approximate their results (mean of $\log P=5.03$, standard deviation $\sigma_{\log P}=2.28$), converted this to semi-major axis using Kepler's Third Law and the masses of the components from TRILEGAL, and computed a projected separation using the simplifying assumption of circular orbits and drawing a random orbital phase. We then calculated the magnitude difference $\Delta K_S$ between the components using the \texttt{isochrones} code \citep{Morton:2015}, assuming the age, masses, and metallicity of the system from TRILEGAL. We then assessed the total number of binary systems with projected separations larger than our resolution limit of $\lambda/D=$45 mas, and smaller than the separation between KELT-21 and the candidate companions, and with a magnitude difference between the components smaller than that between the observed candidate companions. The Besan\c{c}on model does not automatically include binaries, and so we randomly assigned binary companions to 33\% of the model stars with a mass ratio drawn from a flat distribution over the interval $0.2\leq q \leq 1$ and then otherwise followed the same methodology as for TRILEGAL. This indicates a probability of 0.035\% (TRILEGAL) or 0.091\% (Besan\c{c}on) of the chance superposition of a background visual binary with KELT-21. We note that the choice of outer limit for this calculation has only a minimal effect on this result; 99\% of the binaries with appropriate contrast ratios have separations of less than 0\farcs5. We thus conclude that the candidate companions are very likely to be bound to KELT-21, and for the remainder of this paper we assume that they are bound and refer to them as KELT-21B and KELT-21C. Future high-resolution imaging observations to confirm common proper motion of the companions will likely be difficult. The proper motion of KELT-21 is small (total proper motion of $2.34 \pm 0.80$ mas yr$^{-1}$; Table~\ref{tab:LitProps}), and, indeed, is non-zero at only $2.9\sigma$ in TGAS \citep{Brown:2016}. As is expected for a system close to the direction of Galactic rotation ($l=71^{\circ}$), the space motion of KELT-21 is primarily in the radial direction, resulting in small proper motion. \subsubsection{Properties of the Companions} \label{sec:compprop} Assuming that the companions are indeed physically bound to KELT-21, we can estimate their physical properties. In order to estimate the masses of the companions, we used the \texttt{isochrones} package \citep{Morton:2015} to calculate the expected $\Delta K_S$ values between a primary with the properties of KELT-21 and the companions as a function of companion mass, which we compared to the observed values. 
This resulted in estimated companion masses of $M_B=0.13_{-0.01}^{+0.02}$ \ensuremath{\,M_\Sun}\ and $M_C=0.11 \pm 0.01$ \ensuremath{\,M_\Sun}. The quoted uncertainties on these masses are derived purely from the uncertainties on the parameters of KELT-21 and on the photometric measurements, and neglect any systematic uncertainties from the isochrones or other sources. Using the $\ensuremath{M_{*}}$-spectral type relations of \cite{Pecaut:2013}, these would correspond to spectral types of M5.5V and M6V for KELT-21 B and C, respectively. Assuming $a\sim a_\perp$ \citep[which is true on average:][]{Heacox:1994}, the orbital periods of the B-C and A-BC binaries should be $\sim200$ and $\sim9000$ years, respectively. If the orbits were to be circular and face-on, this would correspond to an astrometric motion of A and BC due to their mutual orbit of $\sim120$ and $\sim750$ $\mu$as yr$^{-1}$, respectively, while the astrometric motion of B and C about each other would be $\sim800$ $\mu$as yr$^{-1}$. Gaia should be able to achieve a proper motion precision of $\sim3-8$ $\mu$as yr$^{-1}$ on KELT-21\footnote{https://www.cosmos.esa.int/web/gaia/science-performance}, and so should be able to easily detect its orbital motion in the A-BC binary. Components B and C should have $G$-band magnitudes of $\sim19.2$ \citep[using the relation between $G$, $V$, and $I_C$ magnitudes from][]{Jordi:2010}, and Gaia should be able to achieve a parallax precision of 130 $\mu$as and a proper motion precision of $\sim70$ $\mu$as yr$^{-1}$, sufficient both to confirm that B and C are located at the same distance as KELT-21 and to detect the mutual orbital motion of the B-C and A-BC binaries. While the companions are undetected in the Gaia DR1 source catalog, this is not unexpected given the large flux difference and the incompleteness of this catalog within 4'' of bright sources \citep{Arenou:2017}. Given the low proper motion of KELT-21, it is likely that Gaia will confirm or refute the association of KELT-21 B and C with A before common proper motion confirmation is possible. \subsubsection{Implications of the Companions} \label{sec:compimplications} KELT-21b is likely one of the few known examples of a hot Jupiter in a hierarchical triple stellar system. Approximately 10\% of field systems with AFGK primaries are stellar triple systems \citep{Raghavan:2010,DeRosa:2014}. While theoretically it has been proposed that certain configurations of hierarchical triple systems should boost the efficiency of hot Jupiter formation \citep{Hamers:2017}, it is difficult to assess the occurrence rate of hot Jupiters in stellar triple systems due to observational biases against equal-mass triples and heterogeneous imaging observations across the full sample of known hot Jupiters; doing so is beyond the scope of this paper. For binary companions to exoplanet host stars, we can evaluate whether the companion is capable of causing Kozai-Lidov oscillations by equating the Kozai-Lidov and general relativistic precession timescales \citep[e.g.,][]{Ngo:2016}. Approximating KELT-21 BC as a single object with the sum of their masses, we find that it would be capable of causing Kozai-Lidov oscillations of a giant planet with a semi-major axis greater than $\sim2.1$ AU if the A-BC mutual orbit is circular; this limit decreases as the eccentricity of this orbit increases (e.g., to $\sim1.9$ AU for $e=0.5$).
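The calculation behind this estimate can be sketched as follows: we compare an order-of-magnitude quadrupole Kozai-Lidov timescale with the general-relativistic apsidal-precession timescale of the planetary orbit and solve for the semi-major axis at which the two are equal. Prefactor conventions for the Kozai-Lidov timescale vary between authors, so this sketch recovers the $\sim$2 AU scale quoted above only to within a factor of order unity; both orbits are taken to be circular and the projected A-BC separation is used for the outer orbit.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

G, c = 6.674e-11, 2.998e8
Msun, AU = 1.989e30, 1.496e11

Mstar, Mcomp = 1.458*Msun, 0.24*Msun    # host star and combined B+C mass
a_out = 500.0*AU                        # projected A-BC separation

def P_orb(a, Mtot):                     # Kepler's third law, in seconds
    return 2*np.pi*np.sqrt(a**3/(G*Mtot))

def t_KL(a_p):
    # Quadrupole Kozai-Lidov timescale (one common convention).
    P_in, P_out = P_orb(a_p, Mstar), P_orb(a_out, Mstar + Mcomp)
    return (2.0/(3.0*np.pi)) * P_out**2 / P_in * (Mstar + Mcomp)/Mcomp

def t_GR(a_p):
    # Time for GR apsidal precession to advance the periapse by 2*pi.
    return 2*np.pi * a_p**2.5 * c**2 / (3.0*(G*Mstar)**1.5)

# Kozai-Lidov cycles are suppressed where t_GR < t_KL; find the crossing.
a_crit = brentq(lambda a: t_KL(a) - t_GR(a), 0.1*AU, 50*AU)
print("critical planetary semi-major axis ~ %.1f AU" % (a_crit/AU))
\end{verbatim}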
The dynamics of Kozai-Lidov oscillations due to a binary stellar companion, however, are more complicated than those due to a single companion \citep{Hamers:2017,Fang:2017}. \cite{Hamers:2017} found in their simulations that in hierarchical triple systems with a structure similar to that of KELT-21 (i.e., a primary with a planet and a pair of binary companions, with $a_{B-C} \ll a_{A-BC}$, which they referred to as a ``2+2'' configuration), hot Jupiters tended to be formed only if $a_{B-C}\lesssim10^2$ AU and 20 AU $\lesssim a_{A-BC}\lesssim 10^3$ AU. The period distribution of the resulting hot Jupiters in their models peaked around 3 days. Finally, \cite{Hamers:2017} found that Kozai-Lidov oscillations are enhanced if the Kozai-Lidov timescales for the planetary orbit and for the binary orbit of the companions are similar; quantitatively, this occurs when $\mathcal{R}_{2+2}\sim(a_{B-C}/a_P)^{3/2}[(M_{*}+M_P)/(M_B+M_C)]^{3/2}\sim\mathcal{O}(1)$. For the KELT-21 system, if the planet formed at 5 (15) AU, the system would initially have had $\mathcal{R}_{2+2}\sim150$ ($30$). \cite{Hamers:2017} found that hot Jupiters tend to form in systems with $0.01\lesssim\mathcal{R}_{2+2}\lesssim100$. The KELT-21 system is broadly consistent with all of these criteria, suggesting that it is plausible that the companions drove the migration of the planet by four-body Kozai-Lidov oscillations. Nonetheless, the well-aligned orbit of KELT-21b is somewhat at odds with the Kozai-Lidov migration scenario. Hot Jupiters formed by this mechanism should frequently reside on highly-inclined orbits \citep[cf. Fig.~9 of][]{Hamers:2017}. On the other hand, we found in \S\ref{sec:lambda} that KELT-21b has $\psi<54.7^{\circ}$, which is reasonable given the distribution of $\psi$ (there denoted $\theta_*$) found by \cite{Hamers:2017}. We note in conclusion that, given the many migration mechanisms that have been proposed to create hot Jupiters, it is always difficult or impossible to ascribe the formation of a specific system to a specific mechanism with any certainty. Instead, it is the distributions of the parameters of a population (e.g., $P$, $\lambda$, etc.) that constrain the migration mechanisms. KELT-21b adds to the population of known hot Jupiters around hot stars, and thus will help to answer these questions statistically. \subsection{Metal Content and Galactic Context} \label{sec:metals} The $\alpha$ enhancement and relatively low metallicity found by our spectral analysis of KELT-21 (\S\ref{sec:SpecPars}) are unusual for a relatively young ($\sim1.6$ Gyr) star. In order to better contextualize the low metallicity, we computed the Galactic orbit of KELT-21 using the \texttt{galpy} package\footnote{http://github.com/jobovy/galpy} \citep{Bovy:2015}. Given our ($U$,$V$,$W$) values and using \texttt{galpy}'s ``MWPotential2014'' Galactic potential, we estimate that KELT-21's Galactic orbit has an apoapsis of at most $\sim8.6$ kpc and a periapsis of at least $\sim7.7$ kpc. This suggests that KELT-21 does not stray far from the solar circle, but its low metallicity could still be explained by invoking formation in the metal-poor outer Galaxy or at the end of the Galactic bar followed by radial mixing due to the Milky Way's spiral arms \citep[e.g.,][]{Sellwood:2002}. See Fig.~\ref{fig:galorbit} for a depiction of KELT-21's Galactic orbit.
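The orbit integration itself takes only a few lines with \texttt{galpy}; in the sketch below, the proper-motion components and systemic velocity are placeholders standing in for the measured values (Table~\ref{tab:LitProps} and the global fit), and \texttt{galpy}'s default $R_0=8$ kpc and $v_0=220$ \ensuremath{\rm km\ s^{-1}}\ are used.
\begin{verbatim}
import numpy as np
from astropy import units as u
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

# [l, b, distance, pm_l*cos(b), pm_b, v_los]; the proper-motion components
# and systemic velocity here are placeholders for the measured values.
o = Orbit(vxvv=[71.4814, -1.9865, 0.40, 1.5, -1.5, -10.7],
          lb=True, ro=8.0, vo=220.0)

ts = np.linspace(0.0, 2.0, 2001)*u.Gyr       # integrate 2 Gyr forward
o.integrate(ts, MWPotential2014)

print(o.rap(), o.rperi(), o.zmax())          # apo-, pericenter, max |z| in kpc
\end{verbatim}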
KELT-21 does not attain a height above the Galactic midplane of more than $\sim300$ pc, indicating that it is kinematically a member of the thin disk and substantiating our calculation that it has a high probability of being a member of the thin disk (\S\ref{sec:UVW}). Its [$\alpha$/Fe] value, however, is more similar to thick disk than thin disk stars at its \feh\ \citep[cf.][]{Bovy:2016,SilvaAguirre:2017}, although it does lie in between the thin and thick disk chemical sequences. \begin{figure*} \includegraphics[width=1.0\columnwidth]{f12a.pdf} \includegraphics[width=1.0\columnwidth]{f12b.pdf} \caption{Galactic orbit of KELT-21, as computed with \texttt{galpy} from KELT-21's position, distance, and ($U,V,W$) space velocity, as viewed from the top in the $x-y$ plane (left) and the side in the $x-z$ plane (right). The red line shows the orbit computed over the next 2 Gyr using our computed parameters of KELT-21, while the gray lines show 50 realizations with values of ($U,V,W$) and distance randomly drawn from Gaussian distributions with the same mean and standard deviation as the measured value and uncertainty on each parameter. The blue star marks the current position of KELT-21. KELT-21 does not stray far from the solar circle or the Galactic plane, confirming its thin disk kinematics.} \label{fig:galorbit} \end{figure*} In order to assess how unusual KELT-21 is for a relatively young field star, we utilized the APOKASC sample of stars with abundances, masses, and ages \citep[Pinsonneault et al. in prep., an update to the][sample]{Pinsonneault:2014}. These stars have asteroseismic parameters measured from \textit{Kepler} photometry \citep{Borucki:2010}, masses inferred from asteroseismic scaling relations \citep{Kjeldsen:1995} with theoretical corrections \citep{Serenelli:2017} and empirical calibrations to cluster data. They also have spectroscopic metallicities, temperatures, and abundances from Data Release 14 \citep{DR14} of the APOGEE-2 survey \citep{Majewski:2017,Holtzman:2015} of the Sloan Digital Sky Survey IV \citep{Blanton:2017} on the Sloan Digital Sky Survey telescope \citep{Gunn:2006}. For our comparison, we assessed the values of \feh\ and [$\alpha$/Fe] for two subsamples: giants with masses of $1.75 \ensuremath{\,M_\Sun}<\ensuremath{M_{*}}<2.25 \ensuremath{\,M_\Sun}$, which should have formed at around the same time as KELT-21, and stars with ages of less than 1.7 Gyr and Gaia parallaxes from which we could compute Galactic kinematics, again using \texttt{galpy} \citep{Bovy:2015}. For the latter sample we selected stars with Galactic periapses $>6.6$ kpc, and apoapses $<10.0$ kpc, in order to produce a sample with similar kinematics to KELT-21. For both samples we excluded stars with [C/N]$>-0.4$ in order to exclude stars which are likely merger products or have experienced significant mass gain due to binary interactions, and are therefore likely older than would be assumed given their other properties \citep[cf.][]{Izzard:2017}. We show the resulting \feh\ and [$\alpha$/Fe] values in Fig.~\ref{fig:fehvsalpfe}. It is apparent that KELT-21's \feh\ is at the lower end of these distributions, but it is not a dramatic outlier. We thus conclude that while KELT-21's low metallicity is unusual for a relatively young star that is kinematically part of the thin disk, it is not inexplicably so. 
\begin{figure} \includegraphics[width=1.1\columnwidth]{f13.pdf} \caption{$\alpha$-enhancement as a function of \feh\ for the APOKASC samples of $1.75 \ensuremath{\,M_\Sun}<\ensuremath{M_{*}}<2.25 \ensuremath{\,M_\Sun}$ giants (blue) and relatively young giants with kinematics similar to KELT-21 (red; see text for details). KELT-21 is shown as the large cyan star. We also show the young $\alpha$-rich stars found by \cite{Martig:2015} and \cite{Chiappini:2015} as downward- and upward-pointing gray triangles, respectively. Note that \cite{Martig:2015} quote [Fe/M] and [$\alpha$/M], not \feh\ and [$\alpha$/Fe], and so the placement of these points on our plot should be considered approximate.} \label{fig:fehvsalpfe} \end{figure} KELT-21's abundance pattern and relatively young age are qualitatively similar to those of the class of young, $\alpha$-rich giants found through the combination of {\it Kepler}/{\it CoRoT} asteroseismology and APOGEE spectra by \cite{Martig:2015} and \cite{Chiappini:2015}, although KELT-21 falls at the edge of the distribution of such stars (Fig.~\ref{fig:fehvsalpfe}). KELT-21 could be such a star observed while it is still on the main sequence. The young $\alpha$-rich stars found by \cite{Chiappini:2015} were primarily in the inner Galactic disk, and those authors invoked formation at the end of the Galactic bar; while this is inconsistent with KELT-21's current circular orbit and location near the solar circle, as mentioned earlier it could still have formed in the inner Galaxy and experienced radial mixing to move it to its current location, although this would have needed to be rapid in order to move the star several kpc within the 1.6 Gyr since it formed. The planet orbiting KELT-21 also seems to be at odds with the other proposed explanation for young $\alpha$-rich stars, namely, that they are blue stragglers formed from stellar mergers \citep{Jofre:2016,Yong:2016}. Such a collision would have destroyed any short-period planet already around one of the stars; KELT-21b would have needed either to form from material thrown off in the collision, or to have survived the collision at a larger semi-major axis and only migrated {\it after} the collision. Young $\alpha$-rich stars also tend to be kinematically members of the thick disk, while KELT-21 is not, and so the association of KELT-21 with this class is not clear. KELT-21's low metallicity is also unusual for a hot Jupiter host. Giant planets are much more common around more metal-rich stars; this is the well-known planet-metallicity correlation \citep[e.g.,][]{Fischer&Valenti:2005}. Only a handful of hot Jupiters are known to orbit stars more metal-poor than KELT-21, the current record-holder being WASP-98, with \feh$=-0.60$ \citep{Hellier:2014}. KELT-21b is thus useful for probing the properties of giant planets at low metallicity. Additionally, planet-host stars at low metallicity tend to be significantly $\alpha$-enhanced \citep[e.g.,][]{Adibekyan:2012}. It is thought that this is because stars with higher [$\alpha$/Fe] have more metals available to form solids (and therefore planets) at fixed \feh. KELT-21's $\alpha$-enhancement is in between the values typical for the thick and thin disk populations, which is not inconsistent with this trend. \subsection{Prospects for Characterization} \label{Characterization} Although KELT-21 is, at $V=10.5$, one of the fainter transiting planet hosts found by KELT, it is nonetheless brighter than many transiting planet hosts (cf.
the {\it Kepler} sample), and so prospects for further characterization are good. In Fig.~\ref{fig:popplot} we show KELT-21 in context of the population of known transiting planets, in terms of host star optical magnitude and \ensuremath{T_{\rm eff}}. Only a few stars hotter than KELT-21 host transiting hot Jupiters, and due to its relatively long-period orbit compared to these planets (3.6 days) KELT-21b is relatively cool (zero-albedo equilibrium temperature of $T_{\mathrm{eq}}=2051$ K), potentially offering interesting prospects for atmospheric characterization via transmission spectroscopy. KELT-21 is also relatively bright in the infrared ($J=10.15$, $K_S=10.09$; Table~\ref{tab:LitProps}), suggesting that KELT-21b may also be a good target for secondary eclipse observations in the infrared. Either alone or in concert with transmission spectroscopy during the transit, this will help to constrain the atmospheric properties of KELT-21b. Due to the high \ensuremath{v\sin{I_*}}\ of KELT-21, we have been unable to measure the mass of KELT-21b, only to set a $3\sigma$ upper limit of $M_P<3.91$ \ensuremath{\,M_{\rm J}}. Future observations might be able to measure the mass; however, the rapid rotation of KELT-21 will make this very difficult. Another possible avenue to measure the mass and probe the atmosphere of KELT-21b is through the detection of the orbital phase curve \citep[e.g.,][]{Shporer:2011}. We estimate that, in the TESS bandpass, KELT-21 should have an orbital phase curve amplitude of $\sim65$ ppm. KELT-21 (TIC 203189770) has a TESS-bandpass magnitude of 10.33, which should result in a per-point photometric precision of $\sim240$ ppm \citep[using the noise-$T$ magnitude relationship presented in][]{Stassun:2017}, and will be on TESS silicon for up to 54.8 days in TESS Year 2 (as KELT-21 is in the northern ecliptic hemisphere). This precision and duration of data, folded over the orbital period of the planet and binned, results in a precision of $\sim60$ ppm, suggesting that the phase curve of KELT-21b should be detectable by TESS. Due to the high equilibrium temperature, the TESS-bandpass phase curve should be dominated by thermal emission from the planet (amplitude $\sim35$ ppm), which would be even more prominent in the infrared; this may also be detectable with {\it Spitzer}. \begin{figure} \centering \includegraphics[width=1.1\columnwidth]{f14.pdf} \caption{\footnotesize KELT-21 in context with other known planets for the purpose of atmospheric investigations. The symbol color corresponds to the planetary zero-albedo equilibrium temperature, while the symbol size is proportional to the transit depth $(R_P/R_*)^2$. Planets discovered by KELT are marked with large cyan circles. KELT-21 is hotter and brighter than most transiting exoplanet hosts, and prospects for observations to characterize the atmosphere are good. The optical magnitude on the horizontal axis is the $Kp$ magnitude for {\it Kepler} planets and $V$ magnitude for most other objects. We show known transiting planets taken from the NASA Exoplanet Archive\footnote{https://exoplanetarchive.ipac.caltech.edu/}, plus KELT-19Ab \citep{Siverd:2017}, KELT-20b/MASCARA-2b \citep{Lund:2017,Talens:2017}, and MASCARA-1b \citep{Talens:2017b}. The location of KELT-21b is marked.} \label{fig:popplot} \end{figure} \section{Summary and Conclusion} We have presented the discovery of KELT-21b, a hot Jupiter on a 3.6-day orbit transiting the rapidly rotating A8V star HD 332124. 
With \ensuremath{v\sin{I_*}}$=146$ \ensuremath{\rm km\ s^{-1}}, KELT-21 is the most rapidly rotating star to host a confirmed transiting planet to date, and KELT-21b is one of only a handful of known transiting planets around an A star. Its host star is also relatively bright, suggesting good prospects for follow-up observations to further characterize the planet. Our high-resolution imaging observations revealed the presence of a close pair of faint stars at a separation of 1\farcs2 from the planet host star. Although we cannot confirm using our current data whether they are physically associated with the KELT-21 system, we have argued statistically that they are unlikely to be background sources. If they are indeed physically associated with KELT-21, KELT-21 B and C are a pair of mid-M dwarfs with a mutual separation of $\sim20$ AU, lying $\sim500$ AU from KELT-21. They occupy the part of parameter space where they could have caused the migration of KELT-21b through the Kozai-Lidov mechanism \citep[e.g.,][]{Hamers:2017}, although the well-aligned orbit of KELT-21b is not entirely consistent with such an origin. Unusually for a star of its relatively high mass and thus relatively young age, KELT-21 appears to have a somewhat low metallicity (\feh=$-0.405_{-0.033}^{+0.032}$) and an $\alpha$ enhancement ([$\alpha$/Fe]=$0.145 \pm 0.053$). While this metallicity is unusual for a relatively young ($\sim1.6$ Gyr) star with thin-disk kinematics, it is not inexplicably so, and the [$\alpha$/Fe] is more typical of thick-disk stars at this \feh. KELT-21 is also among the lowest-metallicity stars known to host a hot Jupiter, and is thus particularly interesting in the context of planet formation theory. \section{Acknowledgements} We thank Jennifer Johnson, Marc Pinsonneault, and Dennis Stello for useful discussions on the Galactic context of KELT-21 and the analysis using the APOKASC catalog. Work performed by J.E.R. was supported by the Harvard Future Faculty Leaders Postdoctoral fellowship. Support for work by G.Z. was provided by NASA through Hubble Fellowship grant HST-HF2-51402.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. Work performed by P.A.C. was supported by NASA grant NNX13AI46G. K.P. acknowledges support from NASA grant NNX13AQ62G. B.S.G. and D.J.S. were partially supported by NSF CAREER Grant AST-1056524. A.S. is partially supported by grant ESP2015-66134-R. Funding for the Stellar Astrophysics Centre is provided by The Danish National Research Foundation (Grant agreement No. DNRF106). V.S.A. acknowledges support from VILLUM FONDEN (research grant 10118). This project makes use of data from the KELT survey, including support from The Ohio State University, Vanderbilt University, and Lehigh University, along with the KELT follow-up collaboration. The LBT is an international collaboration among institutions in the United States, Italy and Germany. The LBT Corporation partners are: The Ohio State University; LBT Beteiligungsgesellschaft, Germany, representing the Max Planck Society, the Astrophysical Institute Potsdam, and Heidelberg University; The University of Arizona on behalf of the Arizona university system; Istituto Nazionale di Astrofisica, Italy; The Research Corporation, on behalf of The University of Notre Dame, University of Minnesota and University of Virginia. This paper includes data taken at The McDonald Observatory of The University of Texas at Austin.
Some of the data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. This work has made use of NASA's Astrophysics Data System, the Extrasolar Planet Encyclopedia at exoplanet.eu, the SIMBAD database operated at CDS, Strasbourg, France, and the VizieR catalogue access tool, CDS, Strasbourg, France. We also used data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration; the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation; and the European Space Agency (ESA) mission {\it Gaia} (\url{http://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{http://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. \facilities{KELT, LBT(PEPSI), FLWO(TRES), HJST(TS23), Keck(NIRC2)} \software{Python, IDL, IRAF, TAPIR \citep{Jensen:2013}, AstroImageJ \citep{Collins:2013}, SDS4PEPSI \citep{Strassmeier:2017}, EXOFAST \citep{Eastman:2013}, POET \citep{Penev:2014}, TRILEGAL \citep{Girardi:2005}, Besan\c{c}on \citep{Robin:2003}, isochrones \citep{Morton:2015}, galpy \citep{Bovy:2015}}
\section{Introduction} A newborn antelope attempts its first steps only minutes after birth and is capable of walking within half an hour. A human baby, when lifted so that its feet just graze the ground, will step in a cyclical walking motion \citep{peiper1929schreitbewegungen, dominici2011locomotor}. Nature provides clear examples of evolutionarily determined, innate behaviors of surprising complexity, and basic locomotor circuits are well-developed prior to any significant goal-directed experience. The innate structure simplifies the production of goal-directed behavior by confining exploration to stable and coherent, yet flexible dynamics. On the other hand, machine learning's recent success stories have emphasized rich models with weakly-defined prior structure, sculpted by large amounts of data. And indeed, several recent studies have shown that end-to-end reinforcement learning approaches are capable of generating high-quality motor control policies using generic neural networks \citep{schulman2015high, levine2015end, heess2015learning, lillicrap2015continuous, gu2016continuous}. A key question is how to get the best of both worlds: to build modular, hierarchical components that support coherent exploration while retaining the richness and flexibility of data-driven learning. Here, we aim to create flexible motor primitives using a reinforcement learning method. We explore an approach based on a general-purpose, closed-loop motor controller that is designed to be modulated by another system. When modulated by random noise, this low-level controller generates stable and coherent exploratory behavior. A higher-level controller can then recruit these motor behaviors to simplify the solution of complex tasks. As a result, it can learn effective control strategies given only weak feedback, including tasks where reward is only provided infrequently after completion of a goal (a natural description of many tasks). Our architecture takes inspiration from the division of labor in neurobiological motor control~\citep{kandel2000principles}. Biological motor primitives are formed by spinal cord interneurons, which mediate both fast, reflexive actions and the elementary constituents of motor behaviors like locomotion and reaching. The spinal cord has direct sensory inputs that measure muscle tension and load, joint position, and haptic information from the skin. These sensory modalities are \emph{proprioceptive} (``taken from near'') because they carry information measured at close range to the body as opposed to the \emph{exteroceptive} (``taken from afar'') modalities like vision, audition and olfaction. Cortical motor neurons produce voluntary motor behavior primarily through the modulation of these interneuron populations. \vspace{5mm} Based on these considerations, we explore hierarchical motor controllers with the following properties: \begin{enumerate} \item \emph{Hierarchy}: The controller is subdivided into a low-level controller, which computes direct motor commands (e.g. joint torques), and a high-level controller, which selects among abstract motor behaviors. \item \emph{Modulation}: The high-level controller outputs a signal that modulates the behavior of the low-level controller, through a communication bottleneck. \item \emph{Information hiding}: The low-level controller has direct access only to task-independent, proprioceptive information. 
For the control problems considered in this paper, this proprioceptive information contains, for instance, the joint angles and velocities of the body and haptic information. Notably, it does not include information about absolute position or orientation in space or task-specific information such as a goal location. The high-level controller has access to all necessary proprioceptive and exteroceptive information. \item \emph{Multiple time scales}: While the low-level controller operates at a basic control rate (i.e., it receives an observation and produces an action at every time step in the simulation), the high-level controller can operate at a slower rate, updating the modulatory control signal to the low-level controller less frequently. \end{enumerate} These design decisions are intended to create an abstraction barrier between the low and high levels: Separating the high level from the physics and detailed demands of motor actuation and likewise, sheltering the low-level controller from specific task objectives so it can acquire domain-general functionality. Below we present results demonstrating that our architecture solves several non-trivial control problems. A more detailed analysis of the implications of the design decisions listed above is, however, left for future work. \label{sec:Methods:Overview} \begin{figure} \begin{minipage}[c]{0.47\textwidth} \includegraphics[width=\textwidth]{model} \end{minipage}\hfill \begin{minipage}[c]{0.5\textwidth} \caption{\footnotesize Structure of our control hierarchy, consisting of the recurrent high-level controller (\emph{HL}, dark blue) and feedforward low-level controller (\emph{LL}, green). Rounded squares represent functions; squares represent deterministic variables; circles represent stochastic variables. The low-level controller only has access to the proprioceptive component of the current observation ($o_t$, red). The high-level controller has access to all observations ($o_t$, yellow and red). While both controllers observe sensory input at the world-clock frequency, the modulatory control signal $c_t$ from the high-level is updated every $K$ steps (here $K=2$). } \label{fig:model} \end{minipage} \end{figure} \section{Architecture} We now describe the architecture of the hierarchical controller in greater detail (see Fig.\ 1). The setup is the standard agent-environment interaction model. At each point in time $t$, an agent executes an action $a_t$, and subsequently receives a reward $r_t$ and a new observation $o_{t+1}$. Its goal is to maximize the expected sum of discounted future rewards $R_t = \sum_{t'=t } ^\infty \gamma^{t' - t} r_{t'}$, known as the \emph{return}. The agent is characterized by a policy $\pi$ with parameters $\theta$ which specify a distribution over actions as a function of the history $h_t = ( o_1, a_1, \dots o_{t-1}, a_{t-1}, o_t )$, i.e.\ $a_t \sim \pi(\cdot | h_t; \theta).$\footnote{In this work, we did not find it necessary to include the actions in the history and left them out.} The stochastic policy is defined by the composition of networks for the high-level controller $F_H$ and low-level controller $F_L$. This combined network outputs the parameters of the action distribution $\pi$. In the experiments below, actions are multi-dimensional, and we parameterize the action distribution as a factorized Normal distribution with mean and variance being functions of $h_t$: $a_t \sim \pi(\cdot | h_t) = \mathcal{N}(\cdot | \mu(h_t), \sigma^2(h_t))$. 
In the experiments presented here, the low-level controller is a non-recurrent neural network that maps the proprioceptive information $o^P$ and the control signal received from the high-level controller $c$ onto the parameters of the action distribution: \begin{align} ( \mu, \sigma) &= F_L( o^P, c) \\ a & \sim \mathcal{N}( \cdot | \mu, \sigma^2). \end{align} For the high-level controller, we have used recurrent networks $F_H = ( f_H, g_H)$ that integrate observations at every time step and produce a new control signal $c_t$ every $K$ time steps: \begin{align} z_t &= f_H(o_t^F, z_{t-1}) \\ c_t &= g_H( z_{{\tau} (t)}) \\ \tau(t) &= \floor{(t - 1)/ K} K+1 \end{align} where $o_t^F$ is the full observation (including task-specific information), $z_t$ is the recurrent state of the high-level controller, $K$ is the control interval, and $\tau(t)$ is the most recent update time for the high-level control signal. \section{Learning locomotor controllers with policy gradients} \label{sec:Methods:PG} We use an actor-critic policy gradient framework for learning during pre-training as well as transfer. We consider both fully observed (MDPs) and partially observed problems (POMDPs). \subsection{Generalized advantage estimation} We perform gradient ascent in the expected discounted return $J = \expectationE{R_0}{}$, where the expectation is taken with respect to the trajectory distribution induced by the policy and the environment dynamics. This gradient is given by \begin{equation} \nabla_\theta J = \sum_t \expectationE{ \nabla_\theta \log \pi(a_t | h_t; \theta) ( R_t - b_t)}{}, \label{eq:ReinforceUpdate} \end{equation} where $b_t$ is some baseline that does not depend on $a_{t' \geq t}$. In this work we use a learned, parameterized value function $V(h_t; \omega)$ with parameters $\omega$ to lower the variance of the estimate. We replace $R_t$ by the $\lambda$-weighted return $R^\lambda_t = (1-\lambda)\sum_{k=0}^\infty \lambda^k R_t^k$ where $R_t^k = \sum_{j=0}^k \gamma^j r_{t+j} + \gamma^{k + 1} V(h_{t+k+1})$. The parameter $\lambda$ trades off bias in the value function against variance in the return. We also use estimates of the return $R^{\lambda'}_t$ as targets for the value function, so that the loss for value function training is \begin{equation} L(\omega) = \frac{1}{2} \sum_t ||R^{\lambda'}_t - V(h_t; \omega)||^2. \end{equation} Note that we allow for different values of $\lambda$ and $\lambda'$ for computing the policy and value function updates, respectively. Additionally, although each $R^{\lambda'}_t$ nominally includes value function terms dependent on $\omega$ from future time steps, we do not differentiate with respect to them, as is typical for temporal difference learning. \subsection{Policy gradient with hierarchical noise} The policy gradient framework outlined above performs on-policy learning where the stochasticity of the policy is used for exploration. Choosing the action distribution $\pi$ to be a diagonal Gaussian is common due to its simplicity. At the same time, as we will demonstrate below, it can lead to very poor exploratory behavior, especially in high-dimensional action spaces. Due to its restricted form, it is unable to describe correlations across action dimensions or time steps. Actuating physical bodies with this form of white noise tends to produce undirected, twitchy movements that are attenuated by the second-order dynamics of the physics. In contrast, our low-level controllers are feedback controllers that produce pre-trained locomotor behavior.
Modulating this behavior appropriately can lead to exploratory behavior that is more consistent in space and time. Thus, we allow stochasticity not only at the output of the low-level controller but also in the high-level controller. More precisely, in transfer training we treat the high-level controller as a stochastic network where \begin{align} c_t &= \tilde{c}_{\tau(t)} \\ \tilde{c}_{\tau(t)} &\sim \pi^H(\cdot | z_{\tau(t)}) = \mathcal{N}(\mu^H( z_{\tau(t)} ), \sigma^H( z_{\tau(t)} ) ). \label{eq:stochastHLExplicit} \end{align} The policy distribution composed of the high- and low-level controllers can be seen as a distribution with latent variables: $\pi(a_t | h_t ) = \int \pi( a_t | h_t, \tilde{c}_{\tau(t)}) \pi^H( \tilde{c}_{\tau(t)} | h_{\tau(t)}, z_{\tau(t)}) \mathrm{d} \tilde{c}_{\tau(t)}$. This hierarchical model can be optimized in different ways. The particular approach we take relies on the re-parameterization trick recently applied in the probabilistic modeling literature \cite{kingma2013,rezende2014} and in a policy gradient framework by \cite{heess2015learning}: note that an equivalent formulation of equation (\ref{eq:stochastHLExplicit}) can be obtained by considering $c_t = \tilde{g}^H( z_{\tau}, \epsilon_{\tau})$, where $\tilde{g}^H( z_{\tau}, \epsilon_{\tau}) = \mu^H(z_{\tau}) + \sigma^H(z_{\tau}) \epsilon_{\tau}$ and $\epsilon_{\tau} \sim \mathcal{N}(0, \mathbf{I})$. In this view, $\tilde{g}^H$ is a deterministic function that takes as additional input a random variable drawn from a fixed distribution. Since we have knowledge of $\epsilon_{\tau}$, we can now evaluate $\nabla_\theta \log \pi(a_t | z_\tau, \epsilon_{\tau})$, which would otherwise be difficult for a latent variable model. The policy gradient estimate in equation ($\ref{eq:ReinforceUpdate}$) is simply formed by backpropagating directly into the high-level controller.\footnote{ Using a value function for bootstrapping in combination with hierarchical noise requires extra care since $c_t$ affects future primitive actions. This could be accounted for by making $V$ dependent on $c_t$, or by bootstrapping only after resampling $c_t$. In our experiments we ignore this influence on the value and use $V(h_t; \omega)$ as above. } This high-level noise can achieve a dramatically different effect from i.i.d.\ noise added to each action dimension independently at every time step. Since the high-level noise is held constant over the high-level control interval and transformed by the low-level controller, it induces spatially and temporally correlated stochasticity at the primitive action level. \section{Experiments} We evaluate our framework on three physical domains: a swimming snake, a quadruped and a humanoid. The snake has 6 links and a 5-dimensional action space, and can propel itself forward by exploiting frictional forces. The quadruped has an 8-dimensional action space with two joints per leg. The humanoid has 21 action dimensions. For the following motor control problems, the core challenge is to learn basic locomotion. For more complex behaviors like navigation, the locomotion pattern can be reused. In addition to the description below, we encourage the reader to watch the supplemental videos\footnote{High-quality version at \url{https://youtu.be/sboPYvhpraQ}}. \subsection{Training methodology} We train our hierarchical motor controller on a simple \emph{pre-training} task, and then evaluate its performance in one or more \emph{transfer} tasks.
This involves training the low-level controller (which will be re-used later) jointly with a provisional high-level controller which provides task-specific information during pre-training and hence ensures controllability of the low-level controller. The pre-training task, which facilitates the development of generic locomotion skills, is described by an informative shaping reward and requires the controller to move each creature from a random initial configuration to a randomly positioned target. After pre-training the provisional high-level controller is discarded and the weights of the low-level controller are frozen. For each transfer task a new high-level controller is trained to modulate the inputs of the frozen low-level controller. The transfer tasks are most naturally described by \emph{sparse} reward functions which are zero everywhere except at goal states. In order to obtain any reward, these tasks demand temporally-extended, structured exploration, posing a significant challenge for reinforcement learning methods. We use multiple transfer tasks for each domain to test the versatility of the learned low-level controllers. We implemented our experiments using the asynchronous actor-critic framework introduced in~\cite{mnih2016async}. For each experiment and architecture (pre-training and transfer), we perform a coarse grid search over the following hyper-parameters: learning rate, relative scaling of learning rate for the value function, $\lambda$, $\lambda'$, and the length of the backpropagation-through-time truncation window. Unless noted otherwise we report results for the hyper-parameter setting which performed best over an average of 5 repeated experiments. Depending on the transfer task we compare learning with the pre-trained low-level controller to learning a feedforward (\emph{FF}) or recurrent policy (\emph{LSTM}) from scratch; and we also compare to re-using a pre-learned FF network where we only learn a new input layer (\emph{init FF}; the new input layer is to account for the fact that the observation space may change between pre-training and transfer task, or that the new task requires a different mapping from observations to motor behavior). \subsection{6-link snake} \textbf{Pre-training task} The pre-training task initializes the snake at the origin with a random orientation and a random configuration of the joint angles. It is required to swim towards a fixed target over 300 time-steps, which requires being able to turn and swim straight. The low-level controller's sensory input consists of the joint angles, the angular velocities, and the velocities of the 6 segments in their local coordinate frames. The provisional high-level controller is also exposed to an egocentric (i.e., relative to the body frame) representation of the target position. A shaping reward in the form of the negative of the distance to the target is given at every step. The modulatory input from the high-level controller to the low-level controller is updated every $K=10$ time steps. \textbf{Analysis of the locomotor primitives} The training task is easily solved. To assess the locomotor primitives embodied by the learned low-level controller, we remove the high-level controller and modulate the low-level controller with i.i.d.\ Gaussian noise sampled every 10 steps. The resulting behavior, obtained by initializing the swimmer at the origin in a random configuration and running for 4000 time steps, is shown in Figure \ref{fig:SwimmerNoise}.
Using the center of the most anterior body segment as a reference point, swimming trajectories are shown. Clearly, the locomotor primitives produce coherent swimming behavior, and the nature of the input noise determines the structure of the behavior. In the absence of high-level modulation, the snake swims nearly straight; increasing the amplitude of modulatory noise leads to more variable trajectories. For comparison, we also show the behavior elicited by the commonly used zero-mean i.i.d.\ Gaussian noise applied directly as motor commands, which produces barely any displacement of the body, even at high noise levels. The largest standard deviation we tested was 0.8, which is large compared to the action range $[-1, 1]$. \begin{figure} \begin{center} \includegraphics[width=.27\textwidth]{watermaze_topDown_small.png} \hspace{0.1\textwidth} \includegraphics[width=.3\textwidth]{maze_illustration_3D.png} \includegraphics[width=.2\textwidth]{maze_illustration.pdf} \end{center} \caption{\footnotesize \emph{Left.} The target-seeking task. A top-down view of the arena showing the snake swimming to the green target region. \emph{Right.} The canyon traversal task. A snapshot of the snake in the maze and an illustration of the shape of the maze. } \label{fig:swimmerTasks} \end{figure} \textbf{Transfer task 1: Target-seeking} The first transfer task is a partially-observed target-seeking task with a sparse reward function (see Fig.\ \ref{fig:swimmerTasks}, left). The high-level controller egocentrically senses the vector from head to the green target region's center but only when the target center is within $\pm 60^{\circ}$ of the head direction. At the beginning of each episode, both the snake and target are deposited at random, with a minimum distance between them. To solve this task, the snake needs to learn a strategy to turn around until it sees the target and then swim towards it. Each episode lasts 800 time steps, and reward is delivered when the snake's head is within the target region. \textbf{Transfer task 2: Canyon traversal} The snake must swim through a simple canyon (Fig.\ \ref{fig:swimmerTasks}, right). Perceptual input to the high-level controller is given in the form of a $10$ pixel strip from an egocentric depth camera attached to the head. A positive reward is received when the swimmer has successfully navigated from start to end. An episode is terminated 25 steps after the snake's head has passed the canyon exit or after 3000 steps, whichever comes first. Figure \ref{fig:SwimmerTransfer} shows the results of using the low-level controller to solve the transfer tasks. Both tasks are solved successfully when the learned motor primitives are harnessed. Without pre-training the low-level controller, however, an end-to-end system fails to learn. In both tasks, rewards are sparse: i.i.d.\ Gaussian exploration lacks the ability to make consistent progress in a given direction, so reaching the goals and receiving reward is highly unlikely; thus, no learning gradient exists. \begin{figure} \begin{center} \includegraphics[width=1\textwidth]{swimmerNoise3.pdf} \end{center} \caption{\footnotesize Exploration behavior of the snake with (a) zero mean i.i.d.\ Gaussian noise with three different standard deviations (left three plots) or (b) with the pre-trained low-level controller (LLC) periodically modulated by i.i.d.\ Gaussian noise (right four plots). 
Each plot shows 4000-step trajectories of the $(x,y)$-position of the anterior segment of the snake after initialization at the origin in a random configuration. Without high-level modulatory input ($\sigma_{\mathrm{in}} = 0$) the snake swims mostly straight; high-level modulation leads to more diverse trajectories. } \label{fig:SwimmerNoise} \end{figure} \begin{figure} \begin{center} \begin{tabular}{c c} \includegraphics[width=0.4\columnwidth]{swimmerWatermazeHard_arxiv2.pdf} & \includegraphics[width=0.4\columnwidth]{swimmerMaze_arxiv2.pdf} \\ {\small (a) target-seek } & {\small (b) canyon traversal} \end{tabular} \end{center} \caption{\footnotesize Performance of the pre-learned locomotor controller for the snake on: (a) the target-seeking task and (b) the canyon traversal problem.} \label{fig:SwimmerTransfer} \end{figure} \subsection{Quadruped} \textbf{Pre-training task} Like the snake, the quadruped is initialized in a random orientation at the origin and must move to a target positioned randomly in its vicinity. The reward at each step is the (negative) distance to the target, and an episode lasts for 300 steps. The 34-dimensional input to the low-level controller is formed by the joint angles and angular velocities, the output of contact sensors attached to the legs and the ball-shaped torso, the velocity of the torso (translational and rotational) in the coordinate frame of the torso, and a 3-dimensional feature indicating the deviation of the torso's north pole from the z-axis. The high-level controller additionally receives the relative position of the target (36-dimensional input in total) and communicates with the low-level controller every 10 steps. \textbf{Analysis of the locomotor primitives} Figure \ref{fig:antNoise} provides an analysis of the behavior of the quadruped when actuated with i.i.d.\ Gaussian motor noise (left three plots) versus the randomly-modulated locomotor primitives (right four plots). As for the snake (Figure \ref{fig:SwimmerNoise}), it is clear that the low-level controller achieves much better coverage of the space. \begin{figure} \begin{center} \includegraphics[width=.36\textwidth]{antSparseReward_small.png} \hspace{0.1\textwidth} \includegraphics[width=.35\textwidth]{soccer_illustration.pdf} \end{center} \caption{\footnotesize Transfer tasks for the quadruped: illustration of the target-seeking (left) and soccer task (right). } \label{fig:antTasks} \end{figure} \textbf{Transfer task 1: Target-seeking} The quadruped has to move to a random target as quickly as possible. Reward is given when the quadruped's torso is inside the green region. Episodes last for 600 steps. The difficulty of the task can be varied by adjusting the minimum initial distance between the quadruped and the target; if this distance is large, exploration becomes a key challenge. We evaluated the pre-learned low-level controller on two difficulty levels of the task, one with small minimum initial distances and one with large minimum distances. In both cases, the newly learned high-level controller receives the relative position of the target in addition to the proprioceptive features provided to the fixed low-level controller. \textbf{Transfer task 2: Soccer} The quadruped is placed with a ball inside a walled pitch, as shown in Figure \ref{fig:antTasks}. To receive a reward, it needs to manipulate the ball into the red ``goal zone,'' which consists of two half-circles covering the floor and the back wall of the pitch.
The goal zone has higher friction than the rest of the pitch to prevent the ball from simply bouncing away (simulating a goal net), and there is a small resistance that needs to be overcome when the ball is pushed into the zone while on the floor. We consider two versions of the task that differ in their initial conditions. In a restricted version the quadruped and ball are positioned and oriented randomly in non-overlapping areas close to the center line of the pitch. The initial position for the ball is closer to the goal than that of the quadruped. In the less restricted version both ball and quadruped can be initialized further away from the center line and the ball can be placed further away from the goal than the quadruped so that the quadruped has to move away from the goal to position itself behind the ball. Note that even in the restricted version the quadruped is not necessarily facing the goal after initialization. The high-level controller senses the proprioceptive features alongside features indicating the relative displacements from both the ball and the center of the goal, as well as the velocity of the ball in the body frame. Reward is only provided when the ball has grazed the goal zone. From this sparse feedback, the quadruped must learn both how to navigate to the ball and also how to manipulate the ball into the goal zone. The task poses a severe exploration problem for the quadruped, as the solutions of the two subtasks are not guided by shaping rewards. The results for the transfer tasks are shown in Figure \ref{fig:antTransfer}. For the target-seeking task, we show results for the two different levels of difficulty. The hierarchical policy with pre-trained locomotor primitives very rapidly learns to solve both task variants. In contrast, a policy that is trained from scratch only learns a satisfactory solution to the simpler version. Very little progress is made on the hard version of the task over the 200,000 episodes shown due to the very sparse learning signal and poor exploration. The hierarchical policy with locomotor primitives also makes good progress on the soccer task. For the restricted version of the task, quantitative results are shown in Figure \ref{fig:antTransfer}c. We obtain players that learn to score goals for many initial configurations of the quadruped and ball. Attempts to solve the soccer task from scratch do not succeed. For the more challenging version, learning is slower and results are more sensitive to initial conditions and learning parameters. Nevertheless, we obtain several players that fetch the ball, pursue it if necessary, and score (see video). \begin{figure} \begin{center} \includegraphics[width=1.0\textwidth]{antNoise3.pdf} \end{center} \caption{\footnotesize Random behavior of the quadruped when driven by i.i.d.\ random noise (left three plots) and using the pre-trained low-level controller (right four plots). Same format as in Figure \ref{fig:SwimmerNoise}. } \label{fig:antNoise} \end{figure} \begin{figure} \begin{tabular}{c c c} \includegraphics[width=0.3\columnwidth]{antEasy_arxiv2.pdf} & \includegraphics[width=0.3\columnwidth]{antHard_arxiv2.pdf} & \includegraphics[width=0.3\columnwidth]{antSoccer_arxiv2.pdf} \\ {\small (a) target-seek (easy)} & {\small (b) target-seek (hard) } & {\small (c) soccer} \end{tabular} \caption{\footnotesize Performance of the pre-learned locomotor controller for the quadruped on the sparse reward go-to-target task ((a), (b); easy and hard, respectively), and the soccer problem (c).
The results for learning from scratch with an FF network are sensitive to the initialization and regularization of the policy: Both a larger initial value of the (learned) standard deviation ($\sigma_{init}$) and a regularizing term that encourages entropy in the per-step action distribution (as in \cite{mnih2016async}) improve the results (see (a)). The effectiveness of this approach is, however, noticeably reduced as task difficulty increases (see (b)). } \label{fig:antTransfer} \end{figure} \subsection{Humanoid} In a final set of experiments, we applied the approach to a particularly challenging 27 degree-of-freedom control problem: the humanoid model shown in Figure \ref{fig:humanoidTask}. With 21 actuators the problem is much higher-dimensional than the others. Moreover, whereas the snake and quadruped are passively stable, keeping the humanoid from falling is a non-trivial control problem in itself. \textbf{Pre-training task} The training task consists of a simple multi-task setup: In every episode the humanoid is initialized in a standing position facing in the direction of the x-axis and is required to either move straight, or follow a leftward or a rightward circle of a fixed radius (5m). The reward function consists of a small quadratic control penalty, a positive constant stay-alive reward, the velocity in the desired direction (forward or along the circle) clipped at 2.5m/s, and, for the circle tasks, a quadratic penalty for deviating from the desired distance to the center point of the circle. Episodes last for up to 300 steps but are terminated when the height of the humanoid falls below 0.9m. The input to the low-level controller consists of proprioceptive features describing the configuration of the body relative to the model root (torso) and ground, and associated velocities. The high-level controller additionally receives the relative position of the target as input. The control interval for the high-level controller is $K=10$. \begin{figure} \begin{center} \includegraphics[width=1\columnwidth]{humanoidCombo.pdf} \end{center} \caption{\footnotesize Humanoid: \emph{Top left}: Trajectories obtained by running two different low-level controllers in isolation. Stronger modulatory input increases the trajectory diversity but also increases the probability of falling. With i.i.d.\ Gaussian exploration on the actions the humanoid simply falls and makes no progress (see video). \emph{Top right}: Screen shot of the humanoid approaching a virtual gate in the transfer task (two perspectives). \emph{Bottom}: Trajectories through a series of gates obtained with different low-level controllers in the transfer task (10 trajectories with different initial conditions each). Not all learned policies for the slalom task are equally successful (cf.\ controller 2). } \label{fig:humanoidTask} \end{figure} \textbf{Analysis of the locomotor primitives} Controlling the humanoid is a challenging problem and learning in the pre-training task is more sensitive to initial conditions and learning parameters than for the quadruped and snake. Nevertheless, we obtained several well-performing policies with some variability across the gaits. Analyzing several of the associated low-level controllers in the same way as in the previous sections revealed that the locomotor primitives encode quite stable walking behaviors, typically taking hundreds of steps before falling (Fig.\ \ref{fig:humanoidTask}a).
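For concreteness, the following sketch summarises the structure of the shaping reward used in the humanoid pre-training task described above. Only the clipping value (2.5 m/s), the circle radius (5 m), the episode length (300 steps) and the 0.9 m termination height are taken from the description; the relative weights and function names are placeholder assumptions.
\begin{verbatim}
import numpy as np

def pretraining_reward(action, vel_along_task_dir, dist_to_circle_center,
                       circle_task, target_radius=5.0,
                       w_ctrl=1e-3, alive_bonus=0.5, w_circle=0.1, v_clip=2.5):
    """Illustrative humanoid pre-training reward; weights are assumed values."""
    r = alive_bonus                                  # constant stay-alive reward
    r -= w_ctrl * float(np.sum(np.square(action)))   # small quadratic control penalty
    r += min(vel_along_task_dir, v_clip)             # velocity in desired direction, clipped
    if circle_task:                                  # circle tasks: penalise radius deviation
        r -= w_circle * (dist_to_circle_center - target_radius) ** 2
    return r

def episode_done(torso_height, step, max_steps=300):
    # episodes last up to 300 steps and end early if the torso drops below 0.9 m
    return torso_height < 0.9 or step >= max_steps
\end{verbatim}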
\textbf{Transfer task: Slalom} Our transfer task consists of a slalom walk where the humanoid is presented with a sequence of virtual gates to pass through. A reward of 5 is given for passing a gate, and missing a gate results in termination of the episode. No other reward is given. After a gate has been passed, the next gate is positioned randomly to the left or the right of the previous one. The newly trained high-level controller receives the proprioceptive features provided to the low-level controller as input, as well as the relative position and orientation of the next gate. We trained high-level controllers to solve the transfer task using some of the pre-trained low-level controllers and obtained several good solutions in which the humanoid learned to actively navigate the gates. Nevertheless, as expected from the already somewhat diverse set of solutions to the pre-training task, not all low-level controllers were equally suitable for solving the transfer task. And for a given low-level controller we further observed a much stronger sensitivity to the learning parameters and the initial conditions during transfer (see also section \ref{sec:Appendix:Variability} in the appendix). Considering the complexity of the humanoid and the relatively small number of constraints imposed by the pre-training task this is, however, perhaps not too surprising. We expect that a richer, more constrained pre-training regime would lead to more uniformly versatile and robust low-level controllers. \section{Related Work} \label{sec:RelWork} The notion that biological motor systems are hierarchical is long-standing, dating to the 19th century \cite{hughlings1889comparative}. In the 20th century, Bernstein promulgated the notion of hierarchical control through ``motor synergies,'' or stereotyped, multi-joint muscle activation combinations \cite{bernstein1967co}. More recently, the notion of spinal motor primitives has been advanced by Mussa-Ivaldi and Bizzi \cite{mussa2000motor} and others \cite{loeb1999hierarchical}. Motor primitives have resonated in robotics, especially as Dynamic Movement Primitives \cite{ijspeert2002learning}, which are low-dimensional attractor systems that can simplify the learning of robot movements, and hierarchical robot control abstractions date to at least the 1980s \cite{brooks1986robust}. Modern control theorists have also considered abstract hierarchical architectures for manipulation \cite{todorov2005task} and more bio-mechanical descriptions of locomotion \cite{song2015neural}. The reinforcement learning literature has also explored a wide variety of temporal abstractions that wrap low-level control into \emph{options} \cite{sutton1999between} or \emph{skills} \cite{konidaris2009efficient}. These temporal abstractions may be applied to motor control \cite{da2014learning,thomas2012motor,wayne2014hierarchical}, may be transferred to new tasks \cite{andre2002state,asadi2007effective,ravindran2003relativized}, and may also incorporate information hiding principles \cite{dayan1993feudal,konidaris2009efficient}. However, this prior work typically requires precise subgoals to be specified, and treats options or skills as atomic actions. In contrast, our low-level motor skills emerge organically from pre-training in the context of natural tasks; and our high-level controller modulates these skills in a flexible manner to achieve its ends. Recent work \cite{vezhnevets2016straw} proposes an architecture for discovering temporally extended macro actions from scratch.
\section{Conclusion} We have provided a preliminary investigation of a hierarchical motor control architecture that can learn low-level motor behaviors and transfer them to new tasks. Our architecture contains two levels of abstraction that differ both in their access to sensory information and in the time scales at which they operate. Our design encourages the low-level controller to focus on the specifics of reactive motor control, while a high-level controller directs behavior towards the task goal by communicating a modulatory signal. Our investigation departs from the common but unnatural setting where an agent is trained on a single task. Instead, we exploit the fact that many complex motor behaviors share low-level structure by building reusable low-level controllers for a variety of tasks. We found our method to be especially effective when attempting challenging transfer tasks with sparse rewards where exploration via ``motor babbling'' is unlikely to accrue reward at all. This is illustrated in the transfer tasks for the swimmer, quadruped, and humanoid, in which direct end-to-end learning failed, but our method produced solutions. We believe that the general idea of reusing learned behavioral primitives is important, and the design principles we have followed represent possible steps towards this goal. Our hierarchical design with information hiding has enabled the construction of low-level motor behaviors that are sheltered from task-specific information, enabling their reuse. However, the detailed individual and joint contributions of the features of our architecture remain to be investigated more thoroughly in future work (in particular the role and relevance of different time scales), alongside strategies to increase the reliability and stereotypy of the low-level behaviors, especially for difficult control problems such as humanoid walking. Our approach currently depends on the assumption that we can propose a set of low-level tasks that are simple to solve yet whose mastery in turn facilitates the solution of other high-level tasks. For motor behavior, we believe this assumption holds rather generally, as walking, for example, underpins a multitude of more complex tasks. By instilling a relatively small number of reusable skills into the low-level controllers, we expect that a large number of more complicated tasks involving composition of multiple behaviors should become solvable. We believe that this general direction could open new avenues for solving complex real-world, robotic control problems. \paragraph{Acknowledgements} We would like to thank Tom Schaul, Tom Erez, Ziyu Wang, Sasha Vezhnevets, and many others of the DeepMind team for helpful discussions and feedback. \bibliographystyle{plain} {\footnotesize \linespread{0}
\section{Introduction} Although the \sm is remarkably successful in describing strong and electroweak phenomena, it still leaves many questions unanswered. In particular one would like to understand the origin of fermion masses and mixing angles which in the \sm appear as arbitrary parameters. An obvious possibility is that some structure additional to that of the \sm is responsible for the pattern of masses and mixings that we see at low energies. Support for such a stage of unification has been obtained: although unification \cite{unif,unif1} on its own does not agree with experiment, when combined with supersymmetry it leads to very successful predictions \cite{1a} for the gauge couplings, the pattern and magnitude of spontaneous symmetry breaking at the electroweak scale \cite{radi} and the bottom -- tau ($ b$ -- $\tau$) unification \cite{1a,1}. A further indication that additional symmetries beyond the \sm exist has been the observation that the fermion mixing angles and masses have values consistent with the appearance of ``texture'' zeros in the mass matrix \cite{3,3b,7,7b,IR,DLLRS,brn}. Such zeros may indicate the presence of an additional broken symmetry. When unbroken, only the third generation is massive and all mixing angles are zero. However, symmetry breaking terms gradually fill in the mass matrix and generate a hierarchy of mass scales and mixing angles. In this paper we will consider the implications for the neutrino sector, extending the analysis presented in \cite{DLLRS}. We shall consider models with right-handed neutrinos in which both Dirac and Majorana masses for the neutrinos are present, and the ``see--saw'' mechanism automatically explains the lightness of neutrinos relative to the charged fermions. Neutrino data from various experiments can be explained if there is mixing between the various types of neutrinos. The solar neutrino puzzle can be resolved through matter enhanced oscillations (preferably between $\nu_e$ and $\nu_{\mu}$ states) with a mixing angle somewhat smaller than $1/3$ of the corresponding Cabibbo angle of the quark sector, $V_{CKM}^{12}$. For this explanation to work, the squared mass difference of the two types of neutrinos involved in this phenomenon should lie in a very narrow region. The specific ranges for the angle and mass squared given by the latest experimental data are as follows. For matter enhanced oscillations in the sun: \begin{equation} \sin^22\theta_{ex} = (0.39-1.6)\times 10^{-2} , \quad \Delta m^2= (0.6- 1.2)\times 10^{-5} eV^2, \label{eq:sol1} \end{equation} and for vacuum oscillations: \begin{equation} \sin^22\theta_{ex} \ge 0.75 , \quad \Delta m^2= (0.5-1.1)\times 10^{-5} eV^2. \label{eq:sol2} \end{equation} If we wish to avoid fine tuning problems, it seems necessary to assume that such small differences in neutrino masses can be obtained only if the $\nu_{e,\mu}$ neutrino masses themselves are also of the same order. Finally, if neutrinos play a role in structure formation, providing a hot dark matter component, then the heavier neutrino(s) should have mass in the range $\sim (1-6)$ eV, the precise value depending on the number of neutrinos that have masses of this order of magnitude. What symmetry could explain this pattern of masses? If gauge symmetries have something to do with the hierarchical fermion mass spectrum, a similar hierarchy may be expected to hold for the unknown neutrino masses too.
In a previous study it was found that the observed hierarchical mass spectrum of the charged fermions (quarks and leptons) follows naturally if we extend the gauge group of the minimal supersymmetric standard model by a $U(1)_{FD}$ family-type symmetry. The extension of this model to include the right handed neutrino in the theory resulted in a similar structure in the neutrino sector as well, leading to the following general conclusions \cite{DLLRS}: {\it 1.)} The solar neutrino puzzle can be explained via $\nu_e \rightarrow \nu_{\mu}$ oscillations. The hierarchical mass spectrum leads to the conclusion that $m_{\nu_{\mu}}\approx \sqrt{\delta m_{\mu,e}^2}$ while $m_{\nu_{e}}\ll m_{\nu_{\mu}}$. This also fixes the right handed neutrino scale $M_N$ through the effective Majorana mass matrix resulting from the usual ``see -- saw'' mechanism \begin{equation} m^{eff}_{\nu}=-\frac{1}{4}m_{\nu_D}M_N^{-1} m_{\nu_D}^T\label{eq:meff} \end{equation} {\it 2.)} The simultaneous solution of the solar neutrino problem and the interpretation of the $\nu_{\tau}$ mass as a hot dark matter component through the effective light Majorana neutrino mass matrix requires a right handed neutrino scale of the order of $M_{\nu_R}\sim 10^{12}-10^{13}$ GeV. {\it 3.)} There is no natural solution of the atmospheric neutrino problem unless a considerable fine-tuning of the coefficients in the neutrino mass textures occurs. This follows because the U(1) symmetry that was used to derive the above textures together with simple spontaneous breaking gives only a hierarchical mass spectrum and small mixing angles for all fermion mass matrices. One additional implication of the structure emerging from the $U(1)_{FD}$ symmetry is that the right handed neutrinos have Yukawa couplings of the same order as the up quarks. This in turn affects the radiative corrections in the model and in particular the expectations for gauge unification and for the $m_b/m_{\tau}$ ratio. The implications of such large couplings have already been explored in refs.\cite{VB}. Here we develop these analyses in two respects. Firstly we present a semi-analytic analysis of the radiative corrections that allows us to analyse the possibilities for maintaining $b - \tau$ equality at the GUT scale even in the presence of these radiative corrections, through large $\mu - \tau $ mixing, giving a mechanism to evade conclusion 3.) above\footnote{For another mechanism evading this result through the introduction of spontaneous $U(1)$ breaking via several Higgs scalars see ref.\cite{elen}.}. This leads us to consider schemes based on the $U(1)$ family symmetry which naturally generate such mixing. The implications for the neutrino mass spectrum are then explored. In addition the semi-analytic approach is supported by a full numerical calculation. In section 2 we give a semi--analytical approach to the renormalisation group equations in the presence of right handed neutrinos. These equations are used in section 3 in order to get some direct intuition on the effects of the heavy neutrinos. The explicit form of the solutions makes it easy to see how $b-\tau$ equality at the GUT scale may be made consistent with the parameter spectrum at low energies, by sufficient $\mu-\tau$ mixing in the charged leptonic sector and for a relatively heavy strange quark. This is discussed in section 4.
In section 5 we give the resulting predictions for the heavy and light Majorana neutrino mass matrices and eigenvalues, with a mixing which is of the correct order of magnitude in order to explain the atmospheric neutrino problem. In section 6 we present a numerical approach to the renormalisation group equations, which depicts the observations of section 2. Finally, in section 7 we give a summary of our results. \section{RGE with RH-neutrinos: a semi -- analytic approach} {}From the above it is clear that the interpretation of many important experimental facts is based on the existence of the right -- handed partners $\nu_{R_i}$ of the three left -- handed neutrinos, where the scale of mass of these particles is at least three orders of magnitude smaller than the gauge unification scale, $M_U$. Thus the running from the Unification scale, $M_U\sim 10^{16}$ GeV, down to the scale of $M_{\nu_R}$, must include radiative corrections from $\nu_R$ neutrinos. After that scale, $\nu_R$'s decouple from the spectrum, and an effective see -- saw mechanism is operative, c.f. eq( \ref{eq:meff}). In the presence of the right handed neutrino, the renormalization group equations for the Yukawa couplings at the one--loop level are \barr 16\pi^2 \frac{d}{dt} h_U&= & \left( 3 h_U h_U^\dagger +h_D h_D^\dagger +I\cdot Tr[h_N h_N^\dagger] + I\cdot Tr[3 h_Uh_U^\dagger ] - I\cdot G_U \right) h_U, \label{eq:rge1} \\ 16\pi^2 \frac{d}{dt} h_D&= & \left( 3 h_D h_D^\dagger +h_U h_U^\dagger +I\cdot Tr [3 h_Dh_D^\dagger] + I\cdot Tr[h_E h_E^\dagger ] - I\cdot G_D \right) h_D, \label{eq:rge3} \\ 16\pi^2 \frac{d}{dt} h_E&=&\left( 3 h_E h_E^\dagger+h_N h_N^\dagger + I\cdot Tr [ h_Eh_E^\dagger] + I\cdot Tr[3 h_D h_D^\dagger ] - I\cdot G_E \right) h_E, \label{eq:rge4} \\ 16\pi^2 \frac{d}{dt} h_N&=& \left( h_E h_E^\dagger + 3h_N h_N^\dagger + I\cdot Tr [3 h_U h_U^\dagger ] + I\cdot Tr[ h_N h_N^\dagger] - I\cdot G_N \right) h_N. \label{eq:rge2} \earr where $h_\alpha$, $\alpha=U,D,E,N$, represent the $3 \otimes 3$ Yukawa matrices for the up and down quarks, charged lepton and Dirac neutrinos, while $I$ is the $3 \otimes 3$ identity matrix. Finally, $G_{\alpha}= \sum_{i=1}^3c_{\alpha}^ig_i(t)^2$ are functions which depend on the gauge couplings with the coefficients $c_{\alpha}^i$'s given by \cite{DLT,VB}. \barr \{c_U^i \}_{i=1,2,3} &=& \left\{ \frac{13}{15},3,\frac{16}{3} \right\}, \qquad \{c_D^i \}_{i=1,2,3} = \left\{\frac{7}{15},3,\frac{16}{3} \right\}, \\ \{c_E^i \}_{i=1,2,3} &=& \left\{ \frac{9}{5},3,0\right\}, \qquad \quad \{c_N^i \}_{i=1,2,3} = \left\{ \frac{3}{5},3,0\right\}. \earr Consider initially the simple case where only the top and Dirac -- type neutrino Yukawa couplings are large at the GUT scale (i.e. the case of small $tan\beta$ scenario). Let us start assuming that the top and neutrino Yukawa couplings are equal at the Unification scale, $h_t(M_U)=h_N(M_U)$, a relation which arises naturally not only in our case but in most of the Grand Unified Models which predict the existence of the right handed neutrino. As in the case of the charged fermions, we will consider only hierarchical textures \cite{DLLRS} for the right handed neutrino Majorana mass matrices, i.e. $M_{\nu_1}\ll M_{\nu_2}\ll M_{\nu_3}$. If we work in a diagonal basis we can considerably simplify the above equations. 
Then, for the range $M_U$ to $M_N$, the renormalization group evolution of the Yukawa couplings of the third generation can be written as follows: \barr 16\pi^2 \frac{d}{dt} h_t&= & \left( 6 h_t^2 + h_N^2 - G_U\right) h_t, \label{eq:rg1} \\ 16\pi^2 \frac{d}{dt} h_N&=& \left( 4h_N^2 + 3 h_t^2 - G_N \right) h_N, \label{eq:rg2} \\ 16\pi^2 \frac{d}{dt} h_b&= & \left(h_t^2 - G_D \right) h_b, \label{eq:rg3} \\ 16\pi^2 \frac{d}{dt} h_{\tau}&=&\left( h_N^2 - G_E \right) h_{\tau}. \label{eq:rg4} \earr Below $M_N$, the right handed neutrino decouples from the massless spectrum and we are left with the standard spectrum of the MSSM. For scales $Q\le M_N$ the gauge and Yukawa couplings evolve according to the standard renormalisation group equations. In addition, the effective Yukawa coupling of the light neutrino mass matrix (\ref{eq:meff}) evolves according to \cite{BP} \begin{eqnarray} 16\pi^2 \frac{d}{dt} h_{\nu} &= h_{\nu} (I \cdot Tr[6 h_Uh_U^{\dagger}]-G_{\nu})+ h_{\nu} h_Eh_E^{\dagger}+ h_E^{\dagger}h_E h_{\nu} \end{eqnarray} with $G_{\nu}= 2 g_1^2 +6 g_2^2$. In order to gain an insight into the effects of new couplings associated with the $\nu_R$ in the renormalisation group running we integrate the above equations in the region $M_N\le Q\le M_U$. We denote the top and $\nu_R$ Yukawas by $h_G$ at the unification scale, while bottom and $\tau$ are denoted by $h_{b_0}, h_{\tau_0}$ respectively. We get \begin{eqnarray} h_t(t)&=&\gamma_U(t)h_G\xi_t^6\xi_N\\ h_N(t)&=&\gamma_N(t)h_G\xi_t^3\xi_N^4\\ h_b(t)&=&\gamma_D(t)h_{b_0}\xi_t\\ h_{\tau}(t)&=&\gamma_E(t)h_{\tau_0}\xi_N \end{eqnarray} where the functions $\gamma_\alpha(t)$ depend purely on gauge coupling constants and are given by \barr \gamma_\alpha(t)&=& \exp({-\frac{1}{16\pi^2}\int_{t_0}^t G_\alpha(t) \,dt})\\ &=& \prod_{j=1}^3 \left( \frac{\alpha_{j,0}}{\alpha_j} \right)^{c_\alpha^j/2b_j} \\ &=& \prod_{j=1}^3 \left(1- \frac{b_{j}\alpha_{j,0}(t-t_0)} {2\pi}\right)^{c_\alpha^j/2b_j}. \earr The $\xi_i$'s ($i=t,N,b,\tau$) are given by the integrals \barr \xi_i&=& \exp({\frac{1}{16\pi^2}\int_{t_0}^t h^2_{i}\,dt}). \earr The values of the parameters $\xi_i$ can be determined at any scale by numerically solving the renormalization group equations. As a general remark, we note that the initial condition for $\xi_i$ is $\xi_i(t_U)=1$, while at any lower scale $Q<M_U$, $\xi_i(Q)<1$. \section{Heavy neutrino effects: an insight} We start by investigating the $b-\tau$ Yukawa coupling solutions. Thus, in the case of small $tan\beta$, we can express their values at the scale $M_N$ in terms of the initial values through the following relation \begin{equation} h_{b}(t_N)=\rho \xi_t\frac{\gamma_D}{\gamma_E}h_{\tau}(t_N) \label{rho} \end{equation} with $\rho=\frac{h_{b_0}}{h_{\tau_0}\xi_N}$. In the case of $b-\tau$ unification at $M_U$, we have $h_{\tau_0} =h_{b_0}$, while in the absence of the right -- handed neutrino $\xi_N \equiv 1$, thus $\rho =1$ and $m_b$ has the phenomenologically reasonable low-energy prediction given by the approximate one-loop formula \begin{equation} m_{b} = \eta_b \xi_t \frac{\gamma_D}{\gamma_E} m_{\tau} \end{equation} where $\eta_b$ is the renormalization group coefficient in the $m_t$--$m_b$ range. In the presence of $\nu_R$, however, if $h_{\tau_0} =h_{b_0}$ at the GUT scale, the parameter $\rho$ is no longer equal to unity since $\xi_N<1$. In fact the parameter $\xi_N$ becomes smaller for lower $M_N$ scales.
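The following short sketch illustrates how $\xi_t$ and $\xi_N$ can be obtained by numerically integrating eqs.(\ref{eq:rg1}) and (\ref{eq:rg2}) together with the one-loop gauge running; the unified coupling $\alpha_G\simeq 1/24$, the boundary condition $h_G=1$ and the simple Euler step are illustrative assumptions only.
\begin{verbatim}
import numpy as np

MU, MN = 2.0e16, 1.0e13              # unification and heavy-neutrino scales (GeV), assumed
alphaG = 1.0 / 24.0                  # assumed unified coupling at M_U
bgauge = np.array([33.0/5.0, 1.0, -3.0])   # MSSM one-loop gauge coefficients
cU = np.array([13.0/15.0, 3.0, 16.0/3.0])
cN = np.array([3.0/5.0, 3.0, 0.0])

def alpha(t):                        # one-loop gauge couplings, t = ln(Q/M_U) <= 0
    return alphaG / (1.0 - bgauge * alphaG * t / (2.0 * np.pi))

ht = hN = 1.0                        # h_G = h_t(M_U) = h_N(M_U), taken equal to 1 here
xi_t = xi_N = 1.0
t, dt = 0.0, -1.0e-3
while t > np.log(MN / MU):
    g2 = 4.0 * np.pi * alpha(t)      # g_i^2
    GU, GN = cU @ g2, cN @ g2
    xi_t *= np.exp(ht**2 * dt / (16 * np.pi**2))   # accumulate the xi integrals
    xi_N *= np.exp(hN**2 * dt / (16 * np.pi**2))
    dht = (6 * ht**2 + hN**2 - GU) * ht / (16 * np.pi**2)
    dhN = (4 * hN**2 + 3 * ht**2 - GN) * hN / (16 * np.pi**2)
    ht, hN = ht + dht * dt, hN + dhN * dt
    t += dt

print(xi_t, xi_N)   # both below unity; xi_N decreases further for lower M_N or larger h_G
\end{verbatim}
This is only meant to illustrate the procedure; the full treatment, including two-loop gauge effects, is presented in section 6.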
Therefore in order to restore the correct $m_b/m_{\tau}$ prediction at low energies we need $\rho =1$ corresponding to \begin{equation} h_{b_0}=h_{\tau_0}\xi_N \end{equation} Hence it is obvious that we need a $\tau -$Yukawa coupling $h_{\tau_0}$ larger than $h_{b_0}$ at $M_U$ to compensate for the factor $\xi_N$ arising from the presence of $\nu_R$. It is interesting that $\xi_N$ can be given in this case in terms of the values of gauge and Yukawa ratios at $M_N$ only, irrespective of the initial conditions \begin{eqnarray} \xi_N&=&\frac{h_{b_0}}{h_{\tau_0}}\nonumber \\ &=& \frac{h_{b}\gamma_E}{h_{\tau}\gamma_D} \bigl(\frac{h_t\gamma_N}{h_N\gamma_U}\bigr)^{-1/3} \end{eqnarray} On the other hand, the top mass at the scale $t_N=ln(M_N)$ can also be expressed formally as follows \begin{equation} m_{t}(t_N) =h_G\gamma_U(t_N) \xi_t^6(t_N)\xi_N(t_N)\frac{\upsilon}{\sqrt{2}}sin\beta \end{equation} where $\upsilon = 246$ GeV and $tan\beta$ is the ratio of the two Higgs vev's. Again, in the absence of $\nu_R$ this reduces to the well known one loop approximate formula which coincides with the above for $\xi_N=1$. In the present case, however, this prediction corresponds effectively to a smaller initial value of the top Yukawa coupling of the order \begin{equation} h_G^{\prime}=h_G \xi_N(t_N) \end{equation} {}For $h_G> 1$, however, due to the infrared fixed point property of the top -- Yukawa solution \cite{PR}, the $m_t$ -- prediction is not significantly altered. For the same $tan\beta$, one will get almost the previous top mass prediction, reduced at most by 2\%. In contrast, in the small $tan\beta$ scenario where $h_b\ll 1$, one naturally expects that the bottom mass will be affected by the presence of $\nu_R$. For $M_N\approx 10^{13}$ GeV for example and $h_G \ge 1$, we can estimate that $\xi_N(t_N)\approx 0.89$; thus there is a corresponding $\sim 10\%$ deviation of the $\tau - b$ equality at the GUT scale. {}Furthermore, we have seen that in order to recover the correct $m_b/m_{\tau}$ relation at low energies, it is necessary that $h_{b_0}/h_{\tau_0} < 1$ as long as $M_N < M_U$. This can be done in two ways: Either we can keep the same value of the $b$ -- Yukawa and increase the $\tau$-Yukawa by a factor $\xi_N^{-1}$, or decrease the bottom coupling by a factor $\xi_N$. In the first case, the angle $\beta$ remains the same and the top mass is unaffected. In the second case, in order to retain the same absolute value of the initial parameters for the $b,\tau$ masses we need to increase $cos\beta$. This results in a corresponding decrease of the top mass prediction. We will present a detailed numerical analysis of the above in section 6, where two-loop effects from the gauge couplings are taken into account. In the next section we first propose a scheme in which the bottom-tau unification is restored. We will show that this may result in a solution of the solar neutrino deficit and the atmospheric neutrino problem. \section{Restoration of bottom -- tau unification} Given the results of section 3, it is natural to ask if Grand Unified models which predict the $b - \tau$ equality at the Unification scale exclude the experimentally required and cosmologically interesting region for the neutrino masses. To answer this question, we should first recall that the $b-\tau$ -- equality at the GUT scale refers to the $(33)$ entries of the corresponding charged lepton and down quark mass matrices.
The detailed structure of the mass matrices is not predicted, at least by the Grand Unified Group itself, unless additional structure is imposed. It is possible then to assume $(m_E^0)_{33}= (m_D^0)_{33}$ and a specific structure of the corresponding mass matrices such that after the diagonalisation at the GUT scale, the $(m^{\delta}_E)_{33}$ and $(m^{\delta}_D)_{33}$ entries are no longer equal. To illustrate this point, let us present here a simple $2\times 2 $ example. Assume a diagonal form of $m_D^0$ at the GUT scale, $m_D^0 = diagonal (c m_0,m_0)$, while the corresponding entries of the charged lepton mass matrix have the form \begin{equation} m_{E}^0 = \left ( \begin{array}{cc} d & \epsilon \\ \epsilon & 1 \end{array} \right) m_0 \end{equation} These forms of $m_D^0,m_E^0$ ensure that at the GUT scale $(m_D^0)_{33}= (m_E^0)_{33}$. However, at low energies one should diagonalize the renormalised Yukawa matrices to obtain the correct eigenmasses. Equivalently, one can diagonalise the quark and charged lepton Yukawa matrices at the GUT scale and evolve separately the eigenstates and the mixing angles. Since $m_D^0$ has been chosen diagonal there is no need of diagonalization and the mass eigenstates which are to be identified with the $s,b$ -- quark masses at low energies are given by \barr m_s=c \gamma_D m_0 ,& m_{b} = \gamma_D m_0 \xi_t \earr with $m_0 = h_{b_0} \frac{\upsilon}{\sqrt{2}}cos\beta$. To find the charged lepton mass eigenstates we need first to diagonalise $m_E^0$ at $M_{GUT}$. We can obtain the following relations between the entries $\epsilon ,d$ of $m_E^0$ and the mass eigenstates $m_{\mu}^0,m_{\tau}^0$ at the GUT scale: \barr d=(\frac{m_{\tau}^0 - m_{\mu}^0}{m_0}-1)\\ \epsilon^2 = (\frac{m_{\mu}^0}{m_0}+1) (\frac{m_{\tau}^0}{m_0}-1) \earr In the presence of the right handed neutrino, the evolution of the above $\tau -$ eigenstate down to low energies is that described by (\ref{eq:rg4}) with $m_{\tau_0}=h_{\tau_0} \frac{\upsilon}{\sqrt{2}}cos\beta$. By simple comparison of the obtained formulae, we conclude that, to obtain the correct $m_{\tau}/m_b$ ratio at $m_W$ while preserving the $b - \tau$ unification at $m_{GUT}$, the $m_E^0$ entries should satisfy the following relations \barr \epsilon = \sqrt{\frac{1}{\xi_N}-1} ,& d \approx (\frac{1}{\xi_N}-1) = \epsilon^2 \label{eq:de} \earr The above result deserves some discussion. Firstly we see that it is possible to preserve $b - \tau$ unification by assuming $2-3$ generation mixing in the lepton sector, even if the effects of the $\nu_R$ states are included. Secondly, this mixing is related to a very simple parameter which depends only on the scale $M_N$ and the initial $h_N$ condition. The range of the coefficient $c$ in the diagonal form of the $m_D^0$ -- matrix can also be estimated using the experimental values of the quark masses $m_s,m_b$. An interesting observation is that the usual $GUT$ -- relation for the $(22)$ -- matrix elements of the charged lepton and down quark mass matrices, i.e., $(m_E)_{22}=-3 (m_D)_{22}$, which in our case is satisfied for $c = -d/3$, implies here a relatively heavy strange quark mass $m_s\sim 200$ MeV. Smaller $m_s$ values are obtained if $-3c/d <1$. \footnote{An alternative mechanism which restores the correct $m_b/m_{\tau}$ ratio in the presence of $\nu_R$ was proposed in \cite{dimp}.} We turn now to the question of whether the hierarchical structure of the lepton mass matrix corresponding to eq(\ref{eq:de}) can be obtained by a simple $U(1)$ symmetry \cite{IR}.
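As a rough numerical guide, taking the illustrative estimate $\xi_N(t_N)\approx 0.89$ quoted in the previous section for $M_N\approx 10^{13}$ GeV, eq(\ref{eq:de}) requires \begin{equation} \epsilon = \sqrt{\frac{1}{\xi_N}-1}\approx 0.35, \qquad d\approx \epsilon^2 \approx 0.12, \end{equation} i.e. a sizeable $(2,3)$ entry in $m_E^0$; any candidate family symmetry must therefore naturally generate mixing of this order, as we now discuss.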
In \cite{IR} a viable fermion mass matrix structure was constructed following from a spontaneously broken $U(1)$ gauge symmetry. In this scheme the form of the down quark mass matrix is \begin{equation} \frac{M_d}{m_b} \approx \left( \begin{array}{cc} \bar{\epsilon}^2 & \bar{\epsilon} \\ \bar{\epsilon} & 1 \end{array} \right)\label{eq:md} \end{equation} (For simplicity we have concentrated on the case of the two heavier generations, which are relevant here.) Note that we have suppressed all Yukawa couplings in writing this mass matrix -- only the orders of the matrix elements allowed by the broken symmetry are given. These Yukawa couplings are assumed to be of order 1 and the object of the exercise is to demonstrate that the hierarchical structure of the fermion masses may come from symmetry considerations alone. Eq(\ref{eq:md}) is diagonalised by the orthogonal matrix \begin{equation} V \approx \left (\begin{array}{cc} 1 & \bar{\epsilon} \\ -\bar{\epsilon} & 1 \end{array} \right) \end{equation} where $\bar{\epsilon} = 0.23$, in order to fit the down quark masses and mixing angles. The structure of the lepton mass matrix following from the $U(1)$ symmetry (again for the heavier generations) is \begin{equation} \frac{M_l}{m_b} \approx \left ( \begin{array}{cc} \bar{\epsilon}^{2\mid \beta \mid} & \bar{\epsilon}^{\mid\beta\mid} \\ \bar{\epsilon}^{\mid \beta\mid } & 1 \end{array} \right) \label{eq:bt} \end{equation} where $\beta \equiv 1-b = \frac{a_2-\alpha_1}{\alpha_2-\alpha_1}$; in \cite{IR,DLLRS} the cases $\beta =1/2$ and $\beta=1$ have been considered. Both possibilities are in very good agreement with the measured masses and mixing angles. The important fact here is that $\beta $ can be determined by the requirement that the $b - \tau$ mass ratio be correctly given when heavy neutrinos, which become massive at an intermediate scale $M_{N}< M_U$, are present. Allowing for the unknown coefficients of $O(1)$ we see (cf. eqs(\ref{eq:bt}) and (\ref{eq:de})) that both $\beta=1/2$ and $\beta=1$ are in reasonable agreement with the above expectation\footnote{Here we assume the field spontaneously breaking $U(1)$ carries half -- integral $U(1)$ charge so we do not have the $Z_2$ symmetry of \cite{IR}.}. Now the diagonalising matrix is given by \begin{equation} V\approx \left( \begin{array}{cc} \sqrt{1-\bar{\epsilon}^2} & \bar{\epsilon} \\ -\bar{\epsilon} & \sqrt{1-\bar{\epsilon}^2} \end{array} \right) \end{equation} Obviously, there is a large mixing in the $2-3$ lepton sector of the obtained solution which may lead to interesting effects in rare processes such as $\tau \rightarrow \mu \gamma$ and neutrino oscillations. In the simplest realisation of this scheme we expect $h_b\approx h_{t}$ because in the limit $\epsilon$, $\bar \epsilon \rightarrow 0$ the $U(1)$ quantum numbers of the light Higgs $H_{1,2}$ allow them to couple to the third generation and an $SU(2)_L\otimes SU(2)_R$ symmetry of the couplings ensures equal Yukawa couplings. Thus this model applies only to the large $\tan \beta$ regime. However if there is an additional heavy state, $H_i,\;\bar H_i,\; i=1 $ or 2, with the same $U(1)$ quantum number, then mixing effects can generate different $h_b$ and $h_{t}$ couplings allowing for any value of $\tan \beta$. \section{The Effective Light Majorana Mass Matrix} We have seen in the previous section that we can obtain with a $U(1)$ family symmetry a charged lepton mass matrix with the required large mixing in the two heavier generations by choosing the one free parameter, $b$.
The choice $b=1/2$ gives a very good agreement with the charged lepton masses and the bottom-tau relation in the presence of $\nu_R$ with mass $M_N\approx 10^{13}$ GeV. Our next step is to determine the Dirac and heavy Majorana mass matrices. The general form of the Dirac neutrino mass matrix for arbitrary $\alpha ,\beta \equiv 1-b$ is given by \cite{DLLRS} \begin{equation} M^D_{\nu} = \left ( \begin{array}{ccc} \eps^{2\mid 3\alpha + \beta\mid } & \eps^{3\mid\alpha\mid} &\eps^{\mid 3\alpha + \beta\mid } \\ \eps^{3\mid \alpha \mid } & \eps^{2\mid \beta \mid} & \eps^{\mid \beta \mid}\\ \eps^{\mid 3\alpha + \beta\mid } & \eps^{\mid \beta \mid} & 1 \end{array} \right) \end{equation} The Majorana masses for the right -- handed neutrinos are generated by terms of the form $\nu_{R} \nu_{R} \Sigma$, where $\Sigma$ is a singlet scalar field --invariant under the $SU(3)\otimes SU(2)_L\otimes U(1)_Y$ gauge group-- with charge $Q_{\Sigma}$ under the $U(1)_{FD}$ family symmetry. For the various choices of $Q_{\Sigma}$ we may then form the possible ``textures'' for the heavy Majorana mass matrix. For example, when $Q_{\Sigma} = -2 a_{1}$, that is when the field $\Sigma$ has the same charge as the Higgs fields, only the $(3,3)$ entry of the mass matrix appears for an exact $U(1)$ symmetry \footnote{The full anomaly free Abelian group involves, in addition to $U(1)_{FD}$, a family independent component.} and is of order unity. However, at a later stage the Abelian symmetry is broken by standard model singlet fields $\theta$ and $\bar{\theta}$ with $U(1)$ charge $\pm 1/2$, which acquire vacuum expectation values along a D -- flat direction. At this stage, invariant combinations involving the $\theta$, $\bar{\theta}$ fields are generated, filling the rest of the entries in the mass matrices in terms of an expansion parameter \cite{IR}. Depending on which elements of the Majorana mass matrix we require to appear before the $U(1)$ symmetry breaking, we can make six different choices of the charge $Q_{\Sigma}$ which result in the $M_{\nu_R}^{maj.}$ -- ``textures'' shown in Table 1.
\begin{table} \centering \begin{tabular}{|c|c|} \hline $\left ( \begin{array}{ccc} \bar\eps^{2\mid 3\alpha + \beta\mid } & \bar\eps^{3\mid\alpha\mid} &\bar\eps^{\mid 3\alpha + \beta\mid } \\ \bar\eps^{\mid 3\alpha \mid } & \bar\eps^{2\mid \beta \mid} & \bar\eps^{\mid \beta \mid}\\ \bar\eps^{\mid 3\alpha + \beta\mid } & \bar\eps^{\mid \beta \mid} & 1 \end{array} \right)$ & $\left ( \begin{array}{ccc} \bar\eps^{3\mid 2\alpha + \beta\mid } & \bar\eps^{\mid 3\alpha + \beta\mid} &\bar\eps^{\mid 3\alpha + 2\beta\mid } \\ \bar\eps^{\mid 3\alpha + \beta\mid } & \bar\eps^{\mid \beta \mid} & 1\\ \bar\eps^{\mid 3\alpha + 2\beta\mid } & 1 &\bar\eps^{\mid \beta \mid} \end{array} \right)$ \\ \hline $\left ( \begin{array}{ccc} \bar\eps^{2\mid 3\alpha + 2\beta\mid } & \bar\eps^{\mid 3\alpha + 2\beta\mid} &\bar\eps^{3\mid \alpha + \beta\mid } \\ \bar\eps^{\mid 3\alpha + 2\beta\mid } & 1&\bar\eps^{\mid \beta \mid} \\ \bar\eps^{3\mid \alpha + \beta\mid } &\bar\eps^{\mid \beta \mid} &\bar\eps^{2\mid \beta \mid} \end{array} \right)$ & $\left ( \begin{array}{ccc} \bar\eps^{\mid 3\alpha + \beta\mid } & \bar\eps^{\mid \beta\mid} &1 \\ \bar\eps^{ \mid\beta\mid } & \bar\eps^{3\mid \alpha + \beta\mid }& \bar\eps^{3\mid \alpha + 2\beta\mid } \\ 1 &\bar\eps^{3\mid \alpha + 2\beta\mid } &\bar\eps^{\mid 3\alpha + \beta \mid} \end{array} \right)$ \\ \hline $\left ( \begin{array}{ccc} 1&\bar\eps^{\mid 3\alpha + 2\beta\mid } & \bar\eps^{\mid 3\alpha + \beta\mid} \\ \bar\eps^{\mid 3\alpha + 2\beta\mid } & \bar\eps^{2\mid 3\alpha + 2\beta\mid }& \bar\eps^{3\mid 2\alpha + \beta\mid } \\ \bar\eps^{\mid 3\alpha + \beta\mid} & \bar\eps^{3\mid 2\alpha + \beta\mid }& \bar\eps^{2\mid 3\alpha + \beta \mid} \end{array} \right)$ & $\left( \begin{array}{ccc} \bar\eps^{\mid 3\alpha + 2\beta\mid } & 1 & \bar\eps^{\mid\beta\mid } \\ 1 & \bar\eps^{\mid 3\alpha + 2\beta\mid } &\bar\eps^{\mid 3\alpha + \beta\mid } \\ \bar\eps^{\mid\beta\mid } & \bar\eps^{\mid 3\alpha + \beta\mid} & \bar\eps^{\mid 3\alpha\mid} \end{array} \right)$ \\ \hline \end{tabular} \small{\caption{ General forms of heavy Majorana mass matrix textures. 
The specific textures used in the text arise for $\alpha =1, \beta = 1/2$.}} \label{table:maj} \vspace{0.3 cm} \centering \begin{tabular}{|c|c|} \hline $\left ( \begin{array}{ccc} e^{10} & & \\ & e^2 & \\ & & 1 \end{array} \right)$ & $\left ( \begin{array}{ccc} e^{15} & & \\ & -1+e & \\ & & 1+e \end{array} \right)$ \\ \hline $\left ( \begin{array}{ccc} e^{16} & & \\ & e^2 & \\ & & 1 \end{array} \right)$ & $\left ( \begin{array}{ccc} e^9 & & \\ & -1-e^2 & \\ & & 1+e^2 \end{array} \right)$ \\ \hline $\left ( \begin{array}{ccc} e^{16} & & \\ & e^{14} & \\ & & 1 \end{array} \right)$ & $\left( \begin{array}{ccc} e^6 & & \\ & -1-e^2 & \\ & & 1+e^2 \end{array} \right)$ \\ \hline \end{tabular} \caption{ Eigenvalues of the heavy Majorana mass matrix textures, for $\alpha = 1$ and $\beta = 1/2$} \label{table:majei} \centering \begin{tabular}{|c|c|} \hline $\left ( \begin{array}{ccc} e^{26} & & \\ & e^{10} & \\ & & 1 \end{array} \right)$ & $\left ( \begin{array}{ccc} e^{25} & & \\ & e^9 & \\ & & 1/e \end{array} \right)$ \\ \hline $\left ( \begin{array}{ccc} e^{24} & & \\ & e^8 & \\ & & 1/e^2 \end{array} \right)$ & $\left ( \begin{array}{ccc} e^{33} & & \\ & e^{13} & \\ & & 1/e^7 \end{array} \right)$ \\ \hline $\left ( \begin{array}{ccc} e^{40} & & \\ & 1/e^8 & \\ & & 1/e^{14} \end{array} \right)$ & $\left( \begin{array}{ccc} e^{32} & & \\ & e^{6} & \\ & & 1/e^6 \end{array} \right)$ \\ \hline \end{tabular} \caption{ Eigenvalues of the light Majorana mass matrix textures, for $\alpha = 1$ and $\beta = 1/2$} \label{table:majeil} \end{table} For $\alpha =1, \, \beta = 1/2$, we can obtain the specific forms of the Dirac and Majorana textures compatible with the correct fermion mass predictions in the presence of the intermediate neutrino scale. In Table 2 we present the eigenvalues of the heavy Majorana mass matrix for this choice of $\alpha$ and $\beta$. The analysis of the resulting $m_{\nu}^{eff}$ follows the same steps as in ref.\cite{DLLRS}. In the matrices we use \begin{equation} e = \bar{\epsilon}^{1/2}, \; \bar{\epsilon} = 0.23 \end{equation} The eigenvalues of $m_{eff}$ are given in Table 3. The order of the matrices in Tables 2 and 3 corresponds to that of Table 1. Note the interesting feature that the large mixing in the (2,3) entries of the charged leptons, which we introduced in order to restore $b - \tau$ unification, leads to a similarly large mixing in the neutrino sector\footnote{The mixing in the (1,2) sector is of ${\cal O} ((\frac {m_e}{m_{\mu}})^{1/2})$ and negligible in the (1,3) sector.}. This mixing is of the correct order of magnitude for a possible solution to the atmospheric neutrino problem. Indeed, the atmospheric neutrino problem may be explained in the case that a large mixing and a small mass splitting involving the muon neutrino exist \cite{atmo}. Taking into account the bounds from accelerator and reactor disappearance experiments, one finds that for $\nu_{e}-\nu_{\mu}$ or $\nu_{\tau}-\nu_{\mu}$ oscillations \begin{eqnarray} \delta m^2_{\nu_{\alpha}\nu_{\mu}}&\geq& 10^{-2} \; {\rm eV}^{2} \\ \sin^22\theta_{\mu \alpha}&\geq& 0.51-0.6 \end{eqnarray} In \cite{DLLRS}, such a large mixing was not present due to a residual discrete symmetry assumed for the $b=1/2$ case. In that case, from the resulting $m^{eff}_{\nu}$ mass matrix we were able to fit the COBE results and solve the solar neutrino problem (solution A). In the case of the large mixing discussed here we may also have a simultaneous solution to the solar and the atmospheric neutrino problems (solution B); the relevant orders of magnitude are illustrated in the short numerical sketch below.
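This sketch encodes nothing beyond the definition $e=\bar{\epsilon}^{1/2}$ with $\bar{\epsilon}=0.23$, the relation $m_{\nu_{\mu}}=m_{\nu_{\tau}}\,x_{i}$, and the two reference values of $m_{\nu_{\tau}}$ used in the next paragraph; the variable names are arbitrary and the script is not part of the renormalisation group analysis itself.
\begin{verbatim}
# Illustrative order-of-magnitude check; not part of the RGE analysis.
# Only e = sqrt(0.23) and m_nu_mu = m_nu_tau * x are used.
e = 0.23 ** 0.5                      # expansion parameter, e ~ 0.48
for m_nu_tau in (5.0, 0.1):          # eV; hot dark matter / atmospheric options
    for n in (6, 8, 10):             # candidate splittings x = e^6, e^8, e^10
        x = e ** n
        print("m_nu_tau = %.1f eV, x = e^%d = %.2e, m_nu_mu = %.4f eV"
              % (m_nu_tau, n, x, m_nu_tau * x))
\end{verbatim}
Running it gives $m_{\nu_{\mu}}\simeq 0.06$, $0.014$ and $0.003$ eV for $m_{\nu_{\tau}}\approx 5$ eV, and $m_{\nu_{\mu}}\simeq 0.0012$ eV for $m_{\nu_{\tau}}\approx 0.1$ eV with $x_1=e^6$, i.e. precisely the values used in the estimates that follow.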
However, the small mass splitting required between the neutrinos that mix in both the solar and atmospheric neutrino oscillations makes it impossible to account for the COBE measurements at the same time. This is a result of working in the minimal scheme with only one $\Sigma$ field present, which naturally leads to a large splitting between the neutrino masses. However, in the case that we add more singlets $\Sigma$ to the theory, it is possible to obtain a heavy Majorana mass matrix that leads to a solution of all three problems simultaneously \cite{elen}. Going back to the case of a single $\Sigma$ field, whether we obtain the solution (A) or (B) depends on the predicted mass splitting between the two heavy neutrinos in each of the six choices of the heavy Majorana mass matrix. For $m_{\nu_{\tau}} \approx 5$ eV and $x_{i} \equiv e^6$, $e^8$ and $e^{10}$, for $i=1,2,3$, we obtain a muon neutrino mass $m_{\nu_{\mu}} = m_{\nu_{\tau}} x_{i} = 0.06$, $0.014$ and $0.003$ eV respectively. This indicates that our solutions with a total splitting $e^{10}$ naturally lead to an explanation of the COBE measurements and a solution of the solar neutrino problem. On the other hand, for $m_{\nu_{\tau}} \approx 0.1$ eV and $x_{1} = e^6$, $m_{\nu_{\mu}} = m_{\nu_{\tau}} x_{1} = 0.0012$ eV, which may be marginally consistent with a solution to the atmospheric and solar neutrino problems (remember that coefficients of order unity have not yet been specified in the solutions). Since there are alternative schemes which lead to an explanation of the COBE measurements, other than hot and cold dark matter \footnote{ For example, we have found that domain walls may give structure at medium and large scales if either they are unstable or the minima of the potentials of the relevant scalar field appear with different probabilities \cite{walls}.}, we believe that the scheme (B) should be considered on an equal footing with the scheme (A). \section{Numerical analysis} In this section we present the results of a numerical analysis of the effects of the heavy neutrinos on the renormalisation group analysis of masses, concentrating on the implications for lepton mass matrices with a large $\mu - \tau$ mixing. We start by giving a brief description of the procedure we follow. We compute numerically the low energy values of the gauge and Yukawa couplings, taking into account two-loop effects for the gauge couplings and one-loop RGEs for the Yukawas, and assuming an effective SUSY scale to account for low energy threshold effects. First, we check the procedure by reproducing the standard results when no right-handed neutrino is present in the theory. We start at the unification scale $M_U$ using as inputs $M_U$ itself, the value of the common gauge coupling $\alpha_U$, the top coupling $h_{t_G}$ and $h_{b_0}$, $h_{\tau_0}$. In obtaining the low energy values of $\alpha_{em}$, $\alpha_3$ and $\sin^2\theta_W$, we use the following ranges \begin{eqnarray} \alpha_{em}^{-1} = 127.9\pm 0.1 , \,\, \alpha_3=0.12\pm 0.01 ,\,\, \sin^2\theta_W=0.2319\pm 0.0004 \end{eqnarray} The top mass $m_t$ is obtained consistently with the correlation \cite{Lang} \begin{eqnarray} \sin^2\theta_W(m_Z)=0.2324-10^{-7}\times \left\{ \left(m_t/GeV\right)^2-143^2 \right\}\pm 0.0003\label{eq:a} \end{eqnarray} We have converted this correlation into a relation between $\sin^2\theta_W$ and $\tan\beta$, using the relation between $m_{t}^{fxd}$ and $m_t$, i.e. \begin{eqnarray} m_t = m_{t}^{fxd} \sin\beta \end{eqnarray} We first search for the values of $\tan\beta$ satisfying the above correlation.
Then, this range is further constrained by the requirement $h_{b_0} =h_{\tau_0}$ at $M_U$. In the present work we have concentrated on the small $\tan\beta$ scenario, i.e. when $h_t \gg h_{b,\tau}$, and we comment on the large $\tan\beta$ case later. At the next stage, we introduce the Dirac neutrino RGE and run all of the equations together from $M_U$ down to the right-handed neutrino scale $M_N$. We compare the predictions with those of the previous running (i.e. when there is no right-handed neutrino in the theory) and calculate the deviation from $\tau - b$ Yukawa equality for the same inputs at $M_U$. Below $M_N$ we add the RGE for the effective light neutrino Majorana mass matrix. We assume that we are in a diagonal basis, so we can run the three eigenvalues of $M_N$ independently. Let us start with the low $\tan\beta$ regime, assuming an effective SUSY scale $M_S^{eff}\le 1\,TeV$. We vary $\alpha_U$ in a range close to a central value of $\frac{1}{25}$ and $M_U$ close to $10^{16}$ GeV. Our first observation is that the introduction of an intermediate scale where the right-handed neutrino gets a mass slightly shifts the range of the parameter space for which unification is possible. For example, assuming $h_{t_G}\approx 3$, i.e. close to its infrared fixed point, and assuming a unification point ranging in $M_U\sim (1.2-2.2)\times 10^{16}\,GeV$, with $\frac{1}{\alpha_U} \approx 23.81-25.64$, the effective scale ranges from $M_S^{eff}=100\,GeV$ to $1\,TeV$. Introducing the right-handed neutrino, we find $M_S^{eff}\ge 110\,GeV$. Some particular cases with the corresponding low energy predictions are shown in the table below. \begin{center} \vglue 0.2cm \begin{tabular}{cccrccc} \hline & & & & & & \\ $\frac{M_N}{GeV}$ & $\frac{1}{\alpha_U}$ & $\frac{M_U}{10^{16}GeV}$ & $\frac{M_S^{eff}}{GeV}$ & $\frac{1}{\alpha_{em}}$ & $\sin^2\theta_W$ & $\alpha_3$ \\ \hline & & & & & & \\ $10^{16}$&23.81 & 2.18 & 100& 127.9& 0.2325& 0.121\\ &23.81 & 2.41& 110& 128.96& 0.2320& 0.123\\ &24.39& 1.97& 221& 128.09 & 0.2320& 0.120\\ &25.00& 1.46& 493& 127.98& 0.2321& 0.118\\ &25.64 & 1.08& 1212 & 127.82& 0.2319& 0.116\\ \hline & & & & & & \\ $10^{11}$& 23.81& 2.18& 110& 127.83& 0.2323& 0.122\\ & 23.81& 2.41 & 122& 127.90 & 0.2318& 0.124\\ & 24.39& 1.97& 270& 127.89& 0.2315& 0.122\\ & 25.00 & 1.46& 493& 128.05& 0.2321& 0.118\\ &25.64& 1.08&1212& 127.89& 0.2320& 0.115\\ & & & & & & \\ \hline \end{tabular} \end{center} We should point out that the presence of $\nu_R$ in the spectrum has little effect on the low energy values of the $\alpha_{em}$, $\sin^2\theta_W$, $\alpha_3$ parameters. Moreover, for the above initial conditions the $\sin^2\theta_W - m_t$ correlation restricts $\tan\beta$ to values very close to unity, $\tan\beta \le 2$. Of course, the largest effects from the $\nu_R$ threshold are found in the $b - \tau$ unification. For values of the top Yukawa coupling $h_{t_G}$ in the perturbative regime at $M_{GUT}$, we find it impossible to obtain the correct $m_b, m_{\tau}$ masses starting with $h_b=h_{\tau}$ at $M_{GUT}$, even if the neutrino threshold is as high as $M_N = {\cal O}(10^{15}\,GeV)$. For example, using $h_{t_G}\approx 3.2$ (i.e. very close to its non-perturbative regime) and $h_{b_0}\approx h_{\tau_0} \approx 0.0125$, one can hardly do better than a running bottom mass $m_b(m_b) \approx 4.5\,GeV$, while the upper experimental bound is $m_b(m_b) \le 4.4\,GeV$. However, the solution of the solar puzzle needs $M_N \le 10^{13}\,GeV$.
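It is perhaps worth recalling the see-saw estimate behind this scale. Taking the $(3,3)$ entry of the Dirac neutrino mass matrix to be of the order of the top mass (an illustrative assumption, consistent with the $O(1)$ entry of $M^D_{\nu}$, and with all order-one coefficients suppressed), one has \begin{equation} m_{\nu_{\tau}} \sim \frac{m_t^2}{M_N} \approx \frac{(175\,GeV)^2}{10^{13}\,GeV} \approx 3 \;{\rm eV}, \end{equation} so that $M_N$ of order $10^{12}-10^{13}\,GeV$ indeed corresponds to an eV-range $\tau$ neutrino mass of the type required for the hot dark matter component.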
Therefore, in the following we perform a complete two-loop analysis and calculate the exact deviation from $b-\tau$ universality for a reasonable range of the scale $M_N$. In our approach we first require the $\tau$ mass to be $1.749\pm 0.001\,GeV$ at $m_Z$. Then we search for the correct bottom mass and top mass, as well as the required $\tan\beta$. We choose the largest possible coefficient for which we have a solution, which corresponds to a bottom mass of $4.4\,GeV$. The variation of this coefficient as a function of $M_N$ is plotted in Fig.~{\it 1} ($h_{\tau_0} = 0.012$), for $h_{t_G} = 3.2$ and $2.0$, denoted in the plot by stars and crosses respectively. For the rest of the input parameters we take: \begin{eqnarray} M_S^{eff}= 544\, GeV ,& M_{U} = 1.46 \times 10^{16}\, GeV ,& \alpha_{U} = \frac{1}{25} \end{eqnarray} We see, in agreement with the qualitative analysis, that for this parameter range and small $h_{t_G}$ it is not possible to obtain solutions with the $b - \tau$ ratio at unification equal to unity. The larger the Yukawa coupling of the top, the lower the neutrino scale for which we find solutions. It is useful to compare the mass and other parameter predictions with those obtained without the inclusion of $\nu_R$. As has been pointed out in the previous section, in the presence of $\nu_R$ correct predictions for the $b-\tau$ masses can be restored either by increasing $h_{\tau_0}$ or by shifting $h_{b_0}$ to smaller values at $M_U$, with a simultaneous change in the angle $\beta$. In this latter case, we show in Fig.~{\it 2} the curves corresponding to the values of $\tan\beta$ as a function of $M_N$ needed to compensate for the decrease of the bottom mass. We find that as $M_N$ decreases there is a large effect on $\tan\beta$, which drops, for the two different choices of the top Yukawa coupling, from a common value of $1.35$ at $M_N \sim 10^{16}$ GeV to $1.02$ and $1.13$ for $h_{t_G} = 3.2$ and $2.0$ respectively, at $M_N \sim 10^{11}$ GeV. This, combined with the running of the top Yukawa coupling to the fixed point (Fig.~3), implies that we expect in this case a decrease in the top mass, as the qualitative description of the previous section has indicated (Fig.~{\it 4}). The larger the initial value of the top Yukawa coupling and the smaller the initial value of $h_{\tau}$, the bigger the effect on $\tan\beta$ through the running, and the larger the effect on the top mass. In Fig.~{\it 5} we see the effect of $M_N$ on $1/\alpha_{em}$, which increases slightly as $M_{N}$ drops. At the same time, $\sin^{2}\theta_{W}$ also increases slowly (Fig.~{\it 6}), while the strong coupling decreases (Fig.~{\it 7}). In all cases the effect is much smaller than the errors on these quantities; however, it is enough to eliminate some of the solutions that were on the border of the allowed region. We would like to stress that, in the case where $h_{b_0}$ is kept the same while $h_{\tau_0}$ is slightly increased to compensate for the reduction caused by $\xi_N$, there is no need to change the angle $\beta$. For this reason there is no significant effect on the top mass, which retains its value as in the standard case. Finally, in Fig.~{\it 8} we plot the light $\tau$ Majorana neutrino mass versus the coupling $h_{t_G}$, for three different values of the heavy Majorana scale, $M_N= 10^{12}$, $10^{13}$ and $10^{14}$ GeV. This analysis can also be applied in the large $\tan\beta$ regime.
However, in this case there are important corrections to the bottom mass from one-loop graphs involving SUSY scalar masses and the $\mu$ parameter. These graphs may induce corrections to $m_b$ even of the order of $(30-50)\%$. In view of the considerable uncertainties involved we have not extended the numerical analysis to cover this case. \section{Conclusions} In this paper we have discussed the implications for low energy physics of right-handed neutrinos with masses at an intermediate scale $M_N$. For $M_N\approx 10^{12-13}\,GeV$ (required to give a $\tau$ neutrino mass of ${\cal O}(1\, eV)$ to serve as a hot dark matter component) a $10\%$ deviation from $b - \tau$ mass equality at the GUT scale is needed to give an acceptable value for the ratio $m_b/m_{\tau}$ as measured in the laboratory. We showed that it is possible to retain the $m_b^0=m_{\tau}^0$ GUT prediction for the $(3,3)$ elements of the corresponding mass matrices, provided there is sufficient mixing in the charged lepton mass matrix between the two heavier generations. The scenario we propose can be realised in a simple extension of the standard symmetry of electroweak interactions to include a $U(1)$ family symmetry. Consideration of the implications of this symmetry for neutrino masses shows that the large mixing implied allows for a simultaneous explanation of the atmospheric neutrino problem and the solar neutrino problem. This complements our previous discussion of solutions to the solar neutrino deficit while having a neutrino mass in the range needed to fit the COBE measurements in a hot plus cold dark matter universe. We have also presented detailed numerical solutions of the renormalisation group equations for the case of a heavy right-handed neutrino to support the analytic discussion. \newpage
\section{Introduction} The calculation of form factors and correlation functions in models solvable by the algebraic Bethe ansatz \cite{FadST79,FadT79,BogIK93L,FadLH96,KulRes83} is a very important task. In many cases it can be reduced to the calculation of scalar products of Bethe vectors. Recently, in the work \cite{BelPRS12b}, we obtained a determinant representation for a particular case of the scalar product in $SU(3)$-invariant models. Using this representation, one can calculate certain form factors in the $SU(3)$-invariant Heisenberg chain. In the present paper we extend this result and obtain determinant formulas for form factors of all diagonal elements of the monodromy matrix. Let us be more precise. We consider models with an $SU(3)$-invariant $R$-matrix acting in the tensor product of two auxiliary spaces $V_1\otimes V_2$, where $V_k\sim\mathbb{C}^3$, $k=1,2$: \be{R-mat} R(x,y)=\mathbf{I}+g(x,y)\mathbf{P},\qquad g(x,y)=\frac{c}{x-y}. \end{equation} In the above definition, $\mathbf{I}$ is the identity matrix in $V_1\otimes V_2$, $\mathbf{P}$ is the permutation matrix that exchanges $V_1$ and $V_2$, and $c$ is a constant. The monodromy matrix $T(w)$ satisfies the algebra \be{RTT} R_{12}(w_1,w_2)T_1(w_1)T_2(w_2)=T_2(w_2)T_1(w_1)R_{12}(w_1,w_2). \end{equation} Equation \eqref{RTT} holds in the tensor product $V_1\otimes V_2\otimes\mathcal{H}$, where $V_k\sim\mathbb{C}^3$, $k=1,2$, are the auxiliary linear spaces, and $\mathcal{H}$ is the Hilbert space of the Hamiltonian of the model under consideration. The $R$-matrix acts non-trivially in $V_1\otimes V_2$, the matrices $T_k(w)$ act non-trivially in $V_k\otimes \mathcal{H}$. The trace in the auxiliary space $V\sim\mathbb{C}^3$ of the monodromy matrix, $\mathop{\rm tr} T(w)$, is called the transfer matrix. It is a generating functional of integrals of motion of the model. The eigenvectors of the transfer matrix are called on-shell Bethe vectors (or simply on-shell vectors). They can be parameterized by a set of complex parameters satisfying the Bethe equations (see section~\ref{S-N}). Besides the standard monodromy matrix we also consider a twisted monodromy matrix $\rho T(w)$ (see \cite{Kor82,IzeK84,KitMST05}), where $\rho$ is a matrix such that its tensor square commutes with the $R$-matrix: $[\rho_1\rho_2,R_{12}]=0$. The operator $\mathop{\rm tr} \rho T(w)$ is called the twisted transfer matrix. Its eigenvectors are called twisted on-shell Bethe vectors (or simply twisted on-shell vectors). Like the standard on-shell vectors, they can be parameterized by a set of complex parameters satisfying the twisted Bethe equations (see section~\ref{S-N}). In our previous publication \cite{BelPRS12b} we considered a special case of the twist matrix $\rho=\mathop{\rm diag}(1,\kappa,1)$, where $\kappa$ was a complex number. Now we consider the general case of the diagonal twist matrix $\rho=\mathop{\rm diag}(\kappa_1,\kappa_2,\kappa_3)$. Below we will use the shorthand notation $\bar\kappa=\{\kappa_1,\kappa_2,\kappa_3\}$ and denote the twisted monodromy matrix by $T_{\bar\kappa}(w)$. In this paper we obtain a determinant representation for the scalar product of a twisted on-shell vector and a standard on-shell vector. When the twist is general, this determinant representation is not exact. It is valid only up to terms of order $(\kappa_1/\kappa_3-1)^2$. This precision, however, allows us to obtain exact determinant formulas for form factors of $T_{ss}(w)$, $s=1,2,3$.
Using these representations one can calculate form factors of diagonal operators in the $SU(3)$-invariant XXX Heisenberg chain via the inverse scattering problem \cite{KitMaiT99,MaiTer00}. Indeed, if $E^{s,s}_m$, $s=1,2,3$, is an elementary unit associated with the $m$-th site of the chain, $\left(E^{s,s}\right)_{jk}=\delta_{js}\delta_{ks}$, then \be{gen-sol-T} E^{s,s}_m =(\mathop{\rm tr} T(0))^{m-1} T_{ss}(0)(\mathop{\rm tr} T(0))^{-m}. \end{equation} Since the action of the transfer matrix on on-shell vectors is trivial, we see that the form factors of $E^{s,s}_m$ are proportional to those of $T_{ss}$, the proportionality coefficient being a ratio of the corresponding transfer matrix eigenvalues. The article is organized as follows. In section~\ref{S-N} we introduce the model under consideration and describe the notation used in the paper. In section~\ref{S-R} we give the main results. In section~\ref{S-SP-FF} we explain how the twisted transfer matrix can be used for evaluation of form factors of the operators $T_{ss}(w)$. Section~\ref{S-D} is devoted to the derivation of the results given in section~\ref{S-R}. Appendix~\ref{A-NF} contains the proof of an auxiliary lemma. \section{Notation\label{S-N}} We basically use the same notation and conventions as in the paper \cite{BelPRS12b}. Besides the function $g(x,y)$ we also introduce rational functions \be{fg} f(x,y)=1+g(x,y)=\frac{x-y+c}{x-y} \end{equation} and \be{univ-not} h(x,y)=\frac{f(x,y)}{g(x,y)}=\frac{x-y+c}{c},\qquad t(x,y)=\frac{g(x,y)}{h(x,y)}=\frac{c^2}{(x-y)(x-y+c)}. \end{equation} Sets of variables are always denoted by bars: $\bar v$, $\bar{u}^{\scriptscriptstyle C}$ etc. Individual elements of the sets are denoted by subscripts: $v_j$, $u^{\scriptscriptstyle B}_k$ etc. As a rule, the number of elements in the sets is not shown explicitly in the equations; however, we give these cardinalities in special comments after the formulas. We also use a special notation for subsets with one element omitted: $\bar u_j=\bar u\setminus u_j$, $\bar{v}^{\scriptscriptstyle C}_k=\bar{v}^{\scriptscriptstyle C}\setminus v^{\scriptscriptstyle C}_k$ and so on. In order to avoid formulas being too cumbersome we use shorthand notation for products of scalar functions. Namely, if the functions $g$, $f$, $h$, $t$, as well as $\lambda_2$ (see \eqref{Tjj}), depend on sets of variables, this means that one should take the product over the corresponding set. For example, \be{SH-prod} \lambda_2(\bar u)=\prod_{u_j\in\bar u} \lambda_2(u_j);\quad g(v_k, \bar w)= \prod_{w_j\in\bar w} g(v_k, w_j);\quad f(\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I},\bar u_{\scriptscriptstyle \rm I})=\prod_{u_j\in\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I}}\prod_{u_k\in\bar u_{\scriptscriptstyle \rm I}} f(u_j,u_k). \end{equation} In the last equation of \eqref{SH-prod} the set $\bar u$ is divided into two subsets $\bar u_{\scriptscriptstyle \rm I}$, $\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I}$, and the double product is taken with respect to all $u_k$ belonging to $\bar u_{\scriptscriptstyle \rm I}$ and all $u_j$ belonging to $\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I}$. Now we pass to the description of Bethe vectors. We assume that the monodromy matrix possesses a pseudovacuum vector $|0\rangle$ and a dual pseudovacuum vector $\langle0|$. These vectors are annihilated by the operators $T_{jk}(w)$, where $j>k$ for $|0\rangle$ and $j<k$ for $\langle0|$.
At the same time both vectors are eigenvectors for the diagonal entries of the monodromy matrix: \be{Tjj} T_{jj}(w)|0\rangle=\lambda_j(w)|0\rangle, \qquad \langle0|T_{jj}(w)=\lambda_j(w)\langle0|, \end{equation} where $\lambda_j(w)$ are some scalar functions. In the framework of the generalized model, $\lambda_j(w)$ remain free functional parameters. Actually, it is always possible to normalize the monodromy matrix $T(w)\to \lambda_2^{-1}(w)T(w)$ so as to deal only with the ratios \be{ratios} r_1(w)=\frac{\lambda_1(w)}{\lambda_2(w)}, \qquad r_3(w)=\frac{\lambda_3(w)}{\lambda_2(w)}. \end{equation} Generic Bethe vectors are special polynomials in the operators $T_{jk}(w)$ with $j<k$ applied to $|0\rangle$. Similarly, dual Bethe vectors are special polynomials in the operators $T_{jk}(w)$ with $j>k$ applied to $\langle0|$. The procedure used to construct these polynomials was formulated in \cite{KulRes83} (see also \cite{TarVar93,BelRag08}). Their explicit form was found in \cite{BelPRS12c}. In \cite{BelPRS12b} we denoted Bethe vectors and their dual ones by $|\bar u;\bar v\rangle$ and $\langle\bar u;\bar v|$ respectively, stressing that they depend on two sets of variables $\bar u$ and $\bar v$. In this paper we use the notation $\mathbb{B}^{a,b}(\bar u;\bar v)$ for Bethe vectors and $\mathbb{C}^{a,b}(\bar u;\bar v)$ for dual ones. These vectors differ from $|\bar u;\bar v\rangle$ and $\langle\bar u;\bar v|$ by the normalization \be{Normalization} \mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})=\frac{|\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B}\rangle}{f(\bar{v}^{\scriptscriptstyle B},\bar{u}^{\scriptscriptstyle B})\lambda_2(\bar{v}^{\scriptscriptstyle B})\lambda_2(\bar{u}^{\scriptscriptstyle B})},\qquad \mathbb{C}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})=\frac{\langle\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C}|}{f(\bar{v}^{\scriptscriptstyle C},\bar{u}^{\scriptscriptstyle C})\lambda_2(\bar{v}^{\scriptscriptstyle C})\lambda_2(\bar{u}^{\scriptscriptstyle C})}. \end{equation} We use here superscripts $B$ and $C$ in order to distinguish the sets of parameters entering these two vectors. In other words, unless explicitly specified, the variables $\{\bar{u}^{\scriptscriptstyle B}, \bar{v}^{\scriptscriptstyle B}\}$ in $\mathbb{B}^{a,b}$ and $\{\bar{u}^{\scriptscriptstyle C}, \bar{v}^{\scriptscriptstyle C}\}$ in $\mathbb{C}^{a,b}$ are not supposed to be equal. The normalization \eqref{Normalization} is more convenient for the calculation of the action of the monodromy matrix entries $T_{jk}(w)$ on Bethe vectors and dual ones \cite{BelPRS12c}. The superscripts $a$ and $b$ show the cardinalities of the sets $\bar u$ and $\bar v$: $\#\bar u=a$, $\#\bar v=b$. Bethe vectors and dual Bethe vectors are related by the anti-automorphism $^\dag$ defined by $T_{ij}(w)^\dag=T_{ji}(w)$ and $|0\rangle^\dag=\langle0|$. Below we will consider the scalar product of the on-shell vector and dual twisted on-shell vector. A generic Bethe vector becomes an on-shell vector, if it is an eigenvector of the transfer matrix. Similarly the dual twisted on-shell vector is an eigenvector of the twisted transfer matrix. 
We use the same notation, $\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})$ and $\mathbb{C}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})$, for on-shell vectors and dual ones, while we denote dual twisted on-shell vectors by $\mathbb{C}_{\bar\kappa}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})$ in order to stress that they are eigenvectors of $\mathop{\rm tr} T_{\bar\kappa}(w)$. Then \be{Left-act} \begin{array}{l} \mathop{\rm tr} T(w)\ \mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B}) = \tau(w|\bar{u}^{\scriptscriptstyle B},\bar{v}^{\scriptscriptstyle B})\,\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B}),\\\rule{0pt}{20pt} \mathbb{C}_{\bar\kappa}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})\ \mathop{\rm tr} T_{\bar\kappa}(w) = \tau_{\bar\kappa}(w|\bar{u}^{\scriptscriptstyle C},\bar{v}^{\scriptscriptstyle C})\,\mathbb{C}_{\bar\kappa}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C}), \end{array} \end{equation} where \be{tau-def} \begin{array}{l} \tau(w)\equiv\tau(w|\bar{u}^{\scriptscriptstyle B},\bar{v}^{\scriptscriptstyle B})=r_1(w)f(\bar{u}^{\scriptscriptstyle B},w)+f(w,\bar{u}^{\scriptscriptstyle B})f(\bar{v}^{\scriptscriptstyle B},w)+r_3(w)f(w,\bar{v}^{\scriptscriptstyle B}),\\\rule{0pt}{20pt} \tau_{\bar\kappa}(w)\equiv\tau_{\bar\kappa}(w|\bar{u}^{\scriptscriptstyle C},\bar{v}^{\scriptscriptstyle C})=\kappa_1r_1(w)f(\bar{u}^{\scriptscriptstyle C},w)+\kappa_2 f(w,\bar{u}^{\scriptscriptstyle C})f(\bar{v}^{\scriptscriptstyle C},w) +\kappa_3r_3(w)f(w,\bar{v}^{\scriptscriptstyle C}). \end{array} \end{equation} Here the sets $\bar{u}^{\scriptscriptstyle B}$ and $\bar{v}^{\scriptscriptstyle B}$ should satisfy the system of nested Bethe ansatz equations \cite{KulRes83} \be{AEigenS-1} r_1(u^{\scriptscriptstyle B}_{j})=\frac{f(u^{\scriptscriptstyle B}_{j},\bar{u}^{\scriptscriptstyle B}_{j})}{f(\bar{u}^{\scriptscriptstyle B}_{j},u^{\scriptscriptstyle B}_{j})}f(\bar{v}^{\scriptscriptstyle B},u^{\scriptscriptstyle B}_{j}),\qquad r_3(v^{\scriptscriptstyle B}_{j})=\frac{f(\bar{v}^{\scriptscriptstyle B}_{j},v^{\scriptscriptstyle B}_{j})}{f(v^{\scriptscriptstyle B}_{j},\bar{v}^{\scriptscriptstyle B}_{j})}f(v^{\scriptscriptstyle B}_{j},\bar{u}^{\scriptscriptstyle B}), \end{equation} while the sets $\bar{u}^{\scriptscriptstyle C}$ and $\bar{v}^{\scriptscriptstyle C}$ satisfy the twisted system of nested Bethe ansatz equations \be{ATEigenS-1} r_1(u^{\scriptscriptstyle C}_{j})=\frac{\kappa_2}{\kappa_1}\frac{f(u^{\scriptscriptstyle C}_{j},\bar{u}^{\scriptscriptstyle C}_{j})}{f(\bar{u}^{\scriptscriptstyle C}_{j},u^{\scriptscriptstyle C}_{j})}f(\bar{v}^{\scriptscriptstyle C},u^{\scriptscriptstyle C}_{j}), \qquad r_3(v^{\scriptscriptstyle C}_{j})=\frac{\kappa_2}{\kappa_3}\frac{f(\bar{v}^{\scriptscriptstyle C}_{j},v^{\scriptscriptstyle C}_{j})}{f(v^{\scriptscriptstyle C}_{j},\bar{v}^{\scriptscriptstyle C}_{j})}f(v^{\scriptscriptstyle C}_{j},\bar{u}^{\scriptscriptstyle C}). \end{equation} We recall that $\bar{u}^{\scriptscriptstyle C}_{j}=\bar{u}^{\scriptscriptstyle C}\setminus u^{\scriptscriptstyle C}_j$, $\bar{u}^{\scriptscriptstyle B}_{j}=\bar{u}^{\scriptscriptstyle B}\setminus u^{\scriptscriptstyle B}_j$ etc. For further application it is useful to re-write the system of twisted equations in the logarithmic form.
Define \be{Phi-1} \Phi_j=\log r_1(u^{\scriptscriptstyle C}_{j})-\log \left(\frac{f(u^{\scriptscriptstyle C}_{j},\bar{u}^{\scriptscriptstyle C}_{j})}{f(\bar{u}^{\scriptscriptstyle C}_{j},u^{\scriptscriptstyle C}_{j})}\right) -\log f(\bar{v}^{\scriptscriptstyle C},u^{\scriptscriptstyle C}_{j}), \qquad j=1,\dots,a, \end{equation} and \be{Phi-2} \Phi_{j+a}=\log r_3(v^{\scriptscriptstyle C}_{j})-\log \left(\frac{f(\bar{v}^{\scriptscriptstyle C}_{j},v^{\scriptscriptstyle C}_{j})}{f(v^{\scriptscriptstyle C}_{j},\bar{v}^{\scriptscriptstyle C}_{j})}\right)-\log f(v^{\scriptscriptstyle C}_{j},\bar{u}^{\scriptscriptstyle C}), \qquad j=1,\dots,b. \end{equation} Then the system \eqref{ATEigenS-1} takes the form \be{Log-TBE} \begin{array}{l} \Phi_j=\log\kappa_2-\log\kappa_1+2\pi i \ell_j,\qquad j=1,\dots,a,\\ \Phi_{j+a}=\log\kappa_2-\log\kappa_3+2\pi i m_j,\qquad j=1,\dots,b, \end{array} \end{equation} where $\ell_j$ and $m_j$ are some integers. The Jacobian of \eqref{Phi-1} and \eqref{Phi-2} is closely related to the norm of the on-shell Bethe vector and the average values of the operators $T_{ss}(z)$ (see section~\ref{S-R}). To conclude this section we introduce the partition function of the six-vertex model with domain wall boundary conditions (DWPF) \cite{Kor82,Ize87}. This is one of the central objects in the study of scalar products. We denote the DWPF by ${\sf K}_n(\bar x|\bar y)$. It depends on two sets of variables $\bar x$ and $\bar y$; the subscript shows that $\#\bar x=\#\bar y=n$. The function ${\sf K}_n$ has the following determinant representation \cite{Ize87} \begin{equation}\label{K-def} {\sf K}_n(\bar x|\bar y) =\Delta'_n(\bar x)\Delta_n(\bar y)h(\bar x,\bar y) \det_n t(x_j,y_k), \end{equation} where $\Delta'_n(\bar x)$ and $\Delta_n(\bar y)$ are \be{def-Del} \Delta'_n(\bar x) =\prod_{j>k}^n g(x_j,x_k),\qquad {\Delta}_n(\bar y)=\prod_{j<k}^n g(y_j,y_k). \end{equation} \section{ Determinant expressions for the form factors\label{S-R}} The form factors of the operators $T_{ss}(z)$ are defined as \be{SP-deFF} \mathcal{F}_{a,b}^{(s)}(z)\equiv\mathcal{F}_{a,b}^{(s)}(z|\bar{u}^{\scriptscriptstyle C},\bar{v}^{\scriptscriptstyle C};\bar{u}^{\scriptscriptstyle B},\bar{v}^{\scriptscriptstyle B})= \mathbb{C}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})T_{ss}(z)\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B}), \end{equation} where both $\mathbb{C}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})$ and $\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})$ are on-shell Bethe vectors\footnote[1]{For simplicity here and below we do not distinguish between vectors and dual vectors.}. One should distinguish two cases: \begin{itemize} \item $\mathbb{C}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})= \bigl(\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})\bigr)^\dagger$; \item $\mathbb{C}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})\ne \bigl(\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})\bigr)^\dagger$.
\end{itemize} \subsection{The average value of $T_{ss}(z)$\label{ss-AV1}} Here we consider the case $\mathbb{C}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})= \bigl(\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})\bigr)^\dagger$, that is $\bar{u}^{\scriptscriptstyle B}=\bar{u}^{\scriptscriptstyle C}=\bar u$ and $\bar{v}^{\scriptscriptstyle B}=\bar{v}^{\scriptscriptstyle C}=\bar v$. First of all we define an $(a+b)\times(a+b)$ matrix $\theta$ with the entries \be{theta} \theta_{j,k}=\left.\frac{\partial\Phi_j}{\partial u^{\scriptscriptstyle C}_k}\right|_{\bar{u}^{\scriptscriptstyle C}=\bar u\atop{\bar{v}^{\scriptscriptstyle C}=\bar v}},\qquad k=1,\dots,a;\quad\text{and}\quad \theta_{j,k+a}=\left.\frac{\partial\Phi_j}{\partial v^{\scriptscriptstyle C}_k}\right|_{\bar{u}^{\scriptscriptstyle C}=\bar u\atop{\bar{v}^{\scriptscriptstyle C}=\bar v}},\qquad k=1,\dots,b, \end{equation} where the $\Phi_j$ are given by \eqref{Phi-1} and \eqref{Phi-2}. Then we extend the matrix $\theta$ to an $(a+b+1)\times(a+b+1)$ matrix $\Theta^{(s)}$ with $s=1,2,3$, by adding one row and one column \be{Theta} \begin{array}{l} \Theta^{(s)}_{j,k}=\theta_{j,k}, \qquad j,k=1,\dots,a+b,\\\rule{0pt}{20pt} \Theta^{(s)}_{a+b+1,k}=\frac{\partial\tau(z|\bar u,\bar v)}{\partial u_k},\qquad k=1,\dots, a,\qquad \Theta^{(s)}_{a+b+1,a+k}=\frac{\partial\tau(z|\bar u,\bar v)}{\partial v_k},\qquad k=1,\dots, b,\\\rule{0pt}{20pt} \Theta^{(s)}_{j,a+b+1}=\delta_{s1}-\delta_{s2},\qquad j=1,\dots, a,\qquad \Theta^{(s)}_{j+a,a+b+1}=\delta_{s3}-\delta_{s2},\qquad j=1,\dots, b,\\\rule{0pt}{20pt} \Theta^{(s)}_{a+b+1,a+b+1}=\left.\frac{\partial\tau_{\bar\kappa}(z|\bar{u}^{\scriptscriptstyle C},\bar{v}^{\scriptscriptstyle C})}{\partial\kappa_s} \right|_{\bar{u}^{\scriptscriptstyle C}=\bar u\atop{\bar{v}^{\scriptscriptstyle C}=\bar v}}. \end{array} \end{equation} Here the $\delta_{sk}$ are Kronecker deltas. Notice that $\Theta^{(s)}$ depends on $s$ only in its last column. Then the form factor $\mathcal{F}_{a,b}^{(s)}(z)$ is \be{average-Tss} \mathcal{F}_{a,b}^{(s)}(z|\bar u,\bar v;\bar u,\bar v)=H_{a,b}\det_{a+b+1}\Theta^{(s)}, \end{equation} where \be{Hab} H_{a,b}=(-1)^ac^{a+b}f(\bar v,\bar u)\prod_{j,k=1\atop{j\ne k}}^af(u_j,u_k) \prod_{j,k=1\atop{j\ne k}}^b f(v_j,v_k). \end{equation} \subsection{Form factor of $T_{ss}(z)$ between different states\label{ss-FFDS1}} If $\mathbb{C}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})\ne \bigl(\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})\bigr)^\dagger$, then we introduce a row-vector $\Omega$ with the following components: \be{def-Omega} \begin{array}{l} {\displaystyle \Omega_k=\prod\limits_{\ell=1}^a(u^{\scriptscriptstyle C}_k-u^{\scriptscriptstyle B}_\ell) \prod\limits_{\ell=1\atop{\ell\ne k}}^a(u^{\scriptscriptstyle C}_k-u^{\scriptscriptstyle C}_\ell)^{-1},\qquad k=1,\dots,a,}\\\rule{0pt}{20pt} {\displaystyle \Omega_{a+k}=\prod\limits_{m=1}^b(v^{\scriptscriptstyle B}_k-v^{\scriptscriptstyle C}_m) \prod\limits_{m=1\atop{m\ne k}}^b(v^{\scriptscriptstyle B}_k-v^{\scriptscriptstyle B}_m)^{-1},\qquad k=1,\dots,b.} \end{array} \end{equation} Obviously there exists an integer $p\in\{1,\dots,a+b\}$ such that $\Omega_p\ne 0$. Let $p$ be fixed.
Then for $j\ne p$ we define the entries $\mathcal{N}^{(s)}_{j,k}$ of the $(a+b)\times(a+b)$ matrix $\mathcal{N}^{(s)}$ as \be{FF-P11} \mathcal{N}^{(s)}_{j,k}= c\,g^{-1}(w_k,\bar{u}^{\scriptscriptstyle C})\,g^{-1}(\bar{v}^{\scriptscriptstyle C},w_k) \frac{\partial \tau(w_k|\bar{u}^{\scriptscriptstyle C},\bar{v}^{\scriptscriptstyle C})}{\partial u^{\scriptscriptstyle C}_j},\qquad j=1,\dots,a,\quad j\ne p, \end{equation} and \be{FF-P22} \mathcal{N}^{(s)}_{a+j,k}=-c\,g^{-1}(\bar{v}^{\scriptscriptstyle B},w_k )\,g^{-1}(w_k,\bar{u}^{\scriptscriptstyle B}) \frac{\partial \tau(w_k|\bar{u}^{\scriptscriptstyle B},\bar{v}^{\scriptscriptstyle B})}{\partial v^{\scriptscriptstyle B}_j},\qquad j=1,\dots,b,\quad a+j\ne p. \end{equation} In these formulas one should set $w_k=u^{\scriptscriptstyle B}_k$ for $k=1,\dots,a$ and $w_{k+a}=v^{\scriptscriptstyle C}_k$ for $k=1,\dots,b$. The $p$-th row has the following elements \be{Np} \mathcal{N}^{(s)}_{p,k}=h(\bar{v}^{\scriptscriptstyle C},w_k)h(w_k,\bar{u}^{\scriptscriptstyle B})Y^{(s)}_k, \end{equation} where again $w_k=u^{\scriptscriptstyle B}_k$ for $k=1,\dots,a$ and $w_{k+a}=v^{\scriptscriptstyle C}_k$ for $k=1,\dots,b$, and \be{Y1} \begin{array}{l} {\displaystyle Y^{(s)}_k=c\,(\delta_{s1}-\delta_{s2})+(\delta_{s1}-\delta_{s3})u^{\scriptscriptstyle B}_k\left(1-\frac{f(\bar{v}^{\scriptscriptstyle B},u^{\scriptscriptstyle B}_k)}{f(\bar{v}^{\scriptscriptstyle C},u^{\scriptscriptstyle B}_k)}\right),\qquad k=1,\dots,a;}\\\rule{0pt}{20pt} {\displaystyle Y^{(s)}_{a+k}=c\,(\delta_{s3}-\delta_{s2})+(\delta_{s1}-\delta_{s3})(v^{\scriptscriptstyle C}_k+c)\left(1-\frac{f(v^{\scriptscriptstyle C}_k,\bar{u}^{\scriptscriptstyle C})}{f(v^{\scriptscriptstyle C}_k,\bar{u}^{\scriptscriptstyle B})}\right),\qquad k=1,\dots,b.} \end{array} \end{equation} Then \begin{multline}\label{FF-dif} \mathcal{F}_{a,b}^{(s)}(z|\bar{u}^{\scriptscriptstyle C},\bar{v}^{\scriptscriptstyle C};\bar{u}^{\scriptscriptstyle B},\bar{v}^{\scriptscriptstyle B})=\bigl(\tau(z|\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})-\tau(z|\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})\bigr)\\ \times \Omega^{-1}_{p}t(\bar{v}^{\scriptscriptstyle C},\bar{u}^{\scriptscriptstyle B}) \Delta'_a(\bar{u}^{\scriptscriptstyle C})\Delta_a(\bar{u}^{\scriptscriptstyle B})\Delta'_b(\bar{v}^{\scriptscriptstyle C})\Delta_b(\bar{v}^{\scriptscriptstyle B}) \det_{a+b}\mathcal{N}^{(s)}_{jk}. \end{multline} Note that the number $p$ of the modified row of the matrix $\mathcal{N}^{(s)}$ is arbitrary. The only condition is that $\Omega_p\ne 0$. It is worth mentioning that in the case considered in subsection~\ref{ss-AV1} the vector $\Omega$ becomes a zero vector. Therefore one cannot take the limit $\bar{u}^{\scriptscriptstyle C}=\bar{u}^{\scriptscriptstyle B}$ and $\bar{v}^{\scriptscriptstyle C}=\bar{v}^{\scriptscriptstyle B}$ in \eqref{FF-dif}. This should not be surprising, because the sets $\{\bar{u}^{\scriptscriptstyle C},\bar{v}^{\scriptscriptstyle C}\}$ and $\{\bar{u}^{\scriptscriptstyle B},\bar{v}^{\scriptscriptstyle B}\}$ are not sets of generic complex numbers, but fixed solutions of the Bethe equations. Evidently, one cannot consider the limit in which one solution goes to another. \section{Scalar product and form factors\label{S-SP-FF}} Let $\mathop{\rm tr} T_{\bar\kappa}(z)$ be the twisted transfer matrix and $\mathop{\rm tr} T(z)$ be the standard transfer matrix.
Consider \be{Qm} Q_{\bar\kappa}(z)=\mathbb{C}_{\bar\kappa}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C}) \bigl(\mathop{\rm tr} T_{\bar\kappa}(z)-\mathop{\rm tr} T(z)\bigr)\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B}), \end{equation} where $\mathbb{C}_{\bar\kappa}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})$ and $\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})$ are twisted and standard on-shell vectors respectively. Obviously \be{Qm-0} Q_{\bar\kappa}(z)=\mathbb{C}_{\bar\kappa}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})\sum_{j=1}^3(\kappa_j-1)T_{jj}(z) \mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B}), \end{equation} and therefore \be{Qm-00} \frac{d Q_{\bar\kappa}(z)}{d\kappa_s}\Bigl.\Bigr|_{\bar\kappa=1}= \mathbb{C}_{\bar\kappa}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})\Bigl.\Bigr|_{\bar\kappa=1}T_{ss}(z) \mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B}). \end{equation} Here $\bar\kappa=1$ means that $\kappa_j=1$ for $j=1,2,3$. Observe that after setting $\bar\kappa=1$ the vector $\mathbb{C}_{\bar\kappa}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})$ turns into the standard on-shell vector $\mathbb{C}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})$. Hence, we obtain the form factor of $T_{ss}(z)$ in the r.h.s. of \eqref{Qm-00} \be{Qm-FF} \frac{d Q_{\bar\kappa}(z)}{d\kappa_s}\Bigl.\Bigr|_{\bar\kappa=1}= \mathcal{F}^{(s)}(z|\bar{u}^{\scriptscriptstyle C},\bar{v}^{\scriptscriptstyle C};\bar{u}^{\scriptscriptstyle B},\bar{v}^{\scriptscriptstyle B}). \end{equation} On the other hand \be{Qm-1} Q_{\bar\kappa}(z)=\bigl(\tau_{\bar\kappa}(z|\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})-\tau(z|\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})\bigr)\;\mathbb{C}_{\bar\kappa}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B}), \end{equation} where $\tau_{\bar\kappa}(z|\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})$ and $\tau(z|\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})$ are the eigenvalues of $\mathop{\rm tr} T_{\bar\kappa}(z)$ and $\mathop{\rm tr} T(z)$ respectively. Consider the case when $\mathbb{C}_{\bar\kappa}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})\bigl.\bigr|_{\bar\kappa=1}= \bigl(\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})\bigr)^\dagger$, that is $\bar{u}^{\scriptscriptstyle C}=\bar{u}^{\scriptscriptstyle B}=\bar u$ and $\bar{v}^{\scriptscriptstyle C}=\bar{v}^{\scriptscriptstyle B}=\bar v$ at $\bar\kappa=1$. Then taking the derivative of \eqref{Qm-1} with respect to $\kappa_s$ at $\bar\kappa=1$ we find \be{Qm-2} \mathcal{F}^{(s)}(z|\bar u,\bar v;\bar u,\bar v)= \|\mathbb{B}^{a,b}(\bar u;\bar v)\|^2\;\frac{d \tau_{\bar\kappa}(z|\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})}{d\kappa_s}\Bigl.\Bigr|_{\bar\kappa=1}, \end{equation} and one should set $\bar{u}^{\scriptscriptstyle C}=\bar u$ and $\bar{v}^{\scriptscriptstyle C}=\bar v$ after taking the derivative of $\tau_{\bar\kappa}(z|\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})$ with respect to $\kappa_s$. Note that here we take total derivative with respect to $\kappa_s$. 
Therefore, differentiating the eigenvalue $\tau_{\bar\kappa}(z|\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})$ we should also differentiate $\bar{u}^{\scriptscriptstyle C}$ and $\bar{v}^{\scriptscriptstyle C}$ with respect to $\kappa_s$, as these parameters implicitly depend on $\kappa_s$ through the twisted Bethe equations \eqref{ATEigenS-1}. If $\mathbb{C}_{\bar\kappa}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})\bigl.\bigr|_{\bar\kappa=1}\ne \bigl(\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})\bigr)^\dagger$, then the scalar product in the r.h.s. of \eqref{Qm-1} vanishes at $\bar\kappa=1$ (as a scalar product of two different on-shell vectors), and we obtain \be{Qm-3} \frac{d Q_{\bar\kappa}(z)}{d\kappa_s}\Bigl.\Bigr|_{\bar\kappa=1}=\bigl(\tau(z|\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})-\tau(z|\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})\bigr)\; \frac{d}{d\kappa_s}\, \left(\mathbb{C}_{\bar\kappa}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})\right)\Bigl.\Bigr|_{\bar\kappa=1}. \end{equation} Thus, the form factor of $T_{ss}(z)$ between two different on-shell vectors is proportional to the $\kappa_s$-derivative of the scalar product of the twisted and standard on-shell vectors \be{Qm-4} \mathcal{F}_{a,b}^{(s)}(z|\bar{u}^{\scriptscriptstyle C},\bar{v}^{\scriptscriptstyle C};\bar{u}^{\scriptscriptstyle B},\bar{v}^{\scriptscriptstyle B})=\bigl(\tau(z|\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})-\tau(z|\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})\bigr)\; \frac{d}{d\kappa_s}\, \left(\mathbb{C}_{\bar\kappa}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})\right)\Bigl.\Bigr|_{\bar\kappa=1}. \end{equation} Observe that after taking the $\kappa_s$-derivative of the scalar product one should set $\bar\kappa=1$. Hence, for the calculation of form factors it is sufficient to compute the scalar product up to the terms $(\kappa_i-1)(\kappa_j-1)$, where $i,j=1,2,3$. \section{Calculation of the form factors \label{S-D}} \subsection{Average value of $T_{ss}(z)$\label{ss-AV2}} In this section we assume that $\bar{u}^{\scriptscriptstyle C}=\bar{u}^{\scriptscriptstyle B}=\bar u$ and $\bar{v}^{\scriptscriptstyle C}=\bar{v}^{\scriptscriptstyle B}=\bar v$ at $\bar\kappa=1$. As we have shown in the previous section, the form factor $\mathcal{F}_{a,b}^{(s)}(z|\bar u,\bar v;\bar u,\bar v)$ is equal to the norm of the on-shell vector $\|\mathbb{B}^{a,b}(\bar u;\bar v)\|^2$ multiplied by the derivative of the twisted transfer matrix eigenvalue with respect to $\kappa_s$ (see \eqref{Qm-2}). The norm of the on-shell Bethe vector was calculated in \cite{Res86} (see also \cite{BelPRS12b}) \be{Norm} \|\mathbb{B}^{a,b}(\bar u;\bar v)\|^2=H_{a,b}\det_{a+b}\theta, \end{equation} where $H_{a,b}$ is given by \eqref{Hab} and $\theta_{j,k}$ is defined in \eqref{theta}, where one should set $\bar{u}^{\scriptscriptstyle C}=\bar u$ and $\bar{v}^{\scriptscriptstyle C}=\bar v$. 
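In practical applications of \eqref{Norm} (and of the determinant formulas of section~\ref{S-R}) the matrix $\theta$ has to be evaluated on a concrete numerical solution of the Bethe equations. The following sketch is only meant to illustrate how this could be organized; it assumes that the on-shell roots $\bar u$, $\bar v$ and the functional parameters $r_1$, $r_3$ are supplied by the user, and it replaces the analytic derivatives of \eqref{Phi-1}, \eqref{Phi-2} by central finite differences (the routine names are arbitrary).
\begin{verbatim}
import numpy as np

def Phi(u, v, r1, r3, c=1.0):
    # Left-hand sides of the logarithmic Bethe equations (Phi-1), (Phi-2)
    # for given rapidities u (a of them) and v (b of them); r1, r3 are the
    # functional parameters r_{1,3}(w), supplied by the user.
    f = lambda x, y: (x - y + c) / (x - y)
    a, b = len(u), len(v)
    out = np.empty(a + b, dtype=complex)
    for j in range(a):
        out[j] = (np.log(r1(u[j]))
                  - sum(np.log(f(u[j], u[k]) / f(u[k], u[j]))
                        for k in range(a) if k != j)
                  - sum(np.log(f(vk, u[j])) for vk in v))
    for j in range(b):
        out[a + j] = (np.log(r3(v[j]))
                      - sum(np.log(f(v[k], v[j]) / f(v[j], v[k]))
                            for k in range(b) if k != j)
                      - sum(np.log(f(v[j], uk)) for uk in u))
    return out

def theta_matrix(u, v, r1, r3, c=1.0, eps=1e-6):
    # Jacobian (theta) of Phi with respect to (u, v), computed by central
    # finite differences as a stand-in for the analytic derivatives.
    w0 = np.array(list(u) + list(v), dtype=complex)
    a, n = len(u), len(w0)
    th = np.empty((n, n), dtype=complex)
    for k in range(n):
        wp, wm = w0.copy(), w0.copy()
        wp[k] += eps
        wm[k] -= eps
        th[:, k] = (Phi(wp[:a], wp[a:], r1, r3, c)
                    - Phi(wm[:a], wm[a:], r1, r3, c)) / (2 * eps)
    return th
\end{verbatim}
The norm \eqref{Norm} is then obtained as $H_{a,b}\,\det\theta$, with the prefactor $H_{a,b}$ of \eqref{Hab} assembled from the same $f$-functions.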
The total derivative of $\tau_{\bar\kappa}(z|\bar{u}^{\scriptscriptstyle C},\bar{v}^{\scriptscriptstyle C})$ with respect to $\kappa_s$ at $\bar\kappa=1$ and $\bar{u}^{\scriptscriptstyle C}=\bar u$, $\bar{v}^{\scriptscriptstyle C}=\bar v$ is \be{tot-der} \left.\frac{d\tau_{\bar\kappa}(z|\bar{u}^{\scriptscriptstyle C},\bar{v}^{\scriptscriptstyle C})}{d\kappa_s}\right|_{\bar\kappa=1}= \left(\frac{\partial\tau_{\bar\kappa}(z|\bar{u}^{\scriptscriptstyle C},\bar{v}^{\scriptscriptstyle C})}{\partial\kappa_s} +\sum_{\ell=1}^a\frac{\partial\tau(z|\bar u,\bar v)}{\partial u_\ell}\frac{du^{\scriptscriptstyle C}_\ell}{d\kappa_s} +\sum_{m=1}^b\frac{\partial\tau(z|\bar u,\bar v)}{\partial v_m}\frac{dv^{\scriptscriptstyle C}_m}{d\kappa_s}\right)_{\bar\kappa=1}. \end{equation} In order to compute the derivatives $d\bar{u}^{\scriptscriptstyle C}/d\kappa_s$ and $d\bar{v}^{\scriptscriptstyle C}/d\kappa_s$ at $\bar\kappa=1$ we differentiate \eqref{Log-TBE}: \be{system} \begin{array}{l} {\displaystyle\sum_{\ell=1}^a \theta_{j,\ell}\frac{d u^{\scriptscriptstyle C}_\ell}{d\kappa_s}+ \sum_{m=1}^b \theta_{j,m+a}\frac{d v^{\scriptscriptstyle C}_m}{d\kappa_s}=\delta_{s2}-\delta_{s1},\qquad j=1,\dots,a,}\\\rule{0pt}{20pt} {\displaystyle\sum_{\ell=1}^a \theta_{j+a,\ell}\frac{d u^{\scriptscriptstyle C}_\ell}{d\kappa_s}+ \sum_{m=1}^b \theta_{j+a,m+a}\frac{d v^{\scriptscriptstyle C}_m}{d\kappa_s}=\delta_{s2}-\delta_{s3},\qquad j=1,\dots,b.} \end{array} \end{equation} From this system we find \be{sol-system} \begin{array}{l} {\displaystyle \frac{d u^{\scriptscriptstyle C}_j}{d\kappa_s}= (\delta_{s2}-\delta_{s1})\sum_{\ell=1}^a (\theta^{-1})_{j,\ell}+ (\delta_{s2}-\delta_{s3})\sum_{m=1}^b (\theta^{-1})_{j,m+a},}\\\rule{0pt}{20pt} {\displaystyle \frac{d v^{\scriptscriptstyle C}_j}{d\kappa_s}= (\delta_{s2}-\delta_{s1})\sum_{\ell=1}^a (\theta^{-1})_{j+a,\ell}+ (\delta_{s2}-\delta_{s3})\sum_{m=1}^b (\theta^{-1})_{j+a,m+a}.} \end{array} \end{equation} Substituting \eqref{sol-system} into \eqref{Qm-2} and \eqref{tot-der} we obtain \begin{multline}\label{norm-tau} \mathcal{F}_{a,b}^{(s)}(z|\bar u,\bar v;\bar u,\bar v)=H_{a,b} \det_{a+b}\theta \;\Biggl\{ \frac{\partial\tau_{\bar\kappa}(z|\bar{u}^{\scriptscriptstyle C},\bar{v}^{\scriptscriptstyle C})}{\partial\kappa_s} \\ +\sum_{\ell=1}^a\frac{\partial\tau(z|\bar u,\bar v)}{\partial u_\ell} \left[(\delta_{s2}-\delta_{s1})\sum_{\ell'=1}^a (\theta^{-1})_{\ell,\ell'}+ (\delta_{s2}-\delta_{s3})\sum_{m'=1}^b (\theta^{-1})_{\ell,m'+a}\right]\\ +\sum_{m=1}^b\frac{\partial\tau(z|\bar u,\bar v)}{\partial v_m} \left[(\delta_{s2}-\delta_{s1})\sum_{\ell'=1}^a (\theta^{-1})_{m+a,\ell'}+ (\delta_{s2}-\delta_{s3})\sum_{m'=1}^b (\theta^{-1})_{m+a,m'+a}\right]\Biggr\}_{\bar\kappa=1}. \end{multline} Let $\widehat\theta_{j,k}$ be the cofactor of the matrix element $\theta_{j,k}$ in the matrix $\theta$.
Then $\widehat\theta_{j,k} =(\theta^{-1})_{k,j}\,\det_{a+b}\theta$, and \eqref{norm-tau} turns into \begin{multline}\label{norm-tau1} \mathcal{F}_{a,b}^{(s)}(z|\bar u,\bar v;\bar u,\bar v)=H_{a,b} \;\Biggl\{ \det_{a+b}\theta\; \frac{\partial\tau_{\bar\kappa}(z|\bar{u}^{\scriptscriptstyle C},\bar{v}^{\scriptscriptstyle C})}{\partial\kappa_s} \\ +\sum_{\ell=1}^a\frac{\partial\tau(z|\bar u,\bar v)}{\partialu_\ell} \left[(\delta_{s2}-\delta_{s1})\sum_{\ell'=1}^a \widehat\theta_{\ell',\ell}+ (\delta_{s2}-\delta_{s3})\sum_{m'=1}^b \widehat\theta_{m'+a,\ell}\right]\\ +\sum_{m=1}^b\frac{\partial\tau(z|\bar u,\bar v)}{\partialv_m} \left[(\delta_{s2}-\delta_{s1})\sum_{\ell'=1}^a \widehat\theta_{\ell',m+a}+ (\delta_{s2}-\delta_{s3})\sum_{m'=1}^b \widehat\theta_{m'+a,m+a}\right]\Biggr\}_{\bar\kappa=1}. \end{multline} On the other hand, developing $\det_{a+b+1}\Theta^{(s)}$ over the last row and the last column, one has \begin{equation}\label{Dev-Theta} \det_{a+b+1}\Theta^{(s)}=\Theta^{(s)}_{a+b+1,a+b+1}\det_{a+b}\theta-\sum_{j=1}^{a+b}\sum_{k=1}^{a+b} \Theta^{(s)}_{j,a+b+1}\Theta^{(s)}_{a+b+1,k}\;\widehat\theta_{j,k}\;. \end{equation} Substituting here the explicit expressions \eqref{Theta} for the entries of the matrix $\Theta^{(s)}$, we immediately reproduce \eqref{norm-tau1}. In this way we prove \eqref{average-Tss}. Observe that \be{sum-Th-s} \sum_{s=1}^3\Theta^{(s)}_{j,a+b+1}=0,\quad\text{for}\quad j=1,\dots,a+b. \end{equation} This implies \begin{multline}\label{sum-FF-s} \mathbb{C}^{a,b}(\bar u;\bar v)\mathop{\rm tr} T(z)\mathbb{B}^{a,b}(\bar u;\bar v)=H_{a,b} \det_{a+b}\theta \sum_{s=1}^3\left.\frac{\partial\tau_{\bar\kappa}(z|\bar{u}^{\scriptscriptstyle C},\bar{v}^{\scriptscriptstyle C})}{\partial\kappa_s}\right|_{\bar{u}^{\scriptscriptstyle C}=\bar u\atop{\bar{v}^{\scriptscriptstyle C}=\bar v}}\\ =\tau(z|\bar u,\bar v)H_{a,b} \det_{a+b}\theta=\tau(z|\bar u,\bar v)\|\mathbb{B}^{a,b}(\bar u;\bar v)\|^2, \end{multline} as it should. One can also easily check that at $a=0$ or $b=0$, equation \eqref{average-Tss} reproduces known results for $SU(2)$ form factors \cite{KitMaiT99}. \subsection{The scalar product of twisted and standard on-shell vectors\label{ss-SP-TS}} In order to compute the form factor of $T_{ss}(z)$ between different on-shell Bethe vectors we should calculate the scalar product of the twisted on-shell vector and the standard on-shell vector (see \eqref{Qm-4}). The main steps of this derivation almost literally repeat the ones described in the work \cite{BelPRS12b} for the particular case of the twist matrix. We start with the general formula for the scalar product of two Bethe vectors obtained by N. Reshetikhin in the work \cite{Res86}. Then we successively take the sums over partitions of the arguments of Bethe vectors. The reader can find the details of this very onerous derivation in \cite{BelPRS12b}. The only essential difference is that now we need a generalization of lemma~6.3 of \cite{BelPRS12b}. \begin{lemma}\label{New-Lemma} Let $\zeta$ be a constant. 
Define $G_n(\zeta)$ as \be{def-Gg} G_n(\zeta)=\sum \zeta^{n_{\scriptscriptstyle \rm I\hspace{-1pt}I}}f(\bar\xi_{\scriptscriptstyle \rm I},\bar\xi_{\scriptscriptstyle \rm I\hspace{-1pt}I})f(\bar\eta_{\scriptscriptstyle \rm I\hspace{-1pt}I},\bar\eta_{\scriptscriptstyle \rm I}){\sf K}_{n_{\scriptscriptstyle \rm I}}(\bar\eta_{\scriptscriptstyle \rm I}|\bar\xi_{\scriptscriptstyle \rm I}) {\sf K}_{n_{\scriptscriptstyle \rm I\hspace{-1pt}I}}(\bar\xi_{\scriptscriptstyle \rm I\hspace{-1pt}I}+c|\bar\eta_{\scriptscriptstyle \rm I\hspace{-1pt}I}), \end{equation} where $n=n_{\scriptscriptstyle \rm I}+n_{\scriptscriptstyle \rm I\hspace{-1pt}I}$, and the sum is taken over all partitions of the set $\bar\eta$ into subsets $\bar\eta_{\scriptscriptstyle \rm I},\bar\eta_{\scriptscriptstyle \rm I\hspace{-1pt}I}$ and the set $\bar\xi$ into subsets $\bar\xi_{\scriptscriptstyle \rm I},\bar\xi_{\scriptscriptstyle \rm I\hspace{-1pt}I}$ with cardinalities $\#\bar\eta_{\scriptscriptstyle \rm I}=\#\bar\xi_{\scriptscriptstyle \rm I}=n_{\scriptscriptstyle \rm I}$, $0\leq n_{\scriptscriptstyle \rm I}\leq n$, and $\#\bar\eta_{\scriptscriptstyle \rm I\hspace{-1pt}I}=\#\bar\xi_{\scriptscriptstyle \rm I\hspace{-1pt}I}=n_{\scriptscriptstyle \rm I\hspace{-1pt}I} = n-n_{\scriptscriptstyle \rm I}$. The functions ${\sf K}_{n_{\scriptscriptstyle \rm I}}$ and ${\sf K}_{n_{\scriptscriptstyle \rm I\hspace{-1pt}I}}$ are the DWPF \eqref{K-def}. Then \be{res-Gg} G_n(\zeta)=(-1)^n\zeta^{\frac{\bar\eta-\bar\xi}c}\; t(\bar\xi,\bar\eta)h(\bar\eta,\bar\eta)h(\bar\xi,\bar\xi)+ O\bigl((\zeta-1)^2\bigr), \end{equation} where we have used the shorthand notation \be{sh-not} \zeta^{\frac{\bar\eta-\bar\xi}c}=\prod_{j=1}^n\zeta^{\frac{\eta_j-\xi_j}c}. \end{equation} \end{lemma} The proof is given in appendix~\ref{A-NF}. It turns out that in our case, $\zeta=\kappa_1/\kappa_3$. In the work \cite{BelPRS12b} we considered the case $\kappa_1=\kappa_3$. Therefore we succeeded in calculating the sum \eqref{def-Gg} exactly. In the case of the general twist matrix we have $\kappa_1\ne \kappa_3$, and hence, $\zeta\ne1$. Thus, generically the function $G_n(\zeta)$ is a polynomial in $\zeta$. Using lemma~(6.1) of \cite{BelPRS12b} one can take the sum in \eqref{def-Gg} with respect to the partitions of one set of variables, for instance, \be{ParSum-Gg} G_n(\zeta)=\sum \zeta^{n_{\scriptscriptstyle \rm I\hspace{-1pt}I}}(-1)^{n_{\scriptscriptstyle \rm I}}f(\bar\xi_{\scriptscriptstyle \rm I},\bar\xi_{\scriptscriptstyle \rm I\hspace{-1pt}I})f(\bar\eta,\bar\xi_{\scriptscriptstyle \rm I}) {\sf K}_{n}(\{\bar\xi_{\scriptscriptstyle \rm I}-c,\bar\xi_{\scriptscriptstyle \rm I\hspace{-1pt}I}+c\}|\bar\eta). \end{equation} Here the sum is taken only over partitions of the set $\bar\xi$ into subsets $\bar\xi_{\scriptscriptstyle \rm I},\bar\xi_{\scriptscriptstyle \rm I\hspace{-1pt}I}$. However it is doubtful that further simplifications of equation \eqref{ParSum-Gg} are possible. This is a serious obstacle for the derivation of a determinant representation for the scalar product involving twisted on-shell vectors with a general twist. On the other hand, in order to calculate form factors, we should find only the first $\kappa_s$-derivatives of the scalar product at $\bar\kappa=1$. Therefore we do not need an exact result for $G_n(\zeta)$, since the terms $O\bigl((\zeta-1)^2\bigr)$ are not relevant. 
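To make the statement more concrete, it is instructive to check the simplest case $n=1$ explicitly (a pure illustration; the general proof is given in appendix~\ref{A-NF}). For $\bar\eta=\{\eta\}$, $\bar\xi=\{\xi\}$ the sum \eqref{def-Gg} contains only two terms, and since ${\sf K}_1(x|y)=h(x,y)t(x,y)=g(x,y)$, \be{G1-check} G_1(\zeta)=g(\eta,\xi)+\zeta\, g(\xi+c,\eta)=\frac{c}{\eta-\xi}+\zeta\,\frac{c}{\xi-\eta+c}, \end{equation} while the right-hand side of \eqref{res-Gg} reduces to $-\zeta^{(\eta-\xi)/c}\,t(\xi,\eta)$, because $h(\eta,\eta)=h(\xi,\xi)=1$. At $\zeta=1$ both expressions are equal to $c^2/\bigl((\eta-\xi)(\xi-\eta+c)\bigr)$, and their first derivatives with respect to $\zeta$ at $\zeta=1$ are both equal to $c/(\xi-\eta+c)$, in accordance with \eqref{res-Gg}. For higher $n$ the expansion \eqref{res-Gg} can be tested numerically by brute force. The following script is a minimal illustrative sketch: it encodes only the definitions \eqref{fg}, \eqref{univ-not}, \eqref{K-def}, \eqref{def-Gg} and \eqref{res-Gg} (with ${\sf K}_0\equiv1$ for empty sets) and compares the two sides at generic sample points; the routine names and sample values are arbitrary.
\begin{verbatim}
import numpy as np
from itertools import combinations

c = 1.0                                   # any nonzero constant will do
g = lambda x, y: c / (x - y)
f = lambda x, y: (x - y + c) / (x - y)
h = lambda x, y: (x - y + c) / c
t = lambda x, y: c**2 / ((x - y) * (x - y + c))

def prod(fun, xs, ys):
    # shorthand product fun(xbar, ybar) over all pairs, cf. (SH-prod)
    return np.prod([fun(x, y) for x in xs for y in ys]) if xs and ys else 1.0

def K(xs, ys):
    # domain-wall partition function (K-def); K_0 = 1 for empty sets
    n = len(xs)
    if n == 0:
        return 1.0
    Dp = np.prod([g(xs[j], xs[k]) for j in range(n) for k in range(j)])
    D = np.prod([g(ys[j], ys[k]) for j in range(n) for k in range(j + 1, n)])
    return Dp * D * prod(h, xs, ys) * np.linalg.det(
        np.array([[t(x, y) for y in ys] for x in xs]))

def G_lhs(z, eta, xi):
    # the sum over partitions in (def-Gg)
    n, total = len(eta), 0.0
    for nI in range(n + 1):
        for eI in combinations(range(n), nI):
            for xI in combinations(range(n), nI):
                eII = [eta[j] for j in range(n) if j not in eI]
                xII = [xi[j] for j in range(n) if j not in xI]
                eI_, xI_ = [eta[j] for j in eI], [xi[j] for j in xI]
                total += (z**(n - nI) * prod(f, xI_, xII) * prod(f, eII, eI_)
                          * K(eI_, xI_) * K([x + c for x in xII], eII))
    return total

def G_rhs(z, eta, xi):
    # the leading term on the right-hand side of (res-Gg)
    return ((-1)**len(eta) * z**((sum(eta) - sum(xi)) / c)
            * prod(t, xi, eta) * prod(h, eta, eta) * prod(h, xi, xi))

eta, xi = [0.31, -0.42], [0.11, 0.57]     # generic sample points, n = 2
for z in (1.0, 1.01):
    print(z, G_lhs(z, eta, xi), G_rhs(z, eta, xi))
# by (res-Gg) the two sides should coincide at z = 1
# and differ only by corrections of order (z - 1)^2
\end{verbatim}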
As we have pointed out, in all other respects the derivation of the determinant representation for the scalar product of twisted and standard on-shell Bethe vectors literally repeats the derivation described in \cite{BelPRS12b}. The result reads \begin{equation}\label{fin0} \mathbb{C}_{\bar\kappa}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})= t(\bar{v}^{\scriptscriptstyle C},\bar{u}^{\scriptscriptstyle B}) \Delta'_a(\bar{u}^{\scriptscriptstyle C}) \Delta'_b(\bar{v}^{\scriptscriptstyle B}) {\Delta}_a(\bar{u}^{\scriptscriptstyle B}){\Delta}_b(\bar{v}^{\scriptscriptstyle C})\det_{a+b}\mathcal{N}+O\bigl((\kappa_3/\kappa_1-1)^2\bigr), \end{equation} where $\Delta'$ and $\Delta$ are defined in \eqref{def-Del}. In order to describe the $(a+b)\times(a+b)$ matrix $\mathcal{N}$ we first introduce a column-vector $\widehat{\mathcal{N}}_j(w)$ with the components \be{SP-P11} \widehat{\mathcal{N}}_j(w)= c\,g^{-1}(w,\bar{u}^{\scriptscriptstyle C})g^{-1}(\bar{v}^{\scriptscriptstyle C},w) \frac{\partial \tau_{\bar\kappa}(w|\bar{u}^{\scriptscriptstyle C},\bar{v}^{\scriptscriptstyle C})}{\partial u^{\scriptscriptstyle C}_j},\qquad j=1,\dots,a, \end{equation} % and \be{SP-P22} \widehat{\mathcal{N}}_{j+a}(w)=-c\,g^{-1}(\bar{v}^{\scriptscriptstyle B},w )g^{-1}(w,\bar{u}^{\scriptscriptstyle B}) \frac{\partial \tau(w|\bar{u}^{\scriptscriptstyle B},\bar{v}^{\scriptscriptstyle B})}{\partial v^{\scriptscriptstyle B}_j},\qquad j=1,\dots,b. \end{equation} % Then \begin{equation}\label{block-matrix} \begin{array}{l} {\displaystyle \mathcal{N}_{j,k}=\widehat{\mathcal{N}}_{j}(u^{\scriptscriptstyle B}_k),\qquad j,k=1,\dots,a;}\\\rule{0pt}{20pt} {\displaystyle \mathcal{N}_{j,k}=\widehat{\mathcal{N}}_{j}(v^{\scriptscriptstyle C}_{k-a})\left(\frac{\kappa_3}{\kappa_1}\right)^{v^{\scriptscriptstyle C}_{k-a}/c}, \qquad j=1,\dots,a,\quad k=a+1,\dots,a+b;}\\\rule{0pt}{20pt} {\displaystyle \mathcal{N}_{j,k}=\widehat{\mathcal{N}}_{j}(u^{\scriptscriptstyle B}_k)\left(\frac{\kappa_3}{\kappa_1}\right)^{-u^{\scriptscriptstyle B}_{k}/c}, \qquad j=a+1,\dots,a+b,\quad k=1,\dots,a;}\\\rule{0pt}{20pt} {\displaystyle \mathcal{N}_{j,k}=\widehat{\mathcal{N}}_{j}(v^{\scriptscriptstyle C}_{k-a}),\qquad j,k=a+1,\dots,a+b.} \end{array} \end{equation} Comparing the entries of the matrix \eqref{block-matrix} with the ones obtained in \cite{BelPRS12b} one can see additional factors $\left(\kappa_3/\kappa_1\right)^{v^{\scriptscriptstyle C}_{k}/c}$ and $\left(\kappa_3/\kappa_1\right)^{-u^{\scriptscriptstyle B}_{k}/c}$ in the off-diagonal blocks. These terms are due to the factor $\zeta^{(\bar\eta-\bar\xi)/c}$ in lemma~\ref{New-Lemma}. \subsection{The form factor of $T_{ss}(z)$ between different states\label{ss-FFDS2}} In order to obtain form factors one has to take $\kappa_s$-derivatives of the scalar product at $\bar\kappa=1$. Taking into account that the parameters $\bar{u}^{\scriptscriptstyle C}$ and $\bar{v}^{\scriptscriptstyle C}$ depend on $\bar\kappa$ through the twisted Bethe equations, it might be rather difficult to obtain an explicit expression for the derivatives of $\det_{a+b}\mathcal{N}$. However, as was shown in \cite{BelPRS12b}, the matrix $\mathcal{N}$ has an eigenvector with zero eigenvalue at $\bar\kappa=1$. This fact can be used for significant simplification of our calculations. The components of the zero eigenvector are given by \eqref{def-Omega}.
If $\mathbb{C}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})\ne \bigl(\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})\bigr)^\dagger$ then this vector has at least one non-zero component, say $\Omega_p$. Then we can add to the $p$-th row of the matrix $\mathcal{N}$ all other rows multiplied by the coefficients $\Omega_j/\Omega_{p}$. Such sums were already calculated in \cite{BelPRS12b}. Then the $p$-th row is modified as follows: \be{LR-l} \mathcal{N}_{p,k}=c\Omega^{-1}_{p}h(\bar{v}^{\scriptscriptstyle C},u^{\scriptscriptstyle B}_k)h(u^{\scriptscriptstyle B}_k,\bar{u}^{\scriptscriptstyle B})\left[\frac{f(\bar{v}^{\scriptscriptstyle B},u^{\scriptscriptstyle B}_k)}{f(\bar{v}^{\scriptscriptstyle C},u^{\scriptscriptstyle B}_k)} \left(1-\left(\frac{\kappa_1}{\kappa_3}\right)^{u^{\scriptscriptstyle B}_k/c}\right) +\left(\frac{\kappa_1}{\kappa_3}\right)^{u^{\scriptscriptstyle B}_k/c}-\frac{\kappa_2}{\kappa_1}\right], \end{equation} for $k=1,\dots,a$, and \be{LR-r} \mathcal{N}_{p,a+k}=c\Omega^{-1}_{p}h(\bar{v}^{\scriptscriptstyle C},v^{\scriptscriptstyle C}_k)h(v^{\scriptscriptstyle C}_k,\bar{u}^{\scriptscriptstyle B})\left[\frac{f(v^{\scriptscriptstyle C}_k,\bar{u}^{\scriptscriptstyle C})}{f(v^{\scriptscriptstyle C}_k,\bar{u}^{\scriptscriptstyle B})} \left(\frac{\kappa_2}{\kappa_1}\left(\frac{\kappa_3}{\kappa_1}\right)^{v^{\scriptscriptstyle C}_k/c}-\frac{\kappa_2}{\kappa_3}\right) +1-\frac{\kappa_2}{\kappa_1}\left(\frac{\kappa_3}{\kappa_1}\right)^{v^{\scriptscriptstyle C}_k/c}\right], \end{equation} for $k=1,\dots,b$. Obviously $\mathcal{N}_{p,k}=0$ at $\bar\kappa=1$. Hence, when we take derivatives with respect to $\kappa_s$ we have to differentiate only the $p$-th row, setting $\bar\kappa=1$ everywhere else. We also do not need to take derivatives of $\bar{u}^{\scriptscriptstyle C}$ and $\bar{v}^{\scriptscriptstyle C}$ with respect to $\kappa_s$, since they produce zero contributions at $\bar\kappa=1$. Thus, differentiating \eqref{LR-l}, \eqref{LR-r} with respect to $\kappa_s$ we arrive at \eqref{Y1}. In all other rows of the matrix $\mathcal{N}$ we simply set $\bar\kappa=1$. In this way we reproduce equations \eqref{FF-P11} and \eqref{FF-P22}. Observe that \be{sum-Y-s} \sum_{s=1}^3 Y^{(s)}_{k}=0,\quad\text{for}\quad k=1,\dots,a+b, \end{equation} where $Y^{(s)}_{k}$ are given by \eqref{Y1}. Hence, \begin{equation}\label{sum-FF1-s} \mathbb{C}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})\mathop{\rm tr} T(z)\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})=0, \end{equation} as it should, if $\mathbb{C}^{a,b}(\bar{u}^{\scriptscriptstyle C};\bar{v}^{\scriptscriptstyle C})\ne \bigl(\mathbb{B}^{a,b}(\bar{u}^{\scriptscriptstyle B};\bar{v}^{\scriptscriptstyle B})\bigr)^\dagger$. Known formulas for form factors in $SU(2)$ case \cite{KitMaiT99} also can be obtained from \eqref{FF-dif} by setting $a=0$ or $b=0$. \section*{Conclusion} We have mentioned already that the problem of calculation of form factors and correlation functions in the framework of the algebraic Bethe ansatz can be reduced to that of the calculation of scalar products of Bethe vectors. In particular, determinant representations for scalar products of on-shell vectors and generic Bethe vectors play a very important role. 
Such determinant representations for $\frak{gl}_2$-based models are known \cite{Sla89,Sla07} and they were used for analytical \cite{KitMT00,KitMST02,KitKMST09b,GohKS04,GohKS05,SeeBGK07} and numerical \cite{CauHM05,PerSCHMWA06,PerSCHMWA07,CauCS07} analysis of correlation functions. However, in the case of higher rank algebras the situation is more involved. We argued in \cite{BelPRS12b} that in the $SU(3)$ case a determinant representation for the scalar product of an on-shell vector and an arbitrary Bethe vector is unlikely to be found. In the present paper we obtained an additional argument in favor of this conjecture. Indeed, even in the particular case of a Bethe vector, namely a twisted on-shell vector, we were able to obtain the determinant representation for the scalar product only up to the terms of order $(\kappa_1/\kappa_3-1)^2$. In order to obtain an exact result one should find a closed expression for the function $G_n(\zeta)$ defined in lemma~\ref{New-Lemma}. Nevertheless we succeeded in finding single determinant representations for form factors of $T_{ss}(w)$. These matrix elements describe form factors of diagonal operators in the $SU(3)$-invariant Heisenberg chain. For a complete description one should also obtain determinant formulas for the matrix elements of $T_{jk}(w)$ with $j\ne k$. We hope to study this problem in a future publication. \section*{Acknowledgements} The work of S.P. was supported in part by RFBR grant 11-01-00980-a, grant of Scientific Foundation of NRU HSE 12-09-0064 and grant of FASI RF 14.740.11.0347. E.R. was supported by ANR Project DIADEMS (Programme Blanc ANR SIMI1 2010-BLAN-0120-02). N.A.S. was supported by the Program of RAS Basic Problems of the Nonlinear Dynamics, RFBR-11-01-00440, RFBR-11-01-12037-ofi-m, SS-4612.2012.1.
\section{INTRODUCTION} \label{s1} It has been known for some time that the effects of heavy dynamical fermions can be included in Monte Carlo simulations by a hopping parameter expansion of the fermion determinant. This is reminiscent of the Euler-Heisenberg Lagrangian of perturbative QED, in which the effects of electron loops are included in a gauge-invariant effective Lagrangian. Recently Hasenfratz and DeGrand \cite{HaDe94,Lat93} have performed a zero-temperature calculation of the shift in the lattice gauge coupling constant $\beta$, defined as $\beta=2N_c/g^2$, induced by staggered dynamical fermions and applied the result to the finite temperature phase transition in QCD. Their result for the shift in the critical coupling, in the form $\beta_c = \beta^{\rm pure}_c - \Delta\beta_F$, was found to hold rather well down to small values of the fermion mass. It is convenient to work with a hopping parameter $\kappa$ defined by $\kappa={1 \over{2 m_F }}$, where $m_F$ is the fermion mass. As shown in Fig.\ \ref{f1}, the Hasenfratz-DeGrand results for $N_t = 4$ are in excellent agreement out to at least $\kappa = 2$, and in reasonable agreement at $\kappa = 5$; this is particularly surprising since $\Delta\beta_F$ is calculated using lattice perturbation theory at zero temperature. In order to understand the effects of finite temperature, we have calculated the one-loop fermionic corrections to the spatial and temporal plaquette couplings, as well as the leading $Z_N$ symmetry breaking coupling. \section{RENORMALIZATION OF $\beta$ AT FINITE TEMPERATURE} \label{s2} \subsection{Perturbation Theory for $\Delta \beta$} \label{ss2.1} The $O(A^2)$ term in the gauge field lattice action including the one-loop finite temperature fermionic correction is given by \begin{equation} S_{\rm eff} = - g^2 \sum_{p_0} {1\over {N_t}} \int_{-\pi}^{~\pi} {d^3\vec{p} \over {(2\pi)^3}} {\rm Tr}_c \Big\{ \tilde{A}_{\mu}(p) \tilde{A}_{\nu}(p) {\beta \over{2N_c}} \Bigl[ D_{\mu\nu}^{(0)}(p) + D_{\mu\nu}^{(1)}(p) - D_{\mu\nu}^{(2)} \Bigr] \Big\} \label{e2.1.1} \end{equation} where \begin{equation} D_{\mu\nu}^{(0)}(p) = 4 \Bigl[\delta_{\mu\nu} \sum_{\alpha} \sin^2\Bigl({p_{\alpha} \over 2}\Bigr) - \sin\Bigl({p_{\mu} \over 2}\Bigr) \sin\Bigl({p_{\nu} \over 2}\Bigr)\Bigr], \label{e2.1.2} \end{equation} \begin{equation} D_{\mu\nu}^{(1)}(p) = {1 \over 2} \sum_{k_0} {1 \over {N_t}} \int_{-\pi}^{~\pi} {d^3\vec{k} \over {(2\pi)^3}} {\rm Tr}_d \Bigl[ R \Bigl( k_{\mu}+ {p_{\mu} \over 2} \Bigr) S^{-1}(k) R \Bigl( k_{\nu}+{p_{\nu} \over 2} \Bigr) S^{-1}(k+p) \Bigr], \label{e2.1.3} \end{equation} and \begin{equation} D_{\mu\nu}^{(2)} = {1 \over 2} \delta_{\mu\nu} \sum_{k_0} {1 \over {N_t}} \int_{-\pi}^{~\pi} {d^3\vec{k} \over {(2\pi)^3}} {\rm Tr}_d \Bigl[ Q(k_{\mu}) S^{-1}(k) \Bigr] \label{e2.1.4} \end{equation} with the vertex functions given by \begin{equation} R(k_{\mu}) = i\gamma_{\mu}\cos(k_{\mu}) \label{e2.1.5} \end{equation} and \begin{equation} Q(k_{\mu}) = -i\gamma_{\mu}\sin(k_{\mu}) \label{e2.1.6} \end{equation} with no sum over $\mu$. The inverse fermion propagator is \begin{equation} S(k) = {1 \over {2\kappa}} + i\gamma_{\mu}\sin(k_{\mu}). \label{e2.1.7} \end{equation} This formula is a straightforward consequence of the lattice Feynman rules, which are given in Fig.\ \ref{f2}. The diagrams contributing to the fermionic renormalization of $\Delta \beta$ are shown in Fig.\ \ref{f3}. The first diagram, corresponding to $D^{(0)}$, is the free lattice gluon propagator.
The second diagram, corresponding to $D^{(1)}$, involves R and S only and survives in the continuum limit; note that R is the lattice analog of the continuum gluon vertex. The third diagram, corresponding to $D^{(2)}$, is a lattice tadpole diagram, and involves the vertex function Q, which is a feature only of the lattice theory. At zero temperature, this tadpole contribution vanishes after integration by parts \cite{CeMa81}. Finite temperature enters into the calculation only through the replacement of the integration over the $k_0$ variable appropriate for zero temperature by the sum over the fermionic Matsubara frequencies \begin{equation} k_0 = 2 \pi \Bigl( n + {1 \over 2} \Bigr) T \label{e2.1.8} \end{equation} where $n$ is integer-valued and $T = 1/N_t$ is the temperature in lattice units; the half-integer offset reflects the antiperiodic temporal boundary conditions. \subsection{Ward Identity at Finite Temperature} \label{ss2.2} At finite temperature, the $D^{(2)}$ term in Eq. (\ref{e2.1.1}) is necessary in order to show that the lattice form of the Ward identity \begin{equation} \sin \Bigl( {p_{\nu} \over 2} \Bigr) \Bigl[ D_{\mu\nu}^{(2)} - D_{\mu\nu}^{(1)}(p) \Bigr] = 0 \label{e2.2.1} \end{equation} still holds at finite temperature, even though the four dimensional hypercubic symmetry is broken. To show this, we first note the two identities \begin{equation} S(k+p) - S(k) = 2 \sin \Bigl( {p_{\mu} \over 2} \Bigr) R \Bigl( k_\mu+{p_{\mu} \over 2} \Bigr) \label{e2.2.2} \end{equation} and \begin{equation} R \Bigl( k_\mu + {p_\mu \over 2} \Bigr) - R \Bigl( k_\mu - {p_\mu \over 2} \Bigr) = 2 \sin \Bigl( {p_{\mu} \over 2} \Bigr) Q(k_\mu). \label{e2.2.3} \end{equation} Use of the first identity gives \begin{eqnarray} \sin \Bigl( {p_{\nu} \over 2} \Bigr) \Bigl[ D_{\mu\nu}^{(2)} - D_{\mu\nu}^{(1)}(p) \Bigr] = &&\frac{1}{4} \sum_k \frac{1}{N_t V } {\rm Tr}_d \Bigl\{ 2 \sin \Bigl( \frac{p_\mu}{2} \Bigr) Q( k_\mu ) S^{-1}(k) \nonumber\\ &&+ \Bigl[ R \Bigl( k_\mu+{p_{\mu} \over 2} \Bigr) S^{-1}(k + p) - R \Bigl( k_\mu+{p_{\mu} \over 2} \Bigr) S^{-1}(k) \Bigr] \Bigr\}, \label{e2.2.4} \end{eqnarray} and after a simple shift of variables, use of the second identity yields the desired cancellation. \subsection{Finite Temperature Decomposition of the Propagators} \label{ss2.3} At finite temperature there are two independent symmetric tensors of order $p^2$ which are four-dimensionally transverse \cite{KaKa85}. In what follows, all expressions will be in the thermal rest frame. The first of the corresponding lattice tensors is specified by \begin{equation} P_{\mu 0}^{(3)} (p) = P_{0 \mu}^{(3)} (p) = 0 \label{e2.3.1} \end{equation} \begin{equation} P_{ i j }^{(3)} (p) = \delta_{ i j } - {{\tilde{p}_i \tilde{p}_j} \over { \tilde{p}_s^2}} \label{e2.3.2} \end{equation} and the second is \begin{equation} P_{\mu \nu}^{(4)} (p) - P_{\mu \nu}^{(3)} (p) \label{e2.3.3} \end{equation} where \begin{equation} P_{\mu \nu}^{(4)} (p) = \delta_{\mu \nu} - {{\tilde{p}_\mu \tilde{p}_\nu } \over { \tilde{p}^2}} \label{e2.3.4} \end{equation} The lattice quantities $\tilde{p}$ are defined by \begin{equation} \tilde{p}_\mu = 2 \sin \Bigl( {p_{\mu} \over 2} \Bigr) \label{e2.3.5} \end{equation} \begin{equation} \tilde{p}^2 = \sum_\mu \tilde{p}_\mu^2 \label{e2.3.6} \end{equation} \begin{equation} \tilde{p}_s^2 = \sum_i \tilde{p}_i^2 \label{e2.3.7} \end{equation} The existence of these two independent tensors leads to separate renormalizations of the spatial and temporal gauge couplings at finite temperature.
The first tensor, $P_{\mu \nu}^{(3)} (p)$, is associated with the magnetic, or spatial, part of the action, while the second tensor, $P_{\mu \nu}^{(4)} (p) - P_{\mu \nu}^{(3)} (p)$, is associated with the electric, or temporal, part of the action. We find that \begin{equation} \Delta\beta_s = - N_c \sum_{k_0} {1\over {N_t}} \int_{-\pi}^{~\pi} {d^3\vec{k} \over {(2\pi)^3}} \Phi(k;1,2) \label{e2.3.8} \end{equation} and \begin{equation} \Delta\beta_t = - N_c \sum_{k_0} {1\over {N_t}} \int_{-\pi}^{~\pi} {d^3\vec{k} \over {(2\pi)^3}} \Phi(k;0,1) \label{e2.3.9} \end{equation} where \begin{eqnarray} \Phi(k;\mu,\nu) = &&32B^{-2}(k)\cos^2(k_{\mu})\cos^2(k_{\nu}) \nonumber\\ &&- 4096B^{-4}(k) \sin^2(k_{\mu})\cos^2(k_{\mu}) \sin^2(k_{\nu})\cos^2(k_{\nu}) \label{e2.3.10} \end{eqnarray} and \begin{equation} B(k) = {1 \over {\kappa^2}} + 4 \sum_{\alpha} \sin^2(k_{\alpha}). \label{e2.3.11} \end{equation} As the temperature is taken to zero, the two expressions smoothly approach each other to give the zero-temperature result. \subsection{Numerical Results for $\Delta \beta$} \label{ss2.4} The integrals (\ref{e2.3.8}) and (\ref{e2.3.9}) were evaluated numerically by calculating mode sums for large values of $N_s$ and various values of $N_t$. Figure \ref{f4} compares $\Delta\beta$ per fermion (spatial and temporal) vs. $\kappa$ for $N_t = 4$ and $N_t = 6$ with the zero temperature result of Hasenfratz and DeGrand \cite{HaDe94}. The finite temperature values approach the zero temperature result from below as a consequence of the antiperiodic boundary conditions; as $N_t$ increases, the sum includes more terms in the region near $p = 0$, which dominates for small $m_F$. As might be expected, the temporal shift in $\beta$ is more sensitive to the effect of finite temperature than the spatial shift. For small values of $\kappa$, corresponding to large values of the fermion mass, the effects of finite temperature are small. This is easily understood, since $\kappa$ is much smaller than $N_t$. However, for intermediate values of $\kappa$, finite temperature corrections ruin the excellent agreement between the zero-temperature calculation and the Monte Carlo results discussed in Sec.\ \ref{s1}. \subsection{Image Expansion} \label{ss2.5} The connection between the zero and finite temperature results can be understood more physically by transforming the sum over Matsubara frequencies into a sum over images using the Poisson summation formula for antiperiodic boundary conditions \begin{equation} \sum_{p_0} {1 \over {N_t}} F(p_0) = \sum_n (-1)^n \int_{-\pi}^{~\pi} {dp_0 \over {2\pi}} F(p_0) e^{inN_tp_0} \label{e2.5.1} \end{equation} so that, for example, the shift in the spatial coupling is given by \begin{equation} \Delta\beta_s = - N_c \int_{-\pi}^{~\pi} {d^4k \over {(2\pi)^4}} \Phi(k;1,2) - 2 N_c \sum_{n=1}^{\infty} (-1)^n \int_{-\pi}^{~\pi} {d^4k \over {(2\pi)^4}} \Phi(k;1,2) \cos(nN_tk_0) \label{e2.5.2} \end{equation} with a similar result for $\Delta\beta_t$. This form has a simple physical interpretation: the first integral is the zero-temperature shift, and the integer $n$ in the second term labels the net number of times the fermion wraps around the lattice in the temporal direction. The finite temperature corrections result from the $O(A^2)$ expansion of image diagrams such as those depicted in Fig.\ \ref{f5}.
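As an illustration of the mode sums described in Sec.~\ref{ss2.4}, the following Python sketch evaluates Eq.~(\ref{e2.3.8}) or (\ref{e2.3.9}) by summing $\Phi(k;\mu,\nu)$ of Eqs.~(\ref{e2.3.10}) and (\ref{e2.3.11}) over the antiperiodic Matsubara frequencies and a finite spatial grid. The grid size, the value of $\kappa$, and the per-fermion normalization are placeholder choices for this sketch, not the parameters used to produce the figures.
\begin{verbatim}
import numpy as np

def delta_beta(kappa, Nt, Ns=32, mu=1, nu=2, Nc=3):
    """Mode-sum estimate of the coupling shift: (mu, nu) = (1, 2) gives the
    spatial shift Delta beta_s, (0, 1) the temporal shift Delta beta_t."""
    # antiperiodic (fermionic) Matsubara frequencies k0 = 2*pi*(n + 1/2)/Nt
    k0 = 2.0 * np.pi * (np.arange(Nt) + 0.5) / Nt
    ks = 2.0 * np.pi * np.arange(Ns) / Ns - np.pi   # spatial momenta
    K = np.meshgrid(k0, ks, ks, ks, indexing='ij')  # k = (k0, k1, k2, k3)
    B = 1.0 / kappa**2 + 4.0 * sum(np.sin(k)**2 for k in K)
    Phi = (32.0 * B**-2 * np.cos(K[mu])**2 * np.cos(K[nu])**2
           - 4096.0 * B**-4
           * (np.sin(K[mu]) * np.cos(K[mu]) * np.sin(K[nu]) * np.cos(K[nu]))**2)
    # mean() implements (1/Nt) sum_{k0} together with d^3k/(2 pi)^3 on the grid
    return -Nc * Phi.mean()

print(delta_beta(kappa=0.5, Nt=4))   # spatial shift at kappa = 0.5, N_t = 4
\end{verbatim}
Increasing $N_t$ in this sum drives the result toward the zero-temperature value, consistent with the image expansion above.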
Numerically, the dominant corrections to the zero-temperature result come from the first few values of $n$, with the $n = 1$ and $n = 2$ terms accounting for more than $90\%$ of the finite temperature correction for $\kappa \leq 2.0$. Although not apparent in our perturbative calculations, in order to maintain gauge invariance, the vertical segments of the image diagrams must be accompanied by powers of Polyakov loops. It is an observed feature of simulations with dynamical fermions that the $Z_N$ symmetry is approximately maintained in the low-temperature phase. This suggests that the image contributions may be negligible below $\beta_c$. Thus, the finite temperature corrections to $\beta$ are suppressed nonperturbatively in the confined regime. In particular, just below $\beta_c$ the zero-temperature result will hold for $\Delta\beta$. Figure \ref{f6} illustrates an idealized behavior for $\Delta\beta_{\rm fermion}$ as a function of $\beta_{\rm pure}$. \section{$Z_N$ SYMMETRY BREAKING IN THE EFFECTIVE ACTION} \label{s3} There is another set of terms induced by the fermion determinant only at finite temperature. As is well known, the $Z_N$ symmetry of the pure gauge theory is explicitly broken by dynamical fermions. To lowest order in the hopping parameter expansion, a path of $N_t$ hops around the lattice in the temporal direction produces an effective coupling to the Polyakov loop, explicitly breaking the $Z_N$ symmetry. Evaluating the fermion determinant in a constant $A_0$ background field and applying the Poisson summation formula [Eq.\ (\ref{e2.5.1})] once again, we find an additional contribution to the effective action: \begin{equation} S_{\rm eff} = \sum_{\vec{x}} \sum_{n=1}^{\infty} (-1)^{n+1} h_n(\kappa) {\rm Re} \Bigl[ {\rm Tr} P^n(\vec{x}) \Bigr] \label{e3.1} \end{equation} where $P(\vec{x})$ denotes a Polyakov loop and the couplings $h_n$ are given by \begin{equation} h_n(\kappa) = -4 N_t \int_{-\pi}^{~\pi} {d^4q \over {(2\pi)^4}} {\rm ln} \Biggl[ {1 \over {4\kappa^2}} + \sum_{\mu} \sin^2(q_{\mu}) \Biggr] \cos(nN_tq_0) \label{e3.2} \end{equation} This result can also be obtained by using a contour integral to evaluate the sum over Matsubara frequencies. The leading term in this effective action has been discussed for the case $N_t = 2$ \cite{HaKaSt83}. \subsection{Numerical Results for $h$} \label{ss3.1} The maximum values of the $h_n$ are obtained when $m_F = 0$. For $N_t = 4$ we find $h_1^{\rm max} = 0.107$, $h_2^{\rm max} = 0.00445$, and increasingly smaller magnitudes for higher order couplings. Because $h_1$ favors $Z_N$ breaking, it acts to lower the critical value of $\beta$. Unlike the $D^{(1)}(p)$ term considered in Sec.\ \ref{s2}, this effect cannot be directly included as a finite shift in $\beta$. The most direct way to determine the shift in $\beta$ due to the $h_1$ term is to perform a Monte Carlo simulation of the pure gauge theory with an $h_1$ term added to the action. The numerical simulation results presented in this section were obtained from runs of 40,000 sweeps (after thermalization) on a $10^3\times4$ lattice using a variety of workstations. We have observed that this additional source of $\Delta\beta$ is small compared to the renormalization of the plaquette couplings discussed in the preceding section, regardless of $m_F$. For example, even $h_1 = 0.1$ at $N_t = 4$ leads to a shift in $\beta$ per fermion of $0.00325$. This value of $h_1$ corresponds to $m_F = 0.17$, which yields a zero-temperature predicted shift in $\beta$ per fermion of $0.104$.
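For orientation, the couplings $h_n(\kappa)$ of Eq.~(\ref{e3.2}) can also be estimated by brute-force numerical integration. The Python sketch below does this on a uniform momentum grid; the grid size is an arbitrary choice for this illustration and no attempt is made to reach the precision of the values quoted above.
\begin{verbatim}
import numpy as np

def h_n(kappa, Nt, n, N=32):
    """Brute-force estimate of the Z_N-breaking coupling h_n(kappa)."""
    # offset grid avoids q = 0, where the integrand diverges logarithmically
    # in the massless limit kappa -> infinity
    q = 2.0 * np.pi * (np.arange(N) + 0.5) / N - np.pi
    Q = np.meshgrid(q, q, q, q, indexing='ij')
    integrand = (np.log(1.0 / (4.0 * kappa**2) + sum(np.sin(qi)**2 for qi in Q))
                 * np.cos(n * Nt * Q[0]))
    return -4.0 * Nt * integrand.mean()  # mean() plays the role of d^4q/(2 pi)^4

# h_1 at N_t = 4 for kappa = 1/(2 m_F) with m_F = 0.17
print(h_n(kappa=1.0 / (2 * 0.17), Nt=4, n=1))
\end{verbatim}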
Although the effect of the $h_1$ term in the effective action on the critical value of $\beta$ is quite small, $h_1$ has a profound effect on the character of the transition. Figure~\ref{f7} shows the frequency distribution for the spatial expectation value of the Polyakov loop as a function of $h_1$ at the appropriate $\beta_c (h_1)$ on a $10^3 \times 4$ lattice. As $h_1$ increases, the peaks associated with the two phases move closer together until they merge at what is presumably a second-order critical end-point. A first-order phase transition exists for values of $h_1$ smaller than approximately $0.08$. The Monte Carlo results for the phase diagram of a pure gauge theory with an $h_1$ coupling can be mapped back to the phase diagram of QCD with dynamical quarks by including both the $h_1$ term and the shift $\Delta \beta$. Including zero-temperature plaquette coupling renormalization, the endpoint of this first-order phase transition line maps to the point $(0.394, 4.68)$ in the $(m_F, \beta)$-plane for the case of sixteen degenerate staggered fermion species. This is roughly consistent with the endpoint of the first-order phase transition observed in simulations with dynamical staggered fermions \cite{OhKi91}. However, a definitive comparison will likely require significant computational resources. \section{CONCLUSIONS} Figure \ref{f8}, a graph of $\beta_c$ versus $\kappa$, summarizes the results we have obtained. As shown by Hasenfratz and DeGrand, the zero-temperature shift in the coupling constant due to dynamical fermions nicely accounts for the shift in $\beta_c$. We have found that finite temperature corrections to the gauge coupling renormalization lift the degeneracy of the spatial and temporal couplings, and the results are significantly different from the zero-temperature results. As can be seen from the figure, they are in conflict with the Monte Carlo data. At finite temperature, dynamical fermions couple to Polyakov loops via loops circling the lattice in the timelike direction. This also acts to shift the value of $\beta_c$. This shift is small, however, and does not restore the success of the zero-temperature calculation. As we have shown, the image expansion makes it plausible that the success of the zero-temperature calculation in determining the critical value of $\beta$ is due to a nonperturbative suppression of finite temperature effects in the low temperature regime. Specifically, the small expectation value of the Polyakov loops at low temperatures indicates a suppression of those quark paths which account for finite temperature corrections. This leads to the behavior of the effective gauge coupling constants shown in Fig.\ \ref{f5} above. If this picture is correct, a gauge theory with dynamical fermions of intermediate mass is well modeled by a pure gauge theory in which the spatial and temporal couplings are slightly different. Along the critical line, both couplings jump discontinuously. This could be studied in more detail by adjusting spatial and temporal couplings in a pure gauge theory simulation so that plaquette expectation values matched those observed in dynamical simulations. This might also afford an opportunity to consider the effects of finite spatial sizes. Such effects are easily included in Eq.\ \ref{e2.5.2} for the coupling shift by choosing the spatial mode sums appropriately. 
While the coupling to Polyakov loops induced by dynamic fermion loops seems to play little role in determining $\beta_c$, this coupling does influence the order of the transition; in our $N_t = 4$ simulations of a pure gauge theory with an additional coupling $h_1$ to Polyakov loops, a sufficiently large value of $h_1$ causes the first order line to terminate, while shifting $\beta_c$ very little. Thus, the effective coupling $h_1$ appears to be the most important factor in determining the end point of the first-order deconfining phase transition line. In addition to the transition temperature, other quantities can be estimated using these perturbative techniques. For example, the chiral order parameter $\langle \bar{\psi} \psi \rangle$ can be estimated as a constant term plus a term proportional to the expectation value of a plaquette plus a term proportional to the expectation value of the Polyakov loop. However, the use of a perturbative evaluation of the fermion determinant obviously fails to include the effects of chiral symmetry breaking, which is the dominant factor in determining $\langle \bar{\psi} \psi \rangle$ for light quarks. Presumably this accounts for the failure of the effective theory for light quark masses. \section{ACKNOWLEDGEMENTS} We would like to thank the U.S. Department of Energy for financial support under Contract No. DE-AC02-76-CH00016. One of us (PNM) would also like to thank the U.S. Department of Education for additional financial support in the form of a GANN Predoctoral Fellowship.
\section{Introduction} Every Digital Library (DL) system generates huge amounts of usage data and DL operators often face the problem of not being able to report about the real usage on an expressive level that is moreover understandable for laymen. Reporting average statistics like number of unique sessions, page impressions, amount of actions and even click-through rates is not enough because these numbers cannot represent and explain the underlying pattern of the information behavior of DL users. Exploratory search in DLs and academic search engines \cite{Carevic2017} is a rewarding research environment for interactive IR researchers because evolving searches with complex search tasks can be observed much easier compared to web search where searchers often jump into different websites. In DLs, users typically stay in the system and work with the variety of facilities it offers. This is due to the fact that state-of-the-art DLs offer dozens of possibilities to navigate and interact with the search system \cite{Hienert2015,Fuhr2007}. Our motivation in proposing this data set is grounded in the observation that in the field very few open data sets which support whole session investigation exist. To the best of our knowledge there is no open data set available from academic search engines or DLs with full coverage of whole session information. Among the available data sets, we find the most famous evaluation campaign TREC (Text REtrieval Conference) which proposed TREC Session\footnote{http://trec.nist.gov/data/session.html} \cite{Kacem17} and Interactive\footnote{http://trec.nist.gov/data/interactive.html} tracks. In fact, one way to enhance the development and evaluation of information-seeking systems is to propose shareable data sets in order to facilitate the collaboration within an interdisciplinary team including developers, computer scientists, and behavioral experts who work together in order to explore new ideas and propose improvements \cite{KellyDP09}. Consequently, with the proposed data set we want to support DL developers and IR researchers to work on the analysis of whole retrieval sessions. These practitioners need such data sets to propose methods and techniques which allow us to examine search steps, analyze usage data, understand the underlying information behavior covered in search sessions that are performed by geographically distributed persons. \section{Related Work} Interactive information retrieval (IIR) refers to a research discipline that studies the interaction between the user and the search system. In fact, researchers have moved from considering only the current query to consider the user's past interactions. Research approaches aim to understand the user search behavior in order to improve the ranking of results after submitting a query and enhance the user experience with an IR system. Thus, they study concepts such as search strategies \cite{Carevic:2016,Carevic2017}, search term suggestions \cite{Hienert:2016}, communities' detection \cite{Akbar:2012}, personalization of search results, recommendation's impact \cite{Hienert:2016}, user’s information needs frequency and change. Many interactive IR models have been proposed in the literature (e.g. \cite{Ellis:1989}) that describe the user’s behavior by different steps (stages) of information seeking and interacting with an information retrieval system. In order to evaluate and analyze such models and approaches log analysis has been introduced. 
In \cite{Peters:1993}, the authors proposed a detailed overview of the history and development of transaction log analysis by examining possible applications and features analysis. Jones et al. \cite{Jones:1998} investigated transaction logs for the Computer Science Technical Reports Collection of the New Zealand DL. The authors analyzed query complexity, query terms change, sessions frequency and length. \section{Dataset} \label{sec:dataset} Sowiport\footnote{http://www.sowiport.de} is a DL for the Social Sciences that contains more than nine million records, full texts and research projects included from twenty-two different databases whose content is in English and German \cite{Hienert2015}. This data set \textbf{Sowiport User Search Sessions Data Set (SUSS)}\footnote{To download the dataset: http://dx.doi.org/10.7802/1380} \cite{Mayr2016} contains individual search sessions extracted from the transaction log of sowiport. The data was collected over a period of one year (between 2nd April 2014 and 2nd April 2015). The web server log files and specific JavaScript-based logging techniques were, first, used to capture the user behavior within the system. Then, the log was heavily filtered to exclude transactions performed by robots and short interactions limited to one action per session. After that, all transaction activities are mapped to a list of 58 different user actions which cover all types of activities and pages that can be carried out/visited within the system (e.g. typing a query, visiting a document, selecting a facet, exporting a document, etc.). For each action, a session id, the date stamp and additional information (e.g. query terms, document ids, and result lists) are stored. Based on the session id and date stamp, the step in which an action is conducted and the length of the action is included in the data set as well. The session id is assigned via browser cookies and allows tracking user behavior over multiple search sessions. Session boundaries were specified after a threshold period indicating a period of inactivity and thus the end of the session. In our data set this threshold is equal to 20 minutes. Thus, in the data set we find 484,449 individual search sessions and a total of 7,982,427 log entries. \section{Preliminary analysis} In this section, we present, first, a descriptive analysis of the SUSS data set regarding sessions, users, and searches. These analyses are not following concrete research questions but are intended to show the richness of this open data set. \subsection{Description of Actions} Searching sowiport can be performed through an \textit{All fields} search box (default search without specification), or through specifying one or more field(s): title, person, institution, number, keyword or year. The users' main actions are described in Table~\ref{tab:actions}. We grouped the main actions into two categories: "Query"-related and "Document"-related actions. Another categorization of actions was proposed in \cite{Hienert:2016} by specifying search interactions and successive positive actions. \begin{table} \centering \caption{Main actions performed by users in sowiport} \label{tab:actions} \begin{tabular}{|l|l|p{7cm}|r|} \hline Category & Action & Description & Frequency\\ \hline \multirow{8}{*}{Query} & query\_form & Formulating a query & 179,964 \\ & search & A search result list for any kind of search & 848,556\\ & search\_advanced & A search with the advanced settings that can limit the search fields, information type, etc. 
& 103,432 \\ & search\_keyword & A search for a keyword & 43,608 \\ & search\_thesaurus & Usage of the thesaurus system & 71,599 \\ & search\_institution & A search for an institution & 13,104 \\ & search\_person & A search for a specific person (author/editor) & 93,083\\ \hline \multirow{8}{*} {Document} & view\_record & Displaying a record in the result list after clicking on it & 1,344,361\\ & view\_citation & View the document's citation(s) & 24,994 \\ & view\_references & View the document's references & 2,086 \\ & view\_description & View the document's abstract & 86,752\\ & export\_bib & Export the document through different formats & 27,229 \\ & export\_cite & Export the document's citations list & 27,385 \\ & export\_mail & Send the document via email & 10,987 \\ & to\_favorites & Save the document to the favorite list & 5,431 \\ \hline \end{tabular} \end{table} \subsection{Users and Sessions} Given the data set described in Section~\ref{sec:dataset}, we first analyze the user types. A user can perform a search and submit a query to sowiport without signing up. Registered users can keep the search history, add a document to favorites and create favorite lists according to their interests. We found 1,509 registered users who performed 3,372 unique sessions (0.69\%). The rest of the sessions in sowiport were performed by non-registered users (99.31\%). \subsection{Investigation of Actions} Main user actions as described before can be categorized into actions regarding either search queries or documents. These actions are used in different scales in the data set. Query-related actions represent 29.84\% while document-related actions represent 35.79\% of the total amount of actions. The rest of actions contain navigational interactions such as logging in the system, managing favorites, and accessing the system pages. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.6]{ActionsTop6-2.png} \caption{Frequency distribution of the six most performed action groups} \label{fig:TopActions} \end{center} \end{figure} Figure~\ref{fig:TopActions} shows the frequencies of the top six most used actions by the users in the data set. We notice that the actions \textit{"view\_record"} and \textit{"search"} are the most used ones before \textit{"query\_form"} and \textit{"search\_{keyword, person, institution}"}. In Table~\ref{tab:SessionSample}, we show a specific session, the user's ID and the actions' label and length in seconds. In this session, the user with ID \textit{41821} started with logging into the system and then submitted a query describing his/her information need (\textit{query\_form}). After getting the result list, labeled as \textit{resultlistids} and viewing a document, the user performed additional searches (\textit{searchterm\_2}), and displayed some results' content (\textit{view\_record}). Finally, he/she checked the external availability of a result (\textit{goto\_google\_scholar}). We notice that the user spent more than 40\% of the time reading documents' content. 
\begin{table}[!htbp] \centering \caption{Sample of a session search for a specific user} \label{tab:SessionSample} \begin{tabular}{|c|p{3cm}|p{3.5cm}|l|} \hline User ID & Date & Action label & Action length (s)\\ \hline \multirow{9}{*} {41821} & 2014-10-28 16:08:46 & goto\_login & 1 \\ & 2014-10-28 16:09:13 & query\_form & 22 \\ & 2014-10-28 16:09:35 & search & 10 \\ & 2014-10-28 16:09:35 & resultlistids & 10 \\ & 2014-10-28 16:09:45 & view\_record & 31 \\ & 2014-10-28 16:09:45 & docid & 31 \\ & 2014-10-28 16:10:16 & view\_record & 392 \\ & 2014-10-28 16:16:48 & search & 10 \\ & 2014-10-28 16:16:48 & searchterm\_2 & 10 \\ & 2014-10-28 16:16:48 & resultlistids & 10 \\ & 2014-10-28 16:16:58 & view\_record & 9 \\ & 2014-10-28 16:17:07 & goto\_google\_scholar & 0 \\ \hline \end{tabular} \end{table} In Figure~\ref{fig:ActionsSessions}, we display the number of actions per session. We note that the average number of actions per session is 16 and only sessions with a minimum of one action are considered in this data set. We conclude, from this figure, that the number of sessions with less than 16 actions (n=384,087) is much larger than the number of sessions having over 16 actions (n=100,360). \begin{figure}[h!] \begin{center} \includegraphics[scale=0.6]{ActionsSessions2.png} \caption{Distribution of the number of actions contained in a session} \label{fig:ActionsSessions} \end{center} \end{figure} \section{Future work} For academia there is a need for open data sets which provide information about the variety of retrieval sessions and help to study and understand the abstract information behavior and common scan paths of academic users in a DL. In fact, session log provision and investigation open opportunities to enhance DLs' systems and to offer new services. Some possible future work based on our proposed data set can be outlined as follows: finding and studying abstract user groups like exhaustive or effective users; modeling academic users; analyzing reformulation and refining strategies; identifying various search phases like starting; chaining, browsing and differentiating; task characterization and prediction; personalization of search results according to the user behavior within search sessions. \section{Acknowledgement} This work was funded by Deutsche Forschungsgemeinschaft (DFG), grant no. MA 3964/5-1; the AMUR project at GESIS together with the working group of Norbert Fuhr. The AMUR project aims at improving the support of interactive retrieval sessions following two major goals: improving user guidance and system tuning. \bibliographystyle{splncs}
\section{\label{sec:intro} Introduction} Threading magnetic flux through a two-dimensional crystal changes the single particle band spectrum into a Hofstadter butterfly spectrum that exhibits a fractal structure with an infinitude of mini gaps~\cite{hofstadter1976energy}. The Hofstadter butterfly is the lattice counterpart of Landau levels in the continuum. While the Landau levels of a continuum model are often easier to compute than the Hofstadter butterfly of the corresponding lattice model, diagnosing band topology in the presence of magnetic flux requires the lattice because topological invariants are defined over the entire Brillouin zone. The topology of Hofstadter bands has been a subject of intense recent study ~\cite{herzog2020hofstadter,zuo2021topological,asaga2020boundary,otaki2019higher}. In the absence of magnetic flux, the topology of a band structure can be classified by the theory of topological quantum chemistry (TQC) \cite{bradlyn2017topological,vergniory2017graph,elcoro2017double,cano2018building,bradlyn2018band,vergniory2019complete,cano2021band,elcoro2021magnetic}. A practical diagnosis comes from studying the space group representations of bands at high symmetry momenta, which are known as symmetry indicators \cite{po2017symmetry}. However, in its present form, TQC cannot be directly applied to systems in a magnetic field because it does not account for the Aharonov-Bohm phase. \par \begin{figure} \centering \includegraphics[width=\linewidth]{workflow7} \caption{Framework for TQC in the presence of magnetic flux. A representation of the site-symmetry group induces a band representation of the entire space group, which is subduced to a representation of the little co-group at each high-symmetry momentum, i.e., the symmetry indicator. The new element introduced by the magnetic field is that the symmetry operators form a projective representation of the space group. The red font indicates differences between TQC with and without magnetic flux.} \label{fig:workflow} \end{figure} In the present manuscript we derive a framework to generalize TQC and the classification of symmetry indicators to band structures in the presence of a rational magnetic flux per unit cell. The workflow is shown in Fig.~\ref{fig:workflow}. We find that the two key ingredients in the theory of TQC -- the irreducible representations of bands at high symmetry points in momentum space and the induced representations of localized orbitals in real space -- are modified from their non-magnetic counterparts due to the presence of magnetic flux. The essential reason for this modification is that the commutation relations between crystal symmetries change in the presence of magnetic flux due to the Aharonov-Bohm phase. As a result, the symmetry operators form non-trivial projective representations of the space group. The earliest example of this is Zak's magnetic translation group~\cite{zak1964magnetic,zak1964magnetic2}. Our theory builds on Zak's theory by including crystalline symmetries. Our theory of TQC in commensurate magnetic flux is distinct from ``magnetic TQC''~\cite{elcoro2021magnetic}. While magnetic TQC classifies topological band structures according to the representations of magnetic space groups, which describe the symmetry of magnetically ordered crystals, magnetic TQC does not yet accommodate magnetic flux through each unit cell, as it deals with zero flux configurations of orbitals. Our manuscript proceeds as follows. 
In Sec.~\ref{sec:mag_sym}, we derive the space group symmetry operators at rational flux in both real and momentum space. The results are at the crux of the theory of TQC in a magnetic field that we derive in Sec.~\ref{sec:TQC}. We then use the theory to compute symmetry indicators for three magnetic layer groups, $p2$, $p4$ and $p4/m'$, at $\pi$ flux per unit cell. The strong indicator in $p2$ recovers an earlier formula for the Chern number in Ref.~\onlinecite{matsugatani2018universal}, which is a stronger version of the formula in Ref.~\onlinecite{fang2012bulk}. In group $p4/m'$, our theory gives rise to a new strong $\mathbb Z_2$ indicator, which is simply the filling per unit cell mod $2$. This $\mathbb Z_2$ non-trivial phase is protected by translation symmetry: the non-trivial phase does not permit exponentially localized and symmetric Wannier functions, but such Wannier functions exist when translation symmetries within the magnetic unit cell are broken. {We also find a new indicator for $p4$.} \par In Sec.~\ref{sec:Model}, we study a tight binding model introduced in Ref.~\onlinecite{wieder2020strong} that realizes an obstructed atomic limit (OAL) on the square lattice at zero flux. The Hofstadter butterfly spectrum shows that the system undergoes a gap-closing phase transition at finite flux after which the corner states that were present in the OAL phase disappear. By applying our theory of TQC in a magnetic field to this model, we show that the gap closing corresponds to a phase transition from an OAL to a trivial phase that can be diagnosed by symmetry indicators. \par \section{\label{sec:mag_sym} Magnetic symmetries} In quantum mechanics the coupling of a magnetic field to a charged particle is described by replacing the momentum $\mathbf P$ of the particle with the canonical momentum $\mathbf p=\mathbf P+\mathbf A$ in the Hamiltonian (without loss of generality, we have used natural units and assumed positive unit charge). To account for the Aharonov-Bohm phase, terms in the single-particle tight binding Hamiltonian are modified by the usual Peierls substitution: \begin{equation} c^{\dagger}_{\mathbf r_2} c_{\mathbf r_1} \mapsto e^{i\int_{\mathbf r_1}^{\mathbf r_2} \mathbf A(\mathbf r)\cdot d\mathbf r}c^{\dagger}_{\mathbf r_2} c_{\mathbf r_1}, \label{eq:peierls} \end{equation} where the path of the integral is the straight line connecting $\mathbf{r}_1$ and $\mathbf{r}_2$. However, if the zero-field Hamiltonian is invariant under a crystal symmetry $\hat{g}: c_{\mathbf{r}} \mapsto c_{\hat{g}\mathbf{r}}$, the Hamiltonian modified by the Peierls substitution in Eq.~(\ref{eq:peierls}) is not necessarily invariant under $\hat{g}$, even if the physical system is unchanged by the symmetry. Consequently, the operator $\hat{g}$ must be modified from its zero-field form by a gauge transformation that accounts for the Aharonov-Bohm phase. 
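To make the Peierls substitution of Eq.~(\ref{eq:peierls}) concrete, the following Python/NumPy sketch builds the single-particle hopping matrix of a one-orbital square lattice at flux $2\pi p/q$ per plaquette in the Landau gauge $\mathbf A(\mathbf r)=(-Br_y,0)$ used later in the text. The lattice size, hopping amplitude, and flux are placeholder values, and the model is a generic tight-binding example rather than any specific model studied in this work.
\begin{verbatim}
import numpy as np

def hofstadter_hamiltonian(Lx, Ly, p, q, t=1.0):
    """Square-lattice nearest-neighbour Hamiltonian with flux 2*pi*p/q per
    plaquette.  Each hopping term carries the Peierls phase exp(i Int A.dr)
    in the Landau gauge A = (-B*y, 0); periodic boundary conditions are used,
    and Ly is assumed to be a multiple of q so that the torus carries an
    integer number of flux quanta."""
    B = 2.0 * np.pi * p / q
    idx = lambda x, y: (x % Lx) * Ly + (y % Ly)
    H = np.zeros((Lx * Ly, Lx * Ly), dtype=complex)
    for x in range(Lx):
        for y in range(Ly):
            # hop in +x: A_x = -B*y, so the line integral gives exp(-i*B*y)
            H[idx(x + 1, y), idx(x, y)] += -t * np.exp(-1j * B * y)
            # hop in +y: A_y = 0, no phase in this gauge
            H[idx(x, y + 1), idx(x, y)] += -t
    return H + H.conj().T   # add the reverse hoppings

E = np.linalg.eigvalsh(hofstadter_hamiltonian(Lx=9, Ly=9, p=1, q=3))
\end{verbatim}
Diagonalizing such matrices as a function of $p/q$ reproduces the Hofstadter butterfly; the relevant point here is simply that every hopping term carries the line-integral phase of Eq.~(\ref{eq:peierls}).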
Specifically, the magnetic field requires $\hat{g}$ be replaced by $g\equiv \tilde G_g \hat g$, where $\tilde{G}_g=e^{i\sum_{\mathbf x}\lambda_g(\mathbf x)c^\dagger_{\mathbf x} c_{\mathbf x}}$ is a gauge transformation that acts on the electron annihilation operators by~\cite{herzog2020hofstadter}: \begin{align} \tilde{G}_g c_{\mathbf{r}} \tilde{G}_g^{-1}&= e^{-i\lambda_g(\mathbf{r})} c_{\mathbf{r}}, \label{eq:defG1}\\ \tilde{G}_g c^\dagger_{\mathbf{r}} \tilde{G}_g^{-1}&= e^{i\lambda_g(\mathbf{r})} c^\dagger_{\mathbf{r}}, \label{eq:defG2} \end{align} where $\lambda_g$ is a scalar function defined for each symmetry $\hat{g}$ that we will derive momentarily. Acting on terms in the Hamiltonian in the form of Eq.~(\ref{eq:peierls}), $\tilde{G}_g$ has the effect of mapping $\mathbf{A}(\mathbf{r}) \mapsto \mathbf{A}(\mathbf{r}) + \nabla \lambda_g(\mathbf{r})$. Similar gauge transformations were introduced by Zak for the magnetic translation operators in Refs.~\cite{zak1964magnetic,zak1964magnetic2}. More recently, the magnetic operators for rotations about the origin and for time-reversal symmetry were considered in Refs.~\cite{de2011exponentially,matsugatani2018universal,herzog2020hofstadter}. Here, we develop a general theory for any symmetry group in the presence of a magnetic field, thereby extending previous works to include more general rotations and glide reflection symmetries. Doing so allows us to apply the theory of symmetry indicators to diagnose topological phases in the presence of a magnetic field. \par We now derive the gauge transformation $\lambda_g$ in Eq.~(\ref{eq:defG1}): we require that if a single-particle Hamiltonian in zero field is invariant under a symmetry $\hat{g}$, then in the presence of a magnetic field that preserves $\hat{g}$, the Hamiltonian modified by the Peierls substitution in Eq.~(\ref{eq:peierls}) is invariant under the combined symmetry operation $g \equiv \tilde{G}_g \hat{g}$, i.e., we require \begin{equation} g : e^{i\int_{\mathbf r_1}^{\mathbf r_2} \mathbf A(\mathbf r)\cdot d\mathbf r}c^{\dagger}_{\mathbf r_2} c_{\mathbf r_1} \mapsto e^{i\int_{g\mathbf r_1}^{g\mathbf r_2} \mathbf A(\mathbf r')\cdot d\mathbf r'}c^{\dagger}_{g\mathbf r_2} c_{g\mathbf r_1} \label{eqn:covariance} \end{equation} Acting on the left-hand-side by $g= \tilde{G}_g \hat{g}$, using the definition of $\tilde{G}_g$ in Eqs.~(\ref{eq:defG1}) and (\ref{eq:defG2}), and equating with the right-hand-side yields \begin{equation} e^{i\int_{\mathbf r_1}^{\mathbf r_2} \mathbf A(\mathbf r)\cdot d\mathbf r +i\int_{g\mathbf{r}_1}^{g\mathbf{r}_2} \nabla \lambda(\mathbf{r}')\cdot d\mathbf{r}' } = e^{i\int_{g\mathbf r_1}^{g\mathbf r_2} \mathbf A(\mathbf r')\cdot d\mathbf r'} \label{eqn:covariance2} \end{equation} A few lines of algebra (detailed in Appendix~\ref{app:Eqlambda}) show that Eq.~(\ref{eqn:covariance2}) is satisfied when $\lambda_g(\mathbf r)$ satisfies \begin{equation} \label{eqn:lambda} \nabla \lambda_g(\mathbf r) = \mathbf A(\mathbf r)-R_g \mathbf A (g^{-1} \mathbf r), \end{equation} where $R_g$ is the point group part of $g$. Eq.~(\ref{eqn:lambda}) determines each $\lambda_g$ up to a constant. We choose the constant such that for translation~\cite{herzog2020hofstadter} \begin{equation} \label{eqn:lambdaT} \lambda_{T(\mathbf a)}(\mathbf r) = \int_{\mathbf r-\mathbf a}^{\mathbf r} \mathbf A(\mathbf r')\cdot d\mathbf r' + \mathbf B \cdot \mathbf a \times \mathbf r \end{equation} and that a $2\pi$ rotation is implemented by the identity matrix. 
This choice of constants ensures that the commutation relations between translation and rotations about the origin are the same as at zero field as we show in the Appendix~\ref{app:rotation}. This choice of gauge is fixed throughout this paper; later when we refer to a gauge choice, we are referring to the gauge of the vector potential. So far, we have only considered lattice degrees of freedom; orbital and spin degrees of freedom can be included by an extra unitary transformation in the action of $\hat{g}$, {which does not change $\lambda_g$}. We will include these degrees of freedom in later sections. \par Eq.~(\ref{eqn:lambda}), which serves as the definition of $\lambda_g$, is the first key result of this manuscript. Combining it with the spatial action of the symmetry yields the explicit form of the magnetic symmetry operator: \begin{align} \label{eqn:Gg1} g&=e^{i\sum_{\mathbf x'}\lambda_g (\mathbf x')c^\dagger_{\mathbf x'}c_{\mathbf x'}} \sum_\mathbf{x} c^\dagger_{\hat g \mathbf x}c_{\mathbf x} \\ &=\sum_\mathbf{x} e^{i\lambda_g (\hat g \mathbf x)} c^\dagger_{\hat g\mathbf x}c_{\mathbf x} , \label{eqn:Gg2} \end{align} where $\lambda_g$ is determined by Eq.~(\ref{eqn:lambda}). The second equality holds when $g$ acts on the single-particle Hilbert space. We now explain why changing $\lambda_g$ up to a constant does not change the representation of the magnetic symmetry operators defined in Eq.~(\ref{eqn:Gg2}). These operators furnish projective representations of the space group. A projective representation $\rho$ of a group satisfies the following multiplication rule \begin{equation} \rho(h_1)\rho(h_2)=\omega(h_1,h_2) \rho(h_1h_2), \end{equation} where $h_1$, $h_2$ are group elements and $\omega(h_1,h_2)$ is called the 2-cocycle. If $\omega(h_1,h_2)\equiv 1$, then $\rho$ is an ordinary linear representation. In general, the magnetic symmetry operators in Eq.~(\ref{eqn:Gg2}) will have non-trivial 2-cocycles, as we show in the next sections. The $U(1)$ gauge freedom in Eq.~(\ref{eqn:lambda}) corresponding to the gauge transformation $\lambda_g \mapsto \lambda_g+C_g$, $g \mapsto e^{iC_g}g$ leaves the representation in the same group cohomology class, i.e. the transformed projective representation is equivalent to the previous one. Essential properties of projective representations are presented in Appendix.~\ref{app:projective}. \par In the next two subsections, we apply this formalism to two examples, first rederiving Zak's magnetic translation group and then reviewing the symmetries of the square lattice in a magnetic field. \subsection{\label{sec:mag_sym_1_zak} Zak's magnetic translation group} In Refs.~\onlinecite{zak1964magnetic} and~\onlinecite{zak1964magnetic2} Zak introduced the continuous magnetic translation symmetries. We reproduce Zak's result by taking the continuum limit {of Eq.~(\ref{eqn:Gg2})}. \par Consider a two-dimensional infinite plane without a lattice and denote operators that translate by ${\mathbf \Delta}=\Delta_x \hat{\mathbf{x}}+\Delta_y \hat{\mathbf{y}}$ by $\hat T(\mathbf \Delta)\equiv \hat T(\Delta_x,\Delta_y)$, where $\hat{\mathbf{x}}$ and $\hat{\mathbf{y}}$ denote the unit vectors. We first work in the symmetric gauge: $\mathbf{A}(\mathbf r)=\frac B2(-r_y,r_x)$. 
Then from Eq.~(\ref{eqn:lambda}): \begin{align} \label{eq:lambdaT_symmetric} \lambda_{T({\mathbf \Delta})}(\mathbf r)= \frac B2(\Delta_x r_y-\Delta_y r_x) \end{align} For continuous translations, we replace the sum $\sum_\mathbf{x} c^\dagger_{\hat T(\mathbf \Delta) \mathbf x}c_{\mathbf x}$ in Eq.~(\ref{eqn:Gg1}) with $e^{-ip_x\Delta_x-ip_y\Delta_y}$. Then the magnetic translation by vector $\mathbf \Delta$ is \begin{align} T(\mathbf \Delta)&=e^{i(\frac12 B\Delta_x(r_y+\Delta_y)-\frac12 B\Delta_y(r_x+\Delta_x))}e^{-i(p_x\Delta_x+p_y\Delta_y)} \nonumber \\ &=e^{-i((p_x-\frac12 Br_y)\Delta_x+(p_y+\frac12 Br_x)\Delta_y)}, \end{align} where the Baker–Campbell–Hausdorff formula is considered. Therefore, the generators of the magnetic translations in $\hat{\mathbf x}$ and $\hat{\mathbf y}$ directions are \begin{align} K_x&=p_x-\frac12 Br_y \nonumber \\ K_y&=p_y+\frac12 Br_x, \end{align} which is exactly Zak's definition from his 1964 paper~\cite{zak1964magnetic}.\par In the remainder of the manuscript it will be easier to use the Landau gauge $\mathbf A(\mathbf r)=(-B r_y,0)$. Repeating the calculation of $\lambda_g$ in the Landau gauge yields \begin{align} \lambda_{T(\mathbf \Delta)}(\mathbf r)=-B\Delta_y r_x \label{eqn:lambdaT_Landau} \end{align} One important property of the magnetic translation operators is the gauge-invariant noncommutativity: \begin{equation} T(\Delta_x \hat{\mathbf{x}}) T(\Delta_y \hat{\mathbf{y}})= T(\Delta_y \hat{\mathbf{y}}) T(\Delta_x \hat{\mathbf{x}}) e^{iB\Delta_x\Delta_y} \label{eqn:TxTyDelta}, \end{equation} which reproduces the Aharonov-Bohm phase. More generally, for two translations $\mathbf a_1$ and $\mathbf a_2$, the gauge invariant multiplication equation is~\cite{zak1964magnetic} \begin{equation} \label{eqn:Ta1Ta2} T(\mathbf a_1) T(\mathbf a_2) = T(\mathbf a_1+\mathbf a_2) e^{\frac i2 \mathbf B \cdot (\mathbf a_1 \times \mathbf a_2)} \end{equation} The gauge invariant phase term $e^{\frac i2 \mathbf B \cdot (\mathbf a_1 \times \mathbf a_2)}$ is the 2-cocycle of magnetic translations, which shows the magnetic translation operators form a non-trivial projective representation of the translation group. \subsection{\label{sec:mag_sym_2_landau} Magnetic symmetries of the square lattice} \begin{table*}[t] \begin{tabular}{c|c|c|c|c|c|c|c} \hline $g$ &$T(\Delta_x\hat{\mathbf x})$ &$T(\Delta_y\hat{\mathbf y})$& $C_2(\bar x, \bar y)$ &$C_4(\bar x, \bar y)$ &$I(\bar x, \bar y)$ &$Um_{x}(\bar x)$ &$Um_{y}(\bar y)$\\ \hline &&&&&&&\\[-0.5em] $\hat{g}=\{R_g|\tau_g\}$ &$\{0|(\Delta_x,0)\}$ &$\{0|(0,\Delta_y)\}$ &$\{\hat{C}_2|(2\bar x,2\bar y)\}$ &$\{\hat{C}_4|(\bar x+\bar y,\bar y-\bar x)\}$ &$\{\hat{I}|(2\bar x,2\bar y)\}$ &$U\{\hat{m}_{x}|(2\bar x,0)\} $ &$U\{\hat{m}_{y}|(0,2\bar y)\} $\\ \hline &&&&&&&\\[-0.5em] $\lambda_g(x,y)$&0&$-B\Delta_y x$ &$-2B\bar y (x-\bar x)$ &$-B(x-\bar x)(y-\bar y)$ &$-2B\bar y (x-\bar x)$ &$0$ &$-2B\bar y x$\\ &&&&$+B\bar y((y-{\bar y})-(x-\bar x))$&&\\ \hline \end{tabular} \caption{The gauge transformation $\lambda_g(x,y)$ for symmetries of the square lattice in Landau gauge. For each symmetry $g$ in the first row, the second row lists the symmetry in the notation $\lbrace R_g | \tau_g \rbrace$, where $\hat{g}: \mathbf{r} \mapsto R_g \mathbf{r} + \tau_g$. The third row provides $\lambda_g$ from Eq.~(\ref{eqn:lambda}).} \label{tab:lambda} \end{table*} As a second example, we consider discrete symmetries of the two-dimensional square lattice using the Landau gauge $\mathbf A(\mathbf r)=(-B r_y,0)$. 
When $B=0$, the square lattice is invariant under the layer group $p4/mmm$, which is generated by a four-fold rotation symmetry and the mirrors $m_x$ and $m_z$. Without a magnetic field, the system is also invariant under time-reversal symmetry, $\cal T$. When $B\neq 0$, only the symmetries that leave the magnetic field invariant (four-fold rotations and $m_z$) remain; the resulting layer group is $p4/m$. To determine how these symmetries act on the electron creation/annihilation operators, one must compute the gauge transformation $\lambda_g$ from Eq.~(\ref{eqn:lambda}). We summarize the results in Table~\ref{tab:lambda}. Notice that $\lambda_g$ depends on the rotation or inversion center; thus, it is necessary to introduce the notation \begin{equation} \label{eqn:defCnxy} C_n(\bar{x},\bar{y}) \equiv T(\bar{x},\bar{y})C_n T(-\bar{x},-\bar{y}) \end{equation} to denote an $n$-fold rotation about the point $(\bar x,\bar y)$; we use $C_n \equiv C_n(\bar{x}=0,\bar{y}=0)$ to denote a rotation about the origin. We adopt analogous notation for inversions and reflections about different points and planes. The symmetries $m_x$, $m_{(110)}$ and $\mathcal{T}$ flip the magnetic field and thus are not symmetries at finite $B$. However, the product of these symmetries with a magnetic flux shifting operator can leave the system invariant at special values of flux, as we now describe. A lattice Hamiltonian coupled to a magnetic field is periodic in $B$: the period corresponds to the minimal magnetic field such that every possible closed hopping path encloses an integer multiple of $2\pi$ flux. Let $\phi$ denote the magnetic flux per unit cell and $\Phi = 2\pi n$ its periodicity, where $n$ is an integer. Following Ref.~\cite{herzog2020hofstadter}, we define the unitary matrix $U$ that shifts $\phi \mapsto \phi + \Phi$ by \begin{align} {U}&=e^{i\sum_{\mathbf x'}\lambda_{U} (\mathbf x')c^\dagger_{\mathbf x'}c_{\mathbf x'}} \\ \lambda_{U} (\mathbf r) &= \int ^{\mathbf r}_{\mathbf r_0} \widetilde{ \mathbf A}(\mathbf r') \cdot d\mathbf r', \end{align} where $\mathbf r_0$ is a reference lattice point and $\tilde{\mathbf{A}}$ is the magnetic vector potential corresponding to $\Phi$ flux per unit cell, i.e., $\nabla \times \widetilde{\mathbf A}=\Phi$. Notice that for any symmetry $g$ that flips $\phi \mapsto -\phi$, the product $Ug$ is a symmetry in the special case where $\phi = \Phi/2$. In the case of the square lattice, the products $Um_x$, $Um_y$ and $U{\cal T}$ are recovered as symmetries of the system at the special value of $\phi = \Phi/2$. We list the gauge transformations for $Um_x$ and $Um_y$ at $\phi = \Phi/2$ in Table~\ref{tab:lambda}. In the special case of a square lattice and Landau gauge, $\lambda_U= -\Phi yx,$ where $x = (\mathbf{r} - \mathbf{r}_0) \cdot \hat{\mathbf{x}}$. Since $\Phi$ is a multiple of $2\pi$ and $x,y$ are integers, this phase is also a multiple of $2\pi$. The flux-shifting operator is therefore simply $U = \mathbb{I}$, where $\mathbb{I}$ is the identity matrix. In summary, we have explicitly extended Zak's translation operators in a magnetic field to the discrete symmetries of the square lattice. In Appendix~\ref{app:triangular} we generalize the results to the symmetries of the triangular lattice. \subsection{BBH model} We apply the results of the previous section to derive the symmetry operators in the Benalcazar-Bernevig-Hughes (BBH) model~\cite{benalcazar2017electric,benalcazar2017quantized}. The model describes spinless electrons on a square lattice.
The Hamiltonian consists of nearest-neighbor hopping terms, whose amplitudes $\lambda_{x/y}$ and $\gamma_{x/y}$ are depicted in Fig.~\ref{fig:BBHmodell}. Since $\lambda_{x/y} \neq \gamma_{x/y}$, each unit cell contains four atoms. {Further, each square plaquette has $\pi$ flux, for a total flux $\phi = 4\pi$ per unit cell.} The flux periodicity is $\Phi = 8\pi$, corresponding to $2\pi$ flux per square plaquette. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{BBH.png} \caption{Lattice and hopping terms of the BBH model. The black dashed square indicates the unit cell. Blue dots indicate atoms, each with one orbital. The origin is at the left-bottom atom in the unit cell {indicated by $0$}. The hopping amplitudes $\gamma_{x/y}$ and $\lambda_{x/y}$ are real; the minus signs result from the magnetic flux $\phi=4\pi$, i.e., applying Eq.~(\ref{eq:peierls}) in Landau gauge.} \label{fig:BBHmodell} \end{figure} We now derive the symmetry operations in the presence of the magnetic field; these commutation relations were stated in Refs.~\onlinecite{benalcazar2017electric,benalcazar2017quantized}, but here we derive them as an application of our formalism. We start with the mirror symmetries: in zero field, the Hamiltonian is invariant under $m_x(\bar{x})$ and $m_y(\bar{y})$ where $\bar{x}, \bar{y}$ are half-integers. (The Hamiltonian is not invariant under reflections about lines containing the origin because $\gamma_{x/y} \neq \lambda_{x/y}$.) These mirror reflections flip the sign of the magnetic field and thus generically are not symmetries of the Hamiltonian at finite flux. However, since $\phi = 4\pi = \Phi/2$, the combined operations $Um_x(\bar{x})$ and $Um_y(\bar{y})$ are symmetries. We showed in the previous section that in the Landau gauge, the flux shifting operator $U = \mathbb{I}$ for this model. Therefore, at $\phi = 4\pi$, $m_x(\bar{x})$ and $m_y(\bar{y})$ are in fact symmetries of the Hamiltonian. The effect of the magnetic field is to change their commutation relations: using Table~\ref{tab:lambda} with $U = \mathbb{I}$ yields $ m_{x}(\bar x)m_{y}(\bar y) =m_{y}(\bar y)m_{x}(\bar x) e^{4iB\bar x\bar y} $. Since $B=\pi$ and $\bar x,\bar y$ are half-integers \begin{equation} \{m_{x}(\bar x), m_{y}(\bar y)\}=0, \end{equation} i.e., mirror symmetries in the BBH model anti-commute. We now consider a four-fold rotation. When $\gamma_x=\gamma_y$, $\lambda_x=\lambda_y$, the BBH model has a four-fold rotation symmetry $C_4(\frac{1}{2}, \frac{1}{2})$, as well as other four-fold rotation axes related by translation. Since $\phi=\Phi/2$, the system also has an effective time-reversal symmetry, $U\cal T$; since we established $U=\mathbb{I}$ for this model, $\mathcal{T}$ is a symmetry even at this finite field and acts by complex conjugation. In the absence of a magnetic field, time-reversal pairs eigenstates with $\pm i$ rotation eigenvalues. We now show that in the presence of a magnetic field, time-reversal pairs eigenstates of $C_4(\bar{x},\bar{y})$ in a more complicated way. \par Since our origin is chosen such that all lattice sites have integer coordinates, $x,y\in \mathbb Z$, the phase $e^{i\lambda_{C_4}}$ in Table~\ref{tab:lambda} takes values of $\pm e^{-i\pi/4}$, so that $e^{-2i\lambda_{C_4}}=i$. 
Therefore, given an eigenstate $|\xi\rangle$ of $C_4(\bar x,\bar y)$ with eigenvalue $\xi$, $\cal T|\xi\rangle$ is also an eigenstate of $C_4(\bar x,\bar y)$: \begin{align} C_4(\bar x,\bar y){\cal T}|\xi\rangle &= e^{i\lambda_{C_4}}\hat{C_4}(\bar x,\bar y){\cal T}|\xi\rangle \nonumber\\ &= {\cal T}e^{-i\lambda_{C_4}}\hat{C_4}(\bar x,\bar y)|\xi\rangle \nonumber\\ &= {\cal T}e^{-2i\lambda_{C_4}}C_4(\bar x,\bar y)|\xi\rangle \nonumber\\ &= {\cal T} i \xi |\xi\rangle =-i\xi^*{\cal T}|\xi\rangle. \label{eqn:BBHC4T} \end{align} Thus, $\cal T$ pairs $C_4(\bar x,\bar y)$ eigenstates with eigenvalues $\xi$ and $-i\xi^*$. This is an example of symmetry operators acting in unusual ways at finite field. \section{Momentum space representations} We now define how the magnetic symmetry operators act in momentum space. This requires first defining how the symmetries act on Bloch wave functions and then labelling the Bloch wave functions by irreducible representations (irreps) of the symmetry group at each momentum point. However, in the presence of magnetic flux, we cannot immediately define the Bloch wave functions because Bloch's theorem does not apply when the translation operators do not commute. To apply Bloch's theorem, we define an enlarged ``magnetic unit cell,'' chosen to contain an integer multiple of $2\pi$ flux. The translation vectors that span the magnetic unit cell are referred to as magnetic translation vectors. From Eq.~(\ref{eqn:TxTyDelta}), the magnetic translation operators commute and thus can be simultaneously diagonalized, forming an abelian subgroup $\mathbb T_M$ of the full translation group $\mathbb T$. Consequently, Bloch's theorem applies to the magnetic unit cell and eigenstates of the Hamiltonian can be labelled by wave vectors in the magnetic Brillouin zone. In Sec.~\ref{sec:mag_sym_3_momentum}, we define the Fourier transformed electron creation and annihilation operators in the magnetic unit cell. The operators necessarily have a ``sublattice'' index because the magnetic unit cell contains more than one non-magnetic unit cell. In Sec.~\ref{sec:kirrep}, we address how to label the Bloch wave functions by irreps of the little co-group at each momentum. Here we encounter another subtle point: since the little co-group is defined as a quotient group obtained from the space group mod magnetic translations, the little co-group only has a group structure if the magnetic translation group is a normal subgroup of the space group. Thus, not all magnetic unit cells are equal: to label Bloch wave functions by irreps of the little co-group, we must choose a magnetic unit cell such that $\mathbb{T}_M$ is a normal subgroup. After addressing this issue, we explain how to find the irreducible projective representations of the little co-group. \subsection{\label{sec:mag_sym_3_momentum} Symmetries in the magnetic Brillouin zone} \begin{figure} \centering \includegraphics[width=\linewidth]{unitcell.png} \caption{ Examples of (a) a $q$-by-$1$ unit cell and (b) a $q$-by-$q$ unit cell, taking $q=2$.} \label{fig:unitcell} \end{figure} We first consider the minimal magnetic unit cell in Landau gauge, which is a $q$-by-$1$ unit cell (see Fig.~\ref{fig:unitcell}(a)). For this choice of unit cell, the magnetic translation group $\mathbb{T}_M$ is generated by $T(\hat{\mathbf{x}})$ and $T(q\hat{\mathbf{y}})$. Now consider the layer group $p2$, generated by $C_2$ and lattice translations, for which $\mathbb{T}_M$ is a normal subgroup. 
$C_2$ acts exactly as in the non-magnetic case, {mapping a Bloch wave function at $\mathbf{k}$ to one at $-\mathbf{k}$.} However, $T(\hat{\mathbf y})$ acts in an unusual way, by mapping $k_x$ to $k_x+\phi$. This can be understood as follows: let $|\mathbf k\rangle$ be an eigenstate of $T(\hat{\mathbf x})$ such that $T(\hat{\mathbf x}) |\mathbf k\rangle = e^{ik_x}|\mathbf k\rangle$. Then $T(\hat{\mathbf y})|\mathbf k\rangle$ is also an eigenstate of $T(\hat{\mathbf x})$, with eigenvalue $e^{i(k_x + \phi)}$, i.e., \begin{equation} T(\hat{\mathbf x}) \left[ T(\hat{\mathbf y}) |\mathbf k\rangle \right] = e^{i(k_x+\phi)}T(\hat{\mathbf y})|\mathbf k\rangle \label{eq:Ty} \end{equation} Thus, $T(\hat{\mathbf{y}})$ shifts the eigenvalue of $T(\hat{\mathbf{x}})$ by $e^{i\phi}$. {Nonetheless, both $C_2$ and $T(\hat{\mathbf{y}})$ have the usual property that a Bloch state at $\mathbf{k}$ is mapped to another Bloch state at $\mathbf{k}'$.} This is not the case for the layer group $p4$, {with respect to which $\mathbb{T}_M$ is not a normal subgroup}. As we will show below, the symmetry operator $C_4$ mixes a Bloch state at $\mathbf{k}$ into a linear combination of Bloch states at other momenta, forming a $q^2$-dimensional representation. Thus, we are motivated to consider a $q$-by-$q$ unit cell (see Fig.~\ref{fig:unitcell}(b)), where, although the magnetic unit cell is larger, the symmetry matrices are the same size as in the $q$-by-$1$ case. In Appendix~\ref{app:sym_q1_qq}, we show that the representations obtained from these two choices of magnetic unit cell are the same up to a unitary transformation. However, the $q$-by-$q$ unit cell is more suitable for applying topological quantum chemistry because the corresponding magnetic translation group is a normal Abelian subgroup of the layer group $p4$. We now consider the $q$-by-$1$ and $q$-by-$q$ unit cells in detail for the group $p4$ to illustrate these points. \subsubsection{\label{sec:mag_sym_3_momentum_qby1}$q$-by-$1$ unit cell for $p4$} We first consider the $q$-by-$1$ unit cell shown in Fig.~\ref{fig:unitcell}(a). The coordinates of lattice sites are labeled by $(x,y)=(R_x,qR_y+j)$ where $R_x,R_y\in \mathbb Z$, $j=0,\dots,q-1$. The Fourier transformed electron creation and annihilation operators are defined by \begin{align} c^\dagger_{\mathbf R, j,\alpha}&=\frac{q}{(2\pi)^2}\int d\mathbf k e^{i (k_xR_x+k_yqR_y)}c^\dagger_{\mathbf k, j,\alpha} \label{eqn:FT_qby1_cd} \\ c_{\mathbf R, j,\alpha}&=\frac{q}{(2\pi)^2}\int d\mathbf k e^{-i (k_xR_x+k_yqR_y)}c_{\mathbf k, j,\alpha}, \label{eqn:FT_qby1_c} \end{align} where $\alpha$ labels orbital degrees of freedom on each site. For now, we ignore the $\alpha$ degree of freedom, but will add it later when necessary. The magnetic Brillouin zone is a torus with $k_x\in [0,2\pi)$, $k_y\in [0,2\pi/q)$. \par Using the Fourier transforms in Eqs.~(\ref{eqn:FT_qby1_cd}) and (\ref{eqn:FT_qby1_c}), we find the action of the symmetry operators in momentum space. A translation by one (non-magnetic) lattice vector in the $\hat{\mathbf y}$ direction is implemented by \begin{align} T(\hat{\mathbf y})&=\frac{q}{(2\pi)^2}\sum_j\int d\mathbf k~ e^{ik_y} c^\dagger_{\mathbf k+(\phi,0), j}c_{\mathbf k, j-1} \label{eq:Tyq} \end{align} Unlike the non-magnetic case, $T(\hat{\mathbf y})$ does not leave each $\mathbf{k}$ point invariant: it maps $(k_x,k_y)$ to $(k_x+\phi,k_y)$. Translations by the magnetic lattice vectors do leave $\mathbf{k}$ invariant. We now consider the four-fold rotation operator.
Using the function $\lambda_{C_4}$ in Table~\ref{tab:lambda}, \begin{align} C_4&=\frac{q}{(2\pi)^2}\int d\mathbf k \sum_{j,j'}\sum_{n=0}^{q-1} \frac1q e^{i(\phi jj'-(k_y+2\pi n /q)j-k_xj')} \nonumber \\ &c^\dagger_{(k_x,k_y),j}c_{(k_y+2\pi n/q,~-k_x~\text{mod}~ 2\pi/q),j'} \label{eq:C4q} \end{align} Thus, the situation for $C_4$ is much worse than for $T(\hat{\mathbf y})$: $C_4$ does not rotate one $\mathbf{k}$ point to another, but instead mixes a state at $(k_x,k_y)$ into a linear combination of states at the different points $(k_y+2\pi n/q,-k_x)$ $n=0,1,\dots,q-1$. \subsubsection{\label{sec:mag_sym_3_momentum_qbyq}$q$-by-$q$ unit cell for $p4$} We now consider the $q$-by-$q$ unit cell shown in Fig.~\ref{fig:unitcell}(b). The coordinates of lattice sites are labeled by $(x,y)=(qR_x+j_x,qR_y+j_y)$ where $R_x,R_y\in \mathbb Z$ label a magnetic unit cell and $j_x,j_y=0,\dots,q-1$ label the coordinates of atoms within. The Fourier transformed electron creation and annihilation operators are defined by \begin{align} \label{eqn:FT_qbyq_cd} c^\dagger_{\mathbf R, \mathbf{j},\alpha}&=(\frac{q}{2\pi})^2\int d\mathbf k e^{i (k_xqR_x+k_yqR_y)}c^\dagger_{\mathbf k, \mathbf{j},\alpha} \\ c_{\mathbf R, \mathbf{j},\alpha}&=(\frac{q}{2\pi})^2\int d\mathbf k e^{-i (k_xqR_x+k_yqR_y)}c_{\mathbf k, \mathbf{j},\alpha} \label{eqn:FT_qbyq_c} \end{align} Again we omit the orbital degrees of freedom $\alpha$ in this section. The magnetic Brillouin zone is a torus with $k_x, k_y\in [0,2\pi/q)$.\par Using the Fourier transforms in Eqs.~(\ref{eqn:FT_qbyq_cd}) and (\ref{eqn:FT_qbyq_c}) and plugging $\lambda_g$ from Table~\ref{tab:lambda} into Eq.~(\ref{eqn:Gg2}), the magnetic $T(\hat{\mathbf y})$ and $C_4$ symmetries are~\cite{herzog2020hofstadter} \begin{align} T(\hat{\mathbf y})&=(\frac{q}{2\pi})^2\int d\mathbf k~e^{ik_y}\sum_{j_x,j_y}e^{-i\phi j_x} c^\dagger_{\mathbf k, j_x,j_y}c_{\mathbf k, j_x,j_y-1} \label{eq:Tyqq} \end{align} and \begin{align} C_4&=(\frac{q}{2\pi})^2\int d\mathbf k~\sum_{j_x,j_y}e^{-i\phi j_x j_y} e^{-i(C_4\mathbf k \cdot \mathbf j-\mathbf k \cdot \mathbf j')} \nonumber\\ &\qquad c^\dagger_{(-k_y,k_x), j_x,j_y}c_{(k_x,k_y), j_x',j_y'} \label{eq:C4qq} \end{align} where $\mathbf j'=(j_x',j_y')$ is a function of $j_x,j_y$ that satisfies $j_x'=j_y \mod q$ and $j_y'=-j_x \mod q$. In Eqs.~(\ref{eq:Tyqq}) and (\ref{eq:C4qq}), the action of the symmetry operator on $\mathbf{k}$ is identical to its action in the absence of a magnetic field, i.e., translation leaves $\mathbf{k}$ invariant and a rotation in space rotates $\mathbf{k}$. This is an improvement over the $q$-by-$1$ magnetic unit cell (Eqs.~(\ref{eq:Tyq}) and (\ref{eq:C4q})), {for which a rotation mixed a Bloch state into a linear combination of several Bloch states.} \subsection{\label{sec:kirrep}Irreps at high symmetry points} We now address how to determine irreps of the symmetry group at each momentum. A Bloch wave function at a particular momentum $\mathbf{k}$ transforms as a representation of the little group at $\mathbf{k}$, denoted $G_\mathbf{k}$, which consists of all the space group operations that leave $\mathbf{k}$ invariant up to a reciprocal lattice vector: \begin{equation} \label{eq:defGk} G_\mathbf{k} = \lbrace g \in G | g\mathbf{k} \equiv \mathbf{k} \rbrace, \end{equation} where $\equiv$ is defined by equality up to a reciprocal lattice vector. 
Since the lattice translations are always represented by Bloch phases in the representations, it is useful to label the wave functions by irreps of the little co-group, defined as \begin{equation} \label{eq:deftildeGK} \widetilde{G}_\mathbf{k}=G_\mathbf{k}/{\mathbb T}_M \end{equation} As mentioned above, for the little co-group to satisfy the definition of a group, $\mathbb{T}_M$ must be a normal subgroup of $G_\mathbf{k}$, i.e., for all $g\in G_\mathbf{k}$, $t\in \mathbb{T}_M$, $g^{-1} t g \in \mathbb{T}_M$. One can check that for the $q$-by-$1$ unit cell, the magnetic translation group is a normal subgroup of the layer group $p2$, but it is not normal for the layer groups containing three- or four-fold rotations (because, for example, $C_4^{-1} T(\hat{\mathbf{x}}) C_4 = T(\hat{\mathbf{y}})^{-1}$, which is not in the magnetic translation group for the $q$-by-$1$ unit cell). Thus, we use the $q$-by-$q$ unit cell for layer groups with three- or four-fold rotations. In summary, under magnetic flux, the little co-groups and their irreps differ from their zero-flux analogues in two important ways: first, in the presence of magnetic flux, the little co-groups include sublattice translation symmetries; and second, the irreps of little co-groups in the presence of magnetic flux are projective representations corresponding to the 2-cocycle defined by the flux. We now study some examples: in Tables~\ref{tab:p2}, \ref{tab:p4} and \ref{tab:p4TI} we summarize the projective irreps at high symmetry points for the layer groups $p2$, $p4$, $p4/m'$ at flux $\phi=\pi$. For later convenience we have assumed there is spin-orbit coupling, i.e., $C_n^n=-1$. Notice that the character tables are not square, which is a general feature of projective representations. The projective irreps corresponding to a particular 2-cocycle can be considered as a subset of non-projective representations of a larger group; the character table of that larger group will be square. To ensure that we have found all the projective irreps, we use the theorem by Schur~\cite{bradley2010mathematical} stating that for irreducible projective representations with a particular $2$-cocycle, \begin{equation} \label{eqn:schur} \sum_{\rho} \left(\text{dim}(\rho)\right)^2=|\widetilde{G}_{\mathbf k}|, \end{equation} where the sum runs over all projective irreps $\rho$ of $\widetilde{G}_{\mathbf k}$ with the specified $2$-cocycle and $\widetilde{G}_{\mathbf k}$ is the little co-group defined above. (Notice this formula does not apply to anti-unitary groups.) The calculation of the irreps of the little co-groups is shown in Appendix~\ref{app:irreps}, with the (anti)commutation relations for the magnetic symmetries given in Appendix~\ref{app:rotation}. In the remainder of this section, we sketch the calculation for the simplest non-trivial case, layer group $p2$ at $\pi$ flux. For the $2$-by-$1$ unit cell, the group of magnetic lattice translations is ${\mathbb T}_M=\{T(n_1{\hat{\mathbf x}}+2n_2{\hat{\mathbf y}})|n_1,n_2\in \mathbb Z\}$ and the Brillouin zone is $[-\pi,\pi)\times[-\pi/2,\pi/2)$. We now determine the high-symmetry points. Since $C_2$ symmetry maps $(k_x,k_y)$ to $(-k_x,-k_y)$, there are four momenta that are symmetric under $C_2$ up to a magnetic reciprocal lattice vector: $(0,0),(0,\pi/2),(\pi,0),(\pi,\pi/2)$. Since $T(\hat{\mathbf y})$ maps $(k_x,k_y)$ to $ (k_x+\pi,k_y)$ (Eq.~(\ref{eq:Ty})), $T(\hat{\mathbf y})C_2$ maps $(k_x,k_y)$ to $ (-k_x+\pi,-k_y)$.
Therefore, there are four $T(\hat{\mathbf y})C_2$ symmetric momenta, $(\pm \pi/2,0)$ and $(\pm \pi/2, \pi/2)$. We derive in Appendix~\ref{app:irreps} that the $C_2$ eigenvalues at $(\pi,0)$ are the same as those at $(0,0)$, while the $C_2$ eigenvalues at $(0,\pi/2)$ are opposite to those at $(\pi,\pi/2)$. The same relations hold for the $T(\hat{\mathbf y})C_2$ symmetric points. In conclusion, there are two independent $C_2$ symmetric points, $\Gamma=(0,0)$ and $Y=(0,\pi/2)$, and we find that each has two one-dimensional irreps labeled by $C_2$ eigenvalues $+i$ and $-i$. There are also two independent $T(\hat{\mathbf y})C_2$ symmetric points, $X=(\pi/2,0)$ and $M=(\pi/2,\pi/2)$, and each has two one-dimensional irreps labeled by $T(\hat{\mathbf y})C_2$ eigenvalues $+i$ and $-i$. Since each little co-group contains the identity element and either $C_2$ or $T(\hat{\mathbf y})C_2$, $|\widetilde{G}_{\mathbf k}|=2$ for these points. Thus, Eq.~(\ref{eqn:schur}) is satisfied, which means we have found all the projective irreps. \begin{table}[] \centering \begin{tabular}{c|c|c|c|c|c|c|c|c} &\multicolumn{2}{c|}{$X(\pi/2,0)$} &\multicolumn{2}{c|}{$Y(0,\pi/2)$} &\multicolumn{2}{c|}{$\Gamma(0,0)$} &\multicolumn{2}{c}{$M(\pi/2,\pi/2)$} \\ \hline Irrep & $X_1^{(p2)}$ & $X_2^{(p2)}$ & $Y_1^{(p2)}$ & $Y_2^{(p2)}$ & $\Gamma_1^{(p2)}$ & $\Gamma_2^{(p2)}$ & $M_1^{(p2)}$ & $M_2^{(p2)}$\\ \hline $C_2$ & & & $+i$ & $-i$ & $+i$ & $-i$ & & \\ $T(\hat{\mathbf y})C_2$ & $+i$ & $-i$ &&&& & $+i$ & $-i$ \end{tabular} \caption{High symmetry momenta (first row) and the irreps (second row) of their little co-group for the group $p2$. The third and fourth rows list the eigenvalue of the indicated symmetry; the row is blank if the symmetry is not in the little co-group.} \label{tab:p2} \end{table} \begin{table}[] \begin{tabular}{c|c|c|c|c} &\multicolumn{2}{c|}{$X(\pi/2,0)$} &\multicolumn{2}{c}{$Y(0,\pi/2)$}\\ \hline Irrep & $X_1$ & $X_2$ & $Y_1$ & $Y_2$\\ \hline $C_2$ & $i\sigma_z$ & $-i\sigma_z$ & $i\sigma_z$ & $-i\sigma_z$\\ $T(\hat{\mathbf x})$ & $\sigma_x$ & $\sigma_x$ & $\sigma_z$ & $\sigma_z$\\ $T(\hat{\mathbf y})$ & $\sigma_z$ & $\sigma_z$ & $\sigma_y$ & $\sigma_y$\\ $T(\hat{\mathbf x})T(\hat{\mathbf y})$ & $-i\sigma_y$ & $-i\sigma_y$ & $-i\sigma_x$ & $-i\sigma_x$ \\ \hline \end{tabular} \vspace{10pt} \begin{tabular}{c|c|c|c|c} \hline &\multicolumn{4}{c}{$\Gamma(0,0)$} \\ \hline Irrep & $\Gamma_1$ & $\Gamma_2$ & $\Gamma_3$ & $\Gamma_4$ \\ \hline $C_4T(\hat{\mathbf x})$ & $\begin{pmatrix} 1&\\&i \end{pmatrix}$ & $\begin{pmatrix} i&\\&-1 \end{pmatrix}$ & $\begin{pmatrix} -1&\\&-i \end{pmatrix}$ & $\begin{pmatrix} -i&\\&1 \end{pmatrix}$\\ $T(\hat{\mathbf x})T(\hat{\mathbf y})$ & $i\sigma_z$ & $i\sigma_z$ & $i\sigma_z$ & $i\sigma_z$ \end{tabular} \vspace{10pt} \begin{tabular}{c|c|c|c|c} \hline &\multicolumn{4}{c}{$M(\pi/2,\pi/2)$} \\ \hline Irrep & $M_1$ & $M_2$ &$M_3$ &$M_4$\\ \hline $C_4$ &$\begin{pmatrix} \epsilon &\\&\epsilon^*\end{pmatrix}$ &$\begin{pmatrix} -\epsilon^* &\\&\epsilon\end{pmatrix}$ &$\begin{pmatrix} -\epsilon &\\&-\epsilon^*\end{pmatrix}$ &$\begin{pmatrix} \epsilon^* &\\&-\epsilon\end{pmatrix}$ \\ $T(\hat{\mathbf x})T(\hat{\mathbf y})$ & $i\sigma_z$ & $i\sigma_z$ & $i\sigma_z$ &$i\sigma_z$ \end{tabular} \caption{High symmetry momenta (first row) and the irreps (second row) of their little co-group for the group $p4$.
Subsequent rows list the eigenvalue of the indicated symmetry with $\epsilon=e^{i\pi/4}$; the row is blank if the symmetry is not in the little co-group.} \label{tab:p4} \end{table} \begin{table}[] \begin{tabular}{c|c|c|c|c|c|c|c} &{$X(\pi/2,0)$} &{$Y(0,\pi/2)$} &\multicolumn{2}{c|}{$\Gamma(0,0)$} &\multicolumn{3}{c}{$M(\pi/2,\pi/2)$} \\ \hline Irrep &$X_1X_2$ &$Y_1Y_2$ &$\Gamma_1\Gamma_4$&$\Gamma_2\Gamma_3$ &$M_1M_1$ &$M_3M_3$ &$M_2M_4$ \end{tabular} \caption{High symmetry momenta (first row) and the irreps (second row) of their little co-group for the group $p4/m'$. The notation $\Pi_i\Pi_j$ indicates that $\Pi_i$ and $\Pi_j$ are paired by $\cal TI$ symmetry, where $\Pi_i$ is an irrep of the corresponding little co-group with respect to layer group $p4$, shown in Table~\ref{tab:p4}.} \label{tab:p4TI} \end{table} \section{\label{sec:TQC}Topological quantum chemistry in a magnetic field} Finally we turn to the theory of TQC. TQC classifies topological crystalline insulators (TCIs) by enumerating all trivial phases in each space group, where a trivial phase is defined as one where exponentially localized Wannier functions exist and transform locally under all symmetries. A group of bands can be identified as a TCI if it is not in the space of trivial phases. Together, the Wannier functions corresponding to a single band (or group of bands) transform as a representation of the full space group, called a band representation \cite{zak1980symmetry,zak1981band,michel1999connectivity,michel2000elementary,michel2001elementary,bradlyn2017topological,cano2018building}. TQC labels each band representation by how its Bloch wave functions transform under symmetry at high symmetry momenta, i.e., by a set of irreps of the little co-group at each high symmetry momentum; this label is known as a symmetry indicator~\cite{po2017symmetry}. Symmetry indicators provide a practical way to identify many TCIs: specifically, a group of bands whose irreps at high symmetry momenta are not consistent with any of the trivial phases must be topological. In Sec.~\ref{sec:magneticWannier} we describe how to construct a basis of symmetric magnetic Wannier functions. We use this basis in Sec.~\ref{sec:inducedrep} to derive how the space group symmetries act on the Wannier functions; the symmetry matrices comprise the band representation. Fourier transforming the band representation yields its symmetry indicator. \subsection{Magnetic Wannier functions} \label{sec:magneticWannier} We now describe how to construct a basis of symmetric Wannier functions for a magnetic unit cell. Given a site $\mathbf{q}$, which will serve as a Wannier center, the site-symmetry group $G_\mathbf{q}$ is defined as the set of symmetries that leave $\mathbf{q}$ invariant, i.e., $G_\mathbf{q} = \lbrace g\in G | g\mathbf{q} = \mathbf{q} \rbrace$. The site-symmetry group defines a coset decomposition of the space group, \begin{equation} \label{eqn:cosetdecomp} G=\bigcup_{\alpha} g_\alpha G_{\mathbf q} \ltimes {\mathbb T}_M, \end{equation} where $G$ is the space group, ${\mathbb T}_M$ is the {magnetic} lattice translation group, and $\alpha=1, \dots, n$, where $n=|G/{\mathbb T}_M|/|G_{\mathbf q}|$ is the multiplicity of the Wyckoff position containing $\mathbf{q}$. The symmetries $g_\alpha$ are coset representatives. The choice of coset representatives is not unique; a different choice will yield a band representation related to the original by a unitary transformation, while the symmetry indicator is unchanged. 
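As a concrete illustration of this multiplicity count (the numbers refer to the $p4$ example analyzed below; the counting itself is elementary), consider the layer group $p4$ at $\pi$ flux with a $2$-by-$2$ magnetic unit cell. The quotient $G/{\mathbb T}_M$ contains the four rotations and the four sublattice translations $\{E,\,T(\hat{\mathbf x}),\,T(\hat{\mathbf y}),\,T(\hat{\mathbf x})T(\hat{\mathbf y})\}$, so $|G/{\mathbb T}_M|=16$. A site whose site-symmetry group has order four therefore belongs to a Wyckoff position of multiplicity $n=16/4=4$, consistent with the multiplicity-four positions $4a$ and $4b$ in Fig.~\ref{fig:WC}(b), while a site with a site-symmetry group of order two has multiplicity $n=16/2=8$, as for the position $8c$.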
The coset representatives define positions ${\mathbf q}_\alpha = g_\alpha {\mathbf q}$ that form the orbit of $\mathbf q$ within the magnetic unit cell. Together, these points are part of the same Wyckoff position, whose multiplicity $n$ is equal to the number of points in the orbit of $\mathbf{q}$ in the magnetic unit cell. Unlike the case of zero magnetic field, the set of coset representatives $g_\alpha$ includes pure translations within the magnetic unit cell. Fig.~\ref{fig:WC} shows the Wyckoff positions for the groups $p2$, $p4$, and $p4/m'$. \begin{figure}[b] \centering \includegraphics[width=\linewidth]{WC_July.png} \caption{Wyckoff positions at magnetic flux $\pi$ for (a) the $2$-by-$1$ unit cell for group $p2$ and (b) the $2$-by-$2$ unit cell for the groups $p4$ and $p4/m'$. Each Wyckoff position is labelled by its multiplicity and an alphanumeric character. The general Wyckoff position, whose site-symmetry group consists of only the identity, is not shown.} \label{fig:WC} \end{figure} Suppose there are $n_{\mathbf q}$ orbitals centered at $\mathbf q$. These orbitals are described by $n_{\mathbf q}$ Wannier functions $|W_{i1}\rangle$, where $i=1\dots n_{\mathbf q}$. The Wannier functions transform under symmetries $g\in G_{\mathbf q}$ as a projective representation $\rho$ of $G_{\mathbf q}$, \begin{equation} \label{eqn:rep_sitegroup} g|W_{i1}\rangle =\sum_{j=1}^{n_{\mathbf q}}[\rho(g)]_{ji}|W_{j1}\rangle. \end{equation} Applying the representatives $g_\alpha$ in the coset decomposition of the space group $G$ in Eq.~(\ref{eqn:cosetdecomp}) to $|W_{i1}\rangle$ gives another Wannier function \begin{equation} \label{eqn:Walpha} |W_{i\alpha}\rangle = g_\alpha |W_{i1}\rangle, \end{equation} localized at ${\mathbf q}_\alpha$. All these Wannier functions $|W_{i\alpha}\rangle$, where $i=1\dots n_{\mathbf q}$ and $\alpha=1\dots n$, form a basis for an induced representation of $G$, as we now explain. \par \subsection{Induced representation} \label{sec:inducedrep} In this section, we derive how the space group symmetries act on the Wannier functions. This provides an explicit construction of a band representation with Wannier functions as a basis. Fourier transforming the band representation gives the irreps of the little co-group at each high-symmetry point, i.e., the symmetry indicator. Consider a group element $h g_{\alpha} \in G$. The coset decomposition in Eq.~(\ref{eqn:cosetdecomp}) implies that $hg_\alpha$ can be written in the form \begin{equation} \label{eqn:hg_coset} hg_\alpha = e^{if_{\alpha\beta}(h)}\{E|{\mathbf t}_{\alpha\beta}(h)\}g_\beta g \end{equation} where $\mathbf t_{\alpha\beta}(h) = h{\mathbf q}_\alpha -{\mathbf q}_\beta$ and $ \{E|\mathbf{t}_{\alpha\beta}\}\in \mathbb{T}_M$, $g\in G_{\mathbf q}$, and the coset representative $g_\beta$ are uniquely determined by the coset decomposition. The remaining phase factor $f_{\alpha\beta}(h)$ is due to the non-trivial 2-cocycles. For two-dimensional systems without a magnetic field, $f_{\alpha\beta}(h)\equiv0$. In the presence of a magnetic field, $f_{\alpha\beta}(h)$ is generically nonzero. The phase factor $f_{\alpha\beta}(h)$ is the new ingredient that appears in a magnetic field and is a key result of the present work; it does not appear in the non-magnetic theory (for example, it does not appear in Eq.~(6) in Ref.~\onlinecite{cano2018building}). This phase factor is gauge invariant because it results from the commutation relations between rotations and translations (see Appendix~\ref{app:rotation}).
We briefly give two examples to show how this phase factor appears. As a first example, consider the layer group $p1$ with a $2$-by-$2$ unit cell, corresponding to $\pi$ flux. {Starting from a Wannier function centered at a general position $\mathbf{q} = (x,y)$, the coset representatives in Eq.~(\ref{eqn:cosetdecomp}) can be chosen as $g_1=\{E|0\}$, $g_2 = T(\hat{\mathbf x}) $, $g_3 = T(\hat{\mathbf y})$, $g_4=T(\hat{\mathbf x})T(\hat{\mathbf y})$. } Now consider the left-hand side of Eq.~(\ref{eqn:hg_coset}) with $h=T(\hat{\mathbf y})$, $g_\alpha=T(\hat{\mathbf x})$. Then on the right-hand side of Eq.~(\ref{eqn:hg_coset}), $g_\beta=T(\hat{\mathbf x})T(\hat{\mathbf y})$, $g=E$, and ${\mathbf t}_{\alpha\beta}=0$. Since $T(\hat{\mathbf y})T(\hat{\mathbf x}) = e^{i\pi} T(\hat{\mathbf x})T(\hat{\mathbf y})$, $f_{\alpha\beta}(h)=\pi$. As a second example, consider layer group $p4$ with a $2$-by-$2$ unit cell, corresponding again to $\pi$ flux. {Starting from a Wannier function centered at $\mathbf{q} = (\frac{1}{2}, \frac{1}{2})$, the coset representatives in Eq.~(\ref{eqn:cosetdecomp}) can be chosen as $g_1=\{E|0\}$, $g_2 = T(\hat{\mathbf x}) $, $g_3 = T(\hat{\mathbf y})$, $g_4=T(\hat{\mathbf x})T(\hat{\mathbf y})$.} Now consider the left-hand side of Eq.~(\ref{eqn:hg_coset}) with $h=C_4$, $g_\alpha=T(\hat{\mathbf x})$. The coset decomposition uniquely determines $g_\beta=T(\hat{\mathbf x})T(\hat{\mathbf y})$, $g=C_4(\frac{1}{2},\frac{1}{2})$, {and $\mathbf{t}_{\alpha\beta} = (-2,0)$ on the right-hand side of Eq.~(\ref{eqn:hg_coset}).} Since $ C_4 T(\hat{\mathbf x}) = e^{i3\pi/4}\{E|(-2,0)\} T(\hat{\mathbf x})T(\hat{\mathbf y})C_4(1/2,1/2) $, the extra phase term $f_{\alpha\beta}(h)=3\pi/4$. As discussed at the start of Sec.~\ref{sec:TQC}, the set of Wannier functions {centered at all $\mathbf q_\alpha$} forms a basis for a band representation, which we denote $\rho_G$. Given a space group symmetry $h\in G$, Eq.~(\ref{eqn:hg_coset}) determines the matrix elements of $\rho_G(h)$ in the basis of Wannier functions defined in Eq.~(\ref{eqn:Walpha}) by: \begin{align} \label{eqn:inducedR} \rho_{G}(h) |W_{i\alpha}(\mathbf r-\mathbf t)\rangle &= e^{if_{\alpha\beta}(h)}[\rho(g)]_{ji}|W_{j\beta}(\mathbf r-R\mathbf t-{\mathbf t}_{\alpha\beta})\rangle \end{align} where $R$ is the rotational part of $h$, $\rho(g)$ is the given representation defined in Eq.~(\ref{eqn:rep_sitegroup}), $\mathbf t_{\alpha\beta}(h) = h{\mathbf q}_\alpha -{\mathbf q}_\beta$ and a sum over $j=1\dots n_{\mathbf q}$ is implied. Substituting the Fourier transformed Wannier functions, \begin{align} |W_{j\beta}(\mathbf r-\mathbf t) \rangle = \int d\mathbf k e^{i{\mathbf k}\cdot{\mathbf t}} |a_{j\beta}(\mathbf k,\mathbf r)\rangle, \\ |a_{j\beta}(\mathbf k,\mathbf r)\rangle = \sum_{\mathbf t} e^{-i{\mathbf k}\cdot{\mathbf t}} |W_{j\beta}(\mathbf r-\mathbf t) \rangle, \end{align} into Eq.~(\ref{eqn:inducedR}) yields \begin{align} \label{eqn:inducedk} \rho_{G}(h) |a_{i\alpha}(\mathbf k,\mathbf r)\rangle &= e^{if_{\alpha\beta}(h)-iR{\mathbf k}\cdot {\mathbf t}_{\alpha\beta}}[\rho(g)]_{ji}|a_{j\beta}(R\mathbf k,\mathbf r)\rangle \end{align} From Eq.~(\ref{eqn:inducedk}), a representation of the little co-group (defined in Sec.~\ref{sec:kirrep}) is determined from $\rho_G$ by restricting each matrix $\rho_G(h\in \widetilde{G}_\mathbf{k})$ to only the rows and columns corresponding to Fourier-transformed Wannier functions at $\mathbf{k}$.
The set of irreps obtained at all $\mathbf{k}$ determines the symmetry indicator following the procedure we introduced in Ref.~\onlinecite{fang2021filling}, which is summarized in Appendix~\ref{app:TQC}. We now derive the symmetry indicator classification for a few examples. \subsection{Examples} \label{sec:applications} We apply our formalism of TQC in a magnetic flux to three magnetic layer groups with $\pi$ flux: $p2$, $p4$, and $p4/m'$. In each case, we discuss the stable symmetry indicator classification; the derivations are in Appendix~\ref{app:TQC}. \par \subsubsection{p2} For layer group $p2$ with $\pi$ flux, we choose a $2$-by-$1$ magnetic unit cell, following the discussion in Sec.~\ref{sec:mag_sym_3_momentum}. The symmetry indicator has a $\mathbb{Z}_4$ classification. The indicator for a particular group of bands is \begin{multline} \label{eqn:indexp2} \text{index} = \#\Gamma_1-\#\Gamma_2+\#X_1-\#X_2 +\#Y_1-\#Y_2 \\ +\#M_1-\#M_2-N \mod 4 \end{multline} where $\#\Pi_i$ indicates the number of times the irrep $\Pi_i$ at the high symmetry point $\Pi$ appears in the bands and $N=\#\Gamma_1+\#\Gamma_2=\#X_1+\#X_2=\#Y_1+\#Y_2=\#M_1+\#M_2$ is the filling per magnetic unit cell. \par {This indicator Eq.~(\ref{eqn:indexp2}) is exactly the same as the Chern number indicator Eq.~(3) in Ref.~\onlinecite{matsugatani2018universal} in the special case of $\pi$-flux and spinful electrons, i.e.,} \begin{equation} \label{eqn:Chern} e^{i\pi (C/q-\bar{\rho})}=(-)^{2SN}w_{C_2}^\Gamma w_{C_2}^Y w_{T(\hat{\mathbf y})C_2}^X w_{T(\hat{\mathbf y})C_2}^M \end{equation} where {at flux $\pi$}, $q=2$; $\bar{\rho}=N/2$ is the filling per non-magnetic primitive unit cell; $S=1/2$ is the spin (angular momentum) quantum number; and $w_{g}^{\Pi}$ is the product of eigenvalues of the symmetry $g$ for all filled bands at momentum $\Pi$. \subsubsection{p4 } For layer group $p4$ with $\pi$ flux, we choose $2$-by-$2$ unit cell, following the discussion in Sec.~\ref{sec:mag_sym_3_momentum}. The symmetry indicator has a $\mathbb{Z}_8$ classification. The indicators for a particular group of bands are determined by: \begin{multline} \label{eqn:indexp4} \text{index}=2\# \Gamma_1+4\# \Gamma_2-2\# \Gamma_3+\# M_1+3\# M_2\\ -3\# M_3-\# M_4+4\# X_1 \mod 8 \end{multline} To understand this indicator, we compare the new index to the symmetry indicator formula for Chern number in Ref.~\onlinecite{fang2012bulk}: \begin{equation} \label{eqn:Chernnumber} e^{i\frac{\pi}{2} C}=(-)^{2SN}w_{C_4}^\Gamma w_{C_2}^X w_{C_4}^M, \end{equation} which, in terms of irreps, is given by (derivation in Appendix~\ref{app:Chern}) \begin{align} \label{eqn:ChernumberIrrep} C = 2N &+\#\Gamma_1+\#\Gamma_3-\#\Gamma_2-\#\Gamma_4 \nonumber \\ &+2(\#M_2+\#M_4) \mod 4 \end{align} $N$ is always an even integer due to the two-dimensional irreps (shown in Table~\ref{tab:p4}). We conclude \begin{equation} C = \text{index} \mod 4 \end{equation} The topological phase with index equal to $4\mod 8$ goes beyond the earlier symmetry indicator given by Eq.~(\ref{eqn:Chernnumber}). We leave it to future work to determine whether this index is a stronger indicator for the Chern number or has a different physical meaning. \subsubsection{p4/m'} For layer group $p4/m'$ with $\pi$ flux, we choose $2$-by-$2$ unit cell, following the discussion in Sec.~\ref{sec:mag_sym_3_momentum}. The symmetry indicator has a $\mathbb{Z}_2$ classification. 
The indicator for a particular group of bands is determined by: \begin{align} \label{eqn:index1p4TI} \text{index}&= N/4 \mod 2, \end{align} where $N/4\equiv \bar \rho$ is the filling per original unit cell. Notice that each band in this group is four-fold degenerate (see Table~\ref{tab:p4TI} in Sec.~\ref{sec:kirrep}), and hence $\bar \rho \in \mathbb Z$. The group $p4/m'$ is generated by a four-fold rotation and the product of time-reversal and inversion symmetry $\cal TI$. As is well known, $\cal TI$ prevents a non-vanishing Chern number~\cite{bernevig2013topological} and the absence of $\cal T$ prevents the existence of a strong topological insulator~\cite{fu2006time}. Since $\cal T$ and $\cal I$ are not separately symmetries, there is no mirror symmetry and hence no mirror Chern number. Thus, our stable index labels a new phase that only exists in systems with magnetic flux. This phase is realized in the model we present in Sec.~\ref{sec:Model}. However, it does not realize an anomalous gapless boundary state because when the boundary is opened, the sublattice translation symmetries that protect the phase are broken. \par \section{\label{sec:Model} Application to a quadrupole insulator} {In this section, we apply our results to a model on the square lattice. At zero flux, this model is a quadrupole insulator that exhibits corner states. Since the symmetries that protect the corner states are preserved in the presence of a perpendicular magnetic field, the corner states must survive when magnetic flux is introduced. We use the formalism developed in the previous sections to verify the presence of corner states using symmetry indicators. Finally, we show that at a critical magnetic flux, the bulk gap closes and the corner states disappear, as shown in the Hofstadter butterfly spectrum in Fig.~\ref{fig:Hof}. We use the symmetry indicators to verify that when the corner states disappear, the symmetry indicator vanishes.} {Our results provide a new probe of the higher-order topology in the model, i.e., the presence of a gap-closing phase transition in the presence of a magnetic field, which may be easier to observe than probing the corner states directly. } \subsection{\label{sec:Model_model} Model} \begin{figure} \centering \includegraphics[width=\linewidth]{WBBH.png} \caption{Hofstadter spectrum of the OAL model. The grey states are calculated with periodic boundary conditions and show the bulk gap closing at a critical flux. The red states are calculated with open boundary conditions and show the disappearance of the corner states upon bulk gap closure. The spectrum is computed for dimensions $L_x=200$, $L_y=10$ and parameters $\lambda=1$, $\gamma=0.5$.} \label{fig:Hof} \end{figure} {We study a model proposed by Wieder et al.\ in Ref.~\onlinecite{wieder2020strong}} at zero flux that has the same momentum-space Hamiltonian as the {$C_4$-symmetric} Benalcazar-Bernevig-Hughes (BBH) model, which is at $4\pi$ flux per unit cell~\cite{benalcazar2017quantized,benalcazar2017electric}. {This model was given as an example in Fig.~3 and Appendix~A of Ref.~\onlinecite{wieder2020strong}}. Yet the two models have some fundamental differences: while the BBH model has four atoms per unit cell and one orbital per atom, Wieder's model has one atom sitting at the origin of the unit cell and four orbitals per atom. Since the position of atoms in the unit cell will be important when we include magnetic flux, the two models have different Hofstadter spectra.
{Further, the BBH model describes spinless fermions, while Wieder's model describes fermions with spin-orbit coupling. As a result, the symmetry representations for the two models are different.} In momentum space the Hamiltonian is \begin{align} H(\mathbf k)&=(v_m+t_1(\cos(k_x)+\cos(k_y)))\Gamma_3 \nonumber \\ &+t_2(\cos(k_x)-\cos(k_y))\Gamma_4 \nonumber \\ &+u \sin(k_x)\Gamma_1+u \sin(k_y)\Gamma_2, \end{align} where $\Gamma_1=\tau_y\sigma_y$, $\Gamma_2=\tau_y\sigma_x$, $\Gamma_3=\tau_z$, and $\Gamma_4=\tau_x$. {The Pauli matrices $\tau$ and $\sigma$ together span the orbital space of each atom.} In the limit $t_1=t_2=u/\sqrt 2=\lambda/\sqrt 2$ and $v_m=\sqrt 2 \gamma$, this Hamiltonian is equivalent to the BBH Hamiltonian after a basis change. In this section, we adopt these parameters and set $\lambda=1$ and $\gamma=0.5$ so that the system is a quadrupole insulator at zero flux. The generators of the symmetries of this Hamiltonian take the matrix forms: \begin{align} \label{eqn:C4z} C_{4z}&=\tau_z\left(\frac{{\mathbb I}_\sigma-\sigma_z}{2}\right),\\ M_x&=\sigma_x,\\ {\cal TI}&=\sigma_y {\cal K}, \end{align} where $\cal K$ is the complex conjugation. There is also a chiral symmetry that anti-commutes with the Hamiltonian \begin{equation} \Gamma_5=\tau_y\sigma_z \label{eq:chiral} \end{equation} {To incorporate the effect of a magnetic field, we need the real space Hamiltonian, given by:} \begin{align} H&=\sum_{i,j\in \mathbb Z}\mathbf t_x c^\dagger_{(i+1,j)} c_{(i,j)}+\mathbf t_y c^\dagger_{(i,j+1)} c_{(i,j)}+h.c. \nonumber \\ &\quad +v_m\Gamma_3 c^\dagger_{(i,j)} c_{(i,j)} \label{eqn:Ham} \end{align} where $\mathbf{t}_{x,y}$ are hopping matrices given by \begin{align} \mathbf t_x=(t_1\Gamma_3+t_2\Gamma_4)/2-iu\Gamma_1/2\\ \mathbf t_y=(t_1\Gamma_3-t_2\Gamma_4)/2-iu\Gamma_2/2 \end{align} When a magnetic field in the $z$-direction is turned on, the Hamiltonian in Eq.~(\ref{eqn:Ham}) requires the Peierls substitution \cite{hofstadter1976energy}. Working in Landau gauge, $\mathbf A(x,y)=(-\phi y,0)$ where $\phi=B$ is the flux per unit cell and the substitution is given by \begin{align} c^\dagger_{(i+1,j)} c_{(i,j)} &\mapsto e^{-i\phi j}c^\dagger_{(i+1,j)} c_{(i,j)}\\ c^\dagger_{(i,j+1)} c_{(i,j)} &\mapsto c^\dagger_{(i,j+1)} c_{(i,j)} \end{align} The momentum space Hamiltonian at finite flux can be obtained by Fourier transforming Eq.~(\ref{eqn:Ham}) using the convention in Eqs.~(\ref{eqn:FT_qby1_cd}) and (\ref{eqn:FT_qby1_c}) when the flux is rational $\phi=2\pi p/q$. In Fig.~\ref{fig:Hof}, we numerically compute the Hofstadter spectrum for this model. \subsection{\label{sec:Model_analysis} Symmetry analysis} The model has a $2\pi$ periodicity in $\phi$, the flux per unit cell. At zero flux and $\pi$-flux the system is invariant under the symmetry group $p4/m'mm$, while at other fluxes the symmetry group is $p4$. Using the formalism developed in this manuscript, we apply TQC in a magnetic field to compute the symmetry indicators at $\pi$ flux. Indicators at other fluxes are discussed in Appendix~\ref{app:symmetry}. Ultimately, we will show that the symmetry indicator at $\pi$ flux corresponds to an absence of corner states, from which we deduce there must be a gap closing phase transition at a critical flux between zero and $\pi$. At $\pi$-flux, the magnetic unit cell is $2$-by-$2$ and the Brillouin zone is $[-\pi/2,\pi/2]\times[-\pi/2,\pi/2]$. 
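For reference, the Hofstadter spectrum in Fig.~\ref{fig:Hof} can be reproduced qualitatively by directly diagonalizing Eq.~(\ref{eqn:Ham}) after the Peierls substitution. The following minimal sketch is illustrative only (it is not the code used for Fig.~\ref{fig:Hof}): it uses a much smaller open lattice than the $L_x=200$, $L_y=10$ geometry quoted in the caption, and all variable names are our own.

\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# Pauli matrices; Gamma matrices as defined in the text (tau otimes sigma).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1, -1]).astype(complex)
I2 = np.eye(2, dtype=complex)
G1, G2 = np.kron(sy, sy), np.kron(sy, sx)   # tau_y sigma_y, tau_y sigma_x
G3, G4 = np.kron(sz, I2), np.kron(sx, I2)   # tau_z, tau_x

lam, gam = 1.0, 0.5
t1 = t2 = lam / np.sqrt(2)
u, vm = lam, np.sqrt(2) * gam
tx = (t1 * G3 + t2 * G4) / 2 - 1j * u * G1 / 2
ty = (t1 * G3 - t2 * G4) / 2 - 1j * u * G2 / 2

Lx, Ly = 16, 8   # small open lattice (illustrative)
idx = lambda i, j: 4 * (i * Ly + j)

def spectrum(phi):
    H = np.zeros((4 * Lx * Ly, 4 * Lx * Ly), dtype=complex)
    for i in range(Lx):
        for j in range(Ly):
            a = idx(i, j)
            H[a:a+4, a:a+4] += vm * G3
            if i + 1 < Lx:   # x-hopping, Peierls phase exp(-i phi j), Landau gauge
                b = idx(i + 1, j)
                H[b:b+4, a:a+4] += np.exp(-1j * phi * j) * tx
                H[a:a+4, b:b+4] += np.exp(+1j * phi * j) * tx.conj().T
            if j + 1 < Ly:   # y-hopping, no Peierls phase in this gauge
                b = idx(i, j + 1)
                H[b:b+4, a:a+4] += ty
                H[a:a+4, b:b+4] += ty.conj().T
    return np.linalg.eigvalsh(H)

for phi in np.linspace(0, 2 * np.pi, 41):
    E = spectrum(phi)
    plt.plot(np.full_like(E, phi), E, ',k')
plt.xlabel(r'$\phi$'); plt.ylabel(r'$E$'); plt.show()
\end{verbatim}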
According to Sec.~\ref{sec:mag_sym_3_momentum}, the four-fold rotation symmetry operators at $\Gamma=(0,0)$ and $M=(\pi/2,\pi/2)$ are \begin{align} D(C_{4z},\Gamma)&= \begin{pmatrix} 1&&&\\ &&1&\\ &1&&\\ &&&-1 \end{pmatrix}\otimes C_{4z}, \\ D(C_{4z},M)&= \begin{pmatrix} 1&&&\\ &&-1&\\ &1&&\\ &&&1 \end{pmatrix}\otimes C_{4z}, \end{align} {where the first matrix acts on the sublattice basis,} and the $C_{4z}$ matrix acts on the orbital basis as defined in Eq.~(\ref{eqn:C4z}). The magnetic translation symmetries at $\mathbf k$ are implemented by \begin{align} D(T(\hat{\mathbf x}),\mathbf k)&=e^{ik_x} \begin{pmatrix} &1&&\\ 1&&&\\ &&&1\\ &&1& \end{pmatrix} \otimes \tau_0\sigma_0 \\ D(T(\hat{\mathbf y}),\mathbf k)&=e^{ik_y} \begin{pmatrix} &&1&\\ &&&-1\\ 1&&&\\ &-1&& \end{pmatrix} \otimes \tau_0\sigma_0, \end{align} where $\tau_0$ and $\sigma_0$ are identity matrices. The irreps of the bands are listed in Table~\ref{tab:BRatpi}. Each band is four-fold degenerate because ${\cal TI}^2=-1$ and $\{T(\hat{\mathbf x}),T(\hat{\mathbf y})\}=0$, as explained in Appendix~\ref{app:irreps}. Using the computed irreps in Table~\ref{tab:BRatpi}, the symmetry indicators are listed in Table~\ref{tab:indatpi}. \begin{table} \centering \begin{tabular}{c|c|c|c|c} band index& $1$ & $2$ &$3$ &$4$ \\ \hline irrep at $\Gamma$ &$\Gamma_2\Gamma_3$ &$\Gamma_1\Gamma_4$ &$\Gamma_2\Gamma_3$ &$\Gamma_1\Gamma_4$ \\ irrep at $X$ &$X_1X_2$ &$X_1X_2$ &$X_1X_2$ &$X_1X_2$ \\ irrep at $M$ &$M_2M_4$ &$M_3M_3$ &$M_1M_1$ &$M_2M_4$ \end{tabular} \caption{Band representation of the four four-fold degenerate bands at $\pi$-flux. The ordering of band index is from lowest energy to highest energy, i.e., half-filling corresponds to filling bands 1 and 2. Each irrep $\Pi_i\Pi_j$ is four-dimensional and defined in Table~\ref{tab:p4TI}. } \label{tab:BRatpi} \end{table} \begin{table} \begin{tabular}{c|c|c|c|c} band index& $n=1,4$ & $n=2,3$ & $1\oplus2$ &$3\oplus4$ \\ \hline $\mathbb Z_2$ phase (Eq.~(\ref{eqn:index1p4TI})) &$1$ &$1$ &$0$ &$0$\\ \hline $e_{4a} \mod 8$ &&&$2$&$2$\\ $e_{4b} \mod 8$ &&&$0$&$0$\\ $e_{8c} \mod 4$ &&&$0$&$0$ \\ \hline $e_{1a'} \mod 4$ &$0$&$2$&$2$&$2$\\ $e_{1b'} \mod 4$ &$0$&$2$&$2$&$2$\\ $e_{2c'} \mod 2$ &$2$&$0$&$2$&$2$ \\ \end{tabular} \caption{Symmetry indicators at $\pi$-flux. {The second and third columns correspond to each four-fold degenerate band individually, while the last two columns correspond to sums of bands.} {The second row shows} that the strong topological index in Eq.~(\ref{eqn:index1p4TI}) is $1\mod 2$ for each band, while for two occupied/empty bands the index is $0\mod 2$. Since symmetric and exponentially localized Wannier functions exist for the two occupied or two empty bands, {in the next three rows, $e_\mathbf{q}$ indicates the number of Wannier functions centered at the Wyckoff position $\mathbf{q}$}, computed using Eqs.~(\ref{eqn:4a}) -- (\ref{eqn:8c}) in Appendix~\ref{app:TQC}. If the sublattice translation symmetries within the magnetic unit cell are broken, the last three rows give the number of Wannier functions centered at the Wyckoff positions of the lower-symmetry group, which are shown in Fig.~\ref{fig:WCWC}. } \label{tab:indatpi} \end{table} \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth]{WCvsWC.png} \caption{(a) Wyckoff position $4a$ in $2$-by-$2$ unit cell with space group $G$ splits into (b) Wyckoff positions $1a'$, $1b'$, $2c'$ in the same unit cell with symmetry group $G/(\mathbb T/ \mathbb T_M)$, i.e., no sublattice translations.
} \label{fig:WCWC} \end{figure} Below the gap at half filling, the two occupied bands together ($1\oplus2$ in Table~\ref{tab:indatpi}) are topologically trivial. They admit symmetric exponentially localized Wannier functions located at the $4a$ Wyckoff position. Since the atoms are also at $4a$ Wyckoff position, there is no corner charge. This analysis agrees with the Hofstadter spectrum shown in Fig.~\ref{fig:Hof}. Open boundary conditions break the lattice translation symmetries and, in particular, break the sublattice translation symmetries within the magnetic unit cell. Once the sublattice translation symmetries are broken, the little co-group (Eq.~(\ref{eq:deftildeGK})) is identical to the non-magnetic case. Thus, the crystalline symmetry protected phases with open boundary condition should be labelled by the usual symmetry indicators in zero flux but with respect to the enlarged magnetic unit cell; these indicators are computed in Ref.~\onlinecite{fang2021filling}. The results are shown in the lower half of Table~\ref{tab:indatpi}. In this reduced symmetry group, the magnetic $4a$ Wyckoff position splits into positions: $1a'=(0,0)$, $1b'=(1,1)$, $2c'=(1,0),(0,1)$, as shown in Fig.~\ref{fig:WCWC}. \par \subsection{Corner states} The spectrum with periodic boundary condition has a gap at half filling at $\phi=0$ and $\phi=\pi$. This gap closes at some $\phi^*$ between $0$ and $\pi$ as shown in Fig.~\ref{fig:Hof}. For the spectrum with open boundary condition, there are higher-order topological states when $-\phi^*<\phi<\phi^*$ that are corner localized. Due to the chiral symmetry (\ref{eq:chiral}), they are at zero energy in this model. The corner states can be understood by the non-zero quadrupole moment~\cite{benalcazar2017electric} or the non-zero filling anomaly~\cite{benalcazar2019quantization,schindler2019fractional,wieder2020strong,fang2021filling}. \par The corner states have four-fold degeneracy, consistent with the non-magnetic symmetry analysis in Refs.~\onlinecite{fang2021filling, fang2021classification}. The corner states with open boundary condition always come in a group of $d$ states. This degeneracy $d$ is determined by the point group of the finite system. Let $w$ be the general Wyckoff position of the point group, with multiplicity $n_w$. Denote the site symmetry group $G_{w}$. It has only one irrep, $\rho(G_{w})$. The degeneracy of corner states is~\cite{fang2021classification} \begin{equation} \label{eqn:deg} d=\text{dim}(\rho(G_{w}))\times n_w \end{equation} where dim$(\rho(G_{w}))=2$ for spinful systems with time-reversal symmetry that squares to $-1$, otherwise dim$(\rho(G_{w}))=1$. In the present case, at zero flux the system is invariant under the symmetry group $p4/m'mm$, while at any small flux the symmetry group reduces to $p4$. For $p4/m'mm$ and $p4$, the point groups of the finite size system are $4/m'mm$ and $4$ respectively. Each has a general Wyckoff position $w$ with $n_w=4$ and $\text{dim}(\rho(G_w))=1$; thus, Eq.~(\ref{eqn:deg}) yields a degeneracy of $4$~\cite{fang2021classification}. Since the degeneracy of corner states is the same for zero flux and finite flux, the corner states do not split when the magnetic flux is introduced (The chiral symmetry in this model pins the corner states to zero energy, but even in the absence of chiral symmetry, the nonzero filling anomaly will remain the same for $0\leq\phi<\phi^*$.) 
We have also shown from the symmetry indicators that at half filling and $\pi$ flux, the system is in the trivial phase, without corner states. Thus, the corner states must terminate at $\phi=\phi^*$ by either a bulk or edge gap closing. There is indeed a bulk gap closing at flux $\phi^*$ as Fig.~\ref{fig:Hof} shows. This is consistent with the Wannier centers of the occupied bands, which can be deduced from the symmetry indicators: the Wannier centers are at the $4b$ Wyckoff position at zero flux and the $4a$ Wyckoff position at $\pi$ flux (see Table~\ref{tab:indatpi} and Appendix~\ref{app:symmetry}). Symmetries prevent four Wannier functions from moving continuously between the $4a$ and $4b$ positions~\cite{fang2021filling,song2017d}. A discontinuous jump of the Wannier centers implies the bulk gap closes between zero and $\pi$ flux. \par In Appendix~\ref{app:symmetry} we compute the symmetry indicator at intermediate flux $\phi=2\pi/5<\phi^*$ and $\phi=4\pi/5>\phi^*$ to verify that symmetry indicators are consistent with the presence and absence of corner states between zero and $\pi$. In Appendix~\ref{app:Wilson} we show that the presence and absence of corner states also agree with the nested Wilson loop~\cite{benalcazar2017electric}. \par \section{\label{sec:conclusion} Conclusion} In conclusion, we derived a general framework to apply TQC and the theory of symmetry indicators to crystalline systems at rational flux per unit cell. Applying our results to some simple examples at $\pi$ flux revealed new symmetry indicators that did not appear at zero flux. Finally, the symmetry indicators enable us to study a quadrupole insulator at finite field, which reveals a gap-closing topological-to-trivial phase transition as a function of magnetic field. Observing this phase transition could be particularly promising in moir\'e systems where higher flux is attainable for reasonable magnetic fields. While preparing our work, we became aware of a related study~\cite{herzog2022hofstadter}, which gives criteria for when such a bulk gap closing at finite flux can be predicted from the band structure at zero flux. The two bulk gap closings between zero and $\Phi = 2\pi$ flux of our model in Sec.~\ref{sec:Model} are indicated by the real-space invariant in Ref.~\onlinecite{herzog2022hofstadter}. \par We note that the Zeeman term is neglected in this manuscript. When the Zeeman term is present, the periodicity in flux is broken. Thus there is no magnetic time-reversal symmetry, nor are there mirror symmetries that flip the magnetic flux. At large magnetic fields, where the Zeeman term dominates, the two-dimensional system must be in the trivial atomic limit, with the Wannier centers located at the atomic positions. {Our work is also restricted to a spatially constant magnetic field. It would be interesting to extend our results to a spatially varying periodic magnetic field that maintains a commensurate flux per unit cell. This more general theory might be relevant to magnetically ordered crystals.} \par As a final note, we draw a connection between our results and the theory of phase space quantization, where one seeks a symmetric and exponentially localized Wannier basis that can continuously reduce to points in the classical phase space by setting the Planck constant $h\rightarrow 0$~\cite{von2018mathematical}.
However, such a basis can never be found due to the Balian-Low theorem, which forbids the existence of an exponentially localized, translationally symmetric basis for a single particle~\cite{benedetto1994differentiation}. The magnetic Wannier functions in two dimensions share a translation group structure similar to that of the one-dimensional quantum phase space, and the non-vanishing Chern number for any single magnetic band also forbids Wannierization~\cite{zak1997balian}, as we explain in Appendix~\ref{app:Wannier}. This raises an interesting open question: since in two-dimensional magnetic systems it is possible to find a Wannier basis for a group of bands, we conjecture that the continuous quantization of phase space may be realized by constructing Wannier functions for groups of particles. \section{Acknowledgements} We acknowledge useful conversations with Andrei Bernevig, Aaron Dunbrack, Sayed Ali Akbar Ghorashi, Jonah Herzog-Arbeitman, and Oskar Vafek. We thank Jonah, Andrei, and their collaborators for sharing their unpublished manuscript. Our manuscript is based upon work supported by the National Science Foundation under Grant No. DMR-1942447. The work was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. J.C. also acknowledges the support of the Flatiron Institute, a division of the Simons Foundation. \section{\label{sec:intro} Introduction} Threading magnetic flux through a two-dimensional crystal changes the single-particle band spectrum into a Hofstadter butterfly spectrum that exhibits a fractal structure with an infinitude of minigaps~\cite{hofstadter1976energy}. The Hofstadter butterfly is the lattice counterpart of Landau levels in the continuum. While the Landau levels of a continuum model are often easier to compute than the Hofstadter butterfly of the corresponding lattice model, diagnosing band topology in the presence of magnetic flux requires the lattice because topological invariants are defined over the entire Brillouin zone. The topology of Hofstadter bands has been a subject of intense recent study~\cite{herzog2020hofstadter,zuo2021topological,asaga2020boundary,otaki2019higher}. In the absence of magnetic flux, the topology of a band structure can be classified by the theory of topological quantum chemistry (TQC)~\cite{bradlyn2017topological,vergniory2017graph,elcoro2017double,cano2018building,bradlyn2018band,vergniory2019complete,cano2021band,elcoro2021magnetic}. A practical diagnosis comes from studying the space group representations of bands at high symmetry momenta, which are known as symmetry indicators~\cite{po2017symmetry}. However, in its present form, TQC cannot be directly applied to systems in a magnetic field because it does not account for the Aharonov-Bohm phase. \par \begin{figure} \centering \includegraphics[width=\linewidth]{workflow7} \caption{Framework for TQC in the presence of magnetic flux. A representation of the site-symmetry group induces a band representation of the entire space group, which is subduced to a representation of the little co-group at each high-symmetry momentum, i.e., the symmetry indicator. The new element introduced by the magnetic field is that the symmetry operators form a projective representation of the space group.
The red font indicates differences between TQC with and without magnetic flux.} \label{fig:workflow} \end{figure} In the present manuscript we derive a framework to generalize TQC and the classification of symmetry indicators to band structures in the presence of a rational magnetic flux per unit cell. The workflow is shown in Fig.~\ref{fig:workflow}. We find that the two key ingredients in the theory of TQC -- the irreducible representations of bands at high symmetry points in momentum space and the induced representations of localized orbitals in real space -- are modified from their non-magnetic counterparts due to the presence of magnetic flux. The essential reason for this modification is that the commutation relations between crystal symmetries change in the presence of magnetic flux due to the Aharonov-Bohm phase. As a result, the symmetry operators form non-trivial projective representations of the space group. The earliest example of this is Zak's magnetic translation group~\cite{zak1964magnetic,zak1964magnetic2}. Our theory builds on Zak's theory by including crystalline symmetries. Our theory of TQC in commensurate magnetic flux is distinct from ``magnetic TQC''~\cite{elcoro2021magnetic}. While magnetic TQC classifies topological band structures according to the representations of magnetic space groups, which describe the symmetry of magnetically ordered crystals, magnetic TQC does not yet accommodate magnetic flux through each unit cell, as it deals with zero flux configurations of orbitals. Our manuscript proceeds as follows. In Sec.~\ref{sec:mag_sym}, we derive the space group symmetry operators at rational flux in both real and momentum space. The results are at the crux of the theory of TQC in a magnetic field that we derive in Sec.~\ref{sec:TQC}. We then use the theory to compute symmetry indicators for three magnetic layer groups, $p2$, $p4$ and $p4/m'$, at $\pi$ flux per unit cell. The strong indicator in $p2$ recovers an earlier formula for the Chern number in Ref.~\onlinecite{matsugatani2018universal}, which is a stronger version of the formula in Ref.~\onlinecite{fang2012bulk}. In group $p4/m'$, our theory gives rise to a new strong $\mathbb Z_2$ indicator, which is simply the filling per unit cell mod $2$. This $\mathbb Z_2$ non-trivial phase is protected by translation symmetry: the non-trivial phase does not permit exponentially localized and symmetric Wannier functions, but such Wannier functions exist when translation symmetries within the magnetic unit cell are broken. {We also find a new indicator for $p4$.} \par In Sec.~\ref{sec:Model}, we study a tight binding model introduced in Ref.~\onlinecite{wieder2020strong} that realizes an obstructed atomic limit (OAL) on the square lattice at zero flux. The Hofstadter butterfly spectrum shows that the system undergoes a gap-closing phase transition at finite flux after which the corner states that were present in the OAL phase disappear. By applying our theory of TQC in a magnetic field to this model, we show that the gap closing corresponds to a phase transition from an OAL to a trivial phase that can be diagnosed by symmetry indicators. \par \section{\label{sec:mag_sym} Magnetic symmetries} In quantum mechanics the coupling of a magnetic field to a charged particle is described by replacing the momentum $\mathbf P$ of the particle with the canonical momentum $\mathbf p=\mathbf P+\mathbf A$ in the Hamiltonian (without loss of generality, we have used natural units and assumed positive unit charge). 
To account for the Aharonov-Bohm phase, terms in the single-particle tight binding Hamiltonian are modified by the usual Peierls substitution: \begin{equation} c^{\dagger}_{\mathbf r_2} c_{\mathbf r_1} \mapsto e^{i\int_{\mathbf r_1}^{\mathbf r_2} \mathbf A(\mathbf r)\cdot d\mathbf r}c^{\dagger}_{\mathbf r_2} c_{\mathbf r_1}, \label{eq:peierls} \end{equation} where the path of the integral is the straight line connecting $\mathbf{r}_1$ and $\mathbf{r}_2$. However, if the zero-field Hamiltonian is invariant under a crystal symmetry $\hat{g}: c_{\mathbf{r}} \mapsto c_{\hat{g}\mathbf{r}}$, the Hamiltonian modified by the Peierls substitution in Eq.~(\ref{eq:peierls}) is not necessarily invariant under $\hat{g}$, even if the physical system is unchanged by the symmetry. Consequently, the operator $\hat{g}$ must be modified from its zero-field form by a gauge transformation that accounts for the Aharonov-Bohm phase. Specifically, the magnetic field requires $\hat{g}$ be replaced by $g\equiv \tilde G_g \hat g$, where $\tilde{G}_g=e^{i\sum_{\mathbf x}\lambda_g(\mathbf x)c^\dagger_{\mathbf x} c_{\mathbf x}}$ is a gauge transformation that acts on the electron annihilation operators by~\cite{herzog2020hofstadter}: \begin{align} \tilde{G}_g c_{\mathbf{r}} \tilde{G}_g^{-1}&= e^{-i\lambda_g(\mathbf{r})} c_{\mathbf{r}}, \label{eq:defG1}\\ \tilde{G}_g c^\dagger_{\mathbf{r}} \tilde{G}_g^{-1}&= e^{i\lambda_g(\mathbf{r})} c^\dagger_{\mathbf{r}}, \label{eq:defG2} \end{align} where $\lambda_g$ is a scalar function defined for each symmetry $\hat{g}$ that we will derive momentarily. Acting on terms in the Hamiltonian in the form of Eq.~(\ref{eq:peierls}), $\tilde{G}_g$ has the effect of mapping $\mathbf{A}(\mathbf{r}) \mapsto \mathbf{A}(\mathbf{r}) + \nabla \lambda_g(\mathbf{r})$. Similar gauge transformations were introduced by Zak for the magnetic translation operators in Refs.~\cite{zak1964magnetic,zak1964magnetic2}. More recently, the magnetic operators for rotations about the origin and for time-reversal symmetry were considered in Refs.~\cite{de2011exponentially,matsugatani2018universal,herzog2020hofstadter}. Here, we develop a general theory for any symmetry group in the presence of a magnetic field, thereby extending previous works to include more general rotations and glide reflection symmetries. Doing so allows us to apply the theory of symmetry indicators to diagnose topological phases in the presence of a magnetic field. 
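To make the Peierls substitution of Eq.~(\ref{eq:peierls}) concrete, the following minimal Python sketch builds a plain nearest-neighbor hopping model on a finite square lattice in the Landau gauge $\mathbf A=(-By,0)$ used later in the text. The function name, lattice size, and hopping amplitude are our own illustrative choices and are not part of any model studied in this paper; the only gauge-invariant content is the Aharonov-Bohm phase accumulated around each plaquette.
\begin{verbatim}
import numpy as np

def peierls_hamiltonian(Lx, Ly, phi, t=1.0):
    """Nearest-neighbor hopping on an Lx-by-Ly square lattice (open boundaries)
    with flux phi per plaquette in the Landau gauge A = (-phi*y, 0):
    a hop in +x from row y acquires the phase exp(-i*phi*y), cf. Eq. (peierls)."""
    def idx(x, y):
        return x * Ly + y
    H = np.zeros((Lx * Ly, Lx * Ly), dtype=complex)
    for x in range(Lx):
        for y in range(Ly):
            if x + 1 < Lx:
                H[idx(x + 1, y), idx(x, y)] = -t * np.exp(-1j * phi * y)
            if y + 1 < Ly:
                H[idx(x, y + 1), idx(x, y)] = -t
    return H + H.conj().T            # add the Hermitian-conjugate (reverse) hops

phi = 2 * np.pi / 5
H = peierls_hamiltonian(4, 4, phi)
idx = lambda x, y: x * 4 + y
# Product of the four hopping terms around the plaquette (0,0)-(1,0)-(1,1)-(0,1):
# the amplitudes (-t)^4 = 1 drop out, leaving only the Aharonov-Bohm phase.
loop = H[idx(1, 0), idx(0, 0)] * H[idx(1, 1), idx(1, 0)] \
     * H[idx(0, 1), idx(1, 1)] * H[idx(0, 0), idx(0, 1)]
print(np.allclose(loop, np.exp(1j * phi)))   # True
\end{verbatim}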
\par We now derive the gauge transformation $\lambda_g$ in Eq.~(\ref{eq:defG1}): we require that if a single-particle Hamiltonian in zero field is invariant under a symmetry $\hat{g}$, then in the presence of a magnetic field that preserves $\hat{g}$, the Hamiltonian modified by the Peierls substitution in Eq.~(\ref{eq:peierls}) is invariant under the combined symmetry operation $g \equiv \tilde{G}_g \hat{g}$, i.e., we require \begin{equation} g : e^{i\int_{\mathbf r_1}^{\mathbf r_2} \mathbf A(\mathbf r)\cdot d\mathbf r}c^{\dagger}_{\mathbf r_2} c_{\mathbf r_1} \mapsto e^{i\int_{g\mathbf r_1}^{g\mathbf r_2} \mathbf A(\mathbf r')\cdot d\mathbf r'}c^{\dagger}_{g\mathbf r_2} c_{g\mathbf r_1} \label{eqn:covariance} \end{equation} Acting on the left-hand-side by $g= \tilde{G}_g \hat{g}$, using the definition of $\tilde{G}_g$ in Eqs.~(\ref{eq:defG1}) and (\ref{eq:defG2}), and equating with the right-hand-side yields \begin{equation} e^{i\int_{\mathbf r_1}^{\mathbf r_2} \mathbf A(\mathbf r)\cdot d\mathbf r +i\int_{g\mathbf{r}_1}^{g\mathbf{r}_2} \nabla \lambda(\mathbf{r}')\cdot d\mathbf{r}' } = e^{i\int_{g\mathbf r_1}^{g\mathbf r_2} \mathbf A(\mathbf r')\cdot d\mathbf r'} \label{eqn:covariance2} \end{equation} A few lines of algebra (detailed in Appendix~\ref{app:Eqlambda}) show that Eq.~(\ref{eqn:covariance2}) is satisfied when $\lambda_g(\mathbf r)$ satisfies \begin{equation} \label{eqn:lambda} \nabla \lambda_g(\mathbf r) = \mathbf A(\mathbf r)-R_g \mathbf A (g^{-1} \mathbf r), \end{equation} where $R_g$ is the point group part of $g$. Eq.~(\ref{eqn:lambda}) determines each $\lambda_g$ up to a constant. We choose the constant such that for translation~\cite{herzog2020hofstadter} \begin{equation} \label{eqn:lambdaT} \lambda_{T(\mathbf a)}(\mathbf r) = \int_{\mathbf r-\mathbf a}^{\mathbf r} \mathbf A(\mathbf r')\cdot d\mathbf r' + \mathbf B \cdot \mathbf a \times \mathbf r \end{equation} and that a $2\pi$ rotation is implemented by the identity matrix. This choice of constants ensures that the commutation relations between translation and rotations about the origin are the same as at zero field as we show in the Appendix~\ref{app:rotation}. This choice of gauge is fixed throughout this paper; later when we refer to a gauge choice, we are referring to the gauge of the vector potential. So far, we have only considered lattice degrees of freedom; orbital and spin degrees of freedom can be included by an extra unitary transformation in the action of $\hat{g}$, {which does not change $\lambda_g$}. We will include these degrees of freedom in later sections. \par Eq.~(\ref{eqn:lambda}), which serves as the definition of $\lambda_g$, is the first key result of this manuscript. Combining it with the spatial action of the symmetry yields the explicit form of the magnetic symmetry operator: \begin{align} \label{eqn:Gg1} g&=e^{i\sum_{\mathbf x'}\lambda_g (\mathbf x')c^\dagger_{\mathbf x'}c_{\mathbf x'}} \sum_\mathbf{x} c^\dagger_{\hat g \mathbf x}c_{\mathbf x} \\ &=\sum_\mathbf{x} e^{i\lambda_g (\hat g \mathbf x)} c^\dagger_{\hat g\mathbf x}c_{\mathbf x} , \label{eqn:Gg2} \end{align} where $\lambda_g$ is determined by Eq.~(\ref{eqn:lambda}). The second equality holds when $g$ acts on the single-particle Hilbert space. We now explain why changing $\lambda_g$ up to a constant does not change the representation of the magnetic symmetry operators defined in Eq.~(\ref{eqn:Gg2}). These operators furnish projective representations of the space group. 
A projective representation $\rho$ of a group satisfies the following multiplication rule \begin{equation} \rho(h_1)\rho(h_2)=\omega(h_1,h_2) \rho(h_1h_2), \end{equation} where $h_1$, $h_2$ are group elements and $\omega(h_1,h_2)$ is called the 2-cocycle. If $\omega(h_1,h_2)\equiv 1$, then $\rho$ is an ordinary linear representation. In general, the magnetic symmetry operators in Eq.~(\ref{eqn:Gg2}) will have non-trivial 2-cocycles, as we show in the next sections. The $U(1)$ gauge freedom in Eq.~(\ref{eqn:lambda}) corresponding to the gauge transformation $\lambda_g \mapsto \lambda_g+C_g$, $g \mapsto e^{iC_g}g$ leaves the representation in the same group cohomology class, i.e., the transformed projective representation is equivalent to the previous one. Essential properties of projective representations are presented in Appendix~\ref{app:projective}. \par In the next two subsections, we apply this formalism to two examples, first rederiving Zak's magnetic translation group and then reviewing the symmetries of the square lattice in a magnetic field. \subsection{\label{sec:mag_sym_1_zak} Zak's magnetic translation group} In Refs.~\onlinecite{zak1964magnetic} and~\onlinecite{zak1964magnetic2} Zak introduced the continuous magnetic translation symmetries. We reproduce Zak's result by taking the continuum limit {of Eq.~(\ref{eqn:Gg2})}. \par Consider a two-dimensional infinite plane without a lattice and denote operators that translate by ${\mathbf \Delta}=\Delta_x \hat{\mathbf{x}}+\Delta_y \hat{\mathbf{y}}$ by $\hat T(\mathbf \Delta)\equiv \hat T(\Delta_x,\Delta_y)$, where $\hat{\mathbf{x}}$ and $\hat{\mathbf{y}}$ denote the unit vectors. We first work in the symmetric gauge: $\mathbf{A}(\mathbf r)=\frac B2(-r_y,r_x)$. Then from Eq.~(\ref{eqn:lambda}): \begin{align} \label{eq:lambdaT_symmetric} \lambda_{T({\mathbf \Delta})}(\mathbf r)= \frac B2(\Delta_x r_y-\Delta_y r_x) \end{align} For continuous translations, we replace the sum $\sum_\mathbf{x} c^\dagger_{\hat T(\mathbf \Delta) \mathbf x}c_{\mathbf x}$ in Eq.~(\ref{eqn:Gg1}) with $e^{-ip_x\Delta_x-ip_y\Delta_y}$. Then the magnetic translation by vector $\mathbf \Delta$ is \begin{align} T(\mathbf \Delta)&=e^{i(\frac12 B\Delta_x(r_y+\Delta_y)-\frac12 B\Delta_y(r_x+\Delta_x))}e^{-i(p_x\Delta_x+p_y\Delta_y)} \nonumber \\ &=e^{-i((p_x-\frac12 Br_y)\Delta_x+(p_y+\frac12 Br_x)\Delta_y)}, \end{align} where we have used the Baker–Campbell–Hausdorff formula. Therefore, the generators of the magnetic translations in the $\hat{\mathbf x}$ and $\hat{\mathbf y}$ directions are \begin{align} K_x&=p_x-\frac12 Br_y \nonumber \\ K_y&=p_y+\frac12 Br_x, \end{align} which are exactly Zak's definitions from his 1964 paper~\cite{zak1964magnetic}.\par In the remainder of the manuscript it will be easier to use the Landau gauge $\mathbf A(\mathbf r)=(-B r_y,0)$. Repeating the calculation of $\lambda_g$ in the Landau gauge yields \begin{align} \lambda_{T(\mathbf \Delta)}(\mathbf r)=-B\Delta_y r_x \label{eqn:lambdaT_Landau} \end{align} One important property of the magnetic translation operators is the gauge-invariant noncommutativity: \begin{equation} T(\Delta_x \hat{\mathbf{x}}) T(\Delta_y \hat{\mathbf{y}})= T(\Delta_y \hat{\mathbf{y}}) T(\Delta_x \hat{\mathbf{x}}) e^{iB\Delta_x\Delta_y} \label{eqn:TxTyDelta}, \end{equation} which reproduces the Aharonov-Bohm phase. 
More generally, for two translations $\mathbf a_1$ and $\mathbf a_2$, the gauge-invariant multiplication equation is~\cite{zak1964magnetic} \begin{equation} \label{eqn:Ta1Ta2} T(\mathbf a_1) T(\mathbf a_2) = T(\mathbf a_1+\mathbf a_2) e^{\frac i2 \mathbf B \cdot (\mathbf a_1 \times \mathbf a_2)} \end{equation} The gauge-invariant phase term $e^{\frac i2 \mathbf B \cdot (\mathbf a_1 \times \mathbf a_2)}$ is the 2-cocycle of magnetic translations, which shows that the magnetic translation operators form a non-trivial projective representation of the translation group. \subsection{\label{sec:mag_sym_2_landau} Magnetic symmetries of the square lattice} \begin{table*}[t] \begin{tabular}{c|c|c|c|c|c|c|c} \hline $g$ &$T(\Delta_x\hat{\mathbf x})$ &$T(\Delta_y\hat{\mathbf y})$& $C_2(\bar x, \bar y)$ &$C_4(\bar x, \bar y)$ &$I(\bar x, \bar y)$ &$Um_{x}(\bar x)$ &$Um_{y}(\bar y)$\\ \hline &&&&&&&\\[-0.5em] $\hat{g}=\{R_g|\tau_g\}$ &$\{0|(\Delta_x,0)\}$ &$\{0|(0,\Delta_y)\}$ &$\{\hat{C}_2|(2\bar x,2\bar y)\}$ &$\{\hat{C}_4|(\bar x+\bar y,\bar y-\bar x)\}$ &$\{\hat{I}|(2\bar x,2\bar y)\}$ &$U\{\hat{m}_{x}|(2\bar x,0)\} $ &$U\{\hat{m}_{y}|(0,2\bar y)\} $\\ \hline &&&&&&&\\[-0.5em] $\lambda_g(x,y)$&0&$-B\Delta_y x$ &$-2B\bar y (x-\bar x)$ &$-B(x-\bar x)(y-\bar y)$ &$-2B\bar y (x-\bar x)$ &$0$ &$-2B\bar y x$\\ &&&&$+B\bar y((y-{\bar y})-(x-\bar x))$&&\\ \hline \end{tabular} \caption{The gauge transformation $\lambda_g(x,y)$ for symmetries of the square lattice in Landau gauge. For each symmetry $g$ in the first row, the second row lists the symmetry in the notation $\lbrace R_g | \tau_g \rbrace$, where $\hat{g}: \mathbf{r} \mapsto R_g \mathbf{r} + \tau_g$. The third row provides $\lambda_g$ from Eq.~(\ref{eqn:lambda}).} \label{tab:lambda} \end{table*} As a second example, we consider discrete symmetries of the two-dimensional square lattice using the Landau gauge $\mathbf A(\mathbf r)=(-B r_y,0)$. When $B=0$, the square lattice is invariant under the layer group $p4/mmm$, which is generated by a four-fold rotation symmetry and the mirrors $m_x$ and $m_z$. Without a magnetic field, the system is also invariant under time-reversal symmetry, $\cal T$. When $B\neq 0$, only the symmetries that leave the magnetic field invariant (four-fold rotations and $m_z$) remain; the resulting layer group is $p4/m$. To determine how these symmetries act on the electron creation/annihilation operators, one must compute the gauge transformation $\lambda_g$ from Eq.~(\ref{eqn:lambda}). We summarize the results in Table~\ref{tab:lambda}. Notice that $\lambda_g$ depends on the rotation or inversion center; thus, it is necessary to introduce the notation \begin{equation} \label{eqn:defCnxy} C_n(\bar{x},\bar{y}) \equiv T(\bar{x},\bar{y})C_n T(-\bar{x},-\bar{y}) \end{equation} to denote an $n$-fold rotation about the point $(\bar x,\bar y)$; we use $C_n \equiv C_n(\bar{x}=0,\bar{y}=0)$ to denote a rotation about the origin. We adopt analogous notation for inversions and reflections about different points and planes. The symmetries $m_x$, $m_{(110)}$ and $\mathcal{T}$ flip the magnetic field and thus are not symmetries at finite $B$. However, the product of these symmetries with a magnetic flux shifting operator can leave the system invariant at special values of flux, as we now describe. A lattice Hamiltonian coupled to a magnetic field is periodic in $B$: the period corresponds to the minimal magnetic field such that every possible closed hopping path encloses an integer multiple of $2\pi$ flux. 
Let $\phi$ denote the magnetic flux per unit cell and $\Phi = 2\pi n$ its periodicity, where $n$ is an integer. Following Ref.~\cite{herzog2020hofstadter}, we define the unitary matrix $U$ that shifts $\phi \mapsto \phi + \Phi$ by \begin{align} {U}&=e^{i\sum_{\mathbf x'}\lambda_{U} (\mathbf x')c^\dagger_{\mathbf x'}c_{\mathbf x'}} \\ \lambda_{U} (\mathbf r) &= \int ^{\mathbf r}_{\mathbf r_0} \widetilde{ \mathbf A}(\mathbf r) \cdot d\mathbf r, \end{align} where $\mathbf r_0$ is a reference lattice point and $\tilde{\mathbf{A}}$ is the magnetic vector potential corresponding to $\Phi$ flux, i.e., $\nabla \times \widetilde{\mathbf A}=\Phi$. Notice that for any symmetry $g$ that flips $\phi \mapsto -\phi$, the product $Ug$ is a symmetry in the special case where $\phi = \Phi/2$. In the case of the square lattice, the products $Um_x$, $Um_y$ and $U{\cal T} $ are recovered as symmetries of the system at the special value of $\phi = \Phi/2$. We list the gauge transformations for $Um_x$ and $Um_y$ at $\phi = \Phi/2$ in Table~\ref{tab:lambda}. In the special case of a square lattice and Landau gauge, $\lambda_U= -\Phi yx,$ where $x = (\mathbf{r} - \mathbf{r}_0) \cdot \hat{\mathbf{x}}$. Since $\Phi$ is a multiple of $2\pi$ and $x,y$ are integers, this phase is also a multiple of $2\pi$. The flux translation matrix is given by $U = \mathbb{I}$, where $\mathbb{I}$ is the identity matrix. In summary, we have explicitly extended Zak's translation operators in a magnetic field to the discrete symmetries of the square lattice. In Appendix~\ref{app:triangular} we generalize the results to the symmetries of the triangular lattice. \subsection{BBH model} We apply the results of the previous section to derive the symmetry operators in the Benalcazar-Bernevig-Hughes (BBH) model ~\cite{benalcazar2017electric,benalcazar2017quantized}. The model describes spinless electrons on a square lattice. The Hamiltonian consists of nearest-neighbor hopping terms, whose amplitudes $\lambda_{x/y}$ and $\gamma_{x/y}$ are depicted in Fig.~\ref{fig:BBHmodell}. Since $\lambda_{x/y} \neq \gamma_{x/y}$, each unit cell contains four atoms. {Further, each square plaquette has $\pi$ flux, for a total flux $\phi = 4\pi$ per unit cell.} The flux periodicity is $\Phi = 8\pi$, corresponding to $2\pi$ flux per square plaquette. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{BBH.png} \caption{Lattice and hopping terms of the BBH model. The black dashed square indicates the unit cell. Blue dots indicate atoms, each with one orbital. The origin is at the left-bottom atom in the unit cell {indicated by $0$}. The hopping amplitudes $\gamma_{x/y}$ and $\lambda_{x/y}$ are real; the minus signs result from the magnetic flux $\phi=4\pi$, i.e., applying Eq.~(\ref{eq:peierls}) in Landau gauge.} \label{fig:BBHmodell} \end{figure} We now derive the symmetry operations in the presence of the magnetic field; these commutation relations were stated in Refs.~\onlinecite{benalcazar2017electric,benalcazar2017quantized}, but here we derive them as an application of our formalism. We start with the mirror symmetries: in zero field, the Hamiltonian is invariant under $m_x(\bar{x})$ and $m_y(\bar{y})$ where $\bar{x}, \bar{y}$ are half-integers. (The Hamiltonian is not invariant under reflections about lines containing the origin because $\gamma_{x/y} \neq \lambda_{x/y}$.) These mirror reflections flip the sign of the magnetic field and thus generically are not symmetries of the Hamiltonian at finite flux. 
However, since $\phi = 4\pi = \Phi/2$, the combined operations $Um_x(\bar{x})$ and $Um_y(\bar{y})$ are symmetries. We showed in the previous section that in the Landau gauge, the flux shifting operator $U = \mathbb{I}$ for this model. Therefore, at $\phi = 4\pi$, $m_x(\bar{x})$ and $m_y(\bar{y})$ are in fact symmetries of the Hamiltonian. The effect of the magnetic field is to change their commutation relations: using Table~\ref{tab:lambda} with $U = \mathbb{I}$ yields $ m_{x}(\bar x)m_{y}(\bar y) =m_{y}(\bar y)m_{x}(\bar x) e^{4iB\bar x\bar y} $. Since $B=\pi$ and $\bar x,\bar y$ are half-integers \begin{equation} \{m_{x}(\bar x), m_{y}(\bar y)\}=0, \end{equation} i.e., mirror symmetries in the BBH model anti-commute. We now consider a four-fold rotation. When $\gamma_x=\gamma_y$, $\lambda_x=\lambda_y$, the BBH model has a four-fold rotation symmetry $C_4(\frac{1}{2}, \frac{1}{2})$, as well as other four-fold rotation axes related by translation. Since $\phi=\Phi/2$, the system also has an effective time-reversal symmetry, $U\cal T$; since we established $U=\mathbb{I}$ for this model, $\mathcal{T}$ is a symmetry even at this finite field and acts by complex conjugation. In the absence of a magnetic field, time-reversal pairs eigenstates with $\pm i$ rotation eigenvalues. We now show that in the presence of a magnetic field, time-reversal pairs eigenstates of $C_4(\bar{x},\bar{y})$ in a more complicated way. \par Since our origin is chosen such that all lattice sites have integer coordinates, $x,y\in \mathbb Z$, the phase $e^{i\lambda_{C_4}}$ in Table~\ref{tab:lambda} takes values of $\pm e^{-i\pi/4}$, so that $e^{-2i\lambda_{C_4}}=i$. Therefore, given an eigenstate $|\xi\rangle$ of $C_4(\bar x,\bar y)$ with eigenvalue $\xi$, $\cal T|\xi\rangle$ is also an eigenstate of $C_4(\bar x,\bar y)$: \begin{align} C_4(\bar x,\bar y){\cal T}|\xi\rangle &= e^{i\lambda_{C_4}}\hat{C_4}(\bar x,\bar y){\cal T}|\xi\rangle \nonumber\\ &= {\cal T}e^{-i\lambda_{C_4}}\hat{C_4}(\bar x,\bar y)|\xi\rangle \nonumber\\ &= {\cal T}e^{-2i\lambda_{C_4}}C_4(\bar x,\bar y)|\xi\rangle \nonumber\\ &= {\cal T} i \xi |\xi\rangle =-i\xi^*{\cal T}|\xi\rangle. \label{eqn:BBHC4T} \end{align} Thus, $\cal T$ pairs $C_4(\bar x,\bar y)$ eigenstates with eigenvalues $\xi$ and $-i\xi^*$. This is an example of symmetry operators acting in unusual ways at finite field. \section{Momentum space representations} We now define how the magnetic symmetry operators act in momentum space. This requires first defining how the symmetries act on Bloch wave functions and then labelling the Bloch wave functions by irreducible representations (irreps) of the symmetry group at each momentum point. However, in the presence of magnetic flux, we cannot immediately define the Bloch wave functions because Bloch's theorem does not apply when the translation operators do not commute. To apply Bloch's theorem, we define an enlarged ``magnetic unit cell,'' chosen to contain an integer multiple of $2\pi$ flux. The translation vectors that span the magnetic unit cell are referred to as magnetic translation vectors. From Eq.~(\ref{eqn:TxTyDelta}), the magnetic translation operators commute and thus can be simultaneously diagonalized, forming an abelian subgroup $\mathbb T_M$ of the full translation group $\mathbb T$. Consequently, Bloch's theorem applies to the magnetic unit cell and eigenstates of the Hamiltonian can be labelled by wave vectors in the magnetic Brillouin zone. 
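As a sanity check on the algebra above, one can represent the single-particle magnetic translation operators of Eq.~(\ref{eqn:Gg2}) as matrices on a small torus and verify both the 2-cocycle of Eq.~(\ref{eqn:TxTyDelta}) and the fact that translations by magnetic lattice vectors commute. The sketch below is our own construction (the helper name and torus size are arbitrary) and uses the Landau-gauge $\lambda_{T(\mathbf \Delta)}$ of Eq.~(\ref{eqn:lambdaT_Landau}).
\begin{verbatim}
import numpy as np

def magnetic_translation(Lx, Ly, B, dx, dy):
    """Single-particle magnetic translation T(dx*x_hat + dy*y_hat) on an
    Lx-by-Ly torus in Landau gauge, using lambda_T(r) = -B*dy*x
    (Eq. (lambdaT_Landau)); matrix element <g r|T|r> = exp(i*lambda(g r))."""
    dim = Lx * Ly
    T = np.zeros((dim, dim), dtype=complex)
    for x in range(Lx):
        for y in range(Ly):
            xp, yp = (x + dx) % Lx, (y + dy) % Ly
            T[xp * Ly + yp, x * Ly + y] = np.exp(-1j * B * dy * xp)
    return T

p, q = 1, 3
B = 2 * np.pi * p / q                 # flux per plaquette
Lx = Ly = 2 * q                       # torus size compatible with the gauge
Tx = magnetic_translation(Lx, Ly, B, 1, 0)
Ty = magnetic_translation(Lx, Ly, B, 0, 1)
Tqy = magnetic_translation(Lx, Ly, B, 0, q)

print(np.allclose(Tx @ Ty, np.exp(1j * B) * Ty @ Tx))  # Eq. (TxTyDelta): True
print(np.allclose(Tx @ Tqy, Tqy @ Tx))                 # magnetic translations commute: True
\end{verbatim}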
In Sec.~\ref{sec:mag_sym_3_momentum}, we define the Fourier transformed electron creation and annihilation operators in the magnetic unit cell. The operators necessarily have a ``sublattice'' index because the magnetic unit cell contains more than one non-magnetic unit cell. In Sec.~\ref{sec:kirrep}, we address how to label the Bloch wave functions by irreps of the little co-group at each momentum. Here we encounter another subtle point: since the little co-group is defined as a quotient group obtained from the space group mod magnetic translations, the little co-group only has a group structure if the magnetic translation group is a normal subgroup of the space group. Thus, not all magnetic unit cells are equal: to label Bloch wave functions by irreps of the little co-group, we must choose a magnetic unit cell such that $\mathbb{T}_M$ is a normal subgroup. After addressing this issue, we explain how to find the irreducible projective representations of the little co-group. \subsection{\label{sec:mag_sym_3_momentum} Symmetries in the magnetic Brillouin zone} \begin{figure} \centering \includegraphics[width=\linewidth]{unitcell.png} \caption{ Examples of (a) a $q$-by-$1$ unit cell and (b) a $q$-by-$q$ unit cell, taking $q=2$.} \label{fig:unitcell} \end{figure} We first consider the minimal magnetic unit cell in Landau gauge, which is a $q$-by-$1$ unit cell (see Fig.~\ref{fig:unitcell}(a)). For this choice of unit cell, the magnetic translation group $\mathbb{T}_M$ is generated by $T(\hat{\mathbf{x}})$ and $T(q\hat{\mathbf{y}})$. Now consider the layer group $p2$, generated by $C_2$ and lattice translations, for which $\mathbb{T}_M$ is a normal subgroup. $C_2$ acts just as in the non-magnetic case, {mapping a Bloch wave function at $\mathbf{k}$ to one at $-\mathbf{k}$.} However, $T(\hat{\mathbf y})$ acts in an unusual way, by mapping $k_x$ to $k_x+\phi$. This can be understood as follows: let $|\mathbf k\rangle$ be an eigenstate of $T(\hat{\mathbf x})$ such that $T(\hat{\mathbf x}) |\mathbf k\rangle = e^{ik_x}|\mathbf k\rangle$. Then $T(\hat{\mathbf y})|\mathbf k\rangle$ is also an eigenstate of $T(\hat{\mathbf x})$, with eigenvalue $e^{i(k_x + \phi)}$, i.e., \begin{equation} T(\hat{\mathbf x}) \left[ T(\hat{\mathbf y}) |\mathbf k\rangle \right] = e^{i(k_x+\phi)}T(\hat{\mathbf y})|\mathbf k\rangle \label{eq:Ty} \end{equation} Thus, $T(\hat{\mathbf{y}})$ multiplies the eigenvalue of $T(\hat{\mathbf{x}})$ by $e^{i\phi}$. {Nonetheless, both $C_2$ and $T(\hat{\mathbf{y}})$ have the usual property that a Bloch state at $\mathbf{k}$ is mapped to another Bloch state at $\mathbf{k}'$.} This is not the case for the layer group $p4$, {with respect to which $\mathbb{T}_M$ is not a normal subgroup}. As we will show below, the symmetry operator $C_4$ mixes a Bloch state at $\mathbf{k}$ into a linear combination of Bloch states at other momenta, forming a $q^2$-dimensional representation. Thus, we are motivated to consider a $q$-by-$q$ unit cell (see Fig.~\ref{fig:unitcell}(b)), where, although the magnetic unit cell is larger, the symmetry matrices are the same size as in the $q$-by-$1$ case. In Appendix~\ref{app:sym_q1_qq}, we show that the representations obtained from these two choices of magnetic unit cell are the same up to a unitary transformation. However, the $q$-by-$q$ unit cell is more suitable for applying topological quantum chemistry because the corresponding magnetic translation group is a normal Abelian subgroup of the layer group $p4$. 
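The normality criterion can also be phrased as simple bookkeeping on translation vectors: conjugating a magnetic translation by $C_4$ rotates its translation vector by $90^\circ$ (up to a phase), so $\mathbb T_M$ is normal precisely when the rotated generators remain in $\mathbb T_M$. A minimal sketch of this check (the labels and helper functions are ours) is:
\begin{verbatim}
import numpy as np

q = 3
R4 = np.array([[0, -1], [1, 0]])           # point-group part of C_4: (a, b) -> (-b, a)

def in_TM_q_by_1(v):
    return v[1] % q == 0                    # T_M generated by T(x) and T(q*y)

def in_TM_q_by_q(v):
    return v[0] % q == 0 and v[1] % q == 0  # T_M generated by T(q*x) and T(q*y)

gens_q1 = [np.array([1, 0]), np.array([0, q])]
gens_qq = [np.array([q, 0]), np.array([0, q])]

# Conjugation by C_4 sends T(v) to (a phase times) T(R4 v); normality requires
# the rotated generators to stay inside T_M.
print(all(in_TM_q_by_1(R4 @ v) for v in gens_q1))  # False: q-by-1 T_M not normal under C_4
print(all(in_TM_q_by_q(R4 @ v) for v in gens_qq))  # True:  q-by-q T_M is normal under C_4
\end{verbatim}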
We now consider the $q$-by-$1$ and $q$-by-$q$ unit cells in detail for the group $p4$ to illustrate these points. \subsubsection{\label{sec:mag_sym_3_momentum_qby1}$q$-by-$1$ unit cell for $p4$} We first consider the $q$-by-$1$ unit cell shown in Fig.~\ref{fig:unitcell}(a). The coordinates of lattice sites are labeled by $(x,y)=(R_x,qR_y+j)$ where $R_x,R_y\in \mathbb Z$, $j=0,\dots,q-1$. The Fourier transformed electron creation and annihilation operators are defined by \begin{align} c^\dagger_{\mathbf R, j,\alpha}&=\frac{q}{(2\pi)^2}\int d\mathbf k e^{i (k_xR_x+k_yqR_y)}c^\dagger_{\mathbf k, j,\alpha} \label{eqn:FT_qby1_cd} \\ c_{\mathbf R, j,\alpha}&=\frac{q}{(2\pi)^2}\int d\mathbf k e^{-i (k_xR_x+k_yqR_y)}c_{\mathbf k, j,\alpha}, \label{eqn:FT_qby1_c} \end{align} where $\alpha$ labels orbital degrees of freedom on each site. For now, we ignore the $\alpha$ degree of freedom, but will add it later when necessary. The magnetic Brillouin zone is a torus with $k_x\in [0,2\pi)$, $k_y\in [0,2\pi/q)$. \par Using the Fourier transforms in Eqs.~(\ref{eqn:FT_qby1_cd}) and (\ref{eqn:FT_qby1_c}), we find the action of the symmetry operators in momentum space. A translation by one (non-magnetic) lattice vector in the $\hat{\mathbf y}$ direction is implemented by \begin{align} T(\hat{\mathbf y})&=\frac{q}{(2\pi)^2}\sum_j\int d\mathbf k~ e^{ik_y} c^\dagger_{\mathbf k+(\phi,0), j}c_{\mathbf k, j-1} \label{eq:Tyq} \end{align} Unlike the non-magnetic case, $T(\hat{\mathbf y})$ does not leave each $\mathbf{k}$ point invariant: it maps $(k_x,k_y)$ to $(k_x+\phi,k_y)$. Translations by the magnetic lattice vectors do leave $\mathbf{k}$ invariant. We now consider the four-fold rotation operator. Using the function $\lambda_{C_4}$ in Table~\ref{tab:lambda}, \begin{align} C_4&=\frac{q}{(2\pi)^2}\int d\mathbf k \sum_{j,j'}\sum_{n=0}^{q-1} \frac1q e^{i(\phi jj'-(k_y+2\pi n /q)j-k_xj')} \nonumber \\ &c^\dagger_{(k_x,k_y),j}c_{(k_y+2\pi n/q,~-k_x~\text{mod}~ 2\pi/q),j'} \label{eq:C4q} \end{align} Thus, the situation for $C_4$ is much worse than for $T(\hat{\mathbf y})$: $C_4$ does not rotate one $\mathbf{k}$ point to another, but instead mixes a state at $(k_x,k_y)$ into a linear combination of states at the different points $(k_y+2\pi n/q,-k_x)$ $n=0,1,\dots,q-1$. \subsubsection{\label{sec:mag_sym_3_momentum_qbyq}$q$-by-$q$ unit cell for $p4$} We now consider the $q$-by-$q$ unit cell shown in Fig.~\ref{fig:unitcell}(b). The coordinates of lattice sites are labeled by $(x,y)=(qR_x+j_x,qR_y+j_y)$ where $R_x,R_y\in \mathbb Z$ label a magnetic unit cell and $j_x,j_y=0,\dots,q-1$ label the coordinates of atoms within. The Fourier transformed electron creation and annihilation operators are defined by \begin{align} \label{eqn:FT_qbyq_cd} c^\dagger_{\mathbf R, \mathbf{j},\alpha}&=(\frac{q}{2\pi})^2\int d\mathbf k e^{i (k_xqR_x+k_yqR_y)}c^\dagger_{\mathbf k, \mathbf{j},\alpha} \\ c_{\mathbf R, \mathbf{j},\alpha}&=(\frac{q}{2\pi})^2\int d\mathbf k e^{-i (k_xqR_x+k_yqR_y)}c_{\mathbf k, \mathbf{j},\alpha} \label{eqn:FT_qbyq_c} \end{align} Again we omit the orbital degrees of freedom $\alpha$ in this section. 
The magnetic Brillouin zone is a torus with $k_x, k_y\in [0,2\pi/q)$.\par Using the Fourier transforms in Eqs.~(\ref{eqn:FT_qbyq_cd}) and (\ref{eqn:FT_qbyq_c}) and plugging $\lambda_g$ from Table~\ref{tab:lambda} into Eq.~(\ref{eqn:Gg2}), the magnetic $T(\hat{\mathbf y})$ and $C_4$ symmetries are~\cite{herzog2020hofstadter} \begin{align} T(\hat{\mathbf y})&=(\frac{q}{2\pi})^2\int d\mathbf k~e^{ik_y}\sum_{j_x,j_y}e^{-i\phi j_x} c^\dagger_{\mathbf k, j_x,j_y}c_{\mathbf k, j_x,j_y-1} \label{eq:Tyqq} \end{align} and \begin{align} C_4&=(\frac{q}{2\pi})^2\int d\mathbf k~\sum_{j_x,j_y}e^{-i\phi j_x j_y} e^{-i(C_4\mathbf k \cdot \mathbf j-\mathbf k \cdot \mathbf j')} \nonumber\\ &\qquad c^\dagger_{(-k_y,k_x), j_x,j_y}c_{(k_x,k_y), j_x',j_y'} \label{eq:C4qq} \end{align} where $\mathbf j'=(j_x',j_y')$ is a function of $j_x,j_y$ that satisfies $j_x'=j_y \mod q$ and $j_y'=-j_x \mod q$. In Eqs.~(\ref{eq:Tyqq}) and (\ref{eq:C4qq}), the action of the symmetry operator on $\mathbf{k}$ is identical to its action in the absence of a magnetic field, i.e., translation leaves $\mathbf{k}$ invariant and a rotation in space rotates $\mathbf{k}$. This is an improvement over the $q$-by-$1$ magnetic unit cell (Eqs.~(\ref{eq:Tyq}) and (\ref{eq:C4q})), {for which a rotation mixed a Bloch state into a linear combination of several Bloch states.} \subsection{\label{sec:kirrep}Irreps at high symmetry points} We now address how to determine irreps of the symmetry group at each momentum. A Bloch wave function at a particular momentum $\mathbf{k}$ transforms as a representation of the little group at $\mathbf{k}$, denoted $G_\mathbf{k}$, which consists of all the space group operations that leave $\mathbf{k}$ invariant up to a reciprocal lattice vector: \begin{equation} \label{eq:defGk} G_\mathbf{k} = \lbrace g \in G | g\mathbf{k} \equiv \mathbf{k} \rbrace, \end{equation} where $\equiv$ is defined by equality up to a reciprocal lattice vector. Since the lattice translations are always represented by Bloch phases in the representations, it is useful to label the wave functions by irreps of the little co-group, defined as \begin{equation} \label{eq:deftildeGK} \widetilde{G}_\mathbf{k}=G_\mathbf{k}/{\mathbb T}_M \end{equation} As mentioned above, for the little co-group to satisfy the definition of a group, $\mathbb{T}_M$ must be a normal subgroup of $G_\mathbf{k}$, i.e., for all $g\in G_\mathbf{k}$, $t\in \mathbb{T}_M$, $g^{-1} t g \in \mathbb{T}_M$. One can check that for the $q$-by-$1$ unit cell, the magnetic translation group is a normal subgroup of the layer group $p2$, but it is not normal for the layer groups containing three- or four-fold rotations (because, for example, $C_4^{-1} T(\hat{\mathbf{x}}) C_4 = T(\hat{\mathbf{y}})^{-1}$, which is not in the magnetic translation group for the $q$-by-$1$ unit cell). Thus, we use the $q$-by-$q$ unit cell for layer groups with three- or four-fold rotations. Under magnetic flux, the little co-groups and their irreps therefore differ from their zero-flux analogues in two important ways: first, in the presence of magnetic flux, the little co-groups include sublattice translation symmetries; and second, the irreps of little co-groups in the presence of magnetic flux are projective representations corresponding to the 2-cocycle defined by the flux. We now study some examples: in Tables~\ref{tab:p2}, \ref{tab:p4} and \ref{tab:p4TI} we summarize the projective irreps at high symmetry points for the layer groups $p2$, $p4$, and $p4/m'$ at flux $\phi=\pi$. 
For later convenience we have assumed there is spin-orbit coupling, i.e., $C_n^n=-1$. Notice that the character tables are not square, which is a general feature of projective representations. The projective irreps corresponding to a particular 2-cocycle can be considered as a subset of non-projective representations of a larger group; the character table of that larger group will be square. To ensure that we have found all the projective irreps, we use the theorem by Schur~\cite{bradley2010mathematical} stating that for irreducible projective representations with a particular $2$-cocycle, \begin{equation} \label{eqn:schur} \sum_{\rho} \left(\text{dim}(\rho)\right)^2=|\widetilde{G}_{\mathbf k}|, \end{equation} where the sum runs over all projective irreps $\rho$ of $\widetilde{G}_{\mathbf k}$ with the specified $2$-cocycle and $\widetilde{G}_{\mathbf k}$ is the little co-group defined above. (Notice this formula does not apply to anti-unitary groups.) The calculation of the irreps of the little co-groups is shown in Appendix~\ref{app:irreps}, with the (anti)-commutation relations for the magnetic symmetries shown in Appendix~\ref{app:rotation}. In the remainder of this section, we sketch the calculation for the simplest non-trivial case, layer group $p2$ at $\pi$ flux. For the $2$-by-$1$ unit cell, the group of magnetic lattice translations is ${\mathbb T}_M=\{T(n_1{\hat{\mathbf x}}+2n_2{\hat{\mathbf y}})|n_1,n_2\in \mathbb Z\}$ and the Brillouin zone is $[-\pi,\pi)\times[-\pi/2,\pi/2)$. We now determine the high-symmetry points. Since $C_2$ symmetry maps $(k_x,k_y)$ to $(-k_x,-k_y)$, there are four momenta that are symmetric under $C_2$ up to a magnetic reciprocal lattice vector: $(0,0),(0,\pi/2),(\pi,0),(\pi,\pi/2)$. Since $T(\hat{\mathbf y})$ maps $(k_x,k_y)$ to $ (k_x+\pi,k_y)$ (Eq.~(\ref{eq:Ty})), $T(\hat{\mathbf y})C_2$ maps $(k_x,k_y)$ to $ (-k_x+\pi,-k_y)$. Therefore, there are four $T(\hat{\mathbf y})C_2$ symmetric momenta, $(\pm \pi/2,0)$ and $(\pm \pi/2, \pi/2)$. We derive in Appendix~\ref{app:irreps} that the $C_2$ eigenvalues at $(\pi,0)$ are the same as those at $(0,0)$, while the $C_2$ eigenvalues at $(0,\pi/2)$ are opposite to those at $(\pi,\pi/2)$. The same relations hold for the $T(\hat{\mathbf y})C_2$ symmetric points. In conclusion, there are two independent $C_2$ symmetric points, $\Gamma=(0,0)$ and $Y=(0,\pi/2)$, and we find that each has two one-dimensional irreps labeled by $C_2$ eigenvalue $+i$, $-i$. There are also two independent $T(\hat{\mathbf y})C_2$ symmetric points, $X=(\pi/2,0)$ and $M=(\pi/2,\pi/2)$, and each has two one-dimensional irreps labeled by $T(\hat{\mathbf y})C_2$ eigenvalue $+i$, $-i$. Since each little co-group contains the identity element and either $C_2$ or $T(\hat{\mathbf y})C_2$, $|\widetilde{G}_{\mathbf k}|=2$ for these points. Thus, Eq.~(\ref{eqn:schur}) is satisfied, which means we have found all the projective irreps. \begin{table}[] \centering \begin{tabular}{c|c|c|c|c|c|c|c|c} &\multicolumn{2}{c|}{$X(\pi/2,0)$} &\multicolumn{2}{c|}{$Y(0,\pi/2)$} &\multicolumn{2}{c|}{$\Gamma(0,0)$} &\multicolumn{2}{c}{$M(\pi/2,\pi/2)$} \\ \hline Irrep & $X_1^{(p2)}$ & $X_2^{(p2)}$ & $Y_1^{(p2)}$ & $Y_2^{(p2)}$ & $\Gamma_1^{(p2)}$ & $\Gamma_2^{(p2)}$ & $M_1^{(p2)}$ & $M_2^{(p2)}$\\ \hline $C_2$ & & & $+i$ & $-i$ & $+i$ & $-i$ & & \\ $T(\hat{\mathbf y})C_2$ & $+i$ & $-i$ &&&& & $+i$ & $-i$ \end{tabular} \caption{High symmetry momenta (first row) and the irreps (second row) of their little co-group for the group $p2$. 
The third and fourth rows list the eigenvalue of the indicated symmetry; the row is blank if the symmetry is not in the little co-group.} \label{tab:p2} \end{table} \begin{table}[] \begin{tabular}{c|c|c|c|c} &\multicolumn{2}{c|}{$X(\pi/2,0)$} &\multicolumn{2}{c}{$Y(0,\pi/2)$}\\ \hline Irrep & $X_1$ & $X_2$ & $Y_1$ & $Y_2$\\ \hline $C_2$ & $i\sigma_z$ & $-i\sigma_z$ & $i\sigma_z$ & $-i\sigma_z$\\ $T(\hat{\mathbf x})$ & $\sigma_x$ & $\sigma_x$ & $\sigma_z$ & $\sigma_z$\\ $T(\hat{\mathbf y})$ & $\sigma_z$ & $\sigma_z$ & $\sigma_y$ & $\sigma_y$\\ $T(\hat{\mathbf x})T(\hat{\mathbf y})$ & $-i\sigma_y$ & $-i\sigma_y$ & $-i\sigma_x$ & $-i\sigma_x$ \\ \hline \end{tabular} \vspace{10pt} \begin{tabular}{c|c|c|c|c} \hline &\multicolumn{4}{c}{$\Gamma(0,0)$} \\ \hline Irrep & $\Gamma_1$ & $\Gamma_2$ & $\Gamma_3$ & $\Gamma_4$ \\ \hline $C_4T(\hat{\mathbf x})$ & $\begin{pmatrix} 1&\\&i \end{pmatrix}$ & $\begin{pmatrix} i&\\&-1 \end{pmatrix}$ & $\begin{pmatrix} -1&\\&-i \end{pmatrix}$ & $\begin{pmatrix} -i&\\&1 \end{pmatrix}$\\ $T(\hat{\mathbf x})T(\hat{\mathbf y})$ & $i\sigma_z$ & $i\sigma_z$ & $i\sigma_z$ & $i\sigma_z$ \end{tabular} \vspace{10pt} \begin{tabular}{c|c|c|c|c} \hline &\multicolumn{4}{c}{$M(\pi/2,\pi/2)$} \\ \hline Irrep & $M_1$ & $M_2$ &$M_3$ &$M_4$\\ \hline $C_4$ &$\begin{pmatrix} \epsilon &\\&\epsilon^*\end{pmatrix}$ &$\begin{pmatrix} -\epsilon^* &\\&\epsilon\end{pmatrix}$ &$\begin{pmatrix} -\epsilon &\\&-\epsilon^*\end{pmatrix}$ &$\begin{pmatrix} \epsilon^* &\\&-\epsilon\end{pmatrix}$ \\ $T(\hat{\mathbf x})T(\hat{\mathbf y})$ & $i\sigma_z$ & $i\sigma_z$ & $i\sigma_z$ &$i\sigma_z$ \end{tabular} \caption{High symmetry momenta (first row) and the irreps (second row) of their little co-group for the group $p4$. Subsequent rows list the eigenvalue of the indicated symmetry with $\epsilon=e^{i\pi/4}$; the row is blank if the symmetry is not in the little co-group.} \label{tab:p4} \end{table} \begin{table}[] \begin{tabular}{c|c|c|c|c|c|c|c} &{$X(\pi/2,0)$} &{$Y(0,\pi/2)$} &\multicolumn{2}{c|}{$\Gamma(0,0)$} &\multicolumn{3}{c}{$M(\pi/2,\pi/2)$} \\ \hline Irrep &$X_1X_2$ &$Y_1Y_2$ &$\Gamma_1\Gamma_4$&$\Gamma_2\Gamma_3$ &$M_1M_1$ &$M_3M_3$ &$M_2M_4$ \end{tabular} \caption{High symmetry momenta (first row) and the irreps (second row) of their little co-group for the group $p4/m'$. The notation $\Pi_i\Pi_j$ indicates that $\Pi_i$ and $\Pi_j$ are paired by $\cal TI$ symmetry, where $\Pi_i$ is an irrep of the corresponding little co-group with respect to layer group $p4$, shown in Table~\ref{tab:p4}.} \label{tab:p4TI} \end{table} \section{\label{sec:TQC}Topological quantum chemistry in a magnetic field} Finally we turn to the theory of TQC. TQC classifies topological crystalline insulators (TCIs) by enumerating all trivial phases in each space group, where a trivial phase is defined as one where exponentially localized Wannier functions exist and transform locally under all symmetries. A group of bands can be identified as a TCI if it is not in the space of trivial phases. Together, the Wannier functions corresponding to a single band (or group of bands) transform as a representation of the full space group, called a band representation \cite{zak1980symmetry,zak1981band,michel1999connectivity,michel2000elementary,michel2001elementary,bradlyn2017topological,cano2018building}. 
TQC labels each band representation by how its Bloch wave functions transform under symmetry at high symmetry momenta, i.e., by a set of irreps of the little co-group at each high symmetry momentum; this label is known as a symmetry indicator~\cite{po2017symmetry}. Symmetry indicators provide a practical way to identify many TCIs: specifically, a group of bands whose irreps at high symmetry momenta are not consistent with any of the trivial phases must be topological. In Sec.~\ref{sec:magneticWannier} we describe how to construct a basis of symmetric magnetic Wannier functions. We use this basis in Sec.~\ref{sec:inducedrep} to derive how the space group symmetries act on the Wannier functions; the symmetry matrices comprise the band representation. Fourier transforming the band representation yields its symmetry indicator. \subsection{Magnetic Wannier functions} \label{sec:magneticWannier} We now describe how to construct a basis of symmetric Wannier functions for a magnetic unit cell. Given a site $\mathbf{q}$, which will serve as a Wannier center, the site-symmetry group $G_\mathbf{q}$ is defined as the set of symmetries that leave $\mathbf{q}$ invariant, i.e., $G_\mathbf{q} = \lbrace g\in G | g\mathbf{q} = \mathbf{q} \rbrace$. The site-symmetry group defines a coset decomposition of the space group, \begin{equation} \label{eqn:cosetdecomp} G=\bigcup_{\alpha} g_\alpha G_{\mathbf q} \ltimes {\mathbb T}_M, \end{equation} where $G$ is the space group, ${\mathbb T}_M$ is the {magnetic} lattice translation group, and $\alpha=1, \dots, n$, where $n=|G/{\mathbb T}_M|/|G_{\mathbf q}|$ is the multiplicity of the Wyckoff position containing $\mathbf{q}$. The symmetries $g_\alpha$ are coset representatives. The choice of coset representatives is not unique; a different choice will yield a band representation related to the original by a unitary transformation, while the symmetry indicator is unchanged. The coset representatives define positions ${\mathbf q}_\alpha = g_\alpha {\mathbf q}$ that form the orbit of $\mathbf q$ within the magnetic unit cell. Together, these points are part of the same Wyckoff position, whose multiplicity $n$ is equal to the number of points in the orbit of $\mathbf{q}$ in the magnetic unit cell. Unlike the case of zero magnetic field, the set of coset representatives $g_\alpha$ includes pure translations within the magnetic unit cell. Fig.~\ref{fig:WC} shows the Wyckoff positions for the groups $p2$, $p4$, and $p4/m'$. \begin{figure}[b] \centering \includegraphics[width=\linewidth]{WC_July.png} \caption{Wyckoff positions at magnetic flux $\pi$ for (a) the $2$-by-$1$ unit cell for group $p2$ and (b) the $2$-by-$2$ unit cell for the groups $p4$ and $p4/m'$. Each Wyckoff position is labelled by its multiplicity and an alphanumeric character. The general Wyckoff position, whose site-symmetry group consists of only the identity, is not shown.} \label{fig:WC} \end{figure} Suppose there are $n_{\mathbf q}$ orbitals centered at $\mathbf q$. These orbitals are described by $n_{\mathbf q}$ Wannier functions $|W_{i1}\rangle$, where $i=1\dots n_{\mathbf q}$. The Wannier functions transform under symmetries $g\in G_{\mathbf q}$ as a projective representation $\rho$ of $G_{\mathbf q}$, \begin{equation} \label{eqn:rep_sitegroup} g|W_{i1}\rangle =\sum_{j=1}^{n_{\mathbf q}}[\rho(g)]_{ji}|W_{j1}\rangle. 
\end{equation} Applying the representatives $g_\alpha$ in the coset decomposition of the space group $G$ in Eq.~(\ref{eqn:cosetdecomp}) to $|W_{i1}\rangle$ gives another Wannier function \begin{equation} \label{eqn:Walpha} |W_{i\alpha}\rangle = g_\alpha |W_{i1}\rangle, \end{equation} localized at ${\mathbf q}_\alpha$. All these Wannier functions $|W_{i\alpha}\rangle$, where $i=1\dots n_{\mathbf q}$ and $\alpha=1\dots n$, form an induced representation of $G$, as we now explain. \par \subsection{Induced representation} \label{sec:inducedrep} In this section, we derive how the space group symmetries act on the Wannier functions. This provides an explicit construction of a band representation with Wannier functions as a basis. Fourier transforming the band representation gives the irreps of the little co-group at each high-symmetry point, i.e., the symmetry indicator. Consider a group element $h g_{\alpha} \in G$. The coset decomposition in Eq.~(\ref{eqn:cosetdecomp}) implies that $hg_\alpha$ can be written in the form \begin{equation} \label{eqn:hg_coset} hg_\alpha = e^{if_{\alpha\beta}(h)}\{E|{\mathbf t}_{\alpha\beta}(h)\}g_\beta g \end{equation} where $\mathbf t_{\alpha\beta}(h) = h{\mathbf q}_\alpha -{\mathbf q}_\beta$ and $ \{E|\mathbf{t}_{\alpha\beta}\}\in \mathbb{T}_M$, $g\in G_{\mathbf q}$, and the coset representative $g_\beta$ are uniquely determined by the coset decomposition. The remaining phase factor $f_{\alpha\beta}(h)$ is due to the non-trivial 2-cocycles. For the two-dimensional systems without magnetic field, $f_{\alpha\beta}(h)\equiv0$. For the case with magnetic field, in general $f_{\alpha\beta}(h)$ is nonzero. The phase factor $f_{\alpha\beta}(h)$ is the new ingredient that appears in a magnetic field and is a key result of the present work; it does not appear in the non-magnetic theory (for example, it does not appear in Eq.~(6) in Ref.~\onlinecite{cano2018building}). This phase factor is gauge invariant because it results from the commutations between rotations and translations (see Appendix~\ref{app:rotation}). We briefly give two examples to show how this phase factor appears. As a first example, consider the layer group $p1$ with a $2$-by-$2$ unit cell, corresponding to $\pi$ flux. {Starting from a Wannier function centered at a general position $\mathbf{q} = (x,y)$, the coset representatives in Eq.~(\ref{eqn:cosetdecomp}) can be chosen as $g_1=\{E|0\}$, $g_2 = T(\hat{\mathbf x}) $, $g_3 = T(\hat{\mathbf y})$, $g_4=T(\hat{\mathbf x})T(\hat{\mathbf y})$. } Now consider the left-hand-side of Eq.~(\ref{eqn:hg_coset}) with $h=T(\hat{\mathbf y})$, $g_\alpha=T(\hat{\mathbf x})$. Then on the right-hand-side of Eq.~(\ref{eqn:hg_coset}), $g_\beta=T(\hat{\mathbf x})T(\hat{\mathbf y})$, $g=E$, and ${\mathbf t}_{\alpha\beta}=0$. Since $T(\hat{\mathbf y})T(\hat{\mathbf x}) = e^{i\pi} T(\hat{\mathbf x})T(\hat{\mathbf y})$, $f_{\alpha\beta}(h)=\pi$. As a second example, consider layer group $p4$ with a $2$-by-$2$ unit cell, corresponding again to $\pi$ flux. {Starting from a Wannier function centered at $\mathbf{q} = (\frac{1}{2}, \frac{1}{2})$, the coset representatives in Eq.~(\ref{eqn:cosetdecomp}) can be chosen as $g_1=\{E|0\}$, $g_2 = T(\hat{\mathbf x}) $, $g_3 = T(\hat{\mathbf y})$, $g_4=T(\hat{\mathbf x})T(\hat{\mathbf y})$.} Now consider the left-hand-side of Eq.~(\ref{eqn:hg_coset}) with $h=C_4$, $g_\alpha=T(\hat{\mathbf x})$. 
The coset decomposition uniquely determines $g_\beta=T(\hat{\mathbf x})T(\hat{\mathbf y})$, $g=C_4(\frac{1}{2},\frac{1}{2})$, {and $\mathbf{t}_{\alpha\beta} = (-2,0)$ on the right-hand-side of Eq.~(\ref{eqn:hg_coset}).} Since $ C_4 T(\hat{\mathbf x}) = e^{i3\pi/4}\{E|(-2,0)\} T(\hat{\mathbf x})T(\hat{\mathbf y})C_4(1/2,1/2) $, the extra phase term $f_{\alpha\beta}(h)=3\pi/4$. As discussed at the start of Sec.~\ref{sec:TQC}, the set of Wannier functions {centered at all $\mathbf q_\alpha$} forms a basis for a band representation, which we denote $\rho_G$. Given a space group symmetry $h\in G$, Eq.~(\ref{eqn:hg_coset}) determines the matrix elements of $\rho_G(h)$ in the basis of Wannier functions defined in Eq.~(\ref{eqn:Walpha}) by: \begin{align} \label{eqn:inducedR} \rho_{G}(h) |W_{i\alpha}(\mathbf r-\mathbf t)\rangle &= e^{if_{\alpha\beta}(h)}[\rho(g)]_{ji}|W_{j\beta}(\mathbf r-R\mathbf t-{\mathbf t}_{\alpha\beta})\rangle \end{align} where $R$ is the rotational part of $h$, $\rho(g)$ is the given representation defined in Eq.~(\ref{eqn:rep_sitegroup}), $\mathbf t_{\alpha\beta}(h) = h{\mathbf q}_\alpha -{\mathbf q}_\beta$, and a sum over $j=1\dots n_{\mathbf q}$ is implied. Substituting the Fourier transformed Wannier functions, \begin{align} |W_{j\beta}(\mathbf r-\mathbf t) \rangle = \int d\mathbf k e^{i{\mathbf k}\cdot{\mathbf t}} |a_{j\beta}(\mathbf k,\mathbf r)\rangle, \\ |a_{j\beta}(\mathbf k,\mathbf r)\rangle = \sum_{\mathbf t} e^{-i{\mathbf k}\cdot{\mathbf t}} |W_{j\beta}(\mathbf r-\mathbf t) \rangle, \end{align} into Eq.~(\ref{eqn:inducedR}) yields \begin{align} \label{eqn:inducedk} \rho_{G}(h) |a_{i\alpha}(\mathbf k,\mathbf r)\rangle &= e^{if_{\alpha\beta}(h)-iR{\mathbf k}\cdot {\mathbf t}_{\alpha\beta}}[\rho(g)]_{ji}|a_{j\beta}(R\mathbf k,\mathbf r)\rangle \end{align} From Eq.~(\ref{eqn:inducedk}), a representation of the little co-group (defined in Sec.~\ref{sec:kirrep}) is determined from $\rho_G$ by restricting each matrix $\rho_G(h\in \widetilde{G}_\mathbf{k})$ to only the rows and columns corresponding to Fourier-transformed Wannier functions at $\mathbf{k}$. The set of irreps obtained at all $\mathbf{k}$ determines the symmetry indicator following the procedure we introduced in Ref.~\onlinecite{fang2021filling}, which is summarized in Appendix~\ref{app:TQC}. We now derive the symmetry indicator classification for a few examples. \subsection{Examples} \label{sec:applications} We apply our formalism of TQC in a magnetic flux to three magnetic layer groups with $\pi$ flux: $p2$, $p4$, and $p4/m'$. In each case, we discuss the stable symmetry indicator classification; the derivations are in Appendix~\ref{app:TQC}. \par \subsubsection{p2} For layer group $p2$ with $\pi$ flux, we choose a $2$-by-$1$ magnetic unit cell, following the discussion in Sec.~\ref{sec:mag_sym_3_momentum}. The symmetry indicator has a $\mathbb{Z}_4$ classification. The indicator for a particular group of bands is \begin{multline} \label{eqn:indexp2} \text{index} = \#\Gamma_1-\#\Gamma_2+\#X_1-\#X_2 +\#Y_1-\#Y_2 \\ +\#M_1-\#M_2-N \mod 4 \end{multline} where $\#\Pi_i$ indicates the number of times the irrep $\Pi_i$ at the high symmetry point $\Pi$ appears in the bands and $N=\#\Gamma_1+\#\Gamma_2=\#X_1+\#X_2=\#Y_1+\#Y_2=\#M_1+\#M_2$ is the filling per magnetic unit cell. 
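For concreteness, Eq.~(\ref{eqn:indexp2}) can be evaluated directly from a table of irrep multiplicities. The short sketch below does so for a hypothetical input; the dictionary labels and the example multiplicities are our own shorthand and are not claimed to correspond to a particular physical band structure.
\begin{verbatim}
def p2_index(irreps):
    """Evaluate the Z_4 indicator of Eq. (indexp2) from irrep multiplicities.
    `irreps` maps shorthand labels 'G1', 'G2', 'X1', ..., 'M2' to the number
    of times each irrep appears in the group of bands under consideration."""
    N = irreps.get('G1', 0) + irreps.get('G2', 0)   # filling per magnetic unit cell
    index = sum(irreps.get(k + '1', 0) - irreps.get(k + '2', 0)
                for k in ('G', 'X', 'Y', 'M')) - N
    return index % 4

# Hypothetical example: a single band carrying eigenvalue +i at every
# high-symmetry momentum gives (4 - 1) mod 4 = 3.
print(p2_index({'G1': 1, 'X1': 1, 'Y1': 1, 'M1': 1}))   # 3
\end{verbatim}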
\par {This indicator Eq.~(\ref{eqn:indexp2}) is exactly the same as the Chern number indicator Eq.~(3) in Ref.~\onlinecite{matsugatani2018universal} in the special case of $\pi$-flux and spinful electrons, i.e.,} \begin{equation} \label{eqn:Chern} e^{i\pi (C/q-\bar{\rho})}=(-)^{2SN}w_{C_2}^\Gamma w_{C_2}^Y w_{T(\hat{\mathbf y})C_2}^X w_{T(\hat{\mathbf y})C_2}^M \end{equation} where {at flux $\pi$}, $q=2$; $\bar{\rho}=N/2$ is the filling per non-magnetic primitive unit cell; $S=1/2$ is the spin (angular momentum) quantum number; and $w_{g}^{\Pi}$ is the product of eigenvalues of the symmetry $g$ for all filled bands at momentum $\Pi$. \subsubsection{p4} For layer group $p4$ with $\pi$ flux, we choose a $2$-by-$2$ unit cell, following the discussion in Sec.~\ref{sec:mag_sym_3_momentum}. The symmetry indicator has a $\mathbb{Z}_8$ classification. The indicator for a particular group of bands is determined by: \begin{multline} \label{eqn:indexp4} \text{index}=2\# \Gamma_1+4\# \Gamma_2-2\# \Gamma_3+\# M_1+3\# M_2\\ -3\# M_3-\# M_4+4\# X_1 \mod 8 \end{multline} To understand this indicator, we compare the new index to the symmetry indicator formula for Chern number in Ref.~\onlinecite{fang2012bulk}: \begin{equation} \label{eqn:Chernnumber} e^{i\frac{\pi}{2} C}=(-)^{2SN}w_{C_4}^\Gamma w_{C_2}^X w_{C_4}^M, \end{equation} which, in terms of irreps, is given by (derivation in Appendix~\ref{app:Chern}) \begin{align} \label{eqn:ChernumberIrrep} C = 2N &+\#\Gamma_1+\#\Gamma_3-\#\Gamma_2-\#\Gamma_4 \nonumber \\ &+2(\#M_2+\#M_4) \mod 4 \end{align} $N$ is always an even integer due to the two-dimensional irreps (shown in Table~\ref{tab:p4}). We conclude \begin{equation} C = \text{index} \mod 4 \end{equation} The topological phase with index equal to $4\mod 8$ goes beyond the earlier symmetry indicator given by Eq.~(\ref{eqn:Chernnumber}). We leave it to future work to determine whether this index is a stronger indicator for the Chern number or has a different physical meaning. \subsubsection{p4/m'} For layer group $p4/m'$ with $\pi$ flux, we choose a $2$-by-$2$ unit cell, following the discussion in Sec.~\ref{sec:mag_sym_3_momentum}. The symmetry indicator has a $\mathbb{Z}_2$ classification. The indicator for a particular group of bands is determined by: \begin{align} \label{eqn:index1p4TI} \text{index}&= N/4 \mod 2, \end{align} where $N/4\equiv \bar \rho$ is the filling per original unit cell. Notice each band in this group is four-fold degenerate (see Table~\ref{tab:p4TI} in Sec.~\ref{sec:kirrep}), and hence $\bar \rho \in \mathbb Z$. The group $p4/m'$ is generated by a four-fold rotation and the product of time-reversal and inversion symmetry $\cal TI$. As is well known, $\cal TI$ prevents a non-vanishing Chern number~\cite{bernevig2013topological} and the absence of $\cal T$ prevents the existence of a strong topological insulator~\cite{fu2006time}. Since $\cal T$ and $\cal I$ are not separately symmetries, there is no mirror symmetry and hence no mirror Chern number. Thus, our stable index diagnoses a new phase that exists only in systems with magnetic flux. This phase is realized in the model we present in Sec.~\ref{sec:Model}. However, it does not realize an anomalous gapless boundary state because when the boundary is opened, the sublattice translation symmetries that protect the phase are broken. \par \section{\label{sec:Model} Application to a quadrupole insulator} {In this section, we apply our results to a model on the square lattice. 
At zero flux, this model is a quadrupole insulator that exhibits corner states. Since the symmetries that protect the corner states are preserved in the presence of a perpendicular magnetic field, the corner states must survive when magnetic flux is introduced. We use the formalism developed in the previous sections to verify the presence of corner states using symmetry indicators. Finally, we show that at a critical magnetic flux, the bulk gap closes and the corner states disappear, as shown in the Hofstadter butterfly spectrum in Fig.~\ref{fig:Hof}. We use the symmetry indicators to verify that when the corner states disappear, the symmetry indicator vanishes.} {Our results provide a new probe of the higher order topology in the model, i.e., a gap-closing phase transition in the presence of a magnetic field, which may be easier to observe than probing the corner states directly.} \subsection{\label{sec:Model_model} Model} \begin{figure} \centering \includegraphics[width=\linewidth]{WBBH.png} \caption{Hofstadter spectrum of the OAL model. The grey states are calculated with periodic boundary conditions and show the bulk gap closing at a critical flux. The red states are calculated with open boundary conditions and show the disappearance of the corner states upon bulk gap closure. The spectrum is computed for dimensions $L_x=200$, $L_y=10$ and parameters $\lambda=1$, $\gamma=0.5$.} \label{fig:Hof} \end{figure} {We study a model proposed by Wieder et al.\ in Ref.~\onlinecite{wieder2020strong}}, which at zero flux has the same momentum space Hamiltonian as the {$C_4$ symmetric} Benalcazar-Bernevig-Hughes (BBH) model at $4\pi$ flux per unit cell~\cite{benalcazar2017quantized,benalcazar2017electric}. {This model was given as an example in Fig.~3 and Appendix~A of Ref.~\onlinecite{wieder2020strong}}. Yet the two models have some fundamental differences: while the BBH model has four atoms per unit cell and one orbital per atom, Wieder's model has one atom sitting at the origin of the unit cell and four orbitals per atom. Since the position of atoms in the unit cell will be important when we include magnetic flux, the two models have different Hofstadter spectra. {Further, the BBH model describes spinless fermions, while Wieder's model describes fermions with spin-orbit coupling. As a result, the symmetry representations for the two models are different.} In momentum space the Hamiltonian is \begin{align} H(\mathbf k)&=(v_m+t_1(\cos(k_x)+\cos(k_y)))\Gamma_3 \nonumber \\ &+t_2(\cos(k_x)-\cos(k_y))\Gamma_4 \nonumber \\ &+u \sin(k_x)\Gamma_1+u \sin(k_y)\Gamma_2, \end{align} where $\Gamma_1=\tau_y\sigma_y$, $\Gamma_2=\tau_y\sigma_x$, $\Gamma_3=\tau_z$, and $\Gamma_4=\tau_x$. {The Pauli matrices $\tau$ and $\sigma$ together span the orbital space of each atom.} In the limit $t_1=t_2=u/\sqrt 2=\lambda/\sqrt 2$ and $v_m=\sqrt 2 \gamma$, this Hamiltonian is equivalent to the BBH Hamiltonian after a basis change. In this section, we adopt these parameters and set $\lambda=1$ and $\gamma=0.5$ so that the system is a quadrupole insulator at zero flux. The generators of the symmetries of this Hamiltonian take the matrix forms: \begin{align} \label{eqn:C4z} C_{4z}&=\tau_z\left(\frac{{\mathbb I}_\sigma-\sigma_z}{2}\right),\\ M_x&=\sigma_x,\\ {\cal TI}&=\sigma_y {\cal K}, \end{align} where $\cal K$ denotes complex conjugation. 
There is also a chiral symmetry that anti-commutes with the Hamiltonian \begin{equation} \Gamma_5=\tau_y\sigma_z \label{eq:chiral} \end{equation} {To incorporate the effect of a magnetic field, we need the real space Hamiltonian, given by:} \begin{align} H&=\sum_{i,j\in \mathbb Z}\mathbf t_x c^\dagger_{(i+1,j)} c_{(i,j)}+\mathbf t_y c^\dagger_{(i,j+1)} c_{(i,j)}+h.c. \nonumber \\ &\quad +v_m\Gamma_3 c^\dagger_{(i,j)} c_{(i,j)} \label{eqn:Ham} \end{align} where $\mathbf{t}_{x,y}$ are hopping matrices given by \begin{align} \mathbf t_x=(t_1\Gamma_3+t_2\Gamma_4)/2-iu\Gamma_1/2\\ \mathbf t_y=(t_1\Gamma_3-t_2\Gamma_4)/2-iu\Gamma_2/2 \end{align} When a magnetic field in the $z$-direction is turned on, the Hamiltonian in Eq.~(\ref{eqn:Ham}) requires the Peierls substitution \cite{hofstadter1976energy}. Working in the Landau gauge $\mathbf A(x,y)=(-\phi y,0)$, where $\phi=B$ is the flux per unit cell, the substitution is given by \begin{align} c^\dagger_{(i+1,j)} c_{(i,j)} &\mapsto e^{-i\phi j}c^\dagger_{(i+1,j)} c_{(i,j)}\\ c^\dagger_{(i,j+1)} c_{(i,j)} &\mapsto c^\dagger_{(i,j+1)} c_{(i,j)} \end{align} The momentum space Hamiltonian at finite flux can be obtained by Fourier transforming Eq.~(\ref{eqn:Ham}) using the convention in Eqs.~(\ref{eqn:FT_qby1_cd}) and (\ref{eqn:FT_qby1_c}) when the flux is rational $\phi=2\pi p/q$. In Fig.~\ref{fig:Hof}, we numerically compute the Hofstadter spectrum for this model. \subsection{\label{sec:Model_analysis} Symmetry analysis} The model has a $2\pi$ periodicity in $\phi$, the flux per unit cell. At zero flux and $\pi$-flux the system is invariant under the symmetry group $p4/m'mm$, while at other fluxes the symmetry group is $p4$. Using the formalism developed in this manuscript, we apply TQC in a magnetic field to compute the symmetry indicators at $\pi$ flux. Indicators at other fluxes are discussed in Appendix~\ref{app:symmetry}. Ultimately, we will show that the symmetry indicator at $\pi$ flux corresponds to an absence of corner states, from which we deduce there must be a gap closing phase transition at a critical flux between zero and $\pi$. At $\pi$-flux, the magnetic unit cell is $2$-by-$2$ and the Brillouin zone is $[-\pi/2,\pi/2]\times[-\pi/2,\pi/2]$. According to Sec.~\ref{sec:mag_sym_3_momentum}, the four-fold rotation symmetry operators at $\Gamma=(0,0)$ and $M=(\pi/2,\pi/2)$ are \begin{align} D(C_{4z},\Gamma)&= \begin{pmatrix} 1&&&\\ &&1&\\ &1&&\\ &&&-1 \end{pmatrix}\otimes C_{4z}, \\ D(C_{4z},M)&= \begin{pmatrix} 1&&&\\ &&-1&\\ &1&&\\ &&&1 \end{pmatrix}\otimes C_{4z}, \end{align} {where the first matrix acts on the sublattice basis,} and the $C_{4z}$ matrix acts on the orbital basis as defined in Eq.~(\ref{eqn:C4z}). The magnetic translation symmetries at $\mathbf k$ are implemented by \begin{align} D(T(\hat{\mathbf x}),\mathbf k)&=e^{ik_x} \begin{pmatrix} &1&&\\ 1&&&\\ &&&1\\ &&1& \end{pmatrix} \otimes \tau_0\sigma_0 \\ D(T(\hat{\mathbf y}),\mathbf k)&=e^{ik_y} \begin{pmatrix} &&1&\\ &&&-1\\ 1&&&\\ &-1&& \end{pmatrix} \otimes \tau_0\sigma_0, \end{align} where $\tau_0$ and $\sigma_0$ are identity matrices. The irreps of the bands are listed in Table~\ref{tab:BRatpi}. Each band is four-fold degenerate because ${\cal TI}^2=-1$ and $\{T(\hat{\mathbf x}),T(\hat{\mathbf y})\}=0$, as explained in Appendix~\ref{app:irreps}. Using the computed irreps in Table~\ref{tab:BRatpi}, the symmetry indicators are listed in Table~\ref{tab:indatpi}. 
\begin{table} \centering \begin{tabular}{c|c|c|c|c} band index& $1$ & $2$ &$3$ &$4$ \\ \hline irrep at $\Gamma$ &$\Gamma_2\Gamma_3$ &$\Gamma_1\Gamma_4$ &$\Gamma_2\Gamma_3$ &$\Gamma_1\Gamma_4$ \\ irrep at $X$ &$X_1X_2$ &$X_1X_2$ &$X_1X_2$ &$X_1X_2$ \\ irrep at $M$ &$M_2M_4$ &$M_3M_3$ &$M_1M_1$ &$M_2M_4$ \end{tabular} \caption{Band representation of the four four-fold degenerate bands at $\pi$-flux. The ordering of band index is from lowest energy to highest energy, i.e., half-filling corresponds to filling bands 1 and 2. Each irrep $\Pi_i\Pi_j$ is four-dimensional and defined in Table~\ref{tab:p4TI}. } \label{tab:BRatpi} \end{table} \begin{table} \begin{tabular}{c|c|c|c|c} band index& $n=1,4$ & $n=2,3$ & $1\oplus2$ &$3\oplus4$ \\ \hline $\mathbb Z_2$ phase (Eq.~(\ref{eqn:index1p4TI})) &$1$ &$1$ &$0$ &$0$\\ \hline $e_{4a} \mod 8$ &&&$2$&$2$\\ $e_{4b} \mod 8$ &&&$0$&$0$\\ $e_{8c} \mod 4$ &&&$0$&$0$ \\ \hline $e_{1a'} \mod 4$ &$0$&$2$&$2$&$2$\\ $e_{1b'} \mod 4$ &$0$&$2$&$2$&$2$\\ $e_{2c'} \mod 2$ &$2$&$0$&$2$&$2$ \\ \end{tabular} \caption{Symmetry indicators at $\pi$-flux. {The second column corresponds to each four-fold degenerate band individually, while the last two columns correspond to sums of bands.} {The second row shows} the strong topological index in Eq.~(\ref{eqn:index1p4TI}) is $1\mod 2$ for each band, while for two occupied/empty bands the index is $0\mod 2$. Since symmetric and exponentially localized Wannier functions exist for the two occupied or two empty bands, {in the next three rows, $e_\mathbf{q}$ indicates the number of Wannier functions centered at the Wyckoff position $\mathbf{q}$}, computed using Eqs.~(\ref{eqn:4a}) -- (\ref{eqn:8c}) in Appendix~\ref{app:TQC}. If the sublattice translation symmetry within each magnetic unit cell is broken, the number of Wannier functions centered at the indicated Wyckoff positions in the lower symmetry group are shown in Fig.~\ref{fig:WCWC}. } \label{tab:indatpi} \end{table} \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth]{WCvsWC.png} \caption{(a) Wyckoff position $4a$ in $2$-by-$2$ unit cell with space group $G$ splits into (b) Wyckoff positions $1a'$, $1b'$, $2c'$ in the same unit cell with symmetry group $G/(\mathbb T/ \mathbb T_M)$, i.e. no sublattice translations. } \label{fig:WCWC} \end{figure} Below the gap at half filling, the two occupied bands together ($1\oplus2$ in Table~\ref{tab:indatpi}) are topologically trivial. They admit symmetric exponentially localized Wannier functions located at the $4a$ Wyckoff position. Since the atoms are also at $4a$ Wyckoff position, there is no corner charge. This analysis agrees with the Hofstadter spectrum shown in Fig.~\ref{fig:Hof}. Open boundary conditions break the lattice translation symmetries and, in particular, break the sublattice translation symmetries within the magnetic unit cell. Once the sublattice translation symmetries are broken, the little co-group (Eq.~(\ref{eq:deftildeGK})) is identical to the non-magnetic case. Thus, the crystalline symmetry protected phases with open boundary condition should be labelled by the usual symmetry indicators in zero flux but with respect to the enlarged magnetic unit cell; these indicators are computed in Ref.~\onlinecite{fang2021filling}. The results are shown in the lower half of Table~\ref{tab:indatpi}. In this reduced symmetry group, the magnetic $4a$ Wyckoff position splits into positions: $1a'=(0,0)$, $1b'=(1,1)$, $2c'=(1,0),(0,1)$, as shown in Fig.~\ref{fig:WCWC}. 
\par \subsection{Corner states} The spectrum with periodic boundary condition has a gap at half filling at $\phi=0$ and $\phi=\pi$. This gap closes at some $\phi^*$ between $0$ and $\pi$ as shown in Fig.~\ref{fig:Hof}. For the spectrum with open boundary condition, there are higher-order topological states when $-\phi^*<\phi<\phi^*$ that are corner localized. Due to the chiral symmetry (\ref{eq:chiral}), they are at zero energy in this model. The corner states can be understood by the non-zero quadrupole moment~\cite{benalcazar2017electric} or the non-zero filling anomaly~\cite{benalcazar2019quantization,schindler2019fractional,wieder2020strong,fang2021filling}. \par The corner states have four-fold degeneracy, consistent with the non-magnetic symmetry analysis in Refs.~\onlinecite{fang2021filling, fang2021classification}. The corner states with open boundary condition always come in a group of $d$ states. This degeneracy $d$ is determined by the point group of the finite system. Let $w$ be the general Wyckoff position of the point group, with multiplicity $n_w$. Denote the site symmetry group $G_{w}$. It has only one irrep, $\rho(G_{w})$. The degeneracy of corner states is~\cite{fang2021classification} \begin{equation} \label{eqn:deg} d=\text{dim}(\rho(G_{w}))\times n_w \end{equation} where dim$(\rho(G_{w}))=2$ for spinful systems with time-reversal symmetry that squares to $-1$, otherwise dim$(\rho(G_{w}))=1$. In the present case, at zero flux the system is invariant under the symmetry group $p4/m'mm$, while at any small flux the symmetry group reduces to $p4$. For $p4/m'mm$ and $p4$, the point groups of the finite size system are $4/m'mm$ and $4$ respectively. Each has a general Wyckoff position $w$ with $n_w=4$ and $\text{dim}(\rho(G_w))=1$; thus, Eq.~(\ref{eqn:deg}) yields a degeneracy of $4$~\cite{fang2021classification}. Since the degeneracy of corner states is the same for zero flux and finite flux, the corner states do not split when the magnetic flux is introduced (The chiral symmetry in this model pins the corner states to zero energy, but even in the absence of chiral symmetry, the nonzero filling anomaly will remain the same for $0\leq\phi<\phi^*$.) We have also shown from the symmetry indicators that at half filling and $\pi$ flux, the system is in the trivial phase, without corner states. Thus, the corner states must terminate at $\phi=\phi^*$ by either a bulk or edge gap closing. There is indeed a bulk gap closing at flux $\phi^*$ as Fig.~\ref{fig:Hof} shows. This is consistent with the Wannier centers of the occupied bands, which can be deduced from the symmetry indicators: the Wannier centers are at the $4b$ Wyckoff position at zero flux and the $4a$ Wyckoff position at $\pi$ flux (see Table~\ref{tab:indatpi} and Appendix~\ref{app:symmetry}). Symmetries prevent four Wannier functions from moving continuously between the $4a$ and $4b$ positions~\cite{fang2021filling,song2017d}. A discontinuous jump of the Wannier centers implies the bulk gap closes between zero and $\pi$ flux. \par In Appendix~\ref{app:symmetry} we compute the symmetry indicator at intermediate flux $\phi=2\pi/5<\phi^*$ and $\phi=4\pi/5>\phi^*$ to verify that symmetry indicators are consistent with the presence and absence of corner states between zero and $\pi$. In Appendix \ref{app:Wilson} we show that the presence and absence of corner states also agrees with the nested Wilson loop ~\cite{benalcazar2017electric}. 
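For readers who wish to reproduce the spectra discussed in this section, the construction is straightforward to implement numerically. The following Python sketch is an illustrative aid rather than part of the formal analysis: it builds the real-space Hamiltonian of Eq.~(\ref{eqn:Ham}) with the Peierls phase $e^{-i\phi j}$ on the $x$-hoppings and diagonalizes it with open boundaries. A small lattice and dense diagonalization are used for brevity, whereas Fig.~\ref{fig:Hof} uses $L_x=200$, $L_y=10$; for small flux, four near-zero-energy eigenvalues appear, corresponding to the corner states.

\begin{verbatim}
import numpy as np

# Pauli matrices and the Gamma matrices (orbital space tau (x) sigma)
s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0])
G1, G2 = np.kron(sy, sy), np.kron(sy, sx)   # tau_y sigma_y, tau_y sigma_x
G3, G4 = np.kron(sz, s0), np.kron(sx, s0)   # tau_z, tau_x

lam, gam = 1.0, 0.5                          # parameters used in the text
t1 = t2 = lam / np.sqrt(2.0)
u, vm = lam, np.sqrt(2.0) * gam
tx = (t1 * G3 + t2 * G4) / 2 - 1j * u * G1 / 2
ty = (t1 * G3 - t2 * G4) / 2 - 1j * u * G2 / 2

def spectrum_obc(phi, Lx=20, Ly=20):
    """Eigenvalues of Eq. (Ham) with open boundaries and Peierls phases."""
    H = np.zeros((4 * Lx * Ly, 4 * Lx * Ly), complex)
    idx = lambda i, j: 4 * (j * Lx + i)      # orbital block of site (i, j)
    for j in range(Ly):
        for i in range(Lx):
            a = idx(i, j)
            H[a:a+4, a:a+4] += vm * G3
            if i + 1 < Lx:                    # x-hopping carries the flux phase
                b, hop = idx(i + 1, j), np.exp(-1j * phi * j) * tx
                H[b:b+4, a:a+4] += hop
                H[a:a+4, b:b+4] += hop.conj().T
            if j + 1 < Ly:                    # y-hopping is unchanged
                b = idx(i, j + 1)
                H[b:b+4, a:a+4] += ty
                H[a:a+4, b:b+4] += ty.conj().T
    return np.linalg.eigvalsh(H)

# Sweep the flux to trace out the butterfly; plotting is left to the reader.
fluxes = np.linspace(0.0, 2.0 * np.pi, 81)
spectra = [spectrum_obc(phi) for phi in fluxes]
\end{verbatim}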
\par \section{\label{sec:conclusion} Conclusion} In conclusion, we derived a general framework to apply TQC and the theory of symmetry indicators to crystalline systems at rational flux per unit cell. Applying our results to some simple examples at $\pi$ flux revealed new symmetry indicators that did not appear at zero flux. Finally, the symmetry indicators enable us to study a quadrupole insulator at finite field, which reveals a gap-closing topological-to-trivial phase transition as a function of magnetic field. Observing this phase transition could be particularly promising in moir\'e systems, where higher flux is attainable for reasonable magnetic fields. While preparing our work, we became aware of a related study \cite{herzog2022hofstadter}, which gives criteria for when such a bulk gap closing at finite flux can be predicted from the band structure at zero flux. The two bulk gap closings between zero and $2\pi$ flux of our model in Sec.~\ref{sec:Model} are indicated by the real space invariant in Ref.~\onlinecite{herzog2022hofstadter}. \par We note that the Zeeman term is neglected in this manuscript. When the Zeeman term is present, the periodicity in flux is broken; thus there is no magnetic time-reversal symmetry, nor any mirror symmetry that flips the magnetic flux. At large magnetic fields, where the Zeeman term dominates, the two-dimensional system must be in the trivial atomic limit, where the Wannier centers are located at the atomic positions. Our work is also restricted to a spatially constant magnetic field. It would be interesting to extend our results to a spatially varying periodic magnetic field that maintains a commensurate flux per unit cell. This more general theory might be relevant to magnetically ordered crystals. \par As a final note, we draw a connection between our results and the theory of phase space quantization, where one seeks a symmetric and exponentially localized Wannier basis that can continuously reduce to points in the classical phase space by setting the Planck constant $h\rightarrow 0$~\cite{von2018mathematical}. However, such a basis can never be found due to the Balian-Low theorem, which forbids the existence of an exponentially localized, translationally symmetric basis for a single particle~\cite{benedetto1994differentiation}. The magnetic Wannier functions in two dimensions share a translation group structure similar to that of the one-dimensional quantum phase space, and the non-vanishing Chern number of any single magnetic band likewise forbids Wannierization~\cite{zak1997balian}, as we explain in Appendix~\ref{app:Wannier}. This raises an interesting open question: since in two-dimensional magnetic systems it is possible to find a Wannier basis for a group of bands, we conjecture that the continuous quantization of phase space may be realized by constructing Wannier functions for groups of particles. \section{Acknowledgements} We acknowledge useful conversations with Andrei Bernevig, Aaron Dunbrack, Sayed Ali Akbar Ghorashi, Jonah Herzog-Arbeitman, and Oskar Vafek. We thank Jonah, Andrei, and their collaborators for sharing their unpublished manuscript. Our manuscript is based upon work supported by the National Science Foundation under Grant No. DMR-1942447. The work was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. J.C. also acknowledges the support of the Flatiron Institute, a division of the Simons Foundation.
1,108,101,563,041
arxiv
\section{Introduction} The constituent quark model has been very successful in explaining the composition of hadrons over the past few decades. In this model, the observed meson spectrum is described as bound $q\bar{q}$ states grouped into SU(n) flavor multiplets. The nonets of pseudo-scalar, vector and tensor mesons have been well identified. Nevertheless, the identification of the scalar-meson nonet is still ambiguous. Distinguishing scalar mesons from non-resonant background is rather difficult due to their broad widths and non-distinctive angular distributions. There are copious candidates for the $J^{PC} = 0^{++}$ nonets~\cite{PDG}. The case of the isospin-zero states, e.g.~$f_0(500)$, $f_0(980)$, $f_0(1370)$, $f_0(1500)$, and $f_0(1710)$, is the most complicated from both experimental and theoretical points of view. Among them, the $f_0(980)$ meson, as a possible tetraquark candidate~\cite{PhysRevD.27.588, PhysRevD.41.2236, PhysRevD.96.033002}, is particularly interesting and can be studied via the hadronic decays $D_{s}^{+}\to\pi^{+}\pi^{0}\pi^{0}$, $D_{s}^{+} \to \pi^{+}\pi^{+}\pi^{-}$ and $D_{s}^{+} \to K^{+}K^{-}\pi^{+}$. Charge conjugation is implied throughout this paper. The published branching fractions (BFs) of $D^+_s\to f_{0(2)}\pi^+$ measured in $D_{s}^{+} \to \pi^{+}\pi^{+}\pi^{-}$ decays~\cite{PDG, E687:1997jvh, E791:2000lzz} show large discrepancies with those measured in $D_{s}^{+} \to K^{+}K^{-}\pi^{+}$ decays. In those final states, the $f_{0(2)}$ contributions may suffer from contamination by $a_{0}(980)\to K^+K^-$ or $\rho \to \pi^+ \pi^-$, whereas the $D_{s}^{+} \to \pi^{+}\pi^{0}\pi^{0}$ decay offers a cleaner environment because these contributions are absent. Furthermore, hadronic $D_{s}^{+}$ decays can be used to probe the interplay of short-distance weak-decay matrix elements and long-distance QCD interactions, and the measured BFs provide valuable information on the amplitudes and phases involved in the decay processes~\cite{Li2021iwf, BCKa0, PRD79-034016, PRD81-074021, PRD84-074019}. The CLEO Collaboration reported a measurement of the absolute BF $\mathcal{B}(D_{s}^{+} \to \pi^{+}\pi^{0}\pi^{0}) = (0.65\pm 0.13)\%$~\cite{CLEO-BF}, using 600~pb$^{-1}$ of $e^+e^-$ collision data recorded at a center-of-mass energy~($\sqrt{s}$) of 4.17~GeV. In this analysis, using 6.32~$\rm fb^{-1}$ of data collected with the BESIII detector at center-of-mass energies ranging from $\sqrt{s}=4.178$~GeV to $\sqrt{s}=4.226$~GeV, we perform the first amplitude analysis of $D^+_s\to \pi^+\pi^0\pi^0$ and a more precise measurement of its absolute BF. The amplitude analysis allows the determination of $\mathcal{B}(D_{s}^{+} \to f_{0}(980)\pi^{+})$, $\mathcal{B}(D_{s}^{+} \to f_{0}(1370)\pi^{+})$, and $\mathcal{B}(D_{s}^{+} \to f_{2}(1270)\pi^{+})$. \section{Detector and data sets} \label{sec:detector_dataset} The BESIII detector~\cite{Ablikim:2009aa} records symmetric $e^+e^-$ collisions provided by the BEPCII storage ring in the range from $\sqrt{s}=2.00$~GeV to $\sqrt{s}=4.95$~GeV~\cite{Yu:IPAC2016-TUYA01, Ablikim:2019hff}. The cylindrical core of the BESIII detector covers 93\% of the full solid angle and consists of a helium-based multilayer drift chamber~(MDC), a plastic scintillator time-of-flight system~(TOF), and a CsI(Tl) electromagnetic calorimeter~(EMC), which are all enclosed in a superconducting solenoidal magnet providing a magnetic field of 1.0~T. The solenoid is supported by an octagonal flux-return yoke with resistive plate counter muon identification modules interleaved with steel.
The charged-particle momentum resolution at $1~{\rm GeV}/c$ is $0.5\%$, and the d$E$/d$x$ resolution is $6\%$ for electrons from Bhabha scattering. The EMC measures photon energies with a resolution of $2.5\%$ ($5\%$) at $1$~GeV in the barrel (end cap) region. The time resolution in the TOF barrel region is 68~ps, while that in the end cap region is 110~ps. The end cap TOF system was upgraded in 2015 using multi-gap resistive plate chamber technology, providing a time resolution of 60~ps~\cite{etof1, etof2, etof3}. The data samples used in this analysis are listed in Table~\ref{energe}~\cite{LumE42301,LumE42302}. Since the cross section of $D_{s}^{*\pm}D_{s}^{\mp}$ production in $e^{+}e^{-}$ annihilation is about a factor of twenty larger than that of $D_{s}^{+}D_{s}^{-}$~\cite{DsStrDs}, and the $D_{s}^{*+}$ meson decays to $\gamma D_{s}^{+}$ with a dominant BF of $(93.5\pm0.7)$\%~\cite{PDG}, the signal events discussed in this paper are selected from the process $e^+e^-\to D_{s}^{*\pm}D_{s}^{\mp} \to \gamma D_{s}^{+}D_{s}^{-}$. \begin{table}[htb] \renewcommand\arraystretch{1.25} \centering \caption{The integrated luminosities ($\mathcal{L}_{\rm int}$) and the requirements on $M_{\rm rec}$ for various collision energies. The definition of $M_{\rm rec}$ is given in Eq.~(\ref{eq:mrec}). The first and the second uncertainties are statistical and systematic, respectively.} \begin{tabular}{ccc} \hline $\sqrt{s}$ (GeV) & $\mathcal{L}_{\rm int}$ (pb$^{-1}$) & $M_{\rm rec}$ (GeV/$c^2$)\\ \hline 4.178 &3189.0$\pm$0.2$\pm$31.9&[2.050, 2.180] \\ 4.189 &526.7$\pm$0.1$\pm$2.2&[2.048, 2.190] \\ 4.199 &526.0$\pm$0.1$\pm$2.1&[2.046, 2.200] \\ 4.209 &517.1$\pm$0.1$\pm$1.8&[2.044, 2.210] \\ 4.219 &514.6$\pm$0.1$\pm$1.8&[2.042, 2.220] \\ 4.226 &1047.3$\pm$0.1$\pm10.2$&[2.040, 2.220] \\ \hline \end{tabular} \label{energe} \end{table} Simulated data samples produced with a {\sc geant4}-based~\cite{GEANT4} Monte Carlo (MC) package, which includes the geometric description of the BESIII detector and the detector response, are used to determine detection efficiencies and to estimate backgrounds. The simulation models the beam energy spread and initial state radiation (ISR) in the $e^+e^-$ annihilations with the generator {\sc kkmc}~\cite{KKMC1, KKMC2}. The inclusive MC sample includes the production of open charm processes, the ISR production of vector charmonium(-like) states, and the continuum processes incorporated in {\sc kkmc}~\cite{KKMC1, KKMC2}. The known decay modes are modelled with {\sc evtgen}~\cite{EVTGEN1, EVTGEN2} using BFs taken from the Particle Data Group~\cite{PDG}, and the remaining unknown charmonium decays are modelled with {\sc lundcharm}~\cite{LUNDCHARM1, LUNDCHARM2}. Final state radiation~(FSR) from charged final state particles is incorporated using {\sc photos}~\cite{PHOTOS}. \section{Event selection} \label{ST-selection} The data samples were collected just above the $D_s^{*\pm}D_s^{\mp}$ threshold, which allows to extract relatively pure samples for amplitude analysis and measurements of absolute BFs of the hadronic $D^+_s$ meson decays with a tag method. The tag method has single-tag~(ST) and double-tag~(DT) candidates. The ST candidates are those $D_{s}^{\pm}$ mesons without further requirements on the remaining tracks and EMC showers. The DT candidates are identified by fully reconstructing the $D_s^+D_s^-$ mesons, where one of the $D_s^{\pm}$ mesons decays into the signal mode $D_{s}^{+} \to \pi^{+}\pi^{0}\pi^{0}$ and the other to a tag mode. 
The $D_s^\pm$ mesons are reconstructed through the final state particles, i.e.~$\pi^\pm$, $K^\pm$, $\eta$, $\eta^{\prime}$, $K_{S}^{0}$ and $\pi^{0}$, whose selection criteria are discussed below. For charged tracks not originating from $K_S^0$ decays, the distance of closest approach to the interaction point is required to be less than 10~cm along the beam direction and less than 1~cm in the plane perpendicular to the beam. Particle identification~(PID) for charged tracks combines measurements of the specific ionization energy losses in the MDC~(d$E$/d$x$) and the flight time in the TOF to form a likelihood $\mathcal{L}(h)~(h=K,\pi)$ for the hypothesis of being a hadron $h$. A charged hadron is identified as a kaon if $\mathcal{L}(K)$ is larger than $\mathcal{L}(\pi)$; otherwise it is identified as a pion. The $K_{S}^0$ candidates are reconstructed from two oppositely charged tracks satisfying $|\!\cos\theta|< 0.93$ and having a distance of closest approach to the interaction point along the beam direction of less than 20~cm. The two charged tracks coming from the $K^0_S$ are assigned as $\pi^+\pi^-$ without imposing further PID criteria. They are constrained to originate from a common vertex and are required to have an invariant mass within $|M_{\pi^{+}\pi^{-}} - m_{K_{S}^{0}}|<$ 12~MeV$/c^{2}$, where $m_{K_{S}^{0}}$ is the $K^0_{S}$ mass taken from the PDG~\cite{PDG}. Photon candidates are identified using showers in the EMC. The deposited energy of each shower must be more than 25~MeV in the barrel region~($|\!\cos \theta|< 0.80$) and more than 50~MeV in the end cap region~($0.86 <|\!\cos \theta|< 0.92$). The angle between the position of each shower in the EMC and any charged track must be greater than 10 degrees to exclude showers originating from charged tracks. The difference between the EMC time and the event start time is required to be within [0, 700]\,ns to suppress electronic noise and showers unrelated to the event. The $\pi^0$ $(\eta)$ candidates are reconstructed through $\pi^0\to \gamma\gamma$ ($\eta \to \gamma\gamma$) decays, with at least one photon in the barrel. The invariant masses of the photon pairs for $\pi^{0}$ and $\eta$ candidates must be in the ranges $[0.115, 0.150]$~GeV/$c^{2}$ and $[0.490, 0.580]$~GeV/$c^{2}$, respectively, corresponding to about three times the detector resolution. A kinematic fit that constrains the $\gamma\gamma$ invariant mass to the $\pi^{0}$ or $\eta$ nominal mass~\cite{PDG} is performed to improve the mass resolution. The $\chi^2$ of the kinematic fit is required to be less than 30. The $\eta^{\prime}$ candidates are formed from the $\pi^{+}\pi^{-}\eta$ combinations with an invariant mass in the range $[0.946, 0.970]$~GeV/$c^{2}$. Seven tag modes are used to reconstruct the tag $D_{s}^{-}$ candidates, and the tag mass~($M_{\rm tag}$) is required to fall within the mass window listed in Table~\ref{tab:tag-cut}. The recoiling mass of the tag $D_{s}^{-}$ candidate, \begin{eqnarray} \begin{aligned} M_{\rm rec} = \left({\left(\sqrt{s}-\sqrt{|\vec{p}_{D_{s}}|^{2}+m_{D_{s}}^{2}}\right)^{2}-|\vec{p}_{D_{s}}|^{2}}\right)^{1/2},\label{eq:mrec} \end{aligned}\end{eqnarray} is calculated in the $e^+e^-$ center-of-mass system, where $\vec{p}_{D_{s}}$ is the momentum of the $D_{s}^{-}$ candidate in the $e^+e^-$ center-of-mass frame and $m_{D_{s}}$ is the known $D_{s}^{-}$ mass~\cite{PDG}. The value of $M_{\rm rec}$ is required to be within the region listed in Table~\ref{energe}.
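As a small numerical illustration of Eq.~(\ref{eq:mrec}), the recoil mass can be evaluated directly from the tag-candidate momentum. The following Python snippet is only a sketch; the $D_s$ mass value inserted below is the PDG number and is an assumption of this illustration rather than part of the event selection.

\begin{verbatim}
import numpy as np

def m_rec(sqrt_s, p_ds, m_ds=1.96835):
    """Recoil mass of Eq. (mrec); all quantities in GeV (c = 1).

    sqrt_s -- e+e- centre-of-mass energy
    p_ds   -- magnitude of the tag Ds momentum in the centre-of-mass frame
    m_ds   -- known Ds mass (PDG value, assumed here)
    """
    e_ds = np.sqrt(p_ds**2 + m_ds**2)
    return np.sqrt((sqrt_s - e_ds)**2 - p_ds**2)

# A Ds produced directly in e+e- -> Ds*Ds at 4.178 GeV recoils against the Ds*,
# so M_rec peaks near the Ds* mass of about 2.112 GeV:
print(round(m_rec(4.178, 0.45), 3))   # ~2.111
\end{verbatim}

The momentum of 0.45~GeV used above is only a representative value; in data the requirement is the $M_{\rm rec}$ window of Table~\ref{energe}.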
\begin{table}[htbp] \renewcommand\arraystretch{1.25} \centering \caption{Requirements on $M_{\rm tag}$ for various tag modes. }\label{tab:tag-cut} \begin{tabular}{lc} \hline Tag mode & Mass window (GeV/$c^{2}$) \\ \hline $D_{s}^{-} \to K_{S}^{0}K^{-}$ & [1.948, 1.991] \\ $D_{s}^{-} \to K^{+}K^{-}\pi^{-}$ & [1.950, 1.986] \\ $D_{s}^{-} \to K_{S}^{0}K^{+}\pi^{0}$ & [1.946, 1.987] \\ $D_{s}^{-} \to K_{S}^{0}K^{-}\pi^{-}\pi^{+}$ & [1.958, 1.980] \\ $D_{s}^{-} \to K_{S}^{0}K^{+}\pi^{-}\pi^{-}$ & [1.953, 1.983] \\ $D_{s}^{-} \to \pi^{-}\eta^{\prime}$ & [1.940, 1.996] \\ $D_{s}^{-} \to K^{-}\pi^{+}\pi^{-}$ & [1.953, 1.986] \\ \hline \end{tabular} \end{table} \section{Amplitude analysis} \label{Amplitude-Analysis} \subsection{Further selections} \label{AASelection} The following selection criteria are applied in addition, in order to obtain data samples with high purity for the amplitude analysis. The selection criteria discussed in this section are not used in the BF measurement. An eight-constraint kinematic fit is performed to select the photon from the $D_{s}^{*\pm}$ decay and the best DT candidate, assuming that the $D_{s}^{-}$ candidate decays to one of the tag modes and the $D_{s}^{+}$ decays to the signal mode, under two hypotheses: the signal $D_s^{+}$ comes from a $D_s^{*+}$ or the tag $D_s^{-}$ comes from a $D_s^{*-}$. In this kinematic fit, the total four-momentum is constrained to the initial four-momentum of the $e^+e^-$ system, and the invariant masses of the $(\gamma\gamma)_{\pi^{0}}$, $(\pi^{+}\pi^{-})_{K_S^0}$, tag $D_{s}^{-}$, and $D_{s}^{*+(-)}$ candidates are constrained to the corresponding known masses~\cite{PDG}. The combination with the minimum $\chi^2$ is chosen. After the selection, an additional constraint of the signal $\pi^+ \pi^0 \pi^0$ invariant mass to the known $D_s^+$ mass is added, and the updated four-momenta of the final-state particles from the kinematic fit are used for the amplitude analysis in order to ensure that all candidates fall within the phase-space boundary. The energy of the transition photon from $D_s^{*+}\to \gamma D_s^{+}$ is required to be smaller than 0.18~GeV. The recoiling mass against this photon and the signal $D_s^{+}$ is required to fall in the range $[1.952, 1.995]$~GeV/$c^2$. The $D_s^+\to \pi^+\pi^0\eta$ decay contributes to the background when $\pi^0\eta$ is misreconstructed as $\pi^0\pi^0$. This background is reduced via an ``$\eta$'' veto to reject events which simultaneously satisfy $|M_{\gamma_1 \gamma_3}-M_{\eta}|< 10$~MeV/$c^2$ and $|M_{\gamma_2 \gamma_4}-M_{\pi^0}|< 20$~MeV/$c^2$, where $M_{\gamma_1 \gamma_3}$ and $M_{\gamma_2 \gamma_4}$ are the invariant masses of any combinations of the photons used to reconstruct the two $\pi^0$'s in the signal decay. There is also background originating from $D^0\to K^-\pi^+\pi^0_1$ versus $\bar{D}^0\to K^+\pi^-\pi^0_2$ decays, where the $\pi^0$ from the $D^0$ is denoted as $\pi^0_1$ and that from the $\bar{D}^0$ as $\pi^0_2$. It fakes $D_s^+ \to \pi^+\pi^0_1\pi^0_2$ versus $D_s^- \to K^+K^-\pi^-$ ($D_s^- \to \pi^-\pi^0_1\pi^0_2$ versus $D_s^+ \to K^+K^-\pi^+$) decays by exchanging $K^-$ and $\pi^0_2$ ($K^+$ and $\pi^0_1$). This background is excluded by rejecting events which simultaneously satisfy $|M_{K^-\pi^+\pi^0_1(\pi^0_2)}-m_{D^{0}}|< 40$~MeV/$c^2$ and $|M_{K^+\pi^-\pi^0_2(\pi^0_1)}-m_{D^{0}}|< 40$~MeV/$c^2$, where $m_{D^{0}}$ is the known $D^0$ mass~\cite{PDG}.
A $K^0_S\rightarrow \pi^0\pi^0$ mass veto, $M_{\pi^0\pi^0}\notin (0.458, 0.520)$~GeV/$c^2$, is also applied to the signal $D_{s}^+$ to remove the peaking background $D_s^{+}\to K_{S}^{0}\pi^+$. Figure~\ref{fig:fit_Ds} shows the fits to the invariant-mass distributions of the $D_s^{+}$ candidates reconstructed in the signal mode, $M_{\rm sig}$, for the two data samples. The signal is described by an MC-simulated shape convolved with a Gaussian resolution function, and the background is described by a simulated shape based on inclusive MC samples. Finally, a mass window $[1.925, 1.985]$~GeV/$c^2$ is applied. There are 322 and 250 events retained for the amplitude analysis, with purities of $(78.9\pm2.3)\%$ and $(75.6\pm2.9)\%$, for the data samples at $\sqrt{s}=4.178$~GeV and 4.189-4.226~GeV, respectively. \begin{figure*}[!htbp] \centering \includegraphics[width=0.45\textwidth]{realfitother2_ds_4180.eps} \includegraphics[width=0.45\textwidth]{realfitother2_ds_1923.eps} \caption{ Fits to the $M_{\rm sig}$ distributions of the data samples at $\sqrt{s}=$ (a) 4.178~GeV and (b) 4.189-4.226~GeV. The black points with error bars are data. The blue solid lines are the fit results. The red dotted and the black dashed lines are the fitted signal and background components, respectively. The red arrows indicate the signal regions. } \label{fig:fit_Ds} \end{figure*} \subsection{Fit method} The intermediate-resonance composition is determined by an unbinned maximum-likelihood fit to the data. The likelihood function $\mathcal{L}$ is constructed with a signal-background combined probability density function~(PDF), which depends on the momenta of the three final state particles: \begin{eqnarray}\begin{aligned} \mathcal{L} = \prod_{i=1}^{2}\prod_{k=1}^{N_{D, i}}\left[w^{i}f_{S}(p_{j}^{k})+(1-w^{i})f_{B}(p_{j}^{k})\right]\,, \label{likelihood3} \end{aligned}\end{eqnarray} where $i$ and $j$ indicate the data sample groups and the final-state particles, respectively, $N_{D,i}$ is the number of candidate events in data sample group $i$, $f_{S}$~($f_{B}$) is the signal~(background) PDF, and $w^{i}$ is the signal purity. The signal PDF is written as \begin{eqnarray}\begin{aligned} f_{S}(p_{j}) = \frac{\epsilon(p_{j})\left|\mathcal{A}(p_{j})\right|^{2}R_{3}}{\int \epsilon(p_{j})\left|\mathcal{A}(p_{j})\right|^{2}R_{3}\,dp_{j}}\,, \label{signal-PDF} \end{aligned}\end{eqnarray} where $\epsilon(p_{j})$ is the detection efficiency modeled by a RooNDKeysPdf derived from a phase-space MC sample, $\mathcal{A}(p_{j})$ represents the total amplitude, and $R_{3}$ is the standard element of three-body phase space. The isobar formalism is used to model the total amplitude. The total amplitude is the coherent sum of the individual amplitudes of the intermediate processes, $\mathcal{A}=\sum_{n} \rho_{n}e^{i\phi_{n}}\mathcal{A}_{n}$, where the magnitude $\rho_{n}$ and the phase $\phi_{n}$ are free parameters to be determined from data. The amplitude of the $n^{\rm th}$ intermediate process~($\mathcal{A}_{n}$) is given by \begin{eqnarray} \begin{aligned} \mathcal{A}_{n} = P_{n}S_{n}F_{n}^{r}F_{n}^{D}\,, \label{base-amplitude} \end{aligned}\end{eqnarray} where $S_{n}$ is the spin factor (Sec.~\ref{sec:spinfactor}); $F_{n}^{r}$ and $F_{n}^{D}$ are the Blatt-Weisskopf barriers of the intermediate state and the $D_{s}^{+}$ meson, respectively (Sec.~\ref{sec:barrier}); and $P_{n}$ is the propagator of the intermediate resonance (Sec.~\ref{sec:propagator}). The two identical final-state $\pi^0$'s are symmetrized in the model.
The background PDF is given by \begin{eqnarray}\begin{aligned} f_{B}(p_{j}) = \frac{\epsilon(p_{j})B_{\epsilon}(p_{j})R_{3}}{\int \epsilon(p_{j})B_{\epsilon}(p_{j})R_{3}\,dp_{j}}\,,\label{bkg-PDF} \end{aligned}\end{eqnarray} where $B_{\epsilon}(p_{j})=B(p_{j})/\epsilon(p_{j})$ is the efficiency-corrected background shape. The background events in the signal region from the generic MC sample are used to derive the background shape $B(p_{j})$ with RooNDKeysPdf~\cite{Verkerke}. RooNDKeysPdf is a kernel estimation method~\cite{Cranmer} implemented in RooFit~\cite{Verkerke} which models the distribution of an input dataset as a superposition of Gaussian kernels. The $M_{\pi^+\pi^0}$ and $M_{\pi^0\pi^0}$ distributions of events outside the $M_{\rm sig}$ signal region between the data and the generic MC samples are compared to check validity of the background from the generic MC samples. The distributions of background events from the generic MC samples within and outside the $M_{\rm sig}$ signal region are also examined. They are found to be compatible within statistical uncertainties. Note that the $\epsilon(p_{j})$ term in Eq.~(\ref{bkg-PDF}) is explicitly written out as it is independent of the fitted variables and is dropped during the log-likelihood fit. The normalization integral terms in the signal and background PDF are handled by MC integration, \begin{eqnarray}\begin{aligned} \int \epsilon(p_{j}) X(p_{j}) R_{3}\,dp_{j} \approx \frac{1}{N_{\rm G}}\sum_{k}^{N_{\rm M}} \frac{ X(p_{j}^{k}) }{\left|\mathcal{M}^{g}(p_{j}^{k})\right|^{2}}\,, \label{MC-intergral} \end{aligned}\end{eqnarray} where $X(p_{j})$ is $|\mathcal{A}(p_{j})|^2$ or $B_{\epsilon}(p_{j})$, $k$ is the index of the $k^{\rm th}$ event, $N_{\rm G}$ is the number of the generated MC events and $N_{\rm M}$ is the number of the selected MC events. The $D_s^+$ meson in the MC samples used here decays to $\pi^+\pi^0\pi^0$ according to the PDF $\mathcal{M}^{g}(p_{j})$, while the $D_s^-$ meson decays into one of the tag modes. These MC samples are generated with different $\sqrt{s}$ according to the luminosities and cross sections, and satisfy all selection criteria as those of the data samples. At the beginning, a preliminary PDF is used, and then a recursive process is performed until the result converges. To account for any bias caused by differences in PID or tracking efficiencies between data and MC simulation, each signal MC event is weighted with a ratio, $\gamma_{\epsilon}(p)$, of the efficiency of data to that of MC simulation and the MC integration then becomes \begin{eqnarray}\begin{aligned} &\int \epsilon(p_{j}) X(p_{j}) R_{3}\,dp_{j} \approx &\frac{1}{N_{\rm G}} \sum_{k}^{N_{\rm M}} \frac{ X(p_{j}^{k}) \gamma_{\epsilon}(p_{j}^{k})}{\left|\mathcal{M}^{g}(p_{j}^{k})\right|^{2}}\,. \label{MC-intergral-corrected} \end{aligned}\end{eqnarray} \subsubsection{Spin factors}\label{sec:spinfactor} The spin-projection operators are defined as~\cite{covariant-tensors} \begin{eqnarray} \begin{aligned} P^{(1)}_{\mu\mu^{\prime}}(a) &= -g_{\mu\mu^{\prime}}+\frac{p_{a,\mu}p_{a,\mu^{\prime}}}{p_{a}^{2}}\,,\\ P^{(2)}_{\mu\nu\mu^{\prime}\nu^{\prime}}(a) &= \frac{1}{2}(P^{(1)}_{\mu\mu^{\prime}}(a)P^{(1)}_{\nu\nu^{\prime}}(a)+P^{(1)}_{\mu\nu^{\prime}}(a)P^{(1)}_{\nu\mu^{\prime}}(a))\\ &-\frac{1}{3}P^{(1)}_{\mu\nu}(a)P^{(1)}_{\mu^{\prime}\nu^{\prime}}(a)\,. \label{spin-projection-operators} \end{aligned} \end{eqnarray} The quantities $p_a$, $p_b$, and $p_c$ are the momenta of particles $a$, $b$, and $c$, respectively, and $r_a = p_b-p_c$. 
The covariant tensors are given by \begin{eqnarray} \begin{aligned} \tilde{t}^{(1)}_{\mu}(a) &= -P^{(1)}_{\mu\mu^{\prime}}(a)r^{\mu^{\prime}}_{a}\,,\\ \tilde{t}^{(2)}_{\mu\nu}(a) &= P^{(2)}_{\mu\nu\mu^{\prime}\nu^{\prime}}(a)r^{\mu{\prime}}_{a}r^{\nu^{\prime}}_{a}\,.\\ \label{covariant-tensors} \end{aligned} \end{eqnarray} The spin factors for $S$, $P$, and $D$ wave decays are \begin{eqnarray} \begin{aligned} S &= 1\,, &(S\ \text{wave}), &\\ S &= \tilde{T}^{(1)\mu}(D_{s}^{\pm})\tilde{t}^{(1)}_{\mu}(a)\,, &(P\ \text{wave}),\\ S &= \tilde{T}^{(2)\mu\nu}(D_{s}^{\pm})\tilde{t}^{(2)}_{\mu\nu}(a)\,, &(D\ \text{wave}), \label{spin-factor} \end{aligned} \end{eqnarray} where the $\tilde{T}^{(l)}$ factors have the same definition as $\tilde{t}^{(l)}$. The tensor describing the $D_{s}^{+}$ decay is denoted by $\tilde{T}$ and that of the $a$ decay is denoted by $\tilde{t}$. \subsubsection{Blatt-Weisskopf barrier factors}\label{sec:barrier} For the process $a \to bc$, the Blatt-Weisskopf barrier $F_L(p_j)$~\cite{PhysRevD.83.052001} is parameterized as a function of the angular momentum $L$ and the momentum $q$ of the final-state particle $b$ or $c$ in the rest system of $a$, \begin{eqnarray} \begin{aligned} F_{L=0}(q)&=1,\\ F_{L=1}(q)&=\sqrt{\frac{z_0^2+1}{z^2+1}},\\ F_{L=2}(q)&=\sqrt{\frac{z_0^4+3z_0^2+9}{z^4+3z^2+9}}\,, \end{aligned} \end{eqnarray} where $z=qR$ and $z_0=q_0R$. The effective radius of the barrier $R$ is fixed to 3.0~GeV$^{-1}$ for the intermediate resonances and 5.0~GeV$^{-1}$ for the $D_s^+$ meson. \subsubsection{Propagator}\label{sec:propagator} The intermediate resonances $f_2(1270)$ and $f_0(1370)$ are parameterized as relativistic Breit-Wigner functions, \begin{eqnarray}\begin{aligned} P = \frac{1}{(m_{0}^{2} - s_{a} ) - im_{0}\Gamma(s_{a})}\,,\; \Gamma(s_{a}) = \Gamma_{0}\left(\frac{q}{q_{0}}\right)^{2L+1}\left(\frac{m_{0}}{s_{a}}\right)\left(\frac{F_{L}(q)}{F_{L}(q_{0})}\right)^{2}\,, \label{RBW} \end{aligned}\end{eqnarray} where $s_{a}$ denotes the invariant-mass squared of the two final-state particles considered; $m_{0}$ and $\Gamma_{0}$ are the mass and the width of the intermediate resonance, respectively, and are fixed to the PDG values~\cite{PDG}. The $f_{0}(980)$ resonance is represented by the Flatt$\acute{\rm e}$ formula~\cite{flatte_f0}, \begin{equation} P_{f_0(980)}= \frac{1}{m_{f_{0}(980)}^{2} - s_{\pi^{0}\pi^{0}} - i(g_{1}\rho_{\pi\pi}(s_{\pi^{0}\pi^{0}}) + g_{2}\rho_{K\bar{K}}(s_{\pi^{0}\pi^{0}}))}, \label{Flatte} \end{equation} where $s_{\pi^{0}\pi^{0}}$ is the $\pi^0\pi^0$ invariant-mass squared and $g_{1,2}$ are coupling constants to the corresponding final states. The parameters are fixed to $g_{1}=0.165$~GeV$/c^2$, $g_{2}/g_{1}=4.21$ and $m_{f_{0}(980)}=955$~MeV/$c^{2}$, as reported in Ref.~\cite{flatte_f0}. The Lorentz invariant phase-space factors $\rho_{\pi\pi}(s)$ and $\rho_{K\bar{K}}(s)$ are given by \begin{eqnarray} \begin{aligned} \rho_{\pi\pi}&=\frac{2}{3}\sqrt{1-\frac{4m^{2}_{\pi^{\pm}}}{s}}+\frac{1}{3}\sqrt{1-\frac{4m^{2}_{\pi^{0}}}{s}}\,,\\ \rho_{K\bar{K}}&=\frac{1}{2}\sqrt{1-\frac{4m^{2}_{K^{\pm}}}{s}}+\frac{1}{2}\sqrt{1-\frac{4m^{2}_{K^{0}}}{s}}\,, \end{aligned} \end{eqnarray} where $m_{\pi^{\pm}}$, $m_{\pi^{0}}$, $m_{K^{\pm}}$, and $m_{K^{0}}$ are the known masses of $\pi^{\pm}$, $\pi^{0}$, $K^{\pm}$, and $K^{0}$, respectively~\cite{PDG}. The $f_{0}(500)$ resonance is also an amplitude candidate, and is described by a relativistic Breit-Wigner function or the Bugg lineshape~\cite{ref:bugg}. 
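To make the parameterizations above concrete, the following Python sketch implements the Blatt-Weisskopf factors, the relativistic Breit-Wigner of Eq.~(\ref{RBW}) and the Flatt$\acute{\rm e}$ form of Eq.~(\ref{Flatte}), with the coupling values quoted in the text. It is illustrative only: the particle masses are PDG numbers inserted for definiteness, the mass-dependent width is evaluated with $m_0/\sqrt{s_a}$, and the phase-space factors are continued below threshold with a complex square root, which are common conventions assumed here.

\begin{verbatim}
import numpy as np
csqrt = np.lib.scimath.sqrt          # complex square root (handles sub-threshold values)

M_PIC, M_PI0 = 0.13957, 0.13498      # charged/neutral pion masses in GeV (PDG, assumed)
M_KC,  M_K0  = 0.49368, 0.49761      # charged/neutral kaon masses in GeV (PDG, assumed)

def breakup_q(s, mb, mc):
    """Daughter momentum in the rest frame of a system with invariant mass squared s."""
    return csqrt((s - (mb + mc)**2) * (s - (mb - mc)**2)) / (2.0 * np.sqrt(s))

def blatt_weisskopf(q, q0, L, R):
    """Barrier factor F_L(q) with z = qR and z0 = q0R, as defined in the text."""
    z2, z02 = (q * R)**2, (q0 * R)**2
    if L == 0:
        return 1.0
    if L == 1:
        return np.sqrt((z02 + 1.0) / (z2 + 1.0))
    return np.sqrt((z02**2 + 3.0*z02 + 9.0) / (z2**2 + 3.0*z2 + 9.0))

def rbw(s, m0, gamma0, L, mb, mc, R=3.0):
    """Relativistic Breit-Wigner with a mass-dependent width, Eq. (RBW)."""
    q, q0 = breakup_q(s, mb, mc), breakup_q(m0**2, mb, mc)
    f = blatt_weisskopf(q, q0, L, R)
    gamma = gamma0 * (q / q0)**(2*L + 1) * (m0 / np.sqrt(s)) * f**2
    return 1.0 / (m0**2 - s - 1j * m0 * gamma)

def rho_pipi(s):
    return (2/3) * csqrt(1 - 4*M_PIC**2/s) + (1/3) * csqrt(1 - 4*M_PI0**2/s)

def rho_kk(s):
    return 0.5 * csqrt(1 - 4*M_KC**2/s) + 0.5 * csqrt(1 - 4*M_K0**2/s)

def flatte_f0980(s, m0=0.955, g1=0.165, g2_over_g1=4.21):
    """Flatte propagator for the f0(980), Eq. (Flatte), with the parameters quoted above."""
    g2 = g2_over_g1 * g1
    return 1.0 / (m0**2 - s - 1j * (g1 * rho_pipi(s) + g2 * rho_kk(s)))

# Example: |P|^2 peaks near 0.98 GeV in the pi0 pi0 invariant mass
m = np.linspace(0.3, 1.6, 5)
print(np.abs(flatte_f0980(m**2))**2)
\end{verbatim}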
\subsection{Fit results} The Dalitz plot of $M^{2}_{\pi^+\pi^0}$ versus~$M^{2}_{\pi^+\pi^0}$ for the data samples is shown in Fig.~\ref{dalitz}(a) and that for the signal MC samples generated based on the results of the amplitude analysis is shown in Fig.~\ref{dalitz}(b). In the fit, the magnitude and phase of the reference amplitude $D_{s}^{+} \to f_0(980)\pi^+$ are fixed to 1.0 and 0.0, respectively, while those of other amplitudes are left floating. The masses and widths of all resonances are fixed to the corresponding PDG averages~\cite{PDG}, and $w^{i}$ are fixed to the purities discussed in Sec.~\ref{AASelection}. The systematic uncertainties associated with these fixed parameters are considered by repeating the fit after variation of the fixed parameters according to their uncertainties. Besides the dominant amplitudes $D_{s}^{+} \to f_0(980)\pi^{+}$, $D_{s}^{+} \to f_0(1370)\pi^{+}$, and $D_{s}^{+} \to f_2(1270)\pi^{+}$, we have tested all possible intermediate resonances including $\rho(1450)$, $f_0(1500)$, $\rho(1700)$, $(\pi\pi)_{S}$, $(\pi\pi)_{P}$, $(\pi\pi)_{D}$ etc., where the subscript denotes a relative $S$ ($P$ or $D$) wave between final-state particles. We have also examined all possible combinations of these intermediate resonances to check their significances, correlations, and interferences. By requiring a significance larger than $3\sigma$, eventually, $D_{s}^{+} \to f_0(980)\pi^{+}$, $D_{s}^{+} \to f_0(1370)\pi^{+}$, $D_{s}^{+} \to f_2(1270)\pi^{+}$, $D_{s}^{+} \to \pi^{+}(\pi^0\pi^0)_{D}$, and $D_{s}^{+} \to (\pi^{+}\pi^0)_{D}\pi^0$ are chosen for the nominal set. Note that $D_{s}^{+} \to f_0(500)\pi^{+}$ is tested but it has a significance less than $2\sigma$. \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{dalitzfin_data.eps} \includegraphics[width=0.45\textwidth]{dalitz_MCdata.eps} \caption{The Dalitz plot of $M^{2}_{\pi^+\pi^0}$ versus~$M^{2}_{\pi^+\pi^0}$ for (a) the data sample and (b) the signal MC sample generated based on the results of the amplitude analysis at $\sqrt{s}= 4.178$-$4.226$~GeV, symmetrized for the indistinguishable $\pi^{0}$'s.} \label{dalitz} \end{figure} In the calculation of fit fractions (FFs) for individual amplitudes, the phase-space MC truth information is involved with neither detector acceptance nor resolution. The FF for the $n^{\rm th}$ amplitude is defined as \begin{eqnarray}\begin{aligned} {\rm FF}_{n} = \frac{\sum^{N_{\rm gen}} \left|c_{n}\mathcal{A}_{n}\right|^{2}}{\sum^{N_{\rm gen}} \left|\mathcal{A}\right|^{2}}\,, \label{Fit-Fraction-Definition} \end{aligned}\end{eqnarray} where $N_{\rm gen}$ is the number of the phase-space MC events at generator level. Interference between the $n^{\rm th}$ and the $n^{\prime{\rm th}}$ amplitudes (IN) is defined as (for $n<n^{\prime}$ only) \begin{eqnarray}\begin{aligned} {\rm IN}_{nn^{\prime}} = \frac{\sum^{N_{\rm gen}} 2\textrm{Re}[c_{n}c^{*}_{n^{\prime}}\mathcal{A}_{n}\mathcal{A}^{*}_{n^{\prime}}]}{\sum^{N_{\rm gen}} \left|\mathcal{A}\right|^{2}}\,. \label{interferenceFF-Definition} \end{aligned}\end{eqnarray} The statistical fluctuations of FFs are obtained by randomly sampling the fit variables according to their fitted values and covariance matrix. The distribution of each FF is fitted with a Gaussian function and the width of the Gaussian function is defined as the statistical uncertainty of the FF. The phases, FFs, and statistical significances for the amplitudes are listed in Table~\ref{fit-result}. 
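The statistical propagation described above can be summarized in a short resampling routine. The sketch below is schematic: ``fit_fraction'' stands for a hypothetical helper that evaluates Eq.~(\ref{Fit-Fraction-Definition}) on the generator-level phase-space MC sample for a given parameter vector, with the complex coefficients $c_{n}=\rho_{n}e^{i\phi_{n}}$ rebuilt from the sampled magnitudes and phases.

\begin{verbatim}
import numpy as np

def ff_statistical_uncertainty(best_fit, covariance, fit_fraction,
                               n_samples=1000, seed=0):
    """Propagate the fit covariance to a fit fraction by resampling.

    best_fit     -- fitted magnitudes rho_n and phases phi_n (1D array)
    covariance   -- covariance matrix returned by the likelihood fit
    fit_fraction -- hypothetical callable: parameter vector -> FF evaluated
                    on the generator-level phase-space MC sample
    """
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(best_fit, covariance, size=n_samples)
    ffs = np.array([fit_fraction(p) for p in samples])
    # For an approximately Gaussian FF distribution, the width of a fitted
    # Gaussian is essentially the sample standard deviation used here.
    return ffs.mean(), ffs.std(ddof=1)
\end{verbatim}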
The interferences between amplitudes are listed in Table~\ref{fit-interference}. The Dalitz plot projections are shown in Fig.~\ref{dalitz-projection}. The sum of the FFs is not unity due to interferences between amplitudes. Other tested amplitudes, but not included in the nominal fit, and their significances are listed in Table~\ref{tested}. \begin{table*}[htbp] \caption{The phases, FFs, and statistical significances for the amplitudes. The first and second uncertainties in the phases and FFs are statistical and systematic, respectively. The total FF is 111.4$\%$.} \label{fit-result} \begin{center} \begin{tabular}{lccc} \hline Amplitude & Phase $\phi_n$ (rad) & FF~(\%) &Significance~($\sigma$)\\ \hline $D_{s}^{+} \to f_0(980)\pi^{+}$ & 0.0(fixed) & $42.0 \pm 4.9 \pm 6.6$ &$>$10 \\ $D_{s}^{+} \to f_0(1370)\pi^{+}$ & $ -0.7 \pm 0.2 \pm 0.3$ & $25.8 \pm 4.4 \pm 4.8$ &$>$10\\ $D_{s}^{+} \to f_2(1270)\pi^{+}$ & $ -0.9 \pm 0.3 \pm 0.3$ & $15.0 \pm 4.6 \pm 5.2$ &5.0\\ $D_{s}^{+} \to \pi^{+}(\pi^0\pi^0)_{D}$ & $ -4.4 \pm 0.2 \pm 0.3$ & $19.1 \pm 5.2 \pm 5.2$ &6.3\\ $D_{s}^{+} \to (\pi^{+}\pi^0)_{D}\pi^0$ & $ -2.3 \pm 0.2 \pm 0.5$ & $\phantom{0}9.5 \pm 3.4 \pm 5.1$ &3.4\\ \hline \end{tabular} \end{center} \end{table*} \begin{table*}[htbp] \caption{Interference fraction (\%) between amplitudes where the uncertainties are statistical only.} \label{fit-interference} \begin{center} \begin{tabular}{l|cccc} \hline & $f_0(1370)\pi^{+}$ & $f_2(1270)\pi^{+}$ & $\pi^{+}(\pi^0\pi^0)_{D}$ & $(\pi^{+}\pi^0)_{D}\pi^0$\\ \hline $f_0(980)\pi^{+}$ & $-2.6\pm 7.3$ & $\phantom{0}0.0 \pm 0.1$ & $\phantom{0}-0.1 \pm 0.1$ & $\phantom{0}4.1 \pm 2.4$\\ $f_0(1370)\pi^{+}$ & & $-0.1 \pm 0.1$ & $\phantom{00}0.0 \pm 0.1$ & $\phantom{0}9.2 \pm 1.8$\\ $f_2(1270)\pi^{+}$ & & & $-15.7 \pm 4.4$ & $-3.3 \pm 1.7$\\ $\pi^{+}(\pi^0\pi^0)_{D}$ & & & & $-4.6 \pm 2.8$\\ \hline \end{tabular} \end{center} \end{table*} \begin{table*}[htbp] \caption{Significances of amplitudes tested, but not included in the nominal fit.} \label{tested} \begin{center} \begin{tabular}{lc} \hline Amplitude & Significance~($\sigma$)\\ \hline $D_{s}^{+} \to f_0(500)\pi^{+}$ & 1.5\\ $D_{s}^{+} \to f_0(1500)\pi^{+}$ & 2.1\\ $D_{s}^{+} \to \rho(1450)^+\pi^{0}$ & 2.4\\ $D_{s}^{+} \to \rho^+\pi^{0}$ & 2.0\\ $D_{s}^{+} \to (\pi^+\pi^0)_{P}\pi^0$ & 1.5\\ $D_{s}^{+} \to \pi^{+}(\pi^0\pi^0)_{S}$ & 1.3\\ \hline \end{tabular} \end{center} \end{table*} \begin{figure*}[!htbp] \centering \includegraphics[width=0.45\textwidth]{mpi0pi0_9c_GX.eps} \includegraphics[width=0.45\textwidth]{mpi0pi_9c_GX.eps} \caption{ The projections of (a) $M_{\pi^0\pi^0}$ and (b) $M_{\pi^+\pi^0}$ from the nominal fit. Two $M_{\pi^+\pi^0}$ are calculated and added due to the indistinguishable $\pi^0$'s. The data samples are represented by points with error bars, the fit results by the solid blue lines, and the background estimated from inclusive MC samples by the black dashed lines. Colored dashed lines show the components of the fit model. Due to interference effects, the total is not necessarily equal to the sum of the components. } \label{dalitz-projection} \end{figure*} \subsection{Systematic uncertainties for the amplitude analysis} \label{sec:PWA-Sys} The systematic uncertainties for the amplitude analysis are summarized in Table~\ref{systematic-uncertainties}, with their definitions described below: \begin{itemize} \item[\lowercase\expandafter{\romannumeral1}] Resonant parameters. 
The masses and the widths of $f_0(1370)$ and $f_2(1270)$ are varied by their corresponding uncertainties~\cite{PDG}. The mass and coupling constants of the $f_0(980)$ Flatt$\acute{\rm e}$ formula are varied according to Ref.~\cite{flatte_f0}. The changes of the phases $\phi$ and FFs are assigned as the associated systematic uncertainties. \item[\lowercase\expandafter{\romannumeral2}] $R$ values. The associated systematic uncertainties are estimated by repeating the fit procedure by varying the radii of the intermediate state and $D_s^{+}$ mesons within 1~GeV$^{-1}$. \item[\lowercase\expandafter{\romannumeral3}] Background estimation. First, the purities of signals for the two sample groups, i.e.~$w$ in Eq.~(\ref{likelihood3}) are varied by their corresponding statistical uncertainties to study uncertainties associated with backgrounds. The differences caused by the variation are assigned as the uncertainties. Second, an alternative MC-simulated shape is used to examine the uncertainty arising from the background shape modeling. Alternative background shapes are extracted with the relative fractions of the dominant backgrounds from $e^+e^-\to q\bar{q}$ and non-$D_{s}^{*\pm}D_{s}^{\mp}$ open-charm processes varied by the statistical uncertainties of their cross sections. \item[\lowercase\expandafter{\romannumeral4}] Resonances with significances less than $3\sigma$. The corresponding uncertainties are taken to be the differences of the phases $\phi$ and FFs with and without the intermediate resonances with statistical significances less than $3\sigma$. \item[\lowercase\expandafter{\romannumeral5}] Experimental effects. To estimate the systematic uncertainty related to the difference between MC simulation and data associated with the PID and tracking efficiencies, $\gamma_{\epsilon}$ in Eq.~(\ref{MC-intergral-corrected}), the amplitude fit is performed varying the PID and tracking efficiencies according to their uncertainties. The differences from the nominal results are so tiny that this source of systematic uncertainty is negligible. \end{itemize} \begin{table*}[tp] \renewcommand\arraystretch{1.25} \centering \caption{Systematic uncertainties on the phase $\phi$ and FF for each amplitude in unit of the corresponding statistical uncertainty. The sources are: (\lowercase\expandafter{\romannumeral1}) fixed parameters in the amplitudes, (\lowercase\expandafter{\romannumeral2}) the $R$ values, (\lowercase\expandafter{\romannumeral3}) background, (\lowercase\expandafter{\romannumeral4}) resonances with significances less than $3\sigma$. 
} \label{systematic-uncertainties} \begin{tabular}{lcccccc} \hline \multirow{2}{*}{Amplitude}&\multicolumn{6}{c}{Source}\cr & & \lowercase\expandafter{\romannumeral1} &\lowercase\expandafter{\romannumeral2} &\lowercase\expandafter{\romannumeral3} &\lowercase\expandafter{\romannumeral4} & Total \\ \hline $D_{s}^{+} \to f_0(980)\pi^{+}$ & FF & 1.07 & 0.29 & 0.31 & 0.70 & 1.35 \\ \hline \multirow{2}{*}{$D_{s}^{+} \to f_0(1370)\pi^{+}$} & $\phi$ & 1.32 & 0.30 & 0.34 & 0.42 & 1.46 \\ & FF & 1.06 & 0.20 & 0.09 & 0.08 & 1.09 \\ \hline \multirow{2}{*}{$D_{s}^{+} \to f_2(1270)\pi^{+}$} & $\phi$ & 0.56 & 0.09 & 0.23 & 0.85 & 1.05 \\ & FF & 0.93 & 0.53 & 0.36 & 0.16 & 1.14 \\ \hline \multirow{2}{*}{$D_{s}^{+} \to \pi^{+}(\pi^0\pi^0)_{D}$} & $\phi$ & 0.56 & 0.42 & 0.24 & 1.53 & 1.70 \\ & FF & 0.88 & 0.46 & 0.10 & 0.11 & 1.00 \\ \hline \multirow{2}{*}{$D_{s}^{+} \to (\pi^{+}\pi^0)_{D}\pi^0$} & $\phi$ & 1.36 & 0.09 & 0.17 & 2.15 & 2.55 \\ & FF & 0.72 & 0.14 & 0.20 & 1.30 & 1.50 \\ \hline \end{tabular} \end{table*} \section{Branching fraction measurement} \label{BFSelection} In addition to the selection criteria for final-state particles described in Sec.~\ref{ST-selection}, the $\pi^+$ is required to have momentum greater than $100$~MeV/$c$ to remove soft $\pi^+$ from $D^{*+}$ decays. If there are multiple ST candidates, the one with $M_{\rm rec}$ closest to the known $D_{s}^{*+}$ mass~\cite{PDG} is chosen. The data sets are organized into three sample groups, 4.178~GeV, 4.189-4.219~GeV, and 4.226~GeV, each acquired during the same year under consistent running conditions. The yields for the various tag modes are obtained by fitting the corresponding $M_{\rm tag}$ distributions and are listed in Table~\ref{ST-eff}. As an example, the fits to the $M_{\rm tag}$ spectra of the ST candidates in the data sample at $\sqrt s=4.178$~GeV are shown in Fig.~\ref{fit:Mass-data-Ds_4180}. In the fits, the signal is modeled by an MC-simulated shape convolved with a Gaussian function to take into account the data-MC resolution difference. The background is described by a second-order Chebyshev function. MC studies show that there is no significant peaking background in any tag mode, except for $D^{-} \to K_{S}^{0} \pi^-$ and $D_{s}^{-} \to \eta\pi^+\pi^-\pi^-$ faking the $D_{s}^{-} \to K_{S}^{0} K^-$ and $D_{s}^{-} \to \pi^-\eta^{\prime}$ tags, respectively. Therefore, the MC-simulated shapes of these two peaking background sources are added to the background models. \begin{table*}[htbp] \caption{The ST yields for the samples collected at $\sqrt{s} =$ (I) 4.178~GeV, (II) 4.189-4.219~GeV, and (III) 4.226~GeV.
The uncertainties are statistical.}\label{ST-eff} \begin{center} \begin{tabular}{lccc} \hline Tag mode & (I) $N_{\rm ST}$ & (II) $N_{\rm ST}$ & (III) $N_{\rm ST}$ \\ \hline $D_{s}^{-}\to K_{S}^{0}K^{-}$ & $\phantom{0}31941\pm312\phantom{0}$ & $\phantom{0}18559\pm261$ & $\phantom{0}6582\pm160$ \\ $D_{s}^{-}\to K^{+}K^{-}\pi^{-}$ & $137240\pm614\phantom{0}$ & $\phantom{0}81286\pm505$ & $28439\pm327$ \\ $D_{s}^{-}\to K_{S}^{0}K^{-}\pi^{0}$ & $\phantom{0}11385\pm529\phantom{0}$ & $\phantom{00}6832\pm457$ & $\phantom{0}2227\pm220$ \\ $D_{s}^{-}\to K_{S}^{0}K^{-}\pi^{-}\pi^{+}$ & $\phantom{00}8093\pm326\phantom{0}$ & $\phantom{00}5269\pm282$ & $\phantom{0}1662\pm217$ \\ $D_{s}^{-}\to K_{S}^{0}K^{+}\pi^{-}\pi^{-}$ & $\phantom{0}15719\pm289\phantom{0}$ & $\phantom{00}8948\pm231$ & $\phantom{0}3263\pm172$ \\ $D_{s}^{-}\to \pi^{-}\eta^{\prime}$ & $\phantom{00}7759\pm141\phantom{0}$ & $\phantom{00}4428\pm111$ & $\phantom{0}1648\pm74\phantom{0}$ \\ $D_{s}^{-}\to K^{-}\pi^{+}\pi^{-}$ & $\phantom{0}17423\pm666\phantom{0}$ & $\phantom{0}10175\pm448$ & $\phantom{0}4984\pm458$ \\ \hline Total & $229560\pm1186$ & $135497\pm937$ & $48805\pm688$ \\ \hline \end{tabular} \end{center} \end{table*} \begin{figure*}[htp] \begin{center} \includegraphics[width=0.4\textwidth]{mc400.eps} \includegraphics[width=0.4\textwidth]{mc401.eps}\\ \includegraphics[width=0.4\textwidth]{mc402.eps} \includegraphics[width=0.4\textwidth]{mc405.eps}\\ \includegraphics[width=0.4\textwidth]{mc406.eps} \includegraphics[width=0.4\textwidth]{mc460.eps}\\ \includegraphics[width=0.4\textwidth]{mc502.eps} \caption{Fits to the $M_{\rm tag}$ distributions of the ST candidates from the data sample at $\sqrt{s}=4.178$~GeV. The points with error bars are data, the blue solid lines are the total fits, and the black dashed lines are background. The pairs of red arrows denote the signal regions. } \label{fit:Mass-data-Ds_4180} \end{center} \end{figure*} Once a tag mode is identified, the signal decay $D_{s}^{+} \to \pi^{+}\pi^{0}\pi^{0}$ is searched for at the recoiling side. In the case of multiple candidates, the DT candidate with the average mass, $(M_{\rm sig}+M_{\rm tag})/2$, closest to the $D_{s}^{+}$ nominal mass is retained. A $K^0_S\rightarrow \pi^0\pi^0$ mass veto, $M_{\pi^0\pi^0}\notin (0.458, 0.520)$~GeV/$c^2$, is applied on the signal $D_{s}^+$ to remove the peaking background $D_s^{+}\to K_{S}^{0}\pi^+$. To measure the BF, we start from the following equations for each tag mode: \begin{eqnarray}\begin{aligned} N_{\text{tag}}^{\text{ST}} = 2N_{D_{s}^{*+}D_{s}^{-}}\mathcal{B}_{\text{tag}}\epsilon_{\text{tag}}^{\text{ST}}\,, \label{eq-ST} \end{aligned}\end{eqnarray} \begin{equation} N_{\text{tag,sig}}^{\text{DT}}=2N_{D_{s}^{*+}D_{s}^{-}}\mathcal{B}_{\text{tag}}\mathcal{B}_{\text{sig}}\epsilon_{\text{tag,sig}}^{\text{DT}}\,, \label{eq-DT} \end{equation} where $N_{D_{s}^{*+}D_{s}^{-}}$ is the total number of $D_{s}^{*\pm}D_{s}^{\mp}$ pairs produced from the $e^{+}e^{-}$ collisions; $N_{\text{tag}}^{\text{ST}}$ is the ST yield for the tag mode; $N_{\text{tag,sig}}^{\text{DT}}$ is the DT yield; $\mathcal{B}_{\text{tag}}$ and $\mathcal{B}_{\text{sig}}$ are the BFs of the tag and signal modes, respectively; $\epsilon_{\text{tag}}^{\text{ST}}$ is the ST efficiency to reconstruct the tag mode; and $\epsilon_{\text{tag,sig}}^{\text{DT}}$ is the DT efficiency to reconstruct both the tag and the signal decay modes. 
When more than one tag mode and sample group are combined, \begin{eqnarray} \begin{aligned} N_{\text{total}}^{\text{DT}}=\Sigma_{\alpha, i}N_{\alpha,\text{sig},i}^{\text{DT}} = \mathcal{B}_{\text{sig}} \Sigma_{\alpha, i}2N_{D_{s}^{*+}D_{s}^{-}}\mathcal{B}_{\alpha}\epsilon_{\alpha,\text{sig}, i}^{\text{DT}}\,, \label{eq-DTtotal} \end{aligned} \end{eqnarray} where $\alpha$ runs over the tag modes in the $i^{\rm th}$ sample group. By isolating $\mathcal{B}_{\text{sig}}$, we find \begin{eqnarray}\begin{aligned} \mathcal{B}_{\text{sig}} = \frac{N_{\text{total}}^{\text{DT}}}{ \mathcal{B}^2_{\pi^0\to\gamma\gamma}\sum_{\alpha, i} N_{\alpha, i}^{\text{ST}}\epsilon^{\text{DT}}_{\alpha,\text{sig},i}/\epsilon_{\alpha,i}^{\text{ST}}}\,, \end{aligned}\end{eqnarray} where $N_{\alpha,i}^{\text{ST}}$ and $\epsilon_{\alpha,i}^{\text{ST}}$ are obtained from the data and inclusive MC samples, respectively. The DT efficiency $\epsilon_{\alpha,\text{sig},i}^{\text{DT}}$ is determined with signal MC samples in which the $D_{s}^{+} \to \pi^{+}\pi^{0}\pi^{0}$ events are generated according to the results of the amplitude analysis. The BF for $\pi^0\to\gamma\gamma$ is introduced to account for the fact that the signal is reconstructed through this decay. The DT yield $N_{\text{total}}^{\text{DT}}$ is found to be $587\pm44$ from the fit to the $M_{\rm sig}$ distribution of the selected $D^+_s\to \pi^+\pi^0\pi^0$ candidates. The fit result is shown in Fig.~\ref{DT-fit}, where the signal shape is described by an MC-simulated shape convolved with a Gaussian function to take into account the data-MC resolution difference. The background is described by a simulated shape from the inclusive MC sample. A small peaking background originating from $D^0\to K^-\pi^+\pi^0$ is considered in the inclusive MC sample. Taking the difference in $\pi^{0}$ reconstruction efficiencies for each signal mode between data and MC simulation into account by multiplying the efficiencies by a factor of 99.5\% for each $\pi^{0}$, we determine the BF of $D_{s}^{+} \to \pi^{+}\pi^{0}\pi^{0}$ to be $(0.50\pm 0.04_{\rm stat}\pm 0.02_{\rm syst})\%$. \begin{figure}[!htbp] \centering \includegraphics[width=0.6\textwidth]{DT-A.eps} \caption{Fit to the $M_{\rm sig}$ distribution of the DT candidates from the data samples at $\sqrt{s}= 4.178$-$4.226$~GeV. The data are represented by points with error bars, the total fit by the blue solid line, and the fitted signal and the fitted background by the red dotted and the black dashed lines, respectively. } \label{DT-fit} \end{figure} The relative systematic uncertainty for the total yield of the ST $D_s^-$ mesons is assigned to be 0.4\% by examining the changes in the fitted yields when varying the signal shape and the background shape, and by taking into account the background fluctuation in the fit. The systematic uncertainty due to the signal shape is studied by repeating the fit without the convolved Gaussian. The MC-simulated background shape is altered by varying the relative fractions of the dominant backgrounds from $e^+e^-\to q\bar{q}$ or non-$D_{s}^{*+}D_{s}^{-}$ open-charm processes by the statistical uncertainties of their cross sections. The largest change is taken as the corresponding systematic uncertainty. The $\pi^{+}$ tracking (PID) efficiency is studied with the processes $e^+e^-\to K^+K^-\pi^+\pi^-$ ($e^+e^-\to K^+K^-\pi^+\pi^-(\pi^0)$ and $\pi^+\pi^-\pi^+\pi^-(\pi^0)$). The systematic uncertainty due to the tracking (PID) efficiency is estimated to be 1\% (1\%).
The systematic uncertainty of the $\pi^{0}$ reconstruction efficiency is investigated by using a control sample of the process $e^+e^-\to K^+K^-\pi^+\pi^-\pi^0$. The selection criteria listed in Sec.~\ref{ST-selection} are used to reconstruct the two kaons and the two pions. The recoiling mass distribution of $K^+K^-\pi^+\pi^-$ is fitted to obtain the total number of $\pi^0$'s, and the $\pi^0$ selection is applied to determine the number of reconstructed $\pi^0$'s. The average ratio between the data and MC efficiencies of $\pi^0$ reconstruction, weighted by the corresponding momentum spectra, is estimated to be $0.995 \pm 0.008$. After correcting the simulated efficiencies to data by this ratio, a residual uncertainty of 0.8\% is assigned as the systematic uncertainty arising from the reconstruction of each $\pi^0$. The uncertainty due to the limited MC statistics is obtained as $\sqrt{\sum_{i} \left(f_{i}\frac{\delta_{\epsilon_{i}}}{\epsilon_{i}}\right)^{2}}$, where $f_{i}$ is the tag yield fraction, and $\epsilon_{i}$ and $\delta_{\epsilon_{i}}$ are the signal efficiency and the corresponding uncertainty of tag mode $i$, respectively. The uncertainty from the amplitude analysis model is estimated by varying the model parameters based on their error matrix. The distribution of 600 efficiencies resulting from this variation is fitted by a Gaussian function, and the fitted width divided by the mean value is taken as the relative uncertainty. All of the systematic uncertainties are summarized in Table~\ref{BF-Sys}. Adding them in quadrature gives a total systematic uncertainty in the BF measurement of 4.0\%. \begin{table}[htbp] \caption{Systematic uncertainties relative to the central value in the BF measurement.} \label{BF-Sys} \begin{center} \begin{tabular}{lccc} \hline Source & Systematic uncertainty (\%)\\ \hline $D_{s}^{-}$ yield & 0.4 \\ Signal shape & 1.6 \\ Background shape & 2.8 \\ $\pi^{+}$ PID efficiency & 1.0 \\ $\pi^{+}$ tracking efficiency & 1.0 \\ $\pi^0$ reconstruction & 1.6 \\ MC statistics & 0.2 \\ Signal MC model & 0.9 \\ \hline Total & 4.0 \\ \hline \end{tabular} \end{center} \end{table} \section{Summary} An amplitude analysis of the decay $D_{s}^{+} \to \pi^{+}\pi^{0}\pi^{0}$ has been performed for the first time. Amplitudes with significances larger than $3\sigma$ were selected. The results for the FFs and phases of the different intermediate processes are listed in Table~\ref{fit-result}. With the detection efficiency calculated according to the intermediate processes found in the amplitude analysis, the BF for the decay $D^+_s\to \pi^+\pi^0\pi^0$ is measured to be $(0.50\pm 0.04_{\rm stat}\pm 0.02_{\rm syst})\%$. The precision is improved by about a factor of two compared to the PDG value~\cite{PDG} due to the large dataset collected with the BESIII detector. The BFs for the intermediate processes are calculated with $\mathcal{B}_{i} = {\rm FF}_{i} \times \mathcal{B}(D_{s}^{+} \to \pi^{+}\pi^{0}\pi^{0})$ and listed in Table~\ref{inter-processes}. The BF of $D^+_s\to f_0(980)\pi^+$ with $f_0(980)\to \pi^0\pi^0$ is measured for the first time. In addition, no significant signal of $f_0(500)$ is observed. Assuming the BF ratio between $f_{0(2)}\to\pi^+\pi^-$ and $f_{0(2)}\to\pi^0\pi^0$ to be 2, based on isospin symmetry, our results are more consistent with those from $D^+_s\to \pi^+\pi^+\pi^-$ decays than with those from $D_{s}^{+} \to K^{+}K^{-}\pi^{+}$ decays. \begin{table}[htbp] \caption{The BFs for intermediate processes.
The first and the second uncertainties are statistical and systematic, respectively.}\label{inter-processes} \begin{center} \begin{tabular}{lc} \hline Intermediate process & BF ($10^{-3}$)\\ \hline $D_{s}^{+} \to f_0(980)\pi^{+}$, $f_0(980)\to\pi^0\pi^0$ & $2.1\pm 0.3\pm 0.3 $ \\ $D_{s}^{+} \to f_0(1370)\pi^{+}$, $f_0(1370)\to\pi^0\pi^0$ & $1.3\pm 0.2\pm 0.2 $ \\ $D_{s}^{+} \to f_2(1270)\pi^{+}$, $f_2(1270)\to\pi^0\pi^0$ & $0.8\pm 0.3\pm 0.3 $ \\ $D_{s}^{+} \to \pi^{+}(\pi^{0}\pi^{0})_{D}$ & $1.0\pm 0.3\pm 0.3 $ \\ $D_{s}^{+} \to (\pi^{+}\pi^{0})_{D}\pi^{0}$ & $0.5\pm 0.2\pm 0.3 $ \\ \hline \end{tabular} \end{center} \end{table} \acknowledgments The BESIII collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key R\&D Program of China under Contracts Nos. 2020YFA0406400, 2020YFA0406300; National Natural Science Foundation of China (NSFC) under Contracts Nos. 11625523, 11635010, 11735014, 11822506, 11835012, 11875054, 11935015, 11935016, 11935018, 11961141012, 12022510, 12025502, 12035009, 12035013, 12061131003; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contracts Nos. U2032104, U1732263, U1832207; CAS Key Research Program of Frontier Sciences under Contract No. QYZDJ-SSW-SLH040; 100 Talents Program of CAS; INPAC and Shanghai Key Laboratory for Particle Physics and Cosmology; ERC under Contract No. 758462; European Union Horizon 2020 research and innovation programme under Contract No. Marie Sklodowska-Curie grant agreement No 894790; German Research Foundation DFG under Contracts Nos. 443159800, Collaborative Research Center CRC 1044, FOR 2359, FOR 2359, GRK 214; Istituto Nazionale di Fisica Nucleare, Italy; Ministry of Development of Turkey under Contract No. DPT2006K-120470; National Science and Technology fund; Olle Engkvist Foundation under Contract No. 200-0605; STFC (United Kingdom); The Knut and Alice Wallenberg Foundation (Sweden) under Contract No. 2016.0157; The Royal Society, UK under Contracts Nos. DH140054, DH160214; The Swedish Research Council; U. S. Department of Energy under Contracts Nos. DE-FG02-05ER41374, DE-SC-0012069. \bibliographystyle{JHEP}
\section*{Introduction} We shall assume that all rings are associative with identity and all modules are unitary. Let $R$ be a ring and $M$ be a left $R$-module. For any left ideal $I$ of $R$, set $M[I] = \{m\in M~|~Im = 0\}$. It is a subgroup of $M$. As in \cite{Mat59} and \cite{Mat85}, $M$ is said to be {\it semi-compact} if every finitely solvable set of congruences $x \equiv x_{\alpha}$ (mod $M[I_{\alpha}]$) ($\alpha \in \Lambda$, $x_{\alpha}\in M$ and $I_{\alpha}$ is a left ideal of $R$ for each $\alpha \in \Lambda$) has a simultaneous solution in $M$. Also, we say that $M$ is {\it $\Sigma$-semi-compact} if all direct sums of copies of $M$ are semi-compact. Semi-compactness was introduced by Matlis in \cite{Mat59} (and also in\cite{Mat85}) for modules over commutative rings. In the present article, we shall study semi-compact and $\Sigma$-semi-compact mo\-dules over arbitrary rings (not necessarily commutative). In Section~\ref{S:semicomp}, we consider some basic properties of semi-compact modules, their relationship to other concepts such as injectivity and pure-injectivity, and some rings characterized by semi-compactness. Several characterizations of semi-compact modules are given in Propo\-sition \ref{P:first character} and Theorem \ref{T:character}. For instance, it is shown that a left $R$-module $M$ is semi-compact if and only if every finite solvable system of equations of the form $r_jx=a_j\in M, r_j\in R$ has a global solution in $M$, if and only if, every pure extension of $M$ is cyclically pure. So, it follows that the semi-compact modules are exactly the singly pure-injective modules introduced by Azumaya in \cite{Azu87}. It is easy to see that every semi-compact left $R$-module is injective if and only if $R$ is a von-Neumann regular ring (see Theorem \ref{T:Von}). In Section~\ref{S:flat}, we introduce and study $\Sigma$-semi-compact modules. It is shown that a left $R$-module $M$ is $\Sigma$-semi-compact if and only if $M^{(\mathbb{N})}$ is semi-compact if and only if $M$ satisfies the descending chain condition (d.c.c.) on the subgroups of $M$ which are annihilators of (finitely generated) left ideals of $R$ (see Proposition \ref{P:faith proposition} and Theorem \ref{T:sigma semicompact}). It is also shown that every pure projective left $R$-module is semi-compact if and only if $R$ is left Noetherian (see Theorem \ref{T:Noeth-pproj}). In Theorem \ref{T:flat is semicompact}, we show that a ring $R$ is left $\Sigma$-semi-compact if and only if every flat left $R$-module is semi-compact. If $R$ is a commutative ring, we prove that each flat $R$-module is semi-compact if the quotient ring $Q$ is Noetherian (see Proposition \ref{P:QNoe}). In Section~\ref{S:fproj}, for a ring $R$ we compare the following conditions: \begin{itemize} \item each flat left $R$-module is semi-compact; \item each flat left $R$-module is finitely projective; \item each flat left $R$-module is singly projective. \end{itemize} There are many examples of rings for which these conditions are equivalent. For instance, if $R$ is a commutative ring and $Q$ its quotient ring, and if $R$ is either arithmetical or reduced with $\mathrm{Min}\ R$ compact, it is proven that these conditions are satisfied if and only if $Q$ is pure-semisimple\footnote{each commutative reduced pure-semisimple ring is semisimple}. But, if $R$ is a self left FP-injective ring, the two first conditions are not equivalent: the first holds if and only if $R$ is quasi-Frobenius, and the second if and only if $R$ is left perfect. 
We give an example of a self FP-injective commutative perfect ring which is not quasi-Frobenius. It is also shown that each ($\aleph_0,1$)-flat left $R$-module is singly projective if $R$ is $\Sigma$-semi-compact as left module (see Proposition \ref{P:(N_0,1)-flat}) and the converse holds if $R^{\mathbb{N}}$ is a ($\aleph_0,1$)-flat left $R$-module. In Section~\ref{S:pure-injectivity} we investigate the rings $R$ for which each semi-compact left $R$-module is pure-injective. We get only some partial results. However, if $R$ is a reduced commutative ring, then $R$ satisfies this condition if and only if $R$ is von Neumann regular. \bigskip \section{Semi-compact modules}\label{S:semicomp} \begin{definition} \textnormal{ Let $M$ be a left $R$-module. For any subset (subgroup) $X$ of $M$, $^{\bot}X = \{r\in R~|~rX = 0\}$ is a left ideal of $R$. The set of such left ideals will be denoted by $\mathcal{A}_l(R, M)$. For any subset (left ideal) $X$ of $R$, $M[X] = \{m\in M~|~Xm = 0\}$ is a subgroup of $M$. The set of such subgroups will be denoted by $\mathcal{A}_r(R, M)$. Since $X\mapsto {}^{\bot}X$ is an order antiisomorphism between $\mathcal{A}_r(R, M)$ and $\mathcal{A}_l(R, M)$, then one satisfies the a.c.c. if and only if the other satisfies the d.c.c. In the special case $M=R$, the elements of $\mathcal{A}_r(R, R)$ (respectively $\mathcal{A}_l(R, R)$) are called right (respectively left) annulets of $R$: in this case we denote by $X^{\bot}$ the right annulet of $X$. As in \cite{Mat59} and \cite{Mat85} $M$ is said to be {\it semi-compact} if every finitely solvable set of congruences $x \equiv x_{\alpha}$ (mod $M[I_{\alpha}]$) (where $\alpha\in \Lambda$, $x_{\alpha}\in M$ and $I_{\alpha}$ is a left ideal of $R$ for each $\alpha \in \Lambda$) has a simultaneous solution in $M$.} \end{definition} \begin{lemma} \label{L:M[I][J]} Let $M$ be a left $R$-module and $I$ and $J$ left ideals of $R$. Then \[(M[I])[J]=(M[J])[I]=M[I+J]=M[I]\cap M[J].\] \end{lemma} \begin{proposition} \label{P:submodule} Let $R$ be a ring and $M$ a semi-compact left $R$-module. Then $M[I]$ and $M/M[I]$ are semi-compact for each two-sided ideal $I$ of $R$. \end{proposition} \begin{proof} Let $x \equiv x_{\alpha}$ (mod $(M[I])[J_{\alpha}]=M[I+J_{\alpha}]$) be a finitely solvable set of congruences (where $\alpha\in \Lambda$, $x_{\alpha}\in M[I]$ and $J_{\alpha}$ is a left ideal of $R$ for each $\alpha \in \Lambda$). Since $M$ is semi-compact, there exists $m$ in $M$ such that $m-x_{\alpha}\in M[I+J_{\alpha}]\subseteq M[I]$ for each $\alpha \in \Lambda$. Since $x_{\alpha}\in M[I]$, so is $m$. Therefore, $M[I]$ is semi-compact. Now, let $x \equiv x_{\alpha}+M[I]$ (mod $(M/M[I])[J_{\alpha}]$) be a finitely solvable set of congruences where $\alpha\in \Lambda$, $x_{\alpha}\in M$ and $J_{\alpha}$ is an ideal of $R$ for each $\alpha \in \Lambda$. Obviously, $x \equiv x_{\alpha}$ (mod $M[IJ_{\alpha}]$) is a finitely solvable set of congruences and so has a global solution $m\in M$. Thus $J_{\alpha}(m-x_{\alpha})\subseteq M[I]$. Therefore, $(m-x_{\alpha})+M[I]\in (M/M[I])[J_{\alpha}]$ for each $\alpha \in \Lambda$. \end{proof} Let $M$ be a left $R$-module. Then the system of equations $\sum_{i\in I} r_{ij}x_i =m_j\in M$, $j\in J$ is called {\it compatible} if, for any choice of $s_j\in R$, $j\in J$, where only a finite number of $s_j$ are nonzero, the relations $\sum_{j\in J}s_jr_{ij}=0$ for each $i\in I$ imply that $\sum_{j\in J}s_jm_{j}=0$ (see \cite[Chapter 18]{Dau94} for more details about systems of equations). 
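For instance, let $R=M=\mathbb{Z}$ and consider the one-variable system $2x=a$, $4x=b$ with $a,b\in\mathbb{Z}$. Any relation $2s_1+4s_2=0$ forces $s_1=-2s_2$, so the compatibility condition $s_1a+s_2b=s_2(b-2a)=0$ for all such choices amounts to $b=2a$. In particular, for $a=1$ and $b=2$ the system is compatible although it has no solution in $\mathbb{Z}$; compatibility is thus weaker than solvability.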
Throughout this paper, all systems of equations are assumed to be compatible. The following proposition is crucial in our investigation. \begin{proposition} \label{P:first character} Let $M$ be a left $R$-module. Then the following statements are equivalent: \begin{enumerate} \item $M$ is semi-compact; \item Every finitely solvable set of congruences $x\equiv x_{\alpha}$ (mod $M[I_{\alpha}]$) (where $\alpha\in \Lambda,~x_{\alpha}\in M$ and $I_{\alpha}$ is a finitely generated left ideal of $R$ for each $\alpha \in \Lambda$) has a simultaneous solution in $M$; \item Every finitely solvable system of equations of the form $r_jx=m_j\in M, j\in J, r_j\in R$, is solvable in $M$. \item For each left ideal $I$ of $R$, every homomorphism $h:I\rightarrow M$, for which the restriction to any finitely generated left subideal $I_0$ of $I$ can be extended to $R$, extends itself to a homomorphism $R\rightarrow M$. \end{enumerate} \end{proposition} \begin{proof} (1)$\Rightarrow $(2) is clear. (2)$\Rightarrow $(3). Let $r_jx=m_j\in M, j\in J, r_j\in R$, be a finitely solvable system of equations. For each finite subset $J_{\alpha}$ of $J$, let $I_{\alpha}$ be the left ideal generated by $\{r_j\mid j\in J_{\alpha}\}$. Let $x_{\alpha}\in M$ be a solution of the finite system of equations $r_jx=m_j\in M, j\in J_{\alpha}$. Obviously, the set of congruences $x \equiv x_{\alpha}$ (mod $M[I_{\alpha}]$) is finitely solvable and by hypothesis has a global solution $z\in M$. Therefore, $z$ is a solution of the above system of equations. (3)$\Rightarrow$(1). Let $x\equiv x_{\alpha}$ (mod $M[I_{\alpha}])$ be a finitely solvable system of congruences in $M$, where $x_{\alpha}\in M$ and $I_{\alpha}$ is a left ideal of $R$ for each $\alpha\in \Lambda$. Consider the following system of equations. $$r_{\alpha j}x=m_{\alpha j},~ r_{\alpha j}\in I_{\alpha}, ~m_{\alpha j}=r_{\alpha j}x_{\alpha}\in M. \qquad (*)$$ Since $x\equiv x_{\alpha}$ (mod $M[I_{\alpha}])$ is finitely solvable, so is the system of equations $(*)$, and by hypothesis it has a global solution $z\in M$. So, $z$ is a solution of $x\equiv x_{\alpha}$ (mod $M[I_{\alpha}])$. (3)$\Leftrightarrow $(4) is clear. \end{proof} Let $\mathcal{S}$ be a class of finitely presented left $R$-modules. We say that an exact sequence of left $R$-modules is $\mathcal{S}$-{\it pure} if each module of $\mathcal{S}$ is projective relatively to it. Each left $R$-module which is injective relatively to each $\mathcal{S}$-pure exact sequence is said to be $\mathcal{S}$-{\it pure injective}. When $\mathcal{S}$ contains all finitely presented left $R$-modules we say respectively "pure exact sequence" and "pure-injective module". And, when $\mathcal{S}$ contains all finitely presented cyclic left $R$-modules we say respectively "($\aleph_0,1$)-pure exact sequence" and "($\aleph_0,1$)-pure-injective module". For any class $\mathcal{S}$ of finitely presented left $R$-modules, each pure-exact sequence of left modules is $\mathcal{S}$-pure exact, whence each $\mathcal{S}$-pure injective left $R$-module is pure injective. So, we prove the following results by using Proposition~\ref{P:first character}(3) and \cite[Theorem 1.35(d)]{Fac98} for the first result. \begin{example} \label{E:exe} Let $R$ be a ring. Then: \begin{itemize} \item[(i)] For each class $\mathcal{S}$ of finitely presented left $R$-modules, every $\mathcal{S}$-pure injective module is semi-compact. \item[(ii)] If $R$ is a domain, then every torsion-free (and so every flat) left $R$-module is semi-compact. 
\end{itemize} \end{example} \begin{remark} \label{R:f-injective} \textnormal{Let $R$ be a ring. Recall that a left $R$-module $M$ is {\it semi-injective} (or f-injective) if for each finitely generated left ideal $I$, every $R$-homomorphism $f:I\rightarrow M$ can be extended to an $R$-homomorphism from $R$ into $M$ (see \cite{Mat85}). By Proposition \ref{P:first character}, a left $R$-module $M$ is injective if and only if $M$ is semi-injective and semi-compact. Since every direct sum of semi-injective left $R$-modules is semi-injective, Bass's theorem implies that every direct sum of semi-compact left $R$-modules is semi-compact if and only if $R$ is left Noetherian.} \end{remark} \medskip Recall that a submodule $A$ of $B$ is {\it pure} (resp. ($\aleph_{0},1$)-{\it pure}) if and only if the exact sequence $ 0\longrightarrow A \hookrightarrow B \stackrel{g}\longrightarrow B/A\longrightarrow 0$ is pure (resp. ($\aleph_{0},1$)-pure). In this case we say that $B$ is a {\it pure extension } (resp. ($\aleph_{0},1$)-{\it pure extension}) of $A$. It is known that $A$ is a ($\aleph_{0},1$)-pure submodule of $B$ if and only if for each $n\in\mathbb{N}$, any system of equations $r_j x= a_j\in A$ ($r_j\in R$, $1\leq j\leq n$) is solvable in $A$ whenever it is solvable in $B$ (see \cite{War69}, \cite{Fac98}, \cite{Cou11}). As in \cite{Azu87}, a left $R$-module $B$ is called a {\it single extension} of $M$ if the factor module $B/M$ is cyclic, i.e., there is a cyclic submodule $A$ of $B$ such that $B=A+M$. We say that $B$ is a {\it single pure extension} (resp. {\it single} ($\aleph_{0},1$)-{\it pure extension}) of $M$, if $B$ is a pure (resp. ($\aleph_{0},1$)-pure) extension and a single extension of $M$. \begin{lemma}\label{L:upper} Let $M$ be a left $R$-module and $$r_jx=m_j, j\in J, r_j\in R, m_j\in M \qquad (*)$$ be a finitely solvable system of equations in $M$. Then there exists a singly pure extension $B$ of $M$ such that the system of equations $(*)$ has a solution $b\in B$. \end{lemma} \begin{proof} Set $B=(M\oplus F)/S$, where $F$ is the free module with the basis $\{x\}$ and $S$ is the submodule of $M\oplus F$ generated by $\{(m_j, - r_jx) ~|~j\in J\}$. Obviously, $$S=\{(\sum_{k=1}^{n}z_km_k,-\sum_{k=1}^{n} z_{k}r_k x)~|~ n\in\mathbb{N},~ z_k\in R\}.$$ Clearly, the map $\alpha: M\rightarrow B$ defined by $\alpha(m)=(m,0)+S$ ($m\in M$) is an $R$-homomorphism. We claim that $\alpha$ is a monomorphism. To see this, let $\alpha(m)=(m,0)+S=0$ for some $m\in M$. Then $\displaystyle{m=\sum_{k=1}^{n}z_km_k}$ and $\displaystyle{\sum_{k=1}^{n} z_{k} r_{k}x=0}$ for some $z_1,\ldots, z_n\in R$. Since the system of equations $(*)$ is compatible, we conclude that $m=0$ and so $\alpha$ is a monomorphism. One can easily see that $b=(0,x)+S\in B$ is a solution of the system of equations $ r_jX=(m_j,0)+S\in \alpha(M)$. We claim that $\alpha(M)$ is a pure submodule of $M$. Let $$\sum _{k=1}^{n}c_{lk}y_k=(m'_{l},0)+S\in \alpha(M), ~1\leq l\leq w, c_{lk}\in R, m'_{l}\in M \qquad (**)$$ be a system of equations with the solution $\{(a_k,t_{k}x)+S\}_{k=1}^{n}\subseteq B$, where $a_{k}\in M$ and $t_k\in R$. Then $\displaystyle{(\sum _{k=1}^nc_{lk}a_{k}-m'_{l},-\sum_{k=1}^n c_{lk}t_{k}x)\in S}$ for each $1\leq l\leq w$. 
Therefore, for each $l$ ($1\leq l\leq w$), there exist $n_l\in \mathbb{N}$, $z_{l1},\ldots, z_{ln_l}\in R$ such that $$\sum _{k=1}^nc_{lk}a_{k}-m'_{l}=\sum_{s=1}^{n_l}z_{ls}m_{ls} \qquad (1)$$ and $$\sum_{k=1}^nc_{lk}t_{k}=-\sum_{s=1}^{n_l}z_{ls}r_{ls} \qquad (2).$$ Since the system of equations $(*)$ is finitely solvable, there exists $m'\in M$ such that $r_{ls}m'=m_{ls}$ for some finite subset $\{ls\}\subset J$. In view of $(1)$ and $(2)$ we conclude that $$\sum_{k=1}^nc_{lk}t_{k}m'=-\sum_{s=1}^{n_l}z_{ls}r_{ls}m'=-\sum_{s=1}^{n_l}z_{ls}m_{ls}=-(\sum _{k=1}^nc_{lk}a_{k}-m'_{l}).$$ Therefore, $\displaystyle{\sum _{k=1}^nc_{lk}(a_{k}-t_{k}m')=m'_{l}}$. Thus $\{(a_k-t_{k}m',0)+S\}_{k=1}^{n}\subseteq \alpha(M)$ is a solution of the system $(**)$. It means that $\alpha(M)$ is a pure submodule of $B$. \end{proof} As in \cite{Azu87}, a left $R$-module $M$ is {\it singly split} in $B$ if, for every submodule $A$ of $B$ which is a single extension of $M$, $M$ is a direct summand of $A$, and $M$ is said to be {\it singly pure-injective} if $M$ is singly split in any pure extension of itself. Recall that an exact sequence $\varepsilon: 0\rightarrow A \rightarrow B \rightarrow C\rightarrow 0$ of left $R$-modules is {\it cyclically pure } if every cyclic left $R$-module has the projective property relative to $\varepsilon$ (see \cite{Sim87}). Proposition~\ref{P:first character} leads us to obtain the following characterizations of semi-compact left $R$-modules. \begin{theorem} \label{T:character} Let $M$ be a left $R$-module. Then the following statements are equivalent: \begin{enumerate} \item $M$ is semi-compact. \item $M$ has the injective property relative to every ($\aleph_0,1$)-pure exact sequence $0\rightarrow A\rightarrow B\rightarrow C \rightarrow 0$ where $C$ is a cyclic left $R$-module. \item $M$ is a direct summand of every single ($\aleph_0,1$)-pure extension. \item Every pure extension of $M$ is cyclically pure. \item $M$ is a direct summand of every module $B$ if $B$ contains $M$ as an ($\aleph_0,1$)-pure submodule and if $B/M$ is a direct summand of a direct sum of cyclic left $R$-modules. \item $M$ has the injective property relative to every ($\aleph_0,1$)-pure exact sequence $0\rightarrow A\rightarrow B\rightarrow C \rightarrow 0$ where $C$ is a direct summand of a direct sum of cyclic left $R$-modules. \item $M$ is singly pure-injective. \end{enumerate} \end{theorem} \begin{proof} (1)$\Rightarrow$(2). Since $C$ is cyclic then $B=A+Rb$ for some $b\in B$. Let $f:A\rightarrow M$ be a homomorphism of left $R$-modules. Let $\{r_j\}_{j\in J}\subseteq R$ be the set of all elements of $R$ such that $r_jb\in A$. Since $A$ is an ($\aleph_0,1$)-pure submodule of $B$, the system of equations $r_jx=f(r_jb)\in M$ is finitely solvable and by hypothesis it has a solution $m\in M$. Obviously, $\phi:B\rightarrow M$ defined by $\phi(a+sb)=f(a)+sm$ for each $a\in A$ and $s\in R$ is an extension of $f$. (2)$\Rightarrow$(3) is clear. (3)$\Rightarrow$(4). Let $B$ be a pure extension of $M$ and $A\subseteq B$ be a singly extension of $M$. Then $M$ is an ($\aleph_0,1$)-pure submodule of $A$. By hypothesis, $M$ is a summand of $A$ and so $M$ is singly split in $B$. Therefore, \cite[Theorem 3]{Azu87} implies that the sequence $0\rightarrow M\rightarrow B\rightarrow B/M \rightarrow 0$ is cyclically pure. (4)$\Rightarrow$(5). It is well-known that every direct summand of a direct sum of cyclic modules has the projective property relative to each cyclically pure exact sequence. (5)$\Rightarrow$(6). 
Let $f: A\rightarrow M$ be a homomorphism and $u: A\rightarrow B$ the inclusion map. We consider the following pushout diagram: \[\begin{CD} A @>u>> B \\ @V{f}VV @V{g}VV \\ M @>v>> T \end{CD}\] By \cite[33.4(2)]{Wis91} $v$ is an ($\aleph_0,1$)-pure monomorphism. Since coker $v\cong C$, the hypothesis implies that $v$ is a split monomorphism. So, $f$ extends to a homomorphism from $B$ into $M$. (6)$\Rightarrow$(3) is clear. (3)$\Rightarrow$(1). Let $r_jx=m_j, j\in J, r_j\in R, m_j\in M$, be a finitely solvable system of equations in $M$. By Lemma \ref{L:upper}, there exists a single pure extension $B$ of $M$ such that the above system of equations has a solution $b\in B$. By hypothesis, $M$ is a direct summand of $B$. Thus there exists a submodule $A$ of $B$ such that $B=M\oplus A$. Therefore, there exist $m\in M$ and $a\in A$ such that $b=m+a$. Since $r_{j}b=m_j$ for each $j\in J$, we conclude that $r_{j}m+r_{j}a=m_j$. Thus $r_ja=m_j-r_{j}m\in A\cap M=0$, whence $r_jm=m_j$ for each $j\in J$; that is, $m$ is a solution of the system in $M$. (4)$\Leftrightarrow$(7) by \cite[Theorem 10]{Azu87}. \end{proof} \begin{remark} \label{R:cyclically pure} \textnormal{ Recall that a submodule $A$ of a left $R$-module $B$ is {\it cyclically pure} if and only if the exact sequence $ 0\longrightarrow A \hookrightarrow B \stackrel{g}\longrightarrow B/A\longrightarrow 0$ is cyclically pure. By \cite[Proposition 1.2]{GrHi09}, $A$ is a cyclically pure submodule of $B$ if and only if for each index set $J$, any system of equations $r_j x= a_j\in A$ ($r_j\in R$, $j\in J$) is solvable in $A$ whenever it is solvable in $B$.} \end{remark} \begin{corollary} \label{C:sum of semicompact} Let $R$ be a ring and $\{M_i\}_{i\in I}$ be a set of semi-compact left $R$-modules. Then $\oplus_{i\in I} M_i$ is semi-compact if and only if $\oplus_{i\in I} M_i$ is a cyclically pure submodule of $\prod_{i\in I}M_i $. \end{corollary} \begin{proof}($\Rightarrow$) is clear by Theorem \ref{T:character}.\\ ($\Leftarrow$). It is sufficient to show that every finitely solvable system of equations of the form $r_j x= a_j\in \oplus_{i\in I} M_i$ ($r_j\in R$, $j\in J$) is solvable in $\oplus_{i\in I} M_i$. Since $\prod_{i\in I}M_i$, being a direct product of semi-compact modules, is semi-compact, there exists an element $b\in \prod_{i\in I}M_i$ such that $r_jb=a_j$ for each $j\in J$. Since $\oplus_{i\in I} M_i$ is a cyclically pure submodule of $\prod_{i\in I}M_i$, by Remark \ref{R:cyclically pure}, there is an element $a\in \oplus_{i\in I} M_i$ such that $r_ja=a_j$ for each $j\in J$. \end{proof} From Theorem~\ref{T:character} and Remark~\ref{R:f-injective}, we deduce the following corollary. \begin{corollary} \label{C:Noetherian} Let $R$ be a ring. Then the following statements are equivalent: \begin{enumerate} \item $R$ is left Noetherian. \item Every left $R$-module is semi-compact. \item Every direct sum of semi-compact left $R$-modules is semi-compact. \end{enumerate} \end{corollary} \begin{proof} (1)$\Rightarrow$(2). Since every cyclic left $R$-module is finitely presented and hence pure projective, Theorem~\ref{T:character}(3) implies that every left $R$-module is semi-compact. (2)$\Rightarrow$(3) is obvious and (3)$\Rightarrow$(1) holds by Remark~\ref{R:f-injective}. \end{proof} A ring $R$ is called left {\it pure-semisimple} if each left $R$-module is pure-injective. Recall that a ring $R$ is {\it left perfect} if each flat left $R$-module is projective. Theorem~\ref{T:character} implies that there exists a semi-compact left $R$-module which is not pure injective. It is easy to see that a left Noetherian ring $R$ is pure-semisimple if and only if every semi-compact left $R$-module is pure injective. 
In the next theorem we show that a domain $R$ is a division ring if and only if the class of semi-compact $R$-modules and the class of pure injective $R$-modules coincide. \begin{theorem} \label{T:Division} Let $R$ be a domain (not necessarily commutative). Then $R$ is a division ring if and only if every semi-compact $R$-module is pure injective. \end{theorem} \begin{proof} If $R$ is a division ring, it is obvious that each semi-compact module is pure injective. Conversely, let $M$ be a flat left $R$-module. Then there exists a free left $R$-module $F$ and submodule $K$ of $F$ such that $\varepsilon: 0\rightarrow K \rightarrow F \rightarrow M\rightarrow 0$ is pure exact. Since $R$ is domain, $F$ and $K$ are torsion-free and so semi-compact. By hypothesis, $K$ is pure injective and $\varepsilon$ splits. Therefore, $M$ is projective and so $R$ is left perfect. This implies that $R$ is a division ring. \end{proof} Recall that a ring $R$ is {\it von-Neumann regular} if for each element $a$ of $R$ there exists $b\in R$ such that $a=aba$. It is equivalent to the fact that every finitely generated left ideal is a summand of $R$. Also, it is known that a ring $R$ is von-Neumann regular if and only if every pure injective left $R$-module is injective. In the next theorem we have another characterization of von-Neumann regular rings. \begin{theorem} \label{T:Von} Let $R$ be a ring. Then the following statements are equivalent: \begin{enumerate} \item $R$ is von-Neumann regular. \item Every pure injective left $R$-module is injective. \item Every semi-compact left $R$-module is injective. \end{enumerate} \end{theorem} \begin{proof} (3)$\Rightarrow$(2) is obvious. (2)$\Rightarrow$(1). Each left module is semi-injective because it is a pure submodule of a pure injective module which is injective. So, $R$ is von Neumann regular since each left module is semi-injective. (1)$\Rightarrow$(3). In this case each left $R$-module is semi-injective. So, by \cite[Lemma 5.5]{Mat85} a left $R$-module is injective if and only if it is semi-compact. \end{proof} \bigskip \section{Rings whose flat modules are semi-compact}\label{S:flat} \begin{definition} \label{D:def sigma} \textnormal{Let $R$ be a ring and $M$ a left $R$-module. We say that $M$ is {\it $\Sigma$-semi-compact} if all direct sums of copies of $M$ are semi-compact. By Example~\ref{E:exe}(2) every torsion-free module over an integral domain is $\Sigma$-semi-compact.} \end{definition} Faith in \cite{Fai66}, proved that an injective left $R$-module $M$ is $\Sigma$-injective if and only if $R$ satisfies the a.c.c. on the left ideals in $\mathcal{A}_l(R ,M)$ (equivalently, $M$ satisfies the d.c.c. on the subgroups in $\mathcal{A}_r(R, M)$). We need the following proposition of \cite{Fai66} to characterize $\Sigma$-semi-compact left $R$-modules. \begin{proposition} \label{P:faith proposition} \textnormal{\cite[Proposition 1]{Fai66}} Let $M$ be a left $R$-module. Then $\mathcal{A}_l(R, M)$ satisfies the a.c.c., equivalently, $M$ satisfies the d.c.c. on the subgroups in $\mathcal{A}_r(R, M)$, if and only if for each left ideal $I$ of $R$, there exists a finitely generated subideal $I_1$ such that $M[I]=M[I_1]$. \end{proposition} \begin{theorem} \label{T:sigma semicompact} Let $R$ be a ring and $M$ a left $R$-module. Then the following statements are equivalent: \begin{enumerate} \item $M^{(\mathbb{N})}$ is semi-compact; \item $R$ satisfies the a.c.c. on the left ideals in $\mathcal{A}_l(R, M)~ (M$ satisfies the d.c.c. 
on the subgroups in $\mathcal{A}_r(R, M))$; \item $M$ satisfies the d.c.c. on the subgroups of $M$ which are annihilators of finitely generated left ideals of $R$; \item $M$ is $\Sigma$-semi-compact. \end{enumerate} \end{theorem} \begin{proof} (1)$\Rightarrow$(2). Let $M[I_1]\supset M[I_2]\supset M[I_3]\ldots$ be a strictly descending chain, where $I_n$ is a left ideal for each integer $n\geq 1$, and let $y_n\in M[I_n]\setminus M[I_{n+1}]$. Obviously, $x\equiv x_i$ (mod $M^{(\mathbb{N})}[I_i]=M[I_i]^{({\mathbb{N}})}$ ) is finitely solvable where $x_{i}=(y_1,\ldots,y_i,0,0,\ldots)$. So it has a simultaneous solution in $M^{(\mathbb{N})}$ since $M^{(\mathbb{N})}$ is semi-compact. But each $a=(s_1,s_2,\ldots,s_t,0,0,\ldots)\in M^{(\mathbb{N})}$ cannot be a solution of the above system, since $a-x_{t+2}=(s_1-y_1,\ldots,s_t-y_t,y_{t+1},y_{t+2})\notin M[I_{t+2}]^{({\mathbb{N}})}$, a contradiction. (2)$\Rightarrow$(3) is clear. (3)$\Rightarrow$(4). First we show that $M$ is semi-compact. Let $x\equiv x_i$ (mod $M[I_{\alpha}]$ ), where $\alpha\in \Lambda$, be a finitely solvable system. By Proposition \ref{P:faith proposition} we may assume that $I_{\alpha}$ is finitely generated for each $\alpha\in\Lambda$. There exists $V=M[I_{\alpha1}]\cap\ldots\cap M[I_{\alpha n}]$ which is minimal among the set of all finite intersections of the $M[I_{\alpha}]$ for $\alpha\in \Lambda$. Obviously, $V\subseteq M[I_{\alpha}]$ for each $\alpha\in \Lambda$. Let $y-x_{\alpha i}\in M[I_{\alpha i}]$ for each $1\leq i\leq n$. Let $y_{\beta}$ be a solution of $x\equiv x_{\alpha i}$ (mod $M[I_{\alpha i}])$ and $x\equiv x_{\beta}$ (mod $M[I_{\beta}])$ where $1\leq i\leq n$. It is easy to check that $y-y_{\beta}\in V\subseteq M[I_{\beta}]$. Thus $y-x_{\beta}=y-y_{\beta}+y_{\beta}-x_{\beta}\in M[I_{\beta}]$. Therefore, $y$ is a simultaneous solution in $M$. So, $M$ is semi-compact. Let $J$ is an index set, $I$ a left ideal of $R$ and $h:I\rightarrow M^{(J)}$ a homomorphism such that, for any finitely generated left ideal $I_0$ of $I$, there exists $m_0\in M^{(J)}$ such that $h(r_0)=r_0m_0$ for each $r_0\in I_0$. Let $I_1=Rs_1+\ldots+Rs_n$ be the finitely generated subideal of $I$ given by Proposition \ref{P:faith proposition} such that $M[I]=M[I_1]$. Therefore, there exists $m_1=(m_{1j})_{j\in J}\in M^{(J)}$ such that $h(r_1)=r_1m_1$ for each $r_1\in I_1$. Since $M^J$ is semi-compact, there exists an element $m'=(m'_j)_{j\in J} \in M^J$ such that $h(r)=rm'$ for each $r\in I$. Thus for each $j\in J$, $m'_j-m_{1j}\in M[I_1]=M[I]$. We conclude that $h(r)=rm_1$ for each $r\in I$. (4)$\Rightarrow$(1) is clear. \end{proof} \begin{corollary} \label{C:sigma semicompact submodule} Every submodule of a $\Sigma$-semi-compact module is $\Sigma$-semi-compact. \end{corollary} \begin{corollary} \label{C:factor of sigma semicompact} Let $R$ be a ring and $I$ a two-sided ideal of $R$. If $M$ is a $\Sigma$-semi-compact left $R$-module, then so is $M/M[I]$. \end{corollary} \begin{proof} It is clear by Theorem \ref{T:sigma semicompact} and corollary \ref{C:sigma semicompact submodule}. \end{proof} \begin{corollary} \label{C:sigma semicompact ring} Let $R$ be a ring. Then the following statements are equivalent: \begin{enumerate} \item $R$ satisfies the a.c.c. on the left annulets. \item $R$ satisfies the d.c.c. on the right annulets. \item $R$ is $\Sigma$-semi-compact as left $R$-module. \end{enumerate} \end{corollary} \begin{corollary} \label{C:sigma} Let $M$ be a semi-injective left $R$-module. 
Then $M$ is $\Sigma$-injective if and only if it is $\Sigma$-semi-compact. \end{corollary} The following theorem is a generalization of Corollary \ref{C:Noetherian}. \begin{theorem} \label{T:Noeth-pproj} Let $R$ be a ring. Then every pure projective left $R$-module is semi-compact if and only if $R$ is left Noetherian. \end{theorem} \begin{proof} Let $C$ be a cyclic left $R$-module. By \cite[Theorem 33.5]{Wis91}, there exists a pure exact sequence $\varepsilon: 0\longrightarrow K \longrightarrow P \longrightarrow C\longrightarrow 0$ where $P$ is a pure projective left module. By Corollary \ref{C:sigma semicompact submodule}, $K$ is $\Sigma$-semi-compact. Therefore, $\varepsilon$ splits and so $C$ is a direct summand of $P$. Since $C$ is cyclic, it is a direct summand of a finite direct sum of finitely presented $R$-modules. It follows that $C$ is finitely presented. Hence $R$ is left Noetherian. The converse is clear by Corollary \ref{C:Noetherian}. \end{proof} \begin{theorem} \label{T:flat is semicompact} Let $R$ be a ring. Then the following statements are equivalent: \begin{enumerate} \item Every flat left $R$-module is semi-compact. \item $R$ is $\Sigma$-semi-compact as left $R$-module. \item $R$ satisfies the a.c.c. on the left annulets ($R$ satisfies the d.c.c. on the right annulets). \end{enumerate} \end{theorem} \begin{proof} (1)$\Rightarrow$(2) is clear. (2)$\Leftrightarrow$(3) by Corollary \ref{C:sigma semicompact ring}. (2)$\Rightarrow$(1). Let $M$ be a flat left $R$-module. Then $M=F/K$ where $F$ is a free $R$-module and $K$ is a pure submodule of $F$. Suppose that $M$ is not $\Sigma$-semi-compact. Then by Theorem \ref{T:sigma semicompact}, there exists a strict descending chain $M[I_1]\supset M[I_2]\supset\ldots$. By Proposition \ref{P:faith proposition}, we may assume that for each $i\in \mathbb{N}$, $I_i$ is finitely generated. For each $i\in \mathbb{N}$, let $a_i+K\in M[I_{i}]\setminus M[I_{i+1}]$ and let $r_{i,1},\ldots r_{i,m_i}$ be generators of $I_{i}$ where $m_i\in \mathbb{N}$. Therefore, $r_{i,j}a_i\in K$ for $j=1,\ldots,m_i$. Since $K$ is pure submodule of $F$, there exist $k_i\in K$ such that $I_i(a_i-k_i)=0$ for each $i\in \mathbb{N}$. One can easily see that $I_{i+1}(a_i-k_i)\neq 0$ for each $i\in \mathbb{N}$. Thus we get the strict descending chain $F[I_1]\supset F[I_2]\supset\ldots\supset F[I_n]\supset\ldots$. This contradicts that $F$ is $\Sigma$-semi-compact. \end{proof} In \cite{Bjo70}, Bj\"{o}rk proved that a left semi-injective ring $R$ is quasi-Frobenius if and only if it satisfies the a.c.c. on the left annulets. Therefore we have the following evident corollary by Theorem \ref{T:flat is semicompact}. \begin{corollary} \label{C:QF} Let $R$ be a self left semi-injective ring. Then each flat left $R$-module is semi-compact if and only if $R$ is a quasi-Frobenius ring. \end{corollary} \begin{proposition} \label{P:QNoe} Let $R$ be a ring. Assume that $R$ is a subring of a left Noetherian ring $S$. Then each flat left $R$-module is semi-compact. \end{proposition} \begin{proof} As left $S$-module, $S$ is $\Sigma$-semi-compact. If $A$ is a left ideal of $R$ and $A'=SA$ then it is easy to check that $S[A]=S[A']$. So $S$ is a left $\Sigma$-semi-compact $R$-module. It follows that so is $R$ by Corollary~\ref{C:sigma semicompact submodule}. \end{proof} From these last two propositions we deduce the following corollary. \begin{corollary} Let $R$ be a commutative ring and $Q$ its quotient ring. Assume that $Q$ is semi-injective. 
Then each flat $R$-module is semi-compact if and only if $Q$ is quasi-Frobenius. \end{corollary} \section{Finite projectivity and $\Sigma$-semi-compactness}\label{S:fproj} As in \cite{Azu87}, a left $R$-module $M$ is called {\it finitely projective} (respectively {\it singly projective}) if any homomorphism from a finitely generated (respectively cyclic) left $R$-module into $M$ factors through a free left $R$-module. If $m,n$ are positive integers, a right $R$-module is said to be {\it ($m,n$)-flat} if, for each $n$-generated left submodule $K$ of $R^m$, the homomorphism $M\otimes_RK\rightarrow M\otimes_RR^m$ deduced of the inclusion map is injective. We say that $M$ is ($\aleph_0,1)$)-flat if it is ($m,1)$)-flat for each integer $m>0$. In \cite[Theorem 5]{She91} Shenglin proved that every flat left module is singly projective if, for each descending chain of finitely generated right ideals $I_1\supseteq I_2\supseteq I_3\supseteq\dots$, the ascending chain ${}^{\bot}I_1\subseteq {}^{\bot}I_2\subseteq {}^{\bot}I_3\subseteq\dots$ terminates. Therefore by Corollary~\ref{C:sigma semicompact ring}(1), we deduce that every flat left module is singly projective if $R$ is $\Sigma$-semi-compact as left module. The following proposition generalizes this result. \begin{proposition}\label{P:(N_0,1)-flat} Let $R$ be a ring which is $\Sigma$-semi-compact as left $R$-module. Then each ($\aleph_0,1$)-flat left $R$-module is singly projective. \end{proposition} \begin{proof} Let $M$ be a ($\aleph_0,1$)-flat left $R$-module and let $\pi:F\rightarrow M$ be an epimorphism with $F$ a free left $R$-module. Let $C$ be a left cyclic module generated by $c$, let $f:C\rightarrow M$ be a homomorphism and let $A={}^{\bot}\{c\}$. Since $F$ is $\Sigma$-semi-compact, so by Proposition~\ref{P:faith proposition} there exists a finitely generated left subideal $B$ of $A$ such that $F[A]=F[B]$. Let $g:R/B\rightarrow M$ be the homomorphism defined by $g(1+B)=f(c)$. Since $M$ is ($\aleph_0,1$)-flat, so, by \cite[Proposition 4.1 and Theorem 1.1]{Cou11}, there exists a homomorphism $h:R/B\rightarrow F$ such that $g=\pi\circ h$. Let $x=h(1+B)$. Then $x\in F[B]=F[A]$. So, the homomorphism $\phi:C\rightarrow F$ defined by $\phi(c)=x$ satisfies $f=\phi\circ\pi$. \end{proof} \begin{corollary} \label{C:singly-semi-compact} Let $R$ be a ring such that $R^{\mathbb{N}}$ is ($\aleph_0,1$)-flat as left module. Then the following conditions are equivalent: \begin{enumerate} \item $R$ is $\Sigma$-semi-compact as left module. \item Each ($\aleph_0,1$)-flat left $R$-module is singly projective. \end{enumerate} \end{corollary} \begin{proof} (1)$\Rightarrow$(2) follows from Proposition \ref{P:(N_0,1)-flat}. (2)$\Rightarrow$(1). From $R^{\mathbb{N}}$ ($\aleph_0,1$)-flat and $R^{(\mathbb{N})}$ pure submodule of $R^{\mathbb{N}}$ we deduce that $R^{\mathbb{N}}/R^{(\mathbb{N})}$ is ($\aleph_0,1$)-flat. We conclude by \cite[Proposition 1]{Len76} and Corollary \ref{C:sigma semicompact ring}. \end{proof} An $R$-module is called {\it uniserial} if if the set of its submodules is totally ordered by inclusion. A commutative ring $R$ is a {\it chain ring} (or a valuation ring) if it is a uniserial $R$-module. The following example satisfies the equivalent conditions of the previous corollary but the hypothesis does not hold. \begin{example} \textnormal{Let $R$ be a chain ring whose quotient ring $Q$ is Artinian and not reduced. We assume that $\mathrm{Spec}\ R$ is finite and $R\ne Q$. By \cite[Corollary 36]{Couch03} each ideal is countably generated. 
Each flat $R$-module is semi-compact by Proposition~\ref{P:QNoe} and singly projective by \cite[Theorem 7]{Coucho07}. Since $R\ne Q$, $R$ is not FP-injective as $R$-module, so, by \cite[Theorem 37(6)]{Couch03} $R^{\mathbb{N}}$ is not $(\aleph_0,1)$-flat\footnote{Over a chain ring each $(\aleph_0,1)$-flat module is flat.}.} \end{example} If $R$ is a commutative ring, then we consider on $\mathrm{Spec}\ R$ the equivalence relation $\mathcal{R}$ defined by $L\mathcal{R} L'$ if there exists a finite sequence of prime ideals $(L_k)_{1\leq k\leq n}$ such that $L=L_1,$ $L'=L_n$ and $\forall k,\ 1\leq k\leq (n-1),$ either $L_k\subseteq L_{k+1}$ or $L_k\supseteq L_{k+1}$. We denote by $\mathrm{pSpec}\ R$ the quotient space of $\mathrm{Spec}\ R$ modulo $\mathcal{R}$ and by $\lambda_R: \mathrm{Spec}\ R\rightarrow\mathrm{pSpec}\ R$ the natural map. The quasi-compactness of $\mathrm{Spec}\ R$ implies the one of $\mathrm{pSpec}\ R$, but generally $\mathrm{pSpec}\ R$ is not $T_1$: see \cite[Propositions 6.2 and 6.3]{Laz67}. \begin{lemma}[{\cite[Lemma 2.5]{Couc09}}] \label{L:pure} Let $R$ be a commutative ring and let $C$ a closed subset of $\mathrm{Spec}\ R$. Then $C$ is the inverse image of a closed subset of $\mathrm{pSpec}\ R$ by $\lambda_R$ if and only if $C=V(A)$ where $A$ is a pure ideal. Moreover, in this case, $A=\cap_{P\in C}0_P$ (where $0_ P$ is the kernel of the canonical homomorphism $R\rightarrow R_ P$). \end{lemma} A commutative ring $R$ is called \textit{arithmetical} if $R_L$ is a chain ring for each maximal ideal $L$. \begin{theorem} \label{T:Areth} Let $R$ be a commutative arithmetical ring and $Q$ its quotient ring. Then the following conditions are equivalent: \begin{enumerate} \item $Q$ is pure-semisimple; \item each flat $R$-module is semi-compact; \item each flat $R$-module is finitely projective; \item each flat $R$-module is singly projective. \end{enumerate} \end{theorem} \begin{proof} $(1)\Rightarrow (2)$ is a consequence of Proposition~\ref{P:QNoe}, $(1)\Rightarrow (3)$ follows from \cite[Theorem 7]{Coucho07}, $(2)\Rightarrow (4)$ holds by Proposition \ref{P:(N_0,1)-flat} and $(3)\Rightarrow (4)$ is obvious. $(4)\Rightarrow (1)$. First we show that $\mathrm{Min}\ R$ is finite. Since each prime ideal contains a unique minimal prime ideal, then each point of $\mathrm{pSpec}\ R$ is of the form $V(L)$ where $L$ is a minimal prime ideal. By Lemma~\ref{L:pure} there exists a pure ideal $A$ such that $V(L)=V(A)$. Since $R/A$ is flat, it is projective, whence $A=Re$, where $e$ is an idempotent of $R$. Hence each single subset of $\mathrm{pSpec}\ R$ is open. From the quasi-compacity of $\mathrm{pSpec}\ R$, we deduce that $\mathrm{Min}\ R$ is finite. . Let $P$ be a maximal ideal of $R$. By using \cite[Proposition 6]{Coucho07} we get that $R_P$ satisfies $(3)$. By \cite[Theorem 33]{Coucho07} the quotient ring $Q(R_P)$ of $R_P$ is artinian. It follows that $Q(R_P)=R_L$ where $L$ is the minimal prime ideal contained in $P$. Let $s$ be an element of $R$ which does not belong to any minimal prime ideal. If $a\in R$ satisfies $sa=0$ then it is easy to check that $\dfrac{a}{1}=0$ in $R_P$ for each maximal ideal $P$. So, $a=0$ and $s$ is regular. We deduce that $Q\cong\prod_{L\in\mathrm{Min}\ R}R_L$. Hence $Q$ is pure-semisimple. \end{proof} The following proposition is a slight generalization of \cite[Proposition 6]{Coucho07} and the proof is similar. \begin{proposition} \label{P:locali} Let $\phi: R\rightarrow S$ be a right flat epimorphism of rings. 
Then: \begin{enumerate} \item For each singly (respectively finitely) projective left $R$-module $M$, $S\otimes_RM$ is singly (respectively finitely) projective over $S$; \item Let $M$ be a singly (respectively finitely) projective left $S$-module. If $\phi$ is injective then $M$ is singly (respectively finitely) projective over $R$. \end{enumerate} \end{proposition} \begin{theorem} \label{T:Flat=f-pro} Let $R$ be a ring. Assume that $R$ has a right flat epimorphic extension $S$ which is von Neumann regular. Then the following conditions are equivalent: \begin{enumerate} \item $S$ is semisimple; \item each flat left $R$-module is semi-compact; \item each flat left $R$-module is finitely projective; \item each flat left $R$-module is singly projective. \end{enumerate} \end{theorem} \begin{proof} $(1)\Rightarrow (3)$ is an immediate consequence of \cite[Corollary 7]{She91} and $(3)\Rightarrow (4)$ is obvious. $(1)\Rightarrow (2)$ is an immediate consequence of Proposition~\ref{P:QNoe}, and $(2)\Rightarrow (4)$ holds by Proposition~\ref{P:(N_0,1)-flat}. $(4)\Rightarrow (1)$. First we show that each left $S$-module $M$ is singly projective. Every left $S$-module $M$ is flat over $S$ and $R$. So, $M$ is singly projective over $R$. It follows that $M\cong S\otimes_RM$ is singly projective over $S$ by Proposition~\ref{P:locali}(1). Now let $A$ be a left ideal of $S$. Since $S/A$ is singly projective, it is projective. So, $S/A$ is finitely presented over $S$ and $A$ is a finitely generated ideal of $S$. Hence $S$ is semisimple. \end{proof} \begin{corollary} \label{C:Flat=f-pro} Let $R$ be a commutative reduced ring and $Q$ its quotient ring. Assume that the space $\mathrm{Min}\ R$ of minimal prime ideals of $R$ is compact in its Zariski topology. Then the following conditions are equivalent: \begin{enumerate} \item $Q$ is semisimple; \item each flat $R$-module is semi-compact; \item each flat $R$-module is finitely projective; \item each flat $R$-module is singly projective. \end{enumerate} \end{corollary} \begin{proof} We use the assumption that $\mathrm{Min}\ R$ is compact. By \cite[Theorem 3.14.1]{Ste71} and \cite[Proposition 1]{Que71} $Q$ is a subring of a von Neumann regular ring $S$ such that the inclusion map $R\rightarrow S$ is a flat epimorphism ($S$ is the maximal flat epimorphic extension of $R$). If either $Q$ or $S$ is semisimple, then $Q=S$. \end{proof} Given a ring $R$, a left $R$-module $M$ and $x\in M$, the \textit{content ideal} $\mathrm{c}(x)$ of $x$ in $M$, is the intersection of all right ideals $A$ for which $x\in AM$. We say that $M$ is a \textit{content module} if $x\in\mathrm{c}(x)M,\ \forall x\in M$. We say that $M$ is {\it FP-injective} if $\mathrm{Ext}_R^1(F,M)=0$ for each finitely presented left $R$-module $F$. It is easy to see that each FP-injective module is semi-injective, but we do not know if the converse holds, except for some classes of rings. \begin{proposition} \label{P:Perfect} Let $R$ be a self left FP-injective ring. Then each flat left $R$-module is finitely projective if and only if $R$ is left perfect. \end{proposition} \begin{proof} Let $M$ be a flat left $R$-module. Since it is finitely projective, so it is FP-injective and a content module by \cite[Proposition 3(2)]{Coucho07}. We conclude that $R$ is left perfect by \cite[Theorem 2]{Coucho07}. \end{proof} \begin{corollary} \label{C:Flat=f-proj} Let $R$ be a ring. Assume that $R$ has a right flat epimorphic extension $S$ which is self left FP-injective. 
Then each flat left $R$-module is finitely projective if and only if $S$ is left perfect. \end{corollary} \begin{proof} If $S$ is left perfect we conclude by \cite[Corollary 7]{She91}. Conversely, first we show that each flat left $S$-module is finitely projective. The proof is similar to that of $(4)\Rightarrow (1)$ of Theorem~\ref{T:Flat=f-pro}, and then we use the previous proposition. \end{proof} By \cite[Corollary 16]{ZiZi78} (a result due to Jensen) each semiprimary ring with square of the Jacobson radical zero is $\Sigma$-pure-injective (hence $\Sigma$-semi-compact) on either side. Since these rings are left and right perfect then each flat module is projective. \begin{remark}\label{R:cond} \textnormal{Consider the following two conditions: \begin{enumerate} \item each flat left $R$-module is semi-compact; \item each flat left $R$-module is finitely projective. \end{enumerate} Let us observe that there are many examples of rings satisfying the two conditions. We shall see that they are not equivalent. Does the first condition imply the second?} \end{remark} Let $R$ be a ring which is a FP-injective left module. Then $R$ is left perfect if and only if $R$ satisfies the second condition by Proposition~\ref{P:Perfect}. By Corollary~\ref{C:QF}, $R$ satisfies the first condition if and only if $R$ is quasi-Frobenius. It remains to give an example of a left perfect ring which is self left FP-injective and which is not quasi-Frobenius. \begin{proposition}\label{P:perf} Let $R$ be a local commutative ring of maximal ideal $P$ such that $P^2$ is the only minimal non-zero ideal of $R$. Then: \begin{enumerate} \item $R$ is perfect and self FP-injective; \item $R$ is quasi-Frobenius if and only if $P$ is finitely generated if and only if $R^{\mathbb{N}}$ is $(1,1)$-flat. \end{enumerate} \end{proposition} \begin{proof} $(1)$. Since $R$ is local and $P$ nilpotent ($P^3=0$), $R$ is perfect. For each $R$-module $M$ we put $M^*=\mathrm{Hom}_R(M,R)$. By \cite[Theorem 2.3]{Ja73} to show that $R$ is self FP-injective it is enough to prove that the evaluation map $\phi_M:M\rightarrow M^{**}$ is injective for each finitely presented $R$-module $M$. We consider a finitely presented module $M$. We have the following exact sequence $0\rightarrow K\xrightarrow{u} F\xrightarrow{\pi} M\rightarrow 0$ where $F$ is a free $R$-module of finite rank, $K$ a finitely generated submodule of $F$ and $u$ the inclusion map. We may assume that $K\subseteq PF$. We have the following commutative diagram with exact horizontal sequences: \[\begin{CD} K @>u>> F @>\pi>>M \\ @V{\phi_K}VV @V{\phi_F}VV @V{\phi_M}VV \\ K^{**} @>u^{**}>> F^{**} @>\pi^{**}>> M^{**} \end{CD}\] \bigskip Since $\phi_F$ is an isomorphism and $u$ a monomorphism then $\phi_K$ is injective. On the other hand, if $E=\mathrm{E}(R/P)$ then $E\cong\mathrm{E}(R)$. If $N$ is a module of finite length, denoted by $\ell(N)$, then $\ell(N)=\ell(\mathrm{Hom}_R(N,E))$ (this can be proved by induction on $\ell(N)$). We have $PK\subseteq P^2F$. So, $PK$ is a semisimple module of finite length. Since so is $K/PK$, it follows that $K$ is of finite length too. From $K^*\subseteq\mathrm{Hom}_R(K,E)$ we deduce that $\ell(K^*)\leq \ell(K)$. In the same way we get that $\ell(K^{**})\leq\ell(K)$. Whence $\ell(K^{**})=\ell(K)$, $\phi_K$ is an isomorphism and $u^{**}$ is injective. From snake Lemma we deduce that $\phi_M$ is a monomorphism. $(2)$. 
The first equivalence is obvious and the second is a consequence of Corollaries \ref{C:QF} and \ref{C:singly-semi-compact}, and \cite[Theorem 4.11]{Cou11}. \end{proof} \begin{example} Let $K$ be a field, $\Lambda$ an index set and $\alpha\in\Lambda$. Let $R$ be the factor ring of the polynomial ring $K[X_{\lambda}\mid \lambda\in\Lambda]$ modulo the ideal generated by \[\{X_{\lambda}^2-X_{\alpha}^2\mid \lambda\in\Lambda\}\cup\{X_{\lambda}X_{\mu}\mid \lambda,\mu\in\Lambda, \lambda\ne\mu\}.\] Then $R$ satisfies the assumptions of Proposition~\ref{P:perf}. Consequently, if $\Lambda$ is not a finite set then $R$ verifies the second condition of Remark~\ref{R:cond} but not the first. \end{example} \section{Semi-compactness and pure-injectivity}\label{S:pure-injectivity} By Example~\ref{E:exe}(1) each pure-injective module is semi-compact. By Theorem~\ref{T:Von} the converse holds over every von Neumann regular ring. From Corollary~\ref{C:Noetherian} we deduce the following: \begin{corollary} \label{C:NoPss} Let $R$ be a left Noetherian ring. Then each semi-compact left $R$-module is pure-injective if and only if $R$ is left pure-semisimple. \end{corollary} Now, we investigate rings for which each semi-compact left module is pure-injective. We shall give a partial answer. Recall that a left $R$-module $M$ is {\it cotorsion} if $\mathrm{Ext}_R^1(F,M)=0$ for each flat left $R$-module $F$. It is easy to check that every pure-injective module is cotorsion, and, by \cite[Proposition 3.3.1]{Wu96} a ring $R$ is left perfect if and only if each left $R$-module is cotorsion. So, if $R$ is left Artinian, then each left $R$-module is semi-compact and cotorsion, but each left module is pure-injective if and only $R$ is left pure-semisimple. The following theorem completes Theorem~\ref{T:Von}. \begin{theorem} \label{T:Von1} For any ring $R$ the following conditions are equivalent: \begin{enumerate} \item $R$ is von Neumann regular; \item each cotorsion left (right) $R$-module is injective. \end{enumerate} \end{theorem} \begin{proof} $(1)\Rightarrow (2)$. Since each left $R$-module is flat, so each cotorsion left module is injective. $(2)\Rightarrow (1)$. Let $M$ be a left $R$-module. We shall prove that $M$ is flat. By \cite[Theorem 3]{BiEBEn01} and \cite[Lemma 2.1.1]{Wu96}, there exists an exact sequence \[0\rightarrow K\rightarrow F\rightarrow M\rightarrow 0,\] where $F$ is flat and $K$ cotorsion. Since $K$ is injective, the sequence splits and we deduce that $M$ is flat. \end{proof} \begin{proposition}\label{P:sc=pi} Let $R$ be a commutative ring. Then: \begin{enumerate} \item if each semi-compact $R$-module is cotorsion (repectively pure-injective) then, for each multiplicative subset $S$ of $R$, each semi-compact $S^{-1}R$-module is cotorsion (respectively pure-injective). \item if each semi-compact $R$-module is pure-injective then each prime ideal of $R$ is maximal. \end{enumerate} \end{proposition} \begin{proof} $(1)$. Let $M$ be a semi-compact $S^{-1}R$-module. Then $M$ is semi-compact over $R$ too. It follows that $M$ is cotorsion (respectively pure-injective) as $R$-module. We easily check that it satisfies this property as $S^{-1}R$-module. $(2)$. We apply Theorem~\ref{T:Division} to $R/L$ where $L$ is a prime ideal. \end{proof} \begin{theorem} \label{T:reduced} Let $R$ be a commutative reduced ring. Then the following conditions are equivalent: \begin{enumerate} \item $R$ is von Neumann regular; \item each semi-compact $R$-module is pure-injective. 
\end{enumerate} \end{theorem} \begin{proof} It easy to prove that $(1)\Rightarrow (2)$. $(2)\Rightarrow (1)$. By Proposition~\ref{P:sc=pi}(2), each prime ideal is maximal. It follows that $R_P$ is a field for each maximal ideal $P$ of $R$. Hence $R$ is von Neumann regular. \end{proof} \begin{proposition}\label{P:nonidemp} Let $R$ be a commutative local ring of maximal ideal $P$. Assume that $P\ne P^2$. Then $R$ is pure-semisimple if each semi-compact $R$-module is pure-injective. \end{proposition} \begin{proof} By Proposition~\ref{P:sc=pi}, $P$ is the only prime ideal of $R$. So, it suffices to show that $\dim_{R/P}P/P^2=1$. By way of contradiction suppose that $\dim_{R/P}P/P^2>1$. After replacing $R$ by a suitable factor, we may assume that $1<\dim_{R/P}P/P^2<\infty$. So, $R$ is Artinian but not pure-semisimple. Then each $R$-module is semi-compact (and cotorsion) but there exists a module which is not pure-injective, whence a contradiction. \end{proof} \begin{proposition}\label{P:Jacregular} Let $R$ be ring and $J$ its Jacobson radical. Assume that $R/J$ is von Neumann regular and $J$ nilpotent. Then each semi-compact left (right) $R$-module is cotorsion. \end{proposition} \begin{proof} If $J=0$, then we apply Theorem~\ref{T:Von}. Let $n$ be the smallest integer satisfying $J^n=0$. We proceed by induction on $n$. Let $M$ be a semi-compact module. We consider the following exact sequence: \[0\rightarrow M[J^p]\rightarrow M[J^{p+1}]\rightarrow M[J^{p+1}]/M[J^p]\rightarrow 0.\] We assume that the theorem holds if $n=p$. For any proper two-sided ideal $A$, a left $R/A$-module is cotorsion as $R$-module if so is as $R/A$-module (see \cite[Proposition 3.3.3]{Wu96}). On the other hand $M[J^p]=M[J^{p+1}][J^p]$. By using Proposition~\ref{P:submodule} and the fact that the class of cortorsion modules is closed by extension, we get that $M[J^{p+1}]$ is cortorsion, because $M[J^p]$ and $M[J^{p+1}]/M[J^p]$ are left modules over $R/J^p$ and they are semi-compact by Proposition~\ref{P:submodule}. \end{proof} \begin{corollary} \label{C:nilpotent} Let $R$ be a commutative ring and $N$ its nilradical. Then: \begin{enumerate} \item $R_P$ is perfect for each maximal ideal $P$ if $N$ is T-nilpotent and if every semi-compact $R$-module is cotorsion. \item $R_P$ is pure-semisimple for each maximal ideal $P$ if $N$ is T-nilpotent and if every semi-compact $R$-module is pure-injective. \item Every semi-compact $R$-module is cotorsion if $N$ is nilpotent and if each prime ideal is maximal. \end{enumerate} \end{corollary} \begin{proof} (1). By Proposition~\ref{P:sc=pi}(2) each prime ideal is maximal. So, for each maximal ideal $P$ the Jacobson of $R_P$ is T-nilpotent, whence $R_P$ is perfect. (2). As in (1) we prove that $R_P$ is perfect for each maximal ideal $P$. So, $PR_P\ne (PR_P)^2$. We use Proposition~\ref{P:nonidemp} to conclude. (3). It is easy to see that $N$ is the Jacobson radical of $R$ and that $R/N$ is von Neumann regular. We conclude by Proposition~\ref{P:Jacregular}. \end{proof}
\section{Introduction} After receiving paper reviews, authors may optionally submit a rebuttal to address the reviewers' comments, which will be limited to a {\bf one page} PDF file. Please follow the steps and style guidelines outlined below for submitting your author response. The author rebuttal is optional and, following similar guidelines to previous CVPR conferences, is meant to provide you with an opportunity to rebut factual errors or to supply additional information requested by the reviewers. It is NOT intended to add new contributions (theorems, algorithms, experiments) that were absent in the original submission and NOT specifically requested by the reviewers. You may optionally add a figure, graph, or proof to your rebuttal to better illustrate your answer to the reviewers' comments. Per a passed 2018 PAMI-TC motion, reviewers should refrain from requesting significant additional experiments for the rebuttal or penalize for lack of additional experiments. Authors should refrain from including new experimental results in the rebuttal, especially when not specifically requested to do so by the reviewers. Authors may include figures with illustrations or comparison tables of results reported in the submission/supplemental material or in other papers. Just like the original submission, the rebuttal must maintain anonymity and cannot include external links that reveal the author identity or circumvent the length restriction. The rebuttal must comply with this template (the use of sections is not required, though it is recommended to structure the rebuttal for ease of reading). \subsection{Response length} Author responses must be no longer than 1 page in length including any references and figures. Overlength responses will simply not be reviewed. This includes responses where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide. Note that this \LaTeX\ guide already sets figure captions and references in a smaller font. \section{Formatting your Response} {\bf Make sure to update the paper title and paper ID in the appropriate place in the tex file.} All text must be in a two-column format. The total allowable size of the text area is $6\frac78$ inches (17.46 cm) wide by $8\frac78$ inches (22.54 cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a $\frac{5}{16}$ inch (0.8 cm) space between them. The top margin should begin 1 inch (2.54 cm) from the top edge of the page. The bottom margin should be $1\frac{1}{8}$ inches (2.86 cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4 paper, approximately $1\frac{5}{8}$ inches (4.13 cm) from the bottom edge of the page. Please number any displayed equations. It is important for readers to be able to refer to any particular equation. Wherever Times is specified, Times Roman may also be used. Main text should be in 10-point Times, single-spaced. Section headings should be in 10 or 12 point Times. All paragraphs should be indented 1 pica (approx.~$\frac{1}{6}$ inch or 0.422 cm). Figure and table captions should be 9-point Roman type as in \cref{fig:onecol}. List and number all bibliographical references in 9-point Times, single-spaced, at the end of your response. When referenced in the text, enclose the citation number in square brackets, for example~\cite{Alpher05}. Where appropriate, include the name(s) of editors of referenced books. \begin{figure}[t] \centering \fbox{\rule{0pt}{0.5in} \rule{0.9\linewidth}{0pt}} \caption{Example of caption. 
It is set in Roman so that mathematics (always set in Roman: $B \sin A = A \sin B$) may be included without an ugly clash.} \label{fig:onecol} \end{figure} To avoid ambiguities, it is best if the numbering for equations, figures, tables, and references in the author response does not overlap with that in the main paper (the reviewer may wonder if you talk about \cref{fig:onecol} in the author response or in the paper). See \LaTeX\ template for a workaround. \subsection{Illustrations, graphs, and photographs} All graphics should be centered. Please ensure that any point you wish to make is resolvable in a printed copy of the response. Resize fonts in figures to match the font in the body text, and choose line widths which render effectively in print. Readers (and reviewers), even of an electronic copy, may choose to print your response in order to read it. You cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic. When placing figures in \LaTeX, it is almost always best to use \verb+\includegraphics+, and to specify the figure width as a multiple of the line width as in the example below {\small\begin{verbatim} \usepackage{graphicx} ... \includegraphics[width=0.8\linewidth] {myfile.pdf} \end{verbatim} } {\small \bibliographystyle{ieee_fullname} \section{Introduction} \label{sec:intro} Recently, researchers have developed object detection methods in huge progress \cite{girshick2015fast,ren2015faster,redmon2016yolo,liu2016ssd,yolov4,Wang_2021_CVPR,yolox}. While the industry pursues high-performance object detection methods with real-time constraints, researchers focus on designing one-stage detectors \cite{liu2016ssd,redmon2016yolo,lin2017focal,yolov3,yolov4} with efficient network architectures ~\cite{detnas,spnas,maedet,NEURIPS2018_75fc093c,jiang2022giraffedet} and advanced training stages \cite{lin2017feature,tan2020efficientdet,jiang2022giraffedet,yolov3,ghiasi2019fpn}. Especially, YOLOv5/6/7\cite{yolov5,yolov6,yolov7}, YOLOX \cite{yolox} and PP-YOLOE \cite{yoloe} have achieved significant AP-Latency trade-offs on COCO, making YOLO series object detection methods widely used in the industry. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{Fig/curve.pdf} \end{center} \caption{Latency-accuracy trade-off of models for DAMO-YOLO and other state-of-the-art object detectors.} \label{fig:coco_stoa} \end{figure} Although object detection has achieved great progress, there are still new techs that can be brought in to further improve performance. Firstly, the network structure plays a critical role in object detection. Darknet holds a dominant position in the early stages of YOLO history \cite{redmon2016yolo,yolov2,yolov3,yolov4,yolov5,yolox}. Recently, some works have investigated other efficient networks for their detectors, \ie, YOLOv6~\cite{yolov6} and YOLOv7~\cite{yolov7}. However, these networks are still manually designed. Thanks to the development of the Neural Architecture Search (NAS), there are many detection-friendly network structures found through the NAS techs~\cite{detnas,spnas,maedet}, which have shown great superiority over previous manually designed networks. Therefore, we take advantage of the NAS techs and import MAE-NAS~\cite{maedet}\footnote{ \href{https://github.com/alibaba/lightweight-neural-architecture-search}{https://github.com/alibaba/lightweight-neural-architecture-search}. A demo can be found at the \href{https://modelscope.cn/studios/damo/TinyNAS/summary}{ModelScope}. 
} for our DAMO-YOLO. MAE-NAS is a heuristic and training-free neural architecture search method without supernet dependence and can be utilized to obtain backbones at different scales. It can produce ResNet-like / CSP-like structures with spatial pyramid pooling and focus modules. \begin{table} \begin{center} \caption{CSP-Darknet vs MAE-NAS backbones under the DAMO-YOLO framework at different scales.} \label{tab:head_ab_temp} \setlength{\tabcolsep}{3pt} \begin{tabular}{lccc} \toprule Model & Backbone & AP & Latency(ms) \\ \midrule DAMO-YOLO-S & CSP-Darknet & 44.9 & 3.92 \\ DAMO-YOLO-S & MAE-ResNet & 45.6 & 3.83 \\ DAMO-YOLO-S & MAE-CSP & 45.3 & 3.79 \\ DAMO-YOLO-M & MAE-ResNet & 48.0 & 5.64 \\ DAMO-YOLO-M & MAE-CSP & 48.7 & 5.60 \\ \bottomrule \end{tabular} \end{center} \end{table} Secondly, it is crucial for a detector to learn sufficiently fused information between high-level semantic and low-level spatial features, which makes the neck a vital part of the whole framework. The importance of the neck has also been discussed in other works~\cite{jiang2022giraffedet,ghiasi2019fpn, wang2019panet, tan2020efficientdet}. Feature Pyramid Network (FPN)~\cite{ghiasi2019fpn} has been proven effective at fusing multi-scale features. Generalized-FPN (GFPN)~\cite{jiang2022giraffedet} improves FPN with a novel queen-fusion. In DAMO-YOLO, we design a Reparameterized Generalized-FPN (RepGFPN). It is based on GFPN but incorporates an accelerated queen-fusion, efficient layer aggregation network (ELAN) connections and re-parameterization. To strike a balance between latency and performance, we conducted a series of experiments on the relative importance of the neck and the head of the detector and found that a ``large neck, small head'' design leads to better performance. Hence, we discard the detector head used in previous YOLO series works~\cite{redmon2016yolo,yolov2,yolov3,yolov4,yolov5,yolox,yoloe} and keep only a task projection layer. The saved computation is moved to the neck. Besides the task projection module, there are no other trainable layers in the head, so we name our detector head ZeroHead. Coupled with our RepGFPN, ZeroHead achieves state-of-the-art performance, which we believe will bring insights to other researchers. In addition, dynamic label assignment, such as OTA~\cite{ge2021ota} and TOOD~\cite{tood}, is widely acclaimed and achieves significant improvements compared to static label assignment~\cite{zhu2020autoassign}. However, the misalignment problem is still unsolved in these works. We propose a better solution called AlignOTA to balance the importance of classification and regression, which can partly solve the problem. Finally, Knowledge Distillation (KD) has been proven effective in boosting small models through supervision by a larger model. This technique fits the design of real-time object detection well. Nevertheless, applying KD to the YOLO series sometimes fails to achieve significant improvements, as hyperparameters are hard to optimize and the features carry too much noise. In our DAMO-YOLO, we first make distillation great again on models of all sizes, especially on small ones. As shown in Fig.\ref{fig:coco_stoa}, with the above improvements we propose a series of models that exceed the state of the art by a large margin, \eg, the DAMO-YOLO-S model achieves 46.8 mAP, outperforming YOLOv6-S (43.4 mAP) and YOLOE-S (43.1 mAP) at comparable latency.
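To make the ``large neck, small head'' idea concrete, the sketch below shows what a head reduced to a task projection layer could look like. It is only a schematic in PyTorch-style Python, not the released implementation; the channel counts, the number of classes and the size of the regression output are placeholder values.
\begin{verbatim}
import torch.nn as nn

class TaskProjectionHead(nn.Module):
    # schematic "task projection only" head: no convolutional stacks,
    # just one 1x1 projection per task and per feature level
    def __init__(self, in_channels=(96, 192, 384), num_classes=80,
                 reg_channels=68):
        super().__init__()
        # reg_channels = 4 * 17 would correspond to a DFL-style
        # discretized box regression with 17 bins per side (placeholder)
        self.cls_proj = nn.ModuleList(
            [nn.Conv2d(c, num_classes, 1) for c in in_channels])
        self.reg_proj = nn.ModuleList(
            [nn.Conv2d(c, reg_channels, 1) for c in in_channels])

    def forward(self, feats):
        # feats: multi-scale feature maps produced by the neck
        cls_outs = [p(f) for p, f in zip(self.cls_proj, feats)]
        reg_outs = [p(f) for p, f in zip(self.reg_proj, feats)]
        return cls_outs, reg_outs
\end{verbatim}
The computation removed from the head is what DAMO-YOLO reallocates to a deeper and wider RepGFPN neck.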
In summary, the contributions are three-fold: \begin{enumerate} \item This paper proposes a new detector called \textbf{DAMO-YOLO}, which extends from YOLO but with more new techs, including MAE-NAS backbones, RepGFPN neck, ZeroHead, AlignedOTA and distillation enhancement. \item DAMO-YOLO outperforms the state-of-the-art detectors (\eg YOLO series) on public COCO datasets. \item A suite of models with various scales is presented in DAMO-YOLO (tiny/small/medium) to support different deployments. The code and pre-trained models are released at \href{https://github.com/tinyvision/damo-yolo}{https://github.com/tinyvision/damo-yolo}, with ONNX and TensorRT supported. \end{enumerate} \begin{figure*} \begin{center} \includegraphics[width=1.0\textwidth]{Fig/damo-yolo-framework.pdf} \end{center} \vspace{-2mm} \caption{ Overview of the network architecture of DAMO-YOLO. 1) MAE-NAS as backbone to extract multi-scale feature maps; 2) Efficient RepGFPN as neck to refine and fuse high-level semantic and low-level spatial features; 3) ZeroHead is presented which only contains a task projection layer for each loss. } \label{fig:network} \end{figure*} \section{DAMO-YOLO} \label{sec:xxyolo} In this section, we introduce each module of DAMO-YOLO in detail, including Neural Architecture Search (NAS) backbones, efficient Reparameterized Generalized-FPN (RepGFPN) neck, ZeroHead, AlignedOTA label assignment and distillation enhancement. The whole framework of DAMO-YOLO is displayed in Fig.\ref{fig:network}. \subsection{MAE-NAS Backbone} Instead of scaling technology, we use MAE-NAS\cite{maedet} to obtain optimal networks under different computational budgets. MAE-NAS constructs an alternative proxy based on information theory to rank initialized networks without training. Therefore, the search process only takes a few hours, which is much lower than the training costs. Following previous works \cite{maedet}, we design our backbones in the vanilla convolution network space with a new search block ``k1kx'', which is similar to blocks used in Darknet-53~\cite{yolov3}. Meanwhile, inspired by YOLOv6~\cite{yolov6}, we directly use the GPU inference latency, not FLOPs, as the target budget. After searching, we apply Spatial Pyramid Pooling (SPP)~\cite{he2015spatial}, Focus~\cite{yolov5} and Cross Stage Partial (CSP)~\cite{wang2020cspnet} modules into the final backbones. The performance comparisons of CSP-Darknet and our MAE-NAS backbones under our DAMO-YOLO with different scales are listed in Table.\ref{tab:head_ab_temp}, which implies the effectiveness of MAE-NAS backbones. In this table, ``MAE-ResNet'' means there are only SPP and Focus modules in the MAE-NAS backbones, and ``MAE-CSP'' means there are CSP modules in it as well. Besides, ``S'' (Small) and ``M'' (Medium) represent different scales of backbones. Considering the trade-off between performance and inference speed, we use ``MAE-ResNet'' in ``T'' (Tiny) and ``S'' scales and ``MAE-CSP'' in ``M'' scale in the final settings, as shown in Table.\ref{coco_sota}. \subsection{Efficient RepGFPN} Feature pyramid network aims to aggregate different resolution features extracted from the backbone, which has been proven to be a critical and effective part of object detection~\cite{ghiasi2019fpn, wang2019panet, tan2020efficientdet}. The conventional FPN~\cite{ghiasi2019fpn} introduces a top-down pathway to fuse multi-scale features. 
Considering the limitation of one-way information flow, PAFPN~\cite{wang2019panet} adds an additional bottom-up path aggregation network, but at higher computational cost. BiFPN~\cite{tan2020efficientdet} removes nodes that only have one input edge and adds a skip connection from the original input at the same level. In~\cite{jiang2022giraffedet}, Generalized-FPN (GFPN) is proposed to serve as the neck and achieves SOTA performance, as it can sufficiently exchange high-level semantic and low-level spatial information. In GFPN, multi-scale features are fused using features of different levels from both previous and current layers. What is more, the $\log_2(n)$ skip-layer connections provide more effective information transmission and can scale to deeper networks. \begin{table} \begin{center} \caption{Ablation study on the depth and width of our neck. ``Depth'' denotes the number of repetitions of the bottleneck in the fusion block. ``Width'' indicates the channel dimensions of the feature maps.} \label{tab:featuremap_scale} \setlength{\tabcolsep}{3pt} \begin{tabular}{c|cccc} \toprule Depth & Width & Latency(ms) & GFLOPs & AP \\ \midrule 2 & (192, 192, 192) & 3.53 & 34.9 & 44.2 \\ 2 & (128, 256, 512) & 3.72 & 36.1 & 45.1 \\ 3 & (160, 160, 160) & 3.91 & 38.2 & 44.9 \\ \textbf{3} & \textbf{(96, 192, 384)} & \textbf{3.83} & \textbf{37.8} & \textbf{45.6} \\ 4 & (64, 128, 256) & 3.85 & 37.2 & 45.3 \\ \bottomrule \end{tabular} \end{center} \end{table} When we directly replace the modified PANet with GFPN on modern YOLO-series models, we achieve higher precision. However, the latency of the GFPN-based model is much higher than that of the modified-PANet-based model. Analyzing the structure of GFPN, we attribute this to the following aspects: 1) feature maps of different scales share the same channel dimension; 2) the queen-fusion operation cannot meet the latency requirement of a real-time detection model; 3) the convolution-based cross-scale feature fusion is not efficient. Based on GFPN, we propose a novel Efficient-RepGFPN for real-time object detection, which builds on the following insights: 1) Due to the large difference in FLOPs between feature maps of different scales, it is difficult to have all scales share the same channel dimension under a limited computation budget. Therefore, in the feature fusion of our neck, we assign different channel dimensions to feature maps of different scales. Performance with shared and with scale-dependent channel dimensions, as well as the trade-off between neck depth and width, is compared in Table.\ref{tab:featuremap_scale}. We can see that by flexibly controlling the number of channels at different scales, we achieve much higher accuracy than by sharing the same channels across all scales. The best performance is obtained when the depth equals 3 and the width equals (96, 192, 384). 2) GFPN enhances feature interaction through queen-fusion, but this also introduces many extra upsampling and downsampling operators. The benefits of these upsampling and downsampling operators are compared in Table.\ref{tab:queen_fusion}. We can see that the additional upsampling operator results in a latency increase of 0.6 ms while improving accuracy by only 0.3 mAP, far less than the benefit of the additional downsampling operator. Therefore, under the constraints of real-time detection, we remove the extra upsampling operation in queen-fusion.
3) In the feature fusion block, we first replace original 3x3-convolution-based feature fusion with CSPNet and obtain 4.2 mAP gain. Afterward, we upgrade CSPNet by incorporating re-parameterization mechanism and connections of efficient layer aggregation networks (ELAN)~\cite{yolov7}. Without bringing extra huge computation burden, we achieve much higher precision. The results of comparison are listed in Table.\ref{tab:fusion}. \begin{table} \begin{center} \caption{Ablation Study on the connection of queen-fusion. $\searrow$ and $\nearrow$ denote the upsampling and downsampling operations respectively.} \label{tab:queen_fusion} \setlength{\tabcolsep}{3pt} \begin{tabular}{cc|cccc} \toprule $\searrow$ & $\nearrow$ & Latency & FLOPs & AP \\ \midrule & & 3.62 & 33.3 & 44.2 \\ \checkmark & & 4.19 & 37.7 & 44.5 \\ & \textbf{\checkmark} & \textbf{3.83} & \textbf{37.8} & \textbf{45.6} \\ \checkmark & \checkmark & 4.58 & 42.8 & 45.9 \\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Ablation study on the feature fusion style. CSP denotes the Cross-Stage-Partial Connection. Reparam denotes applying re-parameter mechanism on the bottleneck of CSP. ELAN denotes the connections of efficient layer aggregation networks.} \label{tab:fusion} \setlength{\tabcolsep}{3pt} \begin{tabular}{l|ccccc} \toprule Merge$-$Style & Latency & FLOPs & AP \\ \midrule Conv & 3.64 & 44.3 & 40.2 \\ CSP & 3.72 & 36.7 & 44.4 \\ CSP + Reparam & 3.72 & 36.7 & 45.0 \\ \textbf{CSP + Reparam + ELAN} & \textbf{3.83} & \textbf{37.8} & \textbf{45.6} \\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{ZeroHead and AlignOTA} In recent advancements of object detection, decoupled head is widely used~\cite{yolox,yolov6,yoloe}. With the decoupled head, those models achieve higher AP, while the latency grows significantly. To trade off the latency and the performance, we have conducted a series of experiments to balance the importance of neck and head, and the results are shown in Table.\ref{tab:neck_head_tradeoff}. From the experiments, we find that ``large neck, small head'' would lead to better performance. Hence, we discard the decoupled head in previous works~\cite{yolox,yolov6,yoloe}, but only left a task projection layer, \ie, one linear layer for classification and one linear layer for regression. We named our head as ZeroHead as there is no other training layers in our head. ZeroHead can save computations for the heavy RepGFPN neck to the greatest extent. It is worth noticing that ZeroHead essentially can be considered as a coupled head, which is quite a difference from the decoupled heads in other works~\cite{yolox,yolov5,yolov6,yoloe}. \begin{table} \begin{center} \caption{Studies on the balance between RepGFPN and ZeroHead. } \label{tab:neck_head_tradeoff} \setlength{\tabcolsep}{3pt} \begin{tabular}{lccccc} \toprule Neck(width/depth) & Head(width/depth) & Latency(ms) & AP \\ \midrule \textbf{(1.0/1.0)} & \textbf{(1.0/0.0)} & \textbf{3.83} & \textbf{45.6} \\ (1.0/0.50) & (1.0/1.0) & 3.79 & 44.9 \\ (1.0/0.33) & (1.0/2.0) & 3.85 & 43.7 \\ (1.0/0.0) & (1.0/3.0) & 3.87 & 41.2 \\ \bottomrule \end{tabular} \end{center} \end{table} In the loss after head, following GFocal~\cite{li2020generalized}, we use Quality Focal Loss (QFL) for classification supervision, and Distribution Focal Loss (DFL) and GIOU loss for regression supervision. QFL encourages to learn a joint representation of classification and localization quality. 
DFL provides more informative and precise bounding box estimations by modeling their locations as General distributions. The training loss of the proposed DAMO-YOLO is formulated as: \begin{equation} Loss = \alpha\;loss_{QFL} + \beta\;loss_{DFL} + \gamma\;loss_{GIOU} \end{equation} \begin{table} \begin{center} \caption{The comparison of different on MSCOCO val dataset. } \label{tab:label_assignment_comparison} \setlength{\tabcolsep}{3pt} \begin{tabular}{lccccc} \toprule Assigner & AP \\ \midrule ATSS~\cite{zhu2020autoassign} & 43.1 \\ simOTA~\cite{ge2021ota} & 44.2 \\ TOOD~\cite{tood} & 45.4 \\ \textbf{AlignOTA} & \textbf{45.6} \\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Studies on the distillation methods for DAMO-YOLO on MSCOCO val dataset. The baseline of student is 38.2.} \label{tab:distill} \setlength{\tabcolsep}{3pt} \begin{tabular}{lcc} \toprule Methods & Epochs & AP \\ \midrule Mimicking \cite{li2017mimicking} & 36 & 40.2 \\ MGD \cite{yang2022masked} & 36 & 39.6 \\ CWD \cite{shu2021channel} & 36 & \textbf{40.7} \\ \bottomrule \end{tabular} \end{center} \end{table} Besides head and loss, label assignment is a crucial component during detector training, which is responsible for assigning classification and regression targets to pre-defined anchors. Recently, dynamic label assignment such as OTA~\cite{ge2021ota} and TOOD~\cite{tood} is widely acclaimed and achieves significant improvements compares to static one~\cite{zhu2020autoassign}. Dynamic label assignment methods assign labels according to the matching cost between prediction and ground truth, \eg, OTA~\cite{ge2021ota}. Although the alignment of classification and regression in loss is widely studied~\cite{tood,li2020generalized}, the alignment between classification and regression in label assignment is rarely mentioned in current works. The misalignment of classification and regression is a common issue in static assignment methods~\cite{zhu2020autoassign}. Though dynamic assignment alleviates the problem, it still exists due to the unbalance of classification and regression losses, \eg, CrossEntropy and IoU. To solve this problem, we introduce the focal loss~\cite{lin2017focal} into the classification cost, and use the IoU of prediction and ground truth box as the soft label, which is formulated as follows: \begin{equation} \begin{split} AssignCost &= C_{reg} + C_{cls} \\ IoU &= IoU(reg_{gt}, reg_{pred}) \\ C_{reg} &= -IoU \\ C_{cls} &= |IoU - cls_{pred}| \times CE(cls_{pred}, IoU) \end{split} \end{equation} With this formulation, we are able to choose the classification and regression aligned samples for each target. Besides the aligned matching cost, following OTA~\cite{ge2021ota}, we form the solution of aligned matching cost from a global perspective. We name our label assignment as AlignOTA. The comparison of label assignment methods is conducted in Table.\ref{tab:label_assignment_comparison}. We can see that AlignOTA outperforms all other label assignment methods. \begin{figure}[t] \centering \includegraphics[width=0.48 \textwidth]{Fig/distill_loss.pdf} \caption{The classification loss and AP curves of distillation. The distillation loss weight is set to 0.5, 2, and 10 respectively. 
The classification loss converges significantly faster and reaches higher accuracy when the distillation loss weight is set to 0.5.} \label{fig:distill} \end{figure} \subsection{Distillation Enhancement} Knowledge Distillation (KD) \cite{hinton2015distilling} is an effective method to further boost the performance of pocket-size models. Nevertheless, applying KD to the YOLO series sometimes fails to achieve significant improvements, as hyperparameters are hard to optimize and the features carry too much noise. In DAMO-YOLO, we first make distillation great again on models of all sizes, especially on the small ones. We adopt feature-based distillation to transfer dark knowledge, which can distill both recognition and localization information from the intermediate feature maps \cite{huang2022masked}. We conduct fast validation experiments to choose a suitable distillation method for DAMO-YOLO. The results are shown in Table.\ref{tab:distill}. We conclude that CWD is better suited to our models, while MGD performs worse than Mimicking because its complex hyperparameters make it less general. Our proposed distillation strategy is split into two stages: 1) The teacher distills the student in the first stage (284 epochs) on the \textbf{strong mosaic} domain. Facing the challenging augmented data distribution, the student can extract information smoothly under the teacher's guidance. 2) The student fine-tunes itself on the \textbf{no mosaic} domain in the second stage (16 epochs). We do not adopt distillation in this stage because, over such a short period, the teacher's guidance would degrade the student's performance by pulling it towards an unfamiliar domain (\ie, the no-mosaic domain). A longer distillation schedule would weaken this effect but is expensive, so as a trade-off we let the student train independently in the second stage. \begin{table*}[t!] \begin{center} \caption{Comparison with the state-of-the-art single-model detectors on MSCOCO test-dev.
* denotes using distillation.} \label{coco_sota} \setlength{\tabcolsep}{2pt} \begin{tabular}{l|c|c|c|c|c| c c c c c} \toprule Method & Size & Latency(ms) & GFLOPs & Params(M) & AP & AP$^{50}$ & AP$^{75}$ & AP$^S$ & AP$^M$ & AP$^L$ \\ \midrule YOLOX-T & 416 & 1.78 & 6.5 & 5.1 & 32.8 & - & - & - & - & - \\ YOLOX-S & 640 & 3.20 & 26.8 & 9.0 & 40.5 & - & - & - & - & - \\ YOLOX-M & 640 & 6.46 & 73.8 & 25.3 & 46.9 & - & - & - & - & - \\ YOLOX-L & 640 & 11.44 & 155.6 & 54.2 & 49.7 & - & - & - & - & - \\ \midrule YOLOv5-N & 640 & 2.23 & 4.5 & 1.9 & 28.0 & 45.7 & - & - & - & - \\ YOLOv5-S & 640 & 3.04 & 16.5 & 7.2 & 37.4 & 56.8 & - & - & - & - \\ YOLOv5-M & 640 & 5.71 & 49.0 & 21.2 & 45.4 & 64.1 & - & - & - & - \\ YOLOv5-L & 640 & 8.92 & 109.1 & 46.5 & 49.0 & 67.3 & - & - & - & - \\ \midrule YOLOv6-T & 640 & 2.53 & 36.7 & 15.0 & 40.3 & 56.6 & - & - & - & - \\ YOLOv6-S & 640 & 3.10 & 44.2 & 17.0 & 43.5 & 60.4 & - & - & - & - \\ YOLOv6-M\textsuperscript{*} & 640 & 5.72 & 82.2 & 34.3 & 49.5 & 66.8 & - & - & - & - \\ YOLOv6-L\textsuperscript{*} & 640 & 9.87 & 144.0 & 58.5 & 52.5 & 70.0 & - & - & - & - \\ \midrule YOLOv7-T-silu & 640 & 3.13 & 13.7 & 6.2 & 38.7 & 56.7 & 41.7 & 18.8 & 42.4 & 51.9 \\ YOLOv7 & 640 & 9.08 & 104.7 & 36.9 & 51.2 & 69.7 & 55.9 & 31.8 & 55.5 & 65.0 \\ \midrule YOLOE-S & 640 & 3.21 & 17.4 & 7.9 & 43.0 & 60.5 & 46.6 & 23.2 & 46.4 & 56.9 \\ YOLOE-M & 640 & 6.67 & 49.9 & 23.4 & 49.0 & 66.5 & 53.0 & 28.6 & 52.9 & 63.8 \\ YOLOE-L & 640 & 9.94 & 110.1 & 52.2 & 51.4 & 68.9 & 55.6 & 31.4 & 55.3 & 66.1 \\ \midrule DAMO-YOLO-T & 640 & 2.78 & 18.1 & 8.5 & 41.8 & 58.0 & 45.2 & 23.0 & 46.1 & 58.5 \\ DAMO-YOLO-T\textsuperscript{*} & 640 & 2.78 & 18.1 & 8.5 & 43.0 & 59.4 & 46.6 & 23.3 & 47.4 & 61.0 \\ DAMO-YOLO-S & 640 & 3.83 & 37.8 & 16.3 & 45.6 & 61.9 & 49.5 & 25.9 & 50.6 & 62.5 \\ DAMO-YOLO-S\textsuperscript{*} & 640 & 3.83 & 37.8 & 16.3 & 46.8 & 63.5 & 51.1 & 26.9 & 51.7 & 64.9 \\ DAMO-YOLO-M & 640 & 5.62 & 61.8 & 28.2 & 48.7 & 65.5 & 53.0 & 29.7 & 53.1 & 66.1 \\ DAMO-YOLO-M\textsuperscript{*} & 640 & 5.62 & 61.8 & 28.2 & 50.0 & 66.8 & 54.6 & 30.4 & 54.8 & 67.6 \\ \bottomrule \end{tabular} \end{center} \end{table*} In DAMO-YOLO, the distillation is equipped with two advanced enhancements: 1) Align Module. On the one hand, it is a linear projection layer to adapt student feature’s to the same resolution (${C, H, W}$) as teacher's. On the other hand, forcing the student to approximate teacher feature directly leads to minor gains compared to the adaptive imitation \cite{wang2019distilling}. 2) Channel-wise Dynamic Temperature. Inspired by PKD \cite{cao2022pkd}, we add a normalization to teacher and student features, to weaken the effect the difference of real values brings. After subtracting the mean, standard deviation of each channel would function as temperature coefficient in KL loss. Besides, we present two key observations for a better usage of distillation. One is the balance between distillation and task loss. As shown in Fig.\ref{fig:distill}, when we focus more on distillation (weight=10), the classification loss in student has a slow convergence, which results in a negative effect. The small loss weight\footnote{In our method, the cosine weight is utilized.} (weight=0.5) hence is necessary to strike a balance between distillation and classification. The other is the shallow head of detector. We found that the proper reduction of the head depth is beneficial to the feature distillation on neck. 
The reason is that when the gap between the final outputs and the distilled feature map is smaller, distillation has a stronger impact on the final decision. \section{Implementation Details} Our models are trained for 300 epochs with the SGD optimizer. The weight decay and SGD momentum are 5e-4 and 0.9, respectively. The initial learning rate is 0.4 with a batch size of 256, and the learning rate decays according to a cosine schedule. Following the YOLO series~\cite{yolox,yolov5,yolov6,yolov7}, model exponential moving average (EMA) and grouped weight decay are used. To enhance data diversity, Mosaic~\cite{yolov4,yolov5} and Mixup~\cite{zhang2017mixup} augmentation are common practice. However, recent advances\cite{zoph2020learning,chen2021scale} show that properly designed box-level augmentation is crucial in object detection. Inspired by this, we apply Mosaic and Mixup for image-level augmentation and additionally employ the box-level augmentation of SADA~\cite{chen2021scale} after the image-level augmentation for more robust training. \section{Comparison with the SOTA} The final performance compared with the state of the art is listed in Table.\ref{coco_sota}. For a comprehensive view, we list the results with and without distillation. The comparison shows that our DAMO-YOLO family outperforms all YOLO series detectors in accuracy and speed, which indicates that our method detects objects effectively and efficiently. \section{Conclusion} In this paper, we propose a new object detection method called DAMO-YOLO, whose performance is superior to other methods in the YOLO series. Its advantages come from new techniques, including the MAE-NAS backbone, the efficient RepGFPN neck, ZeroHead, AlignOTA label assignment and distillation enhancement. {\small \bibliographystyle{ieee_fullname}
\section{Introduction}\label{sec:Introduction} Calorimeter showers initiated by a primary particle can be understood as a sequence of stochastic interaction processes of the primary and all secondary particles with the material. The four-dimensional spatial and temporal development of the shower and its energy depositions can be described with good accuracy by numerical methods taking into account the relevant cross sections, e.g., using the GEANT4 program \cite{Agostinelli:2002hh,Allison:2006ve,Allison:2016}. Sequential simulations of particle showers, however, require significant computing resources. The most recent approach for simulating particle showers in calorimeters are so-called generative adversarial networks (GAN) \cite{2014arXiv1406.2661G,2016arXiv161207828S,Hooberman:2017nips,Paganini:2017hrr,Paganini:2017dwg}. According to this concept, the temporal sequence of the shower development is first marginalized by training a generator and the three-dimensional spatial distribution of the energy depositions is then generated directly. At first glance, this ansatz appears similar to common detector-specific parameterizations of detailed simulations which are usually developed by experts in the field. With the GAN concept, however, a high-dimensional probability distribution for spatial energy depositions is obtained automatically either directly from measured data, or alternatively from the above-mentioned detailed simulations. In order to produce libraries of network-generated particle showers with the relevant properties of measured showers, the probability distribution of realistic energy depositions needs to be encoded in the numerous trainable parameters of the network. The key challenge is to build a converging framework which is capable of approximating the entire high-dimensional probability distribution. In contrast to the above-mentioned sequential shower simulations, which calculate many stochastic processes of individual interactions, the entire spatial energy depositions of the shower are determined in a single evaluation of the generator network. In order to obtain the realization of a single shower, the stochastic process is incorporated through a set of random numbers on input to the probability distribution coded in the network. These random numbers ensure that none of the generated particle showers look alike. The speed for calculating a shower realization is several orders of magnitude faster than detailed shower simulations, since in the network only a fixed sequence of linear algebra operations is carried out together with the evaluation of the activation functions. Current research is assessing methods of training the generator to learn the high-dimensional probability distribution for calorimeter energy depositions. The GAN concept consists of two networks working in opposition to one another. The generator network is meant to learn the probability distribution which is encoded in realistic data sets. The second network is used to evaluate the differences between the generated data sets and the realistic data sets. The feedback of the second network to the generator network is used to improve the probability distribution encoded in the generator. Conversely, the second network is trained to distinguish between ever smaller remaining differences between the generated data sets and the realistic data sets. In this dual training process, the probability distribution of energy depositions is sampled from realistic data and transferred to the generator network. 
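As an illustration of this alternating scheme, the training can be organized as in the following minimal sketch. The sketch is framework-agnostic Python pseudo-code: the network and update functions are placeholders, the shower shape of 12$\times$15$\times$7 pixels anticipates the calorimeter geometry introduced below, and the number of evaluator updates per generator update is a tunable hyperparameter.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_real_showers(batch_size):
    # stand-in for drawing detailed-simulation (or measured) showers
    return rng.random((batch_size, 12, 15, 7))

def generate(z):
    # stand-in for the generator network G(z): noise -> full shower
    return np.zeros((z.shape[0], 12, 15, 7))

def update_evaluator(real, fake):
    # stand-in: one training step of the evaluating network
    pass

def update_generator(z):
    # stand-in: one training step of the generator, driven by the
    # evaluating network's feedback
    pass

batch_size, n_evaluator_steps = 256, 5
for iteration in range(10000):
    for _ in range(n_evaluator_steps):
        z = rng.uniform(-1, 1, size=(batch_size, 10))
        update_evaluator(sample_real_showers(batch_size), generate(z))
    update_generator(rng.uniform(-1, 1, size=(batch_size, 10)))
\end{verbatim}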
In the original work on GANs for simulating particle showers in calorimeters, the evaluating network was a binary classifier that returned as feedback a probability for each shower to be real. So far it turned out that here the generator training was only partially successful. Instead of using binary classification in the evaluating network, recently a high-dimensional distance measure between the probability distributions of example data and generated showers was successfully used for simulating an atmospheric calorimeter with a single readout layer \cite{Erdmann:2018kuh}. In this simulation, the adversarial training was performed using the Wasserstein distance as it was applied in computer science research in the past \cite{2017arXiv170107875A,2017arXiv170400028G}. This variant of the GAN concept is referred to as WGAN (Wasserstein GAN). In this paper we apply the WGAN concept to the concrete setup of a test beam for a realistic electromagnetic sampling calorimeter, which consists of several pixelated readout layers interspersed with absorbers. Primary beam particles are electrons with different energies $E$ impinging perpendicularly at different positions $(P_x,P_y)$ on the front surface of the calorimeter. To generate dedicated particle showers for the initial conditions $(P_x,P_y,E)$ of an electron, the architecture of the WGAN is supported by two additional networks constraining these beam conditions. The concept of using conditioning networks together with generative adversarial networks has been explored before \cite{2014arXiv1406.2661G,Hooberman:2017nips,Paganini:2017hrr,Erdmann:2018kuh}. The requirements for the generation of electromagnetic calorimeter showers are considerable. In each layer of the calorimeter, the transverse energy depositions must correspond to those of a particle shower. For the description of the longitudinal shower development, layer-wise correlations of reconstructed observables are also decisive. Therefore, a considerable proportion of our study is devoted to the quality assessment of the generated showers. This concerns aspects that are directly trained for, such as the initial beam conditions, as well as physical aspects which have to be learned during the adversarial training. Our paper is structured as follows. We start by presenting the test beam experiment for the calorimeter and describe the data simulations used as sample data sets for training the WGAN. After that we explain the network structure of the WGAN together with the usage of the Wasserstein distance and the constrainer networks to respect the beam conditions. We then examine in detail the quality of the generated showers by comparing in particular the generated calorimeter showers with showers simulated using the GEANT4 program. Finally, we present our conclusions. \section{Experimental setup}\label{sec:HGCal} \subsection{Calorimeter configuration}\label{subsec:HGCal} \begin{figure}{b} \centering \includegraphics[width=0.35\textwidth]{coordinateMapping.PDF} \caption{Full wafer pixelation. Shaded pixels are of constant $x$ and $y$ coordinates, respectively.} \label{fig:CoordinateMapping} \end{figure} \begin{figure*}{b} \centering \includegraphics[width=0.7\textwidth]{eventDisplaysGeant4.PDF} \caption{Energy depositions of an electromagnetic shower induced by a $20~$GeV electron (top) and $90~$GeV electron (bottom) with different impact positions (X,Y) simulated using GEANT4. 
The 3D shower images consist of 12$\times$15$\times$7 pixels.} \label{fig:GEANT4Displays} \end{figure*} To discuss generative shower models in the context of a realistic detector, we choose from many possible calorimeter setups a configuration of the electromagnetic compartment of a CMS High Granularity Calorimeter (HGCAL) prototype \cite{CMS:2008CMSExperiment,Contardo:2018TechnicalReportHGCal,Martelli:2018HGCalOverview}. Various other HGCAL prototypes with different sampling configurations have already been tested with beams at CERN and Fermilab since 2016 \cite{Jain:2017BeamtestSummary2016,Quast:2018BeamtestSummary2017}. This specific configuration incorporates seven sensitive layers covering 2.8 - 16.2 radiation lengths ($X_{0}$) of electromagnetic showers and was tested with highly energetic secondary electrons at CERN's Super Proton Synchrotron test beam facility in September 2017. Each sensor is made of one 6-inch hexagonal silicon n-type wafer. Its active thickness at full depletion amounts to 300$~\mu$m. Most of the 135 individual pixels on each wafer are 1$~$cm$^2$-sized hexagons. Pixels at the edges have various shapes. Only full and half pixels at the edges are considered in this study. Furthermore, centrally placed calibration pixels are treated as dead pixels here and hence are excluded from the shower measurement. Instead of using directly hexagonal geometries, a coordinate system is constructed such that the pixel positions are indicated in a 12$\times$15 Cartesian-like frame following \cite{Erdmann:2018kuh}. Shaded hexagons in Figure \ref{fig:CoordinateMapping} illustrate the lines of constant $x$ and $y$ coordinates, respectively. After their assembly to full modules, the wafers are glued to 1.2$~$mm thick copper-tungsten baseplates and subsequently inserted into a hanging file system where they are interspersed by 6$~$mm thick copper- and 4.9$~$mm thick iron-coated lead absorbers. The test beam line is 15$~$m long and adds another 0.27$~X_{0}$ of upstream material. It comprises six gas-filled delay wire chambers (DWC) \cite{Spanggaard:1998H2DWCs} and six scintillation counters. \subsection{Reference dataset}\label{subsec:dataset} The training and the subsequent evaluation of the WGAN performance require a well-defined reference dataset of electromagnetic showers. In general, this set is sampled from an underlying highly dimensional probability density which the WGAN is ultimately supposed to learn. Showers taken from this dataset are referred to as "real" in the following. \newline In this paper, we construct sequences of real showers from simulation of electromagnetic cascades with GEANT4 version 10.2 \cite{Agostinelli:2002hh,Allison:2006ve} using a specific tune of the FTFP\_BERT physics list \cite{Banerjee:2017CMSPhysicsLists}. The geometry of the calorimeter and of the test beam line is implemented within a release of the official CMS offline computing software which is publicly available on GitHub \cite{github}. Similar to real test beam data, energy depositions are converted into units of signal produced by minimum ionizing particles (MIPs) traversing a pixel. \newline Furthermore, in order not to rely on idealized assumptions in the simulation, various constraints of typical beam tests of such a calorimeter are taken into account: \begin{itemize} \item Electrons inducing showers traverse the upstream material in the beam line and impinge perpendicularly onto the calorimeter. 
\item Impact positions are extrapolated from straight line tracks computed from four position measurements as they would be measured by four DWCs in the beam line with 200$~\mu$m resolution each. \item The beam profile is modeled with a rectangular geometry and covers an active area of 6$\times$5$~$cm$^2$. Its energies are smeared with a 1\% uncertainty. \item Calorimeter pixels with energy depositions below 2 MIPs are removed from each shower to reject noise contributions. \end{itemize} The dataset for training the WGAN consists of $5\times 100,000$ electromagnetic showers induced by $20~$GeV, $32~$GeV, $50~$GeV, $80~$GeV and $90~$GeV electrons. Due to the smearing of the energy and impact position measurements, the assigned labels deviate from the nominal values at the percentage level. By way of example, Figure \ref{fig:GEANT4Displays} illustrates two real showers induced by one $20~$GeV and one $90~$GeV electron, respectively. For further evaluation, another dataset is constructed from simulated $70~$GeV electrons. This sample serves exclusively to investigate the WGAN's interpolation capacities to energy labels for which it has not been trained. The aforementioned beam energy smearing of 1\% is not applied for this particular dataset. \section{Fast simulation approach}\label{sec:Fastsim} \subsection{Generative adversarial networks (GANs)}\label{subsec:GAN} Generative adversarial networks (GANs) are a widely used concept of generative models that was introduced by Goodfellow et al. in 2014 \cite{2014arXiv1406.2661G}. The framework consists of two adversarial networks, namely a generator network $G$ and a discriminator network $D$. The overall goal of this adversarial framework is to train a generator to be able to generate samples $\tilde{x} = G(z)$ out of noise $z$, which are very similar to real samples $x$. During the training process the generator improves its performance using the feedback provided by the discriminator, which measures the similarity between generated and real samples. Even though traditional GANs show impressive results, the training process is unstable and hard to monitor. Furthermore, GANs often suffer from mode collapsing when the generator is only able to generate data in a subspace of the real distribution. The recently published Wasserstein GAN \cite{2017arXiv170107875A} and its improvement \cite{2017arXiv170400028G} allows for a stabilized training procedure by delivering adequate gradients to the generator, providing a meaningful loss metric not being susceptible to mode collapsing. In the following section, we first introduce the Wasserstein GAN and the method of label conditioning, and then present our network architecture and our training strategy. Finally, we describe the training of our adversarial framework to generate calorimeter showers. \subsection{Wasserstein GANs}\label{subsec:WGAN} In Wasserstein GANs, the Wasserstein-1 metric is used as a similarity measure between the generated samples $\tilde{x} = G(z)$ and the real samples $x$. This distance is also known as Earth mover's distance, because in a figurative sense it defines the cost for moving a distribution onto a target distribution using optimal transport. In the adversarial framework the Wasserstein loss is constructed using the Kantorovich-Rubinstein duality: \begin{equation} L = \sup_{f \in \mathrm{Lip}_1} \left( \mathbb{E}[ f(x) ] - \mathbb{E}[ f(\tilde{x}) ] \right). 
\label{eq:DW} \end{equation} Here, "$\sup_{f \in \mathrm{Lip}_1}$" states that the supremum is over all the 1-Lipschitz functions $f$ after application on the real samples $x$ and generated samples $\tilde{x}$. During the adversarial training, the 1-Lipschitz functions $f$ which fulfill (\ref{eq:DW}) are approximated by the discriminator network $D$. It is called \textit{critic} because it is trained to allow for an estimate of the Wasserstein distance instead of being able to discriminate between real and generated samples. To allow for the approximation of the 1-Lipschitz functions using a neural network, the Lipschitz constraint is enforced by the gradient penalty \cite{2017arXiv170400028G} which extends the objective function to: \begin{equation} \label{eq:wasserstein_loss} L = \mathbb{E}[ D(x) ] - \mathbb{E}[ D(G(z)) ] - \lambda \; \mathbb{E}[(\vert\vert \nabla_{\hat{u}} f_w(\hat{u}) \vert\vert_2 - 1 ) ^2 ]\;. \end{equation} Here, $\lambda$ is a hyperparameter for scaling the gradient penalty. The mixture term \begin{equation} \hat{u} = \varepsilon x + (1-\varepsilon) \tilde{x} \end{equation} states that the Lipschitz constraint is enforced by sampling on straight lines between pairs of generated samples $\tilde{x}$ and real samples $x$. The random sampling is performed by sampling $\varepsilon$ from a uniform distribution $\mathcal{U}(0,1)$. To ensure accurate gradients for the generator, the critic is usually trained for several iterations before one generator update is applied. Thus, in Wasserstein GANs the generator attempts to \textit{minimize} the Wasserstein distance (\ref{eq:wasserstein_loss}) between the generated and the real samples, while the Wasserstein distance is approximated using the critic network by \textit{maximizing} (\ref{eq:wasserstein_loss}). This differs to the traditional GAN setup where, under the assumption of an optimal discriminator, the generator attempts to minimize the Jensen-Shannon divergence. \subsection{Label conditioning}\label{subsec:conditioning} For calorimeter simulations, generated samples must reflect certain label characteristics according to physics laws. The labels define the initial state of the simulation such as the incident particle's kinematics and the degree of possible background activity (pileup) in the calorimeter. However, this label dependency is not ensured for samples generated by a generator which is trained using the WGAN approach. To be able to generate samples which can be associated with explicit labels, the concept of label conditioning introduced by Auxiliary Classifier GANs (AC-GANs) \cite{2016arXiv161009585O} is a widely used concept for generative approaches in physics simulations \cite{Hooberman:2017nips,Paganini:2017hrr,Erdmann:2018kuh}. To advance the WGAN concept to label conditioning we adapt the concept of \cite{Erdmann:2018kuh}. In this specific configuration, the initial state is determined by the electron energy and its impact position. Accordingly, the generator dependency is modified to $G=G(z,E,P)$, and in our setup, besides noise $z$, the generator is given the physics labels of the electron energy $E$ and the impact position coordinates $P=(P_x,P_y)$ as input. Furthermore, we also provide the critic with the label information. To constrain the generator and to evaluate how well the label characteristics are reflected in the generated samples, two constrainer networks $a_i$ are used. 
These constrainer networks are trained under supervision to reconstruct the impact position and the electron energy respectively using the real (labeled) samples. The mean squared error \begin{equation} L_\mathrm{{real,i}} = [y_i - a_i(x)]^{2} \end{equation} is used as an objective function for the constrainer networks. Here, $y_i$ is one label associated with the real sample $x$ and $a_i(x)$ denotes the respective reconstruction by the constrainer network. The constrainer networks are trained under supervision during the critic training and are fixed during the generator training. To enforce label conditioning of the generator, the generator loss is extended by \begin{equation} \label{eq:aux_loss} L_{\mathrm{aux}}=\sum_i^n \kappa_i |L_\mathrm{{real,i}}-L_\mathrm{{fake,i}}|, \end{equation} where $\kappa_i$ is a hyperparameter to scale the respective auxiliary loss. The loss \begin{equation} L_\mathrm{{fake,i}} = [\alpha_i - a_i(\tilde{x})]^{2} = [\alpha_i - a_i(G(z, E, P))]^{2}, \end{equation} states how well the input labels $\alpha_1 = E,\; \alpha_2 = P$ for the generation can be reconstructed from the generated shower by the constrainer networks. In summary, the generator is trained to minimize the Wasserstein distance (\ref{eq:wasserstein_loss}) and the auxiliary loss (\ref{eq:aux_loss}) provided by the constrainer networks. The absolute difference between both loss terms in (\ref{eq:aux_loss}) ensures that the label reconstruction of the generated and real samples remains on the same scale. \subsection{Strategy and network training}\label{subsec:training} \begin{figure} \captionsetup[subfigure]{aboveskip=-1pt,belowskip=-1pt} \begin{centering} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={1.8cm 0cm 0.5cm 0cm},clip,,width=\textwidth]{cost_p.pdf} \subcaption{} \label{fig:cost_p} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={1.8cm 0cm 0.5cm 0cm},clip,,width=\textwidth]{cost_e.pdf} \subcaption{} \label{fig:cost_e} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \includegraphics[trim={1.8cm 0 0.5cm 0cm},clip,,width=\textwidth]{cost_c.pdf} \subcaption{} \label{fig:cot_c} \end{subfigure} \caption{Loss curves of the constrainer networks during the supervised training for (a) the position regression and (b) the energy reconstruction. (c) Total critic loss $c_{loss}$ during the training (black) and rescaled gradient penalty (yellow).} \label{fig:costs} \end{centering} \end{figure} Our framework for the generation of electromagnetic calorimeter showers consists of four networks: one generator, one critic and two constrainer networks. One of the constrainer networks is used for conditioning the energy $E$, while the second is used for conditioning the impact position $P$. The networks, their training and their evaluation are implemented using the Tensorflow \cite{tensorflow} framework (v1.5). Exact details of all architectures can be found in the appendix \ref{sec:Appendix} in table \ref{table:critic}, \ref{table:generator} and \ref{table:constrainer}. The generator consists of two parts: The first part is separated into 7 towers, each of which has the same structure, and a joint part which merges the towers. Each of the 7 towers is given $10$ latent variables $z$ and $3$ labels $\alpha$ describing the energy and the impact position of the calorimeter shower as input. After two fully connected layers and a reshape, a block of three 2D transposed convolutions and a single 2D convolution follows. 
Next, the 7 towers are concatenated to a joint part with three 2D convolutional layers. Finally a locally connected convolutional layer completes the generator architecture. Between the convolutional and transposed convolutional layers we use batch normalization and leaky ReLUs as activations. After the last layer we do not apply batch normalization and use ReLU as activation to allow for the generation of sparse calorimeter images. To enlarge the prior for the generation process, a masking layer masks the dead pixels and regions outside the calorimeter by setting the respective values to zero. \begin{figure*} \centering \includegraphics[width=0.65\textwidth]{eventDisplaysWGAN.PDF} \caption{Energy depositions generated with the WGAN for a fixed impact position of an electromagnetic shower for a 20$~$GeV electron (top) and for a $90~$GeV electron (bottom).} \label{fig:eventDisplayWGAN} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.65\textwidth]{Occupancies.PDF} \caption{Cell occupancy for $90~$GeV electrons simulated using GEANT4 (top) and generated by the WGAN (bottom). Dead pixels and areas outside the sensor acceptance are masked in the generator.} \label{fig:AverageOccupancy} \end{figure*} The critic network is given as additional input the $3$ labels, which are processed by two fully connected layers and a reshape to obtain a two-dimensional shape. The following architecture of the critic is straightforward and consists of five 2D convolutional layers followed by a fully connected layer and the output layer. As activation we use leaky ReLU to avert sparse gradients. Between the layers we use layer normalization instead of batch normalization as we use the gradient penalty loss. For both constrainer networks we used a very similar architecture of 3D convolutions where we varied only the classification layer. For better convergence and regularizing effects we use batch normalization between the layers. Furthermore, we use leaky ReLU as nonlinearity to ensure sufficient gradients. During training the losses of the constrainer networks are scaled with $\kappa_{\mathrm{E}}=\kappa_{\mathrm{P}}=0.01$. The gradient penalty scale is set to $\lambda=5$. We update the constrainer networks and the critic for $n_{cr}=9$ iterations before updating the generator once. We use a batch size of 256 and train the framework for $150$ epochs on a single NVIDIA GeForce GTX 1080 which takes about $30$ hours. We use $10$ latent variables each following a uniform distribution $\mathcal{U}(-1,1)$. Furthermore, we use the Adam optimizers with $\beta_1=0.0,\;\beta_2=0.9$ \cite{2017arXiv170400028G} and different learning rates for the networks. The constrainer networks use a small learning rate of $lr = 5\cdot 10^{-5}$. Their training is stopped after 50 epochs. For the generator we use a learning rate of $lr = 10^{-3}$ and drop the learning rate after 70, 90 and 100 epochs to $lr = 5\cdot 10^{-4}$, $lr = 2\cdot 10^{-4}$ or rather $lr = 10^{-4}$. For the critic we use an initial learning rate of $lr = 5\cdot 10^{-4}$ and change the learning rate to $lr = 2\cdot10^{-4}$, $lr = 10^{-4}$ and $lr = 5 \cdot 10^{-5}$ after 60, 80 and 100 epochs, respectively. \section{Performance benchmarks}\label{sec:benchmarks} Various benchmarks related to the quality of our generated electromagnetic showers are discussed in the following. 
It will be demonstrated that WGAN produces high-quality showers which resemble the real dataset in many aspects while lowering the computing time to simulate a full electromagnetic shower by three orders of magnitude. \begin{figure*} \captionsetup[subfigure]{aboveskip=-1pt,belowskip=-1pt} \begin{centering} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[trim={0cm 0 1.6cm 0cm},clip,,width=\textwidth]{DNN_energy_reco.pdf} \subcaption{} \label{fig:dep_energy} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \includegraphics[trim={0cm 0 1.6cm 0cm},clip,,width=\textwidth]{labelDependence_posX.pdf} \subcaption{} \label{fig:dep_posX} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \includegraphics[trim={0cm 0 1.6cm 0cm},clip,,width=\textwidth]{labelDependence_posY.pdf} \subcaption{} \label{fig:dep_posY} \end{subfigure} \caption{Distribution of energies reconstructed by the constrainer network computed on generated and GEANT4 simulated showers for different energy labels (a). Impact position $P_x$ (b), and impact position $P_y$ (c) of generated showers reconstructed by the constrainer network as a function of their true labels. Statistical errors on these means are negligible. Note that $70~$GeV showers were not part of the training set.} \end{centering} \end{figure*} The investigation is structured as follows. In section \ref{subsec:displays} we perform a visual inspection of WGAN generated showers. First, we illustrate two examples on which qualitative observations are highlighted. Second, the analysis of pixel occupations reveals that the trained WGAN considers the radially decreasing occupancy profile. In section \ref{subsec:labels} we then show that the generated samples reflect characteristics related to the input physics labels. This behavior was enforced indirectly through the extended generator loss (\ref{eq:aux_loss}). By contrast, any other physically motivated observable evaluated on the generated showers was not constrained in the training. However, as illustrated in section \ref{subsec:observables}, many distributions of shower characterizing quantities computed on the generated showers match those computed from the real dataset well. Moreover, we demonstrate that key correlations between calorimeter observables are obtained. For all reported benchmark scenarios, good shower qualities for $70~$GeV electron showers are obtained despite the fact that these were not part of the training set. Finally, this section is concluded with a report on the WGAN's computational time advantage over detailed simulation using GEANT4. \subsection{Visual inspection of generated showers}\label{subsec:displays} Figure \ref{fig:eventDisplayWGAN} shows two exemplary electron-induced showers with $20~$GeV and $90~$GeV energy labels generated using the WGAN approach. This set of energy depositions is consistent with the physical intuition of how electron-induced cascades in this sampling calorimeter configuration should develop. First, it is noted that both pixel occupancies and pixel intensities scale with the incident electron energy. Second, the positions of the largest energy depositions move according to the input impact position labels. Finally, it is evident that the main activity of generated showers occurs in the central sampling compartments. In particular, the spread and the scale of energy depositions is maximal in intermediate layers, while only a few pixels are active in the first and last layers. 
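The qualitative observation that the bulk of the activity is concentrated in the intermediate sampling layers can be quantified directly on the 3D shower images. A minimal sketch, assuming that the showers are stored as a NumPy array of shape (number of events, 12, 15, 7) in MIP units with the layer index on the last axis:
\begin{verbatim}
import numpy as np

def layer_energy_profile(showers):
    # mean fraction of the deposited signal per sensitive layer;
    # showers: (n_events, 12, 15, 7) in MIP units, layers on last axis
    per_layer = showers.sum(axis=(1, 2))                 # (n_events, 7)
    total = per_layer.sum(axis=1, keepdims=True) + 1e-9  # avoid /0
    return (per_layer / total).mean(axis=0)              # (7,)
\end{verbatim}
The same profile evaluated on generated and on GEANT4 showers gives a compact check of the longitudinal shower development.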
Figure \ref{fig:AverageOccupancy} shows the average pixel occupancy with energy depositions above the 2 MIP threshold of $90~$GeV electron-induced showers simulated using GEANT4 compared to those generated by the WGAN. White spaces correspond to areas of the sensors which are activated above the threshold in less than $1\%$ of the events. The radial development of the pixel occupancy of WGAN-generated showers is similar to GEANT4 while the overall scale appears underestimated. \subsection{Label dependency}\label{subsec:labels} Three physics labels, namely the incident's electron energy $E$ and its impact position $P=(P_x, P_y)$, are input to the WGAN. Ideally, these labels should constrain the shower generation process. As described in section \ref{subsec:conditioning}, two constrainer networks are trained with real samples for this purpose and then reconstruct these labels based on the full shower information. For the training to be rated as successful, the reconstructed labels of the generated showers should correlate with the imposed physics labels. Figure \ref{fig:dep_energy} shows the distribution of reconstructed energies for different energy labels. Their maxima correlate to the true labels. Furthermore, the energy spectra computed on WGAN generated and GEANT4 simulated showers exhibit a reasonable agreement. Figures \ref{fig:dep_posX} and \ref{fig:dep_posY} show the correlation for position labels. Here, the symbols indicate the mean reconstructed label in bins of the true label. On average, a generated shower with a certain set of labels is reconstructed accordingly. Evidently, shower characteristics which the two constrainer networks are sensitive to are able to condition the generation process. Even the $70~$GeV electron cascades, which were not considered in the training, exhibit the same behavior. \begin{figure*}[t!] \captionsetup[subfigure]{aboveskip=-1pt,belowskip=-1pt} \begin{centering} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[trim={0cm 0 0cm 0cm},clip,,width=\textwidth]{observables_E_tot.pdf} \subcaption{} \label{fig:Etot} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \includegraphics[trim={0cm 0 0cm 0cm},clip,,width=\textwidth]{observables_depth_X0.pdf} \subcaption{} \label{fig:X0} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \includegraphics[trim={0cm 0 0cm 0cm},clip,,width=\textwidth]{observables_E_max_layer2.pdf} \subcaption{} \label{fig:Emax2} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \includegraphics[trim={0cm 0 0cm 0cm},clip,,width=\textwidth]{observables_E_max_layer4.pdf} \subcaption{} \label{fig:Emax4} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \includegraphics[trim={0cm 0 0cm 0cm},clip,,width=\textwidth]{layer2_Y.pdf} \subcaption{} \label{fig:DeltaY2} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \includegraphics[trim={0cm 0 0cm 0cm},clip,,width=\textwidth]{layer4_Y.pdf} \subcaption{} \label{fig:DeltaY4} \end{subfigure} \caption{Comparison of calorimeter observables computed in generated showers (symbols) to those computed in fully simulated showers using GEANT4 (histograms). (a) Energy sum of all pixels, (b) energy-weighted shower depth, (c, d) the maximum pixel energy in layer 2, respectively 4, and (e, f) the transverse shower spread along the y-direction in layer 2, respectively 4. 
The $70~$GeV showers were not part of the training set.} \label{figure:OneDComparison} \end{centering} \end{figure*} \subsection{Calorimeter observables}\label{subsec:observables} In this section, typical calorimeter observables are presented, computed both in GEANT4 simulations and in WGAN-generated showers. For clarity, only the $32~$GeV and $90~$GeV as well as the additional $70~$GeV electron samples are shown. The agreement between real and generated cascades illustrated therein is representative of the entire dataset. Note that $70~$GeV showers were not part of the training set. \subsubsection{Distributions of calorimeter observables}\label{subsubsec:observablesCalo} Figure \ref{figure:OneDComparison} shows six sets of representative observables that characterize particle showers in sampling calorimeter configurations. The distributions are normalized to unity. A reasonable agreement between WGAN- and GEANT4-simulated showers is seen in Figure \ref{fig:Etot} for the total energy deposition summed over all pixels and in Figure \ref{fig:X0} for the longitudinal shower depth. Also, the maximum pixel energy for each individual layer exhibits a good match with the full simulation (only layers 2 and 4 are shown in Figures \ref{fig:Emax2}, \ref{fig:Emax4}). \newline Furthermore, we compute the energy-weighted transverse spread in each layer: \begin{equation} \Delta \text{Y}_{l}~=~\sum_{\text{pixel~}i}^{\text{layer } l} \big| y_i - \sum_{\text{pixel~}j}^{\text{layer }l} y_j\cdot \frac{E_j}{E_{\text{sum, }l}}\big| \cdot \frac{E_i}{E_{\text{sum, }l}} \label{eq:spread} \end{equation} $\Delta Y_{\text{layer~2}}$ and $\Delta Y_{\text{layer~4}}$ are shown here (Figs.~\ref{fig:DeltaY2}, \ref{fig:DeltaY4}) by way of example. The computation for other layers $l$ and for the $x$ coordinate is analogous. With the exception of the first layer at 2.8~$\text{X}_\text{0}$, the agreement shown is representative of all other layers and of the $x$ coordinate. Thus, the transverse shower shapes are well modeled by the WGAN. \begin{figure}[t!] \captionsetup[subfigure]{aboveskip=-1pt,belowskip=-1pt} \begin{centering} \begin{subfigure}[b]{0.455\textwidth} \includegraphics[trim={0cm 0 0cm 0cm},clip,,width=\textwidth]{observables_N_tot.pdf} \subcaption{} \label{fig:Nhits} \end{subfigure} \hfill \begin{subfigure}[b]{0.455\textwidth} \includegraphics[trim={0cm 0 0cm 0cm},clip,,width=\textwidth]{PixelSpectrum.pdf} \subcaption{} \label{fig:Nspectrum} \end{subfigure} \caption{(a) Distributions of the number of pixels with energy depositions above 2 MIP equivalents for several energies. (b) Single pixel energy spectra show reasonable agreement with the simulation for energy densities above $\approx 10~$MIPs per pixel. Note that $70~$GeV showers were not part of the training set.} \label{figure:NhitsComparison} \end{centering} \end{figure} \begin{figure*}[t!]
\captionsetup[subfigure]{aboveskip=-1pt,belowskip=-1pt} \begin{centering} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[trim={0cm 0 0cm 0cm},clip,,width=\textwidth]{observableCorrelation_E_tot_layer34.pdf} \subcaption{} \label{fig:Etot_3_4} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \includegraphics[trim={0cm 0 0cm 0cm},clip,,width=\textwidth]{observableCorrelation_E_max_layer34.pdf} \subcaption{} \label{fig:Emax_3_4} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \includegraphics[trim={0cm 0 0cm 0cm},clip,,width=\textwidth]{observableCorrelation_E_tot_layer45.pdf} \subcaption{} \label{fig:Etot_4_5} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[trim={0cm 0 0cm 0cm},clip,,width=\textwidth]{observableCorrelation_E_max_layer45.pdf} \subcaption{} \label{fig:Emax_4_5} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \includegraphics[trim={0cm 0 0cm 0cm},clip,,width=\textwidth]{observableCorrelation_E_tot_N_tot.pdf} \subcaption{} \label{fig:Etot_Ntot} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \includegraphics[trim={0cm 0 0cm 0cm},clip,,width=\textwidth]{observableCorrelation_E_tot_depth_X0.pdf} \subcaption{} \label{fig:Etot_X0} \end{subfigure} \caption{Comparison of calorimeter observable correlations evaluated on generated and GEANT4-simulated showers. (a) Energy sum, respectively (b) maximum pixel energy, in layer 4 vs. total energy deposit (maximum pixel energy) in layer 3. (c) Energy sum, respectively (d) maximum pixel energy, in layer 5 vs. total energy deposit (maximum pixel energy) in layer 4. (e) Total energy deposit plotted against number of hits. (f) Shower depth vs. total energy deposit. Note that $70~$GeV showers were not part of the training set.} \label{fig:correlations} \end{centering} \end{figure*} In Figure \ref{fig:Nspectrum} we investigate the energy spectrum of active pixels. The region of low energy densities, i.e. pixels with depositions below $\approx 10~$MIPs, is underrepresented by the WGAN with respect to GEANT4 causing a mismatch in the number of active pixels (energy $\geq 2~$MIPs) in Figure \ref{fig:Nhits} and ultimately also resulting in the underestimate of their energy sum (Figure \ref{fig:Etot}). Similar mismodeling of sparsity-describing quantities has also been reported in the work based on traditional GANs \cite{Hooberman:2017nips,Paganini:2017dwg}. It should be noted that the analysis of such fast simulated showers could always be limited to the well-described range by restricting the analysis to pixels with energies above 10 MIP equivalents. In this calorimeter setup, the rejected part of the spectrum contributes only 10\% to the total signal. This could be corrected by scale factors. Following this principle, we conducted a supplementary performance benchmark. Example graphics are shown in the appendix (Figures \ref{fig:AppendixAverageOccupancy}, \ref{fig:Appendixcorrelations}). The agreement between the WGAN and GEANT4 in this regime is improved also for all other observables. \subsubsection{Correlations}\label{subsubsec:correlation} For energy reconstruction or particle identification in modern particle detection systems, multiple calorimeter observables are needed simultaneously (e.g. \cite{ATLAS:2005NIMPR}). Typical approaches exploit correlations of shower characteristics. Consequently, a crucial quality measure for a simulation tool is the assessment of the pairwise correlations of reconstructed physics observables. 
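As a concrete illustration of how the observables entering these comparisons could be evaluated on a pixelized shower, the following minimal sketch (not the analysis code used here; the array shape, names, and MIP-equivalent threshold are assumptions) computes per-layer energy sums, maximum pixel energies, hit counts, and the transverse spread of Eq.~(\ref{eq:spread}):
\begin{verbatim}
import numpy as np

def layer_observables(shower, threshold=2.0):
    """Simple per-layer observables for one shower.

    shower: array of shape (n_layers, n_y, n_x) with pixel energies,
            assumed here to be given in MIP equivalents.
    Returns per-layer energy sums, maximum pixel energies, the number of
    pixels above 'threshold', and the energy-weighted spread Delta Y_l
    defined in the text.
    """
    n_layers, n_y, n_x = shower.shape
    y = np.arange(n_y)[:, None]                # y coordinate of each pixel row
    e_sum  = shower.sum(axis=(1, 2))           # total energy per layer
    e_max  = shower.max(axis=(1, 2))           # maximum pixel energy per layer
    n_hits = (shower > threshold).sum(axis=(1, 2))
    delta_y = np.zeros(n_layers)
    for l in range(n_layers):
        if e_sum[l] > 0:
            w = shower[l] / e_sum[l]           # energy weights E_i / E_sum,l
            y_mean = (w * y).sum()             # energy-weighted mean y
            delta_y[l] = (w * np.abs(y - y_mean)).sum()
    return e_sum, e_max, n_hits, delta_y
\end{verbatim}
Pairwise correlations such as those in Fig.~\ref{fig:correlations} can then be obtained by evaluating these quantities on both the generated and the GEANT4 samples.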
In this section, we focus on four examples of such correlations. First, both the summed energy and the maximum pixel energy in a fixed layer are expected to correlate with the sum (maximum) in the previous layer. Figures \ref{fig:Etot_3_4} - \ref{fig:Emax_4_5} visualize this trend for layers 3 and 4, and for layers 4 and 5, by way of example. The WGAN-generated showers exhibit good agreement with the GEANT4 showers. Second, a greater number of active pixels should correspond to higher values in the sum of energy depositions, which is illustrated in Figure \ref{fig:Etot_Ntot}. Some discrepancy is to be expected owing to the underestimation of the low energy spectrum by the WGAN. While the positive correlation is also obtained for the WGAN samples, an agreement is not reached. Finally, the sampling in this specific calorimeter configuration is not uniform, as the setup comprises non-equidistant sampling layers. As a consequence, showers with large energy depositions at a fixed incident electron energy should correspond to smaller shower depths. Showers whose center of gravity is located deeper in the calorimeter deposit higher fractions of their energy in the larger amount of passive material between layers 5 and 7, hence resulting in lower values of energy sums in the sensitive layers. Also for this example, a good match between real and WGAN-generated showers is displayed in Figure \ref{fig:Etot_X0}. \begin{table*}[t!] \caption{Computational time required for the generation of one $20~$GeV, respectively $90~$GeV, electron-induced cascade through evaluation of the WGAN using different hardware setups and enhancement with respect to a full simulation using GEANT4. Using the WGAN approach, a speed-up of more than three orders of magnitude has been achieved.} \centering \begin{tabular}{cc|cc|cc} Method & Hardware & $20~$GeV $\text{e}^{-}$ & \textbf{Speed-up} & $90~$GeV $\text{e}^{-}$ & \textbf{Speed-up} \\ \toprule GEANT4 & any CPU & $\orderof\left(500~\text{ms} \right)$ & - & $\orderof\left(2000~\text{ms} \right)$ & - \\ WGAN & Intel\textsuperscript{\textcopyright} Xeon\textsuperscript{\textcopyright} CPU E5-1620 & 52 ms & \textbf{x10} & 52 ms & \textbf{x40} \\ WGAN & NVIDIA\textsuperscript{\textcopyright} Quadro\textsuperscript{\textcopyright} K2000 GPU & 3.6 ms & \textbf{x140} & 3.6 ms & \textbf{x560} \\ WGAN & NVIDIA\textsuperscript{\textcopyright} GTX\textsuperscript{\texttrademark} 1080 GPU & 0.3 ms & \textbf{x1660} & 0.3 ms & \textbf{x6660} \\ \end{tabular} \label{table:computing} \end{table*} In summary, typical calorimeter observables computed in fast WGAN-generated showers correspond well to those simulated using GEANT4, not only in terms of their spectra but also in their pairwise correlations. Only the number of low-energy depositions is underestimated. A detailed simulation typically requires extensive tuning of its model parameters using expert knowledge of particle interactions with matter to achieve a similar level of agreement with real data. By contrast, no dedicated knowledge of how particles interact with the material had to be input into the generative model here. \subsection{Computational speed-up}\label{subsec:computing} We observe that fast simulation of electromagnetic showers using this WGAN architecture is up to three orders of magnitude faster than full simulations. By contrast, expert-engineered, parameterized fast simulation provides CPU gains of 10-100 with respect to GEANT4 depending on the particle energy, see e.g. \cite{CMS:FastSim}.
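The per-shower generation times quoted below could be estimated with a simple timing harness along the following lines (a minimal sketch, assuming a trained generator callable \texttt{generator} that maps latent noise and labels to showers; names, the latent dimension, and the batch size are illustrative and not the benchmark code used for Table~\ref{table:computing}):
\begin{verbatim}
import time
import numpy as np

def time_per_shower(generator, n_showers=1000, latent_dim=100):
    """Average wall-clock time per generated shower.

    'generator' is assumed to be a callable taking (noise, energy, px, py)
    arrays of length n_showers and returning the generated showers.
    """
    noise  = np.random.normal(size=(n_showers, latent_dim)).astype(np.float32)
    energy = np.full(n_showers, 90.0, dtype=np.float32)   # GeV energy label
    px = py = np.zeros(n_showers, dtype=np.float32)       # impact position labels
    t0 = time.perf_counter()
    _ = generator(noise, energy, px, py)
    t1 = time.perf_counter()
    return (t1 - t0) / n_showers
\end{verbatim}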
In Table \ref{table:computing}, we provide numbers on the computation time advantage of our WGAN compared to GEANT4 using three different hardware architectures. As expected, graphics processing units (GPUs) are the preferred hardware units since the WGAN's internal set of linear computational operations is more efficiently parallelized and runs faster than on CPUs. Furthermore, it is remarkable that the time for evaluation of the WGAN is independent of the incident electron energy, whereas it scales strongly with energy in simulations using GEANT4. \newline \section{Conclusion}\label{sec:Conclusion} In this paper, we presented a method for generating electromagnetic showers using a Wasserstein GAN (WGAN). As a concrete example, we adopted a prototype setup for a future high-granularity calorimeter with seven sensitive layers featuring about one thousand readout pixels placed in an electron beam. As a generative model, we constructed a system of two adversarial networks, one being the generator and the other ensuring high-quality showers. The latter is called the critic network and was active only during the training phase, in which the probability distribution encoded in reference data was transferred to the generator. The critic network is designed to provide an approximation of the Wasserstein distance between generated and reference data. Using this concept, the training process was found to be well under control. Furthermore, two constrainer networks were included within this multi-network architecture. These constrainer networks ensure the desired dependencies of the generated showers with respect to the primary electron energies and impact locations on the calorimeter, and they were also trained successfully. When benchmarking the WGAN-generated showers, visual inspections of single showers reveal the typical shower properties with high fluctuations and sparse energy depositions. The average pixel occupancy in the sensitive layers of WGAN-generated showers and of GEANT4 showers appears to be very similar. Various observables typically inspected for calorimeter showers exhibit good agreement between WGAN-generated and GEANT4 showers. For example, the shapes of the longitudinal shower depths are well reproduced not only for the different primary electron energies for which the network had been trained but also for an intermediate electron energy which the network had not encountered before. Also, the total energy depositions in each sensitive layer and the maximum energy in a sensor are reproduced in detail. Furthermore, the transverse shower shapes appear as expected. Only the spectrum at low-energy densities is underestimated by the WGAN. Hence, the total number of sensor pixels with energies above a threshold energy equivalent to two minimum ionizing particles (MIPs) was found to be reduced by about $15\%$. In addition, we analyzed correlations between energy depositions in the sensitive layers. Here, too, we established a strong agreement between the energies of neighboring layers for the different primary electron energies. The longitudinal shower depth decreases with increasing energy sums of all layers as expected for this specific sampling configuration. We also correlate the total energy sum with the total number of sensor pixels with energies above the 2 MIP threshold. While this setup has only been studied in the context of isolated showers, there are ideas for its application to full collision events.
In particular, it is straightforward to extend the set of labels constraining the WGAN simulation to include a quantitative measure of background activity around a given shower. Alternatively, energy deposits of independent but simultaneously occurring particles in a sensor could be superimposed if electronic and saturation effects can be neglected in the underlying detection technology. In the work presented here, it was shown that a WGAN can be successfully used to simulate isolated electromagnetic showers in a realistic setup of a multi-layer sampling calorimeter. The computational speed-up compared to traditional sequential simulations amounts to several orders of magnitude. At the same time, in most aspects, the quality of these ultra-fast shower simulations with the WGAN reaches the level of showers generated with the GEANT4 program. \newpage \section*{Acknowledgments} For valuable discussions and comments on the manuscript we wish to thank Lucie Linssen, Eva Sicking and Florian Pitters from the EP-LCD group at CERN, and Yannik Rath from the Aachen group. We gratefully acknowledge permission to use the geometry files provided by the CMS HGCAL group for simulating data needed for this study. This work is supported by the Ministry of Innovation, Science and Research of the State of North Rhine-Westphalia, and the Federal Ministry of Education and Research (BMBF). Thorben Quast gratefully acknowledges the grant of the Wolfgang Gentner scholarship.
\section{Introduction} \label{sec:intro} Liquid xenon (LXe) time projection chambers~(TPCs) are the most sensitive technology searching for weakly interacting massive particle~(WIMP) dark matter via characteristic keV-scale nuclear recoils (NRs)~\cite{LUX:2016ggv,XENON:2018voc,PandaX-4T:2021bab}. In addition, these detectors are sensitive to numerous novel physics processes in the electron recoil (ER) channel~\cite{xenon1t-excess,lz-lowER-sensitivity}. To maximize their experimental sensitivity to rare processes, care must be taken to minimize backgrounds caused by cosmic rays, ambient gamma rays and neutrons, and radioactive isotopes within the LXe target itself. One potential source of background is the radioactive noble gas \isotope{Ar}{37}, which can contaminate the few-keV energy region where LXe TPCs are most sensitive to WIMP dark matter. \isotope{Ar}{37} can be introduced into LXe through residual argon impurities, ambient air leaks, and activation. In this manuscript, we first describe the \isotope{Ar}{37} decay and its relevance to these searches. We then discuss the cosmogenic production of \isotope{Ar}{37} in xenon and estimate its activity in the context of the LUX-ZEPLIN (LZ) experiment~\cite{lz-detector} assuming a simplified schedule of xenon purification, storage on the surface and delivery. Finally, the impact on LZ backgrounds and physics searches is discussed. \section{Experimental signature of \isotope{A\lowercase{r}}{37} in LX\lowercase{e} TPCs} \label{sec:Ar37} \noindent The isotope \isotope{Ar}{37} decays to the ground state of \isotope{Cl}{37} by electron capture with a half-life of 35.01(2)~days~\cite{Cameron:2012ogv}. The subsequent atomic relaxation of the \isotope{Cl}{37} daughter results in energy deposits at the atomic scale: K-shell (2.82~keV, 90.2\%), L-shell (0.270~keV, 8.9\%), and M-shell (0.018~keV, 0.9\%). The K-shell capture results in some mixture of emitted Auger electrons and x rays with energies that sum to 2.82~keV. Particle interactions in the active region of a LXe TPC generate both a scintillation~(S1) and an ionization~(S2) signal, the ratio of which can be used to identify events as ERs or NRs. The S1 and S2 response of LXe TPCs to \isotope{Ar}{37} decay, in particular the 2.82~keV K-shell feature, has been observed and characterized both in small surface installations~\cite{pixey, akimov, Baudis:2020} and in large underground installations (including LUX~\cite{balajthy_thesis, boulton_thesis} and XENON1T~\cite{xenon1t-excess}). The Noble Element Simulation Technique (NEST)~\cite{NEST2011, NESTv2.2.1p1} is a response model that describes S1 and S2 production well for low-energy ER sources~\cite{xe127, tritium} including \isotope{Ar}{37}~\cite{Szydagis:2020isq, Baudis:2020}. The S2/S1 signal from \isotope{Ar}{37} electron capture may be slightly affected by the atomic relaxation following the K-shell vacancy, but a recent measurement of \isotope{Xe}{127} electron capture indicates this should be a very small effect in \isotope{Ar}{37}~\cite{temples2021measurement} and thus this effect is not considered here. Figure~\ref{fig:contour} shows the expected S1 vs log(S2) distribution of several populations in the LZ detector assuming the operating conditions and data selections described in Ref.~\cite{lz-wimp-sensitivity} and using the NESTv2.2.1patch1 model~\cite{NESTv2.2.1p1}.
The $\beta$ decay of \isotope{Pb}{214} (a \isotope{Rn}{222} daughter) broadly populates the ER band, \isotope{B}{8} neutrinos produce NR signals at very low energies, and a typical 40 GeV/c$^2$ WIMP signature populates the NR region between the ER band and the \isotope{B}{8} neutrinos. Also shown is the 2.82~keV K-shell decay of \isotope{Ar}{37}. Its small but finite overlap with the WIMP distribution indicates that \isotope{Ar}{37} decay can weaken experimental sensitivity to a WIMP signal. More directly, this feature of \isotope{Ar}{37} forms a background in searches for novel physics processes at similar few-keV energies in the ER band, such as solar axion and neutrino magnetic moment interactions~\cite{lz-lowER-sensitivity}. The lower-energy L-shell and M-shell peaks may appear in analyses utilizing only the S2 signal, but they are typically below any anticipated S1 threshold. Anticipating and modeling any potential \isotope{Ar}{37} background is particularly important given a recent observation from the XENON1T experiment of an excess of events in this low-energy ER region~\cite{xenon1t-excess,Szydagis:2020isq}. \begin{figure}[tb!] \includegraphics[width=\columnwidth]{contours_times_lama.pdf} \caption{The distributions of \isotope{Ar}{37} decays and several other populations in the \{S1$_{\rm c}$, $\log_{10}$S2$_{\rm c}$\} plane (where S1$_{\rm c}$ and S2$_{\rm c}$ are S1 and S2 signals which have been corrected for position dependence within the TPC and phd denotes the number of photons detected) expected in LZ assuming the data selection described in Ref.~\cite{lz-wimp-sensitivity}. Shown also are NRs from a 40~GeV/c$^2$ WIMP (purple), coherent elastic neutrino-nucleus scattering (${\rm CE}\nu{\rm NS}$) of \isotope{B}{8} solar neutrinos (green), and ground-state $\beta$ decays of \isotope{Pb}{214} (from dissolved \isotope{Rn}{222}) (blue). For each population, the dark and light regions indicate the $1\sigma$ and $2\sigma$ regions, respectively.} \label{fig:contour} \end{figure} \section{Cosmogenic production of \isotope{A\lowercase{r}}{37} } \label{sec:production} Argon-37 is found in small quantities in the atmosphere. This \isotope{Ar}{37} can be generated by cosmic bombardment of atmospheric Ar, mostly via the spallation process \isotope{Ar}{40}$(\text{n},4\text{n})$\isotope{Ar}{37} but also via neutron capture on \isotope{Ar}{36}~\cite{ar37atmosphere, ar37atmosphere2}. Atmospheric \isotope{Ar}{37} can also be produced by cosmic bombardment of calcium-containing soils, via \isotope{Ca}{40}$(\text{n},\alpha)$\isotope{Ar}{37}~\cite{ar37atmosphere3}. This atmospheric \isotope{Ar}{37} has been considered as a potential source of low-energy excess above other backgrounds by both the LUX experiment and the XENON1T experiment in the context of potential air leaks and residuals of initial argon contamination.~\cite{lux-data,xenon1t-excess}. A separate production mechanism has not been previously considered in the literature: the cosmogenic production of \isotope{Ar}{37} in xenon itself via spallation of Xe by protons and neutrons~(more precisely, nuclear fragmentation). This process has a nonzero cross section since spallation product yields are generally continuous in mass/atomic number, provided basic conservation laws are not violated~\cite{Russell:1990bq}. Due to the large mass difference between Xe and \isotope{Ar}{37}, the production of \isotope{Ar}{37} from natural xenon by spallation is limited in rate and has not yet been observed experimentally. 
The energy-dependent proton-induced spallation cross sections are frequently modeled using the semiempirical formula by Silberberg and Tsao~\cite{silberberg1973partial,Silberberg:1973partial2,silberberg1990spallation,Silberberg:1998}. In this model, the spallation cross section takes the form~\cite{silberberg1973partial} \begin{eqnarray} \label{eq:ST} \sigma &=& \sigma_0 \, \Omega \, \eta \, \xi \, f_{(A)}f_{(E)} \; \; e^{-P\;\Delta A} \;\; e^{-R|Z-SA+TA^2|^\nu} \, , \end{eqnarray} where $E$ is the incident proton energy, $A$ and $Z$ are the atomic mass and atomic number of the product nucleus, and $P$, $R$, $S$, $T$, and $\nu$ are empirical parameters. The generic cross section behavior is captured in $\sigma_0$, which depends on the mass/atomic number of the product and target and also the incident proton energy. The functions $f_{(A)}$ and $f_{(E)}$ provide corrections when the product nucleus is produced from heavy targets and when the change in mass number~($\Delta A=A_t-A$) is large, respectively. The first exponential term describes the decrease in cross section as the target-product mass difference becomes large, and the second exponential term describes the statistical distribution of various isotopes for a product of a given $Z$. The three factors $\Omega$, $\eta$ and $\xi$ account for corrections due to nuclear structure, nuclear pairing, and enhancement of light evaporation products, respectively~\cite{silberberg1973partial}. The model's prediction is generally accurate to within a factor of 2 or 3, as assessed by comparing the predicted and experimentally measured cross sections for various target-product pairs at discrete energies~\cite{silberberg1973partial}. The actual computation of spallation cross sections is more involved as many of the above-mentioned parameters~($\sigma_0$, $P$, $R$, $S$, $T$, $\nu$) take different expressions depending on the mass numbers of the target and product, and the incident energy. Interested readers are referred to the original article~\cite{silberberg1973partial} for a complete description of the model. Although the original Silberberg and Tsao model is formulated for proton-induced spallation, isospin invariance allows the model to also describe neutron-induced spallation at the relevant (high) energies of 100s of MeV and higher, obtained by cosmic-ray-induced neutrons. The model is conveniently implemented in the \textsc{ACTIVIA} package~\cite{activia2008} and is frequently used to calculate activation due to neutrons~\cite{Baudis:2015kqa,Cebrian:2017oft}. Figure~\ref{fig:xsec} (right-side vertical scale) shows the differential cross section of \isotope{Ar}{37} production from natural xenon by spallation as a function of incident nucleon energy. The low-energy cutoff at approximately \SI{250}{\MeV} reflects the energy required by the incident nucleon to initiate an intranuclear cascade in the target nucleus~\cite{Serber:1947zza}. Only the cross sections of the lightest and heaviest stable xenon isotopes are shown for clarity: all other stable isotopes lie between these two curves. The black curve represents an average cross section, weighted by natural isotopic abundance. In calculating the final production rate, the cosmic neutron energy spectrum measured by Gordon~\textit{et}~al.~\cite{gordon2004neutron} and the proton spectrum from the Cosmic-Ray Shower Generator~(CRY, version~1.7)~\cite{cry2007cosmic} are used. 
Since CRY accounts for products from protons in the primary cosmic ray only and hence underestimates the flux, the CRY proton spectrum is further scaled by the ratio of Gordon's neutron spectrum to CRY's neutron spectrum. These spectra are shown in Fig.~\ref{fig:xsec} (left-side vertical scale). The proton spectrum is generated at the latitude of New York City to be consistent with Gordon's measurement of the neutron spectrum~\cite{gordon2004neutron}. A correction due to geomagnetic latitude is not included in the nucleon spectrum, as the geomagnetic rigidity cutoff in the locations of relevance in North America does not vary significantly enough compared to uncertainties due to other sources. Temporal change in the nucleon flux is similarly not considered here. The additional shielding due to building structure and storage material is not considered either. \begin{figure}[tb!] \centering \includegraphics[width=\columnwidth]{ar37_xsection_v8.png} \caption{The calculated spallation cross section of \isotope{Ar}{37} from individual xenon isotopes (light dotted curves) and natural xenon (solid black). Overlaid is the surface nucleon flux used in our calculations~\cite{gordon2004neutron, cry2007cosmic}. According to the model of Silberberg and Tsao~\cite{silberberg1973partial}, the spallation cross section is negligible below about \SI{210}{\MeV} and increases with energy until \SI{4}{\GeV}, beyond which it is assumed to be constant.} \label{fig:xsec} \end{figure} As shown in Fig.~\ref{fig:xsec}, the spallation cross section increases towards higher incident nucleon energy whereas the cosmic proton and neutron fluxes decrease rapidly with energy. As a result, \isotope{Ar}{37} production at sea level is dominated by protons and neutrons with energies between \SI{300}{\MeV} and a few GeV. The differential production rate of \isotope{Ar}{37} in natural xenon is shown in Fig.~\ref{fig:prodrate} as a function of nucleon energy. Upon integrating the differential rate, the final production rate of \isotope{Ar}{37} due to cosmogenic activation of natural xenon at sea-level is estimated to be 0.024~atoms/kg/day, subject to the same factor of 2 or 3 theoretical uncertainty pointed out earlier for the Silberberg and Tsao spallation model more generally. Currently, there is no experimental data on the \isotope{Ar}{37} production cross section due to its relatively short half-life. Although a partial measurement is possible in a neutron beam facility such as LANSCE~\cite{lansce,ar37atmosphere2}, due to the deviation of the neutron beam profile above 500~MeV from true cosmic neutrons and the increase of the production cross section towards higher energies, the calculation of the total production rate is still model dependent. Therefore, we expect that an \emph{in situ} measurement of its concentration in LZ can provide data on the total, flux-weighted cross section. \begin{figure}[tb!] \centering \includegraphics[width=\columnwidth]{ar37_prodrate_v4.png} \caption{The differential production rate of \isotope{Ar}{37} in natural xenon via spallation as a function of the incident nucleon energy. The production is dominated by nucleons with energies between \SI{300}{\MeV} and \SI{2}{\GeV}. 
The decrease of production rate below \SI{300}{\MeV} is due to the nucleon energy being too low to initiate an intranuclear cascade, while at higher energies, the decrease of production rate is caused by the decrease of nucleon flux.} \label{fig:prodrate} \end{figure} The cosmogenic production of \isotope{Ar}{37} in Xe via spallation should be very limited in a deep underground setting. The hadronic components of the cosmic rays are strongly attenuated by the rock overburden while the low-energy neutrons from spontaneous fission and $(\alpha,\text{n})$ reactions are below the spallation threshold energy. Instead, the production of \isotope{Ar}{37} in xenon underground is dominated by muon-induced neutrons, of which the flux in the relevant energy range in a typical underground laboratory is $10^5$--$10^7$ times smaller than that on the surface~\cite{gordon2004neutron,mei2006muon}. \iffalse It is worth mentioning that \isotope{Ar}{37} can also be produced by the activation of peripheral materials surrounding the Xe, and that \isotope{Ar}{37} could then be introduced into the Xe through diffusion. For this case, surface activation is likely not important as many detector components are often underground for much longer than the half life of \isotope{Ar}{37}. Production of \isotope{Ar}{37} by spallation in the peripheral detector components is also suppressed by the number of neutrons with sufficient energy. An exception occurs when the target mass number is close to that of Ar and the nuclear transmutation can be triggered by low-energy neutrons~(e.g. neutron capture by \isotope{Ca}{40}). Material contaminations at these atomic masses should therefore be given particular attention in future experiments. \fi \section{Cosmogenic production of \isotope{A\lowercase{r}}{37} in the LZ context} \label{sec:LZ} The xenon used in the LZ experiment is purified to remove the radioisotope \isotope{Kr}{85}. This purification proceeds via gas-phase chromatography in charcoal at SLAC National Accelerator Laboratory~(California, USA)~\cite{LUX:2016wel}, which removes noble gas elements other than xenon. Although a detailed analysis is still in progress, preliminary data indicates that the argon concentration is reduced by at least a factor of 100 by the charcoal chromatography. As a result, the \isotope{Ar}{37} produced prior to chromatography is strongly suppressed. After purification, the xenon is transported by road to the Sanford Underground Research Facility (SURF) in South Dakota, USA~\cite{lesko2015sanford} and brought underground to the LZ experiment site at a depth of \SI{1480}{\meter}~(4300~m.w.e.). Because of the argon removal during purification, the majority of \isotope{Ar}{37} activity is produced during storage and shipment (between purification at SLAC and delivery underground). As will be shown later, cosmogenic production is rapid enough that the argon reduction by chromatography does not play an important role. During ground transportation to SURF, production is also enhanced by the increased proton and neutron flux at higher altitudes, since the SURF surface facility is located at an altitude of 1600~m. Once the xenon is brought underground, the production of \isotope{Ar}{37} in natural xenon becomes negligible, and the \isotope{Ar}{37} accumulated during transportation decays exponentially over time.
As an illustrative model of this process, we assume a simplified schedule of xenon purification, storage and delivery, broadly representative of the actual xenon logistics in LZ. We assume xenon is purified at SLAC in successive, 1-tonne batches at a rate of one batch/month, and we assume ten equal batches totaling 10 tonnes of xenon. The batches are shipped from SLAC to SURF in pairs by ground transportation once every two months, and during each shipment it is assumed that the altitude increases linearly from 86~m above sea level~(at SLAC) to 1600~m~(at the SURF surface facility) over a three-day period. Once at SURF, we assume the xenon is immediately moved underground. The incident proton and neutron flux is assumed to increase exponentially with altitude with attenuation coefficients of 110~g/cm$^2$ and 148~g/cm$^2$, respectively~\cite{ziegler1996}: \begin{eqnarray} I_j = I_i e^{(A_i-A_j)/L} \end{eqnarray} where $I_i$ and $I_j$ are the intensities at altitude $i$ and $j$, and $A_i$ and $A_j$ are the atmospheric depths of the respective locations. $L$ is the attenuation coefficient of the particle of concern. The atmospheric depth is defined as the integral of air density with respect to depth measured from the upper atmosphere. In the lower atmosphere, its difference can be approximated simply as density times height difference, namely $A_i-A_j=\rho (h_j-h_i)$. This correction is applied uniformly to the proton and neutron spectra since the energy-dependent attenuation coefficient does not vary significantly over the energy range of interest~\cite{cry2007cosmic,ziegler1996}. The \isotope{Ar}{37} production rate at higher altitudes is obtained by scaling the surface production rate with the elevation-specific increase in nucleon flux. Figure~\ref{fig:ar37_transportation} shows the result of this simplified model of LZ logistics. The instantaneous \isotope{Ar}{37} activities in each 1~tonne batch are shown as faint dotted lines, beginning at the time of each batch's purification at SLAC. Also shown (as a thick solid line) is the activity per unit mass in the purified xenon payload. Because the Ar removal efficiency of the chromatography at SLAC remains somewhat uncertain, we also show a conservative model in which chromatography results in no \isotope{Ar}{37} removal (dashed line). Assuming complete removal of argon by purification at SLAC, the estimated \isotope{Ar}{37} activity at the time of last delivery is \SI{0.058}{\micro\becquerel/\kilogram}. If no argon is removed, the estimated activity on that date is roughly 50\% higher~(\SI{0.090}{\micro\becquerel/\kilogram}). After this date of last delivery underground, the average activity falls with the 35-day half-life. Notice that details of the production and delivery schedule of the last few batches will have a dominant effect on the final total activity as compared to the earlier batches. The trace natural argon left in the xenon after purification can also be activated during storage and shipment to produce some amount of \isotope{Ar}{37}. This cosmogenic production rate of \isotope{Ar}{37} in argon is about 5000~times higher than the rate of cosmogenic production of \isotope{Ar}{37} in natural xenon~\cite{ar37atmosphere3}. 
However, taking the most extreme assumptions~(that argon is the only impurity in the initial 99.999\%-purity xenon\footnote{The actual concentration of argon in the xenon prior to chromatography is less than 30~ppb.}, and that the argon is not removed at any level during purification), we find that \isotope{Ar}{37} produced by activation of argon will be subdominant, accounting for at most 5\% of the total cosmogenic \isotope{Ar}{37} in LZ. Argon-37 can also be produced in the plumbing and storage material---most notably steel---and subsequently diffuse into the xenon. The production rate of \isotope{Ar}{37} in iron by spallation is predicted to be around 2.4~atoms/kg/day by ACTIVIA. However, its contribution to radioactivity in the xenon is strongly limited by the slow diffusion rate of argon in steel: even if argon had the same diffusivity in steel as helium~(about $10^{-13}$~cm$^2$/s in common metals~\cite{helium-diffusion}), only a surface depth of a fraction of a millimeter can contribute to the xenon radioactivity over the timescale of a few months. In practice, argon diffusion is significantly slower than that of helium; thus, the contribution of \isotope{Ar}{37} produced in the steel housing material is negligible compared to \isotope{Ar}{37} produced in the bulk xenon. Furthermore, the \isotope{Ar}{37} in the underground plumbing and storage material produced during surface exposures is negligible since these components have been underground for longer than the half-life of \isotope{Ar}{37}. \emph{In situ} production of \isotope{Ar}{37} by spallation in these peripheral detector components is also suppressed by the number of neutrons with sufficient energy. An exception occurs when the target mass number is close to that of Ar and the nuclear transmutation can be triggered by low-energy neutrons~(e.g. neutron capture by \isotope{Ca}{40}). Material contamination at these atomic masses should be given particular attention in future experiments. \begin{figure}[t] \includegraphics[width=\columnwidth]{ar37_shipment_v7.png} \caption{\label{fig:ar37_transportation} Projected \isotope{Ar}{37} activity in the xenon following the simplified purification, storage, and transportation schedule described in the text (ten 1-tonne batches of xenon, delivered at monthly intervals, two batches per shipment). The dotted lines track the \isotope{Ar}{37} activity in each of the 1-tonne batches after they have undergone purification, assuming complete removal of argon is achieved. Note that in each shipment group, as the second batch is being purified, the first batch is stored on the surface and \isotope{Ar}{37} continues to grow. The solid magenta curve shows the average activity in the final 10-tonne payload under that same assumption. The green dashed line shows the scenario when Ar is not removed during purification.} \end{figure} \section{Impact on LZ Backgrounds and Physics Searches}\label{sec:impact} Figure~\ref{fig:activity_vs_time} shows the time evolution of the \isotope{Ar}{37} event rate~(after the same selection criteria as Fig.~\ref{fig:contour}) after the last xenon batch arrives underground assuming the delivery schedule of the previous section. The width of the band represents the assumptions of either perfect or negligible Ar removal during gas chromatography. The band does not include the uncertainties in the spallation cross section estimate from Silberberg's model. 
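The bookkeeping behind Fig.~\ref{fig:ar37_transportation} can be illustrated with a minimal sketch of the simplified schedule described above (the sea-level production rate, altitude scaling, and dates are the illustrative values quoted in the text; this is not the exact calculation used for the figure):
\begin{verbatim}
import numpy as np

T_HALF = 35.01                       # Ar-37 half-life in days
LAM    = np.log(2.0) / T_HALF
P_SEA  = 0.024                       # atoms/kg/day at sea level (from the text)

def flux_scale(h_m, rho=1.2e-3, L=148.0):
    # exp(rho * dh / L): approximate altitude enhancement of the neutron flux,
    # with the height difference in cm and the attenuation length L in g/cm^2
    return np.exp(rho * h_m * 100.0 / L)

def step(n_per_kg, prod_rate, dt):
    # produce at 'prod_rate' (atoms/kg/day) and decay for 'dt' days
    return (n_per_kg * np.exp(-LAM * dt)
            + prod_rate / LAM * (1.0 - np.exp(-LAM * dt)))

# Ten 1-tonne batches, purified one per month; pairs shipped every two months,
# with a three-day drive from 86 m (SLAC) to 1600 m (SURF surface).
batches = []
for pair in range(5):
    n_first  = step(0.0, P_SEA * flux_scale(86.0), 30.0)  # waits one month on surface
    n_second = 0.0                                         # shipped right after purification
    for day in range(3):                                   # transit in coarse one-day steps
        h = 86.0 + (1600.0 - 86.0) * (day + 0.5) / 3.0
        rate = P_SEA * flux_scale(h)
        n_first, n_second = step(n_first, rate, 1.0), step(n_second, rate, 1.0)
    arrival_day = (pair + 1) * 60.0 + 3.0
    batches += [(n_first, arrival_day), (n_second, arrival_day)]

# decay everything underground to the date of the last delivery
t_last = batches[-1][1]
n_final = [n * np.exp(-LAM * (t_last - t)) for n, t in batches]
activity = np.mean(n_final) * LAM / 86400.0 * 1.0e6        # atoms/kg -> microBq/kg
print(f"average activity at last delivery: {activity:.3f} microBq/kg")
\end{verbatim}
Varying the surface-storage times of the final batches in such a sketch makes explicit why the last few deliveries dominate the payload activity.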
For comparison, two other activities are shown: the expected rate of other LZ backgrounds in the ER band in a 1.5--6.5~keV window~(predominantly the \isotope{Pb}{214} daughter of \isotope{Rn}{222})~\cite{lz-wimp-sensitivity} and the rate of the excess seen in the 1--7~keV window by XENON1T in Ref.~\cite{xenon1t-excess}. Initially, the \isotope{Ar}{37} K-shell feature is seen to be a dominant background in this window, weakening early sensitivity to novel physics processes via ERs. \isotope{Ar}{37} becomes subdominant as it decays: about 150~days after the last delivery, the \isotope{Ar}{37} event rate is comparable to both the XENON1T excess rate and other ER background rates in the LZ detector. After this point, the detector begins to reach its optimal sensitivity to the ER excess signal seen by XENON1T and to other novel physics processes in the low-energy ER channel. \begin{figure}[tb] \includegraphics[width=\columnwidth]{ar37_activity_over_time10.png} \caption{Expected event rate from cosmogenic \isotope{Ar}{37} since the time the last batch of xenon is delivered underground. The width of the band indicates variation from assuming either complete or negligible Ar removal during purification at SLAC. The blue dashed line shows the rate of excess observed in this region above the best-fit background model in the XENON1T experiment~\cite{xenon1t-excess}. The solid red line shows the rate of expected ER backgrounds in the LZ experiment, integrated over a 1.5--6.5~keV window relevant for a 40~GeV/c$^2$ WIMP and several ER new-physics signals~\cite{lz-wimp-sensitivity,lz-lowER-sensitivity}. } \label{fig:activity_vs_time} \end{figure} Previous work has quantified the effect of an unknown constant \isotope{Ar}{37} rate in limiting LZ's sensitivity to several specific ER signals~\cite{lz-lowER-sensitivity}. However, if the predominant source of \isotope{Ar}{37} is steadily decaying over a 1000~day run, then the mean \isotope{Ar}{37} activity is 20~times smaller than the instantaneous activity at the beginning of the run (for an exponentially decaying source, the time-averaged activity over an exposure $T$ much longer than the mean lifetime $\tau \simeq 50$~days is approximately $\tau/T$ of the initial value, here $\approx 1/20$). This aids in the statistical inference for new physics in the few-keV region since a fit to the \isotope{Ar}{37} rate early in the exposure reduces the rate uncertainty for later times in the run. \iffalse \begin{figure}[htb!] \includegraphics[width=\columnwidth]{Ar37_leakage_lama_final.pdf} \caption{\isotope{Ar}{37} leakage fraction below flat NR median~(dashed) and below WIMP median~(solid) of different masses. About 0.5\% of \isotope{Ar}{37} ER events leak into below the flat NR median. As WIMP mass increases, \isotope{Ar}{37} ER overlaps more with WIMP NR. For high-enough WIMP mass, the overlap is essentially mass-independent.} \label{fig:ar37 leakage} \end{figure} \fi \section{Conclusions} The noble radioisotope \isotope{Ar}{37} is a background of concern for LXe-based detectors searching for new physics at the few-keV energy scale. Estimates of the production rate of \isotope{Ar}{37} in natural xenon via cosmic-ray-induced spallation yield \SI{0.024}{atoms\per\kilogram\per day} at sea level, subject to a model uncertainty of a factor of 2 or 3. Using a simplified model of the LZ xenon purification, storage and transportation schedule, the \isotope{Ar}{37} activity in the LZ payload is estimated to be 0.058--\SI{0.090}{\micro\becquerel/\kilogram} on the date when the last xenon is delivered underground. The upper (lower) bound assumes no removal (complete removal) of argon during the above-ground purification process.
This is an experimental uncertainty which does not include the uncertainty in the spallation cross section estimated using Silberberg and Tsao's model. The K-shell electron capture of \isotope{Ar}{37} will likely appear as a significant background feature at 2.82~keV in early LZ data, due to the large quantity of recently above-ground xenon and the expected exceptionally low rate of all other backgrounds. This background will gradually become subdominant compared to other ER backgrounds (primarily \isotope{Pb}{214}) as it decays with a 35-day half-life. The statistical strength of long-duration searches can be increased by taking advantage of the time dependence in this background component over the course of the exposure. While the \isotope{Ar}{37} background has only a minimal effect on the primary physics goals of LZ, the effect can potentially be greater in future LXe experiments with increased target masses and decreased backgrounds. The cosmogenic production of \isotope{Ar}{37} in natural xenon via spallation discussed here should therefore be considered when planning these future experiments. The timing of xenon handling and purification activities above ground should be optimized to limit \isotope{Ar}{37} activity in the purified xenon brought underground. Ideally, xenon would be stored underground as early as possible in the logistics chain after purification. The present work also highlights how the capacity to separate noble elements in the underground environment is important for future experiments. The XMASS, XENON1T and XENONnT experiments have demonstrated a system of underground cryogenic distillation to this effect~\cite{xmass-distillation,xenon-distillation}, followed by the PandaX Collaboration~\cite{distillation_pandax}. This cryogenic distillation method or some similar method (e.g., membrane methods~\cite{Kr_removal_membrane}) for underground removal of \isotope{Ar}{37} should now be considered an essential element in the design of future experiments. \begin{acknowledgments} The research supporting this work took place in whole or in part at the Sanford Underground Research Facility (SURF) in Lead, South Dakota. Funding for this work is supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Contracts Number DE-AC02-05CH11231, DE-SC0020216, DE-SC0012704, DE-SC0010010, DE-AC02-07CH11359, DE-SC0012161, DE-SC0014223, DE-SC0010813, DE-SC0009999, DE-NA0003180, DE-SC0011702, DE-SC0010072, DE-SC0015708, DE-SC0006605, DE-SC0008475, DE-FG02-10ER46709, UW PRJ82AJ, DE-SC0013542, DE-AC02-76SF00515, DE-SC0018982, DE-SC0019066, DE-SC0015535, DE-SC0019193 DE-AC52-07NA27344, and DOE-SC0012447. This research was also supported by U.S. National Science Foundation (NSF); the U.K. Science \& Technology Facilities Council under Grants number ST/M003655/1, ST/M003981/1, ST/M003744/1, ST/M003639/1, ST/M003604/1, ST/R003181/1, ST/M003469/1, ST/S000739/1, ST/S000666/1, ST/S000828/1, ST/S000879/1, ST/S000933/1, ST/S000747/1, ST/S000801/1, and ST/R003181/1 (JD); Portuguese Foundation for Science and Technology (FCT) under Grants number PTDC/FIS-PAR/2831/2020; the Institute for Basic Science, Korea (budget numbers IBS-R016-D1). We acknowledge additional support from the STFC Boulby Underground Laboratory in the U.K., the GridPP and IRIS Consortium, in particular at Imperial College London and additional support by the University College London (UCL) Cosmoparticle Initiative. 
This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This work was completed in part with resources provided by the University of Massachusetts' Green High Performance Computing Cluster (GHPCC). The University of Edinburgh is a charitable body, registered in Scotland, with the registration number SC005336. The assistance of SURF and its personnel in providing physical access and general logistical and technical support is acknowledged. \end{acknowledgments} \newpage \bibliographystyle{apsrev4-2}
\section{Introduction} Astrophysical black holes (BHs) are Kerr black holes fully characterized by their mass $M_{\rm BH}$ and spin $\mathbf{J}_{\rm BH}$, customarily expressed in terms of the dimensionless spin parameter $a$ ($\leq 1$), and unit vector ${\hat {\bf{J}}}_{\rm BH}$: \begin{equation} \label{eqn:jbh definition} \mathbf{J}_{\rm BH}=a\frac{G M_{\rm BH}^2 }{c}\hat{\mathbf{J}}_{{\rm BH}}. \end{equation} The spin and mass of BHs residing in galaxy nuclei do not remain constant, close to their birth values, but change sizeably through cosmic time, in response to major accretion events. In current cosmological scenarios for the evolution of galaxies, repeated interactions among gas-rich halos play a key role not only in shaping galaxies, but also in triggering quasar activity \citep{WhiteRees1978, DiMatteoSpringelHernquist2005}. Massive gaseous nuclear discs that form in the aftermath of major galaxy mergers \citep{MihosHernquist1996, Mayeretal2007} may provide enough fuel to feed, on sub-parsec scales, the BH through a Keplerian accretion disc \citep{Dottietal2007, Dottietal2009}. If these episodes repeat recursively and/or at random phases \citep{KingPringle2006}, the BH spin $\mathbf{J}_{\rm BH}$ is expected, initially, to be misaligned relative to the direction of the angular momentum of the disc ${\hat \bf{J}}_{\rm disc,out}$ at its unperturbed, outer edge $R_{\rm out}$. In this configuration, the gas elements inside the disc undergo Lense-Thirring precession \citep[see, e.g. ][]{Wilkins1972}. In the fluid, the action of viscosity on the differentially precessing disc ensures that the inner portion of the accretion disc aligns (or anti-aligns) its orbital angular momentum with the BH spin $\bf{J}_{\rm BH}$, out to a transition radius $R_{\rm warp}$ beyond which the disc remains aligned with the outer disc, as first shown by \citet{BardeenPetterson1975} \citep[see also][]{ArmitageNatarajan1999,NelsonPapaloizou2000,FragileAnninos2005,Fragileetal2007}. Warping of the inner disc at distance $R$ from the BH is communicated through the fluid elements on a timescale $t_{\rm BP}(R)$ related to the vertical shear viscosity of the accretion disc. Therefore, the inner regions of the disc align (or counter-align if the disc is counter-rotating) with the BH spin on the scale $t_{\rm BP}(R_{\rm warp})$ when the viscous time for vertical propagation of disturbances equals the Lense-Thirring precession time. On a longer timescale, the joint evolution of the BH+disc system restores full axisymmetry, with the BH spin direction aligned relative to the total angular momentum of the composite system \citep{Rees1978, ThornePriceMacDonald1986, Kingetal2005}. The change in $\bf{J}_{\rm BH}$ is a consequence of angular momentum conservation: since the BH acts on the disc with a torque that warps it, an equal and opposite gravito-magnetic torque acts on the BH and modifies its direction {\it only}. BH spin alignment has been studied in two main contexts. In the first, explored by \citet{Kingetal2005} and \citet{LodatoPringle2006}, the focus is on a {\it closed} system where the accretion disc has a finite mass and radial extent. Here, the total angular momentum $\bf{J}_{\rm tot}=\bf{J}_{\rm BH}+\bf{J}_{\rm disc}$ is a well-defined vector, and the BH eventually aligns its spin vector to the direction of $\bf{J}_{\rm tot}$.
In the second, explored by \citet{ScheuerFeiler1996}, \citet{NatarajanArmitage1999} and \citet{MartinPringleTout2007}, the focus is on an {\it open} system, where the accretion disc has infinite extension and is continually fed at its outer edge by matter whose angular momentum has constant direction $\hat{\bf{J}}_{\rm disc,out}$. In this second case, the BH aligns its spin to the outer disc direction $\hat{\bf{J}}_{\rm disc,out}$ on a timescale $t_{\rm al}$ that exceeds $t_{\rm BP}$ by a few orders of magnitude \citep{ScheuerFeiler1996,MartinPringleTout2007}. In this paper, we extend the study of BH alignment by including the concurrent change in mass and spin modulus due to accretion of matter, which was neglected in previous works. During BH precession and alignment, matter flows inward and accretes, carrying the energy and the specific angular momentum of the innermost stable circular orbit (ISO). This study thus provides estimates of the fractional increase of mass $\Delta M_{\rm BH}/M_{{\rm BH},0}$ and spin $\Delta {a}/{a}_0$ during BH alignment (the subscript $0$ refers to initial conditions), together with a sensible expression for the alignment time $t_{\rm al}$. In our context, we assume a continuous and coherent feeding of the accretion disc around the BH, at least for a time as long as the alignment timescale $t_{\rm al}$. Thus, we consider an open system and we fix the orbital angular momentum direction $\hat{\bf{J}}_{\rm disc,out}$ at the outer edge of the disc. In Section 2, we introduce key parameters and highlight our model assumptions. Section 3 surveys properties of steady-state warped discs and key scales associated with the Bardeen-Petterson effect; disc models with constant and power-law viscosity profiles are explored for completeness. In Section 4, we describe the equations for the BH mass and spin evolution, and introduce the adiabatic approximation to solve these equations. In the same section we also revisit the expression for the BH alignment time. Results are illustrated in Section 5; there we also explore the tendency towards alignment in initially counter-rotating warped discs. Section 6 contains the discussion of the results and our conclusions. \section{Initial assumptions and main parameters} \label{sec:initial assumptions} We consider a BH with spin ${\bf{J}_{\rm BH}}$, surrounded by a geometrically thin, standard Shakura-Sunyaev $\alpha$-disc \citep[e.g.][]{ShakuraSunyaev1973, FrankKingRaine2002}. The $\alpha$-disc is initially misaligned relative to $\bf{J}_{\rm BH}$, i.e. the angular momentum unit vector of the disc at the outer edge is $\hat{\bf{J}}_{{\rm disc,out}}\neq {\hat{\bf{J}}}_{\rm BH}$; the relative inclination angle between the two unit vectors is $\theta_{{{\rm out}}}$. Following \citet{Pringle1992}, we assume that the accretion disc has a high viscosity ($\alpha > H/R$, where $H$ is the disc vertical scale height) so that perturbations propagate diffusively. We introduce two viscosity parameters, $\nu_1$ and $\nu_2$: $\nu_1$ is the standard radial shear viscosity while $\nu_2$ is the vertical shear viscosity associated with the diffusion of vertical warps through the disc, due to Lense-Thirring precession. For $\nu_1$ we adopt the $\alpha$ prescription \begin{equation} \label{eqn:alpha-prescription} \nu_1=\alpha H c_{\rm s} \end{equation} where $c_{\rm s}$ is the sound speed inside the accretion disc.
The relation between the radial and the vertical viscosity is still poorly understood; in particular, whether $\nu_1 \sim \nu_2$ or $\nu_1 \ll \nu_2$. In order to simplify our discussion, we refer to the recent analysis of \citet{LodatoPringle2007}, and for $\nu_2$ we take: \begin{equation} \label{eqn:nu1-nu2 relation} \frac{\nu_2}{\nu_1}=\frac{f_{\nu_2}}{2\alpha^2} \end{equation} where $f_{\nu_2}$ (given in Table 1) is a coefficient determined in numerical simulations that accounts for non-linear effects. The disc model is defined after specifying five free parameters (subscript 0 will be introduced to indicate initial values when mass and spin evolution is considered): \noindent (1) The BH mass, $M_{\rm BH}$; we explore the mass range $10^5 \rm M_{\odot} < M_{\rm BH} < 10^7 \rm M_{\odot}$. For the BH mass we introduce the dimensionless parameter $M_6$ as $ M_{\rm BH} = M_6 \times 10^6 \rm M_{\odot}.$ \noindent (2) The spin modulus, in terms of the dimensionless spin parameter $a$, which varies between $0 < a \leq 0.95$. We do not use the theoretical limit $a=1$ because, if accretion is driven by magneto-rotational instabilities in a relativistic MHD disc, the final equilibrium spin due to continuous accretion is $a \approx 0.95$ \citep{GammieShapiroMcKinney2004}; \noindent (3) The relative inclination angle $\theta_{{\rm out}}$ between the spin unit vector $\hat{\bf{J}}_{\rm BH}$ and the orbital angular momentum unit vector at the external edge of the accretion disc, $\hat{\bf{J}}_{\rm disc,out}$. This angle varies isotropically from 0 to $\pi$. In the following, however, we will confine this interval to ($0,\sim \pi/6$) in order to remain within the range of validity of the approximations used. \noindent (4) The viscosity parameter $\alpha$, which is assumed to vary in the range $10^{-2}\lesssim \alpha \lesssim 10^{-1}$ to bracket uncertainties \citep{KingPringleLivio2007}. For our purposes we select values of $\alpha$ according to \citet{LodatoPringle2007}, as in Table \ref{tab:viscosita' lodato}. In this study, $\alpha$ is taken to be constant throughout the disc. \begin{table} \begin{center} \begin{tabular}{|c|c|} \hline $\alpha$ & $f_{\nu_2}$\\ \hline 0.18 & 1.00 \\ 0.15 & 0.85 \\ 0.09 & 0.60 \\ 0.05 & 0.38 \\ \hline \end{tabular} \caption{Table of the coefficients $\alpha$ and $f_{\nu_2}$.} \label{tab:viscosita' lodato} \end{center} \end{table} \noindent (5) The accretion rate onto the BH, $\dot{M},$ is expressed in terms of the Eddington ratio $f_{\rm Edd}=L/ L_{\rm Edd}$ and of the accretion efficiency $\eta$ (where $L_{\rm Edd}$ is the Eddington luminosity): ${\dot M}=f_{\rm Edd} L_{\rm Edd}/(\eta c^2)$. We consider values of $f_{\rm Edd}$ in the interval $ 10^{-4} < f_{\rm Edd} < 1$ and compute $\eta$ as a function of the BH spin modulus. If the disc, warped in its innermost parts, is described to first order by the Shakura-Sunyaev $\alpha$-model, both $\nu_1$ and $\nu_2$ follow power laws. If the viscosities satisfy relation (\ref{eqn:nu1-nu2 relation}) and $\alpha$ is assumed to be constant, their exponents are equal: \begin{equation} \label{eqn:Sa-Su viscosity 1} \nu_1=A_{\nu_1}R^{\beta} \qquad {\rm and} \qquad \nu_2=A_{\nu_2}R^{\beta}.
\end{equation} Following standard Shakura-Sunyaev disc solutions for external regions of an accretion disc \citep{FrankKingRaine2002}, we have $\beta=3/4$ and \begin{equation} \label{eqn:Sa-Su viscosity 2} \begin{aligned} & A_{\nu_1}= 9.14 \times 10^{6}\alpha_{0.1}^{4/5}M_{6}^{1/20}\left( \frac{f_{\rm Edd}}{\eta_{0.1}} \right)^{3/10}{\rm cm^{5/4}s^{-1}}\\ & A_{\nu_2}= \left(\frac{\nu_2}{\nu_1} \right) A_{\nu_1} = 50~f_{\nu_2}~\alpha_{0.1}^{-2}~A_{\nu_1}. \end{aligned} \end{equation} In equation (\ref{eqn:Sa-Su viscosity 2}), $\alpha_{0.1}$ and $\eta_{0.1}$ are the $\alpha$ coefficient and the BH radiative efficiency in units of $0.1$. $f_{\nu_2}$ is tabulated in Table~1 (Lodato \& Pringle 2007). \section{Warped accretion disc} \subsection{The angular momentum content of discs: extended versus truncated discs} \label{subsection:disc content} The dynamics of a fluid element in a misaligned disc around a spinning BH is given by the combination of three different motions: the Keplerian rotation around the BH; the radial drift, due to the radial shear viscosity; and the Lense-Thirring precession, due to the gravitomagnetic field $\mathbf{H}_{\rm gm}$ generated by $\mathbf{J}_{\rm BH}$ \citep[see, e.g.,][]{Weinberg1972, ThornePriceMacDonald1986}. In response to Lense-Thirring induced precession, viscous stresses in the disc act rapidly to produce in the vicinity of the BH an axisymmetric configuration whereby adjacent fluid elements rotate in the equatorial plane of the spinning BH. The disc thus warps and the warp disturbance propagates diffusively \citep{PapaloizouPringle1983} in the disc. Since the Bardeen-Petterson effect modifies the inclination of the orbital plane of consecutive infinitesimal rings, the warped profile of the accretion disc can be described by the specific angular momentum density, $\mathbf{L}$, expressed as \begin{equation} \label{eqn:L definition} \mathbf{L}=L\hat{\bf{l}}=\Sigma\Omega_{\rm K} R^2 \hat{\bf{l}} \end{equation} where $\hat{\bf{l}}(R)$ is a unit vector indicating the local direction of the orbital angular momentum, $L$ is the modulus, $\Sigma$ is the surface density of the disc and $\Omega_{\rm K}$ the local Keplerian angular velocity. The angle describing the tilted disc is defined as \begin{equation} \theta(R)=\cos^{-1} (\hat{\bf{ l}}(R)\cdot \hat{\bf{J}}_{\rm BH}), \end{equation} so that ${\hat {\bf l}}(R)$ carries information about the warped structure of the accretion disc. The angular momentum of the accretion disc within radius $R$ is given by \begin{equation} \label{eqn:j vector disc within R} \mathbf{J}_{\rm disc}(R)=\int_{R_{\rm ISO}}^{R} 2 \pi x \mathbf{L}(x)dx \end{equation} where the integration domain extends from the innermost stable orbit $R_{\rm ISO}$ out to $R$. In order to calculate the {\it total} disc angular momentum we define an outermost radius, $R_{\rm out}$. For an extended disc with $R_{{\rm out}}\to \infty$, the disc angular momentum $\bf{J}_{\rm disc}$ always dominates over $\bf{J}_{\rm BH}.$ Real discs are likely to be truncated by their own self-gravity, which becomes important at distances where the disc mass $M_{\rm disc}(R)\sim (H/R)M_{\rm BH}$ \citep[see, e.g.,][]{Pringle1981, FrankKingRaine2002, Lodato2007}. Outside the truncation radius, gas can be either turned into stars or expelled by winds from the stars that do form \citep{Levin2007, KingPringle2007}.
Thus, we are led to define a disc outer edge as the distance where the Toomre parameter for stability, $Q=\kappa c_s/(\pi G \Sigma)$ (where $\kappa^2= R(d \Omega^2/dR)+ 4 \Omega^2$), becomes less than unity, and the cooling timescale of the clumping gas is less than its dynamical timescale. When the Toomre parameter drops toward unity, the disc becomes unstable on a lengthscale $\lambda = c_s^2/(G\Sigma)$ \citep{Polyachenkoetal1997,Levineetal2008}; for a nearly Keplerian, Shakura-Sunyaev $\alpha$-disc, this scale is much smaller than the disc radial dimension, and the cooling time of the associated perturbation is less than or of the same order as its orbital period. Then, as long as the accretion disc can be described as a Shakura-Sunyaev disc\footnote{This condition is fulfilled only for $(M_6 f_{\rm Edd})/(\alpha_{0.1}\, \eta_{0.1}) \gtrsim 4.3$. If this condition is not satisfied the gas temperature drops below $\sim 10^4$ K in the external region of the disc where $Q$ is still greater than unity. The change in the opacity likely modifies the structure of the outer disc, and we cannot explicitly use (\ref{eqn:r out}). In this paper we assume that the outer region is sufficiently extended to provide matter and angular momentum to the inner regions, and we use the Shakura-Sunyaev model self-consistently to describe the disc in the regions where the gravitomagnetic interaction takes place. }, the external radius can be defined from the condition $Q(R_{\rm out})=1$, so that
\begin{equation} \label{eqn:r out} R_{\rm out}=1.21 \times 10^5 \alpha_{0.1}^{28/45} M_6^{-52/45} \left( \frac{f_{\rm Edd}}{\eta_{0.1}} \right)^{-22/45} R_{\rm S}, \end{equation}
where $R_{\rm S}=2GM_{\rm BH}/c^2$ is the Schwarzschild radius. At the outer edge of the disc, $\hat{\bf l}(R_{{\rm out}})=\hat\bf{J}_{\rm disc, out}$ and $\theta(R_{{\rm out}})=\theta_{{{\rm out}}}.$ Definitions (\ref{eqn:L definition}) and (\ref{eqn:j vector disc within R}) for $\mathbf{L}$ and ${\bf{J}_{\rm disc}}(R)$ hold for any disc profile. To first order, we can neglect the details of the warped disc structure around $R_{\rm warp}$, assuming $\hat{\mathbf{l}} \approx (0,0,1)$, and estimate the modulus of the orbital angular momentum within radius $R$, $J_{\rm disc}(R)$, using Shakura-Sunyaev solutions for a flat disc. In this approximation, the surface density is $\Sigma_{\rm flat} \approx \dot{M}/(3 \pi \nu_1)$ \citep[see, e.g.,][]{Pringle1981, FrankKingRaine2002} and
\begin{equation} \label{eqn:L flat} L(R) \approx \frac{\dot{M}}{3 \pi \nu_1} \sqrt{G M_{\rm BH} R}. \end{equation}
Using equations (\ref{eqn:j vector disc within R}) and (\ref{eqn:L flat}), and expression ($\ref{eqn:Sa-Su viscosity 1}$) for $\nu_1$ in the case of $\beta=3/4$, the modulus of the disc angular momentum within $R$ reads:
\begin{equation} \label{eqn:j disc R} J_{\rm disc}(R)=\frac{8}{21}\frac{\dot{M}\sqrt{GM_{\rm BH}}}{A_{\nu_1}}R^{7/4}. \end{equation}
If expression (\ref{eqn:j disc R}) is estimated at the outer radius (\ref{eqn:r out}), the resulting dimensionless ratio between the disc and BH angular momenta is
\begin{equation} \label{eqn:ratio between momenta} \frac{J_{\rm disc}(R_{\rm out})}{J_{\rm BH}} = 7.3 \, \alpha_{0.1}^{13/45}M_6^{-37/45}\left( \frac{f_{\rm Edd}}{\eta_{0.1}} \right)^{-7/45}a^{-1}.
\end{equation} \subsection{Timescales and warp radius} \label{subsection:disc basic equations}
The time-dependent evolution of the disc is described by the continuity equation
\begin{equation} \label{eqn:continuity} R\frac{\partial \Sigma}{\partial t}+\frac{\partial}{\partial R}\left(v_{\rm R} \Sigma R \right)=0, \end{equation}
where $v_{\rm R}$ is the radial component of the velocity vector, and by the equation of conservation of angular momentum. In the presence of a gravitomagnetic field, for a geometrically thin disc characterized by the two viscosities $\nu_1$ and $\nu_2$, the equation reads \citep{Pringle1992}:
\begin{equation} \label{eqn:angular momentum} \begin{aligned} & \frac{\partial\mathbf{L}}{\partial t}= - \frac{1}{R}\frac{\partial}{\partial R}(R \mathbf{L} v_{\rm R})+\frac{1}{R}\frac{\partial}{\partial R}\left(\nu_1 \Sigma R^3 \frac{d\Omega}{dR}~ \mathbf{\hat l} \right)+ \\ & \qquad +\frac{1}{R}\frac{\partial}{\partial R}\left(\frac{1}{2}\nu_2 R L \frac{\partial \mathbf{\hat l}}{\partial R} \right) + \frac{2G}{c^2} \frac{\mathbf{J}_{{\rm BH}} \times \mathbf{L}} {R^3}. \end{aligned} \end{equation}
The last term is the Lense-Thirring precession term and the associated angular velocity is
\begin{equation} \label{eqn:omegalt} \boldsymbol{\Omega}_{\rm LT}(R)=\frac{2G}{c^2}\frac{\mathbf{J}_{{\rm BH}}}{R^3}. \end{equation}
The time-dependent equation (\ref{eqn:angular momentum}) describes the radial drift of matter and the diffusion of warping disturbances across the high-viscosity disc.\\ This equation introduces several key scales:
\begin{enumerate}
\item The viscous/accretion timescale for radial drift, related to the angular momentum transport parallel to $\bf{J}_{\rm disc, out}$, $t_{\rm acc}(R)$. It can be seen as the time it takes for a fluid element at $R$ to accrete onto the BH \citep[see, e.g.,][]{Pringle1981}. Considering equation (\ref{eqn:angular momentum}), the balance between the advection term and the viscous term proportional to $\nu_1$ (both on the right-hand side of equation \ref{eqn:angular momentum}) leads to an estimate of the accretion time:
\begin{equation} \label{eqn:t_acc} t_{\rm acc}(R) \sim {R^2}/{\nu_1} . \end{equation}
According to equation (\ref{eqn:t_acc}), we can introduce the disc consumption timescale $t_{\rm disc}$, a concept useful when considering transient, truncated discs, as the accretion timescale at the outer radius:
\begin{equation} \label{eqn:disc accretion timescale} \begin{aligned} & t_{\rm disc} \sim t_{\rm acc}(R_{\rm out}) = \\ & \qquad 1.71 \times 10^{6} \alpha_{0.1}^{-1/45}M_6^{-11/45}\left(\frac{f_{\rm Edd}}{\eta_{0.1}} \right)^{-41/45} {\rm yr}. \end{aligned} \end{equation}
\item The timescale for warp propagation, related to the radial diffusion of gravitomagnetic perturbations that transport the component of the disc angular momentum lying in the plane of the disc; this scale is inferred from equation (\ref{eqn:angular momentum}) considering the term proportional to $\nu_2$,
\begin{equation} \label{eqn:tbp} t_{{\rm BP}}(R) \sim \frac{R^2}{\nu_2}\sim \left( \frac{\nu_1}{\nu_2} \right) t_{\rm acc} (R).
\end{equation}
The physical interpretation of this timescale has recently been investigated by numerically solving equation (\ref{eqn:angular momentum}) for a thin disc \citep{LodatoPringle2006}: starting at $t=0$ with a flat disc misaligned relative to the fixed BH spin, $t \approx t_{{\rm BP}}(R)$ indicates the time it takes for the radial diffusion of the warp to reach radius $R$; on longer timescales, the disc approaches a steady warped state.
\item The characteristic extension of the warp $R_{\rm warp}$, defined as the distance at which the Bardeen-Petterson timescale $t_{\rm BP}(R)$ equals the Lense-Thirring precession timescale $\Omega_{\rm LT}^{-1}$:
\begin{equation} \label{eqn:Rw implicit} R_{\rm warp}=\frac{4GJ_{\rm BH}}{\nu_2 c^2}. \end{equation}
For the power-law viscosity model, equations (\ref{eqn:jbh definition}), (\ref{eqn:Sa-Su viscosity 2}) and (\ref{eqn:Rw implicit}) give
\begin{equation} \label{eqn:warp radius} R_{\rm warp}=476~\alpha_{0.1}^{24/35}f_{\nu_2}^{-4/7}M_{6}^{4/35}\left( \frac{f_{{\rm Edd}}}{\eta_{0.1}} \right)^{-6/35}a^{4/7}~R_{\rm S}. \end{equation}
The warp radius marks the divide between the outer region, $R \gg R_{\rm warp}$, where the disc keeps its original inclination, given by $\hat \bf{J}_{\rm disc,out}$, and the inner region, $R \ll R_{\rm warp}$, where the disc aligns (or anti-aligns) its orbital angular momentum with the BH spin, ${\bf {\hat l}}\parallel \hat\bf{J}_{\rm BH}$. The warp radius also fixes the magnitude of the relevant Bardeen-Petterson timescale, which reads
\begin{equation} \label{eqn:tbp at rbp} \begin{aligned} & t_{\rm BP}(R_{\rm warp})= 33.5~\alpha_{0.1}^{72/35}f_{\nu_2}^{-12/7}M_6^{47/35}\\ & \qquad \qquad \times \left( \frac{f_{\rm Edd}}{\eta_{0.1}} \right)^{-18/35}a^{5/7}{\rm yr}. \end{aligned} \end{equation}
If we define the function
\begin{equation} \label{eqn:psi definition} \psi(R) \doteq \left| \frac{d \mathbf{\hat l}}{dR} \right| \end{equation}
and $R_{{\rm BP}}$ as the radius where the disc is maximally deformed,
\begin{equation} \label{eqn:Rbp definition} \Psi \doteq \psi (R_{{\rm BP}})=\max \left( \psi \right), \end{equation}
we expect that:
\begin{equation} \label{eqn:nbp definition} R_{{\rm BP}}=n_{{\rm BP}}R_{\rm warp} \end{equation}
with $n_{{\rm BP}}$ of order unity. $R_{{\rm BP}}$ has two important properties: first, it is the radius where the disc is maximally warped, i.e. where the diffusive propagation of vertical perturbations is most significant; second, it provides a reliable estimate of the distance from the BH where the gravitomagnetic interaction is strongest. From equation (\ref{eqn:angular momentum}) this interaction is proportional to $(\mathbf{L} \times \mathbf{J}_{{\rm BH}})/R^3$: this term vanishes in the inner part of the disc ($R\ll R_{{\rm BP}}$) since the Bardeen-Petterson effect aligns $\mathbf L$ with $\bf{J}_{\rm BH}$, and also in the outer regions ($R \gg R_{{\rm BP}}$), due to the rapid decline with $R$. Accordingly, the region near $R_{{\rm BP}}$ (or equivalently $R_{\rm warp}$) is the only one significantly misaligned with $\mathbf{J}_{{\rm BH}}$.
\end{enumerate}
\subsection{Analytical solutions} \label{Sec:analitic solution}
\begin{figure*} \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=82mm]{psi.png} \caption{$\psi$ as a function of $R/R_{\rm warp}$, for $W'_{\rm C}$ (solid lines) and $W'_{\rm PL}$ (dashed lines) and for different inclination angles: black lines correspond to $\theta_{{\rm out}}=\pi /3$, blue lines to $\pi / 30$, red lines to $\pi/300$.
The vertical black dotted line represents $R_{\rm BP}/R_{\rm warp}=0.42$, the Bardeen-Petterson radius for constant viscosity profiles. The parameter set for the BH and the disc is $M_{\rm BH,0}=10^6 \rm M_{\odot}$, $a=0.5$, $f_{\rm Edd}=0.1$, $\alpha=0.09$.} \label{fig:psi} \end{minipage} \hspace{10mm} \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=82mm]{chi.png} \caption{$\chi_{\beta=0}(R)$ (solid lines) and $\chi_{\beta=3/4}(R)$ (dashed lines) as functions of $R/R_{\rm warp}$ for different inclination angles. Black lines correspond to $\theta_{{\rm out}}=\pi /3$, blue ones to $\pi / 30$, red ones to $\pi/300$; the vertical black dotted line represents $R_{\rm BP}/R_{\rm warp}=0.42$, the Bardeen-Petterson radius for constant viscosity profiles. The parameter set for the BH and the disc is $M_{\rm BH,0}=10^6 \rm M_{\odot}$, $a=0.5$, $f_{\rm Edd}=0.1$, $\alpha=0.09$.} \label{fig:chi} \end{minipage} \end{figure*}
In this Section we summarise the properties of the steady warped disc structure used to compute the joint evolution of the disc and the BH. Following previous studies we assume that the viscosity profiles are power laws with exponent $\beta$, as in equation (\ref{eqn:Sa-Su viscosity 1}), and explore two possible cases. In the first, we formally extend the Shakura-Sunyaev solution everywhere in the disc, i.e. $\nu_1\propto R^{\beta}$ with $\beta=3/4$, and $\nu_2$ is given by equation (\ref{eqn:nu1-nu2 relation}) \citep{MartinPringleTout2007}. In the second case, we assume the viscosities to remain approximately constant everywhere in the disc \citep{ScheuerFeiler1996}. In order to compare the two models \citep[cf.][]{MartinPringleTout2007}, we impose the continuity of the viscosities at $R_{\rm BP}$, where the gravitomagnetic torque is most important.\\
Before solving equations (\ref{eqn:continuity}) and (\ref{eqn:angular momentum}) we introduce two appropriate reference frames. The first is the inertial reference frame $Oxyz$ attached to the outer disc; we can always rotate it so that its $z$ axis is parallel to the direction of ${\hat \bf{J}}_{\rm disc,out}$. The second is the non-inertial frame $O'x'y'z'$ attached to the BH spin, which is always centred on the BH and whose $z'$ axis is parallel to the time-varying BH spin $\bf{J}_{\rm BH}$. If we use the adiabatic approximation, the frame $O'x'y'z'$ can be approximated, as time $t$ progresses, by a sequence of frames, one for every quasi-stationary state of the system.
The shape of the warped accretion disc is studied in the $O'x'y'z'$ frames, where the Cartesian components of any vector $\boldsymbol{v}$ are indicated as $v'_x,v'_y,v'_z$; $Oxyz$ is the natural frame to study the temporal evolution of the BH spin, and here the Cartesian components of the same vector are denoted as $v_x,v_y,v_z$.\\
For a stationary state, the continuity equation (\ref{eqn:continuity}) can be easily integrated by introducing the accretion rate $\dot{M}$ as a constant of integration:
\begin{equation} \label{eqn:integral of continuity} R \Sigma v_{\rm R} = -\frac{\dot{M}}{2 \pi} \end{equation}
while the projection of equation (\ref{eqn:angular momentum}) along $\hat{\mathbf{l}}$ reads:
\begin{equation} \label{eqn:angular momentum 3} \left(\frac{3}{2}\nu_1\frac{dL}{dR}-\frac{\dot{M}\sqrt{GM_{\rm BH}}}{4 \pi \sqrt{R}} \right)+\frac{1}{2}R\nu_2 L \left|\frac{d \mathbf{\hat l}}{dR} \right|^2=0. \end{equation}
In the small deformation approximation \citep{ScheuerFeiler1996} the warp is gradual and we can neglect the non-linear term, proportional to $| \partial \hat{\mathbf{l}} / \partial R |^2$. Using the boundary condition $\Sigma(R_{\rm ISO})=0$, the integral of (\ref{eqn:angular momentum 3}) is
\begin{equation} \label{eqn:solution for L} L(R)=\frac{\dot M}{3 \pi \nu_1}\sqrt{G M_{\rm BH} R} \left( 1-\sqrt{\frac{R_{\rm ISO}}{R}} \right). \end{equation}
This means that, in this approximation scheme, the modulus of the angular momentum density for a warped accretion disc far from the horizon is the same as for a flat disc, equation (\ref{eqn:L flat}). Following \citet{ScheuerFeiler1996}, we study the profile of the steady disc by introducing the complex variable $W'={\hat l}'_x+i{\hat l}'_y$ and considering the case $\theta_{\rm out}< \pi/2$. Using power-law viscosities according to (\ref{eqn:Sa-Su viscosity 1}), analytic solutions of equation (\ref{eqn:angular momentum}) in the small deformation approximation have been found by \citet{MartinPringleTout2007}:
\begin{equation} \label{eqn:solution for W, nu power-law} \begin{aligned} & W'_{\rm PL}=B \left( \frac{R}{R_{\rm warp}} \right)^{-\frac{1}{4}}\\ & \qquad \times K_{{1}/{2(1+\beta)}}\left(\frac{\sqrt{2}(1-i)}{(1+\beta)}\left( \frac{R}{R_{\rm warp}} \right)^{-\frac{1+\beta}{2}} \right) \end{aligned} \end{equation}
where $B$ is a complex constant of integration, depending on the boundary condition at the external edge, the subscript ``PL'' stands for the power-law viscosities, and $K_{1/(2(1+\beta))}$ is the modified Bessel function of order $1/(2(1+\beta))$. In the particular case where we consider constant viscosities, i.e. $\beta=0$, the solution can be written as
\begin{equation} \label{eqn:solution for W, nu constant} W'_{\rm C}=A \exp{\left(-\sqrt{2}~(1-i)\left( \frac{R}{R_{\rm warp}}\right)^{-\frac{1}{2}} \right) } \end{equation}
where $A$ is a complex constant of integration and the subscript ``C'' stands for the constant viscosity model \citep{ScheuerFeiler1996}. In this latter case, $n_{\rm BP}$ is calculated self-consistently, using the definition (\ref{eqn:nbp definition}) and the prescription that the constant viscosities are evaluated at $R_{\rm BP}$; we find that, for every possible parameter set, $n_{\rm BP} \approx 0.42$. We notice that $W'$ (and so also $\psi$) depends on the radius $R$ through the dimensionless ratio $R/R_{\rm warp}$ \citep{MartinPringleTout2007}.
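As a minimal numerical sketch (our own illustration, assuming a small outer inclination so that $|W'_{\rm C}|\simeq\theta$ and that the integration constant satisfies $|A|\simeq\theta_{\rm out}$ at the outer boundary), the constant-viscosity profile of equation (\ref{eqn:solution for W, nu constant}) can be evaluated as follows:
\begin{verbatim}
import numpy as np

def tilt_angle_constant_nu(x, theta_out):
    """Tilt angle theta(R) of the steady warped disc for constant
    viscosities, from |W'_C| ~ theta in the small-angle limit,
    with the integration constant |A| ~ theta_out fixed by the
    outer disc.  x is the dimensionless radius R/R_warp."""
    W = theta_out * np.exp(-np.sqrt(2.0) * (1.0 - 1.0j) * x**(-0.5))
    return np.abs(W)

theta_out = np.pi / 30.0
for x in [0.01, 0.1, 1.0, 10.0, 100.0]:
    theta = tilt_angle_constant_nu(x, theta_out)
    print(f"R/R_warp = {x:7.2f}   theta = {theta:.3e} rad")
# Inner disc (R << R_warp): theta -> 0, i.e. aligned with the BH spin;
# outer disc (R >> R_warp): theta -> theta_out, the original inclination.
\end{verbatim}
The sketch simply illustrates the transition from the aligned inner disc to the outer inclination $\theta_{\rm out}$ across a few warp radii.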
In Figure \ref{fig:psi} we plot the modulus of the gradient of $\hat{\mathbf{l}}$, $\psi(R)$, which is a local measure of the degree of deformation of the disc, for a particular set of parameters and for three different angles, $\theta_{{\rm out}}= \pi /3,\pi /30,\pi / 300$. The shape of $\psi$ is similar for the two different disc profiles and for all the angles: there is a well-defined maximum near $R_{\rm warp}$, where we expect the disc to be most deformed. At radii far from $R_{\rm warp}$, both well inside and well outside it, the disc is almost flat (note that the graph is logarithmic in both axes). For the constant viscosity (power-law) profile the peak is at $R_{\rm BP} \approx 0.42 R_{\rm warp}$ ($R_{\rm BP}\approx 0.38 R_{\rm warp}$). In Figure \ref{fig:psi} we also see that a constant viscosity disc is less warped (its maximum deformation is smaller) than the power-law viscosity disc. The ratio between the maximum deformations of the power-law and constant viscosity discs is roughly a factor of 2 and does not depend on the inclination angle; curves for different angles differ only by a scale factor, approximately equal to the ratio between the corresponding angles.
\subsection{Validity of the approximation}
\begin{figure*} \centering \includegraphics[width=144mm]{chimaxalpha.jpg} \caption{Color coded plot of $(\chi_\beta)_{\rm max}$ in the $\theta_{{\rm out}}$ versus $M_{\rm BH}$ plane, for four different $\alpha$ parameters: $\alpha=0.05$ top left panel, $\alpha=0.09$ top right panel, $\alpha=0.15$ bottom left panel, $\alpha=0.18$ bottom right panel. The disc has constant viscosity profiles, i.e. $\beta=0$. The accretion rate is $f_{\rm Edd}=0.1$ and the spin parameter is $a=0.9$.} \label{fig:chimaxalpha} \end{figure*}
\begin{figure*} \centering \includegraphics[width= 144mm]{chimaxtsd.jpg} \caption{Left panels: color coded plot of $(\chi_\beta)_{\rm max}$ in the $a$ versus $\theta_{{\rm out}}$ plane, for $\alpha=0.09$, $M_{\rm BH}= 10^5 \rm M_{\odot}$ and $f_{\rm Edd}=0.1$. Right panels: color coded plot of $(\chi_\beta)_{\rm max}$ in the $f_{\rm Edd}$ versus $\theta_{{\rm out}}$ plane, for $\alpha=0.09$, $M_{\rm BH}= 10^5 \rm M_{\odot}$ and $a=0.9$. Top panels refer to constant viscosity profiles, bottom panels to power-law viscosity profiles.} \label{fig:chimaxtsd} \end{figure*}
We calculated the warped disc profile under the small deformation approximation. We neglected second order terms in equation (\ref{eqn:angular momentum 3}) and found an analytic solution for $\mathbf{L}$; in order to verify the consistency of this approximation, we define $\chi_{\beta}$ as the ratio between the neglected term and the first term inside the round brackets of (\ref{eqn:angular momentum 3}), assuming a Keplerian disc with a power-law viscosity profile with exponent $\beta$, as in equation (\ref{eqn:Sa-Su viscosity 1}). Considering equation (\ref{eqn:solution for L}) for $L$, we have $dL/dR \approx (1/2+\beta)(L/R)$, and then $\chi_{\beta}$ reads
\begin{equation} \label{eqn:chi definition} \chi_{\beta}(R)=\frac{2}{3\left(\beta+1 \right)}\frac{\nu_2}{\nu_1}\left|\frac{d\hat{\mathbf{l}}}{dR} \right|^2 R^2.
\end{equation}
Once we know the explicit solutions, the consistency of this approximation can be tested a posteriori by calculating $\chi_\beta$: the approximation is well satisfied if $\chi_\beta\ll 1.$ From equation (\ref{eqn:chi definition}), $\chi_\beta$ can also be expressed as a function of $R/R_{\rm warp}$ and $\psi$:
\begin{equation} \chi_\beta =\frac{2}{3\left(\beta + 1\right)}\,\frac{\nu_2}{\nu_1}\,R_{\rm warp}^{2}\left(\frac{R}{R_{\rm warp}}\right)^2\psi^2 \left( \frac{R}{R_{\rm warp}}\right). \end{equation}
Figure \ref{fig:chi} shows the function $\chi_{\beta=0}$ for constant viscosity profiles (solid lines) and the function $\chi_{\beta=3/4}$ for power-law viscosity profiles (dashed lines), for the same parameters as in Figure \ref{fig:psi}.\\
The function $\chi_{\beta}$ exhibits a maximum, $(\chi_{\beta})_{\rm max}$, around $R_{\rm warp}$. Far from $R_{\rm warp}$ the accuracy of the approximation increases, albeit slowly. The function $\chi_{\beta}$ is most sensitive to the inclination angle, as expected (notice that Figure \ref{fig:chi} uses logarithmic axes). In Figure \ref{fig:chimaxalpha}, we test the validity of the small deformation approximation by plotting, in the BH mass versus $\theta_{{\rm out}}$ plane, the color coded values of $(\chi_{\beta=0})_{\rm max}$, for different values of the viscosity parameter $\alpha$ (Table \ref{tab:viscosita' lodato}), using the constant viscosity profile model (we fix $f_{\rm Edd}=0.1$ and $a=0.9$). White zones represent the regions where $(\chi_{\beta=0})_{\rm max} > 1$, i.e. where the small deformation approximation becomes invalid. $(\chi_{\beta=0})_{\rm max}$ shows mainly a strong dependence on the inclination angle $\theta_{{\rm out}}$, but also a weaker dependence on the BH mass, which reveals that the small deformation approximation becomes less accurate for $M_{\rm BH} \gtrsim 10^6 \rm M_{\odot}$ and increasing BH mass. Comparing different $\alpha$ values, the approximation is better satisfied for large viscosity parameters (e.g. $\alpha=0.18$). We repeated the analysis for the power-law viscosity model, which shows no significant differences in the parameter dependence.\\
In Figure \ref{fig:chimaxtsd}, using the same colour conventions, we explore $(\chi_{\beta})_{\rm max}$ in the $\theta_{{\rm out}}$ versus $a$ (left panels) and $f_{\rm Edd}$ versus $\theta_{{\rm out}}$ (right panels) planes, having fixed the viscosity parameter ($\alpha=0.09$), the BH mass ($M_{\rm BH}= 10^6 \rm M_{\odot}$), and $f_{\rm Edd}=0.1$ for the left panels and $a=0.9$ for the right panels. For both the constant ($\beta=0$) and power-law ($\beta=3/4$) models the relative inclination angle is again the leading parameter gauging the validity of the approximation, which depends only very weakly on $a$ and $f_{\rm Edd}$.
\section{Black hole evolution} \subsection{Basic equations} \label{subsection:bh basic equations}
In this section we present the equations for the BH evolution. The BH is accreting and its mass increases, from an initial value $M_{{\rm BH},0},$ according to
\begin{equation} \label{eqn:mass evolution} \frac{dM_{\rm BH}}{dt}=\dot{M}\frac{E(R_{\rm ISO})}{c^2} \end{equation}
where $E(R_{\rm ISO})$ is the energy per unit mass of a test particle at the innermost stable orbit; ${E(R_{\rm ISO})/c^2}=1-\eta(a)$ defines the accretion efficiency $\eta(a)$, which depends only on the spin parameter \citep{Bardeen1970,Bardeenetal1972}.
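For reference, in geometrised units ($\tilde{r}_{\rm ISO}=R_{\rm ISO}c^2/GM_{\rm BH}$) the standard result for circular geodesics in the Kerr metric \citep{Bardeenetal1972} is
\begin{equation} \frac{E(R_{\rm ISO})}{c^2}=\sqrt{1-\frac{2}{3\,\tilde{r}_{\rm ISO}}}, \end{equation}
so that the efficiency grows from $\eta \simeq 0.057$ for a Schwarzschild BH ($\tilde{r}_{\rm ISO}=6$) to $\eta \simeq 0.42$ for an extreme Kerr BH accreting from a co-rotating disc ($\tilde{r}_{\rm ISO}=1$).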
Equation (\ref{eqn:mass evolution}) introduces a natural timescale for BH mass growth, known as the Salpeter time $t_{\rm S}$:
\begin{equation} \label{eqn:Salpeter time} t_{\rm S}= 4.5 \times 10^{8}\frac{\eta}{f_{\rm Edd}(1-\eta)}\quad {\rm yr}. \end{equation}
As argued by \citet{Rees1978} and shown by \citet{ThornePriceMacDonald1986}, there is a coupling between the BH spin and the angular momentum of the disc. Even though the disc is much less massive than the BH, the moving fluid elements perturb the Kerr metric and interact with the BH spin, causing spin {\it precession} and, if viscous dissipation is present, {\it alignment}. For an infinitesimal ring of inviscid matter with total angular momentum $\mathbf{J}_{\rm ring}$, the BH spin precesses, following the equation
\begin{equation} \label{eqn:jbh precession-ring} \frac{d\mathbf{J}_{\rm BH}}{dt}=\frac{2G}{c^2}\frac{\mathbf{J}_{\rm ring}}{R^3} \times \mathbf{J}_{\rm BH}, \end{equation}
with a precession frequency
\begin{equation} \Omega^{\rm precession}_{\rm BH}=\Omega_{\rm LT} \frac{J_{\rm ring}}{J_{\rm BH}}. \end{equation}
Equation (\ref{eqn:jbh precession-ring}) can be extended to the case of an accretion disc to yield:
\begin{equation} \label{eqn:jbh precession-disc} \begin{aligned} &\frac{d\mathbf{J}_{\rm BH}}{dt}=\dot{M}\Lambda(R_{\rm ISO})\hat {\bf{l}}(R_{\rm ISO}) \\ & \qquad \qquad + \frac{4\pi G}{c^2}\int_{\rm disc}\frac{\mathbf{L}(R) \times \mathbf{J}_{\rm BH}}{R^2}dR. \end{aligned} \end{equation}
The first contribution is due to the accretion of matter at $R_{\rm ISO}$, where $\Lambda(R_{\rm ISO})$ indicates the orbital angular momentum per unit mass carried by matter at the ISO; the Bardeen-Petterson effect ensures that the direction of $ \hat {\bf{l}}(R_{\rm ISO})$ is parallel or anti-parallel to $\hat {\bf{J}}_{\rm BH}$, so that the accretion modifies only the spin modulus. As shown by \citet{Bardeen1970}, a variation of mass $\Delta M_{\rm BH}= \sqrt{6}M_{\rm BH,0}$ is necessary to pass from a Schwarzschild BH ($a=0$) to an extreme Kerr BH ($a=1$), while a spin flip of $\pi$, due only to accretion onto an initially extreme Kerr BH, needs $\Delta M_{\rm BH}= 3 M_{\rm BH,0}$. So, the accretion timescale for the spin modulus is of the same order as the mass accretion timescale $t_{\rm S}$. The second term in equation (\ref{eqn:jbh precession-disc}) describes the gravitomagnetic interaction between the rotating viscous disc and the BH spin vector. This term modifies only the {\it spin direction} of the BH in order to conserve the total angular momentum of the system. Under the working hypothesis that the disc is continually fed by matter carrying the same angular momentum (see Section \ref{sec:conclusions} for a critical discussion), the BH aligns its spin $\bf{J}_{\rm BH}$ with the direction of ${\hat{\bf{J}}}_{\rm disc,out}$. Alignment implies that $\theta_{{\rm out}}(t)=\cos^{-1} (\hat{\bf{J}}_{\rm BH}(t)\cdot \hat{\bf{J}}_{\rm disc, out})$ goes to $0$ with time. Figure \ref{fig:integ} shows the function $I$, defined as the modulus of the integral kernel of equation (\ref{eqn:jbh precession-disc}),
\begin{equation} I(R)= \frac{4 \pi G}{c^2} \frac{L(R)J_{\rm BH} \sin[{\theta(R)}]}{R^2} \end{equation}
as a function of $R/R_{\rm warp},$ for different values of $\theta_{\rm out}= \pi / 3, \pi /30, \pi / 300$, where $\theta(R)$ is computed along the profile of the steady warped disc of equations (\ref{eqn:solution for W, nu power-law}) and (\ref{eqn:solution for W, nu constant}). The function $I$, similarly to $\psi$ (defined in eq.
[\ref{eqn:psi definition}]), peaks near $R_{{\rm warp}}$. Contrary to $\psi$, the power-law viscosity profiles have lower peaks than the constant viscosity profiles. This figure also indicates that the BH-disc gravitomagnetic interaction is spread over a relatively small region of the disc around the warp radius; the characteristic spreading length, which is slightly larger for constant viscosity profiles, is usually of a few warp radii.
\subsection{Alignment time}
\begin{figure} \centering \includegraphics[width=82mm]{integ.png} \caption{Modulus of the gravitomagnetic interaction term, $I$, as a function of the radius normalized to the warp radius, $R/R_{\rm warp}$. Disc profiles are obtained for a BH with $M_{\rm BH} = 10^6 \rm M_{\odot}$ and $a=0.5$, and an accretion disc with $f_{\rm Edd} = 0.1$ and $\alpha=0.09$. Solid (dashed) lines refer to constant (power-law) viscosity profiles; black lines to $\theta_{\rm out}=\pi / 3$, blue lines to $\theta_{\rm out}= \pi / 30$, red lines to $\theta_{\rm out}= \pi / 300$.} \label{fig:integ} \end{figure}
In this subsection we give simple estimates of the alignment and precession timescales, starting from equation (\ref{eqn:jbh precession-disc}). Assuming BH mass and spin modulus variations due to accretion to be small compared with gravitomagnetic effects during the alignment, we neglect the term proportional to $\Lambda(R_{\rm ISO})$ in (\ref{eqn:jbh precession-disc}); if the BH spin aligns and precesses, the left-hand side of (\ref{eqn:jbh precession-disc}) can be estimated by introducing a characteristic gravitomagnetic timescale $\tau_{\rm gm}$ as
\begin{equation} \nonumber \left| \frac{d \bf{J}_{{\rm BH}}}{dt} \right| \sim \frac{\left| \Delta \bf{J}_{\rm BH} \right|}{\tau_{\rm gm}} \sim \frac{J_{\rm BH}\sin{\theta_{{\rm out},0}} }{\tau_{\rm gm}}, \end{equation}
and the integral on the right-hand side as
\begin{equation} \nonumber \left| \frac{4\pi G}{c^2}\int_{\rm disc}\frac{\mathbf{L}(R) \times \mathbf{J}_{\rm BH}}{R^2}dR \right| \sim \frac{4\pi G}{c^2} \frac{L(R_{\rm warp})J_{\rm BH} \sin{\theta_{{\rm out}}}}{R_{\rm warp}} \end{equation}
since the bulk of the gravitomagnetic interaction occurs around $R_{\rm warp}.$ Equating these two expressions and using equation (\ref{eqn:solution for L}) for the specific angular momentum density modulus, we obtain
\begin{equation} \label{eqn:T} \tau_{\rm gm} \sim \frac{3}{4}\frac{c~\nu_1(R_{\rm warp})}{G\dot{M}}\sqrt{\frac{R_{\rm warp}}{R_{\rm S}}}. \end{equation}
Using equations (\ref{eqn:Rw implicit}) and (\ref{eqn:j disc R}), which imply $\dot{M}\sqrt{GM_{\rm BH}}/\nu_1(R_{\rm warp}) \approx (21/8)\,J_{\rm disc}(R_{\rm warp})\,R_{\rm warp}^{-5/2}$, the gravitomagnetic timescale $\tau_{\rm gm}$ can be written in terms of the Bardeen-Petterson warp timescale (eq. [\ref{eqn:tbp}]):
\begin{equation} \label{eqn:T-tbp} \tau_{\rm gm}\sim \frac{4 \sqrt{2}}{7}\, {J_{\rm BH}\over J_{\rm disc}(R_{\rm warp})} t_{\rm BP}(R_{\rm warp}) \end{equation}
and also in terms of the accretion timescale (eq. [\ref{eqn:t_acc}])
\begin{equation} \label{eqn:T-tacc} \tau_{\rm gm}\sim \frac{4 \sqrt{2}}{7} \, \frac{\nu_1}{\nu_2}\, {J_{\rm BH}\over J_{\rm disc}(R_{\rm warp})}\,t_{\rm acc}(R_{\rm warp}), \end{equation}
where $J_{\rm disc}(R_{\rm warp})$ is the disc angular momentum modulus within the warp radius, estimated by (\ref{eqn:j disc R}).
Finally, considering equations (\ref{eqn:j disc R}) and (\ref{eqn:t_acc}) for $t_{\rm acc}$, together with the expressions for the spin modulus and the Schwarzschild radius, $\tau_{\rm gm}$ of expression (\ref{eqn:T-tacc}) can be rearranged as
\begin{equation} \tau_{\rm gm }\sim \frac{3}{2 }\, {a}\, \frac{\nu_1}{\nu_2} \, \frac{M_{\rm BH}}{\dot{M}}\, \sqrt{{R_{\rm S}\over R_{\rm warp}}}. \end{equation}
Since the disc carries very little angular momentum at the warp radius, equation (\ref{eqn:T-tbp}) implies $\tau_{\rm gm} \gg t_{\rm BP}$ always. The gravitomagnetic BH-disc interaction causes BH spin precession and alignment at the same time, and thus introduces two scales related to $\tau_{\rm gm}$: the {\it precession} and {\it alignment} timescales, $t_{\rm prec}$ and $t_{\rm al}$ respectively. We separate their relative importance following the results of \citet{MartinPringleTout2007}, and define the parameter $\mu$, so that
\begin{equation} \label{eqn:2timescales} t_{{\rm al}}=\frac{\tau_{\rm gm}}{\cos{\mu}}, \qquad t_{\rm prec}=\frac{\tau_{\rm gm}}{\sin{\mu}}. \end{equation}
The exact value of $\mu$ depends on the viscosity profile, and can be estimated either analytically \citep{MartinPringleTout2007} or numerically, as in this paper. Initially, we assume alignment and precession to have the same timescale, $\cos \mu = \sin \mu = \sqrt{2}/2$, according to \citet{ScheuerFeiler1996}. Substituting expressions (\ref{eqn:Sa-Su viscosity 2}) for the viscosities, (\ref{eqn:warp radius}) for the warp radius, and (\ref{eqn:T}) for $\tau_{\rm gm}$ in (\ref{eqn:2timescales}), the alignment time reads
\begin{equation} \label{eqn:alignment timescale} t_{\rm al} = 1.13 \times 10^5 \alpha_{0.1}^{58/35}f_{\nu_2}^{-5/7}M_6^{-2/35} \left(\frac{f_{\rm Edd}}{\eta_{0.1}} \right)^{-32/35}a^{5/7}~{\rm yr}. \end{equation}
The timescale $t_{\rm al}$ increases with $a$, indicating that a rapidly rotating Kerr BH offers some resistance before changing its direction. Interestingly, the alignment timescale does not depend on the initial inclination $\theta_{{\rm out},0}$, since a more inclined configuration implies more pronounced disc deformations and stronger mutual gravitomagnetic interactions (as also shown in Figures \ref{fig:psi} and \ref{fig:integ}). $t_{\rm al}$ has a weak dependence on the BH mass and scales nearly as ${\dot M}^{-1}$: a higher accretion rate implies a higher angular momentum density ${\bf L}(R)$ and thus a stronger gravitomagnetic coupling. We also notice that, apart from numerical factors of order unity, this timescale is consistent with the alignment scales found by \citet{ScheuerFeiler1996,NatarajanPringle1998, NatarajanArmitage1999,MartinPringleTout2007}.
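As an illustrative evaluation (assuming $\eta \simeq 0.08$ for $a=0.5$), for the fiducial set $\alpha=0.09$ (so that $f_{\nu_2}=0.60$), $M_6=1$ and $f_{\rm Edd}=0.1$, equation (\ref{eqn:alignment timescale}) gives $t_{\rm al} \simeq 6 \times 10^{5}$ yr, much longer than $t_{\rm BP}(R_{\rm warp}) \simeq 10^{2}$ yr from equation (\ref{eqn:tbp at rbp}) but much shorter than the disc consumption time $t_{\rm disc} \simeq 10^{7}$ yr from equation (\ref{eqn:disc accretion timescale}); this ordering anticipates the hierarchy of timescales exploited in Section \ref{subsection:the adiabatic approximation}.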
\subsection{The adiabatic approximation} \label{subsection:the adiabatic approximation}
\begin{figure*} \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=82mm]{twtal.png} \caption{In the $M_{\rm BH}$ versus $f_{\rm Edd}$ plane, we draw lines of constant $t_{\rm BP}(R_{\rm warp})/t_{\rm al}$ ratio for a BH with $a = 0.9$ and an accretion disc with $\alpha = 0.09$.} \label{fig:twtal} \end{minipage} \hspace{10mm} \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=82mm]{tstal.png} \caption{In the $M_{\rm BH}$ versus $f_{\rm Edd}$ plane, we draw lines of constant $t_{\rm S}/t_{\rm al}$ ratio for a BH with $a=0.9$ and an accretion disc with $\alpha = 0.09$.} \label{fig:tstal} \end{minipage} \end{figure*}
In Sections \ref{subsection:disc basic equations} and \ref{subsection:bh basic equations} we described the equations governing the evolution of a warped accretion disc around a fixed BH, and the evolution of an accreting Kerr BH in gravitomagnetic interaction with its accretion disc. The BH and the accretion disc evolve at the same time and their evolution is coupled, so that we solve simultaneously equations (\ref{eqn:continuity}) and (\ref{eqn:angular momentum}) for a Keplerian disc and (\ref{eqn:mass evolution}) and (\ref{eqn:jbh precession-disc}) for the accreting and precessing BH. In this paper, we solve these coupled equations using the {\it adiabatic} approximation, which separates the rapid temporal evolution of the warped disc from the slower temporal evolution of the BH. Equations are integrated starting from given initial conditions: at $t=0$ the BH spin $\hat{\bf{J}}_{\rm BH}$ is inclined with respect to $\hat \bf{J}_{\rm disc,out}$ by an angle $\theta_{{\rm out},0},$ and the warped disc profile is described by a quasi-stationary profile ${\bf L}(R,t=0)$; $M_{\rm BH,0}$ and $\bf{J}_{\rm BH,0}$ are the initial BH mass and spin.\\
In order to justify this approximation scheme, we survey the BH and disc timescales, as functions of $M_{\rm BH}$ and $f_{\rm Edd}$, for selected values of the viscosity and spin parameters: $\alpha = 0.15$ and $a=0.9.$ In Figures \ref{fig:twtal} and \ref{fig:tstal}, we draw in the $M_{\rm BH}$-$f_{\rm Edd}$ plane lines of constant $t_{\rm BP}(R_{\rm warp})/t_{\rm al}$ and $t_{\rm S}/t_{\rm al}$ ratios. The comparison between the different timescales leads to the following hierarchy:
\begin{equation} t_{\rm BP} (R_{\rm warp}) \ll t_{\rm al}\ll t_{\rm S}. \end{equation}
Then, in the adiabatic approximation, the disc transits through a sequence of warped states on the shortest timescale $t_{\rm BP}(R_{\rm warp})$ while, on the longer timescale $t_{\rm al},$ the BH aligns its spin to $\hat\bf{J}_{\rm disc, out}$ and slightly modifies its spin modulus and mass due to accretion. Considering one of these disc quasi-steady states, initially at time $t$, after a time gap $\delta t \sim t_{\rm BP}(R_{\rm warp})$ the BH mass and spin $\bf{J}_{\rm BH}$ are updated according to
\begin{equation} \label{eqn:update} \left\{ \begin{aligned} & M_{\rm BH}(t+t_{\rm BP}(R_{\rm warp}))= M_{\rm BH}(t)+\delta M_{\rm BH} \\ & \mathbf{J}_{\rm BH}(t+t_{\rm BP}(R_{\rm warp}))= \mathbf{J}_{\rm BH}(t)+\delta \mathbf{J}_{\rm BH} \end{aligned} \right.
\end{equation}
and these variations produce a new quasi-stationary warped state at $t+t_{\rm BP}(R_{\rm warp})$, ${\bf L}(R,t+t_{\rm BP}(R_{\rm warp}))$.\\
For the BH mass variation $\delta M_{\rm BH}$, we integrate equation (\ref{eqn:mass evolution}) from $t$ to $t+t_{\rm BP}(R_{\rm BP})$:
\begin{equation} \label{eqn:mass variation} \delta M_{\rm BH} \approx \dot{M}\frac{E(R_{{\rm ISO}})}{c^2}~t_{\rm BP}(R_{\rm BP}) \end{equation}
where $R_{{\rm ISO}}$ is the innermost stable orbit associated with the current value of $a(t)$.\\
For the spin variation, we need to integrate equation (\ref{eqn:jbh precession-disc}), which includes the two different and coupled contributions due to accretion and gravitomagnetic interaction; if $\delta M_{\rm BH}$ and $\delta \mathbf{J}_{\rm BH}$ are small on the timescale $t_{\rm BP}(R_{\rm BP})$, to first order the two contributions decouple and they can be integrated separately:
\begin{equation} \label{eqn:spin modulus variation} \left( \delta J_{\rm BH} \right)_{\rm acc} \approx \dot{M} \Lambda(R_{{\rm ISO}})~t_{\rm BP}(R_{\rm BP}) \end{equation}
\begin{equation} \label{eqn:spin gravitomagnetic variation} \begin{aligned} & \left( \delta \mathbf{J}_{\rm BH} \right)_{\rm gm} \approx \frac{4 \pi G}{c^2} t_{\rm BP}(R_{\rm BP}) \\ & \qquad \qquad \times \int_{\rm disc}\frac{\mathbf{L}(R,t) \times \mathbf{J}_{\rm BH}(t)}{R^2}~dR \end{aligned} \end{equation}
where $\left( \delta J_{\rm BH} \right)_{\rm acc}$ is due to accretion and changes only the spin modulus, while $\left( \delta \mathbf{J}_{\rm BH} \right)_{\rm gm}$ is due to the gravitomagnetic interaction and changes only the spin direction. After the interval $t_{\rm BP}(R_{\rm BP}),$ the angular momentum in (\ref{eqn:update}) is updated according to the rule
\begin{equation} \label{eqn:total spin variation} \begin{aligned} & \mathbf{J}_{\rm BH}(t+t_{\rm BP}(R_{\rm warp}))= \left( \mathbf{J}_{\rm BH}(t)+\left( \delta \mathbf{J}_{\rm BH} \right)_{\rm gm}\right) \\ & \qquad \times \frac{J_{\rm BH}(t)+ \left( \delta J_{\rm BH} \right)_{\rm acc}}{J_{\rm BH}(t)}. \end{aligned} \end{equation}
This procedure can be repeated iteratively on a timescale $t_{\rm al}$ to study the coupled evolution of $\mathbf{L}(R,t)$, $\mathbf{J}_{\rm BH}$ and $M_{\rm BH}$ during the alignment process.
\section{Spin alignment}
\begin{figure*} \centering \includegraphics[width= 144mm]{mpt.png} \caption{Results for the precession and alignment processes. Black lines refer to our results, while red lines refer to the results published by \citet{MartinPringleTout2007}; solid (dashed) lines refer to the constant (power-law) viscosity profile. The top left panel shows the temporal evolution of the relative inclination angle $\theta_{{\rm out}}$, while the top right panel shows the evolution of $J_{\rm BH,x}/J_{\rm BH}$ against $J_{\rm BH,y}/J_{\rm BH}$, both for an initial BH with $M_{\rm BH,0}=10^6 \rm M_{\odot}$, $a_0=0.5$ and an accretion disc with $f_{\rm Edd}=0.1$ and $\alpha=0.09$, with $\theta_{{\rm out},0}= \pi / 6$. The blue dashed line represents the evolution of the spin components for a pure precessional motion around $\hat{\bf{J}}_{\rm disc,out} \, || \, \hat{z}$.
In the bottom left (right) panel we show the evolution of $J_{\rm BH,x}/J_{\rm BH}$ against $J_{\rm BH,y}/J_{\rm BH}$ for an initial relative inclination angle $\theta_{{\rm out},0}= \pi / 30$ ($\theta_{{\rm out},0}= \pi /3$), for an initial BH with $M_{\rm BH,0}= 10^6 \rm M_{\odot}$, $a_0=0.5$ and an accretion disc with $f_{\rm Edd}=0.1$ and $\alpha=0.09$.} \label{fig:spinth} \end{figure*}
\begin{figure*} \centering \includegraphics[width=144mm]{vsl.png} \caption{Coupled evolution of the relative inclination angle $\theta_{{\rm out}}$, BH mass $M_{\rm BH}$ and spin parameter $a$. Solid (dashed) lines refer to the constant (power-law) viscosity profile. Black lines refer to $f_{\rm Edd}=1$, red lines to $f_{\rm Edd}=0.1$, blue lines to $f_{\rm Edd}=0.01$. The dotted horizontal lines in the top panels mark $\theta_{{\rm out}} / \theta_{{\rm out},0}=10^{-1},10^{-2}, 10^{-3}$. Initial configuration: $M_{\rm BH,0}=10^6 \rm M_{\odot}$, $a_0=0.5$ and $\alpha=0.09$, with $\theta_{{\rm out},0}= \pi / 6$.} \label{fig:vsl} \end{figure*}
\subsection{Set up}
In this Section we study the coupled evolution of the BH and of the warped accretion disc using the approximation scheme described in the previous Section, in order to infer the evolution of $M_{\rm BH}$ and $\bf{J}_{\rm BH}$ as functions of time, in response to the gravitomagnetic interaction and matter accretion.\\
\begin{table*} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline VP & $f_{\rm Edd}$ & $\theta_{{\rm out}} / \theta_{{\rm out},0}$ & $\Delta t$ & $\Delta t/t_{\rm al} $ & $\Delta M_{\rm BH}/M_{{\rm BH},0}$ & $\Delta a/a_0$ \\ & & & ($10^6{\rm yr}$)& &(in units of $10^{-2}$) & (in units of $10^{-2}$)\\ \hline \hline & & $10^{-1}$ & 0.15 & 2.2 & 0.67 & 1.57 \\ & 1 & $10^{-2}$ & 0.30 & 4.4 & 0.74 & 3.13 \\ & & $10^{-3}$ & 0.45 & 6.6 & 1.11 & 4.69 \\ \hline & & $10^{-1}$ & 1.12 & 2.0 & 0.29 & 1.24 \\ C & 0.1 & $10^{-2}$ & 2.33 & 4.2 & 0.58 & 2.46 \\ & & $10^{-3}$ & 3.52 & 6.3 & 0.87 & 3.68 \\ \hline & & $10^{-1}$ & 9.28 & 2.0 & 0.23 & 0.99 \\ & 0.01 & $10^{-2}$ & 18.6 & 4.1 & 0.46 & 1.96 \\ & & $10^{-3}$ & 27.9 & 6.1 & 0.68 & 2.94 \\ \hline \hline & & $10^{-1}$ & 0.26 & 3.8 & 0.65 & 2.77 \\ & 1 & $10^{-2}$ & 0.53 & 7.8 & 1.30 & 5.49 \\ & & $10^{-3}$ & 0.81 & 11.9 & 1.98 & 8.20 \\ \hline & & $10^{-1}$ & 2.18 & 3.9 & 0.54 & 2.30 \\ PL & 0.1 & $10^{-2}$ & 4.39 & 7.9 & 1.08 & 4.57 \\ & & $10^{-3}$ & 6.64 & 11.9 & 1.64 & 6.84 \\ \hline & & $10^{-1}$ & 18.1 & 3.9 & 0.45 & 1.91 \\ & 0.01 & $10^{-2}$ & 36.3 & 7.9 & 0.89 & 3.79 \\ & & $10^{-3}$ & 54.7 & 11.9 & 1.35 & 5.67 \\ \hline \end{tabular} \caption{Summary of our parameters and results for the co-rotating case; we consider viscosity coefficient $\alpha=0.09$ and initial inclination angle $\theta_{{\rm out},0}=\pi/ 6 $, both for constant (C) and power-law (PL) viscosity profiles (VP). The initial BH has $M_{\rm BH,0} = 10^6 \rm M_{\odot}$ and $a_{0} = 0.5$. The accretion rate $f_{\rm Edd}$ varies over three orders of magnitude, and we record the times needed to decrease the relative inclination angle by a factor of 10, 100 or 1000, comparing them with the estimated alignment timescale of equation (\ref{eqn:alignment timescale}); we also report the relative mass and spin variations.} \label{tab:angle variations} \end{center} \end{table*}
At $t=0$ the outer disc, extending up to a radius $R_{\rm out}$, defines the fixed reference frame $Oxyz$.
In this frame the external edge of the disc lies in the $x,y$ plane and the orbital angular momentum at $R_{\rm out}$ is
\begin{equation} \label{eqn:initial disc position} (L_x(R_{{\rm out}}),L_y(R_{{\rm out}}),L_z(R_{{\rm out}}))=L(R_{{\rm out}})(0,0,1) \end{equation}
while the BH spin is initially inclined by $\theta_{{\rm out},0}$ with respect to the $z$ axis:
\begin{equation} \label{eqn:initial BH spin position} (J_{{\rm BH},x},J_{{\rm BH},y},J_{{\rm BH},z})=J_{\rm BH} (\sin{\theta_{{\rm out},0}},0,\cos{\theta_{{\rm out},0}}). \end{equation}
If at $t \neq 0$ we know the components of $\mathbf{J}_{\rm BH}$ in the fixed reference frame $Oxyz$, there is always a rotated reference frame $O'x'y'z'$ where $\mathbf{J}_{\rm BH}$ lies along the new $z'$ axis (see also the discussion about reference frames in Section \ref{Sec:analitic solution}). The two reference frames are related by a rotation $\mathcal{R}$, which depends only on the components $J_{{\rm BH},x}$, $J_{{\rm BH},y}$, $J_{{\rm BH},z}$ of $\mathbf{J}_{\rm BH}(t)$ in the fixed reference frame. If $\mathcal{R}_{ij}$ is the matrix associated with this rotation, we can easily find the components of $\mathbf{J}_{\rm BH} (t)$ and $\mathbf{L}(R_{{\rm out}},t)$ in the rotated frame:
\begin{equation} \begin{aligned} & J'_{{\rm BH},i}(t)=\mathcal{R}_{ij}J_{{\rm BH},j}(t)\Rightarrow \mathbf{J}'_{\rm BH}(t)=(0,0,J_{\rm BH}(t))\\ & L'_{i}(R_{{\rm out}},t)=\mathcal{R}_{ij}L_{j}(R_{{\rm out}},t). \end{aligned} \end{equation}
As shown by \citet{ScheuerFeiler1996} for the constant viscosity profile and \citet{MartinPringleTout2007} for the power-law viscosity profile, in this special rotated frame of reference it is possible to calculate analytically the expression of the gravitomagnetic torque, using equation (\ref{eqn:spin gravitomagnetic variation}):
\begin{equation} \label{eqn:torque integral} \begin{aligned} & (\delta J'_{{\rm BH},x}+i\delta J'_{{\rm BH},y})_{\rm gm}= \\ & \left( -i \frac{4 \pi G J_{\rm BH}(t)}{c^2}\int_{\rm disc}\frac{L(R,t)~W'(R,t)}{R^2}~dR \right)~t_{\rm BP}(R_{\rm warp}) \end{aligned} \end{equation}
where $L(R,t)$ is given by (\ref{eqn:solution for L}) and $W'(R,t)$ by (\ref{eqn:solution for W, nu constant}) or (\ref{eqn:solution for W, nu power-law}). The analytic expressions of the gravitomagnetic torques for the two viscosity profiles are reported in the Appendix. From the torques it is possible to find the values of the spin variations $(\delta J'_{{\rm BH},x})_{\rm gm}$, $(\delta J'_{{\rm BH},y})_{\rm gm}$ and $(\delta J'_{{\rm BH},z})_{\rm gm}$ in this rotated reference frame. Finally, in order to obtain their expressions in our fixed reference frame $Oxyz$, we rotate them back using the inverse rotation $\mathcal{R}^{-1}$:
\begin{equation} (\delta J_{{\rm BH},i})_{\rm gm}= (\mathcal{R}^{-1})_{ij} (\delta J'_{{\rm BH},j})_{\rm gm}. \end{equation}
Once we know the spin variations due to the gravitomagnetic coupling, the modulus variation can be calculated from equation (\ref{eqn:spin modulus variation}) and the global spin variation from equation (\ref{eqn:total spin variation}).
\subsection{Results}
We computed, within the adiabatic approximation, the joint evolution of the BH mass and spin during the process of alignment under the assumption that matter is co-rotating with the BH.
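Schematically, the structure of this iteration can be illustrated by the following minimal sketch (our own illustration, not the code used for the results below): the full gravitomagnetic torque integral of equation (\ref{eqn:torque integral}) is replaced by the alignment/precession split of equation (\ref{eqn:2timescales}), mass and spin-modulus growth are neglected, and time is measured in units of $\tau_{\rm gm}$.
\begin{verbatim}
import numpy as np

def evolve_spin_direction(theta0, mu=np.pi/4, dt=0.01, n_steps=2000):
    """Illustrative adiabatic loop: the BH spin versor is pushed towards
    the outer-disc direction (the z axis of Oxyz) on the timescale
    tau_gm/cos(mu) while precessing about it on tau_gm/sin(mu)."""
    j = np.array([np.sin(theta0), 0.0, np.cos(theta0)])  # initial spin versor
    z = np.array([0.0, 0.0, 1.0])                        # \hat J_disc,out
    theta = []
    for _ in range(n_steps):
        align   = np.cross(j, np.cross(z, j)) * np.cos(mu)  # towards z
        precess = np.cross(z, j) * np.sin(mu)               # about z
        j = j + (align + precess) * dt
        j /= np.linalg.norm(j)        # |J_BH| is kept fixed in this sketch
        theta.append(np.arccos(j[2]))
    return np.array(theta)

theta = evolve_spin_direction(np.pi / 6)
print(theta[::500])  # theta_out decays roughly as exp(-cos(mu) t/tau_gm)
\end{verbatim}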
We iterated equations (\ref{eqn:mass variation}) and (\ref{eqn:total spin variation}), from the initial conditions (\ref{eqn:initial disc position}) and (\ref{eqn:initial BH spin position}), recording the updated values of $M_{\rm BH}$, $a$ and of the relative inclination angle $\theta_{{\rm out}}$ at every time step $\delta t\sim t_{\rm BP}(R_{\rm warp})$. We initially choose a spinning BH with $M_{\rm BH}=10^6 \rm M_{\odot}$ and $a=0.5$, and an accretion disc with $\dot{M}=0.1~\dot{M}_{\rm Edd}$, $\alpha=0.09$; both power-law and constant viscosity profiles are considered. Three initial relative inclination angles of $\theta_{{\rm out},0}=\pi/3, \pi/6, \pi/30$ have been tested. Figure \ref{fig:spinth} shows, as functions of time, the inclination angle $\theta_{{\rm out}}(t)$ and the two components of the BH spin unit vector in the plane $Oxy$; red lines refer to the analytic solutions given by \citet{MartinPringleTout2007}. As shown in the top-left panel of Figure {\ref{fig:spinth}}, the relative inclination angle $\theta_{{\rm out}}$ decreases exponentially with time on the scale $t_{\rm al}$, and the decrease is more rapid for the constant viscosity disc (solid line).\\
The BH spin aligns with the external disc and precesses, as illustrated in the top-right panel, where we also compare the actual evolution of the spin versor with a pure precessional motion (blue dashed line). Our results are only qualitatively consistent with those of \citet{MartinPringleTout2007}; in our calculations the alignment process appears to be less efficient and the spin precession more pronounced. The difference between our results and their analytical solutions arises from three facts: (i) we included mass and spin modulus evolution; (ii) Martin et al. did not carry out the rotation connecting the BH reference frame $O'x'y'z'$ to the disc frame $Oxyz$; (iii) for the constant viscosity profile, we evaluate $\nu_1$ and $\nu_2$ from equation (\ref{eqn:Sa-Su viscosity 2}) at the Bardeen-Petterson radius, $R_{\rm BP}\approx 0.4 R_{\rm warp}$, while Martin et al. evaluate them at the warp radius, $R_{\rm warp}$. For an initially mildly inclined BH spin, the difference tends to disappear, because the rotation matrix approaches the identity matrix. However, for large $\theta_{{\rm out},0}$ the discrepancy becomes more important (see, e.g., the bottom panels of Figure \ref{fig:spinth}). Figure \ref{fig:vsl} shows the evolution of $\theta_{{\rm out}}$ and $a$ as functions of time and of the increasing BH mass, for an initial BH with $M_{{\rm BH},0}= 10^6 \rm M_{\odot}$, $\theta_{{\rm out},0}=\pi/6$ and spin parameter $a_{0}=0.5$, and for $f_{\rm Edd}=1,0.1$, and $0.01$. Both constant and power-law viscosity profiles are explored, always with viscosity parameter $\alpha=0.09$. Alignment shows a strong dependence on the accretion rate: for the constant (power-law) viscosity model the time necessary to reduce the relative inclination angle by a factor of 100 varies from $3.0 \times 10^5{\rm yr}$ ($5.3 \times 10^5{\rm yr}$) for $f_{\rm Edd}=1$ to $1.86 \times 10^7{\rm yr}$ ($3.63 \times 10^7{\rm yr}$) for $f_{\rm Edd}=0.01$. During this alignment time, the BH has increased its mass by a small fraction, between $0.74\%$ ($1.30\%$) for $f_{\rm Edd}=1$ and $0.46\%$ ($0.89\%$) for $f_{\rm Edd}=0.01$.
The spin parameter $a$ increases due to accretion, but only by a small amount, between $3.13\%$ ($5.47\%$) for $f_{\rm Edd}=1$ and $1.96\%$ ($3.79\%$) for $f_{\rm Edd}=0.01$.\\
In Table \ref{tab:angle variations} we summarize the results of Figure \ref{fig:vsl}. There, we also compare the time $\Delta t$ necessary to decrease the initial inclination angle by a given amount with $t_{\rm al}$ estimated from equation (\ref{eqn:alignment timescale}): the values of $\Delta t$ are consistent with the interpretation of $t_{\rm al}$ as an $e$-folding time. A closer match of $t_{\rm al}$ with the numerical outcomes requires $(\cos \mu)_{\rm C} \approx 0.78$ instead of $\sqrt{2}/2$ for constant viscosity profiles, and $(\cos \mu)_{\rm PL} \approx 0.41$ for power-law viscosity profiles (the subscripts C and PL simply indicate that for different viscosity prescriptions we find different $\cos{\mu}$ values). Then, the ratios between the precession and the alignment timescales are $(t_{\rm prec}/t_{\rm al})_{\rm C}=0.81$ and $(t_{\rm prec}/t_{\rm al})_{\rm PL}=2.2$. These results are still qualitatively consistent with \citet{MartinPringleTout2007}, who have shown that both $t_{\rm al}$ and $t_{\rm prec}/t_{\rm al}$ increase with the exponent $\beta$ of the viscosity profile; small quantitative differences are due to different assumptions and different calculation methods. Finally, we notice that the scaling of $\Delta t$ with $f_{\rm Edd}$ is in good agreement with estimate (\ref{eqn:alignment timescale}).
\subsection{Exploring the parameter space}
\begin{figure*} \begin{minipage}{0.45 \linewidth} \centering \includegraphics[width=82mm]{allt.png} \caption{Alignment time (defined as the time needed for the relative inclination angle to decrease by two orders of magnitude, going from $\pi /6$ to $\pi /600$), as a function of the initial BH mass, $M_{\rm BH,0}$, for the constant viscosity profile. The black lines refer to $f_{\rm Edd}=1$ and the red ones to $f_{\rm Edd}=0.001$; the solid lines are for initial spin parameter $a_0=0.9$ while the dashed ones are for $a_0=0.1$; finally, the thin lines represent $\alpha=0.09$ and the thick ones $\alpha=0.18$.} \label{fig:allt} \end{minipage} \hspace{10mm} \begin{minipage}{0.45 \linewidth} \centering \includegraphics[width=82mm]{allms.png} \caption{Relative mass and spin increase during the alignment time (defined as the time needed for the relative inclination angle to decrease by two orders of magnitude, going from $\pi /6$ to $\pi /600$) as a function of the initial BH mass, for the constant viscosity profile. Black (red) lines refer to $f_{\rm Edd}=1$ ($f_{\rm Edd}=0.001$). Solid (dashed) lines are for $a_0=0.9$ ($a_0=0.1$); finally, the thin (thick) lines represent $\alpha=0.09$ ($\alpha=0.18$).} \label{fig:allms} \end{minipage} \end{figure*}
\begin{figure*} \includegraphics[width=144mm]{tempi.jpg} \caption{Color coded map of the alignment time (defined as the time necessary for the relative inclination angle to go from $\theta_{{\rm out},0}= \pi / 6$ to $\theta_{{\rm out}}= \pi / 600$) for a BH of $M_{{\rm BH},0}=10^6 \rm M_{\odot}$, as a function of the accretion rate expressed through the Eddington factor $f_{\rm Edd}$ and of the initial BH spin parameter $a_0$. The colour scale represents $t_{\rm al}$ in years. Top (bottom) panels refer to the constant (power-law) viscosity profile. Left (right) panels refer to $\alpha=0.09$ ($\alpha=0.18$).
} \label{fig:tempi} \end{figure*}
\begin{figure*} \includegraphics[width=144mm]{masse.jpg} \caption{Color coded map of the relative increase of mass during the alignment time (defined as the time necessary for the relative inclination angle to go from $\theta_{{\rm out},0}= \pi / 6$ to $\theta_{{\rm out}}= \pi / 600$) for a BH of $M_{{\rm BH},0}=10^6 \rm M_{\odot}$, as a function of the accretion rate expressed through the Eddington factor $f_{\rm Edd}$ and of the initial BH spin parameter $a_0$. The colour scale represents $\Delta M_{\rm BH} / M_{\rm BH,0}$. Top (bottom) panels refer to the constant (power-law) viscosity profile. Left (right) panels refer to $\alpha=0.09$ ($\alpha=0.18$).} \label{fig:masse} \end{figure*}
\begin{figure*} \includegraphics[width=144mm]{spin.jpg} \caption{Color coded map of the relative increase of the spin parameter during the alignment time (defined as the time necessary for the relative inclination angle to go from $\theta_{{\rm out},0}= \pi / 6$ to $\theta_{{\rm out}}= \pi / 600$) for a BH of $M_{{\rm BH},0}=10^6 \rm M_{\odot}$, as a function of the accretion rate expressed through the Eddington factor $f_{\rm Edd}$ and of the initial BH spin parameter $a_0$. The colour scale represents $\Delta a/ a_0$. Top (bottom) panels refer to the constant (power-law) viscosity profile. Left (right) panels refer to $\alpha=0.09$ ($\alpha=0.18$).} \label{fig:spin} \end{figure*}
Here we explore more systematically how the fractional increases of $M_{\rm BH}$ and $a$, and the alignment time, vary with the initial mass $M_{{\rm BH},0}$, spin ${a}_0$, $f_{\rm Edd}$ and $\alpha$, for both constant and power-law viscosity profiles, fixing $\theta_{\rm out,0}= \pi/6$. The evolution is followed until $\theta_{\rm out}$ has decreased by a factor of 100; we define $\Delta t_{\theta_0 \rightarrow \theta_0/100}$ as the corresponding ``alignment'' time, computed self-consistently. We also infer from the numerical model the relative growths of BH mass $\Delta M_{\rm BH} / M_{\rm BH,0}$ and spin parameter $\Delta a / a_0$ during $\Delta t_{\theta_0 \rightarrow \theta_0/100}$. Figures \ref{fig:allt} and \ref{fig:allms} show the weak dependence of the alignment time $\Delta t_{{\theta_0} \rightarrow \theta_0/100}$, and of the relative mass and spin parameter increases, on the initial BH mass $M_{\rm BH,0}$, for eight different sets of the other parameters. Comparing the numerical scaling of $\Delta t_{\theta_0 \rightarrow \theta_0/100}$ with $M_6$ with that of expression (\ref{eqn:alignment timescale}), we again notice good agreement, in particular for $f_{\rm Edd}$ not too close to the Eddington limit and $M_{\rm BH,0} \lesssim 10^6 \rm M_{\odot}$. By contrast, the alignment process is more sensitive to $f_{\rm Edd}$, ${a}_0$ and $\alpha$. Color-coded maps of $\Delta t_{{\theta_0} \rightarrow \theta_0/100}$ (Figure \ref{fig:tempi}), of $\Delta M/M_{{\rm BH},0}$ (Figure \ref{fig:masse}) and $\Delta {a}/{a}_0$ (Figure \ref{fig:spin}) are constructed in the ${a}_0$ versus $f_{\rm Edd}$ plane, varying the coefficient $\alpha$ and the viscosity law inside the accretion disc.\\
Figure \ref{fig:tempi} shows the range of the alignment time $\Delta t_{{\theta_0} \rightarrow \theta_0/100}$ (as inferred from the numerical model) of interest for the study of BH evolution. The alignment time can vary by many orders of magnitude, from $\sim 10^5 {\rm yr}$ to $\sim 10^{10} {\rm yr}$, and it reveals strong dependences both on the accretion rate and on the initial spin parameter.
In addition, smaller viscosities ($\alpha=0.09$) give shorter timescales compared with higher viscosities ($\alpha=0.18$). A simple comparison between alignment times $\Delta t_{{\theta_0}\rightarrow {\theta_0/100}}$ for different initial spin parameters, but identical $f_{\rm Edd}$, reveals that the scaling factors for $a$ and $\eta_{0.1}(a)$ in equation (\ref{eqn:alignment timescale}) are in good agreement with the numerical results.\\
Figure \ref{fig:masse} shows that the relative amount of mass accreted during the alignment process is small compared with the initial BH mass. It varies between $\sim 10^{-3}$ and $\sim 10^{-2}$ for the constant viscosity profile, and between $\sim 2.5 \times 10^{-3}$ and $\sim 3 \times 10^{-2}$ for the power-law viscosity profile. Even though the accretion rate varies over four orders of magnitude, there are no comparable variations in the relative BH mass growth: a larger $f_{\rm Edd}$ means a larger accretion rate, but it also reduces the alignment time. The relative increase of the spin parameter $a$ is shown in Figure \ref{fig:spin}. The evolution of $a$ is the combination of different, and sometimes opposite, tendencies: a highly spinning black hole requires a longer time to align, but the particles at its innermost stable orbit deposit a smaller angular momentum onto the BH. The spin modulus increases significantly during the alignment for initially slowly rotating BHs and high accretion rates, typically with $ 5 \times 10^{-3} \lesssim \Delta a / a_0 \lesssim 8 \times 10^{-2}$ for a constant viscosity profile and $ 10^{-2} \lesssim \Delta a / a_0 \lesssim 2 \times 10^{-1}$ for a power-law profile.
\subsection{Counter-rotating case}
\begin{figure*} \centering \includegraphics[width=144mm]{vslcnt.png} \caption{Coupled evolution of the relative inclination angle $(\pi-\theta_{{\rm out}})$, BH mass $M_{\rm BH}$ and spin parameter $a$ for a counter-rotating disc. Solid (dashed) lines refer to the constant (power-law) viscosity profile. Black lines refer to $f_{\rm Edd}=1$, red lines to $f_{\rm Edd}=0.1$, blue lines to $f_{\rm Edd}=0.01$. The dotted horizontal lines in the top panels mark $(\pi-\theta_{{\rm out}}) / (\pi-\theta_{{\rm out},0})=10,10^{2},10^{3}$.
Initial configuration: $M_{\rm BH,0}=10^6 \rm M_{\odot}$, $a_0=0.5$, $f_{\rm Edd}=0.1$ and $\alpha=0.09$, with $\pi-\theta_{{\rm out},0}= \pi / 600$.} \label{fig:vslcnt} \end{figure*} \begin{table*} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline VP & $f_{\rm Edd}$ & $(\pi - \theta_{{\rm out}}) / (\pi-\theta_{{\rm out},0})$ & $\Delta t$ & $\Delta M_{\rm BH}/M_{{\rm BH},0}$ & $\Delta a/a_0$ \\ & & & ($10^6{\rm yr}$)&(in units of $10^{-2}$) & (in units of $10^{-2}$)\\ \hline \hline & & $10$ & 0.085 & 0.40 & -3.99 \\ & 1 & $10^2$ & 0.17 & 0.78 & -7.81 \\ & & $10^3$ & 0.25 & 1.16 & -11.5 \\ \hline & & $10$ & 0.66 & 0.31 & -3.12 \\ C & 0.1 & $10^2$ & 1.31 & 0.61 & -6.15 \\ & & $10^3$ & 1.96 & 0.91 & -9.11 \\ \hline & & $10$ & 5.29 & 0.25 & -2.49 \\ & 0.01 & $10^2$ & 10.5 & 0.49 & -4.92 \\ & & $10^3$ & 15.7 & 0.73 & -7.31 \\ \hline \hline & & $10$ & 0.15 & 0.68 & -6.79 \\ & 1 & $10^2$ & 0.29 & 1.33 & -13.1 \\ & & $10^3$ & 0.42 & 1.95 & -19.1 \\ \hline & & $10$ & 1.21 & 0.57 & -5.68 \\ PL & 0.1 & $10^2$ & 2.38 & 1.11 & -11.0 \\ & & $10^3$ & 3.52 & 1.64 & -16.2 \\ \hline & & $10$ & 10.1 & 0.47 & -4.73 \\ & 0.01 & $10^2$ & 19.9 & 0.93 & -9.24 \\ & & $10^3$ & 29.5 & 1.37 & -13.6 \\ \hline \end{tabular} \caption{Summary of our parameters and results for the counter-rotating case; we consider viscosity coefficient $\alpha=0.09$ and initial inclination angle $\theta_{{\rm out},0}=\pi(1-1/6000)$, both for constant (C) and power-law (PL) viscosity profiles (VP). The initial BH has $M_{\rm BH,0} = 10^6 \rm M_{\odot}$ and $a_{0} = 0.5$. The accretion rate $f_{\rm Edd}$ varies over three orders of magnitude, and we record the times needed for $(\pi-\theta_{{\rm out}})$ to increase by a factor of 10, 100 or 1000 relative to $(\pi-\theta_{{\rm out},0})$; we also report the relative mass and spin variations.} \label{tab:angle variations cnt} \end{center} \end{table*} In this Section, we investigate the counter-rotating configuration for a BH and its misaligned accretion disc, for initial values of $\theta_{{\rm out},0}$ close to $\pi$. As shown by \citet{ScheuerFeiler1996} and \citet{MartinPringleTout2007}, on the timescale $t_{\rm al}$ the BH spin again aligns with the outer regions of the accretion disc, if this disc is regularly and coherently fed. Due to the Bardeen-Petterson effect, we expect the innermost part of the disc (approximately within $R_{\rm warp}$) to orbit in a plane which is perpendicular to $\bf{J}_{\rm BH}$, with orbital angular momentum density ${\bf L}$ counter-aligned with respect to the BH spin. In this BH-disc configuration, one of the major changes is in the radius of the innermost stable orbit, which increases due to the asymmetry seeded in the geodesic motion of particles in the Kerr metric. As a consequence, the energy and the orbital angular momentum of particles at $R_{\rm ISO}$ increase, while the BH radiative efficiency decreases \citep[see, e.g.,][]{Wilkins1972,Bardeenetal1972}. The Bardeen-Petterson timescale (\ref{eqn:tbp}) and the warp radius (\ref{eqn:Rw implicit}) have the same values as in the co-rotating case. Since $t_{\rm BP} \ll t_{\rm al}$, the adiabatic approximation holds again, but the small deformation approximation\footnote{The functions $\psi$ and $\chi$, defined in equations (\ref{eqn:psi definition}) and (\ref{eqn:chi definition}), are invariant under the transformation $\theta_{{\rm out}} \rightarrow \left( \pi-\theta_{{\rm out}} \right)$.
} has a limited validity, requiring $\theta_{{\rm out}} \sim \pi.$ In order to remain consistent with the approximation scheme, we trace the alignment process only from $\pi$ to $\left( \pi - \pi/6 \right)$. In the counter-rotating case and small deformation approximation (i.e. $\theta_{{\rm out}} \sim \pi$), the disc profile can be solved analytically. We choose a reference frame $O''x''y''z''$ where $\hat{\bf{J}}_{\rm BH}=(0,0,-1)$ and we solve equation (\ref{eqn:angular momentum}) for $\hat{\mathbf{l}}$ in this frame. For constant viscosities, the function $W''(R/R_{\rm warp})={\hat l}''_{\rm x}+i{\hat l}''_{\rm y}$ describing the warp is \begin{equation} W''_{\rm C,cnt}=C \exp{\left(-\sqrt{2}~(1+i)\left( \frac{R}{R_{\rm warp}}\right)^{-\frac{1}{2}} \right) } \end{equation} while for a power-law viscosity profile \begin{equation} \begin{aligned} & W''_{\rm PL,cnt}=D \left( \frac{R}{R_{\rm warp}} \right)^{-\frac{1}{4}}\\ & \qquad \times \quad K_{{1}/{2(1+\beta)}}\left(\frac{\sqrt{2}(1+i)}{(1+\beta)} \left( \frac{R}{R_{\rm warp}} \right)^{-\frac{1+\beta}{2}} \right). \end{aligned} \end{equation} We then apply the adiabatic approximation to study the coupled evolution of the BH-disc system. The joint evolutions of $\theta_{\rm out}$, $M_{\rm BH}$ and $a$ are presented in Figure \ref{fig:vslcnt} and in Table \ref{tab:angle variations cnt}, for different accretion rates and viscosity profiles. The shorter timescales in the counter-rotating configuration stem from the dependence of the alignment timescale on the spin modulus, $t_{\rm al} \propto {a}^{5/7}$. Counter-rotating matter carries a larger and opposite angular momentum, reducing the spin modulus and the alignment timescale in the process. \section{Discussion and conclusions} \label{sec:conclusions} In this paper, we followed the joint evolution of the mass $M_{\rm BH}$ and spin $\bf{J}_{\rm BH}$ of a BH inside a geometrically thin, extended accretion disc. The BH spin is initially misaligned with the angular momentum of the disc in its outer regions. On the short Bardeen-Petterson timescale, the disc responds to the Lense-Thirring precession, imposed by the BH spin, and propagates a warp that is maximum around $R_{\rm warp}$; within this radius matter orbits around the BH in a plane which is perpendicular to the BH spin. According to angular momentum conservation, the warped disc interacts with the BH spin and, on the longer alignment timescale, the BH aligns its spin to ${\hat \bf{J}}_{\rm disc,out}$. In its outer regions, the disc is assumed to be fed by matter that flows along a plane that keeps its coherence in the direction ${\hat \bf{J}}_{\rm disc,out}$ for a sufficiently long timescale to allow the gravitomagnetic interaction to complete the BH-disc alignment. While doing so, the BH accretes matter and angular momentum from the inner portion of the disc, which is aligned or anti-aligned with the BH spin. Given the mismatch between the timescale for warp propagation and the alignment time \citep{ScheuerFeiler1996, NatarajanPringle1998}, we devised a method that enabled us to follow, in the small-deformation approximation, the co-evolution of the BH mass and spin in a self-consistent manner, carrying out a large survey of the parameter space and a critical review of the approximations used.
It is found that, considering a small initial relative inclination angle ($\theta_{{\rm out},0} \lesssim \pi/6$, small deformation approximation), matter in the inner part of the accretion disc has orbital angular momentum density parallel to $\bf{J}_{\rm BH}$. The gravitomagnetic interaction of the BH with this warped accretion disc and their coupled evolution bring the BH into alignment with the outer regions of the disc, i.e. $\theta_{{\rm out}}(t) \rightarrow 0$. The timescale $t_{\rm al}$ of equation (\ref{eqn:alignment timescale}) gives a good estimate of the BH-disc alignment time for an $e$-folding reduction of the angle of misalignment, in very good agreement with numerical results. For a maximally rotating Kerr BH accreting at the Eddington rate, $t_{\rm al}\sim 10^{5-6}$ yr, depending on the viscosity parameter $\alpha$ and on the viscosity profile model, in agreement with early findings by \citet{NatarajanPringle1998}. On the other hand, environments where the accretion rate is extremely low imply longer alignment timescales, as $t_{\rm al} \propto {\dot M}^{-32/35}$. In the explored BH mass range, the alignment time displays a weak dependence on $M_{\rm BH,0}$: with all the other parameters fixed, alignment of a $10^7\,\rm M_{\odot}$ BH occurs, on average, at the same pace as that of a $10^5\rm M_{\odot}$ BH. The BH mass and spin modulus increase during alignment, but their fractional increases are modest. After surveying a wide parameter space, we find that $0.1 \% \lesssim \Delta M_{\rm BH}/M_{{\rm BH},0}\lesssim 3\%$, while the spin parameter increases by $ 0.5 \% \lesssim \Delta {a}/{a}_0 \lesssim 20 \% $. Starting with an almost anti-parallel BH-disc configuration ($\theta_{{\rm out},0} \approx \pi$), the orbital angular momentum density of the inner part of the disc is initially counter-aligned with respect to the BH spin. Nevertheless, the BH still tends to reduce the degree of misalignment (i.e. $\theta_{{\rm out}}(t)$ decreases), because of the nature of the gravitomagnetic interaction \citep[see also][]{ScheuerFeiler1996, MartinPringleTout2007}. The accretion of matter with opposite angular momentum at $R_{\rm ISO}$ decreases $\bf{J}_{\rm BH}$ and $a$ at higher rates than those at which they grow in the specular co-rotating case. Since $t_{\rm al} \propto a^{5/7}$, the alignment process is then more efficient, i.e. the angle reduction speed is higher. Comparing decreases of the relative inclination angle $\theta_{{\rm out}}$ symmetric with respect to $\pi/2$, we find that the fractional decrease of the spin parameter in the counter-rotating case is, in modulus, higher than in the specular co-rotating case, while the relative mass increase is slightly lower. The BH spin flip, due to the reduction of $\theta_{\rm out}$ below $\pi/2$, will occur in this extended disc when $a$ reaches its minimum value. At that time the jet of relativistic particles (if present) will cross the warped disc, likely affecting the subsequent BH-disc evolution and the BH feeding. This process deserves a separate investigation. It is still poorly known whether a spinning BH in an active galactic nucleus is fed through a disc that maintains its angular momentum direction {\it stable} over a Salpeter timescale $t_{\rm S}$. Two opposite, yet both plausible, scenarios have been proposed and discussed. \citet{NatarajanPringle1998} speculated that the stability of jets in radio-loud AGNs requires a long-lived phase of stable accretion capable of maintaining spatial coherence, i.e.
a fixed direction of $\bf{J}_{\rm disc, out},$ for a time as long as $10^8$ yr. By contrast, \citet{KingPringle2006, KingPringle2007, KingPringleHofmann2008} speculated recently that AGN activity, triggered by gas-rich major mergers, is chaotic in nature even within a single merger event, i.e. it occurs through a sequence of uncorrelated, short-lived accretion episodes. In their picture the corresponding discs, truncated by their own self-gravity, continuously change their inclination and feed the BH on their consumption timescale. Under these circumstances the BH spin modulus is seen to either increase or decrease at random, clustering around small average values ${a}\sim 0.1-0.3$. This model would simultaneously explain the relatively low radiative efficiency of the quasar population as inferred from the background light \citep[e.g.,][]{Merloni2004,MerloniHeinz2008}, and the possibility of growing BHs as massive as $10^9 \rm M_{\odot}$ from small BH seeds already at redshift $z\sim 6$ \citep{KingPringle2006}. Isolated discs, truncated by their own self-gravity, carry a well-defined disc angular momentum and are accreted by the BH on a finite timescale. Starting with a misaligned BH-disc configuration, the BH spin changes direction significantly only if (i) the alignment time is shorter than the disc consumption time, $t_{\rm al}<t_{\rm disc}$; and (ii) the magnitude of the disc angular momentum is comparable to the BH spin magnitude, i.e. $J_{\rm disc} \gtrsim J_{\rm BH}$. The first condition is verified for the whole parameter range explored in this paper. The condition $J_{\rm disc} \gtrsim J_{\rm BH}$ depends instead sensitively upon $R_{{\rm out}}$. Equation (\ref{eqn:ratio between momenta}) establishes that isolated discs around large BHs truncate at $R_{{\rm out}}$ such that $J_{\rm disc} < J_{\rm BH}$. Condition (ii) is satisfied for BH masses $ \lesssim 3 \times 10^7 \rm M_{\odot}$. We note here that since we model our discs using the Shakura-Sunyaev solution for Kramers' opacity, we cannot rigorously estimate $R_{{\rm out}}$, and $J_{\rm disc}$, for BHs with mass $\lesssim10^5-10^6 \rm M_{\odot}$. An extension of disc solutions to different, self-consistent opacities is non-trivial \citep{HureetalI1994,HureetalII1994} and we postpone a detailed analysis to future work. BHs with masses $\lesssim 3\times10^7 \rm M_{\odot}$ align efficiently in discs truncated by their own self-gravity, implying alignment also in the case of stochastically fed AGNs\footnote{Here we are not considering accretion events involving a disc with a very small amount of mass, i.e. below $\sim (H/R)M_{\rm BH}$. This light accretion disc has an outer radius much smaller than (\ref{eqn:r out}) and thus carries an angular momentum $J_{\rm disc}\ll J_{\rm BH}$. As a consequence, the disc has a very short consumption timescale, and the alignment process is active for a very short period of time. Therefore the alignment of $\bf{J}_{\rm BH}$ around $\bf{J}_{\rm tot}$ (close to $\bf{J}_{\rm BH}$) is expected to be unimportant.}, where not only $a$ fluctuates with time, but also the direction of the BH spin continually changes due to the rapidity of the alignment process. By contrast, rapidly spinning ($a\sim 1$) heavier BHs with $M_{\rm BH} \gtrsim 10^8 \rm M_{\odot}$ have truncated discs that carry little angular momentum compared with $J_{\rm BH}$. In this case alignment is ineffective and the orientation of the BH spin is not influenced significantly by the surrounding short-lived disc.
In light of these findings, the vector $\bf{J}_{\rm BH}$ appears to carry precious information on the orientation of the plane through which the BH has been fed, and on whether accretion has been long-lived and coherent or short-lived and random. The method developed in this paper is sufficiently versatile that it will be implemented in numerical simulations describing the process of pairing of dual BHs in circumnuclear discs during their on-the-fly accretion (Dotti et al., in preparation), to improve upon the speculation \citep{Bogdanovicetal2007} that, in gas-rich galaxy mergers, binary BHs have time to align their spins orthogonally to their orbital plane, as discussed in \cite{Escalaetal2005,Dottietal2006,Mayeretal2007,Dottietal2007,Dottietal2009,ColpiDotti2009}. The spin-orbit configuration is relevant for studying the impact of BH recoils, which occur after two BHs have coalesced \citep[see, e.g.,][]{Pretorius2007}. Detection of gravitational waves, emitted by coalescing BHs, with the {\it Laser Interferometer Space Antenna} ({\it LISA}) \citep{Benderetal1994, HilsBender1995} will be able to constrain the moduli and directions of the spins of the coalescing BHs \citep{Vecchio2004,LangHughes2006}. \section{ACKNOWLEDGMENTS} We wish to thank Vittorio Gorini, Sergio Cacciatori, Alberto Sesana, Bernadetta Devecchi, Oliver Piattella and Luca Rizzi for useful discussions and suggestions. \bibliographystyle{mn2e}
1,108,101,563,047
arxiv
\section{Motivation} In a perfect world, decision makers of any shape or form would always be capable of producing optimal behaviors. In reality, however, the inescapable constraints of an overwhelmingly-complex environment force agents to seek out alternative behaviors that are sufficiently satisfying or, more succinctly, satisficing. While satisficing is a longstanding, well-studied idea about how to understand resource-limited cognition~\citep{simon1955behavioral,simon1956rational,newell1958elements,newell1972human,simon1982models}, it has been usually treated as contrary to rational analysis~\citep{anderson1990adaptive}. In particular, modern resource-rational analyses, both utility-theoretic~\citep{griffiths2015rational,lieder2020resource} and information-theoretic~\citep{sims2003implications,sims2016rate,gershman2020origin}, still aim to find an optimal policy but to do so within resource constraints. An alternative view for the resource-rational learning of satisficing agents comes from recent theoretical work in bandit learning and reinforcement learning~\citep{arumugam2021deciding,arumugam2022deciding}; this perspective focuses on learning to achieve a deliberately-sub-optimal, satisficing policy that requires obtaining fewer bits of information from the environment. Unlike other approaches to resource-rational analysis, this methodology is accompanied by guarantees about provably-efficient learning through its use of epistemic uncertainty~\citep{der2009aleatory} to resolve the underlying exploration-exploitation trade-off. \section{Rate-Distortion Theory} Here we offer a brief, high-level overview of rate-distortion theory~\citep{shannon1959coding,berger1971rate,cover2012elements}. Due to space constraints, precise mathematical details are relegated to Appendix \ref{sec:prelims}. A lossy compression problem consumes as input a fixed information source $\mathbb{P}(X \in \cdot)$ and a distortion function $d: \mc{X} \times \mc{Z} \rightarrow \mathbb{R}_{\geq 0}$ which quantifies the loss of fidelity by using an element $z \in \mc{Z}$ in lieu of $x \in \mc{X}$. Then, for any $D \in \mathbb{R}_{\geq 0}$, the rate-distortion function quantifies the fundamental limit of lossy compression as $$\mc{R}(D) = \inf\limits_{Z \in \mc{Z}} \mathbb{I}(X;Z) \triangleq \inf\limits_{Z \in \mc{Z}} \mathbb{E}\left[\kl{\mathbb{P}\left(X \in \cdot \mid Z\right)}{\mathbb{P}(X \in \cdot)}\right] \text{ such that } \mathbb{E}\left[d(X,Z)\right] \leq D,$$ where the infimum is taken over all random variables $Z$ (representing the output of a channel given $X$ as input) that incur bounded expected distortion, $\mathbb{E}\left[d(X,Z)\right] \leq D$. As an aside, we note that one may equivalently define $\mc{R}(D)$ as an infimum over a constrained collection of either joint distributions (as done in \citep{csiszar1974extremum,dembo2002source}, for example) or conditional distributions (such as in \citep{blahut1972computation,kawabata1994rate}), representing the channel or lossy compression itself. Naturally, $\mc{R}(D)$ represents the minimum number of bits of information that must be retained on average from $X$ in order to achieve this bound on the expected loss of fidelity. Moreover, $\mc{R}(D)$ is well-defined for arbitrary information source and channel output random variables taking values on abstract spaces~\citep{csiszar1974extremum}. 
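For concreteness, the rate-distortion function above can be traced numerically for a finite alphabet with the classic Blahut--Arimoto iteration. The sketch below is our own illustration (not taken from the cited works): it fixes a Lagrange multiplier $\beta \geq 0$ and alternates between the optimal channel and its output marginal, and sweeping $\beta$ traces out points $(\mc{R}(D), D)$ of the curve.
\begin{verbatim}
import numpy as np

def rate_distortion_point(p_x, d, beta, n_iters=200):
    # One point on the R(D) curve of a discrete source p_x with
    # distortion matrix d[x, z], via the Blahut-Arimoto iteration
    # at Lagrange multiplier beta >= 0.
    n_x, n_z = d.shape
    q_z = np.full(n_z, 1.0 / n_z)                  # marginal over Z
    for _ in range(n_iters):
        logits = np.log(q_z)[None, :] - beta * d   # shape (n_x, n_z)
        q_z_given_x = np.exp(logits - logits.max(axis=1, keepdims=True))
        q_z_given_x /= q_z_given_x.sum(axis=1, keepdims=True)
        q_z = p_x @ q_z_given_x                    # update the marginal
    joint = p_x[:, None] * q_z_given_x
    distortion = float(np.sum(joint * d))
    safe = np.where(q_z_given_x > 0, q_z_given_x, 1.0)
    rate = float(np.sum(joint * np.log(safe / q_z[None, :])))  # in nats
    return rate, distortion

# Example: binary source with Hamming distortion; sweep beta to trace R(D).
p_x = np.array([0.5, 0.5])
d = 1.0 - np.eye(2)
curve = [rate_distortion_point(p_x, d, b) for b in (0.5, 2.0, 8.0)]
\end{verbatim}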
In certain contexts, it can be more suitable to employ the inverse of $\mc{R}(D)$, the distortion-rate function: $\mc{D}(R) = \inf\limits_{Z \in \mc{Z}} \mathbb{E}\left[d(X,Z)\right] \text{ such that } \mathbb{I}(X;Z) \leq R.$ For a given upper limit $R \in \mathbb{R}_{\geq 0}$ on the bits of information that can be transmitted, $\mc{D}(R)$ quantifies the minimum achievable distortion of the resulting compression. \section{Problem Formulation} We formulate a sequential decision-making problem as an episodic, finite-horizon Markov Decision Process (MDP)~\citep{bellman1957markovian,Puterman94} defined by $\mc{M} = \langle \mc{S}, \mc{A}, \mc{U}, \mc{T}, \beta, H \rangle$. Here $\mc{S}$ denotes a set of states, $\mc{A}$ is a set of actions, $\mc{U}:\mc{S} \times \mc{A} \rightarrow [0,1]$ is a deterministic reward or utility function providing evaluative feedback signals, $\mc{T}:\mc{S} \times \mc{A} \rightarrow \Delta(\mc{S})$ is a transition function prescribing distributions over next states, $\beta \in \Delta(\mc{S})$ is an initial state distribution, and $H \in \mathbb{N}$ is the maximum length or horizon. Within each one of $K \in \mathbb{N}$ episodes, the agent acts for exactly $H$ steps beginning with an initial state $s_1 \sim \beta$. For each timestep $h \in [H]$, the agent observes the current state $s_h \in \mc{S}$, selects action $a_h \sim \pi_h(\cdot \mid s_h) \in \mc{A}$, enjoys a reward $r_h = \mc{U}(s_h,a_h) \in [0,1]$, and transitions to the next state $s_{h+1} \sim \mc{T}(\cdot \mid s_h, a_h) \in \mc{S}$. A stationary, stochastic policy for timestep $h \in [H]$, $\pi_h:\mc{S} \rightarrow \Delta(\mc{A})$, encodes behavior as a mapping from states to distributions over actions. Letting $\{\mc{S} \rightarrow \Delta(\mc{A})\}$ denote the class of all stationary, stochastic policies, a non-stationary policy $\pi = (\pi_1,\ldots,\pi_H) \in \{\mc{S} \rightarrow \Delta(\mc{A})\}^H$ is a collection of exactly $H$ stationary, stochastic policies whose overall performance in any MDP $\mc{M}$ at timestep $h \in [H]$ when starting at state $s \in \mc{S}$ and taking action $a \in \mc{A}$ is assessed by its associated action-value function $Q^\pi_{\mc{M},h}(s,a) = \mathbb{E}\left[\sum\limits_{h'=h}^H \mc{U}(s_{h'},a_{h'}) \bigm| s_h = s, a_h = a\right]$, where the expectation integrates over randomness in the action selections and transition dynamics. Taking the corresponding value function as $V^\pi_{\mc{M},h}(s) = \mathbb{E}_{a \sim \pi_h(\cdot \mid s)}\left[Q^\pi_{\mc{M},h}(s,a)\right]$, we define the optimal policy $\pi^\star = (\pi^\star_1,\pi^\star_2,\ldots,\pi^\star_H)$ as achieving supremal value $V^\star_{\mc{M},h}(s) = \sup\limits_{\pi \in \{\mc{S} \rightarrow \Delta(\mc{A})\}^H} V^\pi_{\mc{M},h}(s)$ for all $s \in \mc{S}$, $h \in [H]$. We let $\tau_k = (s^{(k)}_1, a^{(k)}_1, r^{(k)}_1, \ldots,s^{(k)}_{H}, a^{(k)}_{H}, r^{(k)}_{H}, s^{(k)}_{H+1})$ be the random variable denoting the trajectory experienced by the agent in the $k$th episode. Meanwhile, $H_k = \{\tau_1,\tau_2,\ldots, \tau_{k-1}\} \in \mc{H}_k$ is the random variable representing the entire history of the agent's interaction within the environment at the start of the $k$th episode. 
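As a concrete companion to this formulation, the optimal value functions of a small tabular MDP can be computed by backward induction over the horizon. The sketch below is only meant to make the definitions of $Q^\star_{\mc{M},h}$ and $V^\star_{\mc{M},h}$ operational; it assumes $\mc{U}$ and $\mc{T}$ are known and stored as arrays of shapes $(|\mc{S}|, |\mc{A}|)$ and $(|\mc{S}|, |\mc{A}|, |\mc{S}|)$, respectively.
\begin{verbatim}
import numpy as np

def optimal_q_values(U, T, H):
    # Backward induction for a tabular finite-horizon MDP.
    # U: (S, A) rewards in [0, 1]; T: (S, A, S) transition probabilities.
    # Returns Q of shape (H, S, A), where Q[h - 1] plays the role of
    # Q*_{M, h} in the notation above (value beyond the horizon is zero).
    S, A = U.shape
    Q = np.zeros((H, S, A))
    V_next = np.zeros(S)
    for h in reversed(range(H)):
        Q[h] = U + T @ V_next        # Bellman backup: U + E[V*_{h+1}]
        V_next = Q[h].max(axis=1)    # V*_{h}(s) = max_a Q*_{h}(s, a)
    return Q

# V*_{M, 1} is then optimal_q_values(U, T, H)[0].max(axis=1).
\end{verbatim}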
\section{Capacity Limitation as a Policy Information Bottleneck} There is a long, rich literature exploring the natural limitations on time, knowledge, and cognitive capacity faced by human (and animal) decision makers~\citep{simon1956rational,newell1958elements,newell1972human,simon1982models,gigerenzer1996reasoning,vul2014one,griffiths2015rational,gershman2015computational,icard2015resource,lieder2020resource,bhui2021resource,brown2022humans,ho2022people}. Crucially, our focus is on a recurring theme throughout this literature of modeling these limitations on cognitive capabilities as being information-theoretic in nature~\citep{sims2003implications,peng2005learning,parush2011dopaminergic,botvinick2015reinforcement,sims2016rate,sims2018efficient,zenon2019information,ho2020efficiency,gershman2020reward,gershman2020origin,mikhael2021rational,lai2021policy,gershman2021rational,jakob2022rate,bari2022undermatching}. Broadly speaking, these approaches all center around the perspective that a policy $\pi_h: \mc{S} \rightarrow \Delta(\mc{A})$ should be modeled as a communication channel that, like a human decision-maker with limited information processing capability, is subject to a constraint on the maximal number of bits that may be sent across it. Consequently, an agent aspiring to maximize returns must do so subject to this constraint on policy complexity; conversely, an agent ought to transmit the minimum amount of information possible while it endeavors to reach a desired level of performance~\citep{polani2009information,polani2011informational,tishby2011information,rubin2012trading}. Paralleling the distortion-rate function $\mc{D}(R)$, the resulting policy-optimization objective follows as $\sup\limits_{\pi \in \{\mc{S} \rightarrow \Delta(\mc{A})\}^H} \mathbb{E}\left[Q^\pi(S, A)\right] \text{ such that } \mathbb{I}(S; A) \leq R.$ Depending on the precise work, subtle variations on this optimization problem exist from choosing a fixed state distribution for the random variable $S$~\citep{polani2009information,polani2011informational}, incorporating the state visitation distribution of the policy being optimized~\citep{still2012information,gershman2020origin,lai2021policy}, or assuming access to the generative model of the MDP and decomposing the objective across a finite state space~\citep{tishby2011information,rubin2012trading}. In all of these cases, the end empirical result tends to converge by using variants of the classic Blahut-Arimoto algorithm~\citep{blahut1972computation,arimoto1972algorithm} to solve the Lagrangian associated with the constrained optimization~\citep{boyd2004convex} and produce policies that exhibit higher entropy across states under an excessively limited rate $R$, with a gradual convergence towards the greedy optimal policy as $R$ increases. The alignment between this optimization problem and that of the distortion-rate function is slightly wrinkled by the non-stationarity of the distortion function (here, $Q^\pi$ is used as an analogue to distortion which changes as the policy or channel does) and, when using the policy visitation distribution for $S$, the non-stationarity of the information source. 
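To illustrate this constrained objective, the following sketch (ours, not drawn from any one of the cited implementations) solves the associated Lagrangian with a Blahut--Arimoto-style iteration, holding a fixed action-value table $Q$ and a fixed state distribution $\rho$, thereby side-stepping the non-stationarities discussed above. Sweeping the multiplier $\beta$ reproduces the qualitative behaviour described in the text: near-uniform policies at small rates and convergence towards the greedy policy as the rate constraint is relaxed.
\begin{verbatim}
import numpy as np

def rate_limited_policy(Q, rho, beta, n_iters=200):
    # Blahut-Arimoto-style iteration for the Lagrangian of
    #   max E[Q(S, A)]  s.t.  I(S; A) <= R,
    # with the action values Q (S x A) and the state distribution rho
    # held fixed.  Larger beta trades rate for expected value.
    n_s, n_a = Q.shape
    marginal = np.full(n_a, 1.0 / n_a)           # m(a)
    for _ in range(n_iters):
        logits = np.log(marginal)[None, :] + beta * Q
        pi = np.exp(logits - logits.max(axis=1, keepdims=True))
        pi /= pi.sum(axis=1, keepdims=True)      # pi(a | s)
        marginal = rho @ pi                      # m(a) = sum_s rho(s) pi(a|s)
    weights = rho[:, None] * pi
    safe = np.where(pi > 0, pi, 1.0)
    rate = float(np.sum(weights * np.log(safe / marginal)))   # I(S; A) in nats
    value = float(np.sum(weights * Q))                        # E[Q(S, A)]
    return pi, rate, value
\end{verbatim}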
Despite these slight, subtle mismatches with the core rate-distortion problem, the natural synergy between cognitive and computational decision making~\citep{tenenbaum2011grow,lake2017building} has led to various reinforcement-learning approaches that draw direct inspiration from this line of thinking~\citep{klyubin2005empowerment,ortega2011information,still2012information,ortega2013thermodynamics,shafieepoorfard2016rationally,tiomkin2017unified,lerch2018policy,lerch2019rate,abel2019state}, most notably including parallel connections to work on ``control as inference'' or KL-regularized reinforcement learning~\citep{todorov2007linearly,toussaint2009robot,kappen2012optimal,levine2018reinforcement,ziebart2010modeling,fox2016taming,haarnoja2017reinforcement,haarnoja2018soft,galashov2019information,tirumala2019exploiting}. Nevertheless, despite their empirical successes, such approaches lack principled mechanisms for addressing the exploration challenge~\citep{o2020making}. While exploration is quintessentially studied in the multi-armed bandit setting~\citep{lai1985asymptotically,lattimore2020bandit}, we focus our main discussion on reinforcement learning and defer consideration of bandit learning to Appendix \ref{sec:bandits}. Similar to human decision making~\citep{gershman2018deconstructing,schulz2019algorithmic,gershman2019uncertainty}, provably-efficient reinforcement-learning algorithms have historically relied upon one of two possible exploration strategies: optimism in the face of uncertainty~\citep{kearns2002near,brafman2002r,kakade2003sample,auer2009near,bartlett2009regal,strehl2009reinforcement,jaksch2010near,dann2015sample,azar2017minimax,dann2017unifying,jin2018q,zanette2019tighter,dong2022simple} or posterior sampling~\citep{osband2013more,osband2017posterior,agrawal2017optimistic,lu2019information,lu2021reinforcement}. While both paradigms have laid down solid theoretical foundations, a line of work has demonstrated how posterior-sampling methods can be more favorable both in theory and in practice~\citep{osband2013more,osband2016deep,osband2016generalization,osband2017posterior,osband2019deep,dwaracherla2020hypermodels}. In the next section, we outline how these latter posterior-sampling algorithms still lack a consideration for agents acting under limited capacity constraints and demonstrate an alternative utilization of rate-distortion theory to help account for such limitations. \section{Learning Targets for Capacity-Limited Decision Making} As is standard in Bayesian reinforcement learning~\citep{bellman1959adaptive,duff2002optimal,ghavamzadeh2015bayesian}, neither the transition function nor the reward function are known to the agent and, consequently, both are treated as random variables. An agent's initial uncertainty in the (unknown) true MDP $\mc{M}^\star = (\mc{U}^\star, \mc{T}^\star)$ is reflected by a prior distribution $\mathbb{P}(\mc{M}^\star \in \cdot \mid H_1)$. 
Since the regret is a random variable due to our uncertainty in $\mc{M}^\star$, we integrate over this randomness to arrive at the Bayesian regret over $K$ episodes: $\textsc{BayesRegret}(K, \pi^{(1:K)}) = \mathbb{E}\left[\textsc{Regret}(K, \pi^{(1)},\ldots,\pi^{(K)}, \mc{M}^\star)\right] = \mathbb{E}\left[\sum\limits_{k=1}^K \left( V^\star_{\mc{M}^\star,1}(s_1) - V^{\pi^{(k)}}_{\mc{M}^\star, 1}(s_1)\right)\right].$ In the following, we will denote the entropy and conditional entropy conditioned upon a specific realization of an agent's history $H_k$, for some episode $k \in [K]$, as $\mathbb{H}_k(X) \triangleq \mathbb{H}(X \mid H_k = H_k)$ and $\mathbb{H}_k(X \mid Y) \triangleq \mathbb{H}_k(X \mid Y, H_k = H_k)$, for two arbitrary random variables $X$ and $Y$. This notation will also apply analogously to mutual information: $\mathbb{I}_k(X;Y) \triangleq \mathbb{I}(X;Y \mid H_k = H_k) = \mathbb{H}_k(X) - \mathbb{H}_k(X \mid Y) = \mathbb{H}_k(Y) - \mathbb{H}_k(Y \mid X).$ The dependence on the realization of a random history $H_k$ makes $\mathbb{I}_k(X;Y)$ a random variable and the usual conditional mutual information arises by integrating over this randomness: $\mathbb{E}\left[\mathbb{I}_k(X;Y)\right] = \mathbb{I}(X;Y \mid H_k).$ Additionally, we will also adopt a similar notation to express a conditional expectation given the random history $H_k$: $\mathbb{E}_k\left[X\right] \triangleq \mathbb{E}\left[X|H_k\right].$ A natural starting point for addressing the exploration challenge in a principled manner is via Thompson sampling~\citep{thompson1933likelihood,russo2018tutorial}. The Posterior Sampling for Reinforcement Learning (PSRL)~\citep{strens2000bayesian,osband2013more,osband2014model,abbasi2014bayesian,agrawal2017optimistic,osband2017posterior,lu2019information} algorithm does this by, in each episode $k \in [K]$, sampling a candidate MDP $\mc{M}_k \sim \mathbb{P}(\mc{M}^\star \in \cdot \mid H_k)$ and executing its optimal policy in the environment $\pi^{(k)} = \pi^\star_{\mc{M}_k}$; notably, such posterior sampling guarantees the hallmark probability-matching principle of Thompson sampling: $\mathbb{P}(\mc{M}_k = M \mid H_k) = \mathbb{P}(\mc{M}^\star = M \mid H_k)$, $\forall M \in \mathfrak{M}, k \in [K]$. The resulting trajectory $\tau_k$ leads to a new history $H_{k+1} = H_k \cup \tau_k$ and an updated posterior over the true MDP $\mathbb{P}(\mc{M}^\star \in \cdot \mid H_{k+1})$. We recognize that, for complex environments, pursuit of the exact MDP $\mc{M}^\star$ may be an entirely infeasible goal. A MDP representing control of a real-world, physical system, for example, suggests that learning the associated transition function requires the agent internalize laws of physics and motion with near-perfect accuracy. More formally, identifying $\mc{M}^\star$ demands the agent obtain exactly $\mathbb{H}_1(\mc{M}^\star)$ bits of information from the environment which, under an uninformative prior, may either be prohibitively large by far exceeding the agent's capacity constraints or be simply impractical under time and resource constraints~\citep{lu2021reinforcement}. 
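Before turning to the satisficing alternative, the PSRL loop described above can be sketched for the tabular case as follows. The environment interface (\texttt{env\_reset}, \texttt{env\_step}), the known-rewards assumption, and the independent per-state-action Dirichlet posterior are simplifying choices of this illustration rather than requirements of the algorithm.
\begin{verbatim}
import numpy as np

def psrl(env_reset, env_step, U, S, A, H, K, prior=1.0, seed=0):
    # Tabular PSRL sketch: rewards U (S x A) are assumed known, and the
    # agent keeps an independent Dirichlet(prior) posterior over the
    # transition distribution of every state-action pair.
    # env_reset() -> s_1 and env_step(s, a) -> s_next stand in for the
    # unknown true MDP M*.
    rng = np.random.default_rng(seed)
    counts = np.full((S, A, S), prior)            # Dirichlet parameters
    for _ in range(K):
        # 1. Probability matching: sample one MDP M_k ~ P(M* | H_k).
        T_k = np.array([[rng.dirichlet(counts[s, a]) for a in range(A)]
                        for s in range(S)])
        # 2. Plan in the sampled MDP by backward induction.
        Q, V = np.zeros((H, S, A)), np.zeros(S)
        for h in reversed(range(H)):
            Q[h] = U + T_k @ V
            V = Q[h].max(axis=1)
        # 3. Execute the sampled MDP's optimal policy; update the posterior.
        s = env_reset()
        for h in range(H):
            a = int(np.argmax(Q[h, s]))
            s_next = env_step(s, a)
            counts[s, a, s_next] += 1.0
            s = s_next
    return counts
\end{verbatim}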
Consequently, an agent must embrace a satisficing solution and \citet{arumugam2022between,arumugam2022deciding} employ the following rate-distortion function to identify a suitable lossy compression $\widetilde{\mc{M}} \in \mathfrak{M}$ of the underlying MDP, whose information an agent may prioritize as an alternative learning target to $\mc{M}^\star$: $\mc{R}_k(D) = \inf\limits_{\widetilde{\mc{M}} \in \mathfrak{M}} \mathbb{I}_k(\mc{M}^\star; \widetilde{\mc{M}}) \text{ such that } \mathbb{E}_k[d(\mc{M}^\star, \widetilde{\mc{M}})] \leq D.$ Here, the rate-distortion function is indexed by each episode $k \in [K]$ as the agent takes $\mathbb{P}(\mc{M}^\star \in \cdot \mid H_k)$ for the information source to be compressed, allowing for incremental refinement of the learning target as knowledge of the environment accumulates and is transmitted to the ``next'' agent~\citep{tomasello1993cultural,tomasello1999cultural}. By definition, the $\widetilde{\mc{M}}$ that achieves this rate-distortion limit will demand that the agent acquire fewer bits of information than is needed to identify $\mc{M}^\star$. Since the rate-distortion function is a non-negative, convex, and non-increasing function of its argument~\citep{cover2012elements}, the preceding claim is guaranteed for all $k \in [K]$ and any $D > 0$: $\mc{R}_k(D) \leq \mc{R}_k(0) \leq \mathbb{I}_k(\mc{M}^\star; \mc{M}^\star) = \mathbb{H}_k(\mc{M}^\star)$. \citet{arumugam2022deciding} study two distinct choices of distortion function to assess the loss of fidelity incurred by learning a compressed MDP over the original; the first of these provides an information-theoretic account of recent successes in deep model-based reinforcement learning~\citep{silver2017predictron,farahmand2017value,oh2017value,asadi2018lipschitz,farahmand2018iterative,d2020gradient,abachi2020policy,cui2020control,ayoub2020model,schrittwieser2020mastering,nair2020goal,nikishin2022control,voelcker2022value} through the value-equivalence principle~\citep{grimm2020value,grimm2021proper}, where agents deliberately forego learning the true model of the environment in exchange for some approximate surrogate, which discards irrelevant environment features and only models dynamics of the world critical to agent performance. In the interest of space, we focus on the second distortion function, which has a simpler form: $$d_{Q^\star}(\mc{M}, \widehat{\mc{M}}) = \sup\limits_{h \in [H]} ||Q^\star_{\mc{M},h} - Q^\star_{\widehat{\mc{M}},h}||_\infty^2 = \sup\limits_{h \in [H]} \sup\limits_{(s,a) \in \mc{S} \times \mc{A}} | Q^\star_{\mc{M},h}(s,a) - Q^\star_{\widehat{\mc{M}},h}(s,a)|^2.$$ Crucially, \citet{arumugam2022deciding} establish an information-theoretic Bayesian regret bound for a posterior-sampling algorithm that performs probability matching with respect to $\widetilde{\mc{M}}$ instead of $\mc{M}^\star$: $\textsc{BayesRegret}(K, \pi^{(1:K)}) \leq \sqrt{\overline{\Gamma}K\mc{R}^{Q^\star}_1(D)} + 2K(H+1)\sqrt{D}.$ Such an algorithm, by virtue of probability matching, explicitly links an agent's exploration strategy not only to its epistemic uncertainty but also to that $\widetilde{\mc{M}}$ which it aspires to learn~\citep{cook2011science}.
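To make the second distortion function concrete, the following small sketch evaluates $d_{Q^\star}$ for two tabular finite-horizon MDPs sharing the same state and action spaces, reusing backward induction for the optimal action values; it merely restates the definition above in code and is not taken from the cited work.
\begin{verbatim}
import numpy as np

def q_star(U, T, H):
    # Optimal action values by backward induction (as sketched earlier).
    Q, V = np.zeros((H,) + U.shape), np.zeros(U.shape[0])
    for h in reversed(range(H)):
        Q[h] = U + T @ V
        V = Q[h].max(axis=1)
    return Q

def d_q_star(mdp, mdp_hat, H):
    # d_{Q*}(M, M_hat): squared sup-norm gap between the optimal
    # action-value functions of two tabular MDPs (U, T) and (U_hat, T_hat).
    (U, T), (U_hat, T_hat) = mdp, mdp_hat
    gap = np.abs(q_star(U, T, H) - q_star(U_hat, T_hat, H))
    return float(np.max(gap) ** 2)
\end{verbatim}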
The bound communicates that an agent with limited capacity must tolerate a higher distortion threshold $D$ and pursue the resulting compressed MDP that bears less fidelity to the original MDP; in exchange, the resulting number of bits needed from the environment to identify such a simplified model of the world is given as $\mc{R}_1^{Q^\star}(D)$ and guaranteed to be less than the entropy of $\mc{M}^\star$\footnote{Here $\overline{\Gamma} < \infty$ is an (assumed) uniform upper bound to the information ratio~\citep{russo2016information,russo2014learning,russo2018learning} that emerges as an artifact of the analysis.}. Additionally, one can express a near-identical result through the associated distortion-rate function that explicitly takes an agent's capacity of only being able to acquire $R \in \mathbb{R}_{\geq 0}$ bits into account: $\textsc{BayesRegret}(K, \pi^{(1:K)}) \leq \sqrt{\overline{\Gamma}KR} + 2K(H+1)\sqrt{\mc{D}^{Q^\star}_1(R)}.$ \section{Conclusion} A number of recent proposals have approached resource-rationality by combining tools from rate-distortion theory with those of sequential decision-making. Here, we have reviewed a parallel line of work that uses related ideas but within the framework of satisficing and Bayesian reinforcement learning. The distinctive feature of this approach is that it precisely characterizes how capacity-limited learners optimally balance \emph{model complexity} and \emph{value distortion} in a manner that gives rise to \emph{learning targets} which reflect rational satisficing--that is, intelligently choosing how sub-optimal to be based on information acquired during learning. A key challenge for future work will be to translate insights about provably-efficient learning algorithms from this literature into plausible models of human decision-making. More broadly, by modulating the exploration of a satisficing agent to identify a fundamentally different learning target than that of an unconstrained agent, these analyses can provide a framework for understanding how information-theoretic considerations not only shape the products of learning (such as perception, memory, and decisions), but also the process of learning itself. \bibliographystyle{plainnat}
1,108,101,563,048
arxiv
\subsection{Building Defenses: Website Fingerprinting} \label{sec:eval-wfp} In the previous section, we considered a setting in which the adversary's knowledge is white-box. This assumption, however, does not always hold in practice. When the access to the ML model is black-box, the adversary cannot use the admissible heuristic to efficiently obtain minimal-cost adversarial examples using A$^*$\xspace. We show that even in this setting the graphical framework is a useful tool for finding adversarial examples in constrained domains. In this section, we also change the adversarial perspective. We consider a scenario in which the ML model is at the core of a privacy-invasive system and the entity deploying adversarial examples is a target of this system. Therefore, the model becomes the adversary, and adversarial examples become defenses. Concretely, we take the case of \newterm{website fingerprinting} (WF), an attack in which a network adversary attempts to infer which website a user is visiting by only looking at encrypted network traffic~\cite{BackMS01, LiberatoreL06}, often using machine learning~\cite{PanchenkoNZE11, WangCNJG14, HayesD16, SirinamIJW18, RimmerPJGJ18}. This attack is mostly considered a threat to users of anonymous communication networks such as Tor~\cite{DingledineMS04}. However, as the encrypted SNI proposal~\cite{ietf-tls-esni} becomes standardized within TLS~1.3~\cite{RFC8446}---hiding the destination of encrypted HTTPS traffic from network observers---this attack becomes a privacy threat to all Internet users. To counter the attack, existing ad-hoc defenses transform the traffic to reduce the accuracy of the ML classifier~\cite{PanchenkoNZE11, DyerCRS12, CaiNWJG14, CaiNJ14, JuarezIPDW17}. We first show how the graphical framework can be used as a systematic tool for designing traffic modifications that defend users against a WF adversary. We then show how the defenses produced by our method can be used as a baseline to evaluate both the effectiveness (ability to fool a classifier) and efficiency (incurred overhead) of existing defenses. \subsubsection{Website-Fingerprinting Attack} We consider a WF adversary that takes as input a network \newterm{trace}, i.e., a sequence of incoming (from server to client) and outgoing (from client to server) encrypted packets, and outputs a binary guess of whether the user is visiting a website that is in the \newterm{monitored set} or not. For instance, this could be a censorship adversary that wants to know if the visited website is in a list of censored websites in order to stop the connection. We simulate a WF adversary that uses the classifier by Panchenko et al.~\cite{PanchenkoNZE11}, an SVM with an RBF kernel trained on \newterm{CUMUL features}. For a given trace, a CUMUL feature vector contains the total incoming and outgoing packet counts, and 100 interpolated cumulative packet counts. We refer the reader to the original paper for the details on computing the vector. \descr{Dataset} We use the dataset of Tor network traces collected by \citet{WangCNJG14}. This dataset contains 18,004 traces, half of them coming from a simulated monitored set compiled from 90 websites censored in China, the UK, and Saudi Arabia, and half coming from a non-monitored set of 5,000 other popular websites. The average trace length is 2,155. We randomly split the dataset into 90\% training and 10\% testing subsets of 15,397 and 1,711 traces, respectively. We use these splits to train and test the target WF classifier. 
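For illustration, a simplified CUMUL-style feature extraction for a trace of packet directions ($+1$ outgoing, $-1$ incoming) can be sketched as below. This sketch follows only the description given above (per-direction packet counts plus interpolated cumulative counts); the original feature set is richer, so this is an approximation rather than the authors' exact pipeline.
\begin{verbatim}
import numpy as np

def cumul_features(trace, n_interp=100):
    # Simplified CUMUL-style features: the per-direction packet counts
    # plus n_interp points sampled from the interpolated cumulative sum
    # of packet directions (+1 outgoing, -1 incoming).
    trace = np.asarray(trace, dtype=float)
    n_out = float(np.sum(trace > 0))
    n_in = float(np.sum(trace < 0))
    cum = np.cumsum(trace)
    positions = np.linspace(0, len(cum) - 1, num=n_interp)
    interpolated = np.interp(positions, np.arange(len(cum)), cum)
    return np.concatenate(([n_in, n_out], interpolated))

# Example: a short trace with three outgoing and two incoming packets.
x = cumul_features([+1, -1, +1, +1, -1], n_interp=10)
\end{verbatim}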
To keep the running time of our experiments reasonable, we find adversarial examples only for traces with fewer than 2000 packets that are classified as being in the monitored set. There are 577 such traces, with an average length of 1750 packets. The CUMUL classifier performs remarkably well on our test dataset, with an accuracy of 97.8\% (the random baseline is 50\%). Thus, we consider it a good example to illustrate the potential of our framework. \subsubsection{Building Defenses} \descr{The Defender's Goals} The goal of the defender is to modify monitored traces such that they are misclassified as non-monitored by the WF classifier. These modifications can be of two kinds: adding dummy packets and adding delay. Removing packets is not possible without affecting the content of the page, and the delay is dependent on the network and cannot be decreased from an endpoint. Previous work (e.g., \citet{PanchenkoNZE11}) has noted that perturbing timing information is not as important to classification as the volume of packets. Hence, even though adding delay is possible within the graphical framework, we consider that our defense \emph{only adds dummy packets}. As in previous works, we assume that dummy packets are filtered by the client and the server and do not affect the application layer. Adding dummy packets has a cost in terms of bandwidth and delay (routers have to process more packets). Therefore, the defender wants to introduce as few packets as possible. In terms of confidence level $l$, we consider only the case where the defender wants to flip the classifier decision ($l = d = 0.5$). Finding higher-confidence adversarial examples is possible at the cost of running a longer search. \descr{The Defender's Capabilities: Transformation Graph and Cost} For a given trace we define the following transformations: add one dummy outgoing packet, or one dummy incoming packet, between any two existing packets in the trace. This means that each node in the transformation graph is a copy of its parent trace with one added dummy packet. We assign every transformation a cost equal to one, thus representing the added packet. Path costs in this graph are equal to the number of added packets. \descr{Heuristic and Search Algorithms} Recall that CUMUL feature processing includes an interpolation step. Because of this interpolation, the costs in our transformation graph cannot be trivially expressed as norm-induced distances between the feature vectors. Hence, we cannot use the admissible heuristic from \Eqref{eq:heuristic-threshold}, or its approximated version. Instead, we use the following \newterm{confidence-based heuristic}, similar to the heuristics used in other attacks in discrete domains (e.g., by \citet{GaoLSQ18}): \[h_t(x) = \begin{cases} +f(x), & t = 0 \\ -f(x), & t = 1 \\ -\infty, & F(x) = t \\ \end{cases} \] The value of this heuristic becomes lower as the confidence of the WF classifier for the target class (non-monitored in our case) becomes higher. Note that this heuristic \emph{does not} require any knowledge of the classifier, only the ability to query $f(x)$. As we cannot use optimal algorithms in this case, we implement the hill-climbing variation of greedy best-first search (with $\mathsf{score}(x) = h(x)$; see \Secref{sec:bfs}) for its speed. We also limit the number of iterations of the algorithm to 5,000 in order to keep down the runtime of our experiments.
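A minimal sketch of this hill-climbing loop is shown below. It treats the classifier as a black box through a hypothetical \texttt{p\_monitored(trace)} query (standing in for $\sigma(f(x))$) and, to keep the illustration cheap, scores only a random subset of the possible insertion positions at each step instead of all children of a node.
\begin{verbatim}
import numpy as np

def hill_climb_defense(trace, p_monitored, max_iters=5000,
                       candidates_per_step=64, seed=0):
    # Hill-climbing search over dummy-packet insertions, guided by the
    # classifier's confidence for the monitored class.  The search stops
    # once the confidence drops to 0.5 or below (l = d = 0.5).
    rng = np.random.default_rng(seed)
    trace, added = list(trace), 0
    for _ in range(max_iters):
        if p_monitored(trace) <= 0.5:
            return trace, added                  # adversarial trace found
        best_child, best_score = None, np.inf
        for _ in range(candidates_per_step):
            pos = int(rng.integers(0, len(trace) + 1))
            pkt = int(rng.choice([-1, +1]))      # dummy incoming / outgoing
            child = trace[:pos] + [pkt] + trace[pos:]
            score = p_monitored(child)           # confidence-based heuristic
            if score < best_score:
                best_child, best_score = child, score
        trace, added = best_child, added + 1     # keep only the best child
    return None, added                           # iteration budget exhausted
\end{verbatim}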
Our results show that this iteration budget is sufficient to find adversarial examples in 100\% of the cases. We also run a random search, i.e., we follow a random path in the graph until an adversarial example is found, to obtain a baseline in terms of cost and runtime. We run this algorithm three times for each trace with different random seeds. \descr{Results} Hill climbing with the confidence-based heuristic finds adversarial traces in 100\% of the cases, with an average time to find an adversarial example of 0.8 seconds. Random search succeeds in slightly less than 100\% of the cases, with an average time of about 0.3 seconds. As we discuss below, the results differ in the overhead required to find an adversarial example. \subsubsection{Comparing Defenses} \begin{figure*}[t] \centering \includegraphics[width=0.402\textwidth]{images/wfp__defs_packets_comparison.pdf} \includegraphics[width=0.28\textwidth,trim={6.5cm 0 0 0},clip]{images/wfp__defs_overhead_comparison.pdf} \includegraphics[width=0.28\textwidth,trim={6.5cm 0 0 0},clip]{images/wfp__defs_success_comparison.pdf} \caption{Left: number of added packets by WF defenses and adversarial examples (x-axis is logarithmic). Center: overhead of WF defenses compared to the number of packets added by adversarial examples found with hill climbing (x-axis is bi-symmetrically logarithmic). Right: success rates of WF defenses and adversarial examples against the SVM-RBF classifier.} \label{fig:wfp-defenses} \end{figure*} Here, we use minimal-cost adversarial examples for website fingerprinting to evaluate existing ad-hoc WF defenses: Decoy pages~\cite{PanchenkoNZE11}, BuFLO~\cite{DyerCRS12}, CS BuFLO~\cite{CaiNJ14}, and adaptive padding (WTF-PAD)~\cite{JuarezIPDW17}. We use the implementations of Decoy pages and BuFLO by Wang,\footnote{\url{http://home.cse.ust.hk/~taow/wf/}} CS BuFLO by \citet{Cherubin17},\footnote{\url{https://github.com/gchers/wfes}} and WTF-PAD by \citet{JuarezIPDW17}.\footnote{\url{https://github.com/wtfpad/wtfpad}} Concretely, we measure (1) the overhead in terms of the number of dummy packets added, and (2) the success rates of the defenses, i.e., the percentage of traces for which they successfully evade the classifier. We evaluate the efficiency by measuring the raw overhead in terms of added packets (see \Figref{fig:wfp-defenses}, left). The existing defenses add up to 3000 dummy packets. Unsurprisingly, BuFLO, which is deliberately inefficient, is the most expensive defense. On the contrary, adversarial examples add on average 12 dummy packets, and at most 52. In terms of relative overhead, i.e., how many more packets the defenses add compared to the adversarial examples found using hill climbing, all defenses and random search require significantly more bandwidth (see \Figref{fig:wfp-defenses}, center). In five cases, CS BuFLO and WTF-PAD add fewer packets, but in those cases the defenses do not succeed in evading the classifier. We then analyze the defenses' success rates in the light of the overhead they impose. The graph search yields a 100\% success rate, whereas the existing defenses (aside from BuFLO) only succeed in 70\%--80\% of the cases (\Figref{fig:wfp-defenses}, right). Also, as can be seen in \Figref{fig:wfp-defenses} (center), increasing the number of packets is not a guarantee of success. We see that for all defenses but BuFLO some cases fail even with hundreds of overhead packets.
A closer analysis shows that CS BuFLO and WTF-PAD often fail to defend shorter traces (under 1000 packets), whereas they can successfully defend the longer ones (see \Figref{fig:wfp-defenses-delta} in \Appref{app:figures} for an illustration). This hints that the ad-hoc defenses use the dummies in an inefficient way, and that there is significant room for improvement. The adversarial examples found with hill climbing provide a tight upper bound on the minimal cost of any successful defense. Hence, we hope that our graphical framework can serve as a baseline to evaluate the efficiency of future defenses, and guide the design of effective website-fingerprinting countermeasures. \subsubsection{Applicability Discussion} In this section, we apply the graphical framework to a setting with no white-box knowledge, against a non-linear classifier, using non-optimal algorithms, and show that the framework is still useful when none of the assumptions from \Secref{sec:eval-bots} hold. We do not evaluate the transferability of the obtained adversarial examples to other classifiers. The main reason is that the state-of-the-art WF classifiers are based on deep learning \cite{RimmerPJGJ18, SirinamIJW18}, and thus require datasets larger than the one we use in our comparison. Although we leave the transferability evaluation for future work, we expect that the results would be qualitatively similar to those in the Twitter-bot case (see \Secref{sec:bots-discussion}). \section{Experimental Evaluation} \label{sec:eval} We evaluate the graph search approach for finding adversarial examples as \emph{means to evaluate the security of an ML model} by computing its robustness against adversarial examples, and as \emph{means to build efficient defenses against privacy-invasive ML models}. We use two ML applications that work with constrained discrete domains: a bot detector in a white-box setting, where it is possible to use A$^*$\xspace with admissible heuristics to obtain provably minimal-cost adversarial examples, and a traffic-analysis ML classifier in a black-box setting, where we obtain upper bounds on the minimal adversarial cost. \descr{Implementation} We use scikit-learn~\cite{scikit-learn} for training and evaluation of ML models, and Jupyter notebooks~\cite{jupyter} for visualizations. The reported runtimes come from executions on a machine with an Intel i7-7700 CPU working at 3.60GHz. The code to run the attacks is available as a Python package; all experiments are reproducible\footnote{[Link to the code anonymized]}. \nocite{gnu-parallel} \input{parts/eval-bots.tex} \input{parts/eval-wfp.tex} \section{Related Work} \label{sec:related} We overview existing attacks in discrete domains and highlight their differences with respect to our work. \descr{Discretized Image Domain} \citeauthor{PapernotMJFCS16} propose the Jacobian saliency map approach (JSMA)~\cite{PapernotMJFCS16} to find adversarial images. JSMA is a greedy white-box attack that transforms images by increasing or decreasing pixel intensity to maximize \newterm{saliency}, which is computed using the forward gradient of the target model. This attack is a basis for attacks in other discrete domains~\cite{GrossePM0M16,JiaG18}, as we discuss below. \descr{Text Domain} Multiple works study evasion attacks against text classifiers~\cite{DalviDMSV04,LowdM05,PapernotMSH16, LiangSBLS18, HosseiniKZP17, GaoLSQ18, EbrahimiRLD18, AlzantotSEHSC18}.
Recent attacks can be divided into three groups: those employing a hill-climbing algorithm over the set of possible transformations of an initial piece of text~\cite{PapernotMSH16, LiangSBLS18, HosseiniKZP17, GaoLSQ18}; those that greedily optimize the forward gradient-based heuristic but run beam search~\cite{EbrahimiRLD18}; and those that use an evolutionary algorithm~\cite{AlzantotSEHSC18}. \descr{Malware Domain} Several works explore evasion attacks for malware, either adapting JSMA~\cite{GrossePM0M16}, applying forward gradient-based heuristics~\cite{KolosnjajiDBMGER18}, using a black-box hill-climbing algorithm over a set of feasible transformations~\cite{DangHC17}, or using a black-box evolutionary algorithm~\cite{XuQE16}. \descr{Protecting Users} Finally, some works use adversarial examples as means to protect users. \citeauthor{JiaG18}~adapt JSMA to modify user-item relationship vectors in the context of recommendation systems. \citeauthor{OverdorfKBTG18}~use exhaustive search to find adversarial examples that counter anti-social effects of machine learning. \subsection{Comparison to Our Work} \label{sec:related-instantiations} Our framework can be seen as a generalization of most of the attacks mentioned previously. Moreover, it can encode arbitrary adversarial costs, and can be configured to output minimal-cost adversarial examples using A$^*$\xspace search. We note that in parallel to our work, \citeauthor{WuWRHK18} also used A$^*$\xspace and randomized tree search to obtain bounds on robustness of neural networks~\cite{WuWRHK18}. Their work, however, only considers the setting of image recognition. \citet{DalviDMSV04} have used integer linear programming to find approximate minimal-cost adversarial examples against a Na\"ive Bayes classifier. Our graphical framework enables us to consider arbitrary transformations and cost models and to find minimal-cost adversarial examples for any non-linear classifier for which adversarial robustness in a continuous domain can be computed. Except for the method by \citeauthor{DalviDMSV04} and the attacks based on evolutionary algorithms, all of the attacks above can be instantiated in our graphical framework (see \Tabref{tab:comparison} in \Appref{app:figures} for a summary of such instantiations). The attacks implicitly define a transformation graph by specifying a set of domain-specific transformations (e.g., word insertions for text) that define the graph. The cost of transformations can be equal to the number of such transformations (e.g., \cite{PapernotMSH16, LiangSBLS18}), or, equivalently, to the $\lp$ distance between feature vectors interpreted as the number of transformations (e.g., \cite{GrossePM0M16, JiaG18}). The attacks can be seen as special cases of running the BF$^*$ search algorithm (see \Secref{sec:bfs}) over a transformation graph. They differ in adversarial knowledge assumptions (white-box or black-box), transformation graphs, adversarial cost models, and the choice of the scoring function and priority-queue capacity that defines the instantiation of BF$^*$. Most of the attacks~(e.g., \cite{PapernotMSH16, LiangSBLS18, GrossePM0M16, JiaG18}) run a hill-climbing search over the transformation graph. They maximize a heuristic either based on the forward gradient of the model (in the white-box setting where the adversary can compute the gradient), or on the confidence (in the black-box setting where the adversary can only query the model). 
\citet{EbrahimiRLD18} use beam search instead of hill climbing, \citet{textfool} uses an instance of backtracking best-first search, and \citet{OverdorfKBTG18} use an exhaustive search over the space of feasible transformations, equivalent to UCS. \section{Improving the Heuristic in the Optimal Instantiation} In the previous section we presented a setting and a heuristic which allow us to find minimal-cost adversarial examples using A$^*$\xspace search. We now show how to construct heuristics that improve on the heuristic obtained from the exact value of adversarial robustness $\dist(x)$ by incorporating knowledge about the transformation graph. \subsection{Feature Space Constraints for Linear Models} In practice, only a part of the full feature vector may be transformable. In our running example, imagine that the adversary could not change the \emph{days since the account was created} feature due to practical constraints. In such a case, the heuristic value can be improved by taking these constraints into account. We show how to do this for linear models, i.e., with $\phi({\bm{x}}) = {\bm{x}}$. Let $\proj{J}({\bm{x}})$ denote a projection of a vector ${\bm{x}} \in {\mathbb{X}} \subset {\mathbb{R}}^m$: \[\proj{J}({\bm{x}}) = [x_j]_{j \in J}\] for a set of indices $J \subseteq \{1, 2, \ldots, m\}$ with $|J| = k < m$. The set $J$ represents the set of feature indices that can be transformed in the transformation graph ${\mathcal{G}}$. Then, we can re-define the original discriminant function $f: {\mathbb{R}}^m \rightarrow {\mathbb{R}}$ in terms of the projected inputs as follows: \[ f^\downarrow({\bm{x}}) = \underbrace{(\proj{J}({\bm{w}})) \cdot (\proj{J}({\bm{x}}))}_{{\bm{w}}^\downarrow \cdot \proj{J}({\bm{x}})} + \underbrace{(\proj{\bar J}({\bm{w}})) \cdot (\proj{\bar J}({\bm{x}})) + b}_{b^\downarrow} \] Since only the features in $J$ vary across the transformations in the graph ${\mathcal{G}} = (V, E, \omega)$, $f^\downarrow$ is an equivalent re-parametrization: \[ f^\downarrow({\bm{x}}) = f({\bm{x}}) \quad \text{for all ${\bm{x}} \in V$} \] Note that the values ${\bm{w}}^\downarrow$, $b^\downarrow$ are fixed for all transformations. Treating $f^\downarrow({\bm{x}})$ as the target function, we can take the constraints into account: \begin{equation} \dist^\downarrow({\bm{x}}) = \frac{|f^\downarrow({\bm{x}})|}{\norm{{\bm{w}}^\downarrow}_q} \end{equation} One can obtain an equivalent result by taking a partial gradient $\nabla_{\proj{J} {\bm{x}}}f({\bm{x}})$ in the Taylor expansion from \Eqref{eq:dist-nonlinear-decision-boundary}. \subsection{Incorporating Knowledge About Costs} We now show two simple improvements of the basic heuristic that take the cost distribution into account. \descr{Minimal edge cost} Let the minimal possible cost of any edge in graph ${\mathcal{G}}$ be $\theta$. Then, we can define $\dist^\textsf{min}(x)$ as follows: \[\dist^\textsf{min}(x) = \max(\theta, \dist(x))\] \begin{statement} Let the transformation graph ${\mathcal{G}} = (V, E, \omega)$ have $\omega(a, b) = \lp(a, b) \geq \theta$, and the initial example be $x \in V$. Then the heuristic defined by $\dist^\textsf{min}(x)$ is an admissible heuristic for the graph search problem from \Eqref{eq:mincost-graph-problem}. \end{statement} \begin{IEEEproof} By \Stmtref{thm:admissibility} and \Defref{def:admissibility}. \end{IEEEproof} \descr{Constant edge cost} In an extension of the previous case, assume that the cost of \emph{all} edges is constant and is equal to $\sigma$.
We can define $\dist^\textsf{grid}(x)$ by rounding the value of $\dist(x)$ up to the next multiple of $\sigma$: \[\dist^\textsf{grid}(x) = \sigma \ceil{\frac{\dist(x)}{\sigma}}\] \begin{statement} Let the transformation graph ${\mathcal{G}} = (V, E, \omega)$ have $\omega(a, b) = \lp(a, b) = \sigma$ for all $a, b \in V$, and the initial example be $x \in V$. Then the heuristic defined by $\dist^\textsf{grid}(x)$ is an admissible heuristic for the graph search problem from \Eqref{eq:mincost-graph-problem}. \end{statement} \begin{IEEEproof} Every edge in ${\mathcal{G}}$ has cost $\sigma$, so the cost of any path, and in particular $C_{{\mathcal{G}}}(x, g)$ for any goal node $g$, is an integer multiple of $\sigma$. By the argument of \Stmtref{thm:admissibility}, $\dist(x) \leq C_{{\mathcal{G}}}(x, g)$. The smallest multiple of $\sigma$ that is not smaller than $\dist(x)$ is $\sigma \ceil{\frac{\dist(x)}{\sigma}}$; hence $\dist^\textsf{grid}(x) \leq C_{{\mathcal{G}}}(x, g)$, and admissibility follows by \Defref{def:admissibility}. \end{IEEEproof} We evaluate this heuristic in \Secref{sec:eval-bots}. \section{A Graph Search Approach to Evasion}\label{sec:model} After some preliminaries, we introduce our graphical framework for designing evasion attacks. \subsection{Preliminaries}\label{sec:model:prelims} Throughout the paper we denote vectors in ${\mathbb{R}}^m$ using bold face: ${\bm{x}}$. We denote by ${\bm{a}} \cdot {\bm{b}}$ a dot product between two vectors. \subsubsection{Binary Classifiers} In this work, we focus on binary \newterm{classifiers} $F: {\mathbb{X}} \rightarrow \{0, 1\}$ that produce a decision in $\{0, 1\}$ by thresholding a \newterm{discriminant function} $f(x)$: \[ F(x) = \begin{cases} 1, & f(x) > \theta \\ 0, & \text{otherwise} \end{cases} \] where $\theta \in {\mathbb{R}}$ is a decision threshold. The discriminant function $f(x) = {\bm{w}} \cdot \phi(x) + b$ is a composition of a possibly non-linear \newterm{feature mapping} $\phi: {\mathbb{X}} \rightarrow {\mathbb{R}}^m$ from some input space ${\mathbb{X}}$ to a feature space ${\mathbb{R}}^m$, and a linear function ${\bm{w}} \cdot {\bm{z}} + b$ with ${\bm{z}} = \phi(x)$. This encompasses several families of models in machine learning, including logistic regression, SVM, and neural network-based classifiers. In the rest of this paper we use the terms \emph{classifier} and \emph{model} interchangeably. Often, the decision threshold is defined through a \newterm{confidence} value $d \in [0, 1]$ such that $d = \sigma(\theta)$, that is, $\theta = \sigma^{-1}(d)$, where $\sigma: {\mathbb{R}} \rightarrow [0, 1]$ is a sigmoid function: \[\sigma(y) = \frac{1}{1 + e^{-y}}\] Binary classifiers are often employed in security settings for detecting security violations. Standard examples include spam, fraud, bot, and network-intrusion detection. \subsubsection{Graph Search} \label{sec:bfs} Let ${\mathcal{G}} = (V, E, \omega)$ be a directed weighted graph, where $V$ is a set of nodes, $E$ is a set of edges, and $\omega: E \rightarrow {\mathbb{R}}^+$ associates each edge with a weight, or \newterm{edge cost}. For a given path $x_1 \rightarrow x_2 \rightarrow \ldots \rightarrow x_n$ in the graph, we define the \newterm{path cost} as the sum of its edges' costs: \[W(x_1 \rightarrow x_2 \rightarrow \ldots \rightarrow x_n) = \sum_{i=1}^{n-1} \omega(x_i, x_{i + 1}).\] Let $\mathsf{goal}\xspace: V \rightarrow \{\top, \bot\}$ be a \newterm{goal predicate}. For a given starting node $s \in V$, a \newterm{graph search algorithm} aims to find a node $g \in V$ that satisfies $\mathsf{goal}\xspace(g) = \top$ such that the cost of reaching $g$ from $s$ is minimal: \[ g = \arg \min_{v \in V} C_{{\mathcal{G}}}(s, v) \text{ s.t. } \mathsf{goal}\xspace(v) = \top, \] where $C_{{\mathcal{G}}}(s, v)$ is defined as the minimal path cost over all paths in graph ${\mathcal{G}}$ from $s$ to $v$: \begin{equation}\label{eq:graph-cost} \begin{aligned} & C_{{\mathcal{G}}}(s, v) = \min\limits_{v_i \in V} W(s \rightarrow v_1 \rightarrow \ldots \rightarrow v_{n-1} \rightarrow v) \\ \text{ s.t. } & (s, v_1), (v_{n-1}, v), (v_i, v_{i+1}) \in E \quad \forall{i \in \overline{1, n-1}} \end{aligned} \end{equation} We call a global minimizer $g$ an \newterm{optimal}, or \newterm{admissible}, solution to the graph search problem. In Algorithm~\ref{alg:bfs}, we show the pseudocode of the BF$^*$ graph-search algorithm~\cite{DechterP85}. Some common graph-search algorithms are specializations of BF$^*$: uniform-cost search (UCS)~\cite{HartNR68}, greedy best-first search~\cite{DoranMichie66}, A$^*$\xspace and some of its variants~\cite{HartNR68, Pohl70}. They differ in their instantiation of the scoring function used to select the best nodes at each step of the algorithm. Additionally, by limiting the number of items in the data structure holding candidate nodes, beam-search and hill-climbing variations of the above algorithms can be obtained~\cite{RichKnight91}. We summarize these differences in~\Tabref{tab:search-algos}. \input{parts/extras/bfs.tex} \begin{table}[t] \centering \caption{Specializations of BF$^*$. $h: V \rightarrow {\mathbb{R}}$ is a heuristic function that estimates the path cost to reach a goal node.}\label{tab:search-algos} \resizebox{\columnwidth}{!}{ \begin{tabular}{ll} \toprule \textbf{Algorithm} & \textbf{$\mathsf{score}\xspace$} \\ \midrule Greedy best-first~\cite{DoranMichie66} & $h(v')$ \\ Uniform-cost~\cite{HartNR68} & $\omega(v, v')$ \\ A$^*$\xspace~\cite{HartNR68} & $\omega(v, v') + h(v')$ \\ $\varepsilon$-weighted A$^*$\xspace~\cite{Pohl70} & $\omega(v, v') + \varepsilon h(v')$ \\ \bottomrule \\ \toprule \textbf{Algorithm} & \textbf{$\mathsf{pqueue}\xspace$} \\ \midrule Hill climbing & Limited to one best-scoring item \\ Beam search~\cite{RichKnight91} & Limited to $B$ best-scoring items \\ \bottomrule \end{tabular} } \end{table} \subsection{The Graphical Framework} \subsubsection{The Adversary's Strategy and Goal} \label{sec:model:goal} We assume the adversary relies on the ``mimicry'' strategy~\cite{DemontisMBMARCG17} to \newterm{evade} an ML classifier: Departing from a known initial example $x$, the adversary applies structure-preserving \newterm{transformations} until a transformed \newterm{adversarial example}, $x'$, is misclassified. The adversary also wants to minimize the cost of these transformations. This problem is often formulated as an optimization problem: \begin{equation}\label{eq:mincost-problem} \optim{x} = \arg \min_{x' \in {\mathbb{X}}} C(x, x') \text{ s.t. } \mathsf{goal}\xspace(x') = \top, \end{equation} where $x$ is the initial example, and $C(x, x') > 0$ is the \newterm{adversarial cost}. $C$ models the ``price'' that the adversary pays to transform example $x$ into~$x'$. The adversary's goal in this problem is to cause a misclassification error with a certain confidence level $l \geq d$: \begin{equation}\label{eq:target-conf-goal} \mathsf{goal}\xspace(x') = \begin{cases} \top, & t = 1 \text{ and } \sigma(f(x')) > l \\ \top, & t = 0 \text{ and } \sigma(f(x')) \leq 1-l \\ \bot, & \text{otherwise} \end{cases} \end{equation} where $t$ is the target class, which is different from the original class $F(x)$ (if $F(x)=0$, then $t=1$, and vice-versa).
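For concreteness, this goal predicate can be written down directly; the following is a minimal, illustrative Python sketch (the function names and the plain-Python implementation are ours, not part of the framework):
\begin{verbatim}
import math

def sigmoid(y):
    """Map the discriminant value to a confidence in [0, 1]."""
    return 1.0 / (1.0 + math.exp(-y))

def discriminant(w, b, phi_x):
    """Linear part of the classifier: f(x) = w . phi(x) + b."""
    return sum(wi * xi for wi, xi in zip(w, phi_x)) + b

def goal(f_value, t, l):
    """Goal predicate from the equation above: the transformed example
    must be assigned the target class t with confidence at least l."""
    conf = sigmoid(f_value)
    return conf > l if t == 1 else conf <= 1.0 - l
\end{verbatim}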
If $l$ is equal to the decision threshold of the classifier, $l = d$, the adversary merely aims to flip the decision. The adversary might want not only to flip the decision, but also to obtain adversarial examples that are classified as the target class with higher confidence. This corresponds to higher confidence levels: $l > d$. \subsubsection{The Adversary's Knowledge} Following standard practices for evaluating security properties of ML models, we assume the worst-case, \newterm{white-box}, adversary that has full knowledge of the target model parameters, including ${\bm{w}}, b$ and the feature mapping~$\phi$. In \Secref{sec:eval-wfp} we also discuss attacks that are applicable to a \newterm{black-box setting}, where the adversary does not have knowledge of the model parameters or architecture, but can query it with arbitrary examples $x$ to obtain $f(x)$. \subsubsection{The Adversary's Capabilities} We model the capabilities of the adversary, including inherent domain constraints and the cost of modifications, using a \newterm{transformation graph} that encodes the transformations the adversary can perform on an example $x$. This graph has to be defined \emph{before} running the attack. The transformation graph is a directed weighted graph ${\mathcal{G}} = (V, E, \omega)$, with $V \subseteq {\mathbb{X}}$ being a subset of the model's input space that the adversary can craft. An edge $(x, x') \in E$ represents the transformation of an example $x$ into an example $x'$. For each edge $(x, x') \in E$, the function $\omega$ defines the cost $\omega(x, x') > 0$ associated with that transformation. A path cost $W(x_1 \rightarrow x_2 \rightarrow \ldots \rightarrow x_n)$ represents the cost of performing a chain of transformations $x_1 \rightarrow x_2 \rightarrow \ldots \rightarrow x_n$. \subsubsection{Graphical Formulation} Within the graphical framework, the problem in \Eqref{eq:mincost-problem} is reduced to minimizing the transformation cost as defined by the graph ${\mathcal{G}}$, thus narrowing the search space to only those $x'$ that are reachable from $x$: \begin{equation}\label{eq:mincost-graph-problem} \begin{aligned} & \optim{x} = \arg \min_{x' \in V} C_{{\mathcal{G}}}(x, x') \\ \text{ s.t. } & \mathsf{goal}\xspace(x') = \top \\ & \text{$x'$ is reachable from $x$ in ${\mathcal{G}}$} \end{aligned} \end{equation} \begin{example} \label{ex:example-graph} Consider a toy Twitter-bot detection classifier that takes as input the \emph{days since the account was created}, and the \emph{total number of replies to the tweets made by this account}, and outputs a binary decision: bot or not. Starting from an arbitrary account, the adversary wants to create a bot that evades the detector by only modifying these two features. To save time and money, the adversary wants to keep these modifications to a minimum. In this setting, the transformation graph can be built as follows. For each feature vector $v \in V$ representing an account, there exist up to four children in the graph: one example with the \emph{number of days since account creation} feature incremented by one, one with it decremented by one, and analogously two children for the \emph{number of replies to the tweets}. Let all edges have cost 1. In such a graph, the cost of a transformation chain is the number of edges traversed, e.g., incrementing the \emph{number of days since account creation} by three is equivalent to a path with three edges (the path cost is 3).
The adversary's goal is to find the path with the lowest cost (minimal number of transformations) that flips the classifier's decision. The resulting account is the solution to~\Eqref{eq:mincost-graph-problem}. \end{example} \subsection{Provable Optimality Guarantees} For a given adversary model and an initial example $x$, we define $\optim{c}$ as the minimal cost of the transformations needed to achieve a misclassification goal: \[\optim{c} = \min_{x' \in V} C_{{\mathcal{G}}}(x, x') \text{ s.t. } \mathsf{goal}\xspace(x') = \top\] The minimal $\optim{c}$, or a tight \emph{lower} bound on $\optim{c}$, for a given $x$ is a measure of adversarial robustness of the model, equivalent to the notion of \newterm{pointwise adversarial robustness}~\cite{BastaniILVNC16, FawziMF16}, and \newterm{minimal adversarial cost} (MAC)~\cite{LowdM05}. The MAC can be used to quantify the \newterm{security of models}: the more secure a model is, the higher the cost of successfully mounting an evasion attack. In \Secref{sec:eval-bots} we illustrate this idea in the context of an ML classifier for Twitter-bot detection, using $\optim{c}$ to evaluate the security. Finding the globally optimal $\optim{c}$ could be computationally expensive. A tight \emph{upper} bound on $\optim{c}$, however, is easier to find in practice. In \Secref{sec:eval-wfp} we use upper bounds on $\optim{c}$ to evaluate the effectiveness of evasion attacks as privacy defenses against traffic analysis. \section{Conclusions} In this paper, we proposed a graphical framework for formalizing evasion attacks in discrete domains. This framework casts attacks as search over a graph of valid transformations of an initial example. It generalizes many proposed attacks in various discrete domains, and offers additional benefits. First, as a formalization, it enables us to define arbitrary adversarial costs and to choose search algorithms from the vast literature on graph search, whereas the previous attacks often use $\lp$ norms as costs and mostly focus on hill-climbing strategies. Second, we show that when it is possible to compute adversarial robustness in a continuous domain, this robustness measure can be used as a heuristic to efficiently explore a discrete domain. Thus, an adversary with white-box knowledge can use A$^*$\xspace search to obtain adversarial examples that incur minimal adversarial cost. This enables us to provably evaluate the adversarial robustness of a classifier given the domain constraints and the adversary's capabilities. Third, the versatility of our framework, which models transformations and their costs independently of the ML model under attack, makes it suitable to tackle both security and privacy problems. As examples, we showed how it can be used to evaluate the adversarial robustness of a Twitter-bot classifier, and to evaluate the cost-effectiveness of privacy defenses against a website-fingerprinting classifier. \vspace{2em} \section{Provably Minimal-Cost Attacks Using Heuristic Graph Search} \label{sec:optim-attack} One way to find an optimal, or admissible, solution to the graph-search problem in~\Eqref{eq:mincost-graph-problem} is to use uniform-cost search (see \Secref{sec:model}). This approach, however, can be inefficient or even infeasible. For example, let us consider the transformation graph in Example~\ref{ex:example-graph}, where the branching factor is 4.
Assuming that at most 30 decrements or increments can be performed to any of the features, the number of nodes in this graph is bounded by $n = 4^{30} = 2^{60}$. Given that uniform-cost search (UCS) needs to expand $n$ nodes in the worst case, if a single expansion takes a nanosecond, a full graph traversal would take 36 years. For certain settings, however, it is possible to use heuristics to identify the best direction in which to traverse the graph, escaping the combinatorial explosion through the use of heuristic search algorithms like A$^*$\xspace (see \Secref{sec:bfs}). To ensure that these algorithms find the admissible $\optim{x}$, it is sufficient that the heuristic is \emph{admissible}~\cite{DechterP85}: \begin{definition}[Admissible heuristic]\label{def:admissibility} Let ${\mathcal{G}} = (V, E, \omega)$ be a weighted directed graph with $\omega(v, v') \geq 0$. A heuristic $h(v)$ is admissible if for any $v \in V$ and any goal node $g \in V$ it never overestimates $C_{{\mathcal{G}}}(v, g)$: $h(v) \leq C_{{\mathcal{G}}}(v, g)$. \end{definition} In general, admissibility does not guarantee that A$^*$\xspace runs in an optimally \emph{efficient} way. To guarantee optimality in terms of efficiency, the heuristic must be \newterm{consistent}, a stronger property~\cite{DechterP85}. \subsection{Optimal Instantiation} We detail one setting for which there exists an admissible heuristic for the adversarial example search problem. Let the input domain ${\mathbb{X}}$ be a discrete subset of the vector space ${\mathbb{R}}^m$, and let the cost of an edge $({\bm{x}}, {\bm{x}}')$ in the transformation graph be a norm-induced metric between examples ${\bm{x}}$ and ${\bm{x}}'$: \[\omega({\bm{x}}, {\bm{x}}') = \norm{{\bm{x}} - {\bm{x}}'}.\] Let ${\mathbb{S}} \subseteq {\mathbb{R}}^m$ be a superset of ${\mathbb{X}}$, e.g., a continuous closure of a discrete ${\mathbb{X}}$. Let $\dist({\bm{x}})$ denote the minimal adversarial cost of the classifier at input ${\bm{x}}$ with respect to cost $\norm{{\bm{x}} - {\bm{x}}'}$ over the search space ${\mathbb{S}}$. Because the search space is a subset of ${\mathbb{R}}^m$, $\dist$ can be simplified from \Eqref{eq:mincost-problem} to the following: \begin{equation}\label{eq:robustness} \dist({\bm{x}}) = \min_{\Delta \in {\mathbb{R}}^m} \norm{\Delta} \text{ s.t. } \mathsf{goal}\xspace({\bm{x}} + \Delta) = \top,\ {\bm{x}} + \Delta \in {\mathbb{S}} \end{equation} Any \newterm{lower bound} $\distlowbound({\bm{x}})$ on $\dist({\bm{x}})$ over any ${\mathbb{S}}$ such that ${\mathbb{X}} \subseteq {\mathbb{S}}$ can be used to construct an admissible heuristic $h({\bm{x}})$: \begin{equation}\label{eq:heuristic-threshold} h({\bm{x}}) = \begin{cases} \distlowbound({\bm{x}}), &\mathsf{goal}\xspace({\bm{x}}) \neq \top \\ 0, &\text{otherwise} \end{cases} \end{equation} If ${\bm{x}}$ is not already classified as the target class $t$, this heuristic returns the lower bound on the MAC, $\distlowbound({\bm{x}})$. When ${\bm{x}}$ is classified as the target class, i.e., ${\bm{x}}$ is already on the other side of the decision boundary, the heuristic returns $0$. The heuristic $h$ is admissible because it returns a lower bound on the path cost from an example ${\bm{x}}$ to any adversarial example $\optim{{\bm{x}}}$. \begin{statement}[Admissibility of $h$]\label{thm:admissibility} Let the transformation graph ${\mathcal{G}} = (V, E, \omega)$ have $\omega({\bm{a}}, {\bm{b}}) = \norm{{\bm{a}}-{\bm{b}}}$, and the initial example be ${\bm{x}} \in V \subseteq {\mathbb{R}}^m$.
Then $h$ is an admissible heuristic for the graph search problem from \Eqref{eq:mincost-graph-problem}. (Proof in \Appref{app:admissibility-proof}) \end{statement} In the rest of the paper, we use $\dist({\bm{x}})$, $\distlowbound({\bm{x}})$ and ``heuristic'' interchangeably, as $\dist({\bm{x}})$ or $\distlowbound({\bm{x}})$ unambiguously define the heuristic $h({\bm{x}})$ through \Eqref{eq:heuristic-threshold}. We note that in many domains, and in particular in non-image domains, the transformation cost for different features might not be equally distributed. \Stmtref{thm:admissibility} holds for any norm. Hence, the edge cost can be instantiated with weighted norms to capture differences in adversarial cost between features. We note that, regardless of the cost model, the structure of the transformation graph can encode more complex cost functions than $\lp$ distances between vectors, as we demonstrate in \Secref{sec:eval-bots}. \begin{figure} \centering \input{parts/extras/illustration.tex} \end{figure} \subsection{Computing the Heuristic}\label{sec:computing-heuristic} \subsubsection{Linear Models} The MAC $\dist({\bm{x}})$ over a continuous ${\mathbb{R}}^m$ is equivalent to the standard notion of pointwise adversarial robustness, and can be computed efficiently for linear models. When the target model is a linear model, $\phi({\bm{x}}) = {\bm{x}}$, $\dist({\bm{x}})$ is a distance from ${\bm{x}}$ to the hyperplane defined by the discriminant function (see \Figref{fig:heuristic-illustration}), and has a closed form: \begin{equation}\label{eq:dist-decision-boundary} \dist({\bm{x}}) = \frac{|{\bm{w}} \cdot {\bm{x}} + b|}{\dualnorm{{\bm{w}}}} = \frac{|f({\bm{x}})|}{\dualnorm{{\bm{w}}}}, \end{equation} where $\dualnorm{{\bm{w}}}$ is the dual norm corresponding to $\norm{{\bm{w}}}$ \cite{Mangasarian99,PlastriaCarrizosa01}. If the edge cost is induced by an $\lp$ norm, $\omega({\bm{a}}, {\bm{b}}) = \norm{{\bm{a}} - {\bm{b}}}_p$, the denominator $\dualnorm{{\bm{w}}}$ is $\norm{{\bm{w}}}_q$, where $q$ is the H\"older conjugate of $p$: $\frac{1}{p} + \frac{1}{q} = 1$. \subsubsection{Non-Linear Models} For some models, $\dist({\bm{x}})$ can be either computed using formal methods~\cite{CarliniKBD17, KatzBDJK17, BastaniILVNC16}, or bounded analytically, yielding a lower bound $\distlowbound({\bm{x}})$~\cite{TsuzukuSS18, PeckRGS17, HeinA17}. Existing methods usually perform the computation over a box-constrained ${\mathbb{S}} = I_1 \times I_2 \times \cdots \times I_m$ for some contiguous intervals $I_j \subset {\mathbb{R}}$, and are only applicable to $\lp$ norm-based costs. These methods are much more expensive to compute than the closed-form solution for linear models above. \subsubsection{Bounded Relaxations}\label{sec:optim:bounded} A number of works explore bounded relaxations of the admissibility properties of A$^*$\xspace search, aiming to trade off admissibility guarantees for computational efficiency~\cite{Pohl70, Pohl73, PearlK82, LikhachevGT03}. In this paper, we employ \emph{static weighting}~\cite{Pohl70} for its simplicity. In this approach, the heuristic value is multiplied by $\varepsilon > 1$. This results in adversarial examples that have at most $\varepsilon$ times higher cost than MAC. \section{Introduction} Many classes of machine-learning (ML) models are vulnerable to efficient gradient-based attacks that cause classification errors at test time~\cite{MadryMSTV17, CarliniWagner17, Moosavi-Dezfooli16, PapernotMJFCS16, GoodfellowSS14, BiggioCMNSLGR13, SzegedyZSBEGF13}. 
A large body of work has been dedicated to obtaining \newterm{provable guarantees of adversarial robustness} of ML classifiers against such attacks~\cite{BastaniILVNC16, KatzBDJK17, HeinA17, FawziFF18, WongSMK18, TsuzukuSS18}. These works focus exclusively on continuous domains such as images, where an attacker adds small \newterm{perturbations} to regular examples such that the resulting \newterm{adversarial examples} cause a misclassification. Their definition of robustness is that the classifier's decision is stable in a certain $\lp$-neighbourhood around a given example, i.e., perturbations having an $\lp$ norm lower than a threshold cannot flip the decision of the classifier. Security-critical applications of machine learning such as bot, malware, or spam detection rely on feature vectors whose values are constrained by the specifics of the problem. In these settings, adding small perturbations to examples could result in feature vectors that cannot appear in real life~\cite{DemontisMBMARCG17, EbrahimiRLD18, JiaG18, KolosnjajiDBMGER18}. For example, perturbing a malware binary to prevent antivirus detection could turn the binary non-executable; or perturbing a text representation to change the output of a classifier~\cite{MiyatoDG16} could cause the representation to not correspond to any plausible text in terms of semantics or grammar. Hence, the techniques for evaluating robustness against perturbation-based adversaries are not straightforward to apply to these cases. This shows a significant gap in the literature. On one hand, multiple methods and tools have been proposed to either evaluate such measures of robustness, or train robust models. On the other hand, many security-critical domains---that could benefit from such tools---cannot effectively use them, as the measures do not easily translate to discrete domains. We introduce a framework for efficiently finding adversarial examples that is suitable for constrained discrete domains. We represent the space of possible adversarial manipulations as a weighted directed graph, called a \newterm{transformation graph}. Starting from a regular example, every edge is a transformation, its weight being the transformation cost, and the children nodes are transformed examples. This representation has the following advantages. First, explicitly defining the descendants for each node captures the feasibility constraints of the domain: the transitive closure for a given starting node represents the set of all possible transformations of that example. Second, the graph can capture non-trivial manipulation-cost functions: the cost of a sequence of manipulations can be modeled as the sum of edge weights along a path from the original to the transformed example. Third, the graph representation is independent of the ML model being attacked, and of the adversary's knowledge of this model. Thus, it applies to different adversarial settings. Fourth, this framework is a generalization of many existing attacks in discrete domains~\cite{PapernotMJFCS16, PapernotMSH16, GrossePM0M16, EbrahimiRLD18, LiangSBLS18, GaoLSQ18, JiaG18, OverdorfKBTG18}. This makes it a useful tool for comparing attacks and characterizing the attack space. An additional advantage of the graphical approach is that it enables us to use well-known algorithms for graph search in order to find adversarial examples. 
Concretely, in a white-box setting, an adversary can use the A$^*$\xspace graph-search algorithm to find adversarial examples that are \emph{optimal} in terms of transformation cost, i.e., with \newterm{minimal adversarial cost} guarantees. Note that we use the term \emph{adversarial cost} to represent the effort an adversary applies to mount an attack~\cite{LowdM05}. Works on \newterm{cost-sensitive adversarial robustness}~\cite{AsifXBZ15, ZhangE19} use the term \emph{cost} in a different sense---to represent the harm an adversary causes with different kinds of misclassifications in a multi-class setting---and, hence, are orthogonal to our work. Being able to obtain constrained adversarial examples with provably minimal costs has key implications for security. The minimal adversarial cost guarantee naturally extends the notion of adversarial robustness in an $\lp$-neighbourhood to constrained discrete domains. Furthermore, our approach enables the evaluation of adversarial robustness under realistic tangible cost functions. For instance, we show how to measure robustness in terms of economic cost, i.e., guaranteeing that the adversary needs to pay a certain financial price in order to change the decision of the classifier. Using A$^*$\xspace search to find minimal-cost adversarial examples requires computing a heuristic based on a measure of adversarial robustness in a $\lp$-neighbourhood over a continuous superset of the domain. Hence, we establish the following connection: if the adversarial robustness of a classifier can be computed in a $\lp$-neighbourhood in a continuous domain, it can also be computed in a discrete domain using our framework. If computing robustness in the continuous domain is expensive, we show how efficient sub-optimal instantiations of A$^*$\xspace can be used to still obtain optimality guarantees on the costs. The graphical framework and the provable guarantees it enables are also useful in privacy applications. Machine learning is widely used to infer private information about users~\cite{Cadwalladr18, AbbasiC08, KosinskiSG13}, track people~\cite{HRW18}, and learn about their browsing behaviour~\cite{BackMS01,LiberatoreL06}. Defenses against such attacks are non-existent or predominantly ad-hoc (e.g., for de-anonymization attacks \cite{PanchenkoNZE11, DyerCRS12, CaiNWJG14, CaiNJ14, JuarezIPDW17}). With the exception of the work by \citeauthor{JiaG18} on hindering profiling based on app usage~\cite{JiaG18}, the privacy community has so far not considered the use of evasion attacks as a systematic means to counteract privacy-invasive classifiers. As many of the domains in which privacy-invasive classifiers operate are discrete, our framework opens the door to the principled design of privacy defenses against such classifiers. Moreover, the minimal cost guarantee provides a lower bound on the costs of a defense. Thus, it offers a good baseline for benchmarking the cost-effectiveness of existing privacy defenses against machine learning. \smallskip In summary, these are our contributions: \begin{itemize} \item We present a graphical framework that systematizes the crafting of adversarial examples in constrained discrete domains. It generalizes many existing crafting methods. \item We show how to use the framework to measure adversarial robustness considering the domain constraints and the adversary's capabilities. Our framework can express adversarial costs with arbitrary cost functions beyond the commonly-used $\lp$ norms.
\item We show how the A$^*$\xspace graph search algorithm can be used to efficiently obtain minimal-cost adversarial examples, and thus to provide provable guarantees of robustness against a given model of adversarial capabilities. \item We identify and formally prove the connection between adversarial robustness in continuous and discrete domains: if the robustness can be computed in the continuous domain, it can also be computed in a discrete domain. \item We show how our framework can be used as a tool to systematically build and evaluate privacy defenses against privacy-invasive ML classifiers. \end{itemize} \section{Graph search approach} \label{sec:non-optim-attack} As detailed before, there exist numerous attacks that produce adversarial examples in various discrete domains. The majority of these attacks are based on greedy heuristic algorithms. Our formalization of the adversary's capabilities through a transformation graph (\Secref{sec:model}) allows us to view the problem of finding adversarial examples in discrete domains as graph search. This view subsumes many of these attack techniques, and offers some additional benefits. In this section we present some background on graph search algorithms and show how the attacks mentioned above can be formulated as instances of graph search. \subsection{Best-first search algorithms} \label{sec:bfs} In Algorithm~\ref{alg:bfs} we show the pseudocode of a variant of generalized best-first search called BF$^*$~\cite{DechterP85}. Uniform-cost search, greedy best-first search, and A$^*$\xspace variants are specializations of BF$^*$. They differ in their instantiation of the scoring function used to select the best nodes at each step of the algorithm. Additionally, by limiting the number of items in the data structure holding candidate nodes, one can obtain best-first beam search and hill climbing algorithms~\cite{RichKnight91}. We summarize these differences in~\Tabref{tab:search-algos}.
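To complement the pseudocode, the following is a minimal Python sketch of BF$^*$ (illustrative only; the stale-entry handling, the dictionary-based bookkeeping of recorded scores, and the \texttt{capacity} argument for the bounded-queue variants are our own implementation choices, not prescribed by the algorithm):
\begin{verbatim}
import heapq, itertools

def bf_star(start, children, score, goal, capacity=None):
    """Generic best-first search (BF*) over a transformation graph.
    children(v) -- iterable of successors of node v
    score(v, w) -- scoring function (see the table of specializations)
    goal(v)     -- goal predicate
    capacity    -- bound on OPEN (1 = hill climbing, B = beam search)"""
    counter = itertools.count()                # tie-breaker for the heap
    open_heap = [(0.0, next(counter), start)]  # OPEN priority queue
    best = {start: 0.0}                        # best recorded score per node
    closed = set()                             # CLOSED set
    while open_heap:
        s, _, v = heapq.heappop(open_heap)
        if s > best.get(v, float("inf")):
            continue                           # stale entry, skip it
        if goal(v):
            return v
        closed.add(v)
        for w in children(v):
            s_w = score(v, w)
            if s_w < best.get(w, float("inf")):
                best[w] = s_w                  # record or update the score
                closed.discard(w)              # re-open the node if needed
                heapq.heappush(open_heap, (s_w, next(counter), w))
        if capacity is not None:               # beam search / hill climbing
            open_heap = heapq.nsmallest(capacity, open_heap)
            heapq.heapify(open_heap)
    return None                                # no reachable goal node
\end{verbatim}
With the scoring functions from \Tabref{tab:search-algos}, this single routine yields UCS, greedy best-first search, A$^*$\xspace, and, with a bounded queue, their beam-search and hill-climbing variants.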
\begin{algorithm}[t] \caption{BF$^*$ search algorithm}\label{alg:bfs} \begin{algorithmic}[1] \Require{Priority queue data structure $\mathsf{pqueue}\xspace$} \Require{Directed graph ${\mathcal{G}} = (V, E)$} \Require{Scoring function $\mathsf{score}\xspace: V \times V \rightarrow {\mathbb{R}}$} \Require{Goal predicate $\mathsf{goal}\xspace: V \rightarrow \{0, 1\}$} \Require{Starting node $s \in V$} \Function{BF$^*$}{${\mathcal{G}}$, $\mathsf{score}\xspace(\cdot)$, $\mathsf{goal}\xspace(\cdot)$; $s$} \Let{\textsc{open}\xspace}{$\mathsf{pqueue}\xspace(\{s\})$} \Let{\textsc{closed}\xspace}{$\{\}$} \While{\textsc{open}\xspace is not empty} \Let{$v$}{remove node with lowest $f$-score from \textsc{open}\xspace} \If{$\mathsf{goal}\xspace(v)$} \Return $v$ \EndIf \Let{\textsc{closed}\xspace}{$\textsc{closed}\xspace \cup \{v\}$} \For{each child $v'$ of $v$ in ${\mathcal{G}}$} \Let{$\mathsf{score}\xspace$-value}{$\mathsf{score}\xspace(v, v')$} \If{$v'$ \emph{not} in \textsc{open}\xspace or \textsc{closed}\xspace} \State{Record $v'$ into \textsc{open}\xspace with $\mathsf{score}\xspace$-value} \EndIf \If{$v'$ is in \textsc{open}\xspace or \textsc{closed}\xspace and $\mathsf{score}\xspace$-value \phantom{\hspace{5.2em}} is lower than recorded} \State{Replace $v'$ with the updated} \State{\quad $\mathsf{score}\xspace$-value in the respective set} \If{$v'$ is in \textsc{closed}\xspace} \State{Move $v'$ to \textsc{open}\xspace} \EndIf \EndIf \EndFor \EndWhile \EndFunction \end{algorithmic} \end{algorithm} \begin{table}[t] \centering \caption{Specializations of BF$^*$}\label{tab:search-algos} \resizebox{\columnwidth}{!}{ \begin{tabular}{ll} \toprule \textbf{Algorithm} & \textbf{$\mathsf{score}\xspace$} \\ \midrule Greedy best-first~\cite{DoranMichie66} & $h(v')$ \\ Uniform-cost~\cite{HartNR68} & $\omega(v, v')$ \\ A$^*$\xspace~\cite{HartNR68} & $\omega(v, v') + h(v')$ \\ $\varepsilon$-weighted A$^*$\xspace~\cite{Pohl70} & $\omega(v, v') + \varepsilon h(v')$ \\ \bottomrule \\ \toprule \textbf{Algorithm} & \textbf{$\mathsf{pqueue}\xspace$} \\ \midrule Hill climbing & Limited to one best-scoring item \\ Beam search~\cite{RichKnight91} & Limited to $B$ best-scoring items \\ \bottomrule \end{tabular} } \end{table} \subsection{Instantiations of existing attacks} \label{sec:related-instantiations} We can use the graphical framework formalization from \Secref{sec:model} to instantiate several existing attacks against machine learning classifiers in discrete domains. \Tabref{tab:comparison} outlines the instantiations of these attacks as applications of best-first search over a transformation graph. The attacks differ in their general setting, transformation graphs, and the scoring function choice. Note that for all cases, each child in a transformation graph (column \emph{expansions} in the table) represents a single atomic transformation. Transformations thus compound with the depth of the traversal of the graph. For those attacks that can be instantiated as greedy search, if \emph{cost} is shown, it is enforced by truncating the transformation graph once a certain cost threshold is reached. Most of the attacks amount to running a hill climbing search algorithm over some \emph{implicit} transformation graph. \citet{EbrahimiRLD18} take a step forward and run beam search. \citet{OverdorfKBTG18} use an exhaustive search over the space of feasible transformations, equivalent to uniform-cost search.
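To make the correspondence concrete, the entries of \Tabref{tab:search-algos} map onto the earlier \texttt{bf\_star} sketch roughly as follows (illustrative Python; the \texttt{omega} edge cost and the heuristic placeholder \texttt{h} are stand-ins, not the actual attack implementations):
\begin{verbatim}
def omega(v, w):
    """Illustrative edge cost: number of changed features between v and w."""
    return sum(1 for a, b in zip(v, w) if a != b)

def h(w):
    """Illustrative heuristic placeholder (any admissible lower bound)."""
    return 0.0

ucs       = lambda v, w: omega(v, w)               # uniform-cost search
greedy    = lambda v, w: h(w)                      # greedy best-first
a_star    = lambda v, w: omega(v, w) + h(w)        # A*
eps_astar = lambda v, w: omega(v, w) + 5.0 * h(w)  # epsilon-weighted A*

# Hill climbing and beam search: same scores, bounded OPEN queue, e.g.
#   bf_star(x0, children, a_star, goal, capacity=1)   # hill climbing
#   bf_star(x0, children, a_star, goal, capacity=10)  # beam search, B = 10
\end{verbatim}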
\begin{sidewaystable} \input{parts/extras/pretty_comparison.tex} \end{sidewaystable} \medskip In \Secref{sec:eval-wfp} we use the best-first graph search algorithm to run attacks against network traffic classifier. To demonstrate an advantage of the graphical framework, in the next section, \Secref{sec:optim-attack}, we show how in some settings one can leverage the framework to obtain adversarial examples with provable optimality guarantees using A$^*$\xspace search. \section*{Acknowledgements} We would like to thank Danesh Irani, \'Ulfar Erlingsson, Alexey Kurakin, Seyed-Mohsen Moosavi-Dezfooli, Maksym Andriushchenko, and Giovanni Cherubin for their feedback and helpful discussions. This research is funded by NEXTLEAP project\footnote{\url{https://nextleap.eu}} within the European Union's Horizon 2020 Framework Program for Research and Innovation (H2020-ICT-2015, ICT-10-2015) under grant agreement 688722. Jamie Hayes is funded by a Google PhD Fellowship in Machine Learning. \section{Instantiations of attacks in discrete domains as best-first search}% \label{app:related-instantiations} \subsection{Best-first search algorithms} In Algorithm~\ref{alg:bfs} we show the pseudocode of a variant of generalized best-first search called BF$^*$~\cite{DechterP85}. Several common search algorithms, like uniform-cost search, greedy best-first search, and A$^*$\xspace variants, are specializations of BF$^*$. They differ in their instantiation of the scoring function used to select the best nodes at each step of the algorithm. Additionally, by limiting the number of items in data structure holding candidate nodes, one can obtain best-first beam search and hill climbing algorithms~\cite{RichKnight91}. We summarize these differences in~\Tabref{tab:search-algos}. \begin{algorithm*}[t] \caption{BF$^*$ search algorithm}\label{alg:bfs} \begin{algorithmic}[1] \Require{Priority queue data structure $\mathsf{pqueue}\xspace$} \Require{Directed graph ${\mathcal{G}} = (V, E)$} \Require{Scoring function $\mathsf{score}\xspace: V \times V \rightarrow {\mathbb{R}}$} \Require{Goal predicate $\mathsf{goal}\xspace: V \rightarrow \{0, 1\}$} \Require{Starting node $s \in V$} \Function{BF$^*$}{${\mathcal{G}}$, $\mathsf{score}\xspace(\cdot)$, $\mathsf{goal}\xspace(\cdot)$; $s$} \Let{\textsc{open}\xspace}{$\mathsf{pqueue}\xspace(\{s\})$} \Let{\textsc{closed}\xspace}{$\{\}$} \While{\textsc{open}\xspace is not empty} \Let{$v$}{remove node from \textsc{open}\xspace with the lowest recorded value of $f$-score} \If{$\mathsf{goal}\xspace(v)$} \Return $v$ \EndIf \Let{\textsc{closed}\xspace}{$\textsc{closed}\xspace \cup \{v\}$} \For{each child $v'$ of $v$ in ${\mathcal{G}}$} \Let{$\mathsf{score}\xspace$-value}{$\mathsf{score}\xspace(v, v')$} \If{$v'$ \emph{not} in \textsc{open}\xspace or \textsc{closed}\xspace} \State{Record $v'$ into \textsc{open}\xspace with $\mathsf{score}\xspace$-value} \EndIf \If{$v'$ is in \textsc{open}\xspace or \textsc{closed}\xspace and $\mathsf{score}\xspace$-value is lower than recorded} \State{Replace $v'$ with the updated $\mathsf{score}\xspace$-value in the respective set} \If{$v'$ is in \textsc{closed}\xspace} \State{Move $v'$ to \textsc{open}\xspace} \EndIf \EndIf \EndFor \EndWhile \EndFunction \end{algorithmic} \end{algorithm*} \begin{table}[t] \centering \caption{Specializations of BF$^*$}\label{tab:search-algos} \begin{tabular}{ll} \toprule \textbf{Algorithm} & \textbf{$\mathsf{score}\xspace$} \\ \midrule Greedy best-first~\cite{DoranMichie66} & $h(v')$ \\ Uniform-cost~\cite{HartNR68} & $\omega(v, 
v')$ \\ A$^*$\xspace~\cite{HartNR68} & $\omega(v, v') + h(v')$ \\ $\varepsilon$-weighted A$^*$\xspace~\cite{Pohl70} & $\omega(v, v') + \varepsilon h(v')$ \\ \bottomrule \\ \toprule \textbf{Algorithm} & \textbf{$\mathsf{pqueue}\xspace$} \\ \midrule Hill climbing & Limited to one highest-scoring item \\ Best-first beam search~\cite{RichKnight91} & Limited to $B$ highest-scoring items \\ \bottomrule \end{tabular} \end{table} \subsection{Instantiations} We can use the graphical framework formalization, described in \Secref{sec:model}, to instantiate several existing attacks against machine learning classifiers in discrete domains. \Tabref{tab:comparison} outlines the instantiations of these attacks as applications of best-first search over a transformation graph. The attacks differ in their general setting, transformation graphs, and the scoring function choice. Note that for all cases, each child in a transformation graph (column \emph{expansions} in the table) represents a single atomic transformation. Transformations thus compound with the depth of the traversal of the graph. For those attacks that can be instantiated as greedy search, if \emph{cost} is shown, it is enforced by truncating the transformation graph once a certain cost threshold is reached. \begin{sidewaystable*} \input{parts/extras/pretty_comparison.tex} \end{sidewaystable*} \section{Proof of \Stmtref{thm:admissibility}} \label{app:admissibility-proof} Observe that if $F({\bm{x}}) = t$, the heuristic $\dist({\bm{x}}) = 0$, and hence is trivially admissible. Indeed, it cannot overestimate $C_{{\mathcal{G}}}({\bm{x}}, \optim{{\bm{x}}})$ due to the fact that $\omega(a, b) \geq 0$ and $C_{{\mathcal{G}}}(a, b) \geq 0$ for any $a, b \in V$. It is therefore sufficient to show that if $F({\bm{x}}) \neq t$, the lower bound on adversarial robustness at ${\bm{x}}$ over ${\mathbb{S}}$ never overestimates $C_{{\mathcal{G}}}({\bm{x}}, \optim{{\bm{x}}})$: \begin{equation}\label{eq:lowbound-less-than-graphcost} \distlowbound({\bm{x}}) \leq C_{{\mathcal{G}}}({\bm{x}}, \optim{{\bm{x}}}) \end{equation} The following sequence holds: \[ \begin{aligned} \dist({\bm{x}}) & \leq \norm{{\bm{x}} - \optim{{\bm{x}}}} \\ & \leq C_{{\mathcal{G}}}({\bm{x}}, \optim{{\bm{x}}}) \\ \end{aligned} \] The first inequality is by definition of $\dist({\bm{x}})$ (see \Eqref{eq:robustness}). Indeed, since $\dist({\bm{x}})$ is the norm of the smallest adversarial perturbation $\Delta$ over ${\mathbb{S}}$, $\norm{\Delta}$ is no larger than the distance from ${\bm{x}}$ to any other ${\bm{x}}' \in {\mathbb{X}} \subseteq {\mathbb{S}}$ that also flips the decision of the target classifier: \[ \dist({\bm{x}}) = \norm{\Delta} \leq \norm{{\bm{x}} - {\bm{x}}'} \quad \text{\footnotesize (for \emph{any} ${\bm{x}}' \in {\mathbb{X}}$ s.t.
$F({\bm{x}}') = t$)} \] By \Eqref{eq:graph-cost}, $C_{{\mathcal{G}}}({\bm{x}}, \optim{{\bm{x}}})$ is a path cost for some path: \[ \begin{aligned} C_{{\mathcal{G}}}({\bm{x}}, \optim{{\bm{x}}}) & = W({\bm{x}} \rightarrow {\bm{v}}_1 \rightarrow \ldots \rightarrow {\bm{v}}_{n-1} \rightarrow \optim{{\bm{x}}}) \\ & = \norm{{\bm{x}} - {\bm{v}}_1} + \sum_{i=1}^{n-2} \norm{{\bm{v}}_i - {\bm{v}}_{i+1}} + \norm{{\bm{v}}_{n-1} - \optim{{\bm{x}}}} \end{aligned} \] By triangle property of the norm, the second inequality holds: \[ \begin{aligned} \norm{{\bm{x}} - \optim{{\bm{x}}}} & \leq \norm{{\bm{x}} - {\bm{v}}_1} + \sum_{i=1}^{n-2} \norm{{\bm{v}}_i - {\bm{v}}_{i+1}} + \norm{{\bm{v}}_{n-1} - \optim{{\bm{x}}}} \\ & = C_{{\mathcal{G}}}(x, \optim{{\bm{x}}}) \end{aligned} \] Hence, $\dist({\bm{x}}) \leq C_{{\mathcal{G}}}({\bm{x}}, \optim{{\bm{x}}})$, which implies \Eqref{eq:lowbound-less-than-graphcost}, and concludes the proof. \section{Derivation of the heuristic approximation for non-linear models} \label{sec:non-linear-heuristic-deriv} W.l.o.g, assume that the decision threshold of the target classifier is $\theta = 0$. For an initial ${\bm{x}} \in {\mathbb{R}}^m$, the smallest adversarial perturbation $\Delta \in {\mathbb{R}}^m$ puts ${\bm{x}} + \Delta$ \emph{on} the decision boundary: $f({\bm{x}} + \Delta) = \theta = 0$. Let $\tilde f({\bm{x}} + \Delta)$ be the first-order Taylor approximation of $f$ at ${\bm{x}} + \Delta$: \[\tilde f({\bm{x}} + \Delta) = f({\bm{x}}) + \nabla_{{\bm{x}}} f({\bm{x}}) \cdot \Delta\] We want to estimate $\Delta$ assuming $\tilde f({\bm{x}} + \Delta) = f({\bm{x}} + \Delta) = 0$. By H\"older's inequality, \[|f({\bm{x}})| = |\nabla_{{\bm{x}}} f({\bm{x}}) \cdot \Delta| \leq \dualnorm{\nabla_{{\bm{x}}} f({\bm{x}})} \norm{\Delta} \] Hence, assuming that $\tilde f({\bm{x}} + \Delta) = 0$, the $p$-norm of the smallest perturbation has the following lower bound: \[\norm{\Delta} \geq \frac{|f({\bm{x}})|}{\dualnorm{\nabla_{{\bm{x}}} f({\bm{x}})}} \] We can use the right-hand side as an approximation of the lower bound on $\dist({\bm{x}})$. Note that for a linear model $f({\bm{x}}) = {\bm{w}} \cdot {\bm{x}} + b$ the first-order approximation $\tilde f({\bm{x}} + \Delta)$ is exact. Hence, the bound implies \Eqref{eq:dist-decision-boundary}: \[\norm{\Delta} \geq \frac{|f({\bm{x}})|}{\dualnorm{{\bm{w}}}}\] \section{Supplementary figures} \label{app:figures} The rest of the document contains supplementary figures. \begin{sidewaystable*} \input{parts/extras/pretty_comparison.tex} \end{sidewaystable*} \begin{sidewaystable*} \input{parts/extras/bots_examples.tex} \end{sidewaystable*} \begin{figure*}[t] \centering \includegraphics[width=0.99\textwidth]{images/wfp__defs_overhead_tracelen_comparison.pdf} \caption{Overhead of WF defenses compared to adversarial examples found with hill-climbing search (x-axis is bi-symmetrically logarithmic)} \label{fig:wfp-defenses-delta} \end{figure*} \subsection{Evaluating Security: Twitter-Bot Detection} \label{sec:eval-bots} In this section, we evaluate the security of an ML-based Twitter-bot detector. First, we show how to use the graphical framework to compute adversarial robustness as the minimal cost of building a bot that can evade detection, and compare the guarantees the framework provides to the standard adversarial robustness measures. Second, we evaluate the efficiency and optimality of our attacks. At the end of this section, we discuss the implications of our assumptions when the framework is used as a security evaluation tool. 
As in our toy example, we assume the adversary starts with a bot account and aims to find the minimal set of transformations that makes the model classify the account as human. They define a transformation graph such that any chain of transformations results in a feasible account, run the graph search to find a minimal-cost example, and replicate the transformations on their bot account to evade the classifier. Note that, as opposed to adversarial examples on images where the perturbations added by the adversary may change the content of the image to the point where it changes its class, in our setting, a bot account will keep being a bot account regardless of the transformations. \subsubsection{Twitter-Bot Detector} We use a linear model as the target classifier that classifies an account as a ``bot'' or ``not bot''. In particular, we use $\lp[2]$-regularized logistic regression, as the use of a linear model enables us to compute the exact value of the heuristic efficiently (see \Secref{sec:computing-heuristic}). The decision threshold of the classifier is standard: $d = 0.5$. We use 5-fold cross-validation on the training set (see below) to pick the $\lp[2]$ regularization parameter of the logistic regression (set to $1.9$). Although simple, this classifier yields an accuracy of 88\% (random baseline is 65\%) and performs on par with an SVM with an RBF kernel, and better than some neural network architectures (see \Secref{sec:bots-discussion}). Hence, we consider the regression to be a realistic choice in our setting. \descr{Dataset} We use the dataset for Twitter bot classification by \citet{GilaniKC17}. Each example in this dataset represents aggregated information about a Twitter account in April of 2016. Concretely, it has the following features: the \emph{number of tweets,} \emph{retweets,} \emph{favourites,} \emph{lists,} and \emph{replies,} the \emph{average number of URLs,} the \emph{size of attached content,} \emph{average likes} and \emph{retweets per tweet,} and the \emph{list of apps that were used to post tweets} (see \Tabref{tab:bots-features}). Accounts are human-labeled as bots or humans. The original dataset is split into several bands by the number of followers. We report the results for the 1,289 accounts with under 1,000 followers; the bands with more popular accounts yield similar results. We randomly split the dataset into training and test sets, containing 1,160 and 129 accounts (90\% and 10\% of the samples, respectively). We generate adversarial examples for the 41 accounts in the test set that are classified as bots by the target classifier. \begin{table}[t] \input{parts/extras/bots_features.tex} \end{table} \descr{Feature Processing} Almost all the features in the dataset are numeric (e.g., \emph{size of attached content}). We use quantile-based bucketization to distribute them into buckets that correspond to quantiles in the training dataset. In our experiments, we use 20 buckets, which offers the best performance in a grid search measuring 5-fold cross-validation accuracy on the training set. After bucketization, we one-hot encode the features. The only non-numerical feature is the \emph{list of apps that were used to post the tweets}. We encode it as follows. For each of the six apps in the dataset, we use two bits: if the app was used by the account, we set the first bit, and if not, we set the second bit. \subsubsection{Security Evaluation} Here, we evaluate the security of the bot detector using minimal adversarial costs as the measure of adversarial robustness.
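Before describing the attack settings, we illustrate the feature processing above with a minimal numpy sketch (the helper names, the example column values, and the app names are made up for illustration; the actual pipeline may differ in details):
\begin{verbatim}
import numpy as np

def quantile_edges(train_col, n_buckets=20):
    """Bucket boundaries taken from the training-set quantiles."""
    qs = np.linspace(0, 1, n_buckets + 1)[1:-1]
    return np.unique(np.quantile(train_col, qs))

def one_hot_bucketize(col, edges):
    """One-hot encoding of the quantile bucket each value falls into."""
    idx = np.digitize(col, edges)
    return np.eye(len(edges) + 1)[idx]

def encode_apps(used_apps, all_apps):
    """Two bits per app: (used, not used)."""
    bits = []
    for app in all_apps:
        bits += [1.0, 0.0] if app in used_apps else [0.0, 1.0]
    return np.array(bits)

# Illustrative usage with made-up values:
train_tweets = np.array([10., 40., 56., 120., 200., 3500.])
edges = quantile_edges(train_tweets, n_buckets=4)
x_tweets = one_hot_bucketize(np.array([150.]), edges)   # shape (1, 4)
x_apps = encode_apps({"android"}, ["android", "web", "ifttt"])
\end{verbatim}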
\descr{The Adversary's Goals} As mentioned in \Secref{sec:model:goal}, the minimal adversarial costs depend on the adversary's definition of ``fooling''. To illustrate how the framework can accommodate different goals, we simulate two attack settings. First, a \emph{basic attack}, in which the adversary's goal is to find any adversarial examples that flip the decision of the classifier ($l = d = 0.5$). Second, a \emph{high-confidence attack}, in which the adversary's goal is to find adversarial examples that are classified as ``not bot'' with at least \emph{75\% confidence} ($l = 0.75$). \descr{The Adversary's Capabilities: Transformation Graph and Cost} For this evaluation, we assume that the adversary is capable of changing all account features, and the cost of an adversarial example is the \emph{number of changes} required to transform an initial bot account into that example. This adversarial cost model is similar to the state of the art in adversarial ML (see \Secref{sec:related-instantiations}). In \Secref{sec:bots-discussion}, we discuss a different model in which the adversary is constrained by the number of features that they can influence, and the cost is not measured in the number of changes, but in the actual dollar cost of performing transformations. We build the transformation graph by defining \emph{atomic} transformations that change \emph{only one} feature value. For each bucketed feature we define two atomic transformations: increasing the feature value so that it moves one bucket up, and decreasing the feature value so that it moves one bucket down. For the buckets in the extremes, only one transformation is possible. For the \emph{list of apps} feature, we define one transformation per app: flipping the bits that represent whether the app was used or not. Then, all possible modifications of an initial example, including those that change multiple features, can be represented as chains of atomic transformations: paths in the graph. For example, a modification that changes two features needs at least two atomic transformations: a path with two edges. We define the edge costs to be the $\lp[1]$ distance between feature vectors before and after a transformation. Given the one-hot encoding, this means that each atomic transformation in our graph has a constant cost of 2 (one bit is set to zero, and another bit to one). Such a representation has two advantages. First, the path cost can be easily related to the number of changes needed to evade the classifier: it suffices to divide the path cost by two. Second, because the edge cost is a norm-induced metric, we can use A$^*$\xspace with admissible heuristic by \Stmtref{thm:admissibility}. \Figref{fig:bots-graph-illustration} illustrates an example transformation graph for simplified accounts. Note that the $\lp[1]$ distance between two arbitrary feature vectors does not represent the number of changes between them. We are able to represent the number of changes through the structure of the transformation graph. 
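To make the construction concrete, the following sketch (illustrative Python; the state representation, the constants, and the assumption that the target class corresponds to $f(x) > \theta$ are ours) spells out the atomic transformations, the constant edge cost of 2, and the admissible $\lp[1]$-based heuristic from \Eqref{eq:dist-decision-boundary} for a linear model:
\begin{verbatim}
import numpy as np

N_BUCKETS = 20   # quantile buckets per numeric feature (as above)

def children(state):
    """Atomic transformations: move one bucketed feature one bucket up or
    down, or flip one app-usage flag.  state = (buckets, apps), both tuples."""
    buckets, apps = state
    for i, b in enumerate(buckets):
        for nb in (b - 1, b + 1):
            if 0 <= nb < N_BUCKETS:
                yield (buckets[:i] + (nb,) + buckets[i + 1:], apps)
    for j, a in enumerate(apps):
        yield (buckets, apps[:j] + (1 - a,) + apps[j + 1:])

def encode(state):
    """One-hot encoding of the buckets, two bits per app (used / not used)."""
    buckets, apps = state
    parts = [np.eye(N_BUCKETS)[b] for b in buckets]
    parts += [np.array([a, 1.0 - a]) for a in apps]
    return np.concatenate(parts)

def edge_cost(s, t):
    """L1 distance between encodings: 2 for every atomic transformation."""
    return float(np.abs(encode(s) - encode(t)).sum())

def heuristic(state, w, b, theta=0.0):
    """Admissible L1-based heuristic: |f(x)| / ||w||_inf, the distance to the
    decision hyperplane (L_inf is the dual norm of L1), and 0 once past the
    boundary (assuming the target class is reached when f(x) > theta)."""
    f = float(w @ encode(state) + b) - theta
    return 0.0 if f > 0 else abs(f) / float(np.max(np.abs(w)))
\end{verbatim}
Combined with a search routine such as the BF$^*$ sketch given earlier, A$^*$\xspace then amounts to scoring a child $x'$ of $x$ with \texttt{edge\_cost(x, x') + heuristic(x', w, b)}.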
\begin{figure} \centering \resizebox{0.9\columnwidth}{!}{ \centering \vspace{1em} \begin{tikzpicture} \tikzstyle{featurevec}=[draw, fill=lightgray!10, text width=5cm, font=\sffamily, scale=2] \tikzstyle{caption}=[font=\sffamily, scale=2] \node[featurevec] at (0,0) (a) {Number of tweets: few \\ Age of account: medium}; \node[featurevec] at (6,-4) (b) {Number of tweets: \textit{medium} \\ Age of account: medium}; \node[featurevec] at (-6,-4) (c) {Number of tweets: few \\ Age of account: \textit{new}}; \node[caption] at (14,-2) (ellipsis1) {...}; \node[featurevec] at (-5,-8) (d) {Number of tweets: \textit{medium} \\ Age of account: \textit{new}}; \node[featurevec] at (7,-8) (e) {Number of tweets: \textit{medium} \\ Age of account: \textit{$>$ 1 year}}; \draw[->] (a)->(b) node[caption] [left=3, midway] {2}; \draw[->] (a)->(c) node[caption] [left=5, midway] {2}; \draw[->, color=black!50] (a)->(ellipsis1); \draw[->] (b)->(d) node[caption] [left=15, midway] {2}; \draw[->] (c)->(d) node[caption] [left=1, midway] {2}; \draw[->] (b)->(e) node[caption] [left=1, midway] {2}; \end{tikzpicture} } \caption{ Sketch of the transformation graph for simplified Twitter accounts. \emph{Italics} denote a feature value that differs from the initial example. The edge costs are constant and equal to the $\lp[1]$ distance between the feature vectors representing the accounts.} \label{fig:bots-graph-illustration} \end{figure} \descr{Results} We run A$^*$\xspace search on the transformation graph to find adversarial examples. For the basic attack, i.e., when the adversary wants to find an example that flips the decision regardless of the confidence, on average only 2.2 (s.d. 1.6) feature changes suffice to flip the decision of the target model. To obtain adversarial examples with high confidence ($75\%$), the adversary needs on average 3.9 (s.d. 2.7) changes. We report some examples of adversarial example accounts in \Tabref{tab:bots-examples} (\Appref{app:figures}). \begin{figure*}[t] \centering \includegraphics[width=0.36\textwidth]{images/bots__mac_distplots__band_1k__target_50__model_lr.pdf} \includegraphics[width=0.36\textwidth]{images/bots__mac_distplots__band_1k__target_75__model_lr.pdf} \caption{Pointwise adversarial robustness measured as minimal-cost adversarial examples computed using the transformation graph ${\mathcal{G}}$, and pointwise robustness in $\lp[1]$ space. Left: basic attack. Right: high-confidence attack. (Notice the different y-axes)} \label{fig:bots-guarantees} \end{figure*} Next, we compare the minimal adversarial cost of adversarial examples as a security measure to the standard notion of adversarial robustness over continuous ${\mathbb{S}} = {\mathbb{R}}^m$ (as in \Eqref{eq:robustness}). We show in \Figref{fig:bots-guarantees} the distribution of the values of both measures for the basic and high-confidence attacks. We see that the MAC values from our method are up to 26$\times$ higher than adversarial robustness over the unconstrained $\lp[1]$ space in the case of the basic attack, and up to 486$\times$ higher in the case of the high-confidence attack. This means that the continuous domain robustness measure applied to a discrete domain results in overly pessimistic adversarial cost estimates, that an adversary \emph{cannot} achieve because of inherent domain constraints. Our approach produces a more precise robustness measure, tailored to the concrete domain constraints and the adversary's capabilities. 
\subsubsection{Performance Evaluation} Here, we study the trade-off between being able to run the graph search efficiently and the optimality guarantees of the obtained MAC values. We consider the following algorithms: uniform-cost search (UCS), plain A$^*$\xspace, and $\varepsilon$-bounded relaxations of A$^*$\xspace with $\varepsilon \in \{2,3,5,10\}$ (see \Secref{sec:optim:bounded}). \begin{figure*} \centering \includegraphics[width=0.36\textwidth]{images/bots__expansions__bin_20__band_1k__target_50__model_lr.pdf}\quad \includegraphics[width=0.36\textwidth]{images/bots__runtimes__bin_20__band_1k__target_50__model_lr.pdf} \caption{Basic attack performance. Left: node expansions. Right: runtime in seconds. (y-axes are logarithmic)} \label{fig:bots-perf} \end{figure*} \descr{Runtime} \Figref{fig:bots-perf} shows the number of expansions (left), as well as the runtime (right), needed to find adversarial examples that flip the detector in the basic attack. We find that A$^*$\xspace expands significantly fewer nodes than UCS (up to 32$\times$ fewer), showing that our admissible heuristic is indeed useful for efficiently finding MAC adversarial examples in the search space. Relaxing the optimality requirement by increasing the $\varepsilon$ weight speeds up the search even more. For instance, $\varepsilon = 5$ decreases runtime by three orders of magnitude, and still guarantees that the found adversarial example costs at most five times the minimal cost. We also observe that in some cases UCS performs better than A$^*$\xspace in terms of runtime, even though A$^*$\xspace expands fewer nodes. We believe that this is an artifact of our Python implementation, which could be solved with a more efficient implementation. We observe similar results for the high-confidence attack (\Figref{fig:bots-high-perf}). High-confidence adversarial examples require more transformations; hence, the search takes up to 100$\times$ more runtime than for the basic attack. Still, A$^*$\xspace performs significantly better than UCS, expanding 2--31$\times$ fewer nodes. \begin{figure*} \centering \includegraphics[width=0.36\textwidth]{images/bots__expansions__bin_20__band_1k__target_75__model_lr.pdf}\quad \includegraphics[width=0.36\textwidth]{images/bots__runtimes__bin_20__band_1k__target_75__model_lr.pdf} \caption{High-confidence attack performance. Left: node expansions. Right: runtime in seconds. (y-axes are logarithmic)} \label{fig:bots-high-perf} \end{figure*} \begin{figure}[h] \centering \includegraphics[width=0.36\textwidth]{images/bots__path_costs__band_1k__target_75__model_lr.pdf} \caption{High-confidence attack: increase in cost over MAC of adversarial examples found using $\varepsilon$-weighted A$^*$\xspace.} \label{fig:bots-high-overhead} \end{figure} \descr{Speed vs. Optimality Trade-Off} We saw that $\varepsilon$-bounded relaxations can drastically decrease the search runtime. We empirically assess by how much this speedup hurts the optimality of the obtained adversarial examples. To evaluate this, we compute the increase in costs of adversarial examples found with $\varepsilon$-weighted A$^*$\xspace over the optimal adversarial examples found with plain A$^*$\xspace. We discover that the upper bound on sub-optimality from $\varepsilon$-weighted A$^*$\xspace is extremely pessimistic (recall that $\varepsilon$-weighting guarantees that the found adversarial examples have at most $\varepsilon$ times higher cost than the MAC).
In practice, for the tested values of $\varepsilon$, \emph{all} adversarial examples found in the basic attack incur minimal cost in our transformation graph, and only a few high-confidence adversarial examples are non-minimal, with costs at most $1.2\times$ higher than the MAC (see \Figref{fig:bots-high-overhead}). We conclude that for this setting, using $\varepsilon$-weighting can bring large performance benefits at practically no cost in optimality. \descr{Heuristics Comparison} Up to this point, we used $\lp[1]$ distance as an edge weight, and the corresponding $\lp[1]$-based heuristic in the search. Here, we investigate whether other heuristics provide better performance. To maintain the optimality of A$^*$\xspace, the edge weights have to change accordingly. Given that we consider that all transformations have the same cost, a uniform change in the weights results in an equivalent transformation graph. The difference lies in the multiplicative factor used to recover the number of required changes from the path cost. We run the basic attack using $\lp[1]$, $\lp[2]$, and $\lp[\infty]$ edge weights, i.e., 2 for $\lp[1]$, $\sqrt{2}$ for $\lp[2]$, and $1$ for $\lp[\infty]$; and the corresponding heuristics, which we denote as $\dist[1], \dist[2], \dist[\infty]$. We compare the performance of A$^*$\xspace for these edge costs against each other and against two baselines. First, UCS, which expands the same number of graph nodes for all three edge cost models. It represents the worst case in terms of performance, but outputs provably minimal-cost adversarial examples. Second, we run ``random search'' 10 times. By random search we mean A$^*$\xspace with a heuristic that outputs random numbers between 0 and 2 on the transformation graph with $\lp[1]$ edge costs. This algorithm is efficient, but does not provide any optimality guarantees. We show the result in \Figref{fig:bots-heuristic-comparison} (left). Except for adversarial examples that lie deeper in the graph, all admissible heuristics find solutions faster than the random search. We also see that the random search outputs adversarial examples that have costs up to 11 times the MAC obtained with A$^*$\xspace. Even though both $\lp[2]$ and $\lp[\infty]$ heuristics enable the algorithm to explore fewer graph nodes than UCS, in practice they take the same time (see \Figref{fig:bots-heuristic-comparison}, right). This is because each heuristic evaluation is quite costly in terms of computation time. The edge weights for $\lp[2]$ and $\lp[\infty]$ ($\sqrt{2}$ and $1$, respectively) are high compared to the values of the heuristics (on our dataset $\dist[2]$ is on average 0.52 and $\dist[\infty]$ is on average 0.059). Hence, A$^*$\xspace often needs to explore all nodes of the same cost before it can proceed to transformations that carry a higher cost, essentially degenerating into UCS. In contrast, the $\lp[1]$ heuristic performs consistently better, as $\dist[1]$ is on average 2.18, higher than the $\lp[1]$ edge cost of 2. We note that this result may not necessarily hold for other transformation graphs with different cost models. \begin{figure*}[h!] \centering \includegraphics[width=0.36\textwidth]{images/bots__heuristics_expansions__bin_20__band_1k__target_50.pdf}\quad \includegraphics[width=0.36\textwidth]{images/bots__heuristics_runtimes__bin_20__band_1k__target_50.pdf} \caption{Basic attack heuristics comparison. Left: node expansions. Right: runtime in seconds.
(y-axes are logarithmic)} \label{fig:bots-heuristic-comparison} \end{figure*}
\subsubsection{Applicability Discussion} \label{sec:bots-discussion} In the previous experiments, we assumed that the target model is linear, that the adversarial cost is proportional to the number of feature changes, and that the adversary has white-box knowledge and uses optimal algorithms. In this section, we explore other options for each of these assumptions and, in \Secref{sec:eval-wfp}, we conduct a thorough evaluation of a setting in which none of them hold.
\descr{Non-Linear Models} When the target model is linear, we can efficiently compute the exact value of the admissible heuristic from \Eqref{eq:heuristic-threshold}. Even though a linear model is a sensible choice in our setting, non-linear models often appear in other security-critical settings. Mounting an A$^*$\xspace-based attack against a non-linear target model, however, requires costly methods to compute the heuristic (see \Secref{sec:computing-heuristic}). One way to overcome this issue is to linearize the non-linear model using a first-order Taylor expansion, and then use the heuristic $\dist$ for linear models (see \Appref{sec:non-linear-heuristic-deriv} for a formal derivation): \begin{equation}\label{eq:dist-nonlinear-decision-boundary} \dist({\bm{x}}) \approx \frac{|f({\bm{x}})|}{\dualnorm{\nabla_{{\bm{x}}} f({\bm{x}})}} \end{equation} We note that, as this approximation can overestimate $\dist({\bm{x}})$, it cannot serve as an admissible heuristic without additional assumptions on $f$. We empirically evaluate this heuristic by running the attack against an SVM with the RBF kernel trained on the dataset with the discretization parameter set to 20 (88\% accuracy on the test set). Using UCS to obtain the ground-truth minimal-cost adversarial examples, we find that, even though the heuristic is approximate, all adversarial examples found with A$^*$\xspace are minimal-cost. Moreover, this heuristic allows the graph search to perform significantly fewer node expansions than UCS (\Figref{fig:bots-svmrbf}, left). On the downside, even though the algorithm expands fewer nodes, the overhead of computing the heuristic (which includes computing the forward gradient of the SVM-RBF) is high enough that there is no actual improvement in performance, unless we use $\varepsilon > 2$ weighting (\Figref{fig:bots-svmrbf}, right). More efficient implementations of the heuristic could result in better gains.
\begin{figure*} \centering \includegraphics[width=0.36\textwidth]{images/bots__expansions__bin_20__band_1k__target_50__model_svm.pdf}\quad \includegraphics[width=0.36\textwidth]{images/bots__runtimes__bin_20__band_1k__target_50__model_svm.pdf} \caption{Basic attack against a non-linear model (SVM-RBF) using an approximate heuristic. Left: node expansions. Right: runtime in seconds. (y-axes are logarithmic)} \label{fig:bots-svmrbf} \end{figure*}
\begin{table} \caption{Transferability of minimal-cost adversarial examples from logistic regression to other models. Columns: model; \emph{Accuracy}---model's accuracy on the test set; \emph{Trans. (basic)}---transferability rate to this model of adv. examples aiming to be misclassified with 50\% confidence; \emph{Trans. (high)}---same, with 75\% confidence.} \label{tab:bots-transferability} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{lrrr} \toprule \textbf{Model} & \textbf{Accuracy, \%} & \textbf{Trans. (basic), \%} & \textbf{Trans.
(high), \%} \\ \midrule LR & 88 & --- & --- \\ NN-A & 80 & 49 & 84 \\ NN-B & 83 & 38 & 95 \\ GBDT & 87 & 74 & 95 \\ SVM-RBF & 88 & 73 & 100 \\ \bottomrule \end{tabular} } \end{table} \descr{Black-Box Setting} The minimal-cost adversarial examples, and the robustness guarantees they induce, are specific to a particular target model. Do other models misclassify these examples as well? If yes, the attack would not only be effective in the white-box setting, but also in the black-box setting, where the adversary does not know the exact architecture and weights of the target model. Furthermore, there would be no need to use expensive heuristics for non-linear models. We check whether the adversarial examples found in the previous section using A$^*$\xspace against a logistic regression (LR) are misclassified by other non-linear models trained on the same dataset. We choose four ML models representative of typical architectures: two instantiations of a two-layer fully-connected ReLU neural network, one with 2000 and 500 neurons (NN-A), and one with 20 and 10 neurons in the respective layers (NN-B); gradient-boosted decision tree (GBDT); and an SVM-RBF. We do not run extensive hyperparameter search to obtain the best possible performance of these models, but we ensure that all of them have accuracy greater than $80\%$ (the random baseline is $65\%$). Table~\ref{tab:bots-transferability} shows the results of the experiment. We see that the basic minimal-cost adversarial examples (which mostly have confidence only slightly higher than 50\%) transfer to the other models in at least 38\% of the cases and in about 73\% of the cases for GBDT and SVM-RBF. In the high-confidence setting more than 83\% of the adversarial examples transfer to all models. We conjecture that when the goal is to find any adversarial examples, the minimal-cost adversarial examples often exploit a weakness found only in their target model, and, hence, rarely transfer. When the target confidence level is higher, the adversarial examples require more transformations and become similar to non-bots as seen in the training data. Hence, they are more likely to generalize. \descr{Non-Optimal Algorithms} So far we have considered that the adversary uses algorithms that provide optimality guarantees: UCS, A$^*$\xspace, $\varepsilon$-weighted A$^*$\xspace. These algorithms are often expensive. We investigate the performance of less expensive non-optimal algorithms in our setting using a hill-climbing modification of A$^*$\xspace as an example (see \Secref{sec:bfs}). Regular A$^*$\xspace needs to keep track of all previously expanded nodes at any given time. The hill-climbing variation only keeps the best-scoring node. This significantly improves memory and computation requirements, but sacrifices the ability of A$^*$\xspace to backtrack. We see in \Figref{fig:bots-hill-climbing-perf} that hill climbing performs significantly better than UCS and A$^*$\xspace. Furthermore, we find that, in the basic-attack setting, all adversarial examples found by the hill climbing incur minimal cost; and in the high-confidence setting only some are more expensive (at most $1.2\times$ higher than the minimal cost). We note that these results are on par with weighted A$^*$\xspace for $\varepsilon=10$, with the difference that hill climbing does not provide any provable guarantees. 
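A minimal Python sketch of this hill-climbing variant follows; the \texttt{expand}, \texttt{heuristic}, and \texttt{is\_adversarial} callables are illustrative assumptions, not our exact implementation:
\begin{verbatim}
def hill_climb(start, expand, heuristic, is_adversarial, max_steps=10000):
    # Keep only the best-scoring successor; no frontier, no backtracking.
    node, path_cost = start, 0.0
    for _ in range(max_steps):
        if is_adversarial(node):
            return node, path_cost
        children = list(expand(node))        # pairs (child, edge_cost)
        if not children:
            return None, float("inf")        # dead end; A* could backtrack here
        child, w = min(children,
                       key=lambda cw: path_cost + cw[1] + heuristic(cw[0]))
        node, path_cost = child, path_cost + w
    return None, float("inf")
\end{verbatim}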
\begin{figure*} \centering \includegraphics[width=0.36\textwidth]{images/bots__hill_climbing__expansions__bin_20__band_1k__target_50.pdf} \quad \includegraphics[width=0.36\textwidth]{images/bots__hill_climbing__runtimes__bin_20__band_1k__target_50.pdf} \caption{Basic attack: comparison of UCS, A$^*$\xspace, weighted A$^*$\xspace, and hill climbing. Left: node expansions. Right: runtime in seconds. (y-axes are logarithmic)} \label{fig:bots-hill-climbing-perf} \end{figure*}
\descr{Realistic Adversarial Costs} Previously, we ensured that the chosen edge weights allow the use of admissible heuristics in A$^*$\xspace, and assumed that the adversary can modify all features at the same cost. However, the general graphical framework and, more importantly, the problems it represents in practice are not limited to these transformation costs or such a powerful adversary. Here, we show how the graphical approach can accommodate a more realistic scenario. We constrain the transformation graph to modify only the features that can be changed with the help of online services. As of this writing, there exist online services that charge approximately \$2 for ghost-writing a tweet or a reply; and services that charge approximately \$0.025 for a retweet or a like of a given tweet. Hence, we constrain the adversary to modify only the \emph{number of tweets}, the \emph{number of replies}, the \emph{likes per tweet}, and the \emph{retweets per tweet} features. Moreover, we constrain the adversary to only increase the value of any transformable feature (e.g., we assume the adversary can hire someone to write more tweets, but not to delete them). We set the weights of the edges such that they correspond to the dollar costs of the atomic transformations. This cost is estimated as follows. We compute the difference between the previous value of the feature and the lowest endpoint of the bucket in which the new feature value ends up. We then multiply this difference (the number of tweets, replies, incoming likes, or incoming retweets that need to be created or added) by the respective price in the mentioned online services. As a result, path costs in such a graph are lower bounds on the dollar cost the adversary has to pay to perform a sequence of transformations. Hence, these costs can be used to make an informed risk analysis regarding the security of a model. \Tabref{tab:dollar-cost-results} shows the results of running UCS to obtain MAC values for the basic and high-confidence attacks on this transformation graph. Because of the restricted transformations, we can find adversarial examples only for 70\% and 19\% of the initial examples, for the basic and the high-confidence setting, respectively. In particular, we observe that if an initial example is classified as ``bot'' with high enough confidence (approximately 80\% for the basic attack), it is unlikely that we can find a corresponding adversarial example. Even though for simplicity we used UCS, the edge weights could be expressed as a \newterm{weighted norm}, e.g., $\norm{A({\bm{x}}- {\bm{x}}')}_1$ for some positive-definite \newterm{weight matrix} $A$ encoding the costs of the transformations. This means that it is possible to derive an admissible heuristic and employ A$^*$\xspace. We leave the derivation of such a heuristic as an open line of research.
\begin{table} \caption{Dollar cost of adversarial examples against bot detection.
Columns: \emph{Attack}---attack setting: required confidence level for adversarial examples; \emph{Exists}---proportion of initial examples for which an adversarial example exists; \emph{Minimal adversarial cost}---minimum, average, and maximum values of minimal adversarial costs for the part of the test dataset for which adversarial examples exist.} \label{tab:dollar-cost-results} \centering \resizebox{0.8\columnwidth}{!}{ \begin{tabular}{l|r|rrr} \toprule & & \multicolumn{3}{l}{\textbf{Minimal adversarial cost}} \\ \midrule \textbf{Attack} & \textbf{Exists} & min. & avg. & max. \\ \midrule Basic (50\%) & 70\% & \$0.02 & \$35.7 & \$281.6 \\ High (75\%) & 19\% & \$3.8 & \$57.6 & \$218.2 \\ \bottomrule \end{tabular} } \end{table}
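As an illustration of the cost model above, the per-edge dollar cost can be computed as in the following Python sketch. The prices are the approximate figures quoted in the text; the feature names and the bucket representation are illustrative assumptions about the dataset's discretization:
\begin{verbatim}
# Approximate prices quoted above (USD per written tweet/reply, per like/retweet).
PRICE = {"num_tweets": 2.0, "num_replies": 2.0,
         "likes_per_tweet": 0.025, "retweets_per_tweet": 0.025}

def edge_cost_dollars(feature, prev_value, new_bucket_low):
    # Lower bound on the dollar cost of one atomic transformation that moves
    # `feature` from `prev_value` into the bucket whose lowest endpoint is
    # `new_bucket_low`.  Only increases of the four transformable features
    # are allowed; every other transformation is unavailable to the adversary.
    if feature not in PRICE or new_bucket_low <= prev_value:
        return float("inf")
    return (new_bucket_low - prev_value) * PRICE[feature]
\end{verbatim}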
1,108,101,563,049
arxiv
\section{Introduction} \label{sec:introduction} In this paper, we show that the coefficients of the Taylor expansion of Selberg integrals with respect to their exponent variables are expressed as linear combinations of multiple zeta values. The first object we treat is the Selberg integral, a period integral of abelian coverings of the moduli space of $n$-points in $\bold P^1$. Let $2 \leq r \leq n$ be integers and $\alpha_{ij}$ be positive real numbers. For an element $f \in \bold C[\frac{1}{x_i -x_j}]$, the function on $D= \{ x_1 < x_r < \cdots < x_3 < x_2\}$ defined by the integral $$ \int_{D'} f \prod_{i<j}(x_j -x_i)^{\alpha_{ij}}dx_{r+1} \cdots dx_n, $$ where $D' =\{ x_1 < x_n < \cdots < x_r\}$, is called a Selberg integral. It is considered as a family of period integrals for abelian coverings of the moduli space of distinct $n$-points in $\bold C$. It is a function of $x_1, \dots , x_r$ and the exponent parameters $\alpha_{ij}$. If $r=2$, by the simple functional equation on $x_1$ and $x_2$, the Selberg integral is determined by its restriction to $x_1 =0$ and $x_2=1$. This restricted function of the exponent parameters $\alpha_{ij}$ is called the $(n-2)$-dimensional Selberg integral of $0$-variables. It is natural to think that this period integral is equipped with an arithmetic nature. The second object in our paper is the multiple zeta value, introduced by Euler. Let $\bold k =(k_1, \dots ,k_m)$ be a sequence of integers such that $k_i \geq 1$ $(i=1, \dots, m-1)$ and $k_m \geq 2$. The multiple zeta value for the index $\bold k$ is defined by $$ \zeta (\bold k) = \sum_{n_1 < \cdots < n_m}\frac{1}{n_1^{k_1}\cdots n_m^{k_m}}. $$ The natural number $\mid \bold k \mid = \sum_{p=1}^m k_p$ is called the weight of the index $\bold k$. The integer $\mid \bold k \mid$ is also called the weight of the multiple zeta value $\zeta (\bold k)$. By using the iterated integral expression, multiple zeta values are regarded as period integrals for the fundamental group $\pi_1(\bold P^1 - \{ 0, 1, \infty \})$ of $\bold P^1 - \{ 0, 1, \infty \}$. (See \S \ref{sec:Drinfeld associator} for the iterated integral expression of multiple zeta values.) Notice that the motivic weight of $\zeta (\bold k)$ is equal to $-2 \mid \bold k \mid$. The main theorem of this paper is the following. \begin{theorem} For a suitable choice of $f$, the degree $w$ coefficient of the Taylor expansion in $\alpha_{ij}$ of the Selberg integral of $0$-variables is a linear combination of weight $w$ multiple zeta values. \end{theorem} Let us illustrate a primitive example of this statement. By the well known equality $$ \log \Gamma (1 + x) = -\gamma x + \sum_{n \geq 2}\frac{(-1)^n\zeta (n) x^n}{n}, $$ we have $$ \frac{\Gamma (1+\alpha ) \Gamma (1+\beta )}{\Gamma (1+\alpha + \beta)} = \exp (\sum_{n \geq 2}\frac{(-1)^n\zeta (n) (\alpha^n+\beta^n - (\alpha+\beta)^n)}{n}). $$ In this example, we choose $r=2, n=3$ and $f= \alpha /(1-x)$. (See \cite{A2} \cite{Z} for another expression of this quantity.) We can find the prototype of this theorem in \cite{A2}. The method of the choice of $f$ leads us to an interesting combinatorial problem. In this paper, we answer this problem. This choice of $f$ happens to coincide with the $\beta$-nbc basis of Falk-Terao \cite{FT}. Let us summarize the method of the proof of the main theorem. Let $\bold C \langle\langle X, Y \rangle\rangle$ be the formal non-commutative free algebra generated by $X$ and $Y$. Following Drinfeld, the associator $\Phi (X,Y)$ is defined as an element of $\bold C \langle\langle X, Y \rangle\rangle$.
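As a quick numerical sanity check of the $\Gamma$-function example above (a sketch assuming the \texttt{mpmath} Python library; it is not part of the argument):
\begin{verbatim}
# Check Gamma(1+a)Gamma(1+b)/Gamma(1+a+b) against the zeta-value expansion.
from mpmath import mp, gamma, zeta, exp, mpf

mp.dps = 30
a, b = mpf("0.10"), mpf("0.15")
lhs = gamma(1 + a) * gamma(1 + b) / gamma(1 + a + b)
rhs = exp(sum((-1)**n * zeta(n) * (a**n + b**n - (a + b)**n) / n
              for n in range(2, 60)))
print(lhs - rhs)   # agrees to roughly the working precision
\end{verbatim}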
It is known that the coefficients of $\Phi (X, Y)$ are expressed as $\bold Q$-linear combinations of multiple zeta values by Le-Murakami \cite{LM}. Let $\bold Q [[\alpha_i]]$ and $\bold C [[\alpha_i]]$ be formal power series rings with variables $\alpha_i$ ($i \in I$) over $\bold Q$ and $\bold C$ respectively. Let $\rho : \bold C\langle\langle X, Y \rangle\rangle \to M(r, \bold C [[\alpha_i]])$ be a continuous homomorphism, where all the matrix elements of $\rho (X)$ and $\rho (Y)$ are degree 1 homogeneous elements in $\bold Q [[\alpha_i]]$. By the result of Le-Murakami, the coefficients of the Taylor expansion of any matrix element of $\rho (\Phi (X,Y))$ are expressed by multiple zeta values. For any solution $s$ of the differential equation $$ ds = (\frac{\rho (X)}{x}+\frac{\rho (Y)}{x-1})s dx, $$ we have $$ \lim_{x \to 1}((1-x)^{-\rho (Y)}s(x)) = \rho (\Phi (X,Y)) \lim_{x \to 0}(x^{-\rho (X)}s(x)). $$ In this paper, we construct a representation $\rho$ and a horizontal section $s$ with the following properties. \begin{enumerate} \item All the elements of $\lim_{x \to 0}(x^{-\rho (X)}s(x))$ are expressed as $(n-3)$-dimensional $0$-variable Selberg integrals by taking a limit $\alpha_i \to 0$ for some of the $\alpha_i$. \item All the elements of $\lim_{x \to 1}((1-x)^{-\rho (Y)}s(x))$ are expressed as $(n-2)$-dimensional $0$-variable Selberg integrals by taking the same limit $\alpha_i \to 0$. \end{enumerate} For the construction of a representation with these properties, we make a combinatorial preparation in Section \ref{sec:combinatorial preliminaries}. The construction of $\rho$ depends on the computation of the higher direct image of the local system on the moduli space $X_n$ of distinct $n$-points in $\bold C$ for the projection $X_n \to X_{n-1}$. This computation is executed in Section \ref{sec:selberg integral}. The limits for $x \to 0$, $x\to 1$ and $\alpha_i \to 0$ are given in Section \ref{sec:proof of the main theorem}. The author would like to thank T.Kohno for references and discussions. He also would like to thank M.Kaneko and H.Terao for discussions. \section{Preliminary} \label{sec:preliminary} \subsection{Drinfeld Associator} \label{sec:Drinfeld associator} In this section, we recall known facts about the Drinfeld associator. Let $R = \bold C \langle\langle X, Y \rangle\rangle$ be the completion of the non-commutative polynomial ring in the symbols $X, Y$ with respect to the total degree in $X$ and $Y$. Let $V= R$. Then $X$ and $Y$ act on $V$ by left multiplication, and under this action $X$ and $Y$ are regarded as elements in $End_{\bold C}(V)$. Now we consider a differential form $\omega$ on $\bold C-\{ 0,1\}$ with coefficients in $End_{\bold C}(V)$ defined as $$ \omega = \frac{X}{x}dx + \frac{Y}{x-1}dx, $$ where $x$ is the coordinate of $\bold C$. Let $ E(x) = \exp (\int _{x_0}^x \omega) $ be the solution of the differential equation for the $End_{\bold C}(V)$-valued function $E(x)$ $$ dE(x) = \omega E(x) $$ with the initial condition $E(x_0) = 1$. Then by the standard argument for iterated integrals, $\exp (\int _{x_0}^x \omega)$ is expressed as \begin{align} \label{eq:exponential} \exp (\int _{x_0}^x \omega) = 1 + \int _{x_0}^x \omega + \int _{x_0}^x \omega\omega + \cdots . \end{align} Here we use the convention for iterated integrals defined by the inductive relation $$ \int_p^q \omega_1 \cdots \omega_n = \int_p^q (\omega_1(q_1)\int_p^{q_1}\omega_2 \cdots \omega_n).
$$ The expression ($\ref{eq:exponential}$) implies that $\exp (\int _{x_0}^x \omega) \in \bold C \langle\langle X, Y \rangle\rangle ^{\times}$, and the shuffle relation for iterated integrals implies that $E = \exp (\int _{x_0}^x \omega)$ is a group like element, i.e. $\Delta (E) = E \otimes E $ in $\bold C\langle\langle X, Y\rangle\rangle \hat\otimes \bold C \langle\langle X, Y\rangle\rangle $, where the comultiplication $\Delta :\bold C\langle\langle X, Y\rangle\rangle \to \bold C\langle\langle X, Y\rangle\rangle \hat\otimes \bold C \langle\langle X, Y\rangle\rangle $ is given by $\Delta (X) = X\otimes 1 + 1 \otimes X$ and $\Delta (Y) =Y \otimes 1 + 1 \otimes Y$. The set $\hat G = \{ g \in \bold C\langle\langle X, Y\rangle\rangle ^{\times} \mid \Delta (g) = g \otimes g \}$ is called the set of group like elements and is closed under the multiplication. By the theory of differential equations with only regular singularities, the limit $ \lim_{x\to 1} \exp (\int_x^0 \frac{Y}{x-1}dx) \exp (\int _{x_0}^x \omega) $ exists. In the same way, the limit $$ \Phi (X, Y) = \lim_{x\to 1, y\to 0} \exp (\int_x^0 \frac{Y}{x-1}dx) \exp (\int _{y}^x \omega) \exp (\int_1^y \frac{X}{x}dx) $$ exists and is contained in $\bold C\langle\langle X, Y\rangle\rangle ^{\times}$. $\Phi (X, Y)$ is called the Drinfeld associator. Since $\exp (\int_x^0 \frac{Y}{x-1}dx)$ and $\exp (\int_1^y \frac{X}{x}dx)$ are elements in $\hat G$ and $\hat G$ is a closed subset of $\bold C\langle\langle X, Y\rangle\rangle ^{\times}$, the limit $\Phi (X,Y)$ is an element in $\hat G$. We recall the relation between multiple zeta values and the coefficients of the Drinfeld associator. Firstly, we recall the definition of multiple zeta values. Let $k_1, \dots , k_n$ be integers such that $k_i \geq 1$ for $i = 1, \dots ,n$ and $k_n \geq 2$. Set $\bold k = (k_1, \dots , k_n)$. The following series $$ \zeta (\bold k) = \zeta (k_1, \dots, k_n) =\sum_{m_1 < m_2 < \cdots < m_n}\frac{1}{m_1^{k_1}m_2^{k_2}\cdots m_n^{k_n}} $$ is called the multiple zeta value for the index $\bold k = (k_1, \dots ,k_n)$. The number $\mid \bold k \mid = \sum_{i=1}^n k_i$ is called the weight of the index $\bold k$. Let $L_w$ be the finite-dimensional $\bold Q$-vector subspace of $\bold C$ generated by the $\zeta (\bold k)$ with weight $\mid \bold k \mid = w$. The following iterated integral expression of multiple zeta values is fundamental. \begin{align*} \zeta (k_1, \dots , k_n) = & \int_0^1 \underbrace{\frac{dx}{x}\cdots \frac{dx}{x}}_{k_n -1}\frac{dx}{1-x} \underbrace{\frac{dx}{x}\cdots \frac{dx}{x}}_{k_{n-1} -1}\frac{dx}{1-x} \\ & \cdots \underbrace{\frac{dx}{x}\cdots \frac{dx}{x}}_{k_1 -1}\frac{dx}{1-x}. \end{align*} By using this expression and the shuffle relation, for elements $a$ and $b$ in $L_{w_1}$ and $L_{w_2}$, we can show that $ab$ is an element in $L_{w_1 + w_2}$. Using this fact, we define the homogeneous multiple zeta value ring (homogeneous MZV ring for short) $H$ in $\bold C\langle\langle X, Y\rangle\rangle $ by $$ H= \oplus_{w \geq 0}\oplus_{W:\text{word of length $w$ on $X,Y$}} L_w\cdot W. $$ The following proposition is due to Le-Murakami \cite{LM}. \begin{proposition} \label{prop:L-M} $\Phi (X, Y) \in H$. \end{proposition} It is very useful to specialize this universal result to a special class of representations of $\bold C \langle\langle X, Y \rangle\rangle$. Let $R$ be a homogeneous complete ring generated by degree 1 elements over $\bold Q$, i.e.
$R$ is generated topologically by degree 1 homogeneous elements $\alpha_1, \dots ,\alpha_m$ with homogeneous relations and is complete under the topology defined by its degree. The decomposition of $R$ with respect to its degree is denoted by $R = \hat\oplus_{d \geq 0} R_d$. Let $R_{\bold C}$ be the completion of $R\otimes \bold C$ with respect to the topology defined by its degree. A ring homomorphism $\rho :\bold C \langle\langle X, Y\rangle\rangle \to M(r, R_{\bold C})$ is called a homogeneous rational representation of degree 1 if and only if all the matrix elements of $\rho (X)$ and $\rho (Y)$ are degree 1 homogeneous elements in $R$. The homogeneous MZV ring $H_R$ for $R$ is defined by $H_R = \hat\oplus_{d \geq 0}(R_d \otimes L_d)$. The following corollary is a direct consequence of Proposition \ref{prop:L-M}. \begin{corollary} \label{corollary:homogeneous rational representation} Let $\rho : \bold C \langle\langle X, Y\rangle\rangle \to M(r, R_{\bold C})$ be a homogeneous rational representation of degree 1. Then all the matrix elements of $\rho (\Phi (X,Y))$ are elements in $H_R$. \end{corollary} \section{Selberg integral} \label{sec:selberg integral} \subsection{Combinatorial aspects} \label{subsec:combinatorial aspects} Let $[n] =\{ 1,\dots ,n\}$. A graph $\Gamma$ consists of a set of vertices $V_{\Gamma}$ and a set of edges $E_{\Gamma}$. We assume that every edge has two distinct terminals. Moreover, we assume that for any two vertices $p$ and $q$, there exists at most one edge whose terminals are $p$ and $q$. An edge is written as $(p,q)$, where $p$ and $q$ are its terminals. To a graph $\Gamma$, we can associate a 1-dimensional simplicial complex in the usual manner, and we use the standard terminology: connected component, tree, and so on. Moreover, if the order of $E_{\Gamma}$ is specified, it is called an ordered graph. A specified point in each connected component is called the root of that connected component, and the set of roots is denoted by $R= R_{\Gamma}$. For two sets $V$, $R$ such that $R \subset V$, we define $\Omega^i (V\operatorname{mod} R)$ by $\wedge^i(\Omega_{X_V}^1/p^*\Omega_{X_R}^1)$, where $X_V = \{ (x_i)_{i \in V} \mid x_i \neq x_j \text{ for } i \neq j\}$, $X_R = \{ (x_i)_{i \in R} \mid x_i \neq x_j \text{ for } i \neq j\}$, and $p$ is the natural projection $X_V \to X_R$. Then it is easy to see that $\Omega^{\# V -\# R} (V\operatorname{mod} R)$ is a rank 1 $\mathcal O_{X_n}$ module generated by $\wedge_{i \in V - R} dx_i$. For an edge $e = (p,q)$, $p,q \in V_{\Gamma}$, we define $\omega_e = d\log (x_p - x_q) \in \Omega^1(V \operatorname{mod} R)$. For an ordered graph $\Gamma$, we define $\omega_{\Gamma}$ as $\omega_{\Gamma} = \omega_{e_r}\wedge \cdots \wedge \omega_{e_1}$ in $\Omega(V \operatorname{mod} R)$, where $E_{\Gamma}=\{ e_1, \dots ,e_r \}$ and $e_1 < \cdots < e_r$. It is easy to see the following lemma. \begin{lemma} Assume $\# E = \# V -\#R$. Then $\Gamma$ is a tree if and only if $\omega_{\Gamma} \neq 0$. \end{lemma} Let $R$ be a subset of $[n]$ such that $\{ 1, 2\}\subset R$. We define an ordering of $[n]$ by $1 \ll n \ll \cdots \ll 3 \ll 2$. We define $D(R)$ by $\{ (x_i)_{i \in R}\mid x_i <x_j \text{ for } i\ll j \}$. For two subsets $V$ and $R$ of $[n]$ such that $R \subset V$, the fiber of the map $D(V) \to D(R)$ at $(x_i)_{i \in R} \in D(R)$ is denoted by $D(V/R,x_i)_{i \in R}$. Let $\alpha_{i,j}$ $(i,j \in V)$ be positive real numbers.
We choose a branch of $\Phi (V) = \prod_{i \ll j}(x_j - x_i)^{\alpha_{i,j}}$ on $D(V)$ with $\Phi \in \bold R_+$. For an ordered rooted graph $\Gamma$ whose root set is $R$, we define a function $S_{\Gamma }=S_{\Gamma}(V/R, x_i)_{i \in R}$ on $D(R)$ as $$ S_{\Gamma }(V/R, x_i)_{i \in R} = \int_{D(V/R,x_i)_{i \in R}} \Phi (V)\prod_{(i,j) \in E_{\Gamma}} \alpha_{i,j}\omega_{\Gamma}. $$ If $R$ is fixed, it is simply denoted by $S_{\Gamma}$. Then $S_{\Gamma}$ is a function of $(x_i)_{i \in R}$ and the $\alpha_{i,j}$. The free abelian group generated by ordered rooted graphs whose root set and vertex set are $R$ and $V$ is denoted by $\Gamma (V,R)$. For an element $\gamma = \sum a_{\Gamma} \Gamma$ in $\Gamma (V,R)$, we define $S_{\gamma}$ by $S_{\gamma}= \sum a_{\Gamma}S_{\Gamma}$. The function $S_{\gamma}$ is called the Selberg integral for $\gamma$. Before the presentation of the main theorem, we introduce several combinatorial notions. For two natural numbers $n,r$ such that $2 \leq r \leq n$, we set $R= [r]$ and $V = [n]$. For an ordered rooted graph $\Gamma$, whose vertex set and root set are $[n]$ and $[r]$, we define an element $\Gamma \wedge (n+1, i)$ in $\Gamma ([n+1],[r])$ for $i \in [n]$ according to the following recipe. \begin{enumerate} \item Choose a subset $A$ of the edges connecting to $i$. ($A$ may be the empty set.) \item Replace the number $i$ by $n+1$ for all the edges contained in $A$ chosen in 1. \item Make a graph $\Gamma_A$ by adding the edge $(n+1, i)$ to the graph $\Gamma$ and extend the original ordering to the edge set of $\Gamma_A$ such that $(n+1,i)$ is the biggest edge. \item Consider the sum $\sum_A \Gamma_A$ of the $\Gamma_A$, where $A$ runs through all the subsets of edges connecting to $i$. This summation is denoted by $\Gamma \wedge(n+1, i)$. \end{enumerate} We extend the operation $\wedge (n+1, i_{n+1}) : \Gamma ([n], [r]) \to \Gamma ([n+1], [r])$ by linearity. For an element $\gamma \in \Gamma ([l], [r])$ and $(l+1, i_{l+1}),\dots ,(n,i_n)$, where $i_{l+1} \in [l], \dots ,i_n \in [n-1]$, we define $\gamma\wedge (l+1, i_{l+1})\wedge\cdots \wedge (n, i_n)$ inductively by $$ \gamma\wedge (l+1, i_{l+1})\wedge \cdots \wedge (n, i_n) = (\gamma\wedge (l+1, i_{l+1})\wedge \cdots \wedge (n-1, i_{n-1}))\wedge (n, i_{n}). $$ The graph $\Gamma$ with $V_{\Gamma} = R$ and $E_{\Gamma} = \emptyset$ is denoted by $\emptyset (R)$. A graph is denoted by $e_1 e_2 \cdots e_b$, where the set of edges is $\{e_1 < e_2 < \cdots < e_b \}$. \begin{example} If $R =\{1,2\}, i_3 = 2, i_4=2$, then $$ \emptyset (R)\wedge (3,2)\wedge (4,2) = (2,3)\wedge (4,2) = (2,3)(4,2) + (4,3)(4,2). $$ \end{example} We state the main theorem. Let $H_{\alpha}$ be the homogeneous MZV ring for $\bold Q \langle\langle \alpha_{i,j},\alpha_{1,k}, \alpha_{2,k}\rangle\rangle _{3 \leq i,j,k \leq n, i\neq j}$. \begin{theorem} \label{theorem:the main theorem} Let $R=\{1,2\}$. For any $i_3 \in [2], \dots , i_n \in [n-1]$, put $\gamma = \emptyset (R)\wedge (3, i_3) \wedge \cdots \wedge (n, i_n)$. Then $S_{\gamma}([n]/[2],0, 1)$ is a holomorphic function of the $\alpha_{i,j}$ and an element of $H_\alpha$. \end{theorem} \subsection{Differential equation satisfied by Selberg integral} \label{subsec:differential equation} First we compute the higher direct image of a connection on the configuration space $X_n =\{(x_1, \dots ,x_n) \mid x_i \neq x_j \text{ for $i \neq j$ } \}$ of distinct $n$-points in $\bold C$ under the morphism $\pi: X_n \to X_{n-1}$ defined by $(x_1, \dots , x_n) \mapsto (x_1, \dots ,x_{n-1})$.
Let $A_{i,j} \in M(d,\bold C)$ be matrices for $1 \leq i \neq j \leq n$ satisfying the following relations. These relations are called the pure braid relations. \begin{enumerate} \item $A_{i,j} = A_{j,i}$. \item $[A_{i,j}, A_{k,l}]=0$ for all distinct $i, j, k, l$. \item $[A_{i,j} + A_{j,k}, A_{i,k}] = 0$ for all distinct $i, j, k$. \end{enumerate} Then the matrix-valued 1-form $$ \omega = \sum_{1 \leq i<j \leq n} A_{i,j}d \log (x_i - x_j) $$ defines an integrable connection $\nabla$ on $\mathcal O_{X_n}^d = \{ v= ^t(v_1, \dots ,v_d)\}$ by $$ \nabla v = dv - \omega v. $$ Let $v$ be a horizontal section of the connection $\nabla$ on $D([n])$, i.e. $dv = \omega v$. For $i \in [n-1]$ and $(x_1, \dots ,x_{n-1})\in D([n-1])$, we define $w_i$ as $$ w_i = \int_{D([n]/[n-1],x_1, \dots ,x_{n-1})}\frac{A_{n,i}}{x_n - x_{i}}v dx_n. $$ Then $w_i$ is a function of $(x_1, \dots , x_{n-1})\in D([n-1])$. We have the following proposition. \begin{proposition} \label{proposition:higher direct image} \begin{enumerate} \item $w_1 + \cdots + w_{n-1} = 0$. \item Let $W = ^t(w_1, \dots , w_{n-1})$. Then $W$ satisfies the differential equation $$ dW = \sum_{1\leq i<j \leq n-1}\frac{A'_{i,j}(dx_i - dx_j)}{x_i - x_j}W, $$ where \begin{align} \label{eq:big matrix} A'_{i,j} = \left(\begin{matrix} A_{ij} & \overset{i}{\cdots} & \overset{j}{\cdots} & 0 \\ \vdots & A_{ij}+A_{nj} & -A_{ni} & \vdots \\ \vdots & -A_{nj} & A_{ij}+A_{ni} & \vdots \\ 0 & {\cdots} & {\cdots} &A_{ij} \\ \end{matrix}\right) \begin{matrix} \\ i \\ j \\ \\ \end{matrix} \end{align} \end{enumerate} \end{proposition} \begin{proof} 1. By the equality $$ \frac{\partial v}{\partial x_n} = \sum_{j=1}^{n-1}\frac{A_{nj}}{x_n-x_j} v, $$ and Stokes' theorem, the equality follows. 2. By using the differential equation for $v$, we have \begin{align*} & \frac{\partial}{\partial x_i}(\frac{A_{nj}}{x_n- x_j}v) \\ = & \frac{A_{nj}}{x_n- x_j}(\sum_{k \neq i,j,n}\frac{A_{ik}v}{x_i- x_k} +\frac{A_{ij}v}{x_i- x_j}+\frac{A_{ni}v}{x_i- x_n}) \\ = & \sum_{k \neq i,j,n}\frac{A_{ik}A_{nj}v}{(x_i- x_k)(x_n- x_j)}+ \frac{A_{nj}v}{x_i- x_j}\{ -\frac{A_{ni}v}{x_n- x_i}+\frac{(A_{ij}+A_{ni})v}{x_n- x_j} \} . \end{align*} By the commutativity condition of $A_{ij}$, we have $$ \frac{\partial}{\partial x_i} w_j = \sum_{k \neq i,j, 1 \leq k \leq n-1} \frac{A_{ik}}{x_i - x_k}w_k + \frac{1}{x_i- x_j} \{ (A_{ij}+ A_{ni}) w_j - A_{nj} w_i\} $$ for $i \neq j$. Using the relation in 1., $\frac{\partial}{\partial x_i}w_i = -\sum_{j \neq i, 1 \leq j \leq n-1}\frac{\partial}{\partial x_i}w_j$, and we obtain the statement of 2. \end{proof} \begin{remark} \label{remark:homomorphism} If the $A_{ij}$ satisfy the infinitesimal pure braid relations, then the matrices $A_{ij}'$ defined by (\ref{eq:big matrix}) also satisfy the infinitesimal pure braid relations for $n-1$. Therefore the connection $\nabla '$ given by $$ \nabla ' W = dW -\sum_{1 \leq i <j \leq n-1}\frac{(dx_i -dx_j)A_{ij}'}{x_i -x_j} W $$ on $(\mathcal O^{\oplus d})^{\oplus (n-1)}$ is integrable. \end{remark} Let $V_n$ be the local system of horizontal sections of the connection $\nabla$ and $V_n\mid_{ \pi^{-1}(x_1^0, \dots , x_{n-1}^0)}$ its restriction to the fiber $\pi^{-1}(x_1^0, \dots , x_{n-1}^0)$ of $\pi :X_n \to X_{n-1}$. Then the Euler-Poincar\'e characteristic of $V_n\mid_{\pi^{-1}(x_1^0, \dots , x_{n-1}^0)}$ is $-(\operatorname{rank} V)\cdot (n-2)$. Therefore, under a certain non-resonance condition, $\dim H^1(\pi^{-1}(x_1^0, \dots ,x_{n-1}^0), V_n)$ is equal to $\operatorname{rank} V\cdot (n-2)$.
By a direct computation, the submodule $(\mathcal M )^{red} = \{ W = ^t(w_1, \dots ,w_{n-1}) \mid w_i \in \mathcal O^{\oplus d}, \sum_{i=1}^{n-1} w_i = 0\}$ is a subconnection of $\mathcal M =( (\mathcal O^{\oplus d})^{\oplus (n-1)}, \nabla ')$. As a consequence, the horizontal sections of $(\mathcal M )^{red}$ are equal to the higher direct image of $V$ under the projection $\pi$. This construction is compatible with the sub local systems in $V$. We apply this inductive formula to compute the differential equation satisfied by the Selberg integral. Note that a similar computation is executed in \cite{A1} with a different basis of de Rham cohomology. The Selberg integral is holomorphic with respect to $\alpha_{ij}$ for our basis. This basis is nothing but the $\beta$-nbc basis introduced by Falk-Terao \cite{FT}. Let $R$ be a ring. For a set of elements $\bold a =\{ a_{pq}\}_{1 \leq p< q \leq k}$ satisfying the infinitesimal pure braid relations, we define a set of elements $\operatorname{Ind}( \bold a) = \{ \operatorname{Ind} (\bold a)_{ij} \}_{1 \leq i < j \leq k-1}$ in $M(k-1,R)$ by $$ \operatorname{Ind}(\bold a)_{ij}= \left( \begin{matrix} a_{ij} & \overset{i}{\cdots} & \overset{j}{\cdots} & 0 \\ \vdots & a_{ij}+a_{kj} & -a_{ki} & \vdots \\ \vdots & -a_{kj} & a_{ij}+a_{ki} & \vdots \\ 0 & {\cdots} & {\cdots} &a_{ij} \\ \end{matrix}\right) \begin{matrix} \\ i \\ j \\ \\ \end{matrix}. $$ Let $2 \leq r \leq n$ be integers and $V_{r,n}$ be a $\bold C$-vector space of dimension $r(r+1)\cdots (n-1)$ whose coordinates are given by $v_{i_{r+1}, \dots ,i_n}$ for $1 \leq i_{r+1} \leq r, \dots , 1\leq i_n \leq n-1$. We define ${\bold A}^{(p)} =\{ A^{(p)}_{ij}\}_{1\leq i < j \leq p} $ for $p= r, \dots , n-1$ by $$ {\bold A}^{(p)}=\operatorname{Ind} ({\bold A}^{(p+1)}) $$ and $A^{(n)}_{ij} = a_{ij}$. We define the $V_{k,n}$-valued function $S^{(k)}(x_1, \dots , x_k)$ on $D([k])$ inductively by $$ S^{(k)}= \left(\begin{matrix} \int_{D([k+1]/[k],x_i)_{i \in [k]}}\frac{A^{(k+1)}_{k+1,1}}{x_{k+1}- x_1} S^{(k+1)}(x_1, \dots ,x_{k+1})dx_{k+1} \\ \vdots \\ \int_{D([k+1]/[k],x_i)_{i \in [k]}}\frac{A^{(k+1)}_{k+1,k}}{x_{k+1}- x_k} S^{(k+1)}(x_1, \dots ,x_{k+1})dx_{k+1} \\ \end{matrix}\right) $$ for $k = r,\dots ,n-1$ and $$ S^{(n)} = \prod_{1\leq i \ll j \leq n}(x_j - x_i)^{\alpha_{ij}}. $$ We have the following corollary of Proposition \ref{proposition:higher direct image}. \begin{corollary} The $V_{k,n}$-valued function $S^{(k)}$ satisfies the following differential equation: $$ dS^{(k)} = \Omega_k S^{(k)}, $$ where $\Omega_k = \sum_{1 \leq i < j \leq k}\frac{A^{(k)}_{ij}d(x_i -x_j)} {x_i - x_j}$. \end{corollary} The next proposition is used in the proof of the main theorem (Theorem \ref{theorem:the main theorem}). \begin{proposition} \label{proposition:sum relation} Let $S^{(r)}_{i_{r+1}, \dots ,i_n}$ be the $(i_{r+1}, \dots ,i_n)$-component of $S^{(r)}$. Then we have \begin{align} \label{eq:fundamental relation} \sum_{i_p' =1}^{p-1} S^{(r)}_{i_{r+1}, \dots ,i_{p-1}, i_p', i_{p+1}, \dots , i_n} = 0. \end{align} \end{proposition} \begin{proof} If $p= r+1$, then it is nothing but the first statement of Proposition \ref{proposition:higher direct image}. Suppose $p > r+1$.
Then the $(i_{r+1}, \dots ,i_{p-1})$-part of $S^{(r)}$ is a linear combination of \begin{align} \label{eq:sum relation} \int \prod_{i=r+1}^{p-1}A_{p_iq_i}^{(p)} \prod_{j=r+1}^{p-1}\frac{1}{x_j- x_{i_j}} S^{(p)}dx_{r+1}\cdots dx_{p-1}. \end{align} Since the set $\{(a_{i_p, \dots ,i_n}) \mid \sum_{i_p'=1}^{p-1} a_{i_p', \dots ,i_n} = 0 \}$ is stable under the action of $A_{ab}^{(p)}$, (\ref{eq:sum relation}) satisfies the relation $\sum_{i_p'=1}^{p-1} a_{i_p', \dots ,i_n} = 0$. \end{proof} \section{Combinatorial Preliminaries} \label{sec:combinatorial preliminaries} \subsection{Statement of the main theorem} \label{subsec:statement of the main theorem} In this section, we present combinatorial facts which are used in the computation of Selberg integrals. Let $P_n$ be the non-commutative ring $\bold C [a_{ij}]$ with the generators $a_{ij}$ $(1 \leq i,j \leq n)$ and the infinitesimal pure braid relations. We define a set of matrices $\bold A^{(k)} = \{ A_{ij}^{(k)}\}_{1 \leq i , j \leq k}$ in $M(k(k+1) \cdots (n-1), P_n)$ inductively by the relations: $$ {\bold A}^{(k)} = \operatorname{Ind} (\bold A^{(k+1)}) $$ for $k = r, \dots , n-1$ and $A_{ij}^{(n)} = a_{ij}$. We introduce the degree of $P_n$ by $\deg a_{ij} = 1$. Then the matrix elements of $A_{ij}^{(k)}$ are of degree $1$ for $k = r, \dots ,n$, and the $A_{ij}^{(k)}$ satisfy the pure braid relations. In other words, a ring homomorphism $P_r \to M(r(r+1)\cdots (n-1), P_n)$ is defined by sending $a_{ij}\in P_r$ to $A_{ij}^{(r)}$. We define a vector $w_k \in P_n^{k(k+1)\cdots (n-1)}\otimes \bold C[\frac{1}{x_i -x_j}]$ inductively by the relation \begin{align} \label{eq:tensor vector} w_k= \left(\begin{matrix} \frac{A^{(k+1)}_{k+1,1}}{x_{k+1}- x_1} w_{k+1} \\ \vdots \\ \frac{A^{(k+1)}_{k+1,k}}{x_{k+1}- x_k} w_{k+1} \\ \end{matrix}\right) \end{align} for $k = r, \dots, n-2$ and $$ w_{n-1}= \left(\begin{matrix} \frac{A^{(n)}_{n,1}}{x_{n}- x_1} \\ \vdots \\ \frac{A^{(n)}_{n,n-1}}{x_{n}- x_{n-1}} \\ \end{matrix}\right). $$ In this section, we express each coordinate of $w_r$ in terms of the combinatorics introduced in \S \ref{subsec:combinatorial aspects}. For an ordered tree $\Gamma$ with the vertex set $[k]$ and root set $R = [r]$, we define $A_{\Gamma}^{(k)}$ by $$ A_{\Gamma}^{(k)} = \prod_{i=l}^1 A_{p_i,q_i}^{(k)} \in M(k(k+1)\cdots (n-1), P_n), $$ where $E_{\Gamma} =\{ e_1 < \cdots < e_l\}$ and $e_i = (p_i, q_i)$. Here we use the notation $\prod_{i=l}^1 a_i = a_l a_{l-1}\cdots a_1$ in a non-commutative ring. We define a matrix-valued differential form $\eta_{\Gamma} \in \Omega ([k]\operatorname{mod} [r]) \otimes M(k(k+1) \cdots (n-1), P_n)$ by $\eta_{\Gamma} = A_{\Gamma}^{(k)}\omega_{\Gamma}$, where $\omega_{\Gamma}$ is defined in \S \ref{subsec:combinatorial aspects}. For an element $\gamma \in \Gamma ([k], [r])$, we define $\eta_{\gamma}$ by $\eta_{\gamma} = \sum_{\Gamma} a_{\Gamma} \eta_{\Gamma}$, where $\gamma = \sum_{\Gamma} a_{\Gamma}\Gamma$. \begin{theorem} \label{theorem:combinatorics} Let us denote the $(i_{r+1}, \dots ,i_n)$-th coordinate of $w_r$ by \linebreak $w_r(i_{r+1}, \dots , i_n)$. Then $$ w_r(i_{r+1}, \dots , i_n)dx_n\wedge \cdots \wedge dx_{r+1} = \eta_{\gamma}, $$ where $\gamma = \emptyset ([r]) \wedge (r+1, i_{r+1}) \wedge \cdots \wedge (n, i_n)$. \end{theorem} The rest of this section is devoted to the proof of Theorem \ref{theorem:combinatorics}. \subsection{Several lemmata} \label{several lemmata} Let $\Gamma$ be an ordered graph with the root set $[r]$ and the vertex set $[n-1]$.
The edge set is denoted by $E=\{ e_1 < \cdots < e_l\}$ and $e_i = (p_i, q_i)$. Suppose that $p$ and $q$ are contained in the same connected component. Then there exists unique path $P$ connecting $p$ and $q$ in $\Gamma$. We write $P = \{ e_{t_1}, \dots , e_{t_m}\}$. The subgraph $P$ looks like figure 1. \begin{figure} \vskip 10pt \centerline{\epsfbox{fig1.eps}} \vskip 5pt \caption[]{ } \end{figure} \begin{lemma} \label{lemma:product} Let $A_{\Gamma}^{(n-1)} \in M(n-1, P_n)$ be defined as in \S \ref{subsec:statement of the main theorem} \begin{enumerate} \item If $q$-th component of \begin{align} A_{\Gamma}^{(n-1)}\left( \begin{matrix} 0 \\ \vdots \\ a_{np} \\ \vdots \\ 0 \end{matrix} \right) \in P_n^{(n-1)} \end{align} is not zero, then $p$ and $q$ are contained in the same connected component and $t_1 < t_2 < \cdots < t_m$. \item Suppose that $t_1 < t_2 < \cdots < t_m$. We write vertices of the path $P$ as $p= k_0, k_1, \dots , k_m = q$, (see figure 1) and define $B_i$ $(i=1, \dots ,l)$ by $$ B_i = \begin{cases} -a_{k_j,n} & (\text{ if } i = t_j) \\ a_{p_iq_i} + a_{n q_i} & (\text{ if } t_j < i < t_{j+1} \text{ and $e_i$ ajacents to $k_j$ and put $p_i=k_j$}) \\ a_{p_iq_i} & (\text{ if } t_j < i < t_{j+1} \text{ and $e_i$ does not ajacent to $k_j$}) \\ \end{cases} $$ (For the second case see figure 2.) \begin{figure} \vskip 10pt \centerline{\epsfbox{fig2.eps}} \vskip 5pt \caption[]{ } \end{figure} Then $q$-th component of (1) is equal to $\prod_{i=l}^1 B_i$. \end{enumerate} \end{lemma} \begin{proof} For a vector $v = ^t(v_1, \dots ,v_{n-1}) \in P^{n-1}_n$, we set $\operatorname{Supp} (v) = \{ i \mid v_i \neq 0\}$. \begin{enumerate} \item If $\{ i,j \} \cap \operatorname{Supp} (v) = \emptyset$, then $\operatorname{Supp} (A_{ij}^{(n-1)}v) = \operatorname{Supp} (v)$ and $k$-th component of $A_{ij}^{(n-1)}v$ is equal to $a_{ij}v_k$ for $k \in \operatorname{Supp} (v)$. \item If $\{ i,j\} \cap \operatorname{Supp} (v) =\{ i \}$, then $\operatorname{Supp} (A_{ij}^{(n-1)}v) \subset \operatorname{Supp} (v) \cup \{ j \}$ and $j$-th component and $i$-th component of $A_{ij}^{(n-1)}v$ is equal to $-a_{nj}v_i$ and $(a_{ij} + a_{nj})v_i$ respectively. \end{enumerate} Therefore if we define $S_i$ inductively by $$ S_{i+1} = \begin{cases} S_i & (\text{ if $e_{i+1} \cap S_i = \emptyset$}) \\ S_i \cup \{ j\} & (\text{ if $e_{i+1} \cap S_i = \{ k\}$ and $e_{i+1} = \{ j,k \}$}), \\ \end{cases} $$ and $S_0 = \{ p\}$. Then $\operatorname{Supp} (\prod_{i= l}^1A_{p_i, q_i}^{(n-1)})v \subset S_{l}$. If $q \in S_{l}$, then $t_1 < \cdots < t_m$. This proves (1). For (2), we can prove $$ [(\prod_{i=s}^1A_{p_i, q_i}^{(n-1)}\left(\begin{matrix} 0 \\ \vdots \\ a_{nk} \\ \vdots \\ 0 \end{matrix}\right) )]_{k_j} = a_{n,k_j}\cdot \prod_{j=s}^1 B_j \cdot $$ if $t_j \leq s < t_{j+1}$ by induction on $s$ using the infinitesimal pure braind relation (1) and (2). (In case $s \geq t_m$ and $s < t_1$, $a_{n,k_j} =a_{n,k_m}$ and $a_{n,k_j} = a_{n,k_0}$ respectively.) This complete 2. \end{proof} Next we introduce an expression of $\emptyset ([r]) \wedge (r+1, i_{r+1}) \wedge \cdots \wedge (n, i_n)$ by using the notion of principal graph. \begin{definition} For an index set $I = (i_{k+1}, \dots ,i_n)$, $(1 \leq i_p \leq p-1)$, we define the ordered rooted graph $P_I$ as follows. \begin{enumerate} \item The set of vertices is $\{1, \dots ,n\}$, \item the set of roots is $\{ 1, \dots , k\}$, and \item the set of ordered edges is $\{(k+1, i_{k+1})< \dots < (n, i_n)\}$. 
\end{enumerate} The graph $P_I$ is called the principal graph of $I$. \end{definition} Let $p,q$ be two vertices contained in the same connected component of $P_I$. The unique shortest path connecting $p, q$ in $P_I$ is denoted by $\gamma (p,q)$ and the minimal edge of $\gamma (p,q)$ is denoted by $\operatorname{min} (p,q)$. Then by the construction of the principal graph, we have the following lemma. \begin{lemma} Let us write a path $\gamma (p,q)$ connecting $p,q$ in $P_I$ as in figure 1: Suppose that $e_{t_s}$ is the minimal edge of $\gamma (p,q)$. Then $t_1 > \cdots > t_s < \cdots < t_m$ \end{lemma} A graph $\Gamma$ is called a support of $\gamma = \sum_{\Gamma}a_{\Gamma}\Gamma$, if $a_{\Gamma} \neq 0$. The set of supports of $\gamma$ is denoted by $\operatorname{Supp} (\gamma )$. Let $p,q \in [n]$ be vertices contained in the same connected component in $P_I$. We set $\gamma =\emptyset (\{1, \dots ,k\})\wedge (k+1,i_{k+1}) \wedge \cdots \wedge (n, i_n)$. By the construction of $\gamma$, if $\Gamma \in \operatorname{Supp} (\gamma )$ and $(p,q)$ appeares in $\Gamma$, then $(p,q)$ is the $m$-th edge of $\Gamma$, where $e_m = \operatorname{min} (p,q)$. Conversely, for any pairs $(p_{k+1},q_{k+1}), \dots , (p_n, q_n)$ such that \begin{enumerate} \item $p_i$ and $q_i$ are contained in the same connected component of $P_I$, and \item $\operatorname{min} (p_j, q_j)$ is the $j$-th edge $(j,i_j)$ of $P_I$, \end{enumerate} $a_{\Gamma} = 1$ for $\Gamma = (p_{k+1},q_{k+1}) \cdots (p_n, q_n)$. We use distributive notation as \begin{align*} & \{(p_{k+1},q_{k+1})+(p'_{k+1},q'_{k+1})\} (p_{k+2},q_{k+2}) \cdots (p_n, q_n) \\ = & (p_{k+1},q_{k+1}) (p_{k+2},q_{k+2}) \cdots (p_n, q_n) + (p'_{k+1},q'_{k+1}) (p_{k+2},q_{k+2}) \cdots (p_n, q_n). \\ \end{align*} Here the right hand side has a meaning as a formanl linear combination of ordered graphs. The following proposition is nothing but the restatement of the definition of $\wedge$. \begin{proposition} Let $S_i =\sum_{ 1 \leq p < q \leq n, \operatorname{min} (p,q) = e_i \text{ in } P_I} (p,q)$. Then $$ \emptyset (\{1, \dots ,r\})\wedge (r+1, i_{r+1}) \wedge \cdots \wedge (n,i_n) = S_{r+1}\cdot S_{r+2} \cdots S_{n} $$ \end{proposition} We finish this subsection by computing $\operatorname{Res}_{x_n \to x_k}(\omega_{\Gamma})$ for an ordered rooted graph $\Gamma = \{e_{r+1}, \dots ,e_{n}\} \in \operatorname{Supp} (\gamma )$. Until the end of this subsection we assume $\Gamma \in \operatorname{Supp} (\gamma )$ and $(n,k)$ is an edge of $\Gamma$. Put $R_- = \{ k' \mid (n,k') \in \Gamma , \operatorname{min} (n,k') < \operatorname{min} (n,k)\}$ and $R_+ = \{ k' \mid (n,k') \in \Gamma , \operatorname{min} (n,k') \geq \operatorname{min} (n,k)\}$. We meke a numbering of $R_+ =\{k_1 = i_n, k_2, \dots ,k_s = k\}$ such that $\operatorname{min}(n,k_1) > \cdots > \operatorname{min} (n,k_s)$. Set $e_{t_i}= \operatorname{min} (n,k_i)$. For the figure of principal graph see figure 3. If $s \geq 2$, we put $P = P(\Gamma , k)$ the power set of $R_+ -\{ k_1, k_s\}$. For an element $p \in P$, we define a graph $\Gamma (p) \in \Gamma ([n-1],[r])$ as follows. For $i = 2, \dots , s$, put $m(p,i) = \operatorname{min} \{j \mid k_j \in p\cup \{ k_1\}, j<i\}$. The $t_i$-th edge of $\Gamma (p)$ is equal to $(k_i, k_m)$, where $m= m(p,i)$. The $j$-th edge is the same as $\Gamma$ if $j \neq t_i, n$ $(i= 2, \dots , s)$. A Set of ordered graph $\{ \Gamma (p) \mid p \in P(\Gamma ,k)\}$ is denoted by $R(\Gamma , k)$ and called the residue graph of $\Gamma$ with respect to $k$. 
\begin{figure} \vskip 10pt \centerline{\epsfbox{fig3.eps}} \vskip 5pt \caption[]{ } \end{figure} \begin{proposition} \label{proposition:residue} If $\# \mid R_+ \mid \geq 2$, then $$ \operatorname{Res}_{x_n \to x_k}(\omega_{\Gamma}) = \sum_{p \in P(\Gamma ,k)}(-1)^{\# p +1}\omega_{\Gamma (p)}. $$ Here the residue $\operatorname{Res}_{x_n \to x_k}\omega = \eta \mid_{x_n = x_k}$, where $\omega = d \log (x_n - x_k) \wedge \eta$. \end{proposition} \begin{proof} We write $d \log (x_p - x_q) = \langle p,q \rangle$ for short. First we prove that \begin{align*} & \<< k, k_1 \>> \cdots \<< k, k_{s-1} \>> \\ =&\sum_{p \subset \{ k_2, \dots , k_{s-1}\}} (-1)^{s+\# p} \<< m(p,2),k_2 \>> \cdots \<< m(p,s-1),k_{s-1}\>> \<< m(p,s), k_s\>> \\ \end{align*} by induction on the cardinarity of $\{k_1, \dots ,k_{s-1}\}$. By the assumption of induction for $\{k_1, \dots k_{s-2}\}$, we have \begin{align*} &\<< k, k_1 \>> \cdots \<< k, k_{s-2} \>> \<<k, k_{s-1}\>> \\ = & \sum_{q \subset \{ k_2, \dots , k_{s-1}\}} (-1)^{s+\# q} \<< m(q,2),k_2 \>> \cdots \<< m(q,s-1),k \>> \<< k,k_{s-1}\>> \\ = & \sum_{q \subset \{ k_2, \dots , k_{s-1}\}} (-1)^{s+\# q} \<< m(q,2),k_2 \>> \cdots \<< m(q,s-2),k_{s-2} \>> \\ & \qquad (\<< m(q,s-1),k_{s-1} \>> \<< k,k_{s-1}\>> -\<< k_{s-1},m(q,s-1)\>> \<< k, m(q,s-1) \>> \\ \end{align*} and the last expression gives the expression of $\{k_1, \dots ,k_{s-1}\}$. Therefore we have \begin{align*} & \operatorname{Res}_{x_n \to x_k} \<< n, k_1 \>> \cdots \<< n, k_s \>> \\ = &(-1)^{s-1}\<< n, k_1 \>> \cdots \<< n, k_{s-1} \>> \\ = & \sum_{p \subset \{ k_2, \dots , k_{s-1}\}} (-1)^{\# p+1} \<< m(p,2),k_2 \>> \cdots \<< m(p,s-1),k_{s-1}\>> \<< m(p,s), k_s\>>. \\ \end{align*} This implies the proposition. \end{proof} \subsection{Proof of Theorem \ref{theorem:combinatorics}} \label{subsec:proof of theorem} We prove Theorem \ref{theorem:combinatorics} by induction. By Remark \ref{remark:homomorphism}, the morphism $$ \rho :A_{ij}^{(n-1)} \mapsto \operatorname{Ind}( \bold a )_{ij} $$ defines a ring homomorphism from $P_{n-1}$ to $M(n-1, P_n)$. We assume Theorem \ref{theorem:combinatorics} for $n-1$. We define $W_{k} \in P_{n-1}^{r(r+1)\cdots (n-1)}$ for $k= 1, \dots ,n-3$ inductively by the relation similar to (\ref{eq:tensor vector}) and $$ W_{n-2} = \left(\begin{matrix} \frac{A^{(n-1)}_{n-1,1}}{x_{n-1}- x_1} \\ \vdots \\ \frac{A^{(n-1)}_{n-1,n-2}}{x_{n-1}- x_{n-2}} \\ \end{matrix}\right). $$ Then by the assumption of induction, $$ \eta_{\gamma} = W_{r}(i_{r+1}, \dots ,i_{n-1}) dx_{n-1}\wedge \cdots \wedge dx_{r+1} $$ for $\gamma = \emptyset (\{1, \dots ,r\}) \wedge (r+1, i_{r+1}) \wedge \cdots \wedge (n-1, i_{n-1})$ in $P_{n-1}\otimes \Omega_R$. Here $W_r(i_{r+1}, \dots i_{n-1})$ is the $(i_{r+1}, \dots ,i_n)$-th component of $W_r$. By applying the above ring homomorphism $\rho$, we have $$ \rho(\eta_{\gamma}) =\rho ( W_{r}(i_{r+1}, \dots ,i_{n-1})) dx_{n-1}\wedge \cdots \wedge dx_{r+1} $$ in $M(n-1, P_{n-1}) \otimes \Omega_R$. By the definition of $w_r$, $w_r (i_{r+1}, \dots , i_n)$ is equal to the $i_n$-th component of the vector $$ \rho ( W_{r}(i_{r+1}, \dots ,i_{n-1}))\left(\begin{matrix} \frac{a_{n1}}{x_n -x_1} \\ \vdots \\ \frac{a_{nn-1}}{x_n- x_{n-1}} \\ \end{matrix} \right). 
$$ Therefore by taking the residue, $\operatorname{Res}_{x_n \to x_k}$, it is enough to prove that $$ \rho ( W_{r}(i_{r+1}, \dots ,i_{n-1}))\left(\begin{matrix} 0\\ \vdots \\ a_{nk} \\ \vdots \\ 0 \\ \end{matrix} \right)_{i_n} = \operatorname{Res}_{x_n \to x_k} (\eta_{\bar\gamma}) $$ for all $k = 1, \dots , n-1$, where $\bar\gamma = \emptyset(\{1, \dots , k\}) \wedge (k+1, i_{k+1}) \wedge \cdots \wedge (n, i_n)$. We compute the left hand side and right hand side by using Lemma \ref{lemma:product} and Proposition \ref{proposition:residue}. Left hand side of (1) is expressed as a linear combination of $\eta_{\Gamma}$, where $\Gamma$ is a support of $\gamma $. On the other hand, the expression given in Proposition \ref{proposition:residue} gives an expression of the right hand side by a linear combination of $\eta_{\Gamma}$, where $\Gamma$ is a support of $\gamma $. By comparing the coefficient of $\omega_{\Gamma}$ it is enough to prove the following proposition. \begin{proposition} Let $\Gamma \subset \operatorname{Supp} (\gamma)$ and $k, i_n$ be contained in the same connected component of $\Gamma$. \begin{enumerate} \item $\operatorname{length} (k, i_n) = \# p +1$ if $\bar \Gamma (p) = \Gamma$. Here $\operatorname{length} (k,i_n)$ is the length of the path connecting $k$ and $i_n$ in $\Gamma$. \item $$ (-1)^{\operatorname{length} (k, i_n)} \prod_{i= r+1}^n B_i = \sum_{\{\bar\Gamma \in \operatorname{Supp} (\bar\gamma ) \mid \Gamma \in R(\bar\Gamma, k) \}} A_{\bar\Gamma}^{(n)}, $$ were $B_i$ is defined in Lemma \ref{lemma:product} and $R(\bar\Gamma , k)$ is defined in Proposition \ref{proposition:residue}. \end{enumerate} \end{proposition} \begin{proof} Let $\Gamma \in \operatorname{Supp} \gamma$ and suppose $R(\bar \Gamma,k) \ni \Gamma$. As in Lemma \ref{lemma:product}, we make the numbering of the path in $\Gamma$ from $k$ to $i_n$ as $e_{t_1} = (n,k_1), e_{t_2} = (k_1, k_2), \dots, e_{t_m} = (k_{s-1}, k)$ and $k_s = k, k_1 = i_n$. First we claim the set $L= L(\bar\Gamma)= \{ l \mid (n,l) \in \bar\Gamma \}$ contains $k_1, \dots ,k_s$. Since $\operatorname{Res}_{x_n \to x_k }(\bar\Gamma )$ is not zero, $\bar\Gamma$ contains $(n,k)$, i.e. $k= k_s \in L$. If the path connecting $k$ and $i_n$ in the corresponding graph $\bar\Gamma (p)$ is $k_1, \dots , k_s$, then $p = \{ k_2, \dots ,k_{s-1}\}$. Therefore $L \supset \{k_1, \dots ,k_s\}$. If $l \in L -\{k_1, \dots , k_s\}$ and $q$ is a minimal element satisfying $\operatorname{min} (i_n,l) < \operatorname{min} (i_n, k_q)$, then $\bar\Gamma (p)$ contins an edge $(l, k_q)$ by the definition of $\bar\Gamma (p)$, i.e. $(l, k_q) \in G(\Gamma ,k,i_n)$, where \begin{align*} G(\Gamma ,k,i_n) = \{e :\text{ edge } \mid & \text{ There exists $i$ such that } e \ni k_i, \\ & \operatorname{min} (k_{i-1},i_n ) \leq e < \operatorname{min} (k_i, i_n)\}\}. \end{align*} Therefore $L-\{k_1, \dots ,k_s\} \subset G(\Gamma , k , i_n)$. Conversely, for any subset $L$ of $\{ 1, \dots ,n\}$ satisfying \begin{enumerate} \item $L$ is contained in the same connected component of $i_n$, \item $L \supset \{k_1 ,\dots ,k_s\}$, and \item $L - \{k_1, \dots ,k_s\} \subset G(\Gamma,k,i_n)$, \end{enumerate} there exists unique $\bar\Gamma (L)$ satisfying \begin{enumerate} \item $L (\bar \Gamma(L)) = L$, \item $\bar\Gamma \in \operatorname{Supp} (\bar \gamma )$, and \item $\operatorname{Supp} (\operatorname{Res}_{x_n \to x_k}(\bar \Gamma)) \ni \Gamma$. 
\end{enumerate} Therefore \begin{align*} \sum_{\{\bar\Gamma \mid \Gamma \in R(\bar\Gamma, k), \bar\Gamma \in \operatorname{Supp} (\bar\gamma )\}} A_{\bar\Gamma}^{(n)} = & \sum_{\substack{ L \supset \{k_1, \dots ,k_s\}, L-\{k_1, \dots, k_s\} \subset G(\Gamma,k,i_n), \\ L \text{ is contained in the same} \\ \text{ connected component of }i_n}} A_{\bar\Gamma (L)}^{(n)} \\ =(-1)^{\operatorname{length} (k, i_n)} & \prod_{i=k+1}^n B_i \end{align*} \end{proof} \section{Proof of the Main Theorem} \label{sec:proof of the main theorem} \subsection{Some lemmata for the asymptotic behaviors} \label{subsec:some lemmata} In this subsection, we investigate the asymptotic behavior of solutions of a linear differential equation with a regular singularity. Let $A \in \frac{1}{x}M(r, \mathcal O_x)$, where $\mathcal O_x$ is the ring of germs of holomorphic functions at $x=0$. We are interested in the differential equation for an $r\times r$-matrix valued function $V$: $$ \frac{dV}{dx} = A V. $$ We write $A = Rx^{-1} +\sum_{i=0}^{\infty}A_ix^i$, where $R, A_i \in M(r,\bold C)$. If all the eigenvalues of $R$ are small enough, then the solution $V$ can be written as $V = F x^R C_0$, where $F$ is an $r \times r$-matrix valued holomorphic function in $I +x M(r,\mathcal O_x)$, and $C_0 \in GL(r, \bold C)$. In the rest of this section, we assume that all the eigenvalues of $R$ are sufficiently small positive real numbers and $R$ is semi-simple. The eigenvalues of $R$ are denoted by $0 < \lambda_1 <\cdots < \lambda_s$. \begin{lemma} \label{lemma:limit} Let $\bold C^r = \oplus_{i=1}^s W_i$ be the eigenspace decomposition of $\bold C^r$ with respect to $R$. \begin{enumerate} \item If $w_i \in W_i$, then all the elements $a_k$ of the vector $Fx^Rw_i$ satisfy the estimate $\mid a_k \mid \leq \mid x \mid^{\lambda_i}c$ with some constant $c$, for $k=1, \dots ,r$. Moreover we have $\lim_{x\to 0}(x^{-\lambda_i}Fx^Rw_i) = w_i$. \item Let $\lambda > \lambda_i$. If $w_i \in W_i$ and all the elements $a_k$ of $Fx^Rw_i$ satisfy $\mid a_k \mid \leq \mid x \mid^\lambda c$ with some constant $c$, then $w_i = 0$. \item Let $\lambda_i$ be an eigenvalue of $R$. Let $p : W_i \to \bold C^l$ be a linear map and let the composite $\bold C^r \to W_i \to \bold C^l$ be denoted by $\tilde p$. Then we have $$ \tilde p(\lim_{x \to 0}x^{-\lambda_i}F x^R w) = \tilde p(\lim_{x \to 0}x^{-R} F x^{R}w) $$ for any $w \in \bold C^r$. \end{enumerate} \end{lemma} \begin{proof} Since $F = I + xm, m \in M(r, \mathcal O_x)$, using the identity $\lim_{x \to 0} x^{-\lambda}xm x^R = \lim_{x \to 0} x^{-R}xm x^{R} =0$, we get the statements. \end{proof} Let $n,k$ be integers such that $2 \leq k \leq n$ and define the reduced part $V^{red} = V^{red}_k$ as in \S \ref{subsec:differential equation}. The restriction of $A_{ij}^{(k)}$ to $V^{red}$ is denoted by $A^{(k)}_{ij,red}$. For a subset $S$ of $[1,k]$, we define $A_S^{(k)}$ and $A_{S, red}^{(k)}$ by $$ A_S^{(k)} = \sum_{i<j, i,j \in S}A_{ij}^{(k)}, \quad A_{S,red}^{(k)} = \sum_{i<j, i,j \in S}A_{ij,red}^{(k)}. $$ From now on, the $a_{ij}$ are sufficiently generic small positive real numbers. For a semisimple matrix $A$, the formal sum of eigenvalues of $A$ counting their multiplicities is denoted by $\sigma (A)$: $\sigma (A) = \sum (\text{eigenvalues of $A$})$. In this situation, the set of eigenvalues is denoted by $\operatorname{Supp} (\sigma (A))$.
\begin{proposition} \label{proposition:eigenvalue} Under the notations and assumptions as above, $A_S^{(k)}$ and $A_{S,red}^{(k)}$ are semi-simple and \begin{align*} \sigma (A_S^{(k)}) = & \sum_{T \subset [k+1,n]} (k-l;\mid T^c \mid)(l ; \mid T \mid) a_{S\cup T}, \\ \sigma (A_{S,red}^{(k)}) = & \sum_{T \subset [k+1,n]} (k-l-1;\mid T^c \mid)(l ; \mid T \mid) a_{S\cup T}, \\ \end{align*} where $a_U = \sum_{i< j, i,j \in U} a_{ij}$ for a subset $U \subset [1,n]$. For a subset $T \subset [k+1,n]$, $T^c = [k+1, n] -T$ and $l = \#\mid S \mid -1$ and $(a;b) = a(a+1) \cdots (a+b-1)$. \end{proposition} To prove the above propostion, we use the following two elementary lemmata. \begin{lemma} Let $X$ be $kN \times kN$-matrix. We assume that there exist semi-simple matrices $B$ and $D$ and matrices $C_1, \dots ,C_k$ such that \begin{align*} A\left(\begin{matrix} 0 \\ \vdots \\ 1 \\ -1 \\ \vdots \\ 0 \\ \end{matrix} \right) \begin{matrix} \\ \\ i \\ i+1 \\ \\ \\ \end{matrix} & = \left(\begin{matrix} 0 \\ \vdots \\ B \\ -B \\ \vdots \\ 0 \\ \end{matrix} \right) \\ A\left(\begin{matrix} C_1 \\ \vdots \\ C_k \\ \end{matrix} \right) & = \left(\begin{matrix} C_1D \\ \vdots \\ C_kD \\ \end{matrix} \right), \end{align*} \\ with $\operatorname{Supp} (\sigma (B)) \cap \operatorname{Supp} (\sigma (D)) = \emptyset$. Then \begin{enumerate} \item $\sigma (A) = (k-1)\sigma (B) + \sigma (D)$. \item $(k-1)N$-dimensional subvector space $V^{red} = \{ (v_1, \dots ,v_k) \mid v_i \in \bold C^N, \sum v_i = 0\}$ is stalbe under the action of $A$. Let $A^{red}$ be the restriction of $A$ to $V^{red}$. Then $\sigma (A^{red}) = (k-1)\sigma (B)$. \end{enumerate} \end{lemma} \begin{lemma} Let $a_{ij} \in P_k$ and set $A_{ij}= \operatorname{Ind} (\bold a )_{ij}$ for $1 \leq i < j \leq k-1$, $A_{[1,k-1]} = \sum_{1 \leq i < j \leq k-1}A_{ij}$, $a_{[1,k-1]} = \sum_{1 \leq i < j \leq k-1}a_{ij}$ and $a_{[1,k]} = \sum_{1 \leq i < j \leq k}a_{ij}$. Then we have \begin{align*} A_{[1, k-1]}\left(\begin{matrix} 0 \\ \vdots \\ 1 \\ -1 \\ \vdots \\ 0 \\ \end{matrix} \right) \begin{matrix} \\ \\ i \\ i+1 \\ \\ \\ \end{matrix} & = \left(\begin{matrix} 0 \\ \vdots \\ a_{[1,k]} \\ -a_{[1,k]} \\ \vdots \\ 0 \\ \end{matrix} \right) \\ A_{[1,k-1]}\left(\begin{matrix} a_{k1} \\ \vdots \\ a_{k-1} \\ \end{matrix} \right) & = \left(\begin{matrix} a_{k1}a_{[1,k-1]} \\ \vdots \\ a_{kk-1}a_{[1,k-1]} \\ \end{matrix} \right). \end{align*} \\ \end{lemma} \begin{proof} The first equality follows from the expression $$ A_{[1,k-1]}= \left(\begin{matrix} a_{[1,k-1]}+\sum_{j \neq 1} a_{k,j} & -a_{k1} & \cdots \\ -a_{k2} & a_{[1,k-1]} + \sum_{j \neq 2}a_{kj} & \cdots \\ -a_{k3} & -a_{k3} & \cdots \\ \vdots & & & \vdots \\ \end{matrix}\right). $$ The second equality is obtined directly by the equality $$ A_{ij}\left(\begin{matrix} a_{k1} \\ \vdots \\ a_{kk-1} \\ \end{matrix}\right) = \left(\begin{matrix} a_{k1}a_{ij} \\ \vdots \\ a_{kk-1}a_{ij} \end{matrix}\right). $$ \end{proof} \begin{proof}(of Proposition \ref{proposition:eigenvalue}) We prove the proposition by induction. By two lemmata, we have \begin{align*} \sigma (A_S^{(k)}) & = (k-l) \sigma (A_S^{(k+1)}) + l \sigma (A_{S\cup\{k+1\}}^{(k+1)}), \\ \sigma (A_{S,red}^{(k)}) & = (k-l-1) \sigma (A_{S,red}^{(k+1)}) + l \sigma (A_{S\cup\{k+1\},red}^{(k+1)}), \\ \end{align*} using homomorphism $P^{(k+1)} \to M((k+1)(k+2) \cdots (n-1), \bold C)$ and assumption of independence of $a_{ij}$. 
\end{proof} \subsection{Relation between Selberg integral and Drinfeld associator} \label{subsec:relation between selberg integral} In this section, we will compare vectors whose elements are given by Selberg integrals with the Drinfeld associator. Let $n \geq 3$ be an integer and define $A_{ij}^{(k)}$ as in \S \ref{subsec:differential equation}. We set $V = V_{3,n} = \bold C^{3\cdot 4 \cdots (n-1)}$. Let $S = S([n]/[3],x_1, x_2, x_3, \alpha_{ij})$ be a $V$-valued function of $x_1, x_2, x_3$ whose $(i_4, \dots , i_n)$-component is given by $S_{\emptyset(\{1,2,3\})\wedge (4,i_4) \cdots (n,i_n)} ([n]/[3],x_1, x_2, x_3, \alpha_{ij})$. Then $S$ satisfies the differential equation $$ dS = (A_{13}^{(3)}d \log (x_1 - x_3) + A_{23}^{(3)}d \log (x_2 - x_3) ) S. $$ We set $\bar S(x_3) = S([n]/[3],0,1,x_3)$. Then $\bar S$ satisfies the equation $$ \frac{d\bar S}{dx_3} = \left(\frac{A_{13}^{(3)}}{x_3} + \frac{A_{23}^{(3)}}{x_3-1} \right)\bar S. $$ Therefore, by considering the rational representation $\rho$ of degree 1, $\rho : \bold C \langle\langle X, Y \rangle\rangle \to M(3\cdot 4 \cdots (n-1), \bold Q[[\alpha_{ij}]])$, we have $$ \lim_{x_3 \to 1}(1-x_3)^{-A_{23}^{(3)}} \bar S(x_3) = \rho (\Phi (X,Y)) \lim_{x_3 \to 0} x_3^{-A_{13}^{(3)}}\bar S(x_3). $$ We have the following lemma. \begin{lemma} \label{lemma:limit to 0} \begin{enumerate} \item For any $i_4, \dots ,i_n$, we put $\gamma = \emptyset(\{1,2,3\})\wedge (4, i_4)\wedge \cdots \wedge(n,i_n)$. Then for sufficiently small $x_3$, we have the estimate \begin{align} \label{eq:estimate at 0} \mid S_{\gamma}([n]/[3],0,1,x_3)\mid < c x_3^{\alpha_{max}} \end{align} for some constant $c$. Here $\alpha_{max}$ is the maximal eigenvalue $\sum_{1 \leq i < j \leq n,\, i,j \neq 2}\alpha_{ij}$ of $A_{13}^{(3)}$. \item For $\Gamma \in \Gamma ([n],[3])$, \begin{align*} & \lim_{x_3 \to 0}x_3^{-\alpha_{max}} S_{\Gamma}([n]/[3], 0,1,x_3) \\ = & \begin{cases} S_{\Gamma '}([n]-\{ 2\}/\{ 1, 3\}, 0,1) & (\text{if there are no edges containing 2}) \\ 0 & (\text{otherwise}) \\ \end{cases} \\ \end{align*} Here $\Gamma '\in \Gamma ([n]-\{ 2 \},\{1, 3\})$ is the ordered graph obtained by deleting $2$ from the graph $\Gamma$. \end{enumerate} \end{lemma} \begin{proof} By Proposition \ref{proposition:eigenvalue}, we have $\alpha_{max} = \sum_{1 \leq i < j \leq n, i,j \neq 2} \alpha_{ij}$. To prove the statement, it is enough to prove that $$ \int_D \prod_{1 \leq i < j \leq n}(x_i -x_j)^{\alpha_{ij}} \omega_{\Gamma} \mid_{x_1 = 0, x_2=1} $$ satisfies the estimate (\ref{eq:estimate at 0}) for an ordered rooted tree $\Gamma$ with the root set $[3]$. We change variables by $x_p = \xi_p x_3$ for $p = 4, \dots ,n$. Then \begin{align} \label{eq:coordinate change} \omega_{\Gamma} = \pm \prod_{(p_i, q_i) \in E_{\Gamma},\text{ not adjacent to 2}}\frac{d\xi_{p_i}-d\xi_{q_i}}{\xi_{p_i}-\xi_{q_i}} \prod_{(p_i, 2) \in E_{\Gamma}}\frac{x_3 d\xi_{p_i}}{-1} \cdot (1+o(1)) \end{align} and $$ \prod_{1 \leq i < j \leq n}(x_i - x_j)^{\alpha_{ij}} = \prod_{1 \leq i < j \leq n, i,j \neq 2}(\xi_i -\xi_j)^{\alpha_{ij}} \cdot x_3^{\alpha_{max}} \cdot (1+ o(1)). $$ Here we put $\xi_3 =1, \xi_1 =0$. In particular, $\lim_{x_3 \to 0}x_3^{-\alpha_{max}}\int_D \Phi \omega_{\Gamma} = 0$ if $\Gamma$ contains an edge adjacent to 2. The sign in (\ref{eq:coordinate change}) arises from the substitution, which separates the edges of $\Gamma$ adjacent to 2 from those not adjacent to 2. If $\Gamma$ contains no edges adjacent to 2, we get the second statement.
\end{proof} From Lemma \ref{lemma:limit}, we have the following corollary. \begin{corollary} The $(i_4, \dots ,i_n)$-th component of $\lim_{x_3 \to 0}x_3^{-A_{13}^{(3)}}\bar S(x_3)$ is equal to $S_{\gamma}([n]-\{ 2\}/\{1, 3\},0,1)$ if $i_p \neq 2$ for $p= 4, \dots ,n$, where $\gamma = \emptyset(\{1,3\})\wedge (4, i_4) \wedge \cdots \wedge (n, i_n)$, and $0$ otherwise. \end{corollary} \begin{proof} By the definition of $\gamma = \emptyset ([3])\wedge (4, i_4) \wedge \cdots \wedge (n, i_n)$, if $i_p=2$ for some $p$, then any $\Gamma \in \operatorname{Supp} (\gamma )$ has an edge adjacent to 2. If $i_p \neq 2$ for all $p$, then any $\Gamma \in \operatorname{Supp} (\gamma )$ contains no edges adjacent to 2. Therefore the statement follows from Lemma \ref{lemma:limit to 0}. \end{proof} Next we consider the asymptotic behavior for $x_3 \to 1$. Let $I$ be the set $\{ (i_4, \dots ,i_n) \mid i_p \neq 2,3\}$. By the definition of $A_{23}^{(3)}$, the projection $p : V \to \bold C^I$ onto the $I$-th coordinates factors through the $\alpha_{23}$ eigenprojection. Therefore we have $$ p( \lim_{x_3 \to 1}(1-x_3)^{-A_{23}^{(3)}}\bar S(x_3)) = p(\lim_{x_3 \to 1}(1-x_3)^{-\alpha_{23}}\bar S (x_3)) $$ by Lemma \ref{lemma:limit}. On the other hand, it is easy to see the following lemma. \begin{lemma} \label{lemma:limit to 1} If $\Gamma$ contains no edges containing 2 and 3, then $$ \lim_{x_3 \to 1}(1-x_3)^{-\alpha_{23}} S_{\Gamma}([n]/[3], 0,1,x_3) = S_{\Gamma '}([n]-\{ 3\}/[2], 0,1,\alpha_{ij}') $$ where $\Gamma '$ is the ordered graph obtained by deleting 3 from the graph $\Gamma$, and $\alpha_{ij}' = \alpha_{ij}$ if $i,j \neq 2$ and $\alpha_{2j}' = \alpha_{2j}+\alpha_{3j}$. \end{lemma} \begin{definition} The vectors $\lim_{x_3 \to 0} x_3^{-A_{13}^{(3)}}\bar S(x_3)$ and $\lim_{x_3 \to 1} (1-x_3)^{-A_{23}^{(3)}}\bar S(x_3)$ are denoted by $V^{(1)}$ and $V^{(2)}$, respectively. Then we have \begin{align} \label{eq:projection} p(V^{(2)}) = p(\rho (\Phi (X,Y))V^{(1)}) \end{align} \end{definition} By Lemma \ref{lemma:limit to 1}, the $(i_4, \dots ,i_n)$-th component of $V^{(2)}$ with $i_p \neq 2,3$ is equal to $S_{\gamma}(0,1, \alpha'_{ij})$, where $\gamma = \emptyset (\{1,2\}) \wedge (4,i_4) \wedge \cdots \wedge (n, i_n)$, $\alpha'_{ij} = \alpha_{ij}$ if $i,j \neq 2$, and $\alpha '_{2,j} = \alpha_{2j} +\alpha_{3j}$. We compute the limit of all the components of $V^{(1)}$ as $\alpha_{3i} \to 0$. For this purpose, we compute $\lim_{\alpha_{3i}\to 0}S_{\gamma}(\alpha_{ij})$ for $\gamma = \emptyset (\{1,3\})\wedge (4, i_4) \wedge \cdots \wedge (n,i_n)$ with $i_p \neq 2$, in the next subsection. \subsection{Limit for $\alpha_{3i} \to 0$} \label{sebsec: limit for a} In this subsection, we change the numbering from that of the last subsection. Let $\Gamma$ be an ordered graph with the root set $[2]$ and vertex set $[n]$. We set $\Phi = \prod_{i< j}(x_i - x_j)^{\alpha_{ij}}$, and $$ S(\alpha_{ij})= \int_{D([n]/[2],0,1)}\eta_{\Gamma}\Phi . $$ Before proving Proposition \ref{proposition:alpha limit}, we note the following lemma. \begin{lemma} Let $F(x)$ be a continuous function defined on $(p,1]$. Suppose that $F(x)$ is integrable on $(p, p+\epsilon]$. Then we have $$ \lim_{\alpha \to 0}\int_p^1 \alpha (1-x)^{\alpha -1}F(x) dx = F(1). $$ \end{lemma} \begin{proof} This is the fundamental property of the $\delta$-function family $\alpha (1-x)^{\alpha-1}$ in the limit $\alpha\to 0$.
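For instance, taking $F(x)=x$ (an elementary check of the statement, added for illustration), $$ \int_p^1 \alpha (1-x)^{\alpha -1}x\, dx = \int_0^{1-p}\alpha u^{\alpha -1}(1-u)\, du = (1-p)^{\alpha}-\frac{\alpha}{\alpha +1}(1-p)^{\alpha +1} \longrightarrow 1 = F(1) \qquad (\alpha \to 0). $$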
\end{proof} \begin{proposition} \label{proposition:alpha limit} \begin{enumerate} \item If $\lim_{\alpha_{2i} \to 0}S(\alpha_{ij}) \neq 0$, then either $\Gamma$ contains no edges adjacent to 2, or $(2, 3)$ is the unique edge adjacent to 2. \item If $(2, 3)$ is the unique edge in $\Gamma$ adjacent to 2, then $\lim_{\alpha_{2i} \to 0}S(\alpha_{ij})$ is equal to $S_{\Gamma '}(\alpha_{ij}')$, where $\Gamma '$ is the ordered graph obtained by deleting the edge $(2,3)$ and replacing the label $3$ of the original graph by the new label 2, and $\alpha_{ij}' = \alpha_{ij}$ if $i,j \neq 2$ and $\alpha_{2,k}' = \alpha_{3,k}$. \end{enumerate} \end{proposition} \begin{proof} Suppose that $\Gamma$ contains an edge adjacent to 2. Let $p \geq 3$ be the minimal number such that $(2,p)$ is an edge of $\Gamma$. Set \begin{align*} F(x_p, \dots ,x_n) = & \prod_{(pq)\in E_{\Gamma}, \neq (2,p)}a_{pq} \prod_{1 \leq i\leq n, p \leq j \leq n, i < j, (i,j) \neq (2,p)} (x_i-x_j)^{\alpha_{ij}+\epsilon_{ij}} \\ & \int_{\{x_p < \cdots < x_3 < 1\}} \prod_{1 \leq i < j \leq p -1}(x_i -x_j)^{\alpha_{ij}+ \epsilon_{ij}} dx_{p-1} \cdots dx_3,\\ \end{align*} where $\epsilon_{ij}=-1$ if $(i,j)$ is an edge of $\Gamma$ and $0$ otherwise. Then $$ \lim_{\alpha_{2p} \to 0} S_{\Gamma} = \int_{\{0 < x_n < \cdots < x_p <1\}} F(x_p, \dots ,x_n)\alpha_{2p}(1-x_p)^{\alpha_{2p}-1}. $$ Therefore if $p\neq 3$ or there exist at least two $p$ such that $(2,p)$ is an edge of $\Gamma$, then $\lim_{\alpha_{2i} \to 0}S_{\Gamma} = 0$. If $p=3$ and there is no edge adjacent to 2 other than $(2,3)$, then $\lim_{\alpha_{23} \to 0}S_{\Gamma} = S_{\Gamma '}(\alpha_{ij}')$. \end{proof} We define $S_{\gamma}(\alpha_{ij})$ by $\sum a_{\Gamma}S_{\Gamma}(\alpha_{ij})$, where $\gamma = \sum a_{\Gamma}\Gamma \in \Gamma ([n],[2])$. \begin{corollary} \label{corollary:alpha limit} Let $\gamma = \emptyset (\{1,2\})\wedge (3, i_3) \wedge \cdots \wedge (n, i_n)$. \begin{enumerate} \item If there exists $k \neq 3$ such that $i_k =2$, then $\lim_{\alpha_{2i} \to 0}S_{\gamma}(\alpha_{ij}) =0$. \item If $i_3 = 2$ and $i_k \neq 2$ for $k \neq 3$, then $$ \lim_{\alpha_{2i} \to 0}S_{\gamma}(\alpha_{ij})= S_{\gamma '}(\alpha_{ij}'), $$ where $\gamma '$ is $\emptyset (\{1,3 \}) \wedge (4, i_4) \wedge \cdots \wedge (n, i_n)$. \end{enumerate} \end{corollary} \begin{proof} (Proof of the Main Theorem \ref{theorem:the main theorem}) We proceed by induction on $n$. We consider the limit of ($\ref{eq:projection}$) for $\alpha_{3i} \to 0$. Then all the entries of $\lim_{\alpha_{3i} \to 0}(\rho (\Phi (X,Y)))$ are contained in $H_{\alpha}$ by Corollary \ref{corollary:homogeneous rational representation}. By Corollary \ref{corollary:alpha limit}, all the entries of $\lim_{\alpha_{3i} \to 0}V^{(1)}$ are contained in $H_{\alpha}$. Therefore all the entries of $\lim_{\alpha_{3i} \to 0}p(V^{(2)})$ are also contained in $H_{\alpha}$. Therefore $S_{\gamma}(0,1, \alpha_{ij})$ is an element of $H_{\alpha}$ for $\gamma = \emptyset (\{1, 2\}) \wedge (3, i_3) \wedge \cdots \wedge (n, i_n)$ under the restriction \begin{quote} (R) : $i_k \neq 2$ for all $k$. \end{quote} On the other hand, by the relation (\ref{eq:fundamental relation}), the restriction (R) is not necessary. This completes the proof of the main theorem. \end{proof}
\section{Introduction} \label{sec:introduction} The thermodynamic entropy is a key concept in Statistical Mechanics, and measures disorder and randomness in a macroscopic system. The thermodynamic entropy $S_T$ for a quantum system in thermal equilibrium at temperature $T$ is defined as \begin{equation} S_T\equiv-\textrm{Tr}\rho_T \ln \rho_T \end{equation} where \begin{equation} \rho_T\equiv \frac{1}{Z} e^{-\beta H} \end{equation} is the thermal density matrix of the Gibbs ensemble at temperature $T$, and $Z\equiv \textrm{Tr} \exp(-\beta H)$ is the Gibbs partition function with $\beta=1/T$. At zero temperature, the system is in its ground state (with at most a finite degeneracy) and in this limit the thermodynamic entropy vanishes, $S_T=0$. On the other hand, the entanglement entropy is a measure of the non-local correlations of a pure quantum state. The entanglement entropy for a subsystem $A$ is a measure of the quantum entanglement between $A$ and its complement $B$ (and vice versa), and it is defined as follows. Let us consider an extended system in a pure state $|\Psi\rangle$, and define a partition of the system into two subsystems, $A$ and $B$, with common boundary $\Gamma=\partial A=\partial B$. We will denote by \begin{equation} \rho_{A \cup B}=|\Psi\rangle \langle \Psi | \end{equation} the density matrix of the pure state $|\Psi\rangle$, and by \begin{equation} \rho_A=\textrm{Tr}_B \rho_{A \cup B}, \quad \rho_B=\textrm{Tr}_A \rho_{A \cup B}, \end{equation} the (normalized) reduced density matrices of the subsystems $A$ and $B$ of the partition (which satisfy $\textrm{Tr} \rho_A=\textrm{Tr} \rho_B=1$). Then, the (von Neumann) entanglement entropy is given by \begin{equation} S_{vN}(A)\equiv- \textrm{Tr} \rho_A \ln\rho_A \label{von-neumann} \end{equation} Since the full system $A \cup B$ is in a pure state, $|\Psi \rangle$, the von Neumann entanglement entropy is the same for both members of the partition, $S_{vN}(A)=S_{vN}(B)$. Similarly, the R\'enyi entropies $S_n$ are given by ($n>1$) \begin{equation} S_n=\frac{1}{1-n} \ln \textrm{Tr} \rho_A^n \label{eq:Renyi} \end{equation} and are also symmetric under the exchange of regions $A$ and $B$. The von Neumann and R\'enyi entropies are related by the ``replica trick'' formula\cite{Callan1994,Holzhey1994,Calabrese2004} \begin{equation} S_{vN}=\lim_{n\to 1}S_n=-\lim_{n\to 1}\frac{\partial}{\partial n}\textrm{Tr}\rho_A^n \label{eq:replica-trick} \end{equation} which is understood as an analytic continuation. The behavior of the entanglement entropy has been the focus of study in several areas of physics. A particular focus of interest has been the scaling of the entanglement entropy with the linear size $\ell$ of the subsystem, assumed to be much smaller than the linear size $L$ of the system as a whole, $\ell \ll L$. It is known that for a generic state in spatial dimension $d$ the entanglement entropy scales with the area of the subsystem,\cite{Bombelli1986,Srednicki1993,Eisert2010} $S_{vN}(\ell)=\alpha \ell^{d-1}$, where $\alpha$ is a non-universal constant determined by the short-distance correlations of the wave function. This result is reminiscent of the area law of the entropy of black holes\cite{Bekenstein1973,Hawking1975} where the constant is instead determined by the Planck scale.
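As an elementary illustration of these definitions (a numerical sketch, not needed for the analysis below; the two-qubit state and the value of the mixing angle are arbitrary choices), one can evaluate Eqs.\eqref{von-neumann} and \eqref{eq:Renyi} directly and verify the replica limit of Eq.\eqref{eq:replica-trick}:
\begin{verbatim}
# Sketch: von Neumann and Renyi entropies of a two-qubit pure state
# |Psi> = cos(theta)|01> + sin(theta)|10>, and the replica limit n -> 1.
import numpy as np

theta = 0.3
psi = np.zeros((2, 2))          # amplitudes psi[a, b] for subsystems A and B
psi[0, 1] = np.cos(theta)
psi[1, 0] = np.sin(theta)

rho_A = psi @ psi.conj().T      # reduced density matrix rho_A = Tr_B |Psi><Psi|
w = np.linalg.eigvalsh(rho_A)
w = w[w > 1e-12]

S_vN = -np.sum(w * np.log(w))                     # von Neumann entropy
S_n = lambda n: np.log(np.sum(w**n)) / (1 - n)    # Renyi entropy, n > 1

print(S_vN, S_n(2), S_n(1 + 1e-6))  # S_n approaches S_vN as n -> 1
\end{verbatim}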
Of particular interest is the fact that quantum entanglement also encodes universal information of the non-local correlations of the many-body wavefunction of the macroscopic quantum system.\cite{Calabrese2004,Kitaev2006,Levin2006,Fradkin2006,Wen2012} Although the von Neumann {\it entanglement} entropy $S_{vN}$ has the same formal definition as the {\it thermodynamic} entropy $S_T$, these are conceptually different quantities. In this paper we will be interested in the circumstances under which the {\em reduced density matrix} $\rho_A$ of a subsystem, for a system in its {\em ground state} $|\Psi\rangle$, can define an effective Gibbs ensemble for the subsystem at some effective temperature $T_{\rm eff}$. For this equivalence to be meaningful it should be possible to express the reduced density matrix, whose spectrum is by definition non-negative, in terms of an effective {\em local} so-called entanglement Hamiltonian, which we will denote by $H_E$, and whose spectrum is the entanglement spectrum.\cite{Li2008} If this equivalence holds, then the reduced density matrix takes the thermal form \begin{equation} \rho_A=\frac{1}{Z_{\rm eff}} e^{-\beta_{\rm eff} H_E} \end{equation} where $\beta_{\rm eff}=1/T_{\rm eff}$, and the normalization factor $Z_{\rm eff}$ plays the role of an effective partition function. Since the reduced density matrix is, by definition, a Hermitian matrix, it is obvious that a suitable Hermitian operator $H_E$ can always be defined. However it is not obvious, and in general it is not true, that $H_E$ should also be local and, even more, what connection it may bear, if any, with the Hamiltonian $H$ of the combined quantum system of which the state $|\Psi\rangle$ is the ground state, or with the Hamiltonian $H_A$ of subsystem $A$ (and similarly for $B$). In this paper we will consider systems made of two identical subsystems which are coupled to each other in the bulk. In this case, both subsystems are thermodynamically large and neither can be regarded as a ``heat bath'' for the other. Here we will focus on the special (and interesting) problem in which the two identical subsystems are one-dimensional and are separately at quantum criticality. The problem that we want to address is under what circumstances the reduced density matrix of one of these subsystems has a Gibbs form at some effective temperature $T_{\rm eff}$ with a local (and Hermitian) entanglement Hamiltonian $H_E$. We are motivated by some recent numerical results by Poilblanc\cite{Poilblanc2010} who used an exact diagonalization technique to determine the entanglement Hamiltonian for one leg of a spin-1/2 quantum antiferromagnet on a two-leg ladder. Over some range of values of the inter-leg exchange interaction, Poilblanc found that the reduced density matrix of one leg is the same as the thermal (Gibbs) density matrix of a spin-1/2 quantum Heisenberg {\em chain} at an effective temperature (determined by the spin gap of the ladder). Similar results have also been found in other fully gapped systems such as AKLT models on ladders\cite{Katsura2010,Lou2011} and in the entanglement of spin and orbital degrees of freedom in Kugel-Khomskii models in one dimension.\cite{Lundgren2012} To this end we examine this question first in an exactly solvable system of free fermions on a ladder, with a gapped ground state.
Next we examine the same problem in the spin-1/2 ladder in the strong inter-leg coupling regime, a system recently discussed also by La\"uchli and Schliemann\cite{Lauchli2012} and, in the context of 2D topological phases, by Qi, Katsura and Ludwig.\cite{Qi2012} Next we formulate a scaling hypothesis for the entanglement entropy in the weak coupling limit, where the combined system can be regarded as a perturbed conformal field theory, and test its validity in the free-fermion system. We finally compare with results in a system of two coupled Luttinger liquids in a gapless combined ground state.\cite{Furukawa2011} An important question is whether the effective entanglement Hamiltonian $H_E$ is local and what its relation is to the (local) Hamiltonian of the decoupled subsystems. We will see below that if we insist that the entanglement Hamiltonian $H_E$ be fully local (i.e. at the scale of the lattice spacing) the energy gap of the coupled system has to be much larger than the coupling constants (and hence the energy scales) of the subsystems. In this regime all the degrees of freedom of the subsystem are thermal. However we will see in an explicitly solvable free-fermion example that in regimes in which the gap is small (compared with other scales of the problem), the reduced density matrix for the {\em long wavelength} degrees of freedom of subsystem $A$ is thermal, with a {\em local} effective {\em continuum} entanglement Hamiltonian which is the same as the Hamiltonian $H_A$ of the low-energy conformal field theory of the decoupled subsystem $A$. Moreover, in this regime the structure of the effective long-wavelength entanglement entropy has a form which is determined entirely by conformal invariance. This observation leads us to conjecture that this result is not a peculiarity of the free fermion system but is actually a general property of gapped systems of this type. The separation of the entanglement spectrum into a long-wavelength universal (and thermal) piece and a short-distance non-universal piece that we found in this free-fermion model is in line with what was found by Li and Haldane.\cite{Li2008} These authors showed that the low-(pseudo)energy modes of the entanglement Hamiltonian of fractional quantum Hall fluids of a two-dimensional electron gas have the same universal structure as the low-energy states of the edge states of the same fractional quantum Hall state (on a disk geometry). We should note that the question we are asking here is conceptually different from the central axiom of Statistical Mechanics stating that a subsystem weakly coupled to a much larger system (the ``heat bath'') can reach thermal equilibrium at a temperature determined by the larger system. It is an axiom of Statistical Mechanics that the equilibrium state of the subsystem is in the Gibbs Ensemble, and that this equilibrium state is universally reached irrespective of the specific dynamics.\cite{Schrodinger1952} It is known rigorously that the reduced density matrix of a subsystem has a Gibbs form if the total system is in a ``typical state'', {\it i.e.} a state drawn from some statistical ensemble, which is assumed to be a typical state of the spectrum of the full (and generic) Hamiltonian $H$.\cite{Tasaki1998,Goldstein2006,Popescu2006} However the ground state of the Hamiltonian is hardly a typical state and one generally does not expect to find a Gibbsian reduced density matrix unless the ground state has special properties. The paper is organized as follows.
In Section \ref{sec:free-fermion} we consider a system of free fermions on a ladder which is gapped by the inter-chain tunneling amplitude. This problem is exactly solvable and the reduced density matrix can be determined explicitly.\cite{Peschel2003,Peschel2009,Peschel2011} In Section \ref{sec:strong-coupling} we consider the spin ladder problem in the strong inter-chain coupling limit and we show that in this limit the reduced density matrix is that of a spin-1/2 quantum Heisenberg chain. In Section \ref{sec:weak-coupling} we use the insights obtained in the free-fermion system of Section \ref{sec:free-fermion} to formulate a scaling ansatz for the form of the entanglement entropy for a system of two weakly coupled quantum critical systems (which can be regarded as a perturbed conformal field theory). Here we conjecture a general form of the scaling behavior of the entanglement entropies, and infer that the reduced density matrix for the low energy degrees of freedom of the subsystem is the thermal Gibbs density matrix of the conformal field theory at a temperature determined by the gap scale. In Section \ref{sec:coupled-LL} we consider the case of two coupled Luttinger liquids with a joint gapless ground state. Our conclusions are summarized in Section \ref{sec:conclusions}. \section{A Free Fermion Model} \label{sec:free-fermion} In this section we will consider a two-leg ladder model of free fermions which are gapped by the inter-chain tunneling. Consider a two-leg ladder model with Hamiltonian \cite{Jaefari2012} \begin{align} H=&-t\sum_j(e^{i\Phi/2}c_{A,j+1}^{\dag}c_{A,j}+e^{-i\Phi/2}c_{B,j+1}^{\dag}c_{B,j}+h.c.)\nonumber\\ &+t_{\bot}\sum_j(c_{A,j}^{\dag}c_{B,j}+c_{B,j}^{\dag}c_{A,j}) \label{eq:HladderPhi} \end{align} in which $c_{A,j}$ and $c_{B,j}$ are the fermion operators on chain $A$ and on chain $B$, respectively. Here $t$ is the hopping amplitude along the chains and $t_{\bot}$ is the hopping amplitude along the rungs (between the chains). For each plaquette of the ladder there is a flux $\Phi$ introduced in the Hamiltonian through minimal coupling (the Peierls substitution). As the flux per plaquette $\Phi$ varies from $0$ to $\pi$ the spectrum evolves continuously from a regime with two gapless branches (for $\Phi \sim 0$) to a fully gapped spectrum (for $\Phi \sim \pi$). Since in this paper we are interested in the gapped case, we consider only the simple case in which the flux per plaquette is half of the flux quantum and hence $\Phi=\pi$ (in units in which $\hbar=c=e=1$). The behavior of entanglement in the gapless regime is similar to what is discussed in Section \ref{sec:coupled-LL}.
In momentum space, the Hamiltonian of this model becomes: \begin{equation} H=\int_{-\pi}^{\pi}\frac{dk}{2\pi} \mathcal{H}(k) \label{eq:Hpi} \end{equation} For flux $\Phi=\pi$, the Hamiltonian $\mathcal{H}(k)$ of Eq.\eqref{eq:Hpi} is \begin{align} \mathcal{H}(k)=&2t \sin k \; \left( c_{A}(k)^{\dag}c_{A}(k)- c_{B}(k)^{\dag}c_{B}(k)\right) \nonumber\\ &-t_{\bot}\left(c_{A}(k)^{\dag}c_{B}(k)+c_{B}(k)^{\dag}c_{A}(k)\right) \label{eq:mathcalHk} \end{align} The Hamiltonian can be diagonalized by a change of basis \begin{align} c_{A}(k)=&\cos \Big(\frac{\xi(k)}{2}\Big)c^a(k)-\sin \Big(\frac{\xi(k)}{2}\Big)c^b(k)\nonumber \\ c_{B}(k)=&\sin \Big(\frac{\xi(k)}{2}\Big)c^a(k)+\cos \Big(\frac{\xi(k)}{2}\Big)c^b(k) \end{align} where $b$ and $a$ label the bonding and the anti-bonding bands of the ladder, respectively, and $\xi(k)$ is defined as \begin{align} \sin \Big(\frac{\xi(k)}{2}\Big)=&\frac{u(k)}{\sqrt{1+u^2(k)}}\nonumber \\ \cos \Big(\frac{\xi(k)}{2}\Big)=&\frac{1}{\sqrt{1+u^2(k)}} \end{align} where $u(k)$ is given by \begin{equation} t_{\bot}u(k)=2t \sin k+\sqrt{(2t \sin k)^2+t_{\bot}^2} \label{eq:uofk} \end{equation} The dispersion relations for the bonding and anti-bonding bands are \begin{equation} E(k)=\pm\sqrt{(2t \sin k)^2+t_{\bot}^2} \label{eq:dispersion} \end{equation} At half filling, the bonding band is filled and the anti-bonding band is empty. As can be seen from Eq.\eqref{eq:dispersion}, the excitation energy $E(k)$ is smallest at $k=0,\pi$, where the spectrum has an energy gap of $2t_\bot$. As in all fermionic systems in 1D, this system can also be put in the form of 1D Dirac fermions with two two-component spinor fields, with the components being the right and left moving amplitudes near the two Fermi points at $k=0,\pi$. Therefore the low-energy degrees of freedom of this ladder (with flux $\Phi=\pi$) are described by two species (bonding and anti-bonding) of Dirac spinors, each with velocity $v=2t$ and mass gap $mv^2=t_\bot$. This can be done, more formally, by combining the right moving fermion from the $A$ chain (with $k \sim 0$), $R_A(k)$ and the left-moving fermion from the $B$ chain (also with $k\sim 0$), $L_B(k)$, into a two-component (Weyl) spinor. Similarly a second spinor can be constructed where $\tilde R_B(k)$ is the right-moving component of the fermion on the $B$ chain with momentum $-\pi+k$ and $\tilde L_A(k)$ is the left-moving fermion from the $A$ chain with momentum $\pi-k$. \begin{equation} \psi_1(k)= \begin{pmatrix} R_A(k)\\ L_B(k) \end{pmatrix}, \qquad \psi_2(k)= \begin{pmatrix} \tilde R_B(k)\\ \tilde L_A(k) \end{pmatrix} \label{eq:spinors} \end{equation} The tunneling matrix element $t_\bot$, which mixes right and left movers with the same momenta on both chains, opens the (same) mass gap $m \propto t_\bot$ in both Dirac spinors. The effective (continuum) low energy Hamiltonian density for this system is \begin{equation} \mathcal{H}= \sum_{a=1,2}\left(\psi_a^\dagger \sigma_3 i v \partial_x \psi_a +mv^2 \psi^\dagger_a \sigma_1 \psi_a\right) \end{equation} where $a=1,2$ labels the two spinors and $\sigma_1$ and $\sigma_3$ are two of the $2 \times 2$ Pauli matrices (which act on the components of each spinor).
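As a quick numerical check of the lattice diagonalization above (an illustrative sketch; the values of $t$ and $t_\bot$ are arbitrary), the eigenvalues of the $2\times 2$ Bloch Hamiltonian of Eq.\eqref{eq:mathcalHk} can be compared with the dispersion of Eq.\eqref{eq:dispersion}:
\begin{verbatim}
# Sketch: eigenvalues of the 2x2 Bloch Hamiltonian of the pi-flux ladder
# versus the analytic dispersion E(k) = +/- sqrt((2 t sin k)^2 + t_perp^2).
import numpy as np

t, t_perp = 1.0, 0.4                      # illustrative values
for k in np.linspace(-np.pi, np.pi, 201):
    Hk = np.array([[ 2*t*np.sin(k), -t_perp],
                   [-t_perp,        -2*t*np.sin(k)]])
    E = np.linalg.eigvalsh(Hk)            # ascending order
    E_exact = np.sqrt((2*t*np.sin(k))**2 + t_perp**2)
    assert np.allclose(E, [-E_exact, E_exact])
\end{verbatim}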
Below we will calculate the reduced density matrix for chain $A$ by making a cut between the chains and tracing out chain $B$. We can now use the results of Peschel \cite{Peschel2003} for free-fermion systems to find the entanglement Hamiltonian \begin{equation} \widetilde H_E\equiv -\ln \rho_A \label{eq:tildeH} \end{equation} for subsystem $A$, which has the explicit form \begin{equation} \widetilde H_E=\sum_{i,j=1}^N \widetilde H_{ij}c^{\dag}_i c_j \label{eq:peschel} \end{equation} where the matrix $\widetilde H_{ij}$ takes the form \begin{equation} \widetilde H_{ij}=\left(\ln\left[(C^{-1}-1)\right]\right)_{ij} \label{eq:Hij} \end{equation} The creation and annihilation operators $c_i^\dagger$ and $c_i$ in Eq.\eqref{eq:peschel} are labelled by the sites of subsystem $A$. In Eq.\eqref{eq:Hij} $C_{ij}$ is the correlation function matrix (the fermion propagator at equal times) whose matrix elements in momentum space are \begin{align} C_{kk^{\prime}}=& \langle c^\dagger_{A}(k) c_{A}(k^{\prime})\rangle =2\pi \delta(k-k^{\prime}) \sin ^2\Big(\frac{\xi(k)}{2}\Big)\nonumber\\ =&2\pi \delta(k-k^{\prime}) \frac{u^2(k)}{1+u^2(k)} \label{eq:eq-time} \end{align} Combining the above two equations, we find that the entanglement Hamiltonian (in momentum space) has the standard form \begin{equation} \widetilde H_E=\int_{-\pi}^\pi \frac{dk}{2\pi} \; \omega(k) \; c^{\dag}(k)c(k) \label{eq:H_E-fermion-ladder} \end{equation} where \begin{equation} \omega(k)= \ln u^2(k) \end{equation} By inspection of Eq.\eqref{eq:uofk} we see that as $k \to 0$ the quantity $u(k) \simeq 1 + v k/t_\bot+ O(k^2)$, and similarly as $k \to \pi$. Thus the one-particle spectrum $\omega(k)$ vanishes linearly as $k \to 0,\pi$. In other terms, the long-wavelength part (with $k\sim 0, \pi$) of the one-particle entanglement spectrum is that of a system of massless fermions, $\omega(k) \simeq 2 v k/t_\bot$, with the modes near $k=0$ representing right-movers and the modes near $k=\pi$ representing left-movers, respectively. If we define the inverse temperature $\beta_{\rm eff}=(t_{\bot}/2)^{-1}$, we can rewrite the reduced density matrix $\rho_A$ for these long-wavelength modes as \begin{equation} \rho_A=\rho_{T_{\rm eff}}=\frac{1}{Z} e^{-\beta_{\rm eff} H_A} \end{equation} We can see that $\rho_A$ has the same form as $\rho_T$ for chain $A$ with $T_{\rm eff}=t_\bot/2$ playing the role of the temperature. Therefore, the entanglement Hamiltonian for the long-wavelength modes (near $k=0$ and $k=\pi$) always has the form (regardless of the strength of the tunneling amplitude $t_{\bot}$) \begin{align} \widetilde H_E=&\int_{-\Lambda}^\Lambda \frac{dk}{2\pi} \; \frac{4ta}{t_{\bot}}k \big(R^\dagger(k)R(k)-L^\dagger(k) L(k)\big) \nonumber\\ =&\beta_{\rm eff} H_A \label{eq:H_E-long-wavelengths} \end{align} where $R(k)$ and $L(k)$ represent the right-moving modes (with $k\sim 0$) and the left-moving modes (with wave vector $\pi-k$), respectively, $v=2ta$ is the velocity of the modes, and $\Lambda \sim \pi/a$ is a momentum cutoff (and $a$ is the lattice spacing which we have set to $1$). In other terms, the long-wavelength entanglement Hamiltonian for chain $A$ is the same as the low-energy Hamiltonian $H_A$ for the Dirac fermions of the decoupled chain. Therefore the long-wavelength reduced density matrix is the Gibbs density matrix of a system of a massless Dirac fermion (with velocity $2t$) at temperature $T_{\rm eff}=\beta_{\rm eff}^{-1}=t_\bot/2$.
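The single-particle entanglement spectrum of Eq.\eqref{eq:H_E-fermion-ladder} and its linear long-wavelength form are also easy to check numerically; the short sketch below (with arbitrarily chosen parameter values) evaluates $\omega(k)=\ln u^2(k)$ from Eq.\eqref{eq:uofk} and compares it with $2vk/t_\bot$ near $k=0$:
\begin{verbatim}
# Sketch: single-particle entanglement spectrum omega(k) = ln u(k)^2 of the
# pi-flux ladder, and its linear long-wavelength form 2 v k / t_perp.
import numpy as np

t, t_perp = 1.0, 0.4                      # illustrative values
v = 2.0 * t                               # velocity of the chain modes (a = 1)
k = np.linspace(1e-3, 0.1, 50)            # small momenta near k = 0

u = (2*t*np.sin(k) + np.sqrt((2*t*np.sin(k))**2 + t_perp**2)) / t_perp
omega = np.log(u**2)

# deviation from the linear form is O(k^2), small for k -> 0
print(np.max(np.abs(omega - 2*v*k/t_perp)))
\end{verbatim}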
On the other hand, in the strong coupling (tunneling) limit $t_{\bot}\gg t$, in which there is a large energy gap in the spectrum of the fermions, the entanglement Hamiltonian has the simple form \begin{align} \widetilde H_E=&\int_{-\pi}^\pi \frac{dk}{2\pi} \; \frac{4t}{t_{\bot}}\sin k \; c^{\dag}(k)c(k)\nonumber\\ =&\beta_{\rm eff}\; i t \sum_{n=1}^N \; c(n)^\dagger c(n+1)+\textrm{h.c.} \label{eq:H_E-strong-tunneling} \end{align} In this limit the reduced density matrix for leg $A$ is that of a single chain of free fermions with hopping amplitude $it$ at an effective temperature $T_{\rm eff}=t_\bot/2$. This effective temperature is in fact much higher than the bandwidth $4t$ of the fermionic spectrum of the chain. Thus, the statistical ensemble of the chain defined by the strong tunneling limit is essentially the classical Gibbs ensemble. In the strong tunneling limit the entanglement Hamiltonian is a local operator of the chain degrees of freedom. Clearly the corrections to this strong tunneling limit lead to an effective entanglement Hamiltonian which becomes increasingly non-local. Nevertheless, these apparently non-local lattice operators only contribute irrelevant operators in the long-wavelength regime. In summary, in this free fermion ladder model the reduced density matrix of a chain has a Gibbs form with an effective entanglement Hamiltonian $H_E$, given in Eq.\eqref{eq:H_E-fermion-ladder}. Since the reduced density matrix for chain $A$ is thermal, the von Neumann entanglement entropy is equal to the thermodynamic entropy of the 1D quantum system described by the Hamiltonian $H_E$. As the strength of the tunneling matrix element $t_\bot$ increases, the fraction of the entanglement spectrum that is thermal also increases, ranging from only the long-wavelength modes of chain $A$ for $t_\bot \ll t$ to all of the modes for $t_\bot \gg t$, with an effective temperature $T_{\rm eff}=t_\bot/2$. Nevertheless, the long-wavelength modes, i.e. the lowest eigenvalues of the entanglement Hamiltonian, which are always thermal, have universal properties. The free energy of a system of 1D massless Dirac fermions (in a system of length $L$ in the thermodynamic limit) at temperature $T$ is that of a conformal field theory with central charge $c=1$ (see Refs. [\onlinecite{Affleck1986,Blote1986}]) \begin{equation} F=-T \ln Z=\varepsilon_0 L -\frac{\pi c}{6v} T^2L \label{eq:1D-fermion-free-energy} \end{equation} where $\varepsilon_0$ is the (non-universal) ground state energy density, $c$ is the central charge of the conformal field theory and $v$ is the velocity of the modes. The last term in Eq.\eqref{eq:1D-fermion-free-energy} is universal: it is the well-known low-temperature (Casimir) contribution to the free energy of a conformal field theory.
The form of this term is determined by the conformal anomaly of the conformal field theory.\cite{Affleck1986,Blote1986} From here it follows that the thermodynamic entropy $S_T$ of this 1D quantum critical system is \begin{equation} S_T=-\frac{\partial F}{\partial T}=\frac{\pi c}{3v} T L \label{eq:1D-cft-entropy} \end{equation} Depending on the boundary conditions, the entropy $S_T$ may have the finite limiting value $\ln g$ as $T \to 0$, where $g$ is a universal number that depends on the boundary conditions and can be interpreted as a ground state ``degeneracy'' (even though it is generally not an integer).\cite{Affleck1991} Since we have shown that the reduced density matrix of a leg of the fermion ladder with flux $\Phi=\pi$ per plaquette is, in the long wavelength limit, identical to the Gibbs density matrix of a system of massless Dirac fermions in 1D, we can apply the above results from CFT to the present case. It then follows that the thermodynamic entropy of a system of 1D massless Dirac fermions at finite temperature $T$ is the same as the von Neumann entanglement entropy $S_{vN}$ of chain $A$ (also in the long wavelength limit) with a temperature $T=T_{\rm eff}=M$, given by the mass gap of the fermion ladder. It is an elementary exercise to compute the R\'enyi entropies, $S_n$. Indeed in this system the trace of the $n$th power of the (unnormalized) reduced density matrix $\rho_A$ is now equal to the partition function of a free Dirac fermion at temperature $T_{\rm eff}/n$, \begin{align} \textrm{Tr} \rho_A^n=&Z_F\left(T=\frac{T_{\rm eff}}{n}\right)\nonumber\\ =& \exp\left(-\frac{n}{T_{\rm eff}} \varepsilon_0 L+\frac{\pi c}{6v} \frac{T_{\rm eff}}{n} L\right) \end{align} where we have purposely left the explicit dependence on the central charge $c$ of the CFT (although for the free fermion case $c=1$). We will return to this expression in Section \ref{sec:weak-coupling}. Therefore \begin{equation} \frac{\textrm{Tr} \rho_A^n}{\left(\textrm{Tr} \rho_A\right)^n}\equiv \textrm{Tr} \hat \rho_A^n=\exp\left[\frac{\pi c}{6v} \left(\frac{1}{n}-n\right) T_{\rm eff} L\right] \end{equation} where we denoted by $\hat \rho_A$ the normalized reduced density matrix, i.e. $\textrm{Tr} \hat \rho_A=1$. From here we find that the von Neumann entanglement entropy $S_{vN}$ is given by \begin{equation} S_{vN}=-\frac{\partial}{\partial n}\textrm{Tr} \hat \rho_A^n\Big|_{n\to 1} =\frac{\pi c}{3v} T_{\rm eff} L=S(T_{\rm eff}) \end{equation} which agrees with the thermodynamic entropy $S(T_{\rm eff})$ at temperature $T_{\rm eff}$ (as it should). Similarly, the R\'enyi entropy $S_n$ is given by the result (valid for $n>1$) \begin{equation} S_n= \frac{\pi c}{6v} \left(1+\frac{1}{n}\right) T_{\rm eff} L \end{equation} Notice that $S_1=\lim_{n \to 1} S_n= S_{vN}$ as it should. We close this section with a comment on the correlators. The equal-time fermionic correlators, i.e. the equal time propagators (or Green functions), of a theory of massive fermions in $1+1$ dimensions have an asymptotic exponential decay $\exp (-m|x|)$ (where $m$ is the mass gap) with a power law correction prefactor $(m|x|)^{-1/2}$. This behavior is correctly reproduced by Eq.\eqref{eq:eq-time} (as it should). However the equal-time correlation function of gapless Dirac fermions at temperature $T$ has a pure exponential decay of the form $\pi T/\sinh(2\pi T |x|)$, which does not have a power law prefactor correction.
This apparent difference is the result of the long-wavelength approximation used in the entanglement Hamiltonian of Eq.\eqref{eq:H_E-long-wavelengths}. \section{Entanglement in Strongly Coupled Systems} \label{sec:strong-coupling} For a free fermion model, such as the one discussed in Section \ref{sec:free-fermion}, we can exactly calculate the reduced density matrix for any value of the coupling constant. For the general case of two arbitrary coupled systems which are not free, the computation of the reduced density matrix is non-trivial. However, in the strong coupling limit we can still use perturbation theory to calculate the reduced density matrix. This is the approach we will follow here. A similar calculation was done by La\"uchli and Schliemann.\cite{Lauchli2012} In general the Hamiltonian will have the form $H=H_0+H_{\rm pert}$, where $H_0$ is the (local) inter-chain coupling between chains $A$ and $B$ and $H_{\rm pert}$ represents the Hamiltonian of the two decoupled chains. The ground state of the coupled system, to zeroth order in perturbation theory, is the product state $\big| \Psi_0\rangle=|1\rangle \times |2\rangle \times \ldots \times |N\rangle$ where $\{ \big| n \rangle \}$ (with $n=1,\ldots,N$ for chains of $N$ sites) are the states of the degrees of freedom of the two chains at the $n$th rung of this ladder. This ground state is non-degenerate and has a finite (and large) energy gap to all excitations. Since it is a product of singlet states, it is also ``maximally entangled'' even though in this basis the ground state is a product state. This is a simple example showing that the degree of entanglement of a state depends on how the question is posed, i.e. on the choice of the entangling region. Thus if we choose as the entangling region the left half of the ladder we would conclude that its entanglement entropy would be trivially zero. In contrast, if we choose one chain of the ladder as the entangling region the entanglement entropy will be (trivially) maximal. We can compute next the corrections to the unperturbed ground state $\big|\Psi_0\rangle$ using an expansion in powers of the intra-chain interactions. Since we start with a gapped phase, the strong coupling expansion works well. For the sake of definiteness we will consider the problem of a quantum Heisenberg antiferromagnetic model with $S=1/2$ on a two-leg ladder as an example. Other models can be treated using a similar procedure. The unperturbed Hamiltonian $H_0$ now contains only the inter-chain exchange interactions (with coupling constant $J_\perp$) on the rungs of the ladder, \begin{equation} H_0=J_\perp \sum_{n=1}^N \vec S_{A}(n) \cdot \vec S_{B}(n) \end{equation} The coupling between the chains $A$ and $B$ is anti-ferromagnetic with $J_\perp>0$. For $H_0$, the ground state is the product of $N$ spin singlets on the rungs \begin{equation} \big|\Psi_0\rangle=\prod_{n=1}^N \big| 0,0\rangle_n \label{eq:Psi0} \end{equation} where \begin{equation} \big| 0,0 \rangle_n=\frac{1}{\sqrt{2}} \big(\big|\uparrow_A,\downarrow_B\rangle_n-\big|\downarrow_A,\uparrow_B\rangle_n\big) \end{equation} is the spin singlet state on the $n$th rung of the ladder.
In the first excited state of the ladder, $\big|\Psi_1\rangle$, the spin singlet state of one rung is replaced by a spin triplet state $\big|1,m\rangle$, with $m=\pm 1, 0$ given by their standard expressions, $\big| 1,1\rangle=\big| \uparrow_A, \uparrow_B\rangle$, $\big|1,-1\rangle=\big|\downarrow_A, \downarrow_B\rangle$, and $\big| 1,0 \rangle=(\big| \uparrow_A, \downarrow_B\rangle +\big|\downarrow_A, \uparrow_B\rangle)/\sqrt{2}$. The excitation energy is $E_1-E_0=J_\perp $. For the second excited state $\big|\Psi_2^{i,j}\rangle$, two singlets at rungs $i$ and $j$ become triplets, etc. For the $k$th excited state $\big|\Psi_k\rangle$, the excitation energy is $k J_\perp $. The perturbing Hamiltonian $H_{\rm pert}$ is the sum of the Hamiltonians of the quantum Heisenberg antiferromagnets of the two chains \begin{align} H_{\rm pert}=&J \sum_{n=1}^N (\vec S_{A}(n) \cdot \vec S_{A}(n+1)+\vec S_{B}(n) \cdot \vec S_{B}(n+1)) \nonumber \\ =&\frac{J}{2} \sum_{n=1}^N \left(\sigma^+_{A}(n)\sigma^-_{A}(n+1)+\sigma^-_{A}(n)\sigma^+_{A}(n+1)\right)\nonumber\\ +&\frac{J}{4} \sum_{n=1}^N \sigma^z_{A}(n)\sigma^z_{A}(n+1)\nonumber \\ +&\frac{J}{2}\sum_{n=1}^N\left(\sigma^+_{B}(n)\sigma^-_{B}(n+1)+\sigma^-_{B}(n)\sigma^+_{B}(n+1)\right)\nonumber\\ +&\frac{J}{4}\sum_{n=1}^N\sigma^z_{B}(n)\sigma^z_{B}(n+1) \label{eq:Hpert} \end{align} where we have expressed the spin operators in terms of the Pauli matrices. Let us compute the ground state of the ladder to first order in perturbation theory in $H_{\rm pert}$. By inspection of Eq.\eqref{eq:Hpert} we see that the only non-vanishing contribution involves breaking the spin singlets on pairs of nearest-neighbor rungs at a time, i.e. only $\langle \Psi^{n,n+1}_2\big|H_{\rm pert}\big|\Psi_0\rangle \neq 0$. The perturbed ground state is \begin{align} \big| \Psi \rangle = &\big|\Psi_0\rangle+\sum_{n}\frac{\langle\Psi^{n,n+1}_2\big|H_{\rm pert}\big|\Psi_0\rangle}{E_0-E_2}\big|\Psi^{n,n+1}_2\rangle \nonumber \\ =&\big|\Psi_0\rangle\nonumber\\ -\sum_{n} & \left(-\frac{J}{16 J_\perp}\big|\phi_1^{n,n+1}\rangle-\frac{J}{16 J_\perp}\big|\phi_2^{n,n+1}\rangle+\frac{J}{4J_\perp}\big|\phi_3^{n,n+1}\rangle\right)\nonumber\\ & \label{eq:perturbed-wf} \end{align} where $\{ \big|\phi_{1,2,3}^{n,n+1}\rangle \}$ are three different types of excited states of the unperturbed Hamiltonian $H_0$. In these excited states spins on pairs of nearest-neighbor rungs are put in triplet states. They are given by \begin{align} \big|\phi_1^{n,n+1}\rangle=&\ldots \big|1,1\rangle_n \big|1,-1\rangle_{n+1}\ldots \nonumber\\ \big|\phi_2^{n,n+1}\rangle=&\ldots \big|1,-1\rangle_n \big|1,1\rangle_{n+1} \ldots \nonumber\\ \big|\phi_3^{n,n+1}\rangle=&\ldots \big|1,0\rangle_n \big|1,0\rangle_{n+1}\ldots \end{align} where $\ldots$ represents the product of singlets on the other rungs. We have \begin{eqnarray} \nonumber \langle \phi_{1}^{n,n+1}\big|H_{\rm pert}\big|\Psi_0\rangle&=&-J/8\\ \nonumber \langle \phi_{2}^{n,n+1}\big|H_{\rm pert}\big|\Psi_0\rangle&=&-J/8\\ \langle \phi_{3}^{n,n+1}\big|H_{\rm pert}\big|\Psi_0\rangle&=&J/2 \end{eqnarray} The wavefunction of Eq.\eqref{eq:perturbed-wf} is written in the basis of total spin states on the rungs. However in order to compute the reduced density matrix of one chain we will need to express the wave function in the basis of the spin projections of each chain, $ \big|S^z(1),\ldots,S^z(N)\rangle_A $ for chain $A$, and $ \big|S^z(1),\ldots,S^z(N)\rangle_B $ for chain $B$, respectively.
Let us denote the spin configurations in chain $A$ by $\big|\phi\rangle_A$ and the spin configurations of chain $B$ by $\big|\phi\rangle_B$. In this basis the unperturbed wave function $\big|\Psi_0\rangle$ of Eq.\eqref{eq:Psi0} is given by \begin{align} \big|\Psi_0\rangle=&\sum_{C(B)} \left(\frac{1}{\sqrt{2}}\right)^N (-1)^{m_d(C(B))} \big|\phi\rangle_A\big|\phi\rangle_B \nonumber\\ =\sum_{C(B)} &\left(\frac{1}{\sqrt{2}}\right)^N (-1)^{m_d(C(B))} \big|\uparrow\downarrow...\downarrow\uparrow...\rangle_A \; \big|\downarrow\uparrow...\downarrow\uparrow...\rangle_B \label{eq:Psi0-product} \end{align} where we have denoted by $C(B)$ the set of all spin configurations in chain $B$, and by $m_d(C(B))$ the number of down spins $\downarrow$ in the configuration of chain $B$. Notice that in this basis the spin configurations $\big| \phi\rangle_A$ of chain $A$ are antiparallel to the spin configurations $\big| \phi\rangle_B$ in chain $B$ at every rung of the ladder. Although this is a product state, this state is maximally entangled when the cut is made between the chains. Similarly, when we add $H_{\rm pert}$ to $H_0$, in the basis of the spin projection of each chain, the perturbed wavefunction $\big|\Psi\rangle$ defined in Eq.\eqref{eq:perturbed-wf} can be rewritten in the following form: \begin{widetext} \begin{eqnarray} \nonumber \big|\Psi\rangle &=&\sum_{C(B)} \left(\frac{1}{\sqrt{2}}\right)^N(-1)^{m_d(C(B))} \left[(1-\frac{J }{4J_\perp}M_1+\frac{J }{4J_\perp}M_2) \big|\phi\rangle_A -\sum_{C(A)^{\prime}}\frac{J}{8J_\perp} \big|\phi^{\prime}\rangle_A\right] \big|\phi\rangle_B\\ \nonumber &=&\sum_{C(B)} \left(\frac{1}{\sqrt{2}}\right)^N(-1)^{m_d(C(B))} \left[(1-\frac{J }{4J_\perp}M_1+\frac{J }{4J_\perp}M_2) \big|\uparrow...\uparrow\downarrow...\rangle_A -\sum_{C(A)^{\prime}}\frac{J}{8J_\perp} \big|\uparrow...\downarrow\uparrow...\rangle_A\right] \big|\downarrow...\downarrow\uparrow...\rangle_B\\ \label{eq:perturbedwf} \end{eqnarray} \end{widetext} where $C(B)$ represents all the spin configurations $\big|\phi\rangle_B$ of chain $B$ (which are presented schematically in Eq.\eqref{eq:perturbedwf}). For each $\big|\phi\rangle_B$, the spin configuration $\big|\phi\rangle_A$ of chain $A$ is antiparallel to the spin configuration in chain $B$. $\big|\phi^{\prime}\rangle_A$ is defined by flipping the neighboring antiparallel spin pairs ($\uparrow\downarrow$ or $\downarrow\uparrow$) in $\big|\phi\rangle_A$ and $C(A)^{\prime}$ represents all possible spin configurations for $\big|\phi^{\prime}\rangle_A$. $m_d(C(B))$ is the number of down spins $\downarrow$ in the states of the $B$ chain, and $M_1$ and $M_2$ are the numbers of nearest-neighbor pairs of parallel spins ($\uparrow\uparrow$ or $\downarrow\downarrow$) and of antiparallel spins ($\uparrow\downarrow$ or $\downarrow\uparrow$), respectively, in $\big|\phi\rangle_A$. To get the reduced density matrix for chain $A$, we need to use the Schmidt decomposition to trace out the states in chain $B$. The resulting (unnormalized) reduced density matrix for chain $A$ is \begin{widetext} \begin{align} \rho_A=&\sum_{C(A)} \frac{1}{2^N}\left[\left(1-M_1 \frac{J}{2J_\perp}+M_2 \frac{J}{2J_\perp}\right) \big|\phi\rangle_A \langle\phi\big|_A -\sum_{C(A)^{\prime}}\left( \frac{J}{4J_\perp} \big|\phi\rangle_A \langle\phi^{\prime} \big|_A+\textrm{h.c.}\right)\right] \nonumber\\ =&\sum_{C(A)} \frac{1}{2^N}\left[\left(1-M_1 \frac{J}{2J_\perp}+M_2 \frac{J}{2J_\perp}\right) \big|\uparrow...\downarrow\uparrow...\rangle_A \langle\uparrow...\downarrow\uparrow...
\big|_A -\sum_{C(A)^{\prime}}\left(\frac{J}{4J_\perp} \big|\uparrow...\downarrow\uparrow...\rangle_A \langle\uparrow...\uparrow\downarrow... \big|_A+\textrm{h.c.}\right)\right] \nonumber\\ & \label{eq:rhoA-ladder} \end{align} \end{widetext} where $C(A)$ are all the spin configurations in chain $A$ and $C(A)^{\prime}$ are the spin configurations obtained by flipping neighboring antiparallel spin pairs in $\big|\phi\rangle_A$. The reduced density matrix for chain $A$ can be computed straightforwardly at this (first) order in perturbation theory in $J/J_\perp$. It has the form \begin{equation} \rho_A= \frac{1}{Z} (1-\beta_{\rm eff} H_E+\ldots)\simeq \frac{1}{Z} e^{-\beta_{\rm eff} H_E+\ldots} \label{eq:rhoA} \end{equation} where $Z$ normalizes the reduced density matrix, and $H_E$ is the entanglement Hamiltonian. Notice that in Eq.\eqref{eq:rhoA-ladder}, in the square bracket, there are two terms: the first term $\big|\phi\rangle_A\langle\phi\big|_A$ can be understood as a potential term, and the second term $\big|\phi\rangle_A\langle\phi^{\prime}\big|_A$ represents the hopping term between neighboring sites. Thus $H_E$ (at this order) is the Hamiltonian of the spin-1/2 antiferromagnetic quantum Heisenberg chain, \begin{align} \nonumber H_E=& \frac{J}{4} \sum_{n} \Big(2\sigma^z(n)\sigma^z(n+1)\nonumber \\ &+\sigma^+(n)\sigma^-(n+1)+\sigma^-(n)\sigma^+(n+1)\Big)+\ldots \nonumber \\ =& J \sum_{n}\vec S_{A}(n) \cdot \vec S_{A}(n+1)+\ldots \label{eq:entanglement-H-highT} \end{align} Thus, in the strong coupling limit, $J_\perp\gg J$, the reduced density matrix $\rho_A$ of chain $A$ is equal to the thermal density matrix $\rho_T$ of the chain with an effective (very high) temperature $T_{\rm eff}=2J_\perp\gg J$. In this limit the entanglement entropy equals the thermal entropy of the chain. The result we derived is a general consequence of the strong coupling limit and it is not peculiar to a ladder system. It is straightforward to see that, for instance, it also applies to a 2D bilayer antiferromagnet in the regime of strong inter-layer exchange interactions. In this regime the bilayer system is gapped and the ground state is also well approximated by a product of singlets on the inter-layer couplings. By construction, in all cases the resulting reduced density matrix always describes a system at very high temperature. Thus we obtain that the reduced density matrix is thermal with an effective local Hamiltonian which is that of a 2D quantum Heisenberg antiferromagnet. Since the effective temperature is much larger than the intra-layer exchange interaction, the reduced density matrix of layer $A$ describes the paramagnetic phase of a single-layer antiferromagnet. However, this result does not imply that the entanglement Hamiltonian must necessarily always be equal to the Hamiltonian of the subsystem. For instance, La\"uchli and Schliemann have also shown that at second order in perturbation theory the entanglement Hamiltonian acquires a next-nearest-neighbor exchange interaction. Higher order terms in perturbation theory will generate more non-local terms in the effective Hamiltonian.
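The strong-coupling result is also easy to test on small systems; the following sketch (an illustrative exact diagonalization of a three-rung ladder, in the spirit of the numerics of Ref.~[\onlinecite{Poilblanc2010}], with couplings and fitting procedure chosen only for illustration) computes $\rho_A$ of one leg, extracts $H_E=-\ln\rho_A$, and fits its traceless part to the chain Hamiltonian of Eq.\eqref{eq:entanglement-H-highT}:
\begin{verbatim}
# Sketch: exact diagonalization of a small Heisenberg ladder, testing whether
# H_E = -ln(rho_A) of one leg is close to beta_eff * (Heisenberg chain)
# in the strong-rung regime J_perp >> J.
import numpy as np

Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]])
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site system."""
    mats = [np.eye(2, dtype=complex)] * n
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heis(i, j, n):
    """Heisenberg exchange S_i . S_j on an n-site system."""
    return sum(site_op(S, i, n) @ site_op(S, j, n) for S in (Sx, Sy, Sz))

N, J, J_perp = 3, 0.1, 1.0               # rungs, leg and rung couplings
n = 2 * N                                # sites: A-leg 0..N-1, B-leg N..2N-1
H = J_perp * sum(heis(r, N + r, n) for r in range(N))
H += J * sum(heis(r, r + 1, n) + heis(N + r, N + r + 1, n) for r in range(N - 1))

w, v = np.linalg.eigh(H)
psi = v[:, 0].reshape(2**N, 2**N)        # rows: leg-A configs, cols: leg-B
rho_A = psi @ psi.conj().T

p, U = np.linalg.eigh(rho_A)
H_E = -U @ np.diag(np.log(p)) @ U.conj().T   # entanglement Hamiltonian

H_chain = J * sum(heis(r, r + 1, N) for r in range(N - 1))
# Fit of the traceless parts: H_E ~ const + beta_eff * H_chain
def t0(M):
    return M - np.trace(M) / len(M) * np.eye(len(M))
A, B = t0(H_E), t0(H_chain)
beta_eff = np.real(np.trace(A @ B) / np.trace(B @ B))
resid = np.linalg.norm(A - beta_eff * B) / np.linalg.norm(A)
print(beta_eff, resid)                   # small resid: thermal, local H_E
\end{verbatim}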
\section{Weak Coupling Limit} \label{sec:weak-coupling} From the discussion in Section \ref{sec:free-fermion} we see that for the free fermion model, in the strong tunneling limit, the reduced density matrix of one chain has a thermal form, $\rho_{A}=\rho_T$. However, in the same section we also saw that for the low energy modes of a chain of the ladder, i.e. those with wave vectors around $k=0$ and $k=\pi$, the reduced density matrix of one chain is also thermal regardless of the strength of the tunneling matrix element $t_\bot$. Also in Section \ref{sec:strong-coupling} we saw that in the case of antiferromagnets on ladders, the reduced density matrix of one chain of the ladder is also thermal in the strong inter-chain coupling limit, albeit with a temperature that is large compared with the scale of the entanglement Hamiltonian (which has the quantum Heisenberg form). By comparison with the results of Section \ref{sec:free-fermion} we would also expect that the reduced density matrix for the long-wavelength degrees of freedom of a chain of the ladder should also have a thermal form. This issue cannot be addressed by a direct calculation from the inter-chain strong-coupling regime of the ladder. In this section, we will consider the general case in the weak coupling limit. We consider a system with two critical chains with the same Hamiltonian which in the low-energy and long-wavelength limit is described by a conformal field theory (CFT) in $1+1$ dimensions. We will further assume that, when coupled by some relevant operator $O(A,B)$ of the CFT, the combined system flows to a fixed point with a finite energy gap in its spectrum. Our goal is to determine if the reduced density matrix of one subsystem, $A$, has a thermal form. Formally, the Hamiltonian of the coupled CFTs has the form \begin{equation} H=H_A+H_B+\int dx \; g\; O(A,B) \label{eq:HAB} \end{equation} where $H_A\simeq H_B$ describe the two critical subsystems (the ``legs''), $O(A,B)$ is a suitable local relevant operator, and $g$ is a coupling constant. We will assume that this operator has the form $O(A,B)=\phi(A) \phi(B)$ where $\phi(A)$ and $\phi(B)$ are local operators of $A$ and $B$ each with (the same) scaling dimension $\Delta_{\phi(A)}=\Delta_{\phi(B)}\equiv \Delta/2$. This perturbation is relevant if its scaling dimension $\Delta_{\phi(A)}+\Delta_{\phi(B)}=\Delta< 2$ (where $2$ is the space-time dimension). Under these assumptions this perturbation drives the combined system into a massive phase with a finite mass gap $M(g)$ which obeys the scaling relation $M(g) \sim \textrm{const.} \; g^{\nu z}$ where $\nu=1/(2-\Delta)$. These CFTs are ``relativistic'' and hence have dynamical exponent $z=1$. The case $\Delta=2$ is special in that the operator $O(A,B)$ is marginal. We will further assume that in this case it is marginally relevant. In the case of the fermionic ladder of Section \ref{sec:free-fermion} the CFT of the decoupled chains is a theory of two massless Dirac (Weyl) fermions (and hence with central charge $c=2$). The scaling dimension of the tunneling operator (i.e. the fermion mass term) is $\Delta=1$, which is relevant. In this case, the exponent is $\nu=1$. In the case of the two-leg ladder, the decoupled ladder is a theory of two spin-1/2 quantum Heisenberg antiferromagnetic chains, which are critical. The CFT of the spin-1/2 quantum Heisenberg antiferromagnetic chain is an SU(2)$_1$ Wess-Zumino-Witten (WZW) model.\cite{Affleck1986b} Hence the decoupled ladder is a product of two SU(2)$_1$ WZW models (with total central charge $c=2$). The most relevant operator in the inter-chain exchange interaction is the coupling of the N\'eel order parameters of each chain, $\vec N_A(x) \cdot \vec N_B(x)$. In the SU(2)$_1$ CFT the N\'eel order parameters of each chain are represented by the primary field whose scaling dimension is $1/2$ (for a detailed discussion see, e.g.
Ref.[\onlinecite{Fradkin1991}]). Thus, the scaling dimension of the inter-chain exchange interaction in the spin-1/2 ladder is $\Delta=1$, and the exponent is $\nu=1$ (albeit for different reasons than in the case of the fermionic ladder). \begin{figure}[t] \psfrag{tau}{$\tau$} \begin{center} \includegraphics[width=0.4\textwidth]{rhoA.eps} \end{center} \caption{Spacetime manifold with a cut required for the computation of $\rho_A$. The cut (the broken line) only affects the spacetime for subsystem $A$ (the outside cylinder) whose configurations are discontinuous across the cut. The configurations on region $B$ (the inside cylinder) are periodic and smooth. The interactions between the fields on regions $A$ and $B$ are depicted by the thin radial lines.} \label{fig:rhoA} \end{figure} The computation of the reduced density matrix of a subsystem (in this case a perturbed CFT) is in general a very difficult problem. In principle it is possible to compute the reduced density matrix using methods of quantum field theory which reduce this computation to an imaginary-time path integral over the field configurations $\phi(x,\tau)$, with $0\leq x \leq L$ and $0\leq \tau\leq 1/T$ (in the limits $L \to \infty$ and $1/T \to \infty$), with suitable boundary conditions. For the matrix element $\langle \phi_A^{\textrm{in}}(x)\big|\rho_A\big|\phi^{\textrm{out}}_A(x)\rangle$, the boundary conditions are that the field configurations for region $B$ are periodic in imaginary time, $\phi_B(x,0)=\phi_B(x,1/T)$, whereas on region $A$ the field configurations are discontinuous across the $x$ axis between $\tau=0$ and $\tau=1/T$, and hence satisfy $\phi_A(x,0)=\phi_A^{\textrm{in}}(x)$ and $\phi_A(x,1/T)=\phi_A^{\textrm{out}}(x)$ (see Ref. [\onlinecite{Calabrese2004}]). For the type of problems we are discussing here the result is a path integral on two concentric cylinders, each of length $L$ and circumference $1/T$, with the cylinder for region $A$ having a cut along the $x$ axis representing the discontinuity of the field configurations, as shown in Fig.\ref{fig:rhoA}. \begin{figure}[t] \begin{center} \includegraphics[width=0.4\textwidth]{trrhoA2.eps} \end{center} \caption{ The spacetime manifold needed for the computation of $\textrm{Tr}\rho_A^2$. The inside cylinders represent the replicated regions $B$ (which are integrated out) and the outer surface which wraps around them is the replicated $A$ region. The interactions between the $A$ and $B$ regions are shown as thin radial lines.} \label{fig:trrhoA^2} \end{figure} Alternatively, we can compute the moments of the reduced density matrix of the subsystem (needed for the computation of the R\'enyi and von Neumann entropies) using the replica trick\cite{Callan1994,Holzhey1994,Calabrese2004} \begin{equation} \textrm{Tr}\rho_A^n=\frac{\mathcal{Z}_n}{\mathcal{Z}^n} \label{eq:Z_n/Z^n} \end{equation} from which the R\'enyi entropies $S_n$ and the von Neumann entropy can be determined, \begin{equation} S_n=\frac{1}{1-n} \ln \textrm{Tr} \rho^n_A, \qquad S_{vN}=\lim_{n\to 1} S_n \end{equation} In Eq.\eqref{eq:Z_n/Z^n} we have denoted by $\mathcal{Z}$ the partition function of the coupled system (with coupling constant $g$) defined on a cylinder of length $L \to \infty$ and circumference $1/T \to \infty$. $\mathcal{Z}_n$ is the partition function of the coupled system (with subsystems $A$ and $B$) on a spacetime manifold obtained by stitching together $n$ copies of the path integral of the reduced density matrix.
In the case at hand this leads to the manifold shown in Fig.\ref{fig:trrhoA^2} (for the case $n=2$), where the $B$ regions are the inside cylinders whereas the $A$ region is obtained by gluing together the $n$ path integrals along the $n$ cuts. Therefore, $\mathcal{Z}_n$ is a path integral in which the fields on the $n$ copies of the region $B$ are periodic with period $1/T$. Instead, the fields on region $A$ are stitched together in such a way that they are periodic with period $n/T$ (see Fig. \ref{fig:trrhoA^2}). The partition function $\mathcal{Z}$ should not be confused with the normalization $Z$ of the reduced density matrix. This procedure requires the introduction of a set of twist fields that connect the Hilbert spaces two at a time. In the case of spatial cuts there are a finite number of such twist fields. In the case of a conformally invariant theory the twist fields behave as local operators with non-trivial scaling dimensions and uniquely determine the singularities of the path-integral.\cite{Calabrese2004} However in the case in which two conformal field theories (on regions $A$ and $B$) are coupled everywhere, we are led to the ``body'' cuts we described above (and shown in Fig.\ref{fig:rhoA}), which require the introduction of a line of twist fields defined along these cuts. The introduction of this line of twist fields complicates the calculation of the replicated partition function, and we will not pursue this approach here. Another option is to use the approach introduced by Qi, Katsura and Ludwig\cite{Qi2012} who made the observation that upon physically splitting regions $A$ and $B$ suddenly, i.e. upon setting the coupling constant $g\to 0$ after some (real) time $t=0$, the reduced density matrix of subsystem $A$ becomes the density matrix of the (now decoupled) system $A$. These authors used this approach to relate the entanglement entropy of a simply connected region of a 2D chiral topological phase to the behavior of its edge states. In this section we will formulate instead a scaling argument to generalize the results of Section \ref{sec:free-fermion}. There we saw that the reduced density matrix of the long-wavelength modes of a chain of a gapped free fermion system on a ladder is thermal and that the von Neumann entropy of the chain is the thermal entropy of an isolated chain at a finite effective temperature set by the gap in the fermion spectrum. We also saw that the resulting expressions for the entanglement entropies (von Neumann and R\'enyi) depend only on the Casimir term that gives the form of the finite-size correction to the free energy in a conformal field theory. The structure of the universal Casimir term is determined by conformal invariance and by the conformal anomaly\cite{Blote1986,Affleck1986} (through the central charge $c$). We are thus led to conjecture that this behavior of the entanglement entropies holds for any system of two coupled conformal field theories in a massive phase with a mass gap $M(g) \sim g^\nu$. The scaling argument is based on the observation that the quantity $\mathcal{F}_n=-T \ln \mathcal{Z}_n$ is the {\em free energy} of the replicated system and, as such, it is a function of $L$, $T$ and $n$ (as well as of the coupling constant $g$). The scaling behavior is expected to hold since we are dealing with a perturbed conformal field theory which, due to the effects of the relevant perturbation, is driven into a massive phase.
Since the coupled theory now has a finite mass gap $M(g)$ and a finite correlation length $\xi(g)$, the singular part of $\ln \mathcal{Z}$ of the coupled system, whose Hamiltonian is given in Eq.\eqref{eq:HAB}, should be, as in all theories of critical behavior,\cite{cardy-book} an extensive homogeneous function of the form (known in the theory of critical phenomena as Widom scaling) \begin{equation} \big(\ln \mathcal{Z}\big)_{\rm sing}= \textrm{const.} \; T L\; \xi^{-2}(g)\; f(g) \label{eq:scaling-lnZ} \end{equation} where $f(g)$ is a function such that $f(0)=1$. Turning now to the replicated partition function, $\mathcal{Z}_n$, we notice that on the $A$ region the stitched cuts act only at imaginary times $\tau=p/T$ (with $p=1,\ldots,n$) and for all values of $x$. The partition function of the replicated system, $\mathcal{Z}_n$, differs from the partition function of a single copy by the action of the lines of twist fields at $n$ equally spaced boundaries in imaginary time. We are interested in the limit in which both $L \to \infty$ and $T \to 0$ for fixed and finite $n$. In this limit $\mathcal{Z}_n$ should have a bulk contribution which is asymptotically the same as the bulk contribution of $n$ decoupled copies. By examining the free energies $-T \ln \mathcal{Z}_n$ and $-nT \ln \mathcal{Z}$, we notice that in the thermodynamic limit $L \to \infty$ and $T \to 0$, the bulk contributions should exactly cancel each other out and that the only surviving contributions come from the ``defects'' (associated with the twist fields). Thus, the piece we are interested in is a finite size correction in $\mathcal{Z}_n$ which defines a type of boundary field theory. Furthermore, since for any finite value of the coupling constant $g$ the theory is in a massive phase, the subtracted quantity $(\ln \mathcal{Z}_n- n\ln \mathcal{Z})$ (needed to compute the R\'enyi entropies) has contributions only from a strip of width $\xi=1/M(g)$ and length $L$. The length scale $\xi$ is the ``extrapolation length'' invoked in Refs.[\onlinecite{Gambassi2011,Qi2012}]. Therefore, again in the thermodynamic limit $L \to \infty$ and $T \to 0$, we expect to obtain the scaling behavior \begin{equation} \lim_{T\to 0, L \to \infty} \big(\ln \mathcal{Z}_n-\ln \mathcal{Z}^n\big)= L M(g)\; \tilde f_n(g) \label{eq:sing-Renyi} \end{equation} where $\tilde f_n(g)$ is another function with the limit $\tilde f_n(0)=f_n$. By demanding consistency with the results from Section \ref{sec:free-fermion}, we will conjecture that the quantities $f_n$ are given by \begin{equation} f_n=\frac{\pi c}{6v} \left(\frac{1}{n}-n\right) \label{eq:fn} \end{equation} where $c$ is the central charge of each of the two conformal field theories at $g=0$ and $v$ is the velocity of their long-wavelength modes. We are then led to conjecture that the von Neumann entanglement entropy of subsystem $A$ of a gapped system $A\cup B$ is extensive and has the scaling behavior \begin{equation} S_{vN}=\frac{\pi c}{3v} M(g) L \end{equation} where $M(g) \sim g^{2-\Delta}$, $c$ is the central charge of the decoupled CFTs which are coupled by a local relevant operator of scaling dimension $\Delta$, and $v$ is the velocity of the modes.
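For clarity, the algebraic step behind this identification can be made explicit (a short check, using $\tilde f_n(g)\simeq f_n$ in the regime of interest): from Eq.\eqref{eq:Z_n/Z^n}, $\ln \textrm{Tr}\rho_A^n=\ln\mathcal{Z}_n-n\ln\mathcal{Z}$, so that Eqs.\eqref{eq:sing-Renyi} and \eqref{eq:fn} give \begin{equation} S_{vN}=\lim_{n\to 1}\frac{\ln\mathcal{Z}_n-n\ln\mathcal{Z}}{1-n} \simeq \lim_{n\to 1}\frac{L\, M(g)}{1-n}\,\frac{\pi c}{6v}\left(\frac{1}{n}-n\right) =\frac{\pi c}{3v}\, M(g)\, L \end{equation} since $(1/n-n)/(1-n)=(1+n)/n\to 2$ as $n\to 1$.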
These arguments also imply that the R\'enyi entropies $S_n$ should be given by an expression of the form \begin{equation} S_n=\frac{\pi}{6} \frac{c}{v} \left(\frac{1}{n}+1\right) M(g) L \end{equation} \section{Gapless Coupled Luttinger Liquids} \label{sec:coupled-LL} For completeness, in this Section we will consider a situation where the coupling operator $O(A, B)$ is marginal and therefore will not open a gap in the spectrum. The entanglement entropy thus should be different from the thermal entropy. As a simple example we consider two Luttinger liquids coupled with a marginal operator. The R\'enyi entropy for this model has been calculated before by Furukawa and Kim, using the replica trick. They showed that the von Neumann entanglement entropy has, in addition to a term proportional to the length of the subsystem, a constant term determined by the Luttinger parameter.\cite{Furukawa2011} Here we will arrive at the same result using a different (and simpler) method. We will obtain this result directly by computing the reduced density matrix $\rho_A$. This can be done since the Luttinger liquid model is essentially a free compactified scalar (Bose) field. The Hamiltonian density for this model is \begin{equation} \mathcal{H}=\mathcal{H}_A+\mathcal{H}_B+\mathcal{H}_{AB} \label{eq:HLutt-total} \end{equation} where $\mathcal{H}_A$ and $\mathcal{H}_B$ are the Hamiltonian densities for the two Luttinger liquids \begin{equation} \mathcal{H}_{A,B}=\frac{v}{2} \left[\frac{\Pi^2}{K}+K(\partial_x \phi)^2\right] \label{eq:HLutt} \end{equation} In momentum space the Hamiltonians have the form \begin{equation} H_{A,B}= \sum_{p \neq 0}v|p| \left(a^{\dag}_p a_p+\frac{1}{2}\right)+\frac{v}{2LR^2}M^2+\frac{2 v}{L}R^2N^2 \label{eq:HLutt-momentum} \end{equation} where $\phi$ is a compactified boson with compactification radius $r$ and $R=r \sqrt{K}=1/\sqrt{4\pi}$, where $K$ is the Luttinger parameter, and $\Pi$ is the canonical momentum conjugate to the field $\phi$. Here $M$ and $N$ take integer values. (For a summary of the Luttinger model see, e.g., Refs. [\onlinecite{Fradkin1991}] and [\onlinecite{Gogolin1998}]). The coupling term $\mathcal{H}_{AB}$ takes the form \begin{equation} \mathcal{H}_{AB}= uK \partial_x \phi_A \partial_x \phi_B - \frac{u}{K} \Pi_A\Pi_B \label{eq:HABLutt} \end{equation} where $u$ is the coupling constant. In momentum space the inter-chain coupling Hamiltonian is \begin{align} H_{AB}=&\sum_{p\neq 0} u|p|(a_{p}^{\dag}b^{\dag}_{-p}+a_pb_{-p})\nonumber\\ -&\frac{u}{LR^2}M_AM_B+\frac{4 u}{L}R^2N_AN_B \label{eq:HLuttAB} \end{align} where $a_p$ and $b_p$ are the boson operators for chain $A$ and chain $B$, respectively. The inter-chain coupling term of Eq.\eqref{eq:HABLutt} has scaling dimension $2$ and hence it is a marginal operator. In the case of the Luttinger model it is an exactly marginal operator. Its main effects are to change (continuously) the scaling dimensions of the operators of the physical observables and to produce a finite renormalization of the velocities of the modes (see, e.g., Refs.[\onlinecite{Vishwanath2001}] and [\onlinecite{Emery2000}]). The coupled Luttinger models are stable provided $|u|<v$. In this regime, the ground state of this system is in the sector where the winding modes are absent, $N_A=N_B=M_A=M_B=0$. Thus, we only need to solve the following Hamiltonian: \begin{equation} H=\sum_{p\neq 0} \Big[ v|p|(a^{\dag}_p a_p+b^{\dag}_p b_p)+u|p|(a_pb_{-p}+a_p^{\dag}b_{-p}^{\dag})\Big] \end{equation} which is a bilinear form in the bosons.
Since the number of bosons in the separate chains is not conserved, the diagonalization of the Hamiltonian then proceeds through the standard Bogoliubov transformation \begin{align} a_p^{\dag}=&f_+c_p^{\dag}+f_-d_{-p}\nonumber\\ b_{-p}^\dagger=&f_+ d_{-p}^\dagger+f_- c_{p} \label{eq:Bogoliubov} \end{align} By diagonalizing the Hamiltonian, we obtain the new spectrum for the bosons \begin{equation} E(p)=|p|\sqrt{v^2-u^2} \end{equation} The parameters $f_\pm$ are given by \begin{equation} f_\pm^2=\frac{1}{2}\left((1-u^2/v^2)^{-1/2} \pm 1\right) \end{equation} Since the coupled Luttinger model has been reduced to a free bosonic model, the reduced density matrix for chain $A$ can be calculated in the same way as in the free fermion model. The entanglement Hamiltonian here too has the form $\widetilde H_E=\sum_{ij}\widetilde H_{ij}a^{\dag}_ia_j$ with \begin{equation} \widetilde H_{ij}=\Big(\ln [C^{-1}-1]\Big)_{ij} \end{equation} where $C_{ij}$ is the correlation matrix. Its matrix elements in momentum space (and in the thermodynamic limit $L \to \infty$) are \begin{equation} C_{pp^{\prime}}=2\pi \delta(p-p^{\prime})f_-^2 \end{equation} Since $f_-^2$ is a constant, the matrix $\widetilde H_{ij}$ is proportional to the identity matrix. Hence the entanglement Hamiltonian is proportional to the number operator and it is not equal to the Hamiltonian of one of the subsystems. Consequently the reduced density matrix is no longer thermal. This difference is also reflected in the different behavior of the von Neumann entanglement entropy $S_{vN}$ and the thermal entropy $S_T$. Let us define the parameter $\kappa$, \begin{equation} \kappa=\frac{K_{+}-K_{-}}{K_{+}+K_{-}}=\frac{u}{v} \end{equation} where \begin{equation} K_\pm=K \left(\frac{v\pm u}{v\mp u}\right)^{1/2} \label{eq:Kpm} \end{equation} are the Luttinger parameters for the fields $\phi_\pm=(\phi_A\pm \phi_B)/\sqrt{2}$ that diagonalize the Hamiltonian of the coupled system, Eq.\eqref{eq:HLutt-total}. We will now obtain the expressions of the entanglement entropies as functions of $\kappa$. In the weak coupling limit $|u|\ll v$ (i.e. $\kappa\ll 1$) and in momentum space, the correlation matrix is \begin{equation} C_{pp^{\prime}} \simeq \frac{\kappa^2}{4} 2\pi \delta(p-p^{\prime}) \end{equation} It follows that the R\'enyi entropies $S_n$ are equal to \begin{eqnarray} \nonumber S_n&=&\frac{1}{1-n}\ln \textrm{Tr}\rho_A^n\\ \nonumber &=&\frac{1}{1-n}\left(\frac{L}{a}-1\right)\ln \left[\frac{(1-e^{-E})^n}{1-e^{-nE}}\right]\\ \nonumber &\approx&\frac{1}{1-n}\left(\frac{L}{a}-1\right)\left(-n\frac{\kappa^2}{4}+\left(\frac{\kappa}{2}\right)^{2n}\right)\\ &=&-\gamma_n \frac{L}{a}+\gamma_n \end{eqnarray} where $E=\ln\left((4/\kappa^2)-1\right)$, $a$ is a short-distance cutoff and \begin{equation} \gamma_n=\frac{1}{(1-n)}\left[n\frac{\kappa^2}{4}-\left(\frac{\kappa}{2}\right)^{2n}\right] \end{equation} From the above equation, we see that besides a term proportional to the length $L$ of the system, there is also a constant term related to the Luttinger liquid parameter. When $n$ is large, $\gamma_n\approx\frac{n\kappa^2}{4(1-n)}$. These results agree with those of Ref.[\onlinecite{Furukawa2011}]. Similarly, the von Neumann entanglement entropy is \begin{equation} S_{vN}=\left(\frac{L}{a}-1\right)\frac{\kappa^2}{4}\left[1-\ln \left(\frac{\kappa^2}{4}\right)\right]=-\gamma_1 \frac{L}{a}+\gamma_1 \end{equation} where $\gamma_1=-\frac{\kappa^2}{4}(1-\ln\frac{\kappa^2}{4})$.
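As a quick numerical cross-check of these weak-coupling expressions (a minimal Python sketch; the value of $\kappa$ and the R\'enyi indices below are purely illustrative), one can compare the exact per-mode entropies obtained from $E$ with the approximate $\gamma_n$ quoted above:
\begin{verbatim}
import numpy as np

# Illustrative weak-coupling value of kappa = u/v (the expansion assumes kappa << 1)
kappa = 0.1
E = np.log(4.0/kappa**2 - 1.0)          # per-mode "energy" as defined in the text

def renyi_per_mode(n, E):
    # exact per-mode Renyi entropy: (1/(1-n)) ln[(1-e^-E)^n / (1-e^-nE)]
    return np.log((1.0 - np.exp(-E))**n / (1.0 - np.exp(-n*E))) / (1.0 - n)

def gamma_n(n, kappa):
    # small-kappa approximation quoted in the text
    return (n*kappa**2/4.0 - (kappa/2.0)**(2*n)) / (1.0 - n)

for n in (2, 3, 5):
    print(n, renyi_per_mode(n, E), -gamma_n(n, kappa))  # the two columns nearly coincide

# von Neumann (n -> 1) limit: exact per-mode value vs -gamma_1
pE = np.exp(-E)
S_vN_mode = -np.log(1.0 - pE) + E*pE/(1.0 - pE)
gamma_1 = -(kappa**2/4.0)*(1.0 - np.log(kappa**2/4.0))
print(S_vN_mode, -gamma_1)
\end{verbatim}
Multiplying the per-mode values by the number of modes, $L/a-1$, reproduces the extensive terms displayed above.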
We can see that the von Neumann entanglement entropy $S_{vN}$ for this system is extensive but it is totally different from the thermal entropy $S_T$ which is given by Eq.\eqref{eq:1D-cft-entropy} (with $c=1$). \section{Conclusions} \label{sec:conclusions} In conclusion, in this work we obtained the reduced density matrix in some two-leg ladder systems. We find that when two critical chains are coupled by a relevant operator which opens a finite energy gap in the spectrum, the reduced density matrix for one chain takes the same form as a thermal density matrix, with the energy gap playing the role of the effective temperature. This idea is verified in both the strong and weak coupling limits. We also noted that although the entanglement Hamiltonian is generally non-local, the reduced density matrix for the long-wavelength modes of the subsystem is of the Gibbs form, with a local effective Hamiltonian at a finite effective temperature. The fraction of modes which are thermal increases as the strength of the coupling increases. We showed that the entanglement von Neumann entropy for the long wavelength modes has a universal form which is equal to the thermodynamic entropy of the decoupled conformal field theory with central charge $c$. We verified the validity of this conjecture by explicit calculations in a gapped fermionic ladder system. The strong coupling results are generally valid and also hold in higher dimensional systems. \begin{acknowledgments} We thank P. Calabrese, P. Fendley, S. Kivelson, A. La\"uchli, I. Peschel and D. Poilblanc for illuminating discussions. We also thank H. Katsura and R. Lundgren for correspondence and for alerting us about Refs. [\onlinecite{Katsura2010,Lou2011,Lundgren2012}], respectively. This work was supported in part by the National Science Foundation, under grant DMR-1064319 at the University of Illinois. \end{acknowledgments} \bibliographystyle{apsrev}
\section{Introduction} Observations of galaxies at high redshifts have revealed a broad class of Ly$\alpha$-emitting galaxies at $z \sim 3$ to 5 (e.g., Tapken et al. 2007). The Ly$\alpha$ emission from these objects reaches us as light in the visible spectral band, enabling their study using large, ground-based optical telescopes, which in turn permits detailed spectroscopic studies of these galaxies. Observations of quasars at $z \sim 6$ (e.g., Fan et al. 2006) have revealed heavy elemental abundances exceeding solar values. We know that at least some of the galaxies at $z \sim 3$ to 5 have high abundances of heavy elements, facilitating the formation of dust. In homogeneous and static media, the dust particles impede the escape of Ly$\alpha$ emission from gas-rich galaxies, due to the small mean free paths of the photons, low temperatures of the gas and ultimately high probabilities of absorption. In clumpy media, dust can enhance the escape of Ly$\alpha$ photons relative to the continuum (Neufeld 1991; Hansen \& Oh 2006). Broadening of Ly$\alpha$ lines due to multiple scatterings is a slow process requiring a long diffusion time (though velocity fields in the interstellar medium may broaden the Ly$\alpha$ lines and reduce the diffusion time). Hence, there is special interest in the physical processes that are able to naturally produce extremely broad wings in Ly$\alpha$ lines, which may permit the photons to leave the host galaxy without requiring many scatterings (but see \S\ref{sect:discussion}). Among obvious mechanisms is the one at work in the unique massive binary SS433 (for a recent review, see Fabrika [2004]), with strongly blue- and redshifted H$\alpha$ and H$\beta$ lines, due to cooling and recombination of hydrogen in the baryon-dominated, precessing jet moving with velocity $\sim 0.26 c$. Such objects are very rare --- SS433 is the only such example in our Galaxy. Better-known Galactic sources of H$\alpha$ emission with broad line wings are the supernova remnants (SNRs) of Type Ia, emitting due to charge transfer (or ``charge exchange'') reactions between hydrogen atoms and protons in the blast wave penetrating the low-density ($\sim 1$ cm$^{-3}$), ambient gas. The widths of the H$\alpha$ lines correspond to Doppler broadening with velocities up to $\sim 5000$ km s$^{-1}$. The same process should produce not only H$\alpha$ emission, but photons in the Lyman series of hydrogen as well. Recently, some of these SNRs were observed in Ly$\beta$ using the {\it FUSE} spacecraft (Korreck et al. 2004; Ghavamian et al. 2007, hereafter G07). Knowledge of the cross sections of charge transfers to excited levels and excitation of the fast-moving hydrogen atoms permits us to find simple formulae relating the luminosities of SNRs in the broad H$\alpha$ and Ly$\alpha$ lines. The Ly$\alpha$ line should have a similar spectral distribution to the observed H$\alpha$ one in the broad wings, because the optical depth of the SNR for broad photons is negligibly small and the optical depth for coherent scattering (in the distant Lorentzian wings) in interstellar gas is low. We compile the existing data for core collapse and thermonuclear SNRs, including SNR 1987A (where the reverse shock is bright in the broad H$\alpha$ line), and present their theoretically expected, broad Ly$\alpha$ and Ly$\beta$ luminosities. For two objects, we present their expected broad Ly$\gamma$, H$\beta$ and P$\alpha$ luminosities.
Taking into account the supernova (SN) rates, the luminosities of the SNRs in H$\alpha$ and the duration of their active phase (for the charge transfer mechanism described), we find that --- even without discussing the cosmological evolution of the SN rates --- the expected broad Ly$\alpha$ is several orders of magnitude lower than the estimate of Shull \& Silk (1979), who treated fully radiative SNRs with low metallicities and velocities (20 to 120 km s$^{-1}$). We come to the conclusion that the contribution of both core collapse and thermonuclear SNRs to the Ly$\alpha$ luminosity of young galaxies is negligibly small. In \S\ref{sect:obs}, we gather a modest sample of 8 Galactic and Large Magellanic Cloud (LMC) remnants, and use them as a template for estimating the expected Ly$\alpha$, Ly$\beta$, Ly$\gamma$, H$\beta$ and P$\alpha$ production. In \S\ref{sect:ratios}, we compute the Ly$\alpha$/H$\alpha$, Ly$\alpha$/Ly$\beta$, Ly$\beta$/H$\alpha$, Ly$\gamma$/H$\alpha$, H$\beta$/H$\alpha$ and P$\alpha$/H$\alpha$ luminosity ratios. We present our results in \S\ref{sect:results} and discuss their implications in \S\ref{sect:discussion}. \section{Galactic \& LMC Remnants} \label{sect:obs} SNRs are the result of the interaction of SN ejecta with ambient matter. The nature of the interaction can be approximately categorized into several stages (Truelove \& McKee 1999, hereafter TM99; and references therein): the ejecta-dominated (ED) or freely-streaming stage; the Sedov-Taylor (ST) or self-similar stage; the pressure-driven snowplow (PDS) stage; and a possible, momentum-conserving snowplow stage (Cioffi, McKee \& Bertschinger 1988). Many of the well-studied, young SNRs like Kepler, Tycho and SN 1006 are intermediate between the ED and ST stages; this has been corroborated by the numerical studies of TM99, who showed that there is no sharp transition between the two stages. The transition from the ED to ST stage occurs on a timescale $t_{\rm{ST}} \sim t_{\rm{ch}}$; the characteristic timescale is \begin{equation} t_{\rm{ch}} = t_{\rm{ch},0} ~m^{5/6}_{\rm{ej}} E^{-1/2}_{51} n^{-1/3}_0, \label{eq:chartime} \end{equation} where $M_{\rm{ej}} = m_{\rm{ej}} M_{\sun}$ is the mass of the ejecta, $E = E_{51} 10^{51}$ erg is the energy of the supernova explosion, and $n_0$ is the density of the ambient medium (in cm$^{-3}$). The coefficient in equation (\ref{eq:chartime}) is $t_{\rm{ch},0}= 423$ yrs (TM99). If one instead argues that a mass $M_{\rm{ej}}$ of ambient gas (with density $\rho = 1.4 m_{\rm{H}} n_0$) is swept up in a time $t_{\rm{ch}}$, one gets $t_{\rm{ch},0}=186$ yrs. The PDS stage occurs at \begin{equation} t_{\rm{PDS}} \approx 30 t_{\rm{ch}} ~m^{-5/6}_{\rm{ej}} E^{5/7}_{51} n^{-5/21}_0 \zeta_m^{-5/14} \end{equation} after the explosion (Cioffi, McKee \& Bertschinger 1988; TM99), where $\zeta_m$ is a dimensionless metallicity correction factor. More precise estimates for $t_{\rm{ST}}$ and $t_{\rm{PDS}}$ are dependent upon the spatial density distributions of both the ejecta and the ambient matter. In the ED and ST stages, the emission from some SNRs is ``non-radiative'', meaning the timescale for thermal, radiative losses from the interacting gases is much longer than $t_{\rm{ch}}$.
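As a simple numerical illustration of equation (\ref{eq:chartime}) (a minimal Python sketch; the ejecta mass, explosion energy and ambient density below are fiducial, illustrative values rather than fits to any particular remnant):
\begin{verbatim}
def t_ch(m_ej=1.0, E_51=1.0, n_0=1.0, t_ch0=423.0):
    # Characteristic timescale in yr: t_ch,0 * m_ej^(5/6) * E_51^(-1/2) * n_0^(-1/3)
    return t_ch0 * m_ej**(5.0/6.0) * E_51**(-0.5) * n_0**(-1.0/3.0)

# Fiducial Type Ia-like numbers: 1.4 Msun of ejecta, 10^51 erg, n_0 = 1 cm^-3
print(t_ch(1.4))                 # ~560 yr with the TM99 coefficient (423 yr)
print(t_ch(1.4, t_ch0=186.0))    # ~250 yr with the swept-up-mass estimate (186 yr)
\end{verbatim}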
When the blast wave of the SNR slams into ambient gas consisting predominantly of hydrogen atoms, it emits in Balmer and Lyman lines consisting of a broad ($\sim 1000$ km s$^{-1}$) and a narrow ($\sim 10$ km s$^{-1}$) component (Chevalier \& Raymond 1978; Bychkov \& Lebedev 1979; Chevalier, Kirshner \& Raymond 1980; Heng \& McCray 2007, hereafter HM07; Heng et al. 2007, hereafter H07; G07; and references therein). These objects are known as ``Balmer-dominated'' SNRs. Positive detections of the line components are so far only from Galactic and LMC SNRs. Even though narrow Ly$\alpha$ emission is produced, it is not seen due to interstellar absorption; broad Ly$\alpha$ should be observed\footnote{G07 recorded {\it FUSE} spectra only in the 905 --- 1100 and 987 --- 1180 \AA bands.}. Non-thermal H$\alpha$ and Ly$\alpha$ emission has not been observed in studies of local starburst galaxies (e.g., Kunth et al. 2003). The narrow Balmer and Lyman lines are produced when the fast-moving ejecta directly excite stationary hydrogen atoms in the surrounding material. The broad lines are produced when the post-shock protons and atoms engage in charge transfer reactions, creating a population of post-shock atoms in broad velocity distributions known as ``broad neutrals'' (HM07; H07). In the frame of the observer, these broad neutrals move at a velocity $v_{\rm{B}} \lesssim 3v_s/4$, where $v_s$ is the shock velocity (of the blast wave). For $v_s \gtrsim 500$ km s$^{-1}$, the broad neutrals can produce Ly$\alpha$ that is blue- or redshifted out of resonance with the stationary atoms, hence providing an escape route for the photons. The ratio of broad to narrow H$\alpha$ (and Ly$\alpha$) emission is a function of the shock velocity (HM07; H07); it also depends on factors like the pre-shock neutral density and the degree to which the temperatures of the electrons and ions are equilibrated. The contribution from the broad H$\alpha$ line dominates when the shock velocity is $\lesssim 3000$ km s$^{-1}$ and when the narrow H$\alpha$ line assumes Case A conditions (HM07). Existing observations of H$\alpha$ and Ly$\beta$ emission from 8 Balmer-dominated SNRs are catalogued in Table \ref{table:obs}. At least 5 of these SNRs are believed to have resulted from Type Ia explosions. Only SNR 1987A has a clear core collapse origin; it is also the youngest SNR in the sample. To convert H$\alpha$ line fluxes to broad Ly$\alpha$ luminosities, we use \begin{equation} L_{\rm{Ly}\alpha} = 4 \pi d^2 F_{\rm{H}\alpha} ~\frac{\Re_{bn}}{1+\Re_{bn}} ~\Gamma_{\rm{Ly}\alpha/\rm{H}\alpha}, \label{eq:lum} \end{equation} where $d$ is the distance to the SNR and $\Re_{bn} \sim 1$ is the observed ratio of broad to narrow H$\alpha$ emission. The quantity $\Gamma_{\rm{Ly}\alpha/\rm{H}\alpha}$ is the ratio of Ly$\alpha$ to H$\alpha$ luminosities (see \S\ref{sect:ratios}). For SNRs in the LMC, we adopt $d=50$ kpc. In the case of the LMC remnant 0509---67.5, $\Re_{bn}$ is unavailable, so we quote an upper limit. If several values for the H$\alpha$ flux are given, we simply choose the brightest one (e.g., different emission knots of Kepler's SNR). For SNR 1987A, we take the observed value of $\Re_{bn} \sim 1$ (Heng et al. 2006), as opposed to the theoretically calculated one ($\sim 0.1$; HM07); we use the measured H$\alpha$ flux to obtain an estimate for the broad Ly$\alpha$ luminosity, as the measured Ly$\alpha$ flux is subjected to resonant scattering. 
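To make the bookkeeping in equation (\ref{eq:lum}) concrete, a minimal Python sketch follows; the flux, broad-to-narrow ratio and $\Gamma_{\rm{Ly}\alpha/\rm{H}\alpha}$ used here are placeholder values only (the appropriate $\Gamma_{\rm{Ly}\alpha/\rm{H}\alpha}$ for a given remnant follows from its shock velocity, see \S\ref{sect:ratios}):
\begin{verbatim}
import numpy as np

KPC_CM = 3.086e21   # centimetres per kiloparsec

def L_lya(F_Ha, d_kpc, R_bn, Gamma):
    # Eq. (eq:lum): 4 pi d^2 * F_Ha * [R_bn / (1 + R_bn)] * Gamma_{Lya/Ha}
    d = d_kpc * KPC_CM
    return 4.0*np.pi*d**2 * F_Ha * (R_bn/(1.0 + R_bn)) * Gamma

# Placeholder numbers for an LMC remnant at d = 50 kpc:
# F_Ha in erg cm^-2 s^-1, R_bn ~ 1, and an illustrative Gamma
print(L_lya(F_Ha=1e-13, d_kpc=50.0, R_bn=1.0, Gamma=10.0))   # ~1.5e35 erg/s
\end{verbatim}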
The Cygnus Loop is excluded from our sample due to its low shock velocity of $\sim 250$ km s$^{-1}$. In using equation (\ref{eq:lum}), we note that the measured H$\alpha$ and Ly$\beta$ fluxes are mostly from limb-brightened portions of the SNRs. Assuming spherical remnants, the intensities from these parts are brightened by factors $\sim (R/l_a)^{1/2}$ (Chevalier \& Raymond 1978; H07), where $R \sim 1$ pc is the typical radius of the SNR and $l_a \sim 10^{15}$ cm is the length scale for atomic interactions (assuming density $\sim 1$ cm$^{-3}$ and velocity $\gtrsim 1000$ km s$^{-1}$). Hence, the luminosity inferred might be over-estimated by a factor $\sim 50$. \section{THE Ly$\alpha$/Ly$\beta$ \& Ly$\alpha$/H$\alpha$ Ratios} \label{sect:ratios} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{f1.eps}} \caption{Luminosity ratios of Ly$\alpha$ to H$\alpha$, Ly$\alpha$ to Ly$\beta$, and Ly$\beta$ to H$\alpha$, denoted by $\Gamma_{\rm{Ly}\alpha/\rm{H}\alpha}$, $\Gamma_{\rm{Ly}\alpha/\rm{Ly}\beta}$ and $\Gamma_{\rm{Ly}\beta/\rm{H}\alpha}$, respectively, as a function of the shock velocity, $v_s$.} \label{fig:ratios} \end{figure} The ratio of Ly$\alpha$ to H$\alpha$ luminosities (Fig. \ref{fig:ratios}) is computed using the methods developed by HM07: \begin{equation} \Gamma_{\rm{Ly}\alpha/\rm{H}\alpha}\left(nl,n^\prime l^\prime\right) = \frac{\epsilon \left( R_{E,nl} + R_{T^*,nl} \right) + R_{T^*_0,nl}}{\epsilon \left( R_{E,n^\prime l^\prime} + R_{T^*,n^\prime l^\prime} \right) + R_{T^*_0,n^\prime l^\prime}}, \label{eq:lumratio} \end{equation} where $\epsilon = P_{T_0}/P_I$. The quantity $P_{T_0}$ is the probability for pre-shock atoms (found in a beam, i.e., at one velocity) to engage in charge transfer reactions with ions (thereby creating broad neutrals), while $P_I$ is the probability for the broad neutrals to be ionized by both electrons and ions. Physically, broad H$\alpha$ emission is produced in two ways: charge transfer of the pre-shock atoms to excited states of broad neutrals (with a rate coefficient, in cm$^3$ s$^{-1}$, of $R_{T^*_0,nl}$); creation of broad neutrals in the ground state, followed by excitation ($R_{E,nl}$) and/or charge transfers between them and ions to excited states ($R_{T^*,nl}$). Hence, $\epsilon$ is a measure of how efficient the first contribution is relative to the second one. At low shock velocities ($v_s \lesssim 1000$ km s$^{-1}$), $\epsilon \gtrsim 3$ --- charge transfer to the ground state is the dominant process, and it is efficient to create broad neutrals that subsequently get excited. We emphasize that equation (\ref{eq:lumratio}) is only valid in the case of optically-thin plasmas. For Ly$\alpha$, we consider charge transfers (with protons) and excitations (by electrons and protons) to the sub-levels $2p$, $3s$ and $3d$. For H$\alpha$, we consider the same processes, but for the sub-levels $3s$, $3p$ and $3d$. Hence, we compute $\Gamma_{\rm{Ly}\alpha/\rm{H}\alpha}(nl,n^\prime l^\prime)$ for $nl=2p+3s+3d$ and $n^\prime l^\prime=3s+3d+ B_{3p,2s} 3p$, where the factor of $B_{3p,2s}=0.1183$ is the fraction of radiative decays from $3p$ that result in H$\alpha$, with the remainder going to Ly$\beta$. For $\Gamma_{\rm{Ly}\alpha/\rm{Ly}\beta}(nl,n^\prime l^\prime)$, we consider instead $n^\prime l^\prime=(1-B_{3p,2s})3p$. Cascade contributions from higher levels are $\lesssim 5\%$ effects. 
For example, contributions to H$\alpha$ from $n=4$ are at most $\sim (3/4)^3 B_{4s,3p} B_{3p,2s} \approx 2\%$; other contributions from $4p$, $4d$ and $4f$ are at the $\lesssim 1\%$ level. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{f2.eps}} \caption{Luminosity ratios of Ly$\gamma$ to H$\alpha$, H$\beta$ to H$\alpha$, and P$\alpha$ to H$\alpha$, denoted by $\Gamma_{\rm{Ly}\gamma/\rm{H}\alpha}$, $\Gamma_{\rm{H}\beta/\rm{H}\alpha}$ and $\Gamma_{\rm{P}\alpha/\rm{H}\alpha}$, respectively, as a function of the shock velocity, $v_s$. Only charge transfers to excited states are considered in these ratios, so they should be used with caution; we use only the luminosity ratios for $v_s \gtrsim 5000$ km s$^{-1}$.} \label{fig:ratios2} \end{figure} One can calculate the luminosity ratios for Ly$\gamma$/H$\alpha$, H$\beta$/H$\alpha$ and P$\alpha$/H$\alpha$ as well. However, the cross sections for impact excitation of hydrogen atoms by protons to the sub-levels $4s$, $4p$, $4d$ and $4f$ are unavailable at the time of writing\footnote{We note that Mart\'{i}n (1999) has computed the cross sections for impact excitation of hydrogen atoms by protons to the sub-levels $4s$, $4p$, $4d$ and $4f$, but only for energies of 30 to 200 keV.}. The cross sections for charge transfers to these excited states, however, are available. At $v_s \gtrsim 5000$ km s$^{-1}$, $\epsilon \lesssim 0.5$, and we may obtain luminosity ratios for Ly$\gamma$/H$\alpha$, H$\beta$/H$\alpha$ and P$\alpha$/H$\alpha$ to within a factor of 2 (Fig. \ref{fig:ratios2}). A list of the relevant radiative decay fractions, $B_{nl,n^\prime l^\prime}$, is given in Table \ref{table:einstein} (see Appendix \ref{append:einstein} for details). In principle, if the charge transfer and excitation cross sections are known to higher levels in the relevant velocity range, one can calculate the luminosity ratios for other lines in the Balmer, Lyman, Paschen and other series of hydrogen. We use the atomic cross sections of Balan\c{c}a, Lin \& Feautrier (1998), Barnett et al. (1990), Belk\'{i}c, Gayet \& Salin (1992), Harel, Jouin \& Pons (1998) and Janev \& Smith (1993), as well as those found in the {\it NIST Electron-Impact Cross Section Database}. Details concerning the cross sections are given in Appendix \ref{append:atomic}, where we provide fitting functions to them. We consider a pure hydrogen gas and include charge transfer, excitation and ionization events between hydrogen atoms, electrons and protons. We employ the thin shock approximation, such that the relative velocity between atoms and ions is $3v_s/4$; this has been shown by H07 to be an excellent approximation. At the shock velocities considered, $500 \lesssim v_s \lesssim 10,000$ km s$^{-1}$, the significance of impact excitation by electrons is comparable to that by protons and cannot be neglected. We do not consider broad emission from within the shock front (see Appendix \ref{append:within}). \section{Results} \label{sect:results} The luminosity ratios $\Gamma_{\rm{Ly}\alpha/\rm{H}\alpha}$, $\Gamma_{\rm{Ly}\alpha/\rm{Ly}\beta}$ and $\Gamma_{\rm{Ly}\beta/\rm{H}\alpha}$ are shown in Fig. \ref{fig:ratios}. In the shock velocity range $1000 \lesssim v_s \lesssim 4000$ km s$^{-1}$, the differences in $\Gamma_{\rm{Ly}\alpha/\rm{H}\alpha}$ and $\Gamma_{\rm{Ly}\beta/\rm{H}\alpha}$ between the $\beta=0.1$ and 1 cases are factors $\sim 2$, and are due to the temperature sensitivity of impact excitation and ionization of hydrogen atoms by electrons.
This may present a direct and unique opportunity to measure $\beta$. We emphasize that our calculations are only valid for the broad lines; the narrow lines have optical depths $0 \le \tau \le 1$ and Lyman line trapping is a non-negligible effect (Ghavamian et al. 2001, 2002). For example, narrow Ly$\beta$ photons may be converted into narrow H$\alpha$ photons and two-photon (2$\gamma$) continuum. In addition, narrow Ly$\alpha$ cannot propagate easily through the interstellar gas. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{f3.eps}} \caption{Left: Expected Ly$\alpha$ luminosities, $L_{\rm{Ly}\alpha}$, from the SNR sample as a function of the shock velocity, $v_s$. Right: $L_{\rm{Ly}\alpha}$ plotted versus the age of the SNR.} \label{fig:lum} \end{figure} We use the data in Table \ref{table:obs} to compute the expected luminosity of Ly$\alpha$, $L_{\rm{Ly}\alpha}$ (Fig. \ref{fig:lum}). In estimating a range for $L_{\rm{Ly}\alpha}$, we only consider the observational error bars in $F_{\rm{H}\alpha}$ (if available) and allow for a generous range in temperature equilibration between electrons and protons, $0.1 \le \beta \le 1$, where $\beta = T_e/T_p$. Hence, the displayed error bars for $L_{\rm{Ly}\alpha}$ are not formal ones. We are aware of the recent work by Ghavamian, Laming \& Rakowski (2007), who showed that there is an empirical correlation between $\beta$ and $v_s$ --- namely, $\beta=1$ for $v_s \lesssim 400$ km s$^{-1}$ and $\beta \propto v^{-2}_s$ for $v_s \gtrsim 400$ km s$^{-1}$. For the LMC remnants detected in Ly$\beta$ by G07, we compute the range in $L_{\rm{Ly}\alpha}$ by considering both the H$\alpha$ and Ly$\beta$ fluxes. We note that the computed $(2.96 \pm 0.05) \times 10^{35}$ erg s$^{-1}$ value for broad Ly$\alpha$ in SNR 1987A is comparable to the $\sim 10^{36}$ erg s$^{-1}$ figure predicted by Michael et al. (2003). Note that the condition $L_{\rm{L}\beta}/L_{\rm{H}\alpha} \ge \lambda_{\rm{H}\alpha}/\lambda_{\rm{L}\beta} \approx 6.4$ is not true in general. This is because the cross section for charge transfers to the level $3p$ falls below that to $3s$ at a relative velocity $\sim 2000$ km s$^{-1}$ (Fig. \ref{fig:pct3}). In Table \ref{table:predict}, we make some predictions for the Ly$\beta$, Ly$\gamma$, H$\beta$ and P$\alpha$ luminosities. It is puzzling that the theoretically expected Ly$\beta$ luminosities are about 10 to 20 times higher than those inferred from the observations of G07. In other words, the observed H$\alpha$ fluxes in the LMC SNRs are comparable to the observed Ly$\beta$ ones. We are not certain why this is the case, but we note that Ly$\beta$ is more susceptible to absorption by interstellar dust than H$\alpha$, and we suspect this effect to play at least some part in the discrepancy. Moreover, the H$\alpha$ and Ly$\beta$ observations were taken at different epochs (Tuohy et al. 1982 versus Ghavamian et al. 2007). As described in \S\ref{sect:ratios}, we are only able to provide rough predictions for Ly$\gamma$, H$\beta$ and P$\alpha$, and only in the cases of 0509---67.5 and SNR 1987A, as these SNRs have shock velocities $\gtrsim 5000$ km s$^{-1}$. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{f4.eps}} \caption{Luminosity ratios of the 2$\gamma$ continuum to H$\alpha$, $\Gamma_{2\gamma/\rm{H}\alpha}$. 
Only charge transfers and excitations to the $2s$ level are considered (see text).} \label{fig:ratios3} \end{figure} We can make some estimates for the expected 2$\gamma$ continuum as well, which is produced in the $2s \rightarrow 1s$ transition. In the case of an optically-thin plasma, the $2s \rightarrow 2p$ transition is negligible as collisions are unimportant. In Table \ref{table:predict}, we make conservative predictions for the 2$\gamma$ continuum luminosity from both broad and narrow atoms, but consider only charge transfers and excitations to the $2s$ level. Additional contributions from $n=3$ range from $\sim (2/3)^3 B_{3p,2s} \approx 4\%$ (Case A) to $\sim (2/3)^3 \approx 30\%$ (Case B); those from $n=4$ are $\ll 1\%$. We only wish to make the point that $L_{2\gamma}$ is comparable to $L_{\rm{Ly}\alpha}$ and $L_{\rm{H}\alpha}$, and thus the 2$\gamma$ transitions are a potentially observable source of continuum. In the case of Galactic and LMC SNRs, the {\it Galaxy Evolution Explorer (GALEX)} is in principle able to measure the low-frequency wing of 2$\gamma$ decay, using its 135---175 and 175---280 nm channels. By comparing H$\alpha$ and 2$\gamma$ emission, it will be possible to directly estimate emission from the SNR shock due to broad Ly$\alpha$ (and the contribution of narrow Ly$\alpha$ that cannot reach us). This is an additional, unique source of information on the detailed physical processes in shocks. Several sources of uncertainty can affect the predicted luminosities. These include uncertainties in the age of the SNR, $t_{\rm{age}}$, the distance to it, $d$, the measured {\it non-radiative} component of the H$\alpha$ flux, the temperature equilibration between electrons and ions, and the atomic cross sections used. Uncertainties in the cross sections are typically $\sim$ 10\%. For charge transfer to excited states, the uncertainty can be as much as 30\% (R.K. Janev 2007, private communication). The predicted luminosities have not been corrected for reddening by dust. \section{Discussion} \label{sect:discussion} SNR 1987A is a unique example of a Balmer-dominated SNR. By virtue of adiabatic expansion cooling, the SN ejecta comprises mostly neutral hydrogen; it rushes out at velocities $\gtrsim 12,000$ km s$^{-1}$ (Michael et al. 2003; Heng et al. 2006). The non-radiative H$\alpha$ and Ly$\alpha$ result from the interaction of the ejecta with the {\it reverse shock} and not the blast wave (Heng 2007). As SNR 1987A has a Type II origin, it is possible to produce Balmer and Lyman lines via this mechanism; this is obviously not possible with Type Ia's. Smith et al. (2005) have predicted that the H$\alpha$ and Ly$\alpha$ emission from the reverse shock of SNR 1987A is shortlived ($\sim$ 2012 to 2014) and will be extinguished by the increasing flux of extreme ultraviolet (EUV) and X-ray photons traveling into the pre-shock region and ionizing the atoms --- pre-ionization. This is marginal evidence that broad Ly$\alpha$ from SNRs of a core collapse origin will be short-lived, i.e., $\lesssim 100$ years. In general, for this scenario to work, some interaction of the blast wave with the ambient material is needed, but if it is too strong the pre-shock gas becomes ionized (R. Chevalier 2007, private communication). 
To further investigate the viability of the short-lived, non-radiative Ly$\alpha$ hypothesis, we examine the sample of optically identified SNRs by Matonick \& Fesen (1997), who studied an ensemble of 12 SNR samples from different galaxies, including the Small Magellanic Cloud (SMC), LMC, M31 and M33, with distances up to 7 Mpc. In galaxies like NGC 2403, M81 and M101, the SNRs are associated with star-forming regions and most of them probably have a Type Ib/c origin. In most cases, the measured H$\alpha$ flux is $\sim 10^{-15}$ erg cm$^{-2}$ s$^{-1}$ and the inferred luminosity is $\sim 10^{36}$ erg s$^{-1}$. Since Matonick \& Fesen (1997) did not provide H$\alpha$ line profiles, it is impossible to estimate the proportion of the H$\alpha$ emission that is non-radiative. Furthermore, their selection criterion is based on picking out objects with [S~{\sc ii}]/H$\alpha \ge 0.45$, which will not detect SNRs with predominantly non-radiative H$\alpha$ emission. Shull \& Silk (1979) computed the temporally-averaged Ly$\alpha$ luminosity from radiative shocks of a population of Type II SNRs, assuming low metallicities, to be \begin{equation} L_{\rm{SS79}} = 3 \times 10^{43} \mbox{ erg s}^{-1} E^{3/4}_{51} n^{-1/2}_0 \dot{N}_{\rm{SN}}, \end{equation} where $\dot{N}_{\rm{SN}}$ is the number of supernovae (SNe) per year. They considered SNRs in both the ST and the PDS stages, and $v_s = 20$ to 120 km s$^{-1}$. Charlot \& Fall (1993) remark that the numerical coefficient in the preceding equation is about 40\% lower if one assumes solar metallicity. A very conservative upper limit on the broad Ly$\alpha$ from the Matonick \& Fesen (1997) samples can be obtained if one generously allows for all of the H$\alpha$ to be broad, for the shock velocities to be low ($\sim 500$ km s$^{-1}$) such that $\Gamma_{\rm{Ly}\alpha/\rm{H}\alpha} \sim 100$, and for the non-radiative emission to last $\sim 10^4$ years. Even in this very unlikely scenario, $L_{\rm{Ly}\alpha} \sim 10^{42}$ erg s$^{-1}$ is only about $0.1 L_{\rm{SS79}}$. Hence, our charge transfer mechanism is not energetically competitive. There is the possibility that a SNR can produce both radiative and non-radiative components of H$\alpha$. Well-known examples are Kepler (Fesen et al. 1989; Blair, Long \& Vancura 1991) and RCW 86 (Long \& Blair 1990; Smith 1997). There is also the possibility that the non-radiative emission from the SNR is inhibited. For example, Foster (2005) observed and studied the Galactic SNR 3C 434.1 ($t_{\rm{age}} \approx 25,000$ yr; $d = 4.5 \pm 0.9$ kpc; possible Type Ib/c), which formed inside the eastern portion of a pre-existing stellar-wind bubble of interior density $\sim 0.1$ cm$^{-3}$. Strong H$\alpha$ emission ($(6.1 \pm 0.4) \times 10^{36}$ erg s$^{-1}$) is measured from the eastern side; it is believed to be from a radiative shock. Being farther away from the western wall of the bubble, the shock on the western side is essentially still in free expansion and produces no measurable, non-radiative H$\alpha$. Our SNR sample and the considerations of SNR 1987A lead us to believe that if the short-lived emission contribution from Type Ib/c and Type II SNRs in young galaxies exists, it has a luminosity of \begin{equation} L_{\rm{Ly}\alpha,\rm{CC}} \sim 10^{38} \mbox{ erg s}^{-1} ~t_{\rm{emit},2} \dot{N}_{\rm{SN}}, \end{equation} where $t_{\rm{emit}} = t_{\rm{emit},2} 100$ years is the length of time we expect core collapse SNRs to produce shock-induced Ly$\alpha$ emission.
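The order-of-magnitude bookkeeping behind the comparison with $L_{\rm{SS79}}$ above can be summarized in a few lines of Python (a minimal sketch; all inputs are the generous, illustrative values quoted in the text, not measurements of any individual remnant):
\begin{verbatim}
# Conservative upper limit on broad Ly-alpha from non-radiative shocks, assuming
# every remnant's H-alpha is broad, Gamma ~ 100 (slow shocks) and a non-radiative
# lifetime of ~1e4 yr, compared with the Shull & Silk (1979) estimate.
N_SN   = 1.0      # SNe per year (cancels in the ratio)
L_Ha   = 1e36     # erg/s, typical H-alpha luminosity per remnant (Matonick & Fesen 1997)
Gamma  = 100.0    # generous Ly-alpha / H-alpha ratio for ~500 km/s shocks
t_emit = 1e4      # yr, generous duration of the non-radiative phase

L_broad = L_Ha * Gamma * (t_emit * N_SN)         # ~1e42 erg/s of broad Ly-alpha
L_SS79  = 3e43 * 1.0**0.75 * 1.0**(-0.5) * N_SN  # E_51 = n_0 = 1
print(L_broad, L_SS79, L_broad/L_SS79)           # ratio of a few per cent, i.e. <~ 0.1
\end{verbatim}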
On the other hand, thermonuclear SNRs are expected to have $t_{\rm{emit}} = t_{\rm{emit},4} 10^4$ years $\sim t_{\rm{PDS}}$. However, they are also believed to be much scarcer at high redshifts. For example, Dahlen et al. (2004) estimate that only 5\% to 7\% of available progenitors explode as Type Ia SNRs. Therefore, the expected luminosity is \begin{equation} L_{\rm{Ly}\alpha,\rm{Ia}} \sim 10^{38} \mbox{ erg s}^{-1} ~t_{\rm{emit},4} \dot{N}_{\rm{SN},-2}, \end{equation} where $\dot{N}_{\rm{SN},-2}$ is the number of SN per year in units of 0.01. We conclude that for both core collapse and thermonuclear SNRs, the expected luminosity from broad Ly$\alpha$ is only a $\sim 0.001\%$ effect, compared to the mechanism of Shull \& Silk (1979). Ly$\alpha$ line luminosities from $z \sim 3$ to 5 galaxies have been observationally determined to be $\sim 10^{42}$ to $10^{43}$ erg s$^{-1}$ (e.g., Saito et al. 2007), in general agreement with theoretical expectations. In addition, the lifetime of an emitting atom is approximately the length of time corresponding to one atomic length scale, and is only $t_{\rm mfp} \sim l_a/v_s \sim 10^7 n^{-1}_0 v^{-1}_{s,8}$ s, where $v_{s,8} = v_s/1000$ km s$^{-1}$ (H07). We have restricted our analysis to homogeneous and static media. Though broad, non-thermal Ly$\alpha$ emission has never been observed, these photons {\it are} produced in SNRs and hence the non-radiative Ly$\alpha$ luminosity is a part of the intrinsic Ly$\alpha$ spectrum of young galaxies. The optical depth for a broad photon in the line wings is (Verhamme, Schaerer \& Maselli 2006) \begin{equation} \tau \sim 0.26 T^{-1/2}_4 N_{{\rm H},20} v^{-2}_{w,8} b_{12.85}, \end{equation} where $T = 10^4 T_4$ K and $N_{\rm H} = N_{{\rm H},20} 10^{20}$ cm$^{-2}$ are the temperature and obscuring hydrogen column density of the medium, respectively. The turbulent velocity in the interstellar medium is $b = 12.85 b_{12.85}$ km s$^{-1}$ (Verhamme, Schaerer \& Maselli 2006), while $v_w = 1000 v_{w,8}$ km s$^{-1}$ is the velocity of the emitting atom in the line wings. Multiple scattering is important for $\tau \gtrsim 0.3$ (Chevalier 1986) and any realistic treatment of non-thermal Ly$\alpha$ lines in a young galaxy has to include radiative transfer effects, which we have neglected in our analysis. \begin{acknowledgements} K.H. is grateful to: Ratko Janev, C.D. Lin and Fernando Mart\'{i}n for invaluable advice regarding atomic cross sections; Dick McCray, Roger Chevalier, Rob Fesen, Bob Kirshner and Bryan Gaensler for engaging discussions; Christian Balan\c{c}a for providing atomic cross sections in an electronic form; John Raymond and Mike Shull for helpful suggestions following their careful reading of the manuscript. He acknowledges the Max Planck Institutes for Astrophysics (MPA) and Extraterrestrial Physics (MPE) for their generous support and kind hospitality during the months of June to October 2007, where he was a visiting postdoctoral scientist. He is indebted to the tranquil Bavarian countryside for necessary moments of academic solitude, and to his wife, Stefanie, for her steadfast support. \end{acknowledgements}
\section{Introduction} The appearance of Skyrme theory \cite{skyrme} disclosed very neatly the fundamental role of topology in high energy physics (see for instance \cite{nuc0,nuc1,nuc2,nuc3,nuc4,nuc5}). First of all, low energy QCD is very well described by the Skyrme theory \cite{witten0}. Secondly, the solitons of this Bosonic theory (\textit{Skyrmions}) describe Baryons. Thirdly, the Baryon charge is the winding number of the configuration (see \cite{witten0,finkrub,manton,skyrev1,giulini,bala0,ANW,guada} and references therein). These arguments are more than enough to justify a profound analysis of the Skyrme model. Indeed, extensive studies of the latter can be found in the literature (as the previous references clearly show). Not surprisingly\footnote{At least taking into account that it is reasonable to expect that the theory describing the low energy limit of QCD should be a quite complicated one.}, the Skyrme field equations are a very hard nut to crack and, until very recently, no analytic solution was available. Nevertheless, many numerical studies have shown that the Skyrme model provides results in good agreement with experiments. Despite the success of the model and the existence of several solutions in different contexts, the analysis of their phenomenological aspects can seldom be carried out in an analytic manner. For an analytic solution and a relevant study in compact manifolds see \cite{newref1}. The gauged Skyrme model (which describes the coupling of a $U(1)$ gauge field with the Skyrme theory) also has very important applications in the analysis of electromagnetic properties of Baryons and in the decay of nuclei in the presence of defects (see \cite{witten0,Witten,gipson,goldstone,dhoker,rubakov} and references therein). Obviously, from the point of view of constructing analytic solutions, the $U(1)$ gauged Skyrme model is even worse than the original Skyrme theory. Until very recently, no explicit topologically non-trivial solution was available. Thus, topological configurations of this theory have been deeply analyzed numerically (see \cite{gaugesky1,gaugesky2} and references therein). Here we list three relevant problems in the applications of (gauged) Skyrme theory to high energy phenomenology which will be the focus of the present paper. \textbf{1)} \textit{Finite density effects and the compression modulus}: Finite density effects (and, in general, the phase diagrams) in the Skyrme model have historically been a very difficult topic to analyze with analytic methods. The lack of explicit solutions with topological charge living within a finite flat box with the spherical Skyrme ansatz is the origin of the problem. Some numerical results with the use of the spherical Skyrme ansatz are presented in \cite{klebanov,chemical1,chemical2,chemical3,chemical4} and references therein. Due to the fact that both finite volume effects and the isospin chemical potential break spherical symmetry, it is extremely difficult to improve the pioneering results in \cite{klebanov,chemical1,chemical2,chemical3,chemical4} without changing the original Skyrme ansatz. The main problem in this group is certainly the \textit{compression modulus} \cite{probcompression1,probcompression2,probcompression3} (to be defined precisely in the next section) which, roughly speaking, has to do with the derivative of the total energy of the Skyrmions with respect to the volume. The experimental value is different from the value derived using the original spherical hedgehog ansatz.
The usual way to compute the compression modulus is to assume the Derrick rescaling for the reaction of nuclear matter to the action of external pressure (see the detailed discussion in \cite{Adam}). The resulting value is higher than the experimental value\footnote{The following analysis suggests that this ``uniform rescaling'' assumption could be too strong. Indeed, the results at the end of section 3 show that Skyrme theory, when analyzed at finite density, provides values of the compression modulus which are close to the experimental one.}. A closely related technical difficulty is that, if one uses the original hedgehog ansatz for the Skyrmion, it is very unclear even \textit{how to define} the compression modulus, since the original Skyrme ansatz describes a spherical Skyrmion living within an infinite volume, so that computing the derivatives of the energy with respect to the volume becomes a subtle question. The best way out of this difficulty would be, of course, to have a consistent ansatz for a Skyrmion living within a finite volume. Relevant numerical results in the literature on that problem are presented in \cite{ref1,ref2,ref3,ref4} where non-spherical ans\"atze have been considered. \textbf{2)} \textit{Existence of Skyrmion-antiSkyrmion bound states/resonances}: multi-Skyrmionic bound states of Baryon charge higher than 1 are known to exist and they have been successfully constructed numerically (see, for instance, \cite{manton} and references therein). However, until very recently, the problem of the existence of Skyrmion-antiSkyrmion bound states and resonances did not possess the place it deserved in the literature on the Skyrme model, despite its importance. We can refer to an early work on the subject in \cite{newref2}. Here we shall study analytic results on the properties of such configurations. Experimentally, Baryon-antiBaryon bound states and resonances do exist \cite{expbab1,expbab2,expbab3}: these should correspond to Skyrmion-antiSkyrmion bound states. Such bound states are very difficult to find since the corresponding classical solutions are not static. Indeed, at a semi-classical level, Skyrmion-antiSkyrmion bound states should look like time-periodic solutions in which a Skyrmion and an antiSkyrmion move periodically around the center of mass of the system. These kinds of time-dependent configurations are difficult to analyze even numerically. \textbf{3)} \textit{Conductivities}: the analysis of electron transport through gauged Skyrmions is a very interesting open issue. At the semi-classical level, one should solve the Dirac equation for the electron in the background of the gauged Skyrmion and, from the solution of the Dirac equation, one could compute the conductivity. It would be especially interesting to be able to describe complex structures assembled from neutrons and protons interacting with electromagnetic fields (such as slabs of Baryons interacting with the corresponding Maxwell field). In nuclear physics and astrophysics these structures are called \textit{nuclear pasta} and they are very relevant in a huge variety of phenomena (see, for instance, \cite{nuclearpasta1,nuclearpasta2,nuclearpasta3,nuclearpasta4} and references therein). On the other hand, there are very few ``first principles'' computations of the transport properties of these complex structures (see \cite{conductivity} and references therein). At first glance, one could think that this kind of complex structure is beyond the reach of the gauged Skyrme model.
In order to achieve a deeper understanding of the above open issues, it is mandatory to be able to construct analytic examples of gauged multi-Skyrmionic configurations. In \cite{canfora2,canfora3,canfora4,canfora4.5,yang1,canfora6,canfora6.5,cantalla4,cantalla5} a strategy has been developed to generalize the usual spherical hedgehog ansatz to situations without spherical symmetry, both in Skyrme and Yang-Mills theories (see \cite{canYM1,canYM2,canYM3} and references therein). Such a framework also allows one to analyze configurations living within a finite region of space. As far as the three open issues described above are concerned, this tool (which will be called here the ``generalized hedgehog ansatz'') gave rise to the first derivation not only of the critical isospin chemical potential beyond which the Skyrmion living in the box ceases to exist, but also of the first explicit Skyrmion-antiSkyrmion bound states. Thus, this approach appears to be suitable to deal with the problems mentioned previously. Interestingly enough, the generalized hedgehog ansatz can be adapted to the $U(1)$ gauged Skyrme model \cite{Fab1,gaugsk}: it allowed the construction of two types of gauged solitons. Firstly, gauged Skyrmions living within a finite volume. Secondly, smooth solutions of the $U(1)$ gauged Skyrme model whose periodic time-dependence is protected by a topological conservation law (as they cannot be deformed to static solutions). Here we demonstrate that by using this strategy it is possible to derive an explicit expression for the compression modulus. The transport properties of these gauged Skyrmions can also be analyzed. In this work we also present a simple estimate of the order of magnitude of the correction to the electron conductivities due to the interactions of the electrons with the baryonic environment. As far as transport properties are concerned, we will work at the level of approximation in which the electrons perceive the gauged Skyrmions as a classical background. Large \textbf{N} arguments strongly suggest that this is a very good approximation\footnote{In the leading 't Hooft approximation, in meson-Baryon scattering, the heavy Baryon (the Skyrmion in our case) is unaffected and, basically, only the meson can react. This is even more so in the electron-Baryon semiclassical interactions due to the huge mass difference between the Skyrmion and the electron. In this approximation, electrons perceive the Skyrmions as an effective medium.} (for a detailed review see chapter 4 and, in particular, section 4.2 of the classic reference \cite{skyrev0}). This paper is organized as follows: in the second section the action for the gauged Skyrme model and our notations will be introduced. In the third section, the method to deal with Skyrmions at finite density will be described: as an application, a closed formula for the compression modulus of Skyrmions living within a cube will be derived. In the fourth section, the gauged Skyrmions at finite density will be considered. In the fifth section, the transport properties associated with electrons propagating in the Baryonic environment corresponding to the finite-density Skyrmions are analyzed. In section \ref{conclusions}, we draw some concluding ideas. \section{The $U(1)$ Gauged Skyrme Model} \label{model} We consider the $U(1)$ gauged Skyrme model in four dimensions with global $SU(2)$ isospin internal symmetry and we will follow closely the conventions of \cite{Fab1,gaugsk}.
The action of the system is \begin{align} S& =\int d^{4}x\sqrt{-g}\left[ \frac{K}{2}\left( \frac{1}{2}\mathrm{Tr}\left( R^{\mu }R_{\mu }\right) +\frac{\lambda }{16}\mathrm{Tr}\left( G_{\mu \nu }G^{\mu \nu }\right) \right) -\frac{1}{4}F_{\mu \nu }F^{\mu \nu }\right] \ , \label{sky1} \\ R_{\mu }& =U^{-1}D_{\mu }U\ ,\ \ G_{\mu \nu }=\left[ R_{\mu },R_{\nu }\right] \ ,\ D_{\mu }=\nabla _{\mu }+\kappa A_{\mu }\left[ t_{3},\ \cdot \ \right] \ , \label{sky2} \\ U& \in SU(2)\ ,\ \ R_{\mu }=R_{\mu }^{j}t_{j}\ ,\ \ t_{j}=\mathbbmtt{i}\,\sigma _{j}\ , \label{sky2.5} \end{align} where $\sqrt{-g}$ is the square root of (minus) the determinant of the metric, $F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }$ is the electromagnetic field strength, $\nabla _{\mu }$ is the partial derivative, the positive parameters $K$ and $\lambda $ are fixed experimentally, $\kappa $ is the coupling of the $U(1)$ field and $\sigma _{j}$ are the Pauli matrices. In our conventions $c=\hbar =\mu _{0}=1$, the space-time signature is $(-,+,+,+)$ and Greek indices run over space-time. The stress-energy tensor is \begin{equation} T_{\mu \nu }=-\frac{K}{2}\mathrm{Tr}\left[ R_{\mu }R_{\nu }-\frac{1}{2}g_{\mu \nu }R^{\alpha }R_{\alpha }\right. \,+\left. \frac{\lambda }{4}\left( g^{\alpha \beta }G_{\mu \alpha }G_{\nu \beta }-\frac{g_{\mu \nu }}{4}G_{\sigma \rho }G^{\sigma \rho }\right) \right] +\bar{T}_{\mu \nu }, \notag \label{timunu1} \end{equation} with \begin{equation} \bar{T}_{\mu \nu }=F_{\mu \alpha }F_{\nu }^{\;\alpha }-\frac{1}{4}F_{\alpha \beta }F^{\alpha \beta }g_{\mu \nu }. \end{equation} The field equations are \begin{equation} D^{\mu }\left( R_{\mu }+\frac{\lambda }{4}\left[ R^{\nu },G_{\mu \nu }\right] \right) =0\ , \label{nonlinearsigma1} \end{equation} \begin{equation} \nabla _{\mu }F^{\mu \nu }=J^{\nu }\ , \label{maxwellskyrme1} \end{equation} where $J^{\nu }$ is the variation of the Skyrme action (the first two terms in Eq. (\ref{sky1})) with respect to $A_{\nu }$ \begin{equation} J^{\mu }=\frac{\kappa K}{2}\mathrm{Tr}\left[ \widehat{O}R^{\mu }+\frac{\lambda }{4}\widehat{O}\left[ R_{\nu },G^{\mu \nu }\right] \right] \ , \label{current} \end{equation} where \begin{equation*} \widehat{O}=U^{-1}t_{3}U-t_{3}\ . \end{equation*} In the following sections, \textit{gauged Skyrmions} and \textit{gauged time-crystals} will be the terms describing the two different kinds of gauged topological solitons appearing as solutions of the coupled system expressed by Eqs. (\ref{nonlinearsigma1}) and (\ref{maxwellskyrme1}). The aim of the present work is to show that the Skyrme model and its gauged version are able to give good predictions for important quantities such as the compression modulus and the conductivity. \subsection{Topological charge} The proper way to define the topological charge in the presence of a minimal coupling with a $U(1)$ gauge potential has been constructed in \cite{Witten} (see also the pedagogical analysis in \cite{gaugesky1}) \begin{equation} \begin{split} W=& \frac{1}{24\pi ^{2}}\int_{\Sigma }\epsilon ^{ijk}\mathrm{Tr}\left\{ \left( U^{-1}\partial _{i}U\right) \left( U^{-1}\partial _{j}U\right) \left( U^{-1}\partial _{k}U\right) \right. - \\ & \left. \partial _{i}\left[ 3\kappa A_{j}t_{3}\left( U^{-1}\partial _{k}U+\partial _{k}UU^{-1}\right) \right] \right\} . \end{split} \label{new4.1} \end{equation} In the literature one usually only considers situations where $\Sigma $ is a space-like three-dimensional hypersurface. In these situations $W$ is the Baryon charge.
In fact it has been recently shown \cite{Fab1,gaugsk} that it is very interesting to also consider cases in which $\Sigma $ is time-like or light-like. Indeed, (whether $\Sigma $ is light-like, time-like or space-like) configurations with $W\neq 0$ cannot decay into the trivial vacuum $U=\mathbb{\mathbf{I}}$. Hence, if one is able to construct configurations such that $W\neq 0$ along a time-like $\Sigma $, then the corresponding gauged soliton possesses a topologically protected time-dependence as it cannot be continuously deformed into static solutions (since all the static solutions have $W=0$ along a time-like $\Sigma $). The natural name for these solitons is ``(gauged) time-crystals'' \cite{Fab1,gaugsk}. We can adopt the standard parametrization of the $SU(2)$-valued scalar $U(x^{\mu })$ \begin{equation} U^{\pm 1}(x^{\mu })=Y^{0}(x^{\mu })\mathbb{\mathbf{I}}\pm Y^{i}(x^{\mu })t_{i}\ ,\ \ \left( Y^{0}\right) ^{2}+Y^{i}Y_{i}=1\,, \label{standnorm} \end{equation} where $\mathbb{\mathbf{I}}$ is the $2\times 2$ identity and \begin{align} Y^{0}& =\cos C\ ,\ Y^{i}=n^{i}\cdot \sin C\ , \label{pions1} \\ n^{1}& =\sin F\sin G\ ,\ \ n^{2}=\sin F\cos G\ ,\ \ n^{3}=\cos F\ , \label{pions2} \end{align} with the help of which the standard baryon density (in the absence of a $U(1)$ field) reads $\rho _{B}=12\sin ^{2}C\sin F\ dC\wedge dF\wedge dG$. If we want a non-vanishing topological charge in this setting we have to demand $dC\wedge dF\wedge dG\neq 0$. \section{Skyrmions at finite volume} In the present section, the Skyrmions living within a finite flat box constructed in \cite{Fab1} will be slightly generalized. These explicit Skyrmionic configurations allow the explicit computation of the total energy of the system and, in particular, of its dependence on the Baryon charge and on the volume. Hence, among other things, one can arrive at a well-defined closed formula for the compression modulus. The following ansatz for the representation of the $SU(2)$ group is the starting point of the analysis \begin{equation} G=\frac{q\phi - p\gamma}{2},\; \tan F=\frac{\tan H}{\sin A}, \; \tan C= \tan A \sqrt{1+\tan ^{2}F}\ , \label{pions2.25} \end{equation} where \begin{equation} A=\frac{p\gamma +q\phi }{2\,}\ ,\ \ H=H\left( r,z\right) \ ,\ \ p,q\in \mathbb{N} \ . \label{pions2.26} \end{equation} Moreover, it can be verified directly that the topological density $\rho _{B}$ is non-vanishing. From the standard parametrization of $SU(2)$ \cite{Shnir} it follows that \begin{equation} 0\leq \gamma \leq 4\pi ,\quad 0\leq \phi \leq 2\pi \ , \label{domain} \end{equation} while the boundary condition for $H$ will be discussed below; in any case, its range is in the segment $H \in [0,\frac{\pi}{2}]$, while for $r$ we assume $0\leq r\leq 2 \pi$. With the parametrization introduced by \eqref{pions2.25} and \eqref{pions2.26} the $SU(2)$ field assumes the form \begin{equation} U = \pm \begin{pmatrix} \cos (H) e^{\frac{1}{2} i (p \gamma+q \phi )} & \sin (H) e^{\frac{1}{2} i (p \gamma -q \phi )} \\ -\sin (H) e^{-\frac{1}{2} i (p \gamma -q \phi )} & \cos (H) e^{-\frac{1}{2} i (p\gamma +q \phi )} \end{pmatrix}. \end{equation} Hereafter, we just consider the plus expression for $U$ throughout the whole range of the variables $\gamma$ and $\phi$, which makes it a continuous function of the latter. \subsection{Skyrmions in a rectangular cuboid} We can extend the results presented in \cite{Fab1} by considering a cuboid with three different sizes along the three axes instead of a cube.
Thus, we will use three - different in principle - fundamental lengths characterizing each direction, $l_{1}$, $l_{2}$ and $l_{3}$, inside the metric. The corresponding line element is \begin{equation} ds^{2}=-dz^{2}+l_{1}^{2}dr^{2}+l_{2}^{2}d\gamma ^{2}+l_{3}^{2}d\phi ^{2}\ . \label{line3l} \end{equation The profile function that we consider depends only on one variable\footnote On the other hand, when the coupling with Maxwell field is neglected, the profile can depend on time as well. In this case, one gets an effective sine-Gordon theory for the profile $H(t,r)$ \cite{Fab1}.}, $H=H(r)$. We note that in this section we do not take into account the effects of an electromagnetic field, hence we have $A_{\mu }=0$ in the relations of the previous sections. Under the aforementioned conditions the profile equation reduces to \begin{equation} \label{profnoA} H^{\prime \prime }= \frac{\lambda l_1^2 p^2 q^2}{4 \left(l_2^2 \left(4 l_3^2+\lambda q^2\right)+\lambda l_3^2 p^2\right)} \sin (4 H). \end{equation} It is impressive that such a system, in flat space, can lead to an integrable equation for the profile. This is owed to the existence of a first integral of \eqref{profnoA} that is given by \begin{equation} \label{intofmol3} (H^{\prime})^2 \left(l_2^2 \left(4 l_3^2+\lambda q^2\right)+\lambda l_3^2 p^2\right)+\frac{\lambda l_1^2 p^2 q^2}{8} \cos (4 H) = I_0 . \end{equation} The above relation can be written as \begin{equation} \label{firstint2} (\tilde{H}^{\prime})^2 - k \sin (\tilde{H})^2 = \tilde{I}_0, \end{equation} where \begin{equation} \tilde{H}= 2 H, \quad k = \frac{\lambda l_1^2 p^2 q^2}{l_2^2 \left(4 l_3^2+\lambda q^2\right)+\lambda l_3^2 p^2}, \quad \tilde{I}_0= \frac{8 I_0-\lambda l_1^2 p^2 q^2}{8 l_2^2 l_3^2+2 \lambda l_2^2 q^2+2 \lambda l_3^2 p^2}. \end{equation} Subsequently, we can bring \eqref{firstint2} into the form \begin{equation} \label{firstint3} \frac{d \tilde{H}}{d r} = \pm \sqrt{\tilde{I}_0} \left(1 - \tilde{k}(\sin \tilde{H})^2 \right)^{\frac{1}{2}} \end{equation} where we have set $\tilde{k} = - k/\tilde{I}_0$. The last expression leads to \begin{equation} \label{finalsolH} \sqrt{\tilde{I}_0} \int_0^r d \bar{r} = \pm \int_0^{\tilde{H}} \left(1 - \tilde{k}(\sin \bar{H})^2 \right)^{-\frac{1}{2}} d \bar{H}, \end{equation} where we have introduced the bars in order to distinguish the variables that are integrated from the $r$ and $\tilde{H}(r)$ which are the boundaries of the two integrals. Of course we consider $\tilde{I}_0 > 0$. As a starting point for the integration we take $r=0$, $\tilde{H}(0)=0=H(0)$, although we could also set $r=0$, $\tilde{H}=\pi$ ($H(0)=\frac{\pi}{2}$). The difference between the two boundary choices is just in the sign of the topological charge. These boundary values, for $H$ and those that we have seen in \eqref{domain} for $\gamma$ and $\phi$ lead to a topological charge $W= p q$ in \eqref{new4.1} (for $A_\mu =0$). In the right hand side of \eqref{finalsolH} we recognize the incomplete elliptic integral defined as \begin{equation} F(\tilde{H}|\tilde{k}) = \int_0^{\tilde{H}} \left(1 - \tilde{k}(\sin \bar{H )^2 \right)^{-\frac{1}{2}} d \bar{H} . \end{equation} The solution to the differential equation \eqref{firstint3} is just the inverse of this function, which is called the Jacobi amplitude $\mathrm{am =F^{-1}(\tilde{H}|\tilde{k})$. So, in terms of our original equation \eqref{profnoA} the solution reads \begin{equation} \label{soltodifeq} H(r) = \pm \frac{1}{2} \mathrm{am}(\tilde{I}_0^{1/2} r|\tilde{k}). 
\end{equation} Finally, by considering the positive branch, the value of the constant of integration $\tilde{I}_0$ is governed by the boundary condition $H(2\pi) \frac{\pi}{2}$. In the special case when $l_1=l_2=l_3=l$ we obtain the particular case which was studied in \cite{Fab1}. Here, we give emphasis to this general case and, especially, we want to study the most energetically convenient configurations and the way in which they are affected by the anisotropy in the three spatial directions. In Fig. \ref{Fig0} we see a schematic representation of the finite box we are considering for this Skyrmionic configuration with a baryon number $B =p q$. The physical configuration that we try to reproduce with this model is the structure of matter in nuclear pasta. The latter is a dense form of matter that is encountered inside the crusts of neutron stars. Thus, we make this \textquotedblleft crude" (but analytic in its results) model trying to imitate with these $p$ and $q$ Skyrmionic layers a particular form of this matter that is encountered in nature. The dimensions of the configuration are governed by the three numbers $l_{1}$, $l_{2}$ and $l_{3}$. Of course we do not expect the binding energies of such a configuration to be at the same level with those produced by the usual spherically symmetric ansatz. This is something that we examine thoroughly in the next section. \begin{figure}[h] \centering \includegraphics[width=.40\textwidth]{box.eps} \caption{The finite box of the Skyrmionic system.} \label{Fig0} \end{figure} \subsubsection{The energy function} We proceed to study the energy function for the solution that we previously introduced. The constant of motion $I_0$ in \eqref{intofmol3} can be expressed in terms of the other constants of the model if we consider the boundary values $H(0)=0$ and $H(2\pi)=\pi/2$. By solving \eqref{intofmol3} with respect to $H^{\prime }$ and integrating the resulting relation with respect to $r$ we obtain \begin{equation} \label{intofint} 2\sqrt{2} \int_{a}^{b} \left(\frac{l_2^2 \left(4 l_3^2+q^2\right)+l_3^2 p^2} 8 I_0-l_1^2 p^2 q^2 \cos (4 H)}\right)^{1/2} dH = \int_{0}^{2\pi} dr \end{equation} which leads to \begin{equation} \label{l1tox} l_1 = \frac{x \mathrm{K}\left(-x^2\right) \sqrt{l_2^2 \left(4 l_3^2+q^2\right)+l_3^2 p^2}}{\pi p q}, \end{equation} where $\mathrm{K}$ is the complete elliptic integral of the first kind and x $ is related to $I_0$ through \begin{equation} \label{I0tox} I_0 = \frac{l_1^2 p^2 q^2 \left(x^2+2\right)}{8 x^2}. \end{equation} The pure time component of the energy momentum tensor in our case is \begin{equation} T_{00} = \frac{K}{8 V^2} \left[\left(l_2^2 \left(4 l_3^2+\lambda q^2\right)+\lambda l_3^2 p^2\right) H^{\prime 2 }+ \frac{\lambda l_1^2 p^2 q^2}{4}\sin ^2(2 H) + V^2 \left(\frac{p^2}{l_2^2}+\frac{q^2}{l_3^2}\right \right]. \end{equation} As a result we can calculate the energy from the expression \begin{equation} E= \int_{\Sigma} \sqrt{-{}^{(3)}g} T_{00}d^3 x = 8\pi^2 V \int_0^{\frac{\pi} 2}} \frac{T_{00}}{H^{\prime }} dH . \end{equation} We can write the integrand as a pure function of $H$ with the help of \eqref{intofmol3} and obtain - in principle - the energy as a function of the $l_i$' s, $p$ and $q$. However, due to the fact that relation \eqref{l1tox} cannot be straightforwardly inverted so as to substitute $I_0$ as a function of $l_1$ (through \eqref{l1tox} and \eqref{I0tox}) we choose to express the energy function in terms of $x$ instead of $l_1$. 
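As a consistency check of the above relations, one can verify numerically that, once $x$ is chosen and $l_1$ is fixed through \eqref{l1tox}, the quadrature \eqref{finalsolH} indeed returns $H(2\pi)=\pi/2$. A minimal Python sketch of this check (the values of $p$, $q$, $l_2$, $l_3$ and $x$ below are arbitrary illustrative choices, and $\lambda=1$ is assumed, as in what follows):
\begin{verbatim}
import numpy as np
from scipy.special import ellipk
from scipy.integrate import solve_ivp

# arbitrary illustrative values; lambda = 1 is assumed
p, q, l2, l3, x = 3, 3, 1.0, 1.0, 0.5

D   = l2**2*(4*l3**2 + q**2) + l3**2*p**2      # combination entering Eq. (l1tox)
l1  = x*ellipk(-x**2)*np.sqrt(D)/(np.pi*p*q)   # Eq. (l1tox)
I0t = (l1*p*q)**2/(x**2*D)                     # tilde(I)_0 implied by Eqs. (l1tox) and (I0tox)
kt  = -x**2                                    # tilde(k) = -k/tilde(I)_0

# H'(r) = (1/2) sqrt(I0t) sqrt(1 - kt sin^2(2H)),  H(0) = 0  (positive branch)
rhs = lambda r, H: 0.5*np.sqrt(I0t)*np.sqrt(1.0 - kt*np.sin(2.0*H)**2)
sol = solve_ivp(rhs, (0.0, 2*np.pi), [0.0], rtol=1e-10, atol=1e-12)

print(sol.y[0, -1], np.pi/2)   # the profile indeed reaches pi/2 at r = 2*pi
\end{verbatim}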
In what follows, we assume the values $K=2$ and $\lambda=1$ for the coupling constants \cite{skyrev1}, so that lengths are measured in fm and the energy in MeV. In this manner we get \begin{equation} \label{energyfull} E(x,l_2,l_3,p,q) = \frac{\pi ^2 p q \sqrt{l_2^2 \left(4 l_3^2+q^2\right)+l_3^2 p^2}}{l_2 l_3} \frac{\mathrm{K}(-x^2) \left(\frac{4 l_2^2 x^2 \mathrm{K}(-x^2)}{p^2}-\frac{\mathrm{K}(-x^2) \left(q^2-4 l_3^2 x^2\right)}{q^2}+2 \mathcal{E}(-x^2)\right)}{x \left|\mathrm{K}(-x^2)\right|}, \end{equation} where $\mathcal{E}$ is the complete elliptic integral of the second kind. The variable $x$, as we discussed, is linked to $l_1$ through \eqref{l1tox} with the help of the boundary conditions of the problem. If we fix all variables apart from $x$ and plot the energy as a function of the latter, we get what we see in Fig. \ref{Fig1}. In this graph, we observe that the minimum of the energy ``moves" to smaller values of $x$ as the box is enlarged in the two directions of $l_2$ and $l_3$. However, we have to keep in mind that the remaining length, namely $l_1$, also depends on the values of $l_2$ and $l_3$ through \eqref{l1tox}. For the particular set of values used in the figure we can see that as $l_2$ and $l_3$ rise, $l_1$ also moves to larger values. In the next section we study more thoroughly the function $E(x,l_2,l_3,p,q)$ and its derivatives near the values that correspond to the most energetically convenient configurations. \begin{figure}[tbp] \centering \includegraphics[width=.40\textwidth]{Energy_of_x.eps} \caption{The plots of $E(x)$ (in MeV) for three sets of values: (a) $p=q=3$, $l_2=l_3=1$ fm (dashed line), (b) $p=q=3$, $l_2=l_3=2$ fm (dotted line) and (c) $p=q=3$, $l_2=l_3=3$ fm (continuous line). The minimum of the energy corresponds to $l_1=0.227$ fm, $l_1=0.323$ fm and $l_1=0.42$ fm, respectively. } \label{Fig1} \end{figure} \subsubsection{The energy as a function of the three $l_i$'s} Let us see how the energy behaves in terms of the three fundamental lengths $l_1$, $l_2$ and $l_3$ under the condition that we fix $p$ and $q$ to specific values. In Table \ref{tab1} we can observe the location of the minimum of the energy for specific values of $p$ and $q$. \begin{table}[tbp] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $E_{min}$ (MeV) & $p$ & $q$ & $l_1$ (fm) & $l_2$ (fm) & $l_3$ (fm) \\ \hline 167 & 1 & 1 & 0.251 & 0.413 & 0.413 \\ 334 & 1 & 2 & 0.251 & 0.413 & 0.826 \\ 669 & 2 & 2 & 0.251 & 0.826 & 0.826 \\ 835638 & 100 & 50 & 0.251 & 41.306 & 20.653 \\ 835638 & 50 & 100 & 0.251 & 20.653 & 41.306 \\ \hline \end{tabular} \end{center} \caption{Minimum of the energy for several values of $p$ and $q$.} \label{tab1} \end{table} First, we have to note that the interchange of $p$ and $q$ makes no significant difference: whether one takes $p=100$ and $q=50$ or $p=50$ and $q=100$, the only thing that happens is that the values of the corresponding lengths $l_{2}$ and $l_{3}$ are also interchanged. However, the value that the energy assumes remains the same. Another thing that we have to notice is that, if we calculate the percentage difference of the minimum of the energy from the topological bound $E_{0}=12\pi ^{2}|B|=12\pi ^{2}pq$, in all cases we get $\Delta (\%)=\frac{E-E_{0}}{E_{0}}(\%)=41.11\%$. Thus, we see that the minimum of the energy $E(l_{1},l_{2},l_{3})$ has a fixed deviation from the Bogomol'nyi bound irrespective of the $p$, $q$ configuration. We also observe that this most energetically convenient situation arises when the box assumes specific optimal lengths; a quick arithmetic check of the quoted deviation is sketched below.
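The fixed deviation can be checked directly from the entries of Table \ref{tab1}; the following short Python sketch uses only those numbers and reproduces the $\sim 41\%$ figure up to the rounding of $E_{min}$:
\begin{verbatim}
import numpy as np

# (p, q, E_min in MeV) taken from Table 1; the bound is E_0 = 12*pi^2*|B| = 12*pi^2*p*q
table = [(1, 1, 167.0), (1, 2, 334.0), (2, 2, 669.0), (100, 50, 835638.0)]

for p, q, E in table:
    E0 = 12*np.pi**2*p*q
    print(p, q, round(100*(E - E0)/E0, 2))   # ~41% deviation in every case
\end{verbatim}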
In particular we see that the relation $\frac{l_{2}}{l_{3}}=\frac{p}{q}$ is satisfied in all cases, while $l_{1}$ remains fixed in a single \textquotedblleft optimal" value. By comparing with the usual spherically symmetry Skyrmionic configuration in an infinite volume, this higher deviation from the Bogomol'nyi bound may be anticipated due to the ``compression" of the system into a finite volume. It is also interesting to study the first derivatives of the energy with respect to the three lengths of the box. To this end, and since we have $E$ in terms of $x$ which also involves $l_{1}$, $l_{2}$ and $l_{3}$ we need to write \begin{equation} \begin{split} dE(x,l_{2},l_{3})& =\frac{\partial E}{\partial x}dx+\frac{\partial E} \partial l_{2}}dl_{2}+\frac{\partial E}{\partial l_{3}}dl_{3} \\ & =\frac{\partial E}{\partial x}\frac{\partial x}{\partial l_{1} dl_{1}+\left( \frac{\partial E}{\partial x}\frac{\partial x}{\partial l_{2}} \frac{\partial E}{\partial l_{2}}\right) dl_{2}+\left( \frac{\partial E} \partial x}\frac{\partial x}{\partial l_{3}}+\frac{\partial E}{\partial l_{3 }\right) \\ & =d\tilde{E}(l_{1},l_{2},l_{3}). \end{split \end{equation In Fig. \ref{Fig2} we can see the general behavior of three $\frac{\partial \tilde{E}}{\partial l_{i}}$ for fixed $l_{1}=0.251$ in terms of $l_{2}$ and l_{3}$ near the values where the energy assumes its minimum. On the other hand, in Fig. \ref{Fig3} we plot the derivatives of the energy with respect to $x$ after fixing $l_{2}$ and $l_{3}$ to their minimum value for various p $, $q$ configurations. We can see that $\frac{\partial E}{\partial l_{2}}$ and $\frac{\partial E}{\partial l_{3}}$ are indistinguishable when $p=q$. On the other hand if $q>p$ the $\frac{\partial E}{\partial l_{3}}$ line runs closer to the vertical axis than $\frac{\partial E}{\partial l_{2}}$ and vice versa when $p>q$. Finally, before proceeding to study the energy as a function of $p$ and $q$, we give in Fig. \ref{Fig4} its graph in terms of l_{2}$ and $l_{3}$ when $l_{1}$ assumes the value that corresponds to the minimum of the energy. \begin{figure}[h] \centering \hspace{8mm} \subfloat[][Derivative of the energy with respect to $l_1$]{\includegraphics[width=.40 \textwidth]{derEl1.eps}} \hspace{0mm} \subfloat[][Derivative of the energy with respect to $l_2$]{\includegraphics[width=.41 \textwidth]{derEl2.eps}} \hspace{0mm} \subfloat[][Derivative of the energy with respect to $l_3$]{\includegraphics[width=.40\textwidth]{derEl3.eps}} \caption{Derivative of the energy, in terms of the basic dimensions of the Skyrmionic box, near its minimum value. The behaviour of the three $\frac \partial E}{\partial l_i}$ is the same irrespectively of $p$ and $q$. The only thing that changes is the scaling of the figures since $l_2$ and $l_3$ and $\frac{\partial E}{\partial l_i}$ assume larger values as $p$ and $q$ increase.} \label{Fig2} \end{figure} \begin{figure}[tbp] \centering \hspace{8mm} \subfloat[][$\frac{\partial E}{\partial l_i}$ for $p=1$, $q=2$]{\includegraphics[width=.40 \textwidth]{derElixpq12.eps}} \hspace{0mm} \subfloat[][$\frac{\partial E}{\partial l_i}$ for $p=2$, $q=2$]{\includegraphics[width=.41 \textwidth]{derElixpq22.eps}} \hspace{0mm} \subfloat[][$\frac{\partial E}{\partial l_i}$ for $p=100$, $q=50$]{\includegraphics[width=.40\textwidth]{derElixpq10050.eps}} \caption{Derivative of the energy with respect to the $l_i$'s given as function of $x$. 
In every case the dashed line corresponds to $\frac{\partial E}{\partial l_1}$, the dotted to $\frac{\partial E}{\partial l_2}$ and the continuous line to $\frac{\partial E}{\partial l_3}$. Lengths are measured in fm and the energy in MeV.} \label{Fig3} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=.40\textwidth]{El2l3.eps} \caption{Plot of the energy $E$ in the $l_{2}-l_{3}$ plane when $l_{1}$ takes the value that corresponds to the minimum of $E$.} \label{Fig4} \end{figure} \subsection{The energy of the symmetric configuration} Since in the previous section we used \eqref{l1tox} to write the energy as a function of $x$, $l_{2}$ and $l_{3}$, it is not straightforward to derive from that expression what happens in the case where one considers a symmetric box $l_{1}=l_{2}=l_{3}=l$. In this section we treat this situation from the very beginning by setting all fundamental lengths equal in Eq. \eqref{intofmol3}. We have to note that throughout this section we also make use of the system of units $K=2$, $\lambda =1$. The analogue of \eqref{intofint}, obtained from the resulting integral of motion, leads to \begin{equation} l=\frac{\sqrt{\pi ^{2}p^{2}q^{2}-x^{2}\mathrm{K}\left( -x^{2}\right) ^{2}\left( p^{2}+q^{2}\right) }}{2x\mathrm{K}\left( -x^{2}\right) }, \label{ltox} \end{equation} where $x$ is defined as in the previous section by relation \eqref{I0tox}, with $l_{1}=l$. By following the exact same steps as before we are led to the following expression for the energy \begin{equation} E_{c}(x,p,q)=\frac{2\pi ^{3}\left( 2p^{2}q^{2}\mathrm{K}\left( -x^{2}\right) \mathcal{E}\left( -x^{2}\right) -\mathrm{K}\left( -x^{2}\right) ^{2}\left( p^{4}x^{2}+p^{2}q^{2}\left( 2x^{2}+1\right) +q^{4}x^{2}\right) +\pi ^{2}p^{2}q^{2}\left( p^{2}+q^{2}\right) \right) }{x^{2}\mathrm{K}\left( -x^{2}\right) \sqrt{\frac{\pi ^{2}p^{2}q^{2}}{x^{2}}-\mathrm{K}\left( -x^{2}\right) ^{2}\left( p^{2}+q^{2}\right) }}. \label{encube} \end{equation} It is easy to note that the energy is symmetric under the mirror change $p\leftrightarrow q$. We verify that, for a bigger baryon number, the most energetically convenient configuration also corresponds to a larger box. In Fig. \ref{Fig4new} we can see the plot of the energy with respect to various configurations demonstrating the aforementioned fact. The second thing that we can note is that the deviation $\Delta= \frac{E-E_0}{E_0}$ from saturating the bound also increases for larger baryonic configurations. In Table \ref{tab2} we provide some basic examples. Surprisingly, we can see that the configuration $p=q=2$ is slightly more convenient than the one corresponding to $p=2$, $q=1$. As far as we know, this is the only case where this happens. In general it can be seen that the $p=q$ construction requires more energy than the $(p,q-1)$ one, with the exception of the $p=q=2$ case. \begin{table}[tbp] \begin{center} \begin{tabular}{|c|c|c|c|} \hline $p$ & $q$ & $l$ (fm) & $\Delta(\%)$ \\ \hline 1 & 1 & 0.322 & 53 \\ 2 & 1 & 0.369 & 105 \\ 3 & 1 & 0.385 & 177 \\ 2 & 2 & 0.463 & 104 \\ 3 & 2 & 0.505 & 138 \\ 3 & 3 & 0.571 & 148 \\ \hline \end{tabular} \end{center} \caption{Deviation from the topological bound for several values of $p$ and $q$.} \label{tab2} \end{table} \begin{figure}[tbp] \centering \includegraphics[width=.40\textwidth]{Energy_cube.eps} \caption{Plot of the energy of the cubic configuration $E_c$ with respect to $x$. The dashed line corresponds to $p=q=1$, the dotted to $p=2$, $q=1$ and the continuous line to $p=3$, $q=1$.
The minimum of the energy in terms of the size of the cube $l$ is: $l=0.322$, $l=0.369$ and $l=0.385$ respectively. } \label{Fig4new} \end{figure} \subsection{The compression modulus for the rectangular box} From the technical point of view, it is worth to emphasize here that the very notion of compression modulus would require to put the Skyrmions within a finite flat box of volume $V$: then the compression modulus is related to the second derivative of the total energy of the system with respect to $V$. As it has been already mentioned, this requires to generalize the hedgehog ansatz to situations without spherical symmetry. On the other hand, if one insists in defining the compression modulus for the spherical hedgehog, it becomes a rather subtle issue (see the nice analysis in \cite{Adam}) how to define the derivative of the energy with respect to the volume. Here we are using the generalized hedgehog ansatz \cite{Fab1,gaugsk}\ which is well suited to deal with situations without spherical symmetry. In this way we can analyze Skyrmions living within a region of flat space-time of finite spatial volume avoiding all the subtleties mentioned above. In particular, in the present case the "derivative with respect to the volume" means, literally, the derivative (of the total energy of the system) with respect to the spatial volume of the region in which the Skyrmions are living. As we obtained the general behavior of the three $\frac{\partial E}{\partial l_{i}}$ functions in the previous sub-sections, we are also able to derive an analytic expression of the compression modulus \cite{Brown,Co} \begin{equation*} \mathcal{K}=\frac{9V}{B\beta }\approx 210\pm 30MeV \end{equation* where $\beta =-\frac{1}{V}\frac{\partial V}{\partial P}$ is the compressibility. By using $P=\frac{dE}{dV}$ we acquire \begin{equation} \mathcal{K}=-\frac{9V^{2}}{B}\frac{d^{2}E}{dV^{2}}, \label{compmod} \end{equation where $B$ is the baryon charge and $V$ the finite volume in which we confine the system; in our case this volume is $V=16\pi ^{3}l_{1}l_{2}l_{3}$. The difference in the sign of \eqref{compmod} in comparison to other expressions in the literature \cite{Blaizot} is owed to the metric signature that we follow here and which affects the derivation of $E$ from $T_{00}$. In order to express the energy that we obtain from \eqref{energyfull} as a function of the volume, we introduce the following reparametrization of the $l_{i}$'s into three new variables \begin{equation} l_{1}=c_{1}\left( \frac{V}{16\pi ^{3}}\right) ^{1/3},\quad l_{2}=c_{2}\left( \frac{V}{16\pi ^{3}}\right) ^{1/3},\quad \text{and}\quad l_{3}=\frac{1} c_{1}c_{2}}\left( \frac{V}{16\pi ^{3}}\right) ^{1/3}, \end{equation so that $l_{1}l_{2}l_{3}=\frac{V}{16\pi ^{3}}$. We can substitute the above expressions into both \eqref{l1tox} and \eqref{energyfull}. By solving the first with respect to $V$ and substituting to the second we obtain the energy as a pure function of $x$ which is associated through \eqref{l1tox} with the volume $V$. We can thus calculate the first and second derivatives of the energy with respect to the volume by just taking $\frac{dE}{dV =\left( \frac{dV}{dx}\right) ^{-1}\frac{dE}{dx}$ and $\frac{d^{2}E}{dV^{2} =\left( \frac{dV}{dx}\right) ^{-1}\frac{d}{dx}\left[ \left( \frac{dV}{dx \right) ^{-1}\frac{dE}{dx}\right] $. The first derivative of $E(V)$ with respect to the volume defines the pressure of the system, i.e. $P=\frac{dE}{dV}$. In Fig. 
\ref{Fig5} we see the graphs of the pressure the compression modulus and the energy with respect to the volume for specific regions of the variable $V$. Due to the complicated nature of the relation between $x$ and $V$ it is not easy to put in this parametric plot the behavior of $P$ and $E$ near the region where V\rightarrow 0$. However, one can calculate through the relations that as one shrinks the volume to zero, the pressure suddenly falls and changes sign becoming negative. The same happens to the compression modulus $\mathcal{K}$ as well, for even smaller values of $V$, while the energy remains positive for all $V$. Unfortunately the expressions are too cumbersome to present them analytically in this work, but the graphs in Fig. \ref{Fig5} demonstrate the general behavior. In the case of a finite cube with l_{1}=l_{2}=l_{3}$ the situation is a lot simpler as we can see in the following section. \begin{figure}[h] \centering \hspace{0mm} \subfloat[][Pressure $P(V)$]{\includegraphics[width=4cm,height=4cm]{Eq_of_state.eps}} \hspace{8mm} \subfloat[][Compression modulus $\mathcal{K}(V)$]{\includegraphics[width=.40 \textwidth]{Comp_mod.eps}} \hspace{0mm} \subfloat[][Energy $E(V)$]{\includegraphics[width=.40\textwidth]{Energy_volume.eps}} \caption{Parametric plots of the pressure $P$, the compression modulus \mathcal{K}$ and the energy $E$ with respect to the volume. The plots correspond to the same parameters but for different ranges of the volume.} \label{Fig5} \end{figure} \subsubsection{Compression modulus in the symmetric case} The most natural case corresponds to choose $l_{1}=l_{2}=l_{3}=l$. In this way, we can derive a closed analytic formula for the compression modulus of the Skyrmions living within such a cuboid. To the best of our knowledge, this is the first case in which one can derive an analytic formula (Eqs. \ref{Kbox}) and (\ref{Vbox}) below) for the compression modulus in a highly interacting theory such as the low energy limit of QCD. Indeed, by expressing the fundamental length as $l=\left( \frac{V}{16\pi ^{3}}\right) ^{1/3}$ we can easily use \eqref{ltox} to relate the volume $V$ with the variable $x$ on which the energy depends \eqref{encube}. In this manner we can get an analytical expression for the compression modulus of the cube in terms of the variable $x$, which is \begin{equation} \begin{split} \mathcal{K}(x)=& -\frac{36}{pq}\Bigg[\left( x^{2}+1\right) \mathrm{K}\left( -x^{2}\right) ^{3}\left( \pi ^{2}p^{2}q^{2}-x^{2}\mathrm{K}\left( -x^{2}\right) ^{2}\left( p^{2}+q^{2}\right) \right) ^{2} \\ & +x^{2}\mathrm{K}\left( -x^{2}\right) ^{3}\left( p^{2}+q^{2}\right) \mathcal{E}\left( -x^{2}\right) ^{2}\left( 5\pi ^{2}p^{2}q^{2}-x^{2}\mathrm{ }\left( -x^{2}\right) ^{2}\left( p^{2}+q^{2}\right) \right) \\ & +\pi ^{4}p^{2}q^{2}\mathcal{E}\left( -x^{2}\right) \left( \mathrm{K}\left( -x^{2}\right) ^{2}\left( x^{2}\left( p^{2}+q^{2}\right) ^{2}-2p^{2}q^{2}\right) -\pi ^{2}p^{2}q^{2}\left( p^{2}+q^{2}\right) \right) \Bigg]. \end{split} \label{Kbox} \end{equation It can be shown that the parametric plots with respect to the volume which is \begin{equation} V=2\pi ^{3}\frac{\left( \pi ^{2}p^{2}q^{2}-x^{2}\mathrm{K}\left( -x^{2}\right) ^{2}\left( p^{2}+q^{2}\right) \right) ^{3/2}}{x^{3}\mathrm{K \left( -x^{2}\right) ^{3}} \label{Vbox} \end{equation lead to the same behavior for the pressure, the energy and the compression modulus that has being derived in the previous section. 
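The parametric construction described above can also be carried out numerically. The following Python sketch (with an illustrative choice $p=q$, crude central differences, and Eqs. \eqref{encube} and \eqref{Vbox} transcribed as printed) scans the baryon density $n=B/V$ against the compression modulus obtained from \eqref{compmod}:
\begin{verbatim}
import numpy as np
from scipy.special import ellipk, ellipe

p, q = 12, 12                      # illustrative choice; baryon number B = p*q

def Kx(x): return ellipk(-x**2)    # complete elliptic integral K(-x^2)
def Ex(x): return ellipe(-x**2)    # complete elliptic integral E(-x^2)

def E_c(x):                        # Eq. (encube): energy of the cubic configuration (MeV)
    K, E = Kx(x), Ex(x)
    num = 2*np.pi**3*(2*p**2*q**2*K*E
                      - K**2*(p**4*x**2 + p**2*q**2*(2*x**2 + 1) + q**4*x**2)
                      + np.pi**2*p**2*q**2*(p**2 + q**2))
    den = x**2*K*np.sqrt(np.pi**2*p**2*q**2/x**2 - K**2*(p**2 + q**2))
    return num/den

def V(x):                          # Eq. (Vbox): volume of the box (fm^3)
    K = Kx(x)
    return 2*np.pi**3*(np.pi**2*p**2*q**2 - x**2*K**2*(p**2 + q**2))**1.5/(x**3*K**3)

def d(f, x, h=1e-4):               # crude central difference
    return (f(x + h) - f(x - h))/(2*h)

def comp_mod(x):                   # Eq. (compmod): K = -(9 V^2/B) d^2E/dV^2, parametrically in x
    dEdV   = lambda y: d(E_c, y)/d(V, y)
    d2EdV2 = d(dEdV, x)/d(V, x)
    return -9*V(x)**2/(p*q)*d2EdV2

for x in np.linspace(0.1, 1.0, 10):
    n = p*q/V(x)                   # baryon density in fm^-3
    print(x, n, comp_mod(x))       # scan of n against the compression modulus (MeV)
\end{verbatim}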
For various values of $p$ and $q$ the behavior of the before mentioned quantities is described by the same graphs as given in Fig \ref{Fig5}. A baryon density ($n=\frac{B}{V}$) of $0.04$ fm$^{-3}$ $\lesssim n\lesssim 0.07$ fm$^{-3}$ is assumed \cite{Caplan} to be appropriate for characterizing nuclear pasta and in particular lasagna. Within this range densities we can see that with expressions \eqref{Kbox} and \eqref{Vbox} we can achieve a compression modulus around $\mathcal{K}\sim 230$MeV (which is quite reasonable \cite{Adam, Dutra}). For instance in table \ref{tab3} one can observe various examples of configurations involving baryon densities $n$ and the corresponding baryon numbers $B$, whose compression modulus - as calculated with the help of \eqref{Kbox} - is $\mathcal{K}\sim 230$MeV. In all cases presented in the table we have considered $p=q$, thus $B=p^{2}$. \begin{table}[tbp] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $B$ & 144 & 196 & 225 & 324 \\ \hline $n$ (fm$^{-3}$) & 0.044 & 0.048 & 0.051 & 0.057 \\ \hline \end{tabular \end{center} \caption{Examples of configurations corresponding to a compression modulus \mathcal{K}\sim 230$MeV.} \label{tab3} \end{table} \section{Gauged solitons} Here we will shortly describe (a slight generalization of) the gauged solitons constructed in \cite{gaugsk}. \subsection{Gauged Skyrmions} As in \cite{gaugsk}, we introduce an electromagnetic potential of the form \begin{equation} \label{empot} A_{\mu }=(b_{1}(r),0,b_{2}(r),b_{3}(r)), \end{equation to be coupled to the multi-Skyrmionic system under consideration. The Maxwell equations \eqref{maxwellskyrme1} reduce to \begin{equation} \label{Maxbs} b_{i}^{\prime \prime }=\kappa^2 M_{ij}b_{j}+ \kappa N_{i} \end{equation with the nonzero components of $M$ and $N$ being \begin{align*} M_{11}& =-K\sin ^{2}(H)\left[ l_{1}^{2}\left( 4+\lambda \left( \frac{p^{2}} l_{2}^{2}}+\frac{q^{2}}{l_{3}^{2}}\right) \cos ^{2}(H)\right) +4\lambda H^{\prime 2}\right] \\ M_{23}& =\frac{K\lambda l_{1}^{2}pq}{4l_{3}^{2}}\sin ^{2}(2H) \\ M_{32}& =\frac{l_{3}^{2}}{l_{2}^{2}}M_{23} \\ M_{22}& =M_{11}+\frac{p}{q}M_{32} \\ M_{22}& =M_{11}+\frac{q}{p}M_{23} \\ N_{2}& =\frac{p}{4}M_{11}+\frac{1}{4}\left( \frac{l_{3}^{2}p^{2}}{l_{2}^{2}q -q\right) M_{23} \\ N_{3}& =-\frac{q}{4}M_{11}-\frac{1}{4}\left( \frac{l_{2}^{2}q^{2}}{l_{3}^{2} }-p\right) M_{32}. \end{align* A direct computation shows that, using the line element in Eq. (\ref{line3l ), the three coupled gauged Skyrme equations (namely, $\mathit{E}^{j}=0$, j=1$, $2$, $3$) in Eq. (\ref{nonlinearsigma1}) \begin{equation*} D^{\mu }\left( R_{\mu }+\frac{\lambda }{4}\left[ R^{\nu },G_{\mu \nu }\right] \right) =\mathit{E}^{j}t_{j}=0 \end{equation* reduce to only one Skyrme field equation (since the third Skyrme equation is identically satisfied while the first and the second are proportional) \begin{eqnarray*} \mathit{E}^{3} &=&0\ , \\ \mathit{E}^{1} &=&I_{1}P\left[ H\right] \ ,\ \mathit{E}^{2}=I_{2}P\left[ \right] \ ,\ \ I_{1}\neq 0\ ,\ I_{2}\neq 0\ , \end{eqnarray* where $I_{j}$ are real and non-vanishing. 
Thus, the Skyrme field equations reduce to $P\left[ H\right] =0$ namely \begin{equation} \begin{split} & 4\left[ X\sin ^{2}(H)-\lambda \left( l_{2}^{2}q^{2}+l_{3}^{2}p^{2}\right) -4l_{2}^{2}l_{3}^{2}\right] H^{\prime \prime }+2X\sin (2H)H^{\prime 2}+4\sin ^{2}(H)X^{\prime }H^{\prime } \\ & +\Big[\lambda \kappa \left( l_{3}^{2}pb_{2}+l_{2}^{2}qb_{3}\right) \left( \frac{4l_{1}^{2}p}{l_{2}^{2}}\kappa b_{2}-\frac{4l_{1}^{2}q}{l_{3}^{2} \kappa b_{3}+2l_{1}^{2}\left( \frac{q^{2}}{l_{3}^{2}}-\frac{p^{2}}{l_{2}^{2} \right) \right) -\frac{1}{4}l_{1}^{2}X\left( \frac{p^{2}}{l_{2}^{2}}+\frac q^{2}}{l_{3}^{2}}\right) \\ & +\lambda l_{1}^{2}p^{2}q^{2}\Big]\sin (4H)-\frac{2l_{1}^{2}}{\lambda X\sin (2H)=P\left[ H\right] =0\ , \end{split} \label{profl3} \end{equation where \begin{equation} X(r)=8\lambda \kappa \left( 2l_{2}^{2}l_{3}^{2}\kappa b_{1}^{2}-l_{3}^{2}b_{2}(2\kappa b_{2}+p)+l_{2}^{2}b_{3}(q-2\kappa b_{3})\right) . \end{equation} Quite remarkably, if we demand that \begin{equation} X(r)=\lambda \left( l_{2}^{2}q^{2}+l_{3}^{2}p^{2}\right) ,\quad b_{2}(r)= \frac{l_{2}^{2}q}{l_{3}^{2}p}b_{3}+\frac{1}{\kappa }\left( \frac l_{2}^{2}q^{2}}{4l_{3}^{2}p}-\frac{p}{4}\right) \ , \label{Xcon1} \end{equation then the equation for the profile $H(r)$ can be solved explicitly. More importantly, the above algebraic conditions in Eq. (\ref{Xcon1}) are consistent with the Maxwell equations written above. Indeed, if one plugs the two algebraic conditions in Eq. (\ref{Xcon1}) into the three Maxwell equations one obtains a single Maxwell equation for $b_{3}(r)$: \begin{equation} b_{3}^{\prime \prime }=\frac{\kappa K}{8l_{2}^{2}l_{3}^{2}}(q-4\kappa b_{3} \left[ 8\lambda l_{2}^{2}l_{3}^{2}H^{\prime 2}+l_{1}^{2}\left( \lambda \cos (2H)\left( l_{2}^{2}q^{2}+l_{3}^{2}p^{2}\right) +l_{2}^{2}\left( 8l_{3}^{2}+\lambda q^{2}\right) +\lambda l_{3}^{2}p^{2}\right) \right] \sin ^{2}(H)\ , \label{b3XYcon1} \end{equation while for the profile $H(r)$ we have a decoupled (from $b_{3}$) equation that reads \begin{equation} \left[ \lambda \cos (2H)\left( l_{2}^{2}q^{2}+l_{3}^{2}p^{2}\right) +l_{2}^{2}\left( 8l_{3}^{2}+\lambda q^{2}\right) +\lambda l_{3}^{2}p^{2 \right] H^{\prime \prime }+\left( l_{2}^{2}q^{2}+l_{3}^{2}p^{2}\right) \left( l_{1}^{2}-\lambda H^{\prime 2}\right) \sin (2H)=0. \label{profXYcon1} \end{equation} Thus, the big technical achievement of the present approach is that the three coupled gauged Skyrme equations in Eq. (\ref{nonlinearsigma1}) and the corresponding four Maxwell equations in Eq. (\ref{maxwellskyrme1}) with exactly the Skyrme ansatz in Eqs. (\ref{pions2.25}) and (\ref{pions2.26}) and the gauge potential in Eq. (\ref{empot}) reduce to Eqs. (\ref{b3XYcon1}) and (\ref{profXYcon1}) when the two algebraic conditions in Eq. (\ref{Xcon1 ) are satisfied. We want to stress that the aforementioned relations provide an exact solution and they are not a product of an approximation. As for the boundary conditions that are needed to be set, we have to keep in mind that the system is confined to a finite box. Thus, the easiest way to realize this is by imposing periodic boundary conditions in $\gamma$ and $\phi$ and Dirichlet in $r$ Interestingly enough, Eq. 
(\ref{profXYcon1}) can be solved explicitly by observing that it has the following first integra \begin{equation} Y\left( H\right) \frac{H^{\prime 2}}{2}+V(H)=E_{0}\ , \label{firstint} \end{equation with \begin{eqnarray} Y\left( H\right) &=&2\lambda \left( l_{2}^{2}q^{2}+l_{3}^{2}p^{2}\right) \cos ^{2}(H)+8l_{2}^{2}l_{3}^{2}, \label{firstint2} \\ V\left( H\right) &=&-\frac{1}{2}l_{1}^{2}\left( l_{2}^{2}q^{2}+l_{3}^{2}p^{2}\right) \cos (2H) \label{firstint3} \end{eqnarray and where $E_{0}$ is an integration constant to be determined by requiring that the boundary conditions to have non-vanishing topological charge are satisfied. Thus, Eq. (\ref{profXYcon1}) can be reduced to a quadrature (which defines a generalized elliptic integral). Eq. (\ref{b3XYcon1}) for b_{3}$ is linear (since $H(r)$ can be found explicitly), however its integration is not a trivial task. In any case, integration of (\re {b3XYcon1}) that results in an expression for $b_{3}$ makes trivial the determination of the other two components of $A_{\mu }$ since both $b_{1}$ and $b_{2}$ are given algebraically in terms of $b_{3}$ through conditions \eqref{Xcon1}. Nevertheless, even without the explicit expressions, it is still possible to analyze the generic features of the transport properties electrons passing through the above gauged Skyrmions. \subsection{Gauged time-crystals} In order to have a time periodic solution with a non vanishing topological charge, that can be characterized as a time-crystal (for the introduction to the notion of time crystals see \cite{timec1,timec2,timec3,timecr}) we start by considering the line element \begin{equation} ds^2 = - d\gamma^2 +l_1 dr^2 +l_2 dz^2 +l_3 d\phi^2, \end{equation} where $\gamma$ in the new ansatz \begin{equation} G=\frac{q\phi - \omega \gamma}{2}, \; A=- \frac{q\phi + \omega \gamma}{2} \end{equation} is the time variable, making the ensuing solution a time periodic configuration. The constant $\omega$ is the frequency of the time-crystal characterizing the periodicity of the system. Again we consider a finite box, where this time we take \begin{equation} 0 \leq r \leq 2\pi, \quad 0 \leq z \leq 4\pi, \quad 0 \leq \phi \leq 2\pi . \end{equation} We adopt a similar form for the electromagnetic potential as the one given in \eqref{empot}. However, we have to note now that the index of the coordinates is changed into $x^\mu = (\gamma, r, z, \phi)$. Thus, the vector potential is \begin{equation} A_\mu = (b_2(r),0,b_1(r),b_3(r)), \end{equation} making $b_2(r)$ the electrostatic potential instead of $b_1(r)$ that we had in the Skyrmion case. The Maxwell equations \eqref{maxwellskyrme1} retain same form as \eqref{Maxbs} with \begin{align*} M_{11} & = -\frac{K}{2 l_3^2} \sin ^2(H) \left[8 \lambda l_3^2 H^{\prime 2}+l_1^2 \left(2 \lambda \cos ^2(H) \left(q^2-l_3^2 \omega ^2\right)+8 l_3^2\right)\right] \\ M_{23} & = \frac{K \lambda l_1^2 q \omega}{4 l_3^2} \sin ^2(2 H) \\ M_{32} & = - l_3^2 M_{23} \\ M_{22} & = M_{11} + \frac{\omega}{q} M_{3,2} \\ M_{33} & = M_{11} + \frac{q}{\omega} M_{2,3} \\ N_2 & = \frac{\omega}{4} M_{11} - \frac{1}{4}\left(\frac{l_3^2 \omega ^2}{q} +q\right) \\ N_3 & = -\frac{q}{4}M_{11} + \frac{1}{4} \left(\frac{q^2}{l_3^2 \omega } + \omega \right), \end{align*} while the rest of the components of $M$ and $N$ are zero. As also happened in the Skyrme case, again here, the field equations reduce to a single ordinary differential equation for the profile function $H(r)$. 
In this case the relative equation reads \begin{equation} \begin{split} & 4\left( X\sin ^{2}(H)+l_{2}^{2}\left( l_{3}^{2}\left( \lambda \omega ^{2}-4\right) -\lambda q^{2}\right) \right) H^{\prime \prime }+2X\sin (2H)(H^{\prime })^{2}+4\sin ^{2}(H)X^{\prime }H^{\prime } \\ & +\frac{l_{1}^{2}}{4l_{3}^{2}}\left[ 4\lambda l_{2}^{2}\left( 2\kappa qb_{3}-l_{3}^{2}\omega (2\kappa b_{2}+\omega )\right) \left( 2\kappa l_{3}^{2}\omega b_{2}+q(q-2\kappa b_{3})\right) -X\left( q^{2}-l_{3}^{2}\omega ^{2}\right) \right] \sin (4H) \\ & -\frac{\left( 2l_{1}^{2}\right) }{\lambda }X\sin (2H)=0, \end{split} \label{profeqTC} \end{equation where \begin{equation} X(r)=-8\kappa \lambda \left[ 2\kappa l_{3}^{2}b_{1}^{2}-l_{2}^{2}\left( l_{3}^{2}b_{2}(2\kappa b_{2}+\omega )+b_{3}(q-2\kappa b_{3})\right) \right] . \end{equation Once more, profile equation \eqref{profeqTC} can be reduced to an integrable one that is decoupled from the Maxwell field. Let us assume the following conditions for the components $b_{1}$ and $b_{3}$ of the electromagnetic potential $A_{\mu }$: \begin{equation} X(r)=\lambda l_{2}^{2}\left( q^{2}-l_{3}^{2}\omega ^{2}\right) ,\quad b_{3}(r)=\frac{l_{3}^{2}\omega }{q}b_{2}(r)+\frac{l_{3}^{2}\omega ^{2}} 4\kappa q}+\frac{q}{4\kappa }. \end{equation Then, the remaining Maxwell equation that needs to be satisfied for $b_{2}$ is \begin{equation} b_{2}^{\prime \prime }=-\frac{\kappa K}{8l_{3}^{2}}(4\kappa b_{2}+\omega \left[ 8l_{3}^{2}\left( \lambda (H^{\prime })^{2}+l_{1}^{2}\right) +2\lambda l_{1}^{2}\cos ^{2}(H)\left( q^{2}-l_{3}^{2}\omega ^{2}\right) \right] \sin ^{2}(H) \label{matc} \end{equation and the profile equation is reduced to \begin{equation} \left( 2\lambda \cos ^{2}(H)\left( q^{2}-l_{3}^{2}\omega ^{2}\right) +8l_{3}^{2}\right) H^{\prime \prime }+\sin (2H)\left( q^{2}-l_{3}^{2}\omega ^{2}\right) \left( l_{1}^{2}-\lambda H^{\prime 2}\right) =0. \end{equation Obviously it exhibits a first integral of the form \eqref{firstint} where now \begin{align*} Y(H)& =2\lambda \cos ^{2}(H)\left( q^{2}-l_{3}^{2}\omega ^{2}\right) +8l_{3}^{2} \\ V(H)& =\frac{l_{1}^{2}}{2}\left( l_{3}^{2}\omega ^{2}-q^{2}\right) \cos (2H). \end{align* We can notice the similarities with the expressions derived for the Skyrmion in the previous case. In \cite{gaugsk} there has been presented an extensive discussion on the \textquotedblleft extended duality" that exists between two such systems. \subsection{Topological Current for the gauged Skyrmion} The topological current \cite{Witten} of the gauged Skyrme model can be divided into two terms \begin{equation} J_{\mu }^{B}=J_{\mu }^{Sk}+J_{\mu }^{B-em} \end{equation with the first term $J_{\mu }^{Sk}$ being the usual Baryonic current, while second term is the correction to the latter, owed to the coupling with the electromagnetic field. For the first term we have \begin{equation} J_{\mu }^{Sk}=\frac{1}{24\pi ^{2}}E_{\mu \alpha \beta \nu }Tr\left( R^{\alpha }R^{\beta }R^{\nu }\right) , \label{Dirac} \end{equation which in our case has a single nonzero component \begin{equation} J_{0}^{Sk}=-\frac{pq}{8\pi ^{2}l_{1}l_{2}l_{3}}H^{\prime }\sin (2H)=-2\pi \widehat{n}_{B}H^{\prime }\sin (2H)\ ,\ V=16\pi ^{3}l_{1}l_{2}l_{3}, \label{Bden1} \end{equation where $V=16\pi ^{3}l_{1}l_{2}l_{3}$ is the volume of the box and $\widehat{n _{B}$\ is the Baryon density ($\widehat{n}_{B}=pq/V$) of the system. 
Note that in \eqref{Dirac} we make use of the Levi-Civita tensor $E_{\mu \alpha \beta \nu }=\sqrt{-g}\,\epsilon _{\mu \alpha \beta \nu }$ instead of the Levi-Civita symbol $\epsilon _{\mu \alpha \beta \nu }$, so that $J_{\mu }^{Sk}$ transforms covariantly and the topological charge results in a pure number. If, for instance, we apply the boundary conditions $H(0)=0$, $H(2\pi )=\frac{\pi }{2}$ we obtain \begin{equation} B=\int_{\Sigma }\!\!\sqrt{-g}J_{Sk}^{0}\,dr\,d\gamma \,d\phi =pq. \end{equation} The correction $J_{\mu }^{B-em}$ to the baryonic current, due to the electromagnetic field, is \begin{equation} J_{\mu }^{B-em}=-\frac{\kappa }{8\pi ^{2}}E_{\mu \alpha \beta \nu }\nabla ^{\alpha }\left[ A^{\beta }Tr\left( t_{3}(U^{-1}\nabla ^{\nu }U-\nabla ^{\nu }UU^{-1})\right) \right] \label{curBem} \end{equation} and the total gauged baryonic current reads \begin{equation} \label{fullcurB} \begin{split} J_{\mu }^{B}=\Big\{-\frac{pq\pi }{V}\partial _{r}\left( \cos (2H)\right) +\frac{4\pi \kappa }{V}\partial _{r}\left( \cos ^{2}(H)(qb_{2}-pb_{3})\right) ,0,& \\ -\frac{4\pi q\kappa }{V}\partial _{r}\left( b_{1}\cos ^{2}(H)\right) ,\frac{4\pi p\kappa }{V}\partial _{r}\left( \cos ^{2}(H)\right) & \Big\}. \end{split} \end{equation} From what we see, the total baryon number when the Skyrmion is coupled to the electromagnetic field depends also on the boundary conditions that one may impose on the latter ($b_{2}$ and $b_{3}$ in particular). \subsection{Baryonic current for the Time-Crystal} The topological current of the time-crystal can be calculated with the use of the same relations \eqref{Dirac} and \eqref{curBem}. Here we just give the result for the full current of the Gauged Time Crystal (GTC), which is \begin{equation} \label{JGTC} \begin{split} J_\mu^{GTC} = \Big\{ &-\frac{4\pi q \kappa }{V} \partial_r \left(b_1 \cos^2(H)\right), 0, \\ & -\frac{l_2^2 q \pi \omega }{ V} \partial_r (\cos(2H)) + \frac{4 \pi \kappa }{V} \partial_r\left[\cos ^2(H) (q b_2-\omega b_3)\right], \frac{4\pi \kappa \omega }{V} \partial_r \left(b_1 \cos^2(H) \right) \Big\} . \end{split} \end{equation} In the absence of the coupling with the electromagnetic field, $\kappa=0$, we can see that the expression for the non-zero topological current of the time-crystal simplifies to \begin{equation} J_\mu^{TC} = \Big\{ 0,0, -\frac{\pi l_2^2 q \omega }{V} \partial_r (\cos(2H)),0\Big\} . \end{equation} \section{On the conductivity of gauged solitons} At the semi-classical level, the transport properties of electrons travelling through the above gauged Skyrmions can be determined by analyzing the corresponding Dirac equation. Obviously, the electrons interact directly both with the gauge field and with the Baryons. The fermion couples to $A_{\mu }$, as QED dictates. However, there are further effects due to the coupling with the baryonic current. Here, we follow a very simple toy-model interaction just to give a qualitative description of such effects. At this level of approximation, in which the electrons perceive the gauged Skyrmions as a classical background, both interactions can be described as \textquotedblleft current-current" interactions in the Dirac Hamiltonian.
The interaction of the electronic Dirac field $\Psi $ with the gauge potential $A_{\mu }$ corresponds to the following interaction Hamiltonian \begin{eqnarray} H_{int}^{U(1)} &=&\kappa J_{\mu }^{e}A^{\mu }\ , \label{u1corr} \\ J_{\mu }^{e} &=&\overline{\Psi }\gamma _{\mu }\Psi \ , \notag \\ \overline{\Psi } &=&\Psi ^{\dag }\gamma ^{0}, \notag \end{eqnarray} where $\kappa $ is the Maxwell coupling \begin{equation} \kappa \approx \left( \frac{1}{137}\right) ^{\frac{1}{2}}, \label{u1coupling} \end{equation} $\gamma _{\mu }$ are the Dirac gamma-matrices (the conventions are collected in Appendix \ref{appA}), $\Psi ^{\dag }$ is the conjugate transpose of $\Psi $ and $\overline{\Psi }$ the adjoint spinor. On the other hand, a simple way to describe the interactions of the electronic Dirac field with the baryonic current $J_{B}^{\mu }$ is with the following Hamiltonian \begin{equation} H_{int}^{B}=g_{eff}J_{\mu }^{e}J_{B}^{\mu }\ , \label{barcorr} \end{equation} where $g_{eff}$ is the effective coupling constant of the electron-Baryon interaction. At the present level of approximation (in which the energy scale is not high enough to disclose the parton structure of the Baryon) a reasonable assumption is: \begin{equation*} g_{eff}\approx G_{F}, \end{equation*} where $G_{F}$ is the Fermi constant. In order to evaluate the relative strength of the two contributions to the conductivity (a brief analysis is given in Appendix \ref{appB}), one arising from the term owed to the coupling with the $U(1)$ field (the $\kappa A_{\mu }$ in Eq. (\ref{Diraceq}), see section \ref{appB1} of Appendix \ref{appB}) and the other arising from the term produced by the baryon current (the $G_{F}J_{\mu }^{B}$ in Eq. (\ref{Diraceq})), one needs to compare the strength of the $U(1)$ coupling with that of the interactions with the Skyrmionic current. There are two competing factors in the interactions with the Skyrmionic current. The first factor is the electro-weak coupling constant (which is obviously weaker than the $U(1)$ coupling). The second factor is related to the Skyrmion profile $H$ and can be evaluated explicitly thanks to the present analytic solutions. Assuming that both $\sin (2H)$ and $H^{\prime }$ are of order 1 (since both quantities are adimensional and the solitonic solutions we are considering are smooth and regular), one can see that the effective adimensional coupling $\widehat{g}$ measuring the strength of the contributions to the conductivity due to the interactions of the electrons with the Skyrmionic current is: \begin{equation} \widehat{g}= l_1 G_{F}\widehat{n}_{B}\ . \label{efcoupl} \end{equation} Given that $G_F \sim 1.166\times 10^{-5}$ GeV$^{-2}$, or $G_F \sim 4.564\times 10^{-7}$ fm$^2$ in natural units, we can see that the contribution of the interaction with $J_{\mu }^{B}$ remains small in comparison to the coupling with $A_{\mu }$, at least for baryon densities $\widehat{n}_{B}$ and lengths $l_1$ of the box that can be characterized as natural. The ``Baryonic" correction $\delta \Psi $ to the wave function in Eq. (\ref{perturbedpsi}) depends on the effective coupling $\widehat{g}$ defined in Eq. (\ref{efcoupl}) and on the Fourier transform of quantities related to the background Skyrmion. For completeness, in sections \ref{appB2} and \ref{appB3} of Appendix \ref{appB} we have included the Dirac equations for the electrons propagating in the gauged solitons background described above. Although these Dirac equations cannot be solved analytically (due to the fact that Eqs.
(\ref{b3XYcon1}) and (\ref{matc}) are not integrable in general), they can be useful starting points for a numerical analysis of the transport properties of the present gauged solitons. \section{Conclusions and perspectives} \label{conclusions} In the present paper we have studied (gauged) Skyrmionic configurations in a finite box. We provided the reduced field equations under the adopted ansatz and identified the conditions on the potential functions $A_\mu$ for which the aforementioned equations can be characterized as integrable. Additionally, we have presented analytic expressions for the energy and studied its general behaviour in relation to the baryon number and the possible sizes of the box under consideration. We also demonstrated and analyzed the cases in which the most energetically convenient configurations emerge in relation to these variables. What is more, we have derived an explicit analytic expression for the compression modulus corresponding to Skyrmions living within a finite volume in flat space-time. This is the first case in which one can derive an analytic formula (Eqs. (\ref{Kbox}) and (\ref{Vbox}) in the previous section) for such an important quantity in a highly interacting theory such as the low-energy limit of QCD. This expression produces a reasonable value with the correct order of magnitude. The gauged version of these solitons living within a finite volume can also be considered. Using these gauged solitons, it is possible to analyze the contributions to the electron conductivity associated with the interactions with this baryonic environment (which represents a slab of baryons that can be very large in two of the three spatial directions). To the best of the authors' knowledge, this is the first concrete setting in which it is possible to perform analytic computations of these relevant quantities in the original version of the Skyrme model (and its gauged version). \subsection*{Acknowledgements} The authors would like to thank A. Zerwekh for useful discussions. This work has been funded by the Fondecyt grants 1160137, 1161150 and 3160121. The Centro de Estudios Cient\'{\i}ficos (CECs) is funded by the Chilean Government through the Centers of Excellence Base Financing Program of Conicyt.
\section{Introduction} Very recently, experimental realization of one-dimensional (1D) ultracold fermions with tunable number of spin components has been reported in the crossover regime of temperature between spin-ordered and spin-incoherent Luttinger liquid (LL) \cite{Pagano2014}. In particular, the subtle bosonic limit \cite{Yang11} is evidenced for strongly repulsive $^{173}$Yb atoms with nuclear spin $I=5/2$. In addition, studies using analytical and numerical methods have shown \cite{Jen2016} that the spin-incoherent 1D spin-1 Bose LL in a harmonic trap and in the Tonks-Girardeau limit (infinite repulsion) \cite{Tonks1936,*Girardeau1960,*Lieb1963}, exhibits the universal $1/p^4$ dependence momentum distribution, which is, however, broader than the spinless case, due to spin-function overlaps. We also remark that the Tonks-Girardeau limit has been experimentally achieved in ultra cold boson atoms \cite{Kinoshita1125,*Paredes2004,*Haller1224}, and also verified in frustrated quantum spin chains \cite{Montenegro-Filho2008}. On the theoretical side, the method of bosonization \cite{stone} has provided an efficient means to derive analytical results for low-dimensional interacting fermion systems in condensed matter and field theory, thereby allowing the emergence of new physical concepts. In this context, the LL theory has been proposed \cite {142585} as a unified framework to describe the low-energy physics of a large class of 1D quantum many-body systems \cite{giamarchi,essler2005one, takahashi}. Emphasis has been given to those systems subjected to strong quantum fluctuations and exhibiting new features not fully described by the standard Fermi liquid theory \cite{pines} governed by the zero coupling-strength fixed point \cite{66129}. Notwithstanding, several aspects of a Landau-Luttinger theory were discussed at length \cite{3757,449967,73b}. Further, generalization of the standard Fermi liquid theory was also proposed with aim in describing the unusual properties of heavy-fermion systems, in particular close to a metal-insulator transition \cite{Shaginyan2010}. Following the LL concept we have witnessed a vigorous development in the study of 1D strongly correlated electron systems, particularly in connection with the nature and the role played by charge and spin excitations, and the related phenomenon of spin-charge separation \cite{giamarchi}. Comparison of results derived using bosonization with those from other methods, such as the Bethe-ansatz and density matrix renormalization group techniques \cite{takahashi,Sacramento2013}, has also proved valuable. More recently, a very interesting regime of the LL, namely the spin-incoherent LL, has received special attention \cite{79801}. For both continuous \cite{92b,93226401} and lattice \cite{394845,81075108,106146401} versions of the 1D Hubbard model \cite{essler2005one}, this regime is realized under the condition $J(\equiv 4t^2/U)\ll k_B T\ll E_F(\sim t)$, where $t$ is the nearest-neighbor hopping amplitude, $U$ is the repulsive on-site Coulomb interaction, $\beta=1/(k_BT)$ is the inverse temperature measured in units of the Boltzmann constant, $J$ is the antiferromagnetic exchange coupling, and $E_F$ is the Fermi energy. 
Alternatively, for low carrier densities, quantum wires \cite{92a,72045315,fieteprb2005,101036801,22095301} are near the 1D Wigner crystal limit at which the electrostatic energy between the particles greatly exceeds their kinetic energy leading to $J\ll E_F$, so that for $k_BT\gg J$ the observed conductance is about half the usual LL value $2e^2/h$ due to the spin-incoherent contribution to the resistance, where $e$ is the magnitude of electron charge and $h$ is the Planck constant. Indeed, it has been shown that, despite features of spin-charge separation persist, the spin part of the correlation function exhibits an exponential spatial decay \cite{92b,93226401} not consistent with the usual LL power-law decay. Moreover, at half filling \cite{394845}, the effective gapped charged excitations are modified due to the presence of the uncorrelated spin degrees of freedom. In this work we shall demonstrate that the thermodynamic properties of the Hubbard chain in the spin-incoherent regime can be described by using arguments from complementary powerful methods in the realm of quantum statistical mechanics and many-body theory, notably the Haldane-Wu {\it exclusion} fractional statistics \cite{HWF}. In this context, the fractional character of the excitations of Hubbard models with short-range Coulomb interaction and correlated hopping \cite{102146404,allegra2011,hidden2016} (bond-charge interaction), and infinite-range Coulomb interaction \cite{PhysRevB.61.7941,*PhysRevB.72.165109} as well, has been invoked to properly describe phase diagrams exhibiting metal-insulator transition, including the unexpected absence of conductivity at half filling due to a topological change in the Fermi surface, and $\eta$-pairing \cite{PhysRevLett.63.2144} induced 1D critical superconductivity \cite{82125126}. Correlated hopping can also play a relevant role in 2D models of high-temperature superconductors \cite{universal2017}. In addition, particles obeying exclusion fractional statistics have been considered in the context of optical lattices \cite{keilmann2011,optics2012}, including the (1D) Tonks-Girardeau limit \cite{Batchelor2006}. In 2D systems, it was suggested \cite{Cooper} that spectroscopy measurements on ultracold atoms can be used to demostrate the fractional exclusion statistics of quasiholes in the Laughlin state of bosons. On the other hand, neutral anyonic excitations, which satisfy fractional \textit{exchange} statistics in two dimensions, can be identified \cite{CES} through measurements of spectral functions near the threshold. The structure factor follows a universal power-law behavior, whose exponent is the signature of the anyon statistics and the underlying topologically ordered states that should occur in spin liquids and fractional Chern insulators. Moreover, it was proposed \cite{Arcila-Forero2018} that superfluid to Mott insulator quantum phase transitions in an anyon-Hubbard model with three-body interaction can be driven by the statistics or by the interaction. In Sec. \ref{sec:strong-coupling}, we use a strong-coupling perturbative expansion \cite{46a,*ha1996quantum} of the Takahashi's Bethe-ansatz grand-canonical free energy \cite{4769,52103,65165104} to calculate the Helmholtz free energy, energy and entropy in the spin-incoherent regime. From these thermodynamic potentials and the Luttinger theory, we present in Sec. 
\ref{sec:incohell} the specific heat, isothermal compressibility, Luttinger liquid parameter, magnetic susceptibility, and the Drude weight, to leading order in $J/E_{F}$. In Sec. \ref{sec:uinfity}, we show that the thermodynamics of the infinite-$U$ Hubbard chain is exactly mapped onto an ideal excluson gas of two species obeying the Haldane-Wu {\it exclusion} fractional statistics \cite{HWF}. In Sec. \ref{sec:flll} we introduce a fractional Landau LL approach, which provides non-trivial insights and a direct connection with the LL theory in the spin-incoherent regime. Indeed, our results provide strong evidence that the fractional excluson entropy describes very well the thermodynamics of the spin-incoherent regime. We can thus identify the pertinent fractional Landau LL parameters, and their relationship with the LL properties, namely, the velocity of holons and spinons. Despite that there have been previous attempts \cite{242130,Shaginyan2010,Leinaas2017} towards a generalization of the Fermi liquid theory to particles obeying fractional exclusion statistics, a realization of these ideas, as presented here, is apparently missing. In Sec. \ref{sec:hight} we consider the high-$T$ limit \cite{Juttner1997} of the particle distribution function, chemical potential and entropy. Finally, concluding remarks are reserved to Sec. \ref{sec:remarks}. \section{Spin-Incoherent Regime of the Hubbard chain} \label{sec:strong-coupling} The Hamiltonian of the Hubbard chain of $L$ sites in the presence of an external magnetic field along the $z$ direction is given by \begin{eqnarray} {\cal H}=-t\sum_{\langle i,j\rangle,\sigma} c_{i\sigma}^{\dagger}c_{j\sigma}^{} +U\sum_{i}n_{i\uparrow}n_{i\downarrow} -\mu_B H\sum_{i}(n_{i\uparrow} - n_{i\downarrow}), \nonumber\\ \label{modelo} \end{eqnarray} where $\langle i,j\rangle$ denotes nearest-neighbor sites, $\sigma\in\{\uparrow, \downarrow\}$, $c^{}_{i\sigma}~(c_{i\sigma}^{\dagger})$ is the electron annihilation (creation) operator, $n_{i\sigma}^{}= c_{i\sigma}^{\dagger}c_{i\sigma}^{}$ is the number operator, $\mu_BH$ is the Zeeman energy, and $\mu_B$ is the Bohr magneton. The $t-J$ model, which projects out doubly occupied states in the strong-coupling regime of the Hubbard chain, reads: \begin{eqnarray} \mathcal{H}_{t-J}&=&-t\sum_{\langle i,j\rangle,\sigma}(1- n_{i\bar{\sigma}}) c_{i\sigma}^{\dagger}c_{j\sigma}^{}(1- n_{j\bar{\sigma}})\nonumber\\ & &+J\sum_{\langle i,j\rangle}\left({\mathbf{S}}_{i}\cdot {\mathbf{S}}_{j}-\frac{1}{4}n_in_j\right)-2\mu_B H S^z, \label{eq:tj} \end{eqnarray} where $\bar{\sigma}=-\sigma$, $S^z=\frac{1}{2}\sum_{i}(n_{i\uparrow} - n_{i\downarrow})$, with $\hbar\equiv1$, and $$J=4t^2/U.$$ The spin-incoherent LL regime is found at temperatures such that \begin{equation} J(\equiv 4t^2/U)\ll k_{B}T \ll E_{F}\sim t. \label{siregime} \end{equation} This regime is characterized by low-energy collective charge excitations (holons) with a velocity $v_c^{(inch)}$ of interacting spinless fermions, and by the absence of collective spin excitations, since the very small strong-coupling spinon velocity $v_s$ ($\sim J$) implies a very small correlation length $\xi=v_s/\pi k_B T\sim J/2k_BT \ll 1$. In this context, we note that the special point $J=0$ ($U=\infty$) is also a spin-incoherent LL, since it is a spin-disordered state, with $v_s=J=0$ and infinite spin degeneracy in the thermodynamic limit; thereby, only holon excitations exist. The thermodynamic Bethe ansatz has been successfully implemented for the Hubbard chain long ago \cite{4769}. 
However, difficulties exist in deriving closed-form expressions for thermodynamic quantities from the infinite coupled integral equations. Notwithstanding, it has been shown \cite{46a} that it is possible to solve the set of integral equations perturbatively in the strong coupling limit ($t\ll U$), and consistent high-temperature series expansions have been provided. In particular, in Appendix \ref{strHa} the results reported in Ref. \onlinecite{46a} for the grand canonical free energy $\Omega(T,\mu,H)$ can be used in order to obtain corrections of ${\cal O}(t^2/U)$ to the $U=\infty$ limit. Most importantly, as we show in this work, these corrections are suitable to describe the $t-J$ limit of the Hubbard chain in the regime $U\gg k_BT$, including the spin-incoherent regime for $k_BT\ll t$. In fact, in Appendix \ref{strHa} we find that $\Omega(T,\mu,H)$ in the spin-incoherent regime reads: \begin{eqnarray} &&\frac{\Omega_{inch}(T,\mu,H)}{L}=\nonumber\\ &&-k_BT\int^{\pi}_{-\pi}\frac{dk}{2\pi}\ln[1+e^{-\beta(\varepsilon_k-\mu-\mu_BH)} +e^{-\beta(\varepsilon_k-\mu+\mu_BH)}] \nonumber\\ &&-\frac{k_BT}{\cosh(\beta \mu_BH)}\left(\frac{t}{U}\right) \int^{\pi}_{-\pi}\frac{dk}{2\pi}\frac{2}{e^{\beta(\varepsilon_k-\mu)}+2\cosh(\beta \mu_BH)} \nonumber\\ &&\times \int^{\pi}_{-\pi}\frac{dk}{2\pi}\cos{k}\ln[1+e^{-\beta(\varepsilon_k-\mu-\mu_BH)} +e^{-\beta(\varepsilon_k-\mu+\mu_BH)}] \nonumber\\ &&+\cdots, \label{tanto} \end{eqnarray} where $\mu$ is the chemical potential and $\varepsilon_k=-2t\cos{k}$ is the dispersion relation of tight-binding fermionic particles, which is the exact dispersion relation for the $U=\infty$ case\cite{1571033}. In fact, making $U=\infty$ in Eq. ($\ref{tanto}$), we obtain the exact expression of the grand-canonical free energy \cite{4769} at this extremal coupling value. The grand-canonical free energy (\ref{tanto}) is also suitable to describe the spin-incoherent regime, since using the inequalities in (\ref{siregime}): $4t^2/U \ll k_BT\ll t$, we find $U/k_BT\gg 1/(k_BT/t)^2\gg 1$. The chemical potential $\mu$ is calculated from $n=-\frac{1}{L}\left(\frac{\partial\Omega}{\partial\mu}\right)$: \begin{eqnarray} &&\mu_{inch}(T,n)= -2t\cos(n\pi) \nonumber\\ &&-\frac{nt^2}{U}\left[1+ 2\sin^2(n\pi) -\frac{\sin(2n\pi)}{2n\pi}\right] -k_BT\ln{2} \nonumber\\ &&+\frac{\pi^2(k_BT)^2\cos(n\pi)}{12t\sin^2(n\pi)} \left\{1+\left(\frac{2t}{U}\right) \left[\frac{n}{\cos(n\pi)}-\frac{\sin(n\pi)}{\pi} \right]\right\} \nonumber\\ &&+\cdots, \label{muinch} \end{eqnarray} The corresponding expansion for the Helmholtz free energy $F(=\mu N+\Omega)$, energy $E(=F-T\partial F/\partial T)$, and entropy $S(=-\partial F/\partial T)$ read: \begin{eqnarray} &&\frac{F_{inch}(T,n)}{L}=-\frac{2t\sin(n\pi)}{\pi} -\left( \frac{t^2}{U}\right)n^2\left[1-\frac{\sin(2n\pi)}{2n\pi}\right] \nonumber\\ &&-nk_BT\ln{2}-\frac{\pi(k_BT)^2}{12t^\ast \sin(n\pi)} +\cdots; \nonumber\\ \label{pasta} \end{eqnarray} \begin{eqnarray} \frac{E_{inch}(T,n)}{L}&=&-\frac{2t\sin(n\pi)}{\pi} -\left( \frac{t^2}{U}\right)n^2\left[1-\frac{\sin(2n\pi)}{2n\pi}\right] \nonumber\\ &+&\frac{\pi(k_BT)^2}{12t^\ast \sin(n\pi)} +\cdots; \label{pastel} \end{eqnarray} \begin{equation} \frac{S_{inch}(T,n)}{L}= nk_B\ln{2}+\frac{\pi k_B^2 T } {6t^\ast\sin(n\pi)}+\cdots, \label{next} \end{equation} where the $T$-dependent terms have coefficients with a hopping parameter $t^{\ast}$ given by, up to ${\cal O}(t/U)$, \begin{equation} t^{\ast} = t\left[1-\frac{2nt\cos(n\pi)}{U}\right]. 
\label{za} \end{equation} We stress that up to ${\cal O}(t/U)$ doubly occupied sites are forbidden \cite{377541}. In fact, \begin{eqnarray} \frac{\langle N_{\uparrow\downarrow}\rangle}{L}&=&\frac{\partial(E_{inch}/L)}{\partial U}= n^2\left( \frac{t}{U}\right)^2\left[1-\frac{\sin(2n\pi)}{2n\pi}\right] \nonumber\\ &-&\left( \frac{n\pi}{6}\right)\left(\frac{k_BT}{U} \right)^2\cot(n\pi)+\cdots. \label{papel} \end{eqnarray} The above results show that the charge degrees of freedom in the regime $J\ll k_{B}T \ll t$ or $J=0$ and $k_BT\ll t$ are described by a gas of free spinless fermions. Indeed, the first term in $E_{inch}(T,n)$ is the ground-state energy of a gas of free spinless fermions with dispersion $\varepsilon_k=-2t\cos{k}$; while $T$-dependent terms in $E_{inch}(T,n)$ and $S(T,n)$ are contributions from thermally excited spinless fermions, with a mass $\sim 1/t^\ast$, above the Fermi surface, which is defined by the wave vectors $k=\pm k_{F}$, with $k_{F}=n\pi$. The spin-incoherent regime is identified by noticing that the first term in the entropy $S_{inch}(T,n)$ indicates that the spin degrees of freedom are fully disordered. \section{Response Functions and Spin-incoherent LL parameters} \label{sec:incohell} The Hamiltonian of the system in the spin-incoherent regime and \textit{zero field} can be mapped onto the following charged bosonized LL Hamiltonian \cite{fieteprb2005}: \begin{equation} {\cal H}_{\text{inch}}=v^{(inch)}_c\int \frac{dx}{2\pi}\left[\frac{1}{g}(\partial_x\theta)^2+g(\partial_x\phi)^2 \right], \label{incoh} \end{equation} where $v_c^{(inch)}$ is the holon velocity, $(1/\pi)(\partial_x\theta)$ is the fluctuation in electron density and the commutation relation $[\theta(x),\partial_{x'}\phi(x')]=i\pi\delta(x-x')$ holds. The coupling $g$ can be written in terms of the LL parameter $K_c$, which governs the decay of the correlation functions: \begin{equation} K_c=\frac{1}{2g}. \label{eq:kc} \end{equation} The specific heat $C=-\frac{T}{L}\left(\partial^2F/\partial T^2\right)$: \begin{eqnarray} C_{inch}(T,n)=\gamma_{inch}k_B^2 T+\cdots, \label{sexta} \end{eqnarray} displays a free spinless Fermi gas form where the specific-heat coefficient $\gamma_{inch}$ and the holon velocity are, respectively, \begin{equation} \gamma_{inch}=\frac{\pi}{3v^{(inch)}_{c}}; \label{ginch} \end{equation} \begin{equation} v^{(inch)}_c =2t^\ast\sin(n\pi). \label{girl} \end{equation} On the other hand, the charge compressibility $\kappa^{-1}=n^2(\partial \mu / \partial n)$ reads: \begin{eqnarray} &&\kappa_{inch}^{-1}(T,n)=2\pi t n^2 \sin(n\pi) \nonumber\\ &&\times \left\{ 1-\left( \frac{2t}{U}\right)\left[ \frac{\sin(n\pi)}{\pi}+n\cos(n\pi)+{\cal O}\left(\frac{k_B^2 T^2}{t^2}\right)\right]\right\}\nonumber\\ && \label{indio} \end{eqnarray} Further, in the spin-incoherent LL regime $g_{inch}^{-1}=\pi v^{(inch)}_{c}\kappa_{inch} n^{2}$, we find \begin{equation} g_{inch}=1-\left(\frac{2t}{U}\right) \frac{\sin(n\pi)}{\pi}, \end{equation} and \begin{equation} K^{(inch)}_{c}=\frac{1}{2g_{inch}}=\frac{1}{2}+\left(\frac{t}{U}\right)\frac{\sin(n\pi)}{\pi}. \label{eq:kci} \end{equation} Notice that using Eqs. (\ref{za}) and (\ref{girl}), we can verify that $v_c^{(inch)}$ is not the holon velocity of the standard LL theory at $T=0$. 
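For concreteness, the strong-coupling expressions above are straightforward to evaluate numerically; the following minimal Python sketch (illustrative only, with units $t=k_B=1$ and sample values $n=3/4$ and $U=16t$) collects the renormalized hopping $t^{\ast}$, the holon velocity $v_c^{(inch)}$, the specific-heat coefficient $\gamma_{inch}$, the coupling $g_{inch}$, the Luttinger parameter $K_c^{(inch)}$ and the leading term of the inverse compressibility $\kappa_{inch}^{-1}$.
\begin{verbatim}
# Illustrative evaluation (units t = k_B = 1) of the spin-incoherent LL
# parameters derived in this section, for sample values n = 3/4, U = 16t.
import numpy as np

t, U, n = 1.0, 16.0, 0.75

t_star = t*(1.0 - 2.0*n*t*np.cos(n*np.pi)/U)       # renormalized hopping t*
v_c    = 2.0*t_star*np.sin(n*np.pi)                # holon velocity v_c^(inch)
gamma  = np.pi/(3.0*v_c)                           # specific-heat coefficient
g_inch = 1.0 - (2.0*t/U)*np.sin(n*np.pi)/np.pi     # coupling g
K_c    = 0.5 + (t/U)*np.sin(n*np.pi)/np.pi         # Luttinger parameter K_c^(inch)
bracket   = 1.0 - (2.0*t/U)*(np.sin(n*np.pi)/np.pi + n*np.cos(n*np.pi))
kappa_inv = 2.0*np.pi*t*n**2*np.sin(n*np.pi)*bracket  # inverse compressibility
                                                      # (leading order in T)
print(t_star, v_c, gamma, g_inch, K_c, kappa_inv)
\end{verbatim}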
Lastly, since \cite{schulz} $\sigma_0=2K_c v_c$, the Drude weight that measures the dc peak in the conductivity, $\sigma(\omega)=\sigma_0\delta(\omega)$, in the spin-incoherent LL regime is given by \begin{equation} \sigma_0^{(inch)}=2t\sin(n\pi)\left[ 1+\frac{2t}{U}\left(\frac{\sin(n\pi)}{\pi}-n\cos(n\pi)\right)\right], \label{trans} \end{equation} where use was made of Eqs. (\ref{eq:kci}) and (\ref{sv}). We also confirm the spin-incoherent regime by probing the spin degrees of freedom through the susceptibility $\chi(T,\mu)$. As shown in Appendix \ref{susInch}, the canonical susceptibility and spinon velocity read, respectively: \begin{eqnarray} \chi_{inch}(T,n)= { \mu_B^2\beta n} \left[1-\frac{n v_s}{\pi k_BT} +{\cal O}\left(\frac{J}{t}\right)\right]; \label{graviola} \end{eqnarray} \begin{equation} v_s=\frac{2\pi t^2}{U}\left[1-\frac{\sin(2n\pi)}{2n\pi}\right], \label{sv} \end{equation} where $v_s$ is the strong-coupling spinon velocity \cite{schulz}. The correction of ${\cal O}(v_s/k_BT)$ to the dominant Curie response is the one we expect in view of the highly excited spin degrees of freedom, and implies $v_s(n)|_{U=\infty}=0$, for any value of $T$. For finite $J$, we use the fluctuation-dissipation theorem: $\chi=\beta\int G(x)\,dx$, where $G(x)$ is the spin-correlation function. In order to satisfy Eq. (\ref{graviola}), $G(x)=\mu_B^2n[\delta(x)-ne^{-x/\xi}]$, with a correlation length $\xi$ given by the expected result \cite{142585,Hirokazu1991,Klumper1996}: $\xi=v_s/(\pi k_BT)\sim [J/(2k_BT)] \ll 1$, thus confirming the spin-incoherent regime for finite $J$. \subsection{$T\rightarrow 0$ limit: the standard LL regime, with charge and spin collective excitations} Here we show that we can infer the parameters of the standard LL regime, which settles as $T\rightarrow0$, from the above spin-incoherent results. In doing so, we take advantage of the description of the $U\rightarrow\infty$ limit of the Hubbard chain put forward in Ref. \onlinecite{5515475}. In particular, by using the Bethe ansatz solution, it has been shown that the ground-state wave function of the system can be constructed as a product of a spinless fermion wave function $|\Psi\rangle$ and a squeezed spin wave function $|\chi\rangle$. The wave function $|\chi\rangle$ are eigenfunctions of the following Heisenberg Hamiltonian: \begin{equation} \label{Heisenbergnew} {\cal H}_{S}=\sum_{i=1}^{N}\sum_{\alpha=x,y,z}\tilde{J}^{\alpha} \left(S_{i}^{\alpha}S_{i+1}^{\alpha}-\frac{1}{4}\delta_{\alpha,z}\right), \end{equation} where \begin{equation} \label{Jnew} \tilde{J}^{\alpha}=n\frac{4t^{2}}{U}\left[1-\frac{\sin\left(2n\pi\right)}{2n\pi}\right] \end{equation} is determined by the ground-state energy wave function of the spinless fermions $|\Psi^{GS}\rangle$. Notice that, at half filling, we have the standard coupling $J=4t^{2}/U$. Therefore, the contribution of $\cal H_{S}$ to the ground-state energy per site is given by \begin{eqnarray} \label{GSenergy} \frac{\langle \chi^{GS}|{\cal H}_{S}| \chi^{GS} \rangle}{L}&\equiv&\frac{E^{GS}}{L}=-n^{2} \left(\frac{4t^{2}}{U}\right)\frac{\left[1-4\gamma_{S}\left(T=0\right)\right]}{4} \nonumber\\ &\times&\left[1-\frac{\sin\left(2n\pi\right)}{2n\pi} \right], \end{eqnarray} where \begin{eqnarray} \gamma_S(T)=\langle {\bf S}_i\cdot {\bf S}_{i+1}\rangle =\left\{ \begin{array}{r} 1/4-\ln{2}, \quad T=0 ;\\ 0, \quad k_BT\gg t^2/U, \end{array} \right.\label{gammaS} \end{eqnarray} denotes the $T$-dependent nearest-neighbor spin correlation function of the Heisenberg model \cite{38363}. 
This contribution at $T=0$, together with that of spinless fermions [first term in Eq.~(\ref{pastel})] is the exact ground-state result up to ${\cal O}(t/U)$ \cite{6930,412326,641831,5515475,2351196,*anderson2017theory} of the 1D $t$-$J$ model. We thus infer that the ground state energy of the Hubbard chain in the spin-incoherent regime obtains through the replacement of $\gamma_{S}\left(T=0\right)$ by $\gamma_{S}\left(T\gg J/k_{B}\right)=0$. This correspondence was already noticed in the study of the thermodynamics of the Hubbard chain in the spin-disordered regime at half filling \cite{394845}. We have also noted that several expressions valid in the spin-incoherent LL regime differ from the corresponding ones at $T=0$ by the multiplying factor $[1-4\gamma_S(T=0)]$. \begin{figure} \begin{center} \includegraphics*[width=0.47\textwidth]{fig1.eps} \caption{(color online). (a) Charge velocity [Eq. (\ref{girls})] and (b) correlation exponent $K_c$ [Eq. (\ref{noises})] at $T=0$ as a function of $n$ for $U=16t$. In both figures, the dots displayed were obtained from Ref.~\onlinecite{schulz}.} \label{patrol} \end{center} \end{figure} Consider first the charge velocity at $T=0$: \begin{eqnarray} v_c(T=0,n)&=&2t\sin(n\pi) \nonumber\\ &&\times \left \{1-\frac{2[1-4\gamma_S(0)]nt\cos(n\pi)}{U}\right\},\nonumber\\ &=&2t\sin(n\pi)\left [1-\frac{8\ln{2}}{U}nt\cos(n\pi)\right] \label{girls} \end{eqnarray} which is the extension of Eq.~(\ref{girl}) to $T=0$ using Eq. (\ref{gammaS}), in agreement with Bethe-ansatz analytical results \cite{*[][{. Notice a factor of 2 discrepancy, in the $(t/U)$ correction, for the prediction of $v_c$: our Eq. (\ref{girls}) and Eq. (6.37) of this citation.}] pencsol} of the strongly coupled Hubbard model at $T=0$. In Fig.~\ref{patrol}(a) we plot $v_c(T=0)$ as a function of $n$ for $U=16t$. Note the remarkable agreement with early Bethe-ansatz numerical \cite{schulz} result at $T=0$. Now, consider the LL parameter at $T=0$: \begin{eqnarray} K_{c}(T=0,n)&=&\frac{1}{2}+ [1-4\gamma_S(0)]\left(\frac{t}{U}\right) \frac{\sin(n\pi)}{\pi}\nonumber\\ &=&\frac{1}{2}+ \frac{4\ln{2}}{U\pi}t\sin(n\pi). \label{noises} \end{eqnarray} The validity of this formula is confirmed in Fig.~\ref{patrol}(b), where the plot of $K_{c}(T=0,n)$ as a function of $n$ for $U=16t$ is exhibited. In addition, we note that for $n\rightarrow 0$: $K_{c}(T=0,n)=1/2+(4\ln{2})(nt/U)$, which coincides with the expression for $K_c$ reported in Ref.~\onlinecite{79195114}. The previous results imply that the Drude weight \cite{*[{For $T^2$ contribution, see: }] [] Fujimoto1998} at $T=0$ is given by \begin{eqnarray} \sigma_0(T=0)=2K_c v_c=2t\sin(n\pi)\left\{1+8\ln{2}\left(\frac{t}{U}\right) \right. \nonumber\\ \left. \times\left[\frac{\sin(n\pi)}{\pi}-n\cos(n\pi)\right]\right\}, \label{dru} \end{eqnarray} where use of Eqs.~(\ref{girls}) and (\ref{noises}) has been made. As shown in Fig.~\ref{gato}, the agreement between this formula for $U=16t$ and early numerical results \cite{schulz} is excellent. \begin{figure} \begin{center} \includegraphics*[width=0.27\textwidth]{fig2.eps} \caption{(color online). Drude weight as a function of bandfilling for $U=16t$ and $T=0$. Solid curve is the plot of Eq.~(\ref{dru}) and the dots in highlight were obtained from Ref.~\onlinecite{schulz}. 
} \label{gato} \end{center} \end{figure} Lastly, concerning the specific-heat coefficient, as $T\rightarrow0$ the spin-spin correlation function displays power-law behavior and the prediction for $\gamma$ is \cite{essler2005one}: \begin{equation} \gamma =\frac{\pi}{3}\left(\frac{1}{v_c}+\frac{1}{v_s}\right)_{T=0}. \label{gammat0} \end{equation} \section{$U=\infty$ as an exact ideal gas of exclusons or free spinless fermions} \label{sec:uinfity} The concept of a Luttinger liquid is the paradigm for describing the low-energy physics of interacting electron systems in one dimension. Notwithstanding, it is important to investigate alternative approaches that can shed light on the physics of such systems. In this context, a remarkable result that follows from previous works \cite{102146404,82125126,PhysRevB.61.7941,*PhysRevB.72.165109} by two of the authors is that the properties of $U=\infty$ limit can be viewed as derived from an ideal excluson gas of two fractional species: $\alpha=1$ for particles with spin up and $\alpha=2$ for particles with spin down, coupled by the Haldane statistical matrix \begin{eqnarray} [g]_{{ k}{ k'};\alpha\alpha'}=\delta_{{ k}{ k'}}\left( \begin{array}{cc} 1&1\\ 0&1\end{array}\right), \label{canto} \end{eqnarray} in which case double occupation is excluded. In fact, the same $3\times 3$ statistical matrix describes the referred Hubbard models \cite{102146404,82125126,PhysRevB.61.7941,*PhysRevB.72.165109}, including double occupancy effects. This is confirmed by noting that Eq.~(\ref{tanto}) with $U=\infty$ can be written in the form: \begin{equation} \Omega_{\infty}(T,\mu_{\infty},H)=-\frac{1}{\beta}\sum_{k,\alpha}\ln(1+w_{k,\alpha}^{-1}), \end{equation} where $w_{k,\alpha}$'s satisfy the Haldane-Wu distribution \cite{HWF}: \begin{eqnarray} w_{k,1}&=&e^{\beta({\varepsilon}_{k,1}-\mu_{\infty})}, \\ w_{k,2}&=&(1+w_{k,1})e^{\beta({\varepsilon}_{k,2} - {\varepsilon}_{k,1})}. \end{eqnarray} In addition, $\langle{n}_{{k},\alpha}\rangle$ satisfies the exclusion relation: \begin{equation} \langle{n}_{k,\alpha}\rangle w_{k,\alpha}=1-\sum_{k',\lambda}g_{kk';\alpha\lambda} \langle {n}_{k',\lambda}\rangle, \end{equation} where \begin{eqnarray} \langle{n}_{k,\alpha}\rangle=\frac{e^{-\beta({\varepsilon}_{k,\alpha}-\mu_{\infty})}} {{\displaystyle 1+\sum_{\lambda=1}^{2}e^{-\beta({{\varepsilon}}_{k,\lambda}-\mu_{\infty})}}}. \label{mesa} \end{eqnarray} More specifically: \begin{eqnarray} \langle n_{k,1}\rangle &=&e^{2\beta\mu_B H}\langle n_{k,2}\rangle, \label{eq:excinf1}\\ &=& \frac{e^{\beta\mu_B H}}{e^{\beta(\varepsilon_k-\mu_\infty)}+2\cosh(\beta\mu_B H)},\label{eq:excinf2} \end{eqnarray} in agreement with an independent calculation for the Hubbard model at $U=\infty$ in Ref. \onlinecite{pronko}. Although the matrix given in Eq.~(\ref{canto}) is asymmetric, it should be noted that the spin-up and spin-down symmetry is preserved, as we can see from Eq.~(\ref{mesa}): $\langle{n}_{k,1} \rangle_{H}=\langle{n}_{k,2} \rangle_{-H}$. Moreover, the entropy reads: \begin{eqnarray} &&S_{\infty}(T,\mu,H)=-k_B\sum_{k}[\langle{n}_{k,1}\rangle\ln{\langle{n}_{k,1}\rangle} +\langle{n}_{k,2}\rangle\ln{\langle{n}_{k,2}\rangle} \nonumber\\ &&+(1-\langle{n}_{k,1}\rangle-\langle{n}_{k,2}\rangle) \ln{(1-\langle{n}_{k,1}\rangle-\langle{n}_{k,2}\rangle)}], \label{peso} \end{eqnarray} which carries the signature of the statistical matrix in Eq. (\ref{canto}). In zero field, Eq.~(\ref{mesa}), or Eqs. 
(\ref{eq:excinf1}) and (\ref{eq:excinf2}) reduces to \begin{equation} \langle{n}_{k,1}\rangle_{H=0}=\langle{n}_{k,2}\rangle_{H=0} =\frac{1}{e^{\beta({\varepsilon}_{k}-\mu_\infty)}+2}\equiv \langle n_{k}\rangle; \label{money} \end{equation} in agreement with early results \cite{377541}, so $\langle n_{k}\rangle$ develops a rigorous step discontinuity at the Fermi surface as $T\rightarrow 0$, with \begin{equation} n =\frac{2}{L}\sum_{k}\langle{n}_{k}\rangle_{T=0}. \end{equation} We also mention that the fractional character of $\langle n_{k}\rangle$, Eq.~(\ref{money}), stems from the fact that, in the exclusion formalism, both charge and spin degrees of freedom are combined to form a single distribution. However, by summing up in the fractional species, we obtain the free spinless fermion distribution: \begin{equation} \langle n^{(F)}_{k}\rangle=\langle{n}_{k,1}\rangle_{H=0}+\langle{n}_{k,2}\rangle_{H=0} =\frac{1}{e^{\beta(\varepsilon_k-\mu^{(F)}_{\infty})}+1}, \label{eq:fsl-exc} \end{equation} where $\mu^{(F)}_{\infty}$ is the chemical potential of the free spinless Fermi gas: \begin{equation} \mu^{(F)}_{\infty}(T,n)=\mu_{\infty}+k_BT\ln{2}. \label{eq:musl-exc} \end{equation} Lastly, using Eqs. (\ref{eq:fsl-exc}) and (\ref{eq:musl-exc}), the zero-field entropy per site in Eq. (\ref{peso}) can be written as \begin{eqnarray} \frac{S_{\infty}(T,n)}{L}&=&nk_B \ln{2}-\frac{k_B }{L}\sum_k[\langle n^{(F)}_{k}\rangle \ln \langle n^{(F)}_{k}\rangle\nonumber\\ & &+(1-\langle n^{(F)}_{k}\rangle)\ln(1-\langle n^{(F)}_{k}\rangle)]\label{sfermion1}\\ &=&nk_B\ln{2}+\frac{S_{\infty}^{(F)}(T,n)}{L},\label{sfermion2} \end{eqnarray} where $S^{(F)}_{\infty}$ is the entropy of the free spinless Fermi gas. We stress that Eqs. (\ref{sfermion1})-(\ref{sfermion2}) or (\ref{peso}) in zero field reproduce the two low-$T$ leading terms in Eq. (\ref{next}) in the limit $U=\infty$, i. e., $t^*=t$, after eliminating $\mu_{\infty}$ or $\mu^{(F)}_\infty$ in favor of $n$. Therefore, the specific heat calculated from either of the referred equations has the same value, since the difference between the two forms of the entropy function is a constant term, $nk_B\ln{2}$, associated with the disordered spin degrees of freedom. \section{Fractional Landau Luttinger liquid} \label{sec:flll} In the previous section, we have described the low-energy physics of the Hubbard chain for $J(\equiv 4t^2/U)\ll k_BT\ll E_F(\sim t)$ from the standpoint of a spin-incoherent LL, and have determined the parameters $g$ and $v_c$ that govern this class of fluid. In this section, our aim is to show that the system can also be mapped onto a fractional Landau LL \cite{prwu,242130,57977}. This phenomenological approach, which is a suitable generalization of the standard Landau Fermi liquid theory, can shed light on the underlying aspects that characterize the crossover behavior from the fixed point associated with $U=\infty$ at $T=0$ to the spin-incoherent LL regime at a given temperature $k_B T>J\ll t$. In Fig. \ref{fig:diag} we present an schematic phase diagram $k_B T$ versus $J/t = 4t/U \ll 1$ that illustrates two possible thermodynamic paths of the Hubbard model to reach the spin-incoherent LL regime. The first one (Path I) is physically attained by increasing the temperature of the system, initially in the ground state of the strong-coupling regime of the LL. 
The system undergoes a crossover and ends up, for $T\gg J/k_B$, in a spin-disordered regime characterized by a zero pair spin correlation function: $\langle {\bf S}_i\cdot {\bf S}_{i+1}\rangle=\gamma_S(T)=0$, as discussed in Section \ref{sec:incohell}. In the second path (Path II), which helps us to understand the Landau LL approach, the system starts at the fixed point $T = 0$ and $U = \infty$; the temperature then increases up to a value at which the interaction is switched on and triggers the system into the spin-incoherent regime. \begin{figure} \begin{center} \includegraphics*[width=0.45\textwidth]{fig3.eps} \caption{(color online). Schematic phase diagram of the Hubbard chain in the strong-coupling regime, $J/t=4t/U\ll 1$, and $k_B T\lesssim t$. At the line $J=0$ the electrons are in a spin-incoherent Luttinger liquid (LL) phase with Curie response (spin correlation length $\xi=0$). Further, this $U=\infty$ fixed point is exactly mapped onto an ideal gas with two species obeying the Haldane-Wu exclusion fractional statistics, i.e., a fractional LL. At the $T=0$ line, excluding the point $J=0$, the system is found in an LL phase with algebraic decay of the charge and spin correlation functions. Increasing $T$ from a point on this line, Path I in the diagram, there is a crossover to a spin-incoherent regime with spin correlation length $\xi=v_s/(\pi k_BT)\sim [J/(2k_BT)] \ll 1$, so $\langle \mathbf{S}_i\cdot\mathbf{S}_{i+1}\rangle=0$. This regime can also be achieved through Path II, associated with both the fractional LL and the fractional Landau LL: starting at $T=0$ and $U=\infty$, the temperature increases up to a value at which the interaction is switched on and triggers the system into the spin-incoherent regime. } \label{fig:diag} \end{center} \end{figure} We thus assume that when corrections of ${\cal O}(t^2/U)$ are switched on, the low-energy spectrum can be obtained from the following expansion of the functional $E_L(T)-E_0(T=0)$: \begin{eqnarray} E_L(T)-E_0(T=0)&=&\sum_{k,\alpha}\tilde{\varepsilon}_{k,\alpha}\delta\langle\hat{n}_{k,\alpha}\rangle \nonumber\\ &+&\frac{1}{2}\sum_{k,\alpha,k',\alpha'}f_{k,\alpha;k',\alpha'}\delta\langle\hat{n}_{k,\alpha}\rangle \delta\langle\hat{n}_{k',\alpha'}\rangle,\nonumber\\ && \label{dog} \end{eqnarray} where $E_0(T=0)$ is the ground state energy, \begin{equation} \tilde{\varepsilon}_{k,\alpha}=-2t^{\ast}\cos{k}, \end{equation} $t^{\ast}$ is the renormalized hopping amplitude with no contribution from the quasiparticle interaction, \begin{equation} \delta\langle\hat{n}_{k,\alpha}\rangle= \langle\hat{n}_{k,\alpha}(T)\rangle- \langle\hat{n}_{k,\alpha}(0)\rangle, \label{inferno} \end{equation} and $f_{k,\alpha;k',\alpha'}$ represents the interaction energy between quasiparticles. In addition, it is assumed that the entropy has the same \textit{fractional} functional form as $S_{\infty}$, Eq.~(\ref{peso}): \begin{eqnarray} &&S(T,\mu,H)=-k_B\sum_{k}[\langle{\hat n}_{k,1}\rangle\ln{\langle{\hat n}_{k,1}\rangle} +\langle{\hat n}_{k,2}\rangle\ln{\langle{\hat n}_{k,2}\rangle} \nonumber\\ &&+(1-\langle{\hat n}_{k,1}\rangle-\langle{\hat n}_{k,2}\rangle) \ln{(1-\langle{\hat n}_{k,1}\rangle-\langle{\hat n}_{k,2}\rangle)}]. \label{pesos} \end{eqnarray} This means that the statistics of the fractional quasiparticles are also governed by the statistical matrix (\ref{canto}).
The equilibrium distribution of the quasiparticles is obtained by solving the equation $\partial\Omega/\partial\langle\hat{n}_{k,\alpha}\rangle=0$, where $\Omega=E-TS-\mu N$ and \begin{equation} n=\frac{1}{L}\sum_{k,\alpha}\langle\hat{n}_{k,\alpha}\rangle. \label{numero} \end{equation} After some algebra, one finds a distribution that is formally identical to Eq.~(\ref{mesa}): \begin{eqnarray} \langle{\hat n}_{k,\alpha}\rangle=\frac{e^{-\beta({\hat\varepsilon}_{k,\alpha}-\mu_L)}} {{\displaystyle 1+\sum_{\lambda=1}^{2}e^{-\beta({{\hat\varepsilon}}_{k,\lambda}-\mu_L)}}}, \label{mesas} \end{eqnarray} where \begin{equation} \hat{\varepsilon}_{k,\alpha}=\tilde{\varepsilon}_{k,\alpha}+ \sum_{k',\alpha'}f_{k,\alpha;k',\alpha'}\delta\langle\hat{n}_{k',\alpha'}\rangle \label{pincelada} \end{equation} is the energy of the fractional Landau LL quasiparticle \cite{pines}. By symmetry considerations, the interaction energy between quasiparticles satisfies: \begin{eqnarray} f_{k,1;k',1}=f_{k,2;k',2}\equiv f_{k,k'}^{s}+f_{k,k'}^{a},\\ f_{k,2;k',1}=f_{k,1;k',2}\equiv f_{k,k'}^{s}-f_{k,k'}^{a}, \end{eqnarray} which define the spin symmetric $f_{k,k'}^{s}$ and spin antisymmetric $f_{k,k'}^{a}$ parts of the fractional quasiparticle interaction \cite{pines}. In terms of these quantities, one has in zero field \begin{equation} \hat{\varepsilon}_{k,1}=\hat{\varepsilon}_{k,2}=-2t^{\ast}\cos{k} +2\sum_{k'}f_{k,k'}^{s}\delta\langle\hat{n}_{k'}\rangle \equiv \hat{\varepsilon}_{k}. \end{equation} In the following, it is our task to demonstrate that the above phenomenological approach proves useful in the understanding of the underlying low-energy behavior of the Hubbard chain in the spin-incoherent regime. We emphasize that, regardless the fact that the quasiparticles effects occur in the neighborhood of the Fermi surface $\{\pm k_F\}$, the final results are shown to be fully compatible with those derived in the previous sections through a proper identification of the fractional Landau LL parameters. \subsection{Thermodynamic properties} In order to compute the specific heat $C(T,n)$, we make the usual Landau assumption of neglecting corrections to $\hat{\varepsilon}_{k,\alpha}$ due to interaction between the quasiparticles, so that only the hopping amplitude is renormalized: \begin{equation} \hat{\varepsilon}_{k}\simeq \tilde{\varepsilon}_{k} =-2t^{\ast}\cos{k}. \label{tela} \end{equation} Next, we insert Eq.~(\ref{tela}) into Eq.~(\ref{numero}) in order to obtain the fractional Landau LL chemical potential, $\mu_L$: \begin{eqnarray} \mu_L(T,n)&=&-2t^{\ast}\cos(n\pi)-k_BT\ln{2} \nonumber\\ &+&\frac{\pi^2\cos(n\pi)(k_BT)^2}{12t^{\ast}\sin^2(n\pi)}+\cdots; \label{sabonete} \end{eqnarray} therefore, the fractional Landau LL energy per site, and the fractional Landau LL specific heat, thus read: \begin{eqnarray} \frac{E_L(T,n)}{L}-\frac{E_0(T=0,n)}{L}&=&\frac{2}{L}\sum_{k}\tilde{\varepsilon}_{k}\left[\langle \tilde{n}_k\rangle - \langle\hat{n}_{k}(0)\rangle\right] \nonumber\\ &=&\frac{\pi(k_BT)^2}{12t^{\ast}\sin(n\pi)}+\cdots, \label{ira} \end{eqnarray} where \begin{equation} \langle \tilde{n}_k\rangle=\frac{1}{e^{\beta(\tilde{\varepsilon}_{k}-\mu_L)}+2}; \label{nlandau} \end{equation} and \begin{equation} C_L(T,n)=\frac{\pi k_B^2 T}{6t^{\ast}\sin(n\pi)} +\cdots. \label{taxi} \end{equation} Comparing the above equation with Eqs. (\ref{za}), (\ref{sexta})-(\ref{girl}) associated with $C_{inch}$, we confirm our choice of $t^{\ast}$ in Eq. (\ref{za}). 
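As a simple numerical consistency check (illustrative only; units $t=k_B=1$, with sample values $n=3/4$ and $U=80t$), one may solve the constraint in Eq.~(\ref{numero}) for $\mu_L$ on a fine $k$-mesh, evaluate the energy per site with the distribution of Eq.~(\ref{nlandau}), and compare a finite-difference estimate of the specific heat with the leading low-$T$ expression above:
\begin{verbatim}
# Minimal numerical check (units t = k_B = 1) of the low-T fractional
# Landau LL specific heat, for sample values n = 3/4 and U = 80t.
import numpy as np
from scipy.optimize import brentq

t, U, n, L = 1.0, 80.0, 0.75, 4000
t_star = t*(1.0 - 2.0*n*t*np.cos(n*np.pi)/U)     # renormalized hopping t*
k   = -np.pi + 2.0*np.pi*np.arange(L)/L          # Brillouin-zone mesh
eps = -2.0*t_star*np.cos(k)                      # quasiparticle dispersion

def density(mu, T):                              # (2/L) sum_k <n_k>, two species
    return (2.0/L)*np.sum(1.0/(np.exp((eps - mu)/T) + 2.0))

def energy(T):                                   # energy per site at temperature T
    mu = brentq(lambda m: density(m, T) - n, -10.0, 10.0)
    return (2.0/L)*np.sum(eps/(np.exp((eps - mu)/T) + 2.0))

T, dT = 0.05, 1.0e-3
C_num = (energy(T + dT) - energy(T - dT))/(2.0*dT)   # numerical C = dE/dT
C_low = np.pi*T/(6.0*t_star*np.sin(n*np.pi))         # leading low-T expression
print(C_num, C_low)
\end{verbatim}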
The consistency of the fractional Landau LL approach is confirmed by the prediction for the entropy. In fact, by inserting Eqs. (\ref{tela}), (\ref{sabonete}) and (\ref{nlandau}) into Eq. (\ref{pesos}), we obtain \begin{equation} \frac{S_{L}(T,n)}{L}=nk_B\ln{2}+\frac{\pi k_B^2T}{6t^{\ast}\sin(n\pi)}+\cdots, \label{entropyll} \end{equation} in complete agreement with $S_{inch}(T,n)$ in Eq.~(\ref{next}). Remarkably, the fractional Landau LL quasiparticles carry all the entropy of the system in the spin-incoherent regime $J\ll k_B T\ll E_F$, and correctly describe the fermionic spinless charge degrees of freedom and the background of fully disordered spin degrees of freedom. The prediction for $\kappa$ is obtained as follows. From $n=\sum_{k}2\langle\hat{n}_k\rangle/L$, we get \begin{equation} \frac{\partial n}{\partial\mu}=\frac{2}{L}\sum_{k}\frac{\beta (1-\partial{\hat{\varepsilon}_k}/\partial\mu) e^{\beta(\hat{\varepsilon}_k-\mu)}}{[e^{\beta(\hat{\varepsilon}_k-\mu)}+2]^2}, \label{verdureiro} \end{equation} where \begin{equation} \frac{\partial{\hat{\varepsilon}_k}}{\partial\mu}=2\sum_{k'}\frac{f_{k,k'}^{s}\beta (1-\partial{\hat{\varepsilon}_{k'}}/\partial\mu) e^{\beta(\hat{\varepsilon}_{k'}-\mu)}}{[e^{\beta(\hat{\varepsilon}_{k'}-\mu)}+2]^2}. \label{verdura} \end{equation} At low $T$, the above integrands have sharp peaks centered at the $k$ vectors of the Fermi surface $\{\pm k_{F}\}$; therefore, one obtains (see Appendix \ref{tudo}) \begin{eqnarray} \frac{\partial{\hat{\varepsilon}_k}}{\partial\mu}&=&(f_{k,k_F}^{s}+f_{k,-k_F}^{s}) \sum_{k'}\frac{\beta (1-\partial{\hat{\varepsilon}_{k'}}/\partial\mu) e^{\beta(\hat{\varepsilon}_{k'}-\mu)}}{[e^{\beta(\hat{\varepsilon}_{k'}-\mu)}+2]^2} \nonumber\\ &=&(f_{k,k_F}^{s}+f_{k,-k_F}^{s})\left(\frac{L}{2}\right)\left(\frac{\partial n}{\partial\mu}\right). \label{fruta} \end{eqnarray} By inserting this back into $\frac{\partial n}{\partial\mu}$ and using $\kappa^{-1}=n^2(\partial \mu / \partial n)$, we find \begin{equation} \kappa_L^{-1}(T,n)=2\pi t^{\ast}n^2\sin(n\pi)(1+F_0^{s})+\cdots, \label{patinha} \end{equation} where \begin{equation} F_0^{s}=\frac{L(f_{k_F,k_F}^{s}+f_{k_F,-k_F}^{s})}{4\pi t^{\ast}\sin(n\pi)} \label{zaza} \end{equation} is the Landau-Luttinger parameter associated with the spin symmetric part of the quasiparticle interaction at the Fermi level $(k_F=n\pi)$. A comparison of Eqs.~(\ref{patinha}) and (\ref{indio}) implies: \begin{equation} F_0^s= -\frac{v_{c,\infty}}{\pi U}, \end{equation} with $v_{c,\infty}=v_c^{(inch)}|_{U=\infty}=2t\sin(n\pi)$. Notice that $F_0^s$ is in fact the ratio of the total kinetic energy per site for $U=\infty$ at $T=0$ to the on-site Coulomb repulsion $U$. We now calculate the prediction for $\chi$. In the presence of a magnetic field, we replace $\hat{\varepsilon}_{k}$ by $\hat{\varepsilon}_{k,\alpha}=\tilde{\varepsilon}_{k}\mp \mu_BH$ in Eq.~(\ref{mesas}).
Thus the spin susceptibility is given by \begin{eqnarray} &&\chi_L(T,n)=\frac{\mu_B^2}{L}\sum_{k} \frac{\partial}{\partial H}(\langle\hat{n}_{k,1} \rangle - \langle\hat{n}_{k,2}\rangle)_{H=0} \nonumber\\ &&=\frac{ \mu_B^2}{L}\sum_{k}\frac{\beta}{[e^{\beta(\hat{\varepsilon}_k-\mu)}+2]} \frac{\partial}{\partial H}( \hat{\varepsilon}_{k,2}- \hat{\varepsilon}_{k,1})_{H=0}, \nonumber\\ \label{grampo} \end{eqnarray} where \begin{eqnarray} \frac{\partial}{\partial H} (\hat{\varepsilon}_{k,2} - \hat{\varepsilon}_{k,1} )_{H=0}=2 \nonumber\\ +2\sum_{k'}f^{a}_{k,k'} \frac{\partial}{\partial H} (\langle\hat{n}_{k',2} \rangle- \langle\hat{n}_{k',1} \rangle )_{H=0}. \end{eqnarray} Since we expect $f_{k,k'}^{a}={\cal O}(t^2/U)$, we can take \begin{equation} \frac{\partial}{\partial H}(\langle\hat{n}_{k',2} \rangle - \langle\hat{n}_{k',1} \rangle)_{H=0}=-\frac{2\beta}{e^{\beta(\hat{\varepsilon}_{k'}-\mu)}+2} \end{equation} in the last expression. Therefore, the spin susceptibility becomes \begin{equation} \chi_L(T,n)=\frac{\mu_B^2n}{k_BT}(1-\beta t F_0^{a})+\cdots, \label{suco} \end{equation} where \begin{equation} F_{0}^{a}=\frac{4}{tN}\sum_{k}\sum_{k'}f_{k,k'}^{a}\langle \hat{n}_{k}\rangle \langle \hat{n}_{k'}\rangle. \label{ter} \end{equation} In contrast to Eqs.~(\ref{verdureiro}) and (\ref{verdura}), the absence of sharp peaks at the Fermi surface in Eq.~(\ref{ter}) is a clear manifestation of the fact that the spin degrees of freedom are highly thermalized. A comparison between Eqs.~(\ref{graviola}) and (\ref{suco}), however, allows us to identify \begin{equation} F_{0}^{a}=\frac{nv_s}{\pi t} \label{foton} \end{equation} without the need of specifying the range of integration. If, in addition, we make the assumption that $f_{k,k'}^{a}$ is $k$-independent, Eq.~(\ref{foton}) implies $Lf^{a}=v_s/\pi$. Notice also that $F_{0}^{a}$ is the ratio between the energy per site of the Heisenberg Hamiltonian in the spin-incoherent regime and $nt$ [see Eqs.~(\ref{pasta}), (\ref{pastel})-(\ref{GSenergy}) and (\ref{gammaS}) ]. Lastly, we shall digress on the eventual crossover of the magnetic susceptibility as $T\rightarrow 0$. Unlike the crossover associated with the charge response functions, which is governed by the spin-spin correlation function, as discussed in Sec. \ref{sec:incohell}, in the magnetic susceptibility case there is a change of paradigm as $T\rightarrow 0$. First, as a guess, we notice that, to ${\cal O}(t^2/U)$: $\lim_{T\rightarrow 0}\chi_L(T,n)=\lim_{T\rightarrow 0} \frac{\mu_B^2\beta n}{1+\beta t F_0^a}=\pi\mu_B^2/v_s$. It thus suggests the following \textit{ansatz} for the Landau parametrization: $\lim_{T\rightarrow 0}(tF_0^a)=(-\beta^{-1}+n \pi v_s/2)$, which implies \cite{essler2005one} $\lim_{T\rightarrow 0}\chi_L(T,n)=2\mu_B^2/\pi v_s$. It entails that, as $T\rightarrow 0$, the strong-coupling exchange enhancement of ${\cal O}(t^2/U)$ suppresses the Curie behavior and gives rise to the LL power-law decay of the spin correlation function and the very low-$T$ behavior of $C(T)$ shown in Fig. \ref{cvfiete}, with dominant spinon contribution, see Eq. (\ref{gammat0}). \subsection{Drude Weight} In the presence of an external electric field $\phi$, the spectrum $E_{\infty}$ of the Hubbard chain with $U=\infty$, or $J=0$ in Eq. (\ref{eq:tj}), is altered according to the well known prescription \cite{65243,*Kohn1964} \begin{equation} E_{\infty}\rightarrow \sum_{k}\varepsilon_k(\phi)n_k, \end{equation} where \begin{equation} \varepsilon_k(\phi)=-2t\cos(k+\phi). 
\end{equation} Since Eqs.~(\ref{dog}) and (\ref{pincelada}) establish an one-to-one mapping between the eigenstates of the Hamiltonian for $J=0$ and $J\neq0$, in the presence of $\phi$ we have \begin{eqnarray} E(\phi)-E_0=\sum_{k,\alpha}\tilde{\varepsilon}_{k,\alpha}(\phi) \delta\langle\hat{n}_{k,\alpha}\rangle_{\phi} \nonumber\\ +\frac{1}{2}\sum_{k,\alpha,k',\alpha'}f_{k,\alpha;k',\alpha'}\delta\langle\hat{n}_{k,\alpha}\rangle_{\phi} \delta\langle\hat{n}_{k',\alpha'}\rangle_{\phi}, \label{amendoim} \end{eqnarray} where \begin{eqnarray} &&\tilde{\varepsilon}_{k,\alpha}(\phi) =-2t^{\ast}\cos(k+\phi), \\ &&\hat{\varepsilon}_{k,\alpha}(\phi)=\tilde{\varepsilon}_{k,\alpha}(\phi)+\sum_{k', \alpha'}f_{k,\alpha; k',\alpha'} \delta\langle\hat{n}_{k',\alpha'}\rangle_{\phi}, \label{terreno} \\ &&\delta\langle\hat{n}_{k,\alpha}\rangle_{\phi}=\frac{1}{e^{\beta[\hat{\varepsilon}_{k,\alpha}(\phi)-\mu]}+2} -\langle\hat{n}_{k,\alpha}\rangle_{T=0\atop\phi=0}. \label{sacola} \end{eqnarray} We are now in a position to obtain the Drude weight \cite{65243,*Kohn1964} (see Appendix \ref{cru}): \begin{eqnarray} \sigma_0&=& -\frac{\pi}{L}\left[ \frac{\partial^2 E(\phi)}{\partial\phi^2} \right]_{\phi=0} \nonumber\\ &=&2t^{\ast} \sin(n\pi)-\left(\frac{L}{\pi}\right)(f_{k_F,k_F}^{s}-f_{k_F,-k_F}^{s}). \label{liga} \end{eqnarray} Now using Eq.~(\ref{trans}), one obtains \begin{equation} \frac{L(f_{k_F,k_F}^{s}-f_{k_F,-k_F}^{s})}{2\pi t^{\ast}\sin(n\pi)}=F_{0}^{s}. \label{oito} \end{equation} A combination of Eqs.~(\ref{zaza}) and (\ref{oito}) determines the spin symmetric part of the interaction energy between quasiparticles: \begin{eqnarray} L f_{k_F,k_F}^{s}=\frac{3\pi v_{c,\infty}}{2}(1-1/g), \label{ca}\\ L f_{k_F,-k_F}^{s}=\frac{\pi v_{c,\infty}}{2}(1-1/g), \label{cb} \end{eqnarray} with $v_{c,\infty}=v_c^{(inch)}|_{U=\infty}$. Note in addition that the renormalized hopping can be written as \begin{equation} t^{\ast}=t\left[\frac{v_c^{(inch)}}{v_{c,\infty}}\right]. \label{cc} \end{equation} It is now clear that Eq.~(\ref{foton}) and Eqs.~(\ref{ca})-(\ref{cc}) establish the connection between the fractional Landau LL parametrization and that of the LL in the spin-incoherent regime. \subsection{Specific heat and numerical data} We shall now demonstrate that in the spin-incoherent regime the fractional Landau LL approach provides a very good description of the $T$-behavior of the zero-field specific heat of the system derived from the entropy defined in Eq. (\ref{pesos}). We stress that this procedure will prove rewarding in establishing an exact connection between the fractional Landau LL and an \textit{interacting} spinless Fermi gas, similarly to the one that we have discussed between the fractional LL and the \textit{free} spinless Fermi gas in Section \ref{sec:uinfity}. However, the fractional Landau LL is valid only under the condition $J (=4t^2/U) \lesssim k_BT \lesssim E_F (\sim t)$, while the fractional LL is an exact description at $U=\infty$ and any $T$. In zero field, using the Landau assumption in the calculation of the specific heat, Eqs. 
(\ref{tela}) and (\ref{nlandau}), and summing up the two fractional species, we can obtain a direct relation between $\langle \tilde{n}_k \rangle$ and the \textit{interacting} spinless Fermi gas distribution function: \begin{equation} \langle \tilde{n}^{(F)}_{k}\rangle=2\langle \tilde{n}_k\rangle=\frac{1}{e^{\beta(\tilde{\varepsilon}_k-{\mu}^{(F)}_L)}+1}, \label{eq:nk-fll} \end{equation} with \begin{equation} \mu^{(F)}_L=\mu_L+k_BT\ln2, \label{eq:mu-f-e} \end{equation} where $\mu_L$ is the fractional Landau LL chemical potential and $\mu^{(F)}_L$ is the chemical potential of the related interacting spinless Fermi gas. Lastly, by replacing $\langle \hat{n}_k\rangle \rightarrow \langle \tilde{n}_k\rangle$ in Eq. (\ref{pesos}), and using Eqs. (\ref{eq:nk-fll}) and (\ref{eq:mu-f-e}), we can obtain a relation between the fractional Landau LL entropy, $S_{L}$, and the related interacting spinless Fermi gas entropy, $S_{L}^{(F)}$: \begin{eqnarray} \frac{S_{L}(T,n)}{L}&=&nk_B \ln{2}-\frac{k_B }{L}\sum_k[\langle \tilde{n}^{(F)}_{k}\rangle \ln \langle \tilde{n}^{(F)}_{k}\rangle\nonumber\\ & &+(1-\langle \tilde{n}^{(F)}_{k}\rangle)\ln(1-\langle \tilde{n}^{(F)}_{k}\rangle)]\label{eq:sl-f}\\ &=&nk_B\ln{2}+\frac{S_{L}^{(F)}(T,n)}{L}, \end{eqnarray} which is formally identical to Eq. (\ref{sfermion2}) at $U=\infty$. \begin{figure} \begin{center} \includegraphics*[width=0.45\textwidth]{fig4} \caption{(color online). Specific heat $C$ in units of $k_B$ as a function of the thermal energy $k_B T$ in units of $t$ for chains with $n=3/4$. The DMRG data are for a $t-J$ chain with 32 sites and $J=0.05t$ ($U=80t$), from Ref. \onlinecite{81075108}. Also shown are predictions from the fractional Landau LL in zero field for $U=80t$ and the fractional LL at $U=\infty$. Notably, the results of the fractional Landau LL are in very good agreement with the DMRG data in the spin-incoherent regime. For completeness, we show the straight line representing the $T\rightarrow 0$ limit of $C/k_B$, namely $\gamma k_BT$, with $\gamma$ given in Eq. (\ref{gammat0}). The inset shows details of the referred estimates for $C/k_B$ in a narrow low-$T$ interval. Notice that in Fig. \ref{fig:diag} Path I is associated with the DMRG data, while Path II is associated with the fractional LL and the fractional Landau LL.} \label{cvfiete} \end{center} \end{figure} The function $\mu_L(T,n)$, to order $(k_B T/t^\ast)^2$, is given by Eq. (\ref{sabonete}); however, in order to attain a good description for a wide range of temperatures we have calculated $\mu_L(T,n)$ numerically using the constraint equation \begin{equation} \frac{2}{L}\sum_{k}\langle \tilde{n}_k\rangle = n, \label{eq:vinc-exc} \end{equation} where $n$ is the average density of spinless fermions. From either entropy above, we can numerically calculate the specific heat of the fractional Landau LL gas using $C=T(\partial S/\partial T)$. In Fig. \ref{cvfiete} we show $C(T)/k_B$ for the fractional LL ($U=\infty$) and the fractional Landau LL for $U=80t$ ($J=0.05t$) for chains with $n=3/4$. The specific heat of the fractional Landau LL in zero field is derived using Eqs. (\ref{pesos}), (\ref{tela}) and (\ref{za}), for $U=80t$, whereas for the fractional LL, $U=\infty$, use is made of Eq. (\ref{peso}). Remarkably, the fractional Landau LL prediction quantitatively agrees with the DMRG data in the temperature range of the spin-incoherent regime up to $k_B T\sim t$.
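For completeness, a minimal Python sketch of this procedure (illustrative only, and not the actual script used to produce Fig.~\ref{cvfiete}; units $t=k_B=1$ and filling $n=3/4$) is the following:
\begin{verbatim}
# Illustrative reconstruction of the procedure described above: mu_L(T) from
# the density constraint, the zero-field fractional entropy per site, and
# C = T dS/dT, for U = 80t (fractional Landau LL) and U = infinity (fractional LL).
import numpy as np
from scipy.optimize import brentq
from scipy.special import xlogy        # xlogy(x, y) = x*log(y), with 0*log(0) = 0

n, L = 0.75, 2000
k = -np.pi + 2.0*np.pi*np.arange(L)/L

def entropy(T, t_star):
    eps = -2.0*t_star*np.cos(k)
    mu  = brentq(lambda m: (2.0/L)*np.sum(1.0/(np.exp((eps - m)/T) + 2.0)) - n,
                 -10.0, 10.0)
    nk  = 1.0/(np.exp((eps - mu)/T) + 2.0)       # fractional distribution
    # two-species Haldane-Wu entropy per site in zero field
    return -(1.0/L)*np.sum(2.0*xlogy(nk, nk) + xlogy(1.0 - 2.0*nk, 1.0 - 2.0*nk))

Ts = np.linspace(0.05, 1.0, 60)
for U in (80.0, np.inf):
    t_star = 1.0 if np.isinf(U) else 1.0 - 2.0*n*np.cos(n*np.pi)/U
    S = np.array([entropy(T, t_star) for T in Ts])
    C = Ts*np.gradient(S, Ts)                    # specific heat C = T dS/dT
    print(U, C[::10])
\end{verbatim}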
Even though $t/t^{\ast}=0.987$ deviates only slightly from unity, the fractional Landau LL approach adequately quantifies the first-order correction, of ${\cal O}(t/U)$, to the $U=\infty$ curve in the spin-incoherent regime. The two paths to the spin-incoherent LL regime shown in Fig. \ref{fig:diag} can be discussed with the aid of Fig. \ref{cvfiete}. Path I of Fig. \ref{fig:diag} is associated with the DMRG data of Ref. \onlinecite{81075108} shown in Fig. \ref{cvfiete}, in which case we witness the linear behavior of the specific heat, with spin and charge contributions at very low temperature, and the crossover to the spin-incoherent regime. Further, Path II of Fig. \ref{fig:diag} is associated with the analytical results plotted in Fig. \ref{cvfiete}. Indeed, in this figure we indicate the onset of the spin-incoherent regime, in which case we can notice that the specific heat data of the fractional LL and that of the fractional Landau LL, both due to the charge contribution only, practically meet at the onset of the spin-incoherent regime, since they differ by the small correction term of order $t/U$. \section{High-temperature limit} \label{sec:hight} In previous sections we have studied the Hubbard chain in the spin-incoherent regime: $J\ll k_BT \ll t$, using a perturbative Bethe ansatz procedure, valid for $U/k_B T\gg 1$, combined with a phenomenological approach. In this Section, we find it instructive to study the high-$T$ limit, so we can provide direct contact with well-established results for the $t$-$J$ model derived using quantum transfer matrix techniques \cite{Juttner1997}. The high-$T$ limit is accessed under the conditions: $e^{-\beta \tilde{\varepsilon}_k}\rightarrow 1$, with $\frac{\mu_L}{k_BT}$ a function of $n$. Indeed, from either Eqs. (\ref{eq:nk-fll})-(\ref{eq:mu-f-e}) or Eq. (\ref{eq:vinc-exc}), we find that $\langle \tilde{n}_k\rangle=n/2$ and \begin{equation} \lim_{T\rightarrow\infty}\frac{\mu_L}{k_BT}=\ln\left(\frac{n/2}{1-n}\right), \label{eq:htmul} \end{equation} which exhibits a van Hove singularity as $n\rightarrow 1$, as illustrated in Fig. \ref{fig:semuhight}(a). These results imply that $S_L$ in Eq. (\ref{eq:sl-f}) reads: \begin{equation} \lim_{T\rightarrow\infty}\frac{S_L(T,n)}{k_BL}=n\ln{2}-n\ln{n}-(1-n)\ln{(1-n)}, \label{eq:hts} \end{equation} which is exactly the result expected by counting the total number of states of the $t-J$ model at a density $n$, with $N_\uparrow = N_\downarrow$, in the thermodynamic limit. In Fig. \ref{fig:semuhight}(b) we present $S_L(T,n)$ as a function of the density $n$ for $U=80t$. It is also interesting to notice that $S_L(T,n)/k_B L$ approaches $\ln 2$ at half-filling due to the van Hove singularity. In addition, we stress that the high-$T$ limit is taken under the proviso that $U/k_B T\gg 1$, as is the case in Figs. \ref{fig:semuhight}(a) and (b), in which case $U=80t$. It is worth mentioning that as $T\rightarrow\infty$, $U$ increases accordingly, so that Eq. (\ref{eq:hts}) is the $T\rightarrow\infty$ entropy of the $U=\infty$ Hubbard chain, Eq. (\ref{sfermion2}). \begin{figure} \begin{center} \includegraphics*[width=0.4\textwidth]{fig5} \caption{(color online). (a) Fractional Landau LL chemical potential, $\mu_L$, in units of $k_BT$ as a function of particle density $n$, for $U=80t$. The high-$T$ limit, given by $\frac{\mu_L}{k_BT}=\ln(\frac{n}{2-2n})$, is indicated. (b) Fractional Landau LL entropy per site, $S_L/L$, in units of $k_B$, as a function of $n$, for $U=80t$.
The high-$T$ limit, given by $\frac{S_L}{k_BL}=n\ln{2}-n\ln{n}-(1-n)\ln{(1-n)}$, is also shown.} \label{fig:semuhight} \end{center} \end{figure} Lastly, in order to confirm the high-$T$ limit of the particle occupation number, $\langle n_k\rangle$, of the $t$-$J$ model, Eq. (\ref{eq:tj}), we use Lanczos exact diagonalization and the finite-temperature Lanczos method (FTLM) \cite{ftlm} to calculate $\langle n_k\rangle$ in finite chains under periodic boundary conditions (PBC). In fact, our analysis provides strong evidence in favor of our analytical results and, most importantly, verifies the consistency of the fractional Landau LL phenomenological approach. The FTLM uses the states from $R$ independent Lanczos exact diagonalization procedures to estimate thermodynamic functions of finite systems. For each Lanczos run, a maximum of $M$ Lanczos basis states is generated. The $MR$ approximate eigenenergies and eigenstates are used to calculate the thermodynamic functions of interest. We take $R=12000$ and $M=50$ in our calculations, and have exploited translational symmetry and rotational symmetry in spin space. \begin{figure} \begin{center} \includegraphics*[width=0.45\textwidth]{fig6} \caption{(color online). Momentum distribution function $\langle n_k\rangle$ calculated through FTLM for a $t-J$ chain with 18 sites under PBC, particle density $n=7/9$, $J=0.05t$, and $k_F=n\pi/2$ for the indicated values of temperature $k_B T/t$. Notice that the limit $\langle n_k\rangle=n/2$ as $(k_B T/t)\rightarrow\infty$ is nearly attained for $(k_B T /t)=10$. The inset is a copy of Fig. 3(b) of Ref. \onlinecite{81075108}: DMRG data for $n=0.75$, $J=0.05t$, for a chain with 32 sites. Arrows indicate increasing $t/k_B T$ in steps of 4. The horizontal line at $\langle n_k\rangle=n/2=0.375$ indicates the value of $\langle n_k\rangle$ for $(k_B T/t)\rightarrow\infty$.} \label{nk} \end{center} \end{figure} The distribution function of spin $\uparrow$ electrons of momentum $k$ is calculated through: \begin{equation} \langle n_k\rangle =\frac{1}{L}\sum^{L}_{l,m=1} \langle c^{\dagger}_{l\uparrow} c_{m\uparrow}\rangle e^{ik(l-m)}, \end{equation} where $\langle \ldots \rangle$ indicates thermal and quantum averages. In Fig. \ref{nk} we present $\langle n_k\rangle$ for $J=0.05t$ and $n=7/9$, calculated with the Lanczos method ($T=0$) and FTLM ($T\neq 0$), as well as DMRG data from Ref. \onlinecite{81075108} for $n=0.75$ and $J=0.05t$, shown in the inset. At $T=0$, the singularities \cite{412326,81075108} at $k_F$ and $3k_F$ (shown at $2\pi-3k_F$) are evident in our FTLM results for $(k_BT/t)=0$ and $0.0125$, with $k_F=\pi n/2$. The spin-incoherent regime, $k_B T\gtrsim J$, is signaled \cite{81075108} by the presence of an inflection point at $2k_F$, as observed in Fig. \ref{nk} for $(k_B T/t) =0.05$, 0.10 and 0.20. We thus conclude that both the FTLM and DMRG methods capture the main features of the crossover from the low-$T$ LL to its spin-incoherent regime. \section{Concluding Remarks} \label{sec:remarks} We have studied the Hubbard chain in the spin-incoherent Luttinger liquid regime, both for $J=0$ and finite $J(\ll k_BT)$. In the former case, we have shown that its thermodynamic properties are exactly those of an ideal gas of two species of noninteracting particles obeying fractional statistics. This implies that the charge degrees of freedom are governed by the free spinless Fermi gas, while the spin degrees of freedom are fully disordered (Curie response).
On the other hand, the latter case was investigated using an expression for the grand-canonical free energy derived perturbatively by Ha from Takahashi's integral equations. Based on this result, and using $U\gg k_B T$, we were able to obtain an expression for the Helmholtz free energy suitable to describe the system in the spin-incoherent regime: $J(\equiv 4t^2/U)\ll k_B T\ll E_F$, from which several thermodynamic quantities were derived. In particular, we have reported on the specific heat, charge compressibility, magnetic susceptibility, Drude weight, charge and spin velocities, and the Luttinger liquid (LL) parameter. We have also discussed the interesting possibility of looking at the system with finite $J$ as a fractional Landau LL. In this framework, the low-energy physics of the system is also described in terms of fractional quasiparticles whose entropy has the Haldane-Wu fractional form. At the same time, this framework enables us to interpret corrections of ${\cal O}(t^2/U)$ as coming from (i) renormalization of the hopping $t$ only, as is the case for the specific heat and charge velocity; (ii) hopping renormalization and the interaction of fractional Landau quasiparticles, as found for the charge compressibility and the Drude weight; (iii) interaction of fractional Landau LL quasiparticles only, as for the magnetic susceptibility. In addition, we have calculated the fractional Landau LL parameters and shown that they are fixed by those of the spin-incoherent LL, derived on purely thermodynamic grounds and from bosonization arguments. In particular, a phase diagram was provided and two thermodynamic paths to access the spin-incoherent LL regime shed light on the numerical and analytical procedures. Lastly, through a numerical analysis of the excluson fractional Landau LL entropy and the use of the finite-temperature Lanczos method, we have calculated the temperature behavior of the specific heat and the particle momentum distribution, respectively, both in very good agreement with previous density matrix renormalization group calculations in the spin-incoherent regime and the high-$T$ limit. In conclusion, we believe that our reported results using complementary approaches have provided interesting insights into several features of the thermodynamics of the spin-incoherent Luttinger liquid regime of the Hubbard chain. They might stimulate further theoretical and experimental work, since this special LL regime has been of interest in the context of several physical systems mentioned in our work, particularly quantum wires at low temperature. In addition, the crossover \cite{57977,PhysRevB.61.7909,*PhysRevLett.87.276405,*Kung2017} from the (1D) spin-incoherent LL regime (fractional Landau LL) to a higher-dimensional phenomenology (due to 2D or 3D coupling between chains), e.g., standard Landau Fermi liquid theory, also deserves further investigation. \section*{Acknowledgments} We acknowledge financial support from Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'{\i}vel Superior (CAPES), Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico (CNPq), and Funda\c{c}\~ao de Amparo \`a Ci\^encia e Tecnologia do Estado de Pernambuco (FACEPE), Brazilian agencies, including the PRONEX Program which is funded by CNPq and FACEPE.
\section{Introduction} The purpose of this paper is to study modules over representations of a small category taking values in spaces that behave like quotients of categorified fiber bundles. Let $H$ be a Hopf algebra having a coaction $\rho:A\longrightarrow A\otimes H$ on an algebra $A$ such that $A$ becomes an $H$-comodule algebra. Let $B$ denote the algebra of coinvariants of this coaction. Suppose that the inclusion $B\hookrightarrow A$ is faithfully flat and the canonical morphism \begin{equation*} can: A\otimes_BA\longrightarrow A\otimes H \qquad x\otimes y\mapsto x\cdot \rho(y) \end{equation*} is an isomorphism. This datum is the algebraic counterpart of a principal fiber bundle given by the quotient of an affine algebraic group scheme acting freely on an affine scheme over a field $K$ (see, for instance, \cite{MF}, \cite{Schn}). If $H$ has bijective antipode, then modules over the algebra $B$ of coinvariants may be recovered as ``$(A,H)$-Hopf modules'' (see Schneider \cite{Schn}). \smallskip These $(A,H)$-Hopf modules may be rolled into the more general concept of modules over an `entwining structure' consisting of an algebra $R$, a coalgebra $C$ and a morphism $\psi:C\otimes R\longrightarrow R\otimes C$ satisfying certain conditions. Entwining structures were introduced by Brzezi\'{n}ski and Majid \cite{BrMj}. It was soon realized (see Brzezi\'{n}ski \cite{Brx1}) that entwining structures provide a single formalism that unifies relative Hopf modules, Doi-Hopf modules, Yetter-Drinfeld modules and several other concepts such as coalgebra Galois extensions. As pointed out in Brzezi\'{n}ski \cite{Brx2}, an entwining structure $(R,C,\psi)$ behaves like a single bialgebra, or more generally a comodule algebra over a bialgebra. Accordingly, the investigation of entwining structures as well as the modules over them has emerged as an object of study in its own right (see, for instance, \cite{Abu}, \cite{BBR0}, \cite{BBR}, \cite{Brx1}, \cite{Brx3}, \cite{BuTa2}, \cite{BuTa1}, \cite{CaDe}, \cite{HP}, \cite{Jia}, \cite{Schb}). \smallskip We consider an entwining structure consisting of a small $K$-linear category $\mathcal R$, a coalgebra $C$ and a family of morphisms \begin{equation*} \psi=\{\psi_{rs}:C\otimes \mathcal R(r,s)\longrightarrow \mathcal R(r,s)\otimes C\}_{r,s\in \mathcal R} \end{equation*} satisfying certain conditions (see Definition \ref{entcatx}). This is in keeping with the general philosophy of Mitchell \cite{Mit}, where a small $K$-linear category is viewed as a $K$-algebra with several objects. In fact, we consider the category $\mathscr Ent$ of such entwining structures. When the coalgebra $C$ is fixed, we have the subcategory $\mathscr Ent_C$. Given an entwining structure $(\mathcal R,C,\psi)$, we have a category $\mathbf M^C_{\mathcal R}(\psi)$ of modules over it (see our earlier work in \cite{BBR}). These entwined modules over $(\mathcal R,C,\psi)$ may be seen as modules over a certain categorical quotient space of $\mathcal R$, which need not exist in an explicit sense, but is studied only through its category of modules. \smallskip We work with representations $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ of a small category $\mathscr X$ taking values in $\mathscr Ent_C$, where $C$ is a fixed coalgebra. This is motivated by the work of Estrada and Virili \cite{EV}, who introduced a theory of modules over a representation $\mathscr A:\mathscr X\longrightarrow Add$, where $Add$ is the category of small preadditive categories. 
The modules over $\mathscr A:\mathscr X\longrightarrow Add$ were studied in the spirit of sheaves of modules over a scheme, or, more generally, a ringed space. By considering small preadditive categories, the authors in \cite{EV} also intended to take Mitchell's idea one step forward: from replacing rings with small preadditive categories to replacing ring representations by representations taking values in small preadditive categories. In this paper, we develop a theory of modules over a representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ taking values in entwining structures. We also describe, by means of Frobenius and separable functors, how this theory relates to that of modules over the underlying representation taking values in small $K$-linear categories. \smallskip This paper has two parts. In the first part, we introduce and develop the properties of the category $Mod^C-\mathscr R$ of modules over $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$. For this, we have to combine techniques on comodules with an adaptation of the methods of Estrada and Virili \cite{EV}. When $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is a flat representation (see Section 6), we also consider the subcategory $Cart-\mathscr R$ of cartesian entwined modules over $\mathscr R$. In analogy with sheaves of modules over a scheme, the cartesian objects may be seen as similar to quasi-coherent sheaves. \smallskip Let $\mathscr Lin$ be the category of small $K$-linear categories. In the second part, we consider the underlying representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C \longrightarrow \mathscr Lin$, which we continue to denote by $\mathscr R$. Accordingly, we have a category $Mod-\mathscr R$ of modules over $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C \longrightarrow \mathscr Lin$ in the sense of Estrada and Virili \cite{EV}. We study the relation between $Mod^C-\mathscr R$ and $Mod-\mathscr R$ by describing Frobenius and separability conditions for a pair of adjoint functors between them (see Section 7) \begin{equation*} \mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R\qquad\qquad\qquad \mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R \end{equation*} Here, the left adjoint $\mathscr F$ may be thought of as an `extension of scalars' and the right adjoint $\mathscr G$ as a `restriction of scalars.' \smallskip The idea is as follows: as mentioned before, modules over an entwining structure $(\mathcal R,C,\psi)$ may be seen as modules over a certain categorical quotient space of $\mathcal R$, which behaves like a subcategory of $\mathcal R$. Again, this ``subcategory'' of $\mathcal R$ need not exist in an explicit sense, but is studied only through the category of modules $\mathbf M^C_{\mathcal R}(\psi)$. Accordingly, a representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ taking values in $\mathscr Ent_C$ may be thought of as a subfunctor of the underlying representation $ \mathscr R:\mathscr X\longrightarrow \mathscr Ent_C \longrightarrow \mathscr Lin$. We want to understand the properties of the inclusion of this ``subfunctor'': in particular, whether it behaves like a separable, split or Frobenius extension of rings.
We recall here (see \cite[Theorem 1.2]{uni}) that if $R\longrightarrow S$ is an extension of rings, these properties may be expressed in terms of the functors $F:Mod-R\longrightarrow Mod-S$ (extension of scalars) and $G:Mod-S\longrightarrow Mod-R$ (restriction of scalars) as follows \begin{equation*} \mbox{ \begin{tabular}{ccc} $R\longrightarrow S$ split extension & $\qquad\qquad \Leftrightarrow\qquad\qquad$ & $F:Mod-R\longrightarrow Mod-S$ separable\\ $R\longrightarrow S$ separable extension & $\qquad\qquad \Leftrightarrow\qquad\qquad$ & $G:Mod-S\longrightarrow Mod-R$ separable\\ $R\longrightarrow S$ Frobenius extension & $\qquad\qquad \Leftrightarrow\qquad\qquad$ & $(F,G)$ Frobenius pair of functors\\ \end{tabular}} \end{equation*} \smallskip We now describe the paper in more detail. Throughout,we let $K$ be a field. We begin in Section 2 by describing the categories of entwining structures and entwined modules. For a morphism $(\alpha,\gamma):(\mathcal R,C,\psi)\longrightarrow (\mathcal S,D,\psi')$ of entwining structures, we describe `extension of scalars' and `restriction of scalars' on categories of entwined modules. Our first result is as follows. \begin{Thrm}\label{resulta} (see \ref{P2.2}, \ref{P2.3} and \ref{T2.5}) Let $(\alpha,\gamma):(\mathcal R,C,\psi)\longrightarrow (\mathcal S,D,\psi')$ be a morphism of entwining structures. \smallskip (1) There is a functor $(\alpha,\gamma)^\ast : \mathbf M_{\mathcal R}^C(\psi)\longrightarrow \mathbf M_{\mathcal S}^D(\psi')$ of extension of scalars. \smallskip (2) Suppose that the coalgebra map $\gamma:C\longrightarrow D$ is also a monomorphism of vector spaces. Then, there is a functor $(\alpha,\gamma)_\ast : \mathbf M_{\mathcal S}^D(\psi')\longrightarrow \mathbf M_{\mathcal R}^C(\psi)$ of restriction of scalars. Further, there is an adjunction of functors which is given by natural isomorphisms \begin{equation*} \mathbf M_{\mathcal S}^D(\psi')((\alpha,\gamma)^*\mathcal M,\mathcal N)=\mathbf M_{\mathcal R}^C(\psi)(\mathcal M,(\alpha,\gamma)_*\mathcal N) \end{equation*} for any $\mathcal M\in \mathbf M_{\mathcal R}^C(\psi)$ and $\mathcal N\in \mathbf M_{\mathcal S}^D(\psi')$. \end{Thrm} In Section 3, we give conditions for the category $\mathbf M^C_{\mathcal R}(\psi)$ of modules over an entwining structure $(\mathcal R,C,\psi)$ to have projective generators. We recall that a $K$-coalgebra $C$ is said to be right semiperfect if the category of right $C$-comodules has enough projectives. \begin{Thrm}\label{resultb} (see \ref{T3.5}) Let $(\mathcal R,C,\psi)$ be an entwining structure and let $C$ be a right semiperfect $K$-coalgebra. Then, the category $\mathbf M_{\mathcal R}^C(\psi)$ of entwined modules is a Grothendieck category with a set of projective generators. \end{Thrm} In Section 4, we fix a coalgebra $C$. We introduce the category $Mod^C-\mathscr R$ of modules over a representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$, which is our main object of study. Our first purpose is to show that $Mod^C-\mathscr R$ is a Grothendieck category. \begin{Thrm}\label{resultc} (see \ref{T4.9}) Let $C$ be a right semiperfect coalgebra over a field $K$. Let $\mathscr R:\mathscr X\longrightarrow\mathscr Ent_C$ be an entwined $C$-representation of a small category $\mathscr X$. Then, the category $Mod^C-\mathscr R$ of entwined modules over $\mathscr R$ is a Grothendieck category. \end{Thrm} Given $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$, we have an entwining structure $(\mathscr R_x,C,\psi_x)$ for each $x\in \mathscr X$. 
Our next aim is to give conditions for $Mod^C-\mathscr R$ to have projective generators. For this, we will construct an extension functor $ex_x^C$ and an evaluation functor $ev_x^C$ relating the categories $Mod^C-\mathscr R$ and $\mathbf M_{\mathscr R_x}^C(\psi_x)$ at each $x\in \mathscr X$. \begin{Thrm}\label{resultd} (see \ref{P5.3} and \ref{T5.5}) Let $C$ be a right semiperfect coalgebra over a field $K$. Let $\mathscr X$ be a poset and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation of $\mathscr X$. \smallskip (1) For each $x\in \mathscr X$, there is an extension functor $ex_x^C:\mathbf M_{\mathscr R_x}^C\longrightarrow Mod^C-\mathscr R$ which is left adjoint to an evaluation functor $ev_x^C:Mod^C-\mathscr R\longrightarrow \mathbf M^C_{\mathscr R_x}(\psi_x)$. \smallskip (2) The family $\{\mbox{$ex_x^C(V\otimes H_r)$ $\vert$ $x\in \mathscr X$, $r\in \mathscr R_x$, $V\in Proj^f(C)$}\}$ is a set of projective generators for $Mod^C-\mathscr R$, where $Proj^f(C)$ is the set of isomorphism classes of finite dimensional projective $C$-comodules. \end{Thrm} We introduce the category of cartesian entwined modules in Section 6. Here, we will assume that $\mathscr X$ is a poset and $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is a flat representation, i.e., for any morphism $\alpha:x\longrightarrow y$ in $\mathscr X$, the functor $\alpha^\ast:=\mathscr R_\alpha^\ast:\mathbf M^C_{\mathscr R_x}(\psi_x) \longrightarrow \mathbf M^C_{\mathscr R_y}(\psi_y)$ is exact. We then apply induction on $\mathbb N\times Mor(\mathscr X)$ to show that any cartesian entwined module may be expressed as a sum of submodules whose cardinality is $\leq \kappa :=sup\{ \mbox{$|\mathbb N|$, $|C|$, $|K|$, $|Mor(\mathscr X)|$, $|Mor(\mathscr R_x)|$, $x\in \mathscr X$}\}$. \begin{Thrm}\label{resulte} (see \ref{T6.10}) Let $C$ be a right semiperfect coalgebra over a field $K$. Let $\mathscr X$ be a poset and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation of $\mathscr X$. Suppose that $\mathscr R$ is flat. Then, $Cart^C-\mathscr R$ is a Grothendieck category. \end{Thrm} In the next three sections, we study separability and Frobenius conditions for functors relating $Mod^C-\mathscr R$ to the category $Mod-\mathscr R$ of modules over the underlying representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C\longrightarrow \mathscr Lin$. For this, we have to adapt the techniques from \cite{uni} as well as our earlier work in \cite{BBR}. For more on Frobenius and separability conditions for Doi-Hopf modules and modules over entwining structures of algebras, we refer the reader to \cite{Brx5}, \cite{X13}, \cite{X14}, \cite{X15}. \smallskip At each $x\in \mathscr X$, we have functors $\mathscr F_x:\mathbf M^C_{\mathscr R_x}(\psi_x)\longrightarrow \mathbf M_{\mathscr R_x}$ and $\mathscr G_x:\mathbf M_{\mathscr R_x} \longrightarrow \mathbf M_{\mathscr R_x}^C(\psi_x)$ which combine to give functors $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ and $\mathscr G: Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ respectively. We will also need to consider a space $V_1$ of elements $\theta=\{\theta_x(r):C\otimes C\longrightarrow \mathscr R_x(r,r)\}_{x\in \mathscr X,r\in \mathscr R_x}$ and a space $W_1$ of elements $\eta=\{\eta_x(s,r):\mathscr R_x(s,r)\longrightarrow \mathscr R_x(s,r)\otimes C\}_{x\in \mathscr X,r,s\in \mathscr R_x}$ satisfying certain conditions (see Sections 7 and 8). 
\begin{Thrm}\label{resultf} (see \ref{P7.2}, \ref{P7.25} and \ref{P7.6}) Let $\mathscr X$ be a poset, $C$ be a right semiperfect $K$-coalgebra and $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. \smallskip (1) The forgetful functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ has a right adjoint $\mathscr G: Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$. \smallskip (2) A natural transformation $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ corresponds to a collection of natural transformations $\{\upsilon_x\in Nat(\mathscr G_x\mathscr F_x,1_{\mathbf M^C_{\mathscr R_x}(\psi_x)})\}_{x\in \mathscr X}$ such that for any $\alpha:x\longrightarrow y$ in $\mathscr X$ and object $\mathscr M\in Mod^C-\mathscr R$, we have $\mathscr M_\alpha\circ \upsilon_x(\mathscr M_x)=\alpha_\ast\upsilon_y(\mathscr M_y)\circ \mathscr G_x\mathscr F_x(\mathscr M_\alpha) $. \smallskip (3) The space $Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ is isomorphic to $V_1$. \end{Thrm} The main results in Sections 7 and 8 give necessary and sufficient conditions for the forgetful functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ and its right adjoint $\mathscr G: Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ to be separable. In Section 9, we give necessary and sufficient conditions for $(\mathscr F,\mathscr G)$ to be a Frobenius pair, i.e., $\mathscr G$ is both a left and a right adjoint of $\mathscr F$. \begin{Thrm}\label{resultg} (see \ref{T7.7}, \ref{P7.8} and \ref{P7.9}) Let $\mathscr X$ be a partially ordered set. Let $C$ be a right semiperfect $K$-coalgebra and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. \smallskip (1) The functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ is separable if and only if there exists $\theta\in V_1$ such that $ \theta_x(r)(c_1\otimes c_2)=\varepsilon_C(c)\cdot id_r $ for every $x\in \mathscr X$, $r\in\mathscr R_x$ and $c\in C$. \smallskip (2) Suppose additionally that the representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is flat. Then, we have \smallskip \begin{itemize} \item[(a)] The functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ restricts to a functor $\mathscr F^c:Cart^C-\mathscr R\longrightarrow Cart-\mathscr R$. Moreover, $\mathscr F^c$ has a right adjoint $\mathscr G^c:Cart-\mathscr R\longrightarrow Cart^C-\mathscr R$. \smallskip \item[(b)] Suppose there exists $\theta\in V_1$ such that $ \theta_x(r)(c_1\otimes c_2)=\varepsilon_C(c)\cdot id_r $ for every $x\in \mathscr X$, $r\in\mathscr R_x$ and $c\in C$. Then, $\mathscr F^c:Cart^C-\mathscr R\longrightarrow Cart-\mathscr R$ is separable. \end{itemize} \end{Thrm} \begin{Thrm}\label{resulth} (see \ref{Pro8.2} and \ref{T8.3}) Let $\mathscr X$ be a partially ordered set, $C$ be a right semiperfect $K$-coalgebra and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. \smallskip (1) The spaces $Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$ and $ W_1$ are isomorphic. \smallskip (2) The functor $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ is separable if and only if there exists $\eta\in W_1$ such that $ id=(id\otimes \varepsilon_C)\circ \eta_x(s,r) $ for each $x\in \mathscr X$ and $s$, $r\in \mathscr R_x$. 
\end{Thrm} \begin{Thrm}\label{resulti} (see \ref{T9.1}, \ref{P9.4}, \ref{C9.5}) Let $\mathscr X$ be a partially ordered set, $C$ be a right semiperfect $K$-coalgebra and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. \smallskip (1) $(\mathscr F,\mathscr G)$ is a Frobenius pair if and only if there exist $\theta\in V_1$ and $\eta\in W_1$ such that $ \varepsilon_C(d)f=\sum \widehat{f}\circ \theta_x(r)(c_f\otimes d)$ and $\varepsilon_C(d)f=\sum \widehat{f_{\psi_x}} \circ \theta_x(r)(d^{\psi_x}\otimes c_f) $ for every $x\in \mathscr X$, $r\in \mathscr R_x$, $f\in \mathscr R_x(r,s)$ and $d\in C$, where $\eta_x(r,s)(f)=\widehat{f}\otimes c_f$. \smallskip (2) Suppose additionally that the representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is flat. Then, $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ restricts to a functor $\mathscr G^c:Cart-\mathscr R\longrightarrow Cart^C-\mathscr R$. Further, $(\mathscr F^c,\mathscr G^c)$ is a Frobenius pair of adjoint functors between $Cart^C-\mathscr R$ and $Cart-\mathscr R$. \end{Thrm} We conclude in Section 10 by giving examples of how to construct entwined representations and describe modules over them. In particular, we show how to construct entwined representations using $B$-comodule categories, where $B$ is a bialgebra. \section{Category of entwining structures} Let $K$ be a field and let $Vect_K$ be the category of vector spaces over $K$. Let $\mathcal R$ be a small $K$-linear category. The category of right $\mathcal R$-modules will be denoted by $\mathbf M_{\mathcal R}$. For any object $r\in \mathcal R$, we denote by $H_r:\mathcal R^{op}\longrightarrow Vect_K$ the right $\mathcal R$-module represented by $r$ and by $_rH:\mathcal R\longrightarrow Vect_K$ the left $\mathcal R$-module represented by $r$. Given a $K$-coalgebra $C$, the category of right $C$-comodules will be denoted by $Comod-C$. \begin{defn}\label{entcatx} (see \cite[$\S$ 2]{BBR}) Let $\mathcal R$ be a small $K$-linear category and $C$ be a $K$-coalgebra. An entwining structure $(\mathcal R,C,\psi)$ over $K$ is a collection of $K$-linear morphisms \begin{equation*} \psi=\{\psi_{rs}:C\otimes \mathcal R(r,s)\longrightarrow \mathcal R(r,s)\otimes C\}_{r,s\in \mathcal R} \end{equation*} satisfying the following conditions \begin{equation} \begin{array}{c} (gf)_\psi \otimes c^\psi = g_\psi f_\psi \otimes {c^\psi}^\psi \qquad \varepsilon_C(c^\psi)(f_\psi) = \varepsilon_C(c)f \\ f_\psi \otimes \Delta_C(c^\psi) = {f_\psi}_\psi \otimes {c_1}^\psi \otimes {c_2}^\psi \qquad \psi(c \otimes id_r)= id_r \otimes c \\ \end{array} \end{equation} for each $f \in \mathcal{R}(r,s)$, $g \in \mathcal{R}(s,t)$ and $c \in C$. Here, we have suppressed the summation and written $\psi(c\otimes f)$ simply as $f_\psi\otimes c^\psi$. \smallskip A morphism $(\alpha,\gamma):(\mathcal R,C,\psi)\longrightarrow (\mathcal S,D,\psi')$ of entwining structures consists of a functor $\alpha:\mathcal{R}\longrightarrow \mathcal{S}$ and a counital coalgebra map $\gamma: C\longrightarrow D$ such that $\alpha({f}_{\psi})\otimes \gamma({c}^{\psi}) = \alpha(f)_{\psi'} \otimes \gamma(c)^{\psi'}$ for any $c\otimes f \in C\otimes \mathcal{R}(r,s)$, where $r,s \in \mathcal R$. \smallskip We will denote by $\mathscr Ent$ the category of entwining structures over $K$. \end{defn} If $\mathcal M$ is a right $\mathcal R$-module, $m\in \mathcal M(r)$ and $f\in \mathcal R(s,r)$, the element $\mathcal M(f)(m)\in \mathcal M(s)$ will often be denoted by $mf$.
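\smallskip As a simple illustration of Definition \ref{entcatx} (not needed in the sequel), let $\mathcal R$ be any small $K$-linear category and $C$ any $K$-coalgebra, and consider the flip maps \begin{equation*} \psi_{rs}:C\otimes \mathcal R(r,s)\longrightarrow \mathcal R(r,s)\otimes C\qquad c\otimes f\mapsto f\otimes c \end{equation*} One checks directly from the four conditions above that this gives a `trivial' entwining structure $(\mathcal R,C,\psi)$; for this $\psi$, an entwined module in the sense of Definition \ref{D2.2} below is simply a right $\mathcal R$-module $\mathcal M$ together with right $C$-comodule structures on each $\mathcal M(r)$ such that every $\mathcal M(f)$ is a morphism of $C$-comodules. In the same spirit, when $\mathcal R$ has a single object, the data of Definition \ref{entcatx} reduces to an entwining between the endomorphism algebra of that object and the coalgebra $C$, as for the entwining structures of algebras mentioned in the introduction.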
\smallskip If $\alpha:\mathcal{R}\longrightarrow \mathcal{S}$ is a functor of small $K$-linear categories, there is an obvious functor $\alpha_*: \mathbf M_{\mathcal S} \longrightarrow \mathbf M_{\mathcal R}$ of restriction of scalars. For the sake of convenience, we briefly recall here the well known extension of scalars $\alpha^*:\mathbf M_{\mathcal R}\longrightarrow \mathbf M_{\mathcal S}$. For $\mathcal M\in \mathbf M_{\mathcal R}$, the module $\alpha^*(\mathcal M)\in \mathbf M_{\mathcal S}$ is determined by setting \begin{equation}\label{ke2.2} \alpha^*(\mathcal M)(s):=\left(\underset{r\in \mathcal R}{\bigoplus}\mathcal M(r)\otimes \mathcal S(s,\alpha(r))\right)/V \end{equation} for $s\in \mathcal S$, where $V$ is the subspace generated by elements of the form \begin{equation} (m'\otimes \alpha(g)f)- (m'g\otimes f) \end{equation} for $m'\in \mathcal M(r')$, $g\in \mathcal R(r,r')$, $f\in \mathcal S(s,\alpha(r))$ and $r$, $r'\in \mathcal R$. \smallskip On the other hand, if $\gamma : C\longrightarrow D$ is a morphism of coalgebras and $N$ is a right $C$-comodule, there is an obvious corestriction of scalars $\gamma^*:Comod-C\longrightarrow Comod-D$. The functor $\gamma^*$ has a well known right adjoint $\gamma_*:Comod-D\longrightarrow Comod-C$, known as the coinduction functor, given by the cotensor product $N\mapsto N\Box_D C$ (see, for instance, \cite[$\S$ 11.10]{Wibook}). In general, we recall that the cotensor product $N\Box_DN'$ of a right $D$-comodule $(N,\rho:N\longrightarrow N\otimes D)$ with a left $D$-comodule $(N',\rho':N'\longrightarrow D\otimes N')$ is given by the equalizer \begin{equation} N\Box_DN':=Eq\left(N\otimes N'\doublerightarrow{\rho\otimes id}{id\otimes \rho'} N\otimes D\otimes N'\right) \end{equation} In other words, an element $\sum n_i\otimes n'_i\in N\otimes N'$ lies in $N\Box_DN'$ if and only if $\sum n_{i0}\otimes n_{i1}\otimes n'_i=\sum n_i\otimes n'_{i0}\otimes n'_{i1}$. However, we will continue to suppress the summation and write an element of $N\Box_DN'$ simply as $n\otimes n'$. We will now consider modules over an entwining structure $(\mathcal R,C,\psi)$. \begin{defn}\label{D2.2} (see \cite[Definition 2.2]{BBR}) Let $\mathcal{M}$ be a right $\mathcal{R}$-module with a given right $C$-comodule structure $\rho_{\mathcal M(s)}:\mathcal M(s)\longrightarrow \mathcal M(s)\otimes C$ on $\mathcal{M}(s)$ for each $s\in \mathcal{R}$. Then, $\mathcal{M}$ is said to be an entwined module over $(\mathcal{R},C,\psi)$ if the following compatibility condition holds: \begin{equation}\label{comp 2} \rho_{\mathcal{M}(s)}(mf)= \big(mf\big)_0 \otimes \big(mf\big)_{1}=m_0f_\psi\otimes {m_1}^\psi \end{equation} for every $f \in \mathcal{R}(s,r)$ and $m \in \mathcal{M}(r).$ \smallskip A morphism $\eta:\mathcal M\longrightarrow \mathcal N$ of entwined modules is a morphism $\eta:\mathcal M\longrightarrow \mathcal N$ in $\mathbf M_{\mathcal R}$ such that $\eta(r):\mathcal M(r)\longrightarrow\mathcal N(r)$ is $C$-colinear for each $r\in\mathcal R$. The category of entwined modules over $(\mathcal R,C,\psi)$ will be denoted by $\mathbf M_{\mathcal R}^C(\psi)$. \end{defn} \begin{thm} \label{P2.2} Let $(\alpha,\gamma):(\mathcal R,C,\psi)\longrightarrow (\mathcal S,D,\psi')$ be a morphism of entwining structures. Then, there is a functor $(\alpha,\gamma)^\ast : \mathbf M_{\mathcal R}^C(\psi)\longrightarrow \mathbf M_{\mathcal S}^D(\psi')$. \end{thm} \begin{proof} We take $\mathcal M\in \mathbf M_{\mathcal R}^C(\psi)$. 
Then, $\mathcal M\in \mathbf M_{\mathcal R}$ and we consider $\mathcal N:=\alpha^*(\mathcal M)\in \mathbf M_{\mathcal S}$. For $s\in S$, we consider an element $m\otimes f\in \mathcal N(s)$, where $m\in \mathcal M(r)$ and $f\in \mathcal S(s,\alpha(r))$ for some $r\in \mathcal R$. We claim that the morphism \begin{equation}\label{eq2.6} \rho_{\mathcal N(s)}:\mathcal N(s)\longrightarrow \mathcal N(s)\otimes D \qquad (m\otimes f)\mapsto (m\otimes f)_0\otimes (m\otimes f)_1:=(m_0\otimes f_{\psi'})\otimes \gamma(m_1)^{\psi'} \end{equation} makes $\mathcal N(s)$ a right $D$-comodule. Here, the association $m\mapsto m_0\otimes m_1$ comes from the $C$-comodule structure $\rho_{\mathcal M(r)}:\mathcal M(r)\longrightarrow \mathcal M(r)\otimes C$ of $\mathcal M(r)$. \smallskip First, we show that $\rho_{\mathcal N(s)}$ is well defined. For this, we consider $m'\in\mathcal M(r')$, $g\in \mathcal R(r,r')$ and $f\in \mathcal S(s, \alpha(r))$. We have \begin{equation} \begin{array}{ll} (m'g\otimes f)_0\otimes (m'g\otimes f)_1 & = ((m'g)_0\otimes f_{\psi'})\otimes \gamma((m'g)_1)^{\psi'}\\ & = (m'_0g_\psi\otimes f_{\psi'})\otimes \gamma(m'^\psi_1)^{\psi'}\\ &= (m'_0\otimes \alpha(g_\psi)f_{\psi'})\otimes \gamma(m'^\psi_1)^{\psi'}\\ &= (m'_0\otimes \alpha(g)_{\psi'}f_{\psi'})\otimes \gamma(m'_1)^{\psi'\psi'}\\ &= (m'_0\otimes (\alpha(g)f)_{\psi'})\otimes \gamma(m'_1)^{\psi'}\\ \end{array} \end{equation} From the properties of entwining structures, it may be easily verified that the structure maps in \eqref{eq2.6} are coassociative and counital, giving a right $D$-comodule structure on $\mathcal N(s)$. We now consider $f'\in \mathcal S(s',s)$. Then, we have \begin{equation} \begin{array}{ll} (m\otimes ff')_0\otimes (m\otimes ff')_1 & =(m_0\otimes (ff')_{\psi'})\otimes \gamma(m_1)^{\psi'}\\ &=(m_0\otimes f_{\psi'})f'_{\psi'}\otimes \gamma(m_1)^{\psi'\psi'}\\ & = (m\otimes f)_0f'_{\psi'}\otimes (m\otimes f)_1^{\psi'}\\ \end{array} \end{equation} This shows that $\mathcal N\in \mathbf M_{\mathcal S}^D(\psi')$. \end{proof} \begin{thm} \label{P2.3} Let $(\alpha,\gamma):(\mathcal R,C,\psi)\longrightarrow (\mathcal S,D,\psi')$ be a morphism of entwining structures. Suppose additionally that $\gamma:C\longrightarrow D$ is a monomorphism of vector spaces. Then, there is a functor $(\alpha,\gamma)_\ast : \mathbf M_{\mathcal S}^D(\psi')\longrightarrow \mathbf M_{\mathcal R}^C(\psi)$. \end{thm} \begin{proof} We take $\mathcal N\in \mathbf M_{\mathcal S}^D(\psi')$ and set $\mathcal M(r):=\mathcal N(\alpha(r))\Box_DC$ for each $r\in \mathcal R$. For $f\in \mathcal R(r',r)$, we define \begin{equation}\label{eq2.9} \mathcal M(f):\mathcal M(r)\longrightarrow \mathcal M(r')\qquad n\otimes c\mapsto (n\otimes c)\cdot f :=n\alpha(f_\psi)\otimes c^\psi \end{equation} To show that this morphism is well defined, we need to check that $\mathcal M(f)(n\otimes c)\in \mathcal M(r')=\mathcal N(\alpha(r'))\Box_DC$. 
Since $n\otimes c\in \mathcal N(\alpha(r))\Box_DC$, we know that \begin{equation}\label{eq2.10} n_0\otimes n_1\otimes c= n\otimes \gamma(c_1)\otimes c_2 \end{equation} In particular, it follows that \begin{equation}\label{eq2.11} n\otimes c\otimes f\in Eq\left( \begin{CD} \begin{tikzcd} \mathcal N(\alpha(r))\otimes C\otimes \mathcal R(r',r) \ar[d,xshift = 5pt]\ar[d,xshift=-5pt]\\ \mathcal N(\alpha(r))\otimes D\otimes C\otimes \mathcal R(r',r) \\ \end{tikzcd}\\ @Vid\otimes id\otimes \psi VV\\ \mathcal N(\alpha(r))\otimes D\otimes \mathcal R(r',r)\otimes C \\ @Vid\otimes id \otimes \alpha \otimes id VV\\ \mathcal N(\alpha(r))\otimes D\otimes \mathcal S(\alpha(r'),\alpha(r))\otimes C \\ @Vid\otimes \psi'\otimes idVV\\ \mathcal N(\alpha(r))\otimes\mathcal S(\alpha(r'),\alpha(r))\otimes D\otimes C\\ @VVV\\ \mathcal N(\alpha(r'))\otimes D\otimes C\\ \end{CD}\right) \end{equation} From \eqref{eq2.11}, it follows that \begin{equation}\label{eq2.12} n_0\alpha(f_\psi)_{\psi'}\otimes n_1^{\psi'}\otimes c^\psi=n\alpha(f_\psi)_{\psi'}\otimes \gamma(c_1)^{\psi'}\otimes {c_2}^{\psi} \end{equation} Applying \eqref{eq2.10} and \eqref{eq2.12}, we now see that \begin{equation}\label{eq2.13} \begin{array}{ll} (n\alpha(f_\psi))_0\otimes (n\alpha(f_\psi))_1 \otimes c^\psi & =n_0\alpha(f_\psi)_{\psi'}\otimes n_1^{\psi'}\otimes c^\psi \\ &=n\alpha(f_\psi)_{\psi'}\otimes \gamma(c_1)^{\psi'}\otimes {c_2}^{\psi}\\ &=n\alpha(f_{\psi\psi})\otimes \gamma({c_1}^{\psi})\otimes {c_2}^{\psi}\\ & =n\alpha(f_\psi)\otimes \gamma({c^{\psi}}_1)\otimes {c^{\psi}}_2\\ \end{array} \end{equation} From the definition, we may easily verify that the structure maps in \eqref{eq2.9} make $\mathcal M$ into a right $\mathcal R$-module. To show that $\mathcal M$ is entwined, it remains to check that \begin{equation}\label{eq2.14} n\alpha(f_\psi)\otimes {c^\psi}_1\otimes {c^\psi}_2=((n\otimes c)\cdot f)_0\otimes ((n\otimes c)\cdot f)_1= (n\otimes c)_0\cdot f_\psi\otimes (n\otimes c)_1^\psi=n\alpha(f_{\psi\psi})\otimes c_1^\psi\otimes c_2^\psi \end{equation} in $\mathcal N(\alpha(r'))\otimes C\otimes C$. Since $\gamma: C\longrightarrow D$ is a monomorphism and all tensor products are taken over the field $K$, it suffices to show that \begin{equation}\label{eq2.15} n\alpha(f_\psi)\otimes \gamma({c^\psi}_1)\otimes {c^\psi}_2=n\alpha(f_{\psi\psi})\otimes \gamma( {c_1}^\psi)\otimes {c_2}^\psi\in \mathcal N(\alpha(r'))\otimes D\otimes C \end{equation} Using \eqref{eq2.13} and the fact that $(\alpha,\gamma)$ is a morphism of entwining structures, the right hand side of \eqref{eq2.15} becomes \begin{equation}\label{eq2.16} n\alpha(f_{\psi\psi})\otimes \gamma( {c_1}^\psi)\otimes {c_2}^\psi=n\alpha(f_\psi)_{\psi'}\otimes \gamma(c_1)^{\psi'}\otimes {c_2}^{\psi}=n_0\alpha(f_\psi)_{\psi'}\otimes n_1^{\psi'}\otimes c^\psi \end{equation} From \eqref{eq2.13}, we already know that $n\alpha(f_\psi)\otimes c^\psi\in \mathcal N(\alpha(r'))\Box_DC$. As such, we have \begin{equation}\label{eq2.17} n\alpha(f_\psi)\otimes \gamma({c^\psi}_1)\otimes {c^\psi}_2=(n\alpha(f_\psi))_0\otimes (n\alpha(f_\psi))_1\otimes c^\psi=n_0\alpha(f_\psi)_{\psi'}\otimes n_1^{\psi'}\otimes c^\psi \end{equation} where the second equality follows from \eqref{eq2.13}. From \eqref{eq2.16} and \eqref{eq2.17}, the result of \eqref{eq2.15} is now clear. \end{proof} \begin{Thm}\label{T2.5} Let $(\alpha,\gamma):(\mathcal R,C,\psi)\longrightarrow (\mathcal S,D,\psi')$ be a morphism of entwining structures such that $\gamma:C\longrightarrow D$ is a monomorphism of vector spaces. 
Then, there is an adjunction of functors \begin{equation} \mathbf M_{\mathcal S}^D(\psi')((\alpha,\gamma)^*\mathcal M,\mathcal N)=\mathbf M_{\mathcal R}^C(\psi)(\mathcal M,(\alpha,\gamma)_*\mathcal N) \end{equation} for $\mathcal M\in \mathbf M_{\mathcal R}^C(\psi)$ and $\mathcal N\in \mathbf M_{\mathcal S}^D(\psi')$. \end{Thm} \begin{proof} We consider a morphism $\eta :(\alpha,\gamma)^*\mathcal M\longrightarrow\mathcal N$ in $\mathbf M_{\mathcal S}^D(\psi')$. Then, $\eta$ corresponds to a morphism $\eta:\alpha^*\mathcal M\longrightarrow \mathcal N$ in $\mathbf M_{\mathcal S}$ such that $\eta(s):\alpha^*\mathcal M(s)\longrightarrow \mathcal N(s)$ is $D$-colinear for each $s\in \mathcal S$. Accordingly, we have $\eta':\mathcal M\longrightarrow \alpha_\ast\mathcal N$ in $\mathbf M_{\mathcal R}$ such that $\eta'(r):\mathcal M(r)\longrightarrow \mathcal N(\alpha(r))$ is $D$-colinear for each $r\in \mathcal R$. Here, $\mathcal M(r)$ is treated as a $D$-comodule via corestriction of scalars. Therefore, we have morphisms $\eta''(r):\mathcal M(r)\longrightarrow \mathcal N(\alpha(r))\Box_DC$ of $C$-comodules for each $r\in \mathcal R$. Together, these determine a morphism $\mathcal M\longrightarrow (\alpha,\gamma)_*\mathcal N$ in $\mathbf M_{\mathcal R}$. These arguments can be easily reversed and hence the result. \end{proof} \section{Projective generators and entwined modules} Let $(\mathcal R,C,\psi)$ be an entwining structure. In \cite[Proposition 2.9]{BBR}, it was shown that the category $\mathbf M_{\mathcal R}^C(\psi)$ of entwined modules is a Grothendieck category. In this section, we will refine this result to give conditions for $\mathbf M_{\mathcal R}^C(\psi)$ to have a collection of projective generators. \begin{lem}\label{L3.1} Let $\mathcal G$ be a Grothendieck category. Fix a set of generators $\{ G_k\}_{k\in K}$ for $\mathcal G$. Let $Z\in \mathcal G$ be an object. Let $i_X:X\hookrightarrow Z$, $i_Y:Y\hookrightarrow Z$ be two subobjects of $Z$ such that for any $k\in K$ and any morphism $f_k:G_k\longrightarrow X$, there exists $g_k:G_k\longrightarrow Y$ such that $i_Y\circ g_k=i_X\circ f_k$. Then, $i_X:X\hookrightarrow Z$ factors through $i_Y:Y\hookrightarrow Z$, i.e., $X$ is a subobject of $Y$. \end{lem} \begin{proof} Since $\{ G_k\}_{k\in K}$ is a set of generators for $\mathcal G$, we can choose (see \cite[Proposition 1.9.1]{Tohoku}) an epimorphism $f:\underset{j\in J}{\bigoplus}\textrm{ }G_j\longrightarrow X$, corresponding to a collection of maps $f_j:G_j\longrightarrow X$, with each $G_j$ a generator from the collection $\{ G_k\}_{k\in K}$. Accordingly, we can choose morphisms $g_j:G_j\longrightarrow Y$ such that $i_Y\circ g_j=i_X\circ f_j$ for each $j\in J$. Together, these $\{g_j\}_{j\in J}$ determine a morphism $g:\underset{j\in J}{\bigoplus}\textrm{ }G_j\longrightarrow Y$ satisfying $i_Y\circ g=i_X\circ f$. Since $i_X$, $i_Y$ are monomorphisms and $f$ is an epimorphism, we have \begin{equation} X=Im(i_X)=Im(i_X\circ f)=Im(i_Y\circ g)=Im(i_Y|Im(g))\subseteq Im(i_Y)=Y \end{equation} \end{proof} \begin{lem}\label{L3.2} Let $\mathcal G$ be a Grothendieck category having a set of projective generators $\{ G_k\}_{k\in K}$. Let $f:X\longrightarrow Y$ be a morphism in $\mathcal G$. Let $i:X'\hookrightarrow X$ and $j:Y'\hookrightarrow Y$ be monomorphisms. Suppose that for any $k\in K$ and any morphism $f_k:G_k\longrightarrow X'$, there exists a morphism $g_k:G_k\longrightarrow Y'$ such that $f\circ i\circ f_k=j\circ g_k:G_k \longrightarrow Y$.
Then, there exists $f':X'\longrightarrow Y'$ such that $j\circ f'=f\circ i$. \end{lem} \begin{proof} It suffices to show that $Im(f\circ i)\subseteq Y'$. We choose any $k\in K$ and a morphism $h_k:G_k\longrightarrow Im(f\circ i)\hookrightarrow Y$. Since $G_k$ is projective, we can choose $f_k:G_k\longrightarrow X'$ such that $f\circ i\circ f_k=h_k$. By assumption, we can now find $g_k:G_k\longrightarrow Y'$ such that $f\circ i\circ f_k=j\circ g_k:G_k \longrightarrow Y$. In particular, $j\circ g_k=h_k$. Applying Lemma \ref{L3.1}, we obtain $Im(f\circ i)\subseteq Y'$. \end{proof} \begin{lem}\label{L3.3} Let $(\mathcal R,C,\psi)$ be an entwining structure. Let $V$ be a right $C$-comodule. Then, for any $r\in \mathcal R$, the module $V\otimes H_r$ given by \begin{equation} \begin{array}{c} (V\otimes H_r)(r')=V\otimes \mathcal R(r',r)\\ (V\otimes H_r)(f):(V\otimes H_r)(r')\longrightarrow (V\otimes H_r)(r'')\qquad v\otimes g\mapsto v\otimes gf \end{array} \end{equation} for $r'\in \mathcal R$, $f\in \mathcal R(r'',r')$ is an entwined module in $\mathbf M^C_{\mathcal R}(\psi)$. Here, the right $C$-comodule structure on $(V\otimes H_r)(r')$ is given by taking $v\otimes g$ to $v_0\otimes g_\psi\otimes v^\psi_1$. \end{lem} \begin{proof} See \cite[Lemma 2.5]{BBR}. \end{proof} For the rest of this section, we will assume that the coalgebra $C$ is such that the category $Comod-C$ of right $C$-comodules has enough projective objects. In other words, the coalgebra $C$ is right semiperfect (see \cite[Definition 3.2.4]{book3}). \begin{thm}\label{P3.4} Let $(\mathcal R,C,\psi)$ be an entwining structure with $C$ a right semiperfect coalgebra. Let $V$ be a projective right $C$-comodule. Then, for any $r\in \mathcal R$, the module $V\otimes H_r$ is a projective object of $\mathbf M_{\mathcal R}^C(\psi)$. \end{thm} \begin{proof} We begin with a morphism $\zeta: V\otimes H_r\longrightarrow \mathcal M$ and an epimorphism $\eta:\mathcal N\longrightarrow \mathcal M$ in $\mathbf M_{\mathcal R}^C(\psi)$. In particular, we consider the composition \begin{equation}\label{eq3.3e} V\longrightarrow V\otimes H_r(r)\longrightarrow \mathcal M(r) \qquad v\mapsto v\otimes id_r\mapsto \zeta(r)(v\otimes id_r) \end{equation} which is a morphism in $Comod-C$. Since $V$ is projective, we can lift the map in \eqref{eq3.3e} to a map $T:V\longrightarrow \mathcal N(r)$ in $Comod-C$ such that $\eta(r)(T(v))=\zeta(r)(v\otimes id_r)$ for each $v\in V$. \smallskip We now define $\xi:V\otimes H_r\longrightarrow \mathcal N$ by setting for each $s\in \mathcal R$ \begin{equation}\label{eq3.4e} \xi(s):V\otimes H_r(s)\longrightarrow \mathcal N(s)\qquad v\otimes g\mapsto \mathcal N(g)(T(v)) \end{equation} We first check that $\xi:V\otimes H_r\longrightarrow \mathcal N$ is a morphism in $\mathbf M_{\mathcal R}$. Given $g'\in \mathcal R(s',s)$, we have \begin{equation}\label{pf3.5} \mathcal N(g')(\xi(s)(v\otimes g))=\mathcal N(gg')(T(v))=\xi(s')(v\otimes gg')=\xi(s')((V\otimes H_r)(g')(v\otimes g)) \end{equation} We also have, for $v\otimes g\in V\otimes H_r(s)$, \begin{equation} \begin{array}{ll} \xi(s)(v\otimes g)_0\otimes \xi(s)(v\otimes g)_1 & =\mathcal N(g)(T(v))_0\otimes \mathcal N(g)(T(v))_1\\ & = T(v)_0g_\psi\otimes T(v)_1^\psi \\ &= T(v_0)g_\psi \otimes v_1^\psi\\ & =\mathcal N(g_\psi)(T(v_0))\otimes v_1^\psi=(\xi(s)\otimes id_C)(v_0\otimes g_\psi\otimes v_1^\psi)\\ \end{array} \end{equation} This shows that $\xi(s):V\otimes H_r(s)\longrightarrow \mathcal N(s)$ is a morphism in $Comod-C$.
Together with \eqref{pf3.5}, it follows that $\xi:V\otimes H_r\longrightarrow \mathcal N$ is a morphism in $\mathbf M_{\mathcal R}^C(\psi)$. Finally, we see that for $v\otimes g\in V\otimes H_r(s)$, we have \begin{equation} \begin{array}{ll} (\eta(s)\circ \xi(s))(v\otimes g)& =\eta(s)(\mathcal N(g)(T(v)))\\ &= \mathcal M(g)(\eta(r)(T(v)))\\ & = \mathcal M(g)(\zeta(r)(v\otimes id_r))\\ & =\zeta(s)((V\otimes H_r)(g)(v\otimes id_r))\\ &= \zeta(s)(v\otimes g)\\ \end{array} \end{equation} This gives us $\eta\circ \xi=\zeta:V\otimes H_r\longrightarrow \mathcal M$. Hence the result. \end{proof} \begin{Thm}\label{T3.5} Let $(\mathcal R,C,\psi)$ be an entwining structure and let $C$ be a right semiperfect $K$-coalgebra. Then, the category $\mathbf M_{\mathcal R}^C(\psi)$ of entwined modules is a Grothendieck category with a set of projective generators. \end{Thm} \begin{proof} From \cite[Proposition 2.9]{BBR}, we know that $\mathbf M_{\mathcal R}^C(\psi)$ is a Grothendieck category. Let $\mathcal M$ be an object of $\mathbf M_{\mathcal R}^C(\psi)$. From the proof of \cite[Proposition 2.9]{BBR}, we know that there exists an epimorphism \begin{equation} \eta' : \underset{i\in I}{\bigoplus}V'_i\otimes H_{r_i}\longrightarrow \mathcal M \end{equation} where each $r_i\in \mathcal R$ and each $V'_i$ is a finite dimensional $C$-comodule. Since $Comod-C$ has enough projectives, it follows from \cite[Corollary 2.4.21]{book3} that we can choose for each $V'_i$ an epimorphism $V_i\longrightarrow V'_i$ in $Comod-C$ such that $V_i$ is a finite dimensional projective in $Comod-C$. This induces an epimorphism \begin{equation} \eta : \underset{i\in I}{\bigoplus}V_i\otimes H_{r_i}\longrightarrow \mathcal M \end{equation} The collection $\{V\otimes H_r\}$ now gives a set of projective generators for $\mathbf M_{\mathcal R}^C(\psi)$, where $r\in \mathcal R$ and $V$ ranges over (isomorphism classes of) finite dimensional projective $C$-comodules. \end{proof} \section{Modules over an entwined representation} We fix a $K$-coalgebra $C$ which is right semiperfect. We consider the category $\mathscr Ent_C$ whose objects are entwining structures $(\mathcal R,C,\psi)$. A morphism in $\mathscr Ent_C$ is a map $(\alpha,id):(\mathcal R,C,\psi)\longrightarrow (\mathcal R',C,\psi')$ of entwining structures, which we will denote simply by $\alpha$. From Section 2, it follows that we have adjoint functors \begin{equation}\label{eq4.1} \begin{array}{c} \alpha^\ast=(\alpha,id_C)^\ast: \mathbf M^C_{\mathcal R}(\psi)\longrightarrow \mathbf M^C_{\mathcal R'}(\psi')\qquad \alpha_\ast=(\alpha,id_C)_\ast :\mathbf M^C_{\mathcal R'}(\psi')\longrightarrow \mathbf M^C_{\mathcal R}(\psi)\\ \end{array} \end{equation} We note in particular that the functors $\alpha_\ast=(\alpha,id_C)_\ast$ are exact. In fact, the functors $\alpha_\ast$ preserve both limits and colimits. \smallskip \begin{defn}\label{D4.1} Let $\mathscr X$ be a small category. Let $C$ be a right semiperfect coalgebra over the field $K$. By an entwined $C$-representation of a small category, we will mean a functor $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$. \smallskip In particular, for each object $x\in \mathscr X$, we have an entwining structure $(\mathscr R_x,C,\psi_x)$. Given a morphism $\alpha : x\longrightarrow y$ in $\mathscr X$, we have a morphism $\mathscr R_\alpha=(\mathscr R_\alpha,id_C):(\mathscr R_x,C,\psi_x) \longrightarrow (\mathscr R_y,C,\psi_y)$ of entwining structures. 
\end{defn} By abuse of notation, if $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is an entwined $C$-representation, we will write \begin{equation} \alpha^\ast=\mathscr R_\alpha^\ast: \mathbf M_{\mathscr R_x}^C(\psi_x)\longrightarrow \mathbf M_{\mathscr R_y}^C(\psi_y) \qquad \alpha_\ast=\mathscr R_{\alpha\ast}: \mathbf M_{\mathscr R_y}^C(\psi_y)\longrightarrow \mathbf M_{\mathscr R_x}^C(\psi_x) \end{equation} for any morphism $\alpha:x\longrightarrow y$ in $\mathscr X$. Also by abuse of notation, if $f:r'\longrightarrow r$ is a morphism in $\mathscr R_x$, we will often denote $\mathscr R_\alpha(f):\mathscr R_\alpha(r')\longrightarrow \mathscr R_\alpha(r)$ in $\mathscr R_y$ simply as $\alpha(f):\alpha(r')\longrightarrow \alpha(r)$. We will now consider modules over an entwined $C$-representation. \begin{defn}\label{D4.2} Let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation of a small category $\mathscr X$. An entwined module $\mathscr M$ over $\mathscr R$ will consist of the following data \smallskip (1) For each object $x\in \mathscr X$, an entwined module $\mathscr M_x\in \mathbf M_{\mathscr R_x}^C(\psi_x)$. \smallskip (2) For each morphism $\alpha : x\longrightarrow y$ in $\mathscr X$, a morphism $\mathscr M_\alpha: \mathscr M_x\longrightarrow \alpha_\ast\mathscr M_y$ in $\mathbf M_{\mathscr R_x}^C(\psi_x)$ (equivalently, a morphism $\mathscr M^\alpha: \alpha^\ast\mathscr M_x\longrightarrow \mathscr M_y$ in $\mathbf M_{\mathscr R_y}^C(\psi_y)$). \smallskip Further, we suppose that $\mathscr M_{id_x}=id_{\mathscr M_x}$ for each $x\in \mathscr X$ and that for any composable morphisms $x\overset{\alpha}{\longrightarrow}y\overset{\beta}{\longrightarrow}z$, we have $\alpha_\ast(\mathscr M_\beta)\circ \mathscr M_\alpha=\mathscr M_{\beta\alpha}:\mathscr M_x\longrightarrow \alpha_\ast\mathscr M_y\longrightarrow \alpha_\ast\beta_\ast\mathscr M_z=(\beta\alpha)_\ast\mathscr M_z$. The latter condition may be expressed in either of two equivalent ways \begin{equation}\label{4.25d} \mathscr M_{\beta\alpha}=\alpha_\ast(\mathscr M_\beta)\circ \mathscr M_\alpha\qquad\Leftrightarrow\qquad\mathscr M^{\beta\alpha}=\mathscr M^\beta\circ \beta^\ast(\mathscr M^\alpha) \end{equation} \smallskip A morphism $\eta:\mathscr M\longrightarrow \mathscr N$ of entwined modules over $\mathscr R$ consists of morphisms $\eta_x:\mathscr M_x\longrightarrow \mathscr N_x$ in each $\mathbf M^C_{\mathscr R_x}(\psi_x)$ such that the following diagram commutes \begin{equation} \begin{CD} \mathscr M_x @>\eta_x>> \mathscr N_x\\ @V\mathscr M_\alpha VV @VV\mathscr N_\alpha V \\ \alpha_\ast\mathscr M_y @>\alpha_\ast\eta_y>> \alpha_\ast\mathscr N_y \\ \end{CD} \end{equation} for each $\alpha:x\longrightarrow y$ in $\mathscr X$. The category of entwined modules over $\mathscr R$ will be denoted by $Mod^C-\mathscr R$. \end{defn} \begin{thm}\label{P4.3} Let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation of a small category $\mathscr X$. Then, $Mod^C-\mathscr R$ is an abelian category. \end{thm} \begin{proof} Let $\eta:\mathscr M\longrightarrow \mathscr N$ be a morphism in $Mod^C-\mathscr R$. We define the kernel and cokernel of $\eta$ by setting \begin{equation} Ker(\eta)_x:=Ker(\eta_x:\mathscr M_x\longrightarrow \mathscr N_x)\qquad Cok(\eta)_x:=Cok(\eta_x:\mathscr M_x\longrightarrow \mathscr N_x) \end{equation} for each $x\in \mathscr X$.
For $\alpha:x\longrightarrow y$ in $\mathscr X$, the morphisms $Ker(\eta)_\alpha$ and $Cok(\eta)_\alpha$ are induced in the obvious manner, using the fact that $\alpha_\ast:\mathbf M^C_{\mathscr R_y}(\psi_y)\longrightarrow \mathbf M_{\mathscr R_x}^C(\psi_x)$ is exact. From this, it is also clear that $Cok(Ker(\eta)\hookrightarrow \mathscr M)=Ker(\mathscr N\twoheadrightarrow Cok(\eta))$. \end{proof} We now let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation of a small category $\mathscr X$ and let $\mathscr M$ be an entwined module over $\mathscr R$. We consider some $x\in \mathscr X$ and a morphism \begin{equation} \eta: V\otimes H_r\longrightarrow \mathscr M_x \end{equation} in $\mathbf M_{\mathscr R_x}^C(\psi_x)$, where $V$ is a finite dimensional projective in $Comod-C$ and $r\in \mathscr R_x$. For each $y\in \mathscr X$, we now set $\mathscr N_y\subseteq \mathscr M_y$ to be the image of the family of maps \begin{equation}\label{eq4.6} \begin{array}{ll} \mathscr N_y&=Im\left(\underset{\beta\in \mathscr X(x,y)}{\bigoplus}\textrm{ }\begin{CD}\beta^\ast (V\otimes H_r)@>\bigoplus \beta^\ast\eta>>\beta^\ast \mathscr M_x@>\mathscr M^\beta>>\mathscr M_y\end{CD}\right)\\ &=\underset{\beta\in \mathscr X(x,y)}{\sum}\textrm{ }Im\left(\begin{CD}\beta^\ast (V\otimes H_r)@>\beta^\ast\eta>>\beta^\ast \mathscr M_x@>\mathscr M^\beta>>\mathscr M_y\end{CD}\right)\\ \end{array} \end{equation} We denote by $\iota_y$ the inclusion $\iota_y:\mathscr N_y\hookrightarrow\mathscr M_y$. For each $\beta\in \mathscr X(x,y)$, we denote by $\eta'_\beta:\beta^\ast (V\otimes H_r) \longrightarrow \mathscr N_y$ the canonical morphism induced from \eqref{eq4.6}. \begin{lem}\label{L4.4} For any $\alpha\in \mathscr X(y,z)$, $\beta\in \mathscr X(x,y)$, the following composition \begin{equation} \begin{CD} \beta^\ast (V\otimes H_r)@>\eta'_\beta>> \mathscr N_y @>\iota_y>> \mathscr M_y @>\mathscr M_\alpha>> \alpha_\ast\mathscr M_z \end{CD} \end{equation} factors through $\alpha_\ast(\iota_z):\alpha_\ast\mathscr N_z\longrightarrow\alpha_\ast\mathscr M_z$. \end{lem} \begin{proof} Since $(\alpha^\ast,\alpha_\ast)$ is an adjoint pair, it suffices to show that the composition \begin{equation} \begin{CD} \alpha^\ast\beta^\ast (V\otimes H_r)@>\alpha^\ast(\eta'_\beta)>> \alpha^\ast\mathscr N_y @>\alpha^\ast(\iota_y)>> \alpha^\ast\mathscr M_y @>\mathscr M^\alpha>> \mathscr M_z \end{CD} \end{equation}factors through $\iota_z:\mathscr N_z\longrightarrow \mathscr M_z$. By definition, we know that the composition $\beta^\ast (V\otimes H_r)\xrightarrow{\eta'_\beta}\mathscr N_y\xrightarrow{\iota_y}\mathscr M_y$ factors through $\beta^\ast\mathscr M_x$, i.e., we have \begin{equation} \iota_y\circ \eta'_\beta=\mathscr M^\beta\circ \beta^\ast\eta \end{equation} Applying $\alpha^\ast$, composing with $\mathscr M^\alpha$ and using \eqref{4.25d}, we get \begin{equation} \mathscr M^\alpha\circ \alpha^\ast(\iota_y)\circ \alpha^\ast(\eta'_\beta)=\mathscr M^\alpha\circ \alpha^\ast(\mathscr M^\beta)\circ \alpha^\ast(\beta^\ast\eta)=\mathscr M^{\alpha\beta}\circ \alpha^\ast\beta^\ast\eta \end{equation} From the definition in \eqref{eq4.6}, it is now clear that the composition $\mathscr M^\alpha\circ \alpha^\ast(\iota_y)\circ \alpha^\ast(\eta'_\beta)=\mathscr M^{\alpha\beta}\circ \alpha^\ast\beta^\ast\eta$ factors through $\iota_z:\mathscr N_z\longrightarrow \mathscr M_z$ as $\mathscr M^\alpha\circ \alpha^\ast(\iota_y)\circ \alpha^\ast(\eta'_\beta)=\iota_z\circ \eta'_{\alpha\beta}$. 
\end{proof} \begin{thm}\label{P4.5} For any $\alpha\in \mathscr X(y,z)$, the morphism $\mathscr M_\alpha:\mathscr M_y\longrightarrow \alpha_\ast\mathscr M_z$ restricts to a morphism $\mathscr N_\alpha:\mathscr N_y\longrightarrow \alpha_\ast\mathscr N_z$, giving us a commutative diagram \begin{equation}\label{cd4} \begin{CD} \mathscr M_y @>\mathscr M_\alpha>>\alpha_\ast\mathscr M_z\\ @A\iota_yAA @AA\alpha_\ast(\iota_z)A \\ \mathscr N_y @>\mathscr N_\alpha >> \alpha_\ast\mathscr N_z\\ \end{CD} \end{equation} \end{thm} \begin{proof} We already know that $\iota_z:\mathscr N_z\longrightarrow \mathscr M_z$ is a monomorphism. Since $\alpha_\ast$ is a right adjoint, it follows that $\alpha_\ast(\iota_z)$ is also a monomorphism. Since $C$ is right semiperfect, we know from Theorem \ref{T3.5} that $\mathbf M^C_{\mathscr R_y}(\psi_y)$ is a Grothendieck category with projective generators $\{G_k\}_{k\in K}$. Using Lemma \ref{L3.2}, it suffices to show that for any $k\in K$ and any morphism $\xi_k : G_k \longrightarrow \mathscr N_y$, there exists $\xi'_k:G_k\longrightarrow \alpha_\ast\mathscr N_z$ such that $\alpha_\ast(\iota_z)\circ \xi'_k=\mathscr M_\alpha\circ \iota_y\circ \xi_k$. \smallskip From \eqref{eq4.6}, we have an epimorphism \begin{equation}\underset{\beta\in \mathscr X(x,y)}{\bigoplus}\textrm{ }\eta'_\beta:\underset{\beta\in \mathscr X(x,y)}{\bigoplus}\textrm{ }\beta^\ast(V\otimes H_r)\longrightarrow \mathscr N_y \end{equation} Since $G_k$ is projective, we can lift $\xi_k : G_k \longrightarrow \mathscr N_y$ to a morphism $\xi''_k:G_k\longrightarrow \underset{\beta\in \mathscr X(x,y)}{\bigoplus}\textrm{ }\beta^\ast(V\otimes H_r)$ such that \begin{equation}\xi_k=\left(\underset{\beta\in \mathscr X(x,y)}{\bigoplus}\textrm{ }\eta'_\beta\right)\circ \xi''_k \end{equation} From Lemma \ref{L4.4}, we know that $\mathscr M_\alpha\circ \iota_y\circ \eta'_\beta$ factors through $\alpha_\ast(\iota_z):\alpha_\ast\mathscr N_z\longrightarrow\alpha_\ast\mathscr M_z$ for each $\beta\in \mathscr X(x,y)$. The result is now clear. \end{proof} Using the adjointness of $(\alpha^\ast,\alpha_\ast)$, we can also obtain a morphism $\mathscr N^\alpha:\alpha^\ast\mathscr N_y\longrightarrow \mathscr N_z$ for each $\alpha\in \mathscr X(y,z)$, corresponding to the morphism $\mathscr N_\alpha:\mathscr N_y\longrightarrow\alpha_\ast\mathscr N_z$ in \eqref{cd4}. The objects $\{\mathscr N_y\in \mathbf M_{\mathscr R_y}^C(\psi_y)\}_{y\in \mathscr X}$, together with the morphisms $\{\mathscr N_\alpha\}_{\alpha\in Mor(\mathscr X)}$ determine an object of $Mod^C-\mathscr R$ that we denote by $\mathscr N$. Additionally, Proposition \ref{P4.5} shows that we have an inclusion $\iota:\mathscr N\hookrightarrow \mathscr M$ in $Mod^C-\mathscr R$. Before we proceed further, we will describe the object $\mathscr N$ in a few more ways. \begin{lem}\label{P4.6} Let $\eta'_1:V\otimes H_r\longrightarrow \mathscr N_x$ be the canonical morphism corresponding to the identity map in $\mathscr X(x,x)$. Then, for any $y\in \mathscr X$, we have \begin{equation} \mathscr N_y=Im\left(\underset{\beta\in \mathscr X(x,y)}{\bigoplus}\textrm{ }\begin{CD}\beta^\ast (V\otimes H_r)@>\bigoplus\beta^\ast\eta'_1>>\beta^\ast \mathscr N_x@>\mathscr N^\beta>>\mathscr N_y\end{CD}\right) \end{equation} \end{lem} \begin{proof} For any $\beta\in\mathscr X(x,y)$, we consider the commutative diagram \begin{equation}\label{cd4.16} \begin{CD} \beta^\ast(V\otimes H_r)@>\beta^\ast\eta'_1>>\beta^\ast\mathscr N_x @>\mathscr N^\beta>>\mathscr N_y\\ @.
@V\beta^\ast(\iota_x)VV @VV\iota_yV\\ @. \beta^\ast\mathscr M_x @>\mathscr M^\beta>> \mathscr M_y\\ \end{CD} \end{equation} By definition, we know that $\iota_x\circ \eta'_1=\eta$, which gives $\beta^\ast(\iota_x)\circ \beta^\ast(\eta'_1)=\beta^\ast(\eta)$. Composing with $\mathscr M^\beta$, we get \begin{equation} Im(\mathscr M^\beta\circ \beta^\ast(\eta))=Im(\mathscr M^\beta\circ \beta^\ast(\iota_x)\circ \beta^\ast(\eta'_1))=Im(\iota_y\circ \mathscr N^\beta\circ \beta^\ast\eta'_1)\cong Im(\mathscr N^\beta\circ \beta^\ast\eta'_1) \end{equation} where the last isomorphism follows from the fact that $\iota_y$ is monic. The result is now clear from the definition in \eqref{eq4.6}. \end{proof} \begin{lem}\label{L4.7} For any $y\in \mathscr X$, we have \begin{equation}\label{ny} \mathscr N_y=\underset{\beta\in \mathscr X(x,y)}{\sum}\textrm{ }Im\left(\begin{CD}\beta^\ast \mathscr N_x@>\beta^\ast(\iota_x)>>\beta^\ast \mathscr M_x@>\mathscr M^\beta>>\mathscr M_y\end{CD}\right) \end{equation} \end{lem} \begin{proof} For the sake of convenience, we set \begin{equation*} \mathscr N'_y:=\underset{\beta\in \mathscr X(x,y)}{\sum}\textrm{ }Im\left(\begin{CD}\beta^\ast \mathscr N_x@>\beta^\ast(\iota_x)>>\beta^\ast \mathscr M_x@>\mathscr M^\beta>>\mathscr M_y\end{CD}\right) \end{equation*} From the commutative diagram in \eqref{cd4.16}, we see that each of the morphisms $\begin{CD}\beta^\ast \mathscr N_x@>\beta^\ast(\iota_x)>>\beta^\ast \mathscr M_x@>\mathscr M^\beta>>\mathscr M_y\end{CD}$ factors through the subobject $\mathscr N_y\subseteq \mathscr M_y$. Hence, $\mathscr N_y'\subseteq \mathscr N_y$. On the other hand, it is clear that \begin{equation*} Im\left(\begin{CD}\beta^\ast(V\otimes H_r)@>\beta^\ast\eta_1'>>\beta^\ast \mathscr N_x@>\beta^\ast(\iota_x)>>\beta^\ast \mathscr M_x@>\mathscr M^\beta>>\mathscr M_y\end{CD}\right)\subseteq Im\left(\begin{CD}\beta^\ast \mathscr N_x@>\beta^\ast(\iota_x)>>\beta^\ast \mathscr M_x@>\mathscr M^\beta>>\mathscr M_y\end{CD}\right) \end{equation*} Applying Lemma \ref{P4.6}, it is now clear that $\mathscr N_y\subseteq \mathscr N_y'$. This proves the result. \end{proof} We now make a few conventions : if $\mathcal M$ is a module over a small $K$-linear category $\mathcal R$, we denote by $el(\mathcal M)$ the union $\underset{r\in \mathcal R}{\bigcup}\textrm{ }\mathcal M(r)$. The cardinality of $el(\mathcal M)$ will be denoted by $|\mathcal M|$. If $\mathscr M$ is a module over an entwined $C$-representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$, we denote by $el_{\mathscr X}(\mathscr M)$ the union $\underset{x\in \mathscr X}{\bigcup}\textrm{ }el(\mathscr M_x)$. The cardinality of $el_{\mathscr X}(\mathscr M)$ will be denoted by $|\mathscr M|$. It is evident that if $\mathscr M\in Mod^C-\mathscr R$ and $\mathscr N$ is either a quotient or a subobject of $\mathscr M$, then $|\mathscr N|\leq |\mathscr M|$. \smallskip We now define the following cardinality \begin{equation} \kappa =sup\{ \mbox{$|\mathbb N|$, $|C|$, $|K|$, $|Mor(\mathscr X)|$, $|Mor(\mathscr R_x)|$, $x\in \mathscr X$}\} \end{equation} We observe that $|\beta^\ast(V\otimes H_r)|\leq\kappa$, where $V$ is any finite dimensional $C$-comodule and $\beta\in \mathscr X(x,y)$. \begin{lem}\label{L4.8} We have $|\mathscr N|\leq \kappa$. \end{lem} \begin{proof} We choose $y\in \mathscr X$. 
From Lemma \ref{P4.6}, we have \begin{equation} \mathscr N_y=Im\left(\underset{\beta\in \mathscr X(x,y)}{\bigoplus}\textrm{ }\begin{CD}\beta^\ast (V\otimes H_r)@>\bigoplus\beta^\ast\eta'_1>>\beta^\ast \mathscr N_x@>\mathscr N^\beta>>\mathscr N_y\end{CD}\right) \end{equation} Since $\mathscr N_y$ is an epimorphic image of $\underset{\beta\in \mathscr X(x,y)}{\bigoplus}\textrm{ }\beta^\ast (V\otimes H_r)$, we have \begin{equation}|\mathscr N_y|\leq | \underset{\beta\in \mathscr X(x,y)}{\bigoplus}\textrm{ }\beta^\ast (V\otimes H_r)|\leq \kappa \end{equation} It follows that $ |\mathscr N|=\underset{y\in \mathscr X}{\sum}\textrm{ }|\mathscr N_y|\leq \kappa $. \end{proof} \begin{Thm}\label{T4.9} Let $C$ be a right semiperfect coalgebra over a field $K$. Let $\mathscr R:\mathscr X\longrightarrow\mathscr Ent_C$ be an entwined $C$-representation of a small category $\mathscr X$. Then, the category $Mod^C-\mathscr R$ of entwined modules over $\mathscr R$ is a Grothendieck category. \end{Thm} \begin{proof} Since filtered colimits and finite limits in $Mod^C-\mathscr R$ are computed pointwise, it is clear that they commute with each other. \smallskip We now consider an object $\mathscr M$ in $Mod^C-\mathscr R$ and an element $m\in el_{\mathscr X}(\mathscr M)$. Then, $m\in \mathscr M_x(r)$ for some $x\in \mathscr X$ and $r\in \mathscr R_x$. By \cite[Lemma 2.8]{BBR}, we can find a finite dimensional $C$-subcomodule $V'\subseteq \mathscr M_x(r)$ containing $m$ and a morphism $\eta': V'\otimes H_r\longrightarrow \mathscr M_x$ in $\mathbf M_{\mathscr R_x}^C(\psi_x)$ such that $\eta'(r)(m\otimes id_r)=m$. Since $C$ is semiperfect, we can choose a finite dimensional projective $V$ in $Comod-C$ along with an epimorphism $V\longrightarrow V'$. This induces a morphism $\eta:V\otimes H_r\longrightarrow \mathscr M_x$ in $\mathbf M_{\mathscr R_x}^C(\psi_x)$. Corresponding to $\eta$, we now define the subobject $\mathscr N\subseteq \mathscr M$ as in \eqref{eq4.6}. It is clear that $m\in el_{\mathscr X}(\mathscr N)$. By Lemma \ref{L4.8}, we know that $|\mathscr N|\leq \kappa$. \smallskip We now consider the set of isomorphism classes of objects in $Mod^C-\mathscr R$ having cardinality $\leq \kappa$. From the above, it is clear that any object in $Mod^C-\mathscr R$ may be expressed as a sum of such objects. By choosing one object from each such isomorphism class, we obtain a set of generators for $Mod^C-\mathscr R$. \end{proof} \section{Entwined representations of a poset and projective generators} In this section, the small category $\mathscr X$ will always be a partially ordered set. If $x\leq y$ in $\mathscr X$, we will say that there is a single morphism $x\longrightarrow y$ in $\mathscr X$. We continue with $C$ being a right semiperfect coalgebra over the field $K$ and $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ being an entwined $C$-representation of $\mathscr X$. From Theorem \ref{T4.9}, we know that $Mod^C-\mathscr R$ is a Grothendieck category. \smallskip In this section, we will show that $Mod^C-\mathscr R$ has projective generators. For this, we will construct a pair of adjoint functors \begin{equation}\label{adjexev} ex_x^C:\mathbf M_{\mathscr R_x}^C(\psi_x)\longrightarrow Mod^C-\mathscr R \qquad ev_x^C:Mod^C-\mathscr R\longrightarrow \mathbf M_{\mathscr R_x}^C(\psi_x) \end{equation} for each $x\in \mathscr X$. \begin{lem}\label{L5.1} Let $\mathscr X$ be a poset. Fix $x\in \mathscr X$.
Then, there is a functor $ex_x^C:\mathbf M_{\mathscr R_x}^C(\psi_x)\longrightarrow Mod^C-\mathscr R$ defined by setting \begin{equation} ex_x^C(\mathcal M)_y:=\left\{ \begin{array}{ll} \alpha^\ast\mathcal M & \mbox{if $\alpha\in \mathscr X(x,y)$}\\ 0 & \mbox{if $\mathscr X(x,y)=\phi$}\\ \end{array}\right. \end{equation} for each $y\in \mathscr X$. \end{lem} \begin{proof} It is immediate that each $ex_x^C(\mathcal M)_y\in \mathbf M_{\mathscr R_y}^C(\psi_y)$. We consider $\beta:y\longrightarrow y'$ in $\mathscr X$. If $x\not\leq y$, we have $0=ex_x^C(\mathcal M)^\beta:0=\beta^\ast ex_x^C(\mathcal M)_y\longrightarrow ex_x^C(\mathcal M)_{y'}$ in $\mathbf M_{\mathscr R_{y'}}^C(\psi_{y'})$. Otherwise, we consider $\alpha:x\longrightarrow y$ and $\alpha':x\longrightarrow y'$. Then, we have \begin{equation*} id=ex_x^C(\mathcal M)^\beta :\beta^\ast ex_x^C(\mathcal M)_y=\beta^\ast\alpha^\ast \mathcal M\longrightarrow \alpha'^\ast\mathcal M=ex_x^C(\mathcal M)_{y'} \end{equation*} which follows from the fact that $\beta\circ \alpha=\alpha'$. Given composable morphisms $\beta$, $\gamma$ in $\mathscr X$, it is now clear from the definitions that $ex_x^C(\mathcal M)^{\gamma\beta}=ex_x^C(\mathcal M)^\gamma\circ \gamma^\ast(ex_x^C(\mathcal M)^\beta)$. \end{proof} \begin{lem}\label{L5.2} Let $\mathscr X$ be a poset. Fix $x\in \mathscr X$. Then, there is a functor \begin{equation} ev_x^C:Mod^C-\mathscr R\longrightarrow \mathbf M_{\mathscr R_x}^C(\psi_x)\qquad \mathscr M\mapsto \mathscr M_x \end{equation} Additionally, $ev_x^C$ is exact. \end{lem} \begin{proof} It is immediate that $ev_x^C$ is a functor. Since finite limits and finite colimits in $Mod^C-\mathscr R$ are computed pointwise, it follows that $ev_x^C$ is exact. \end{proof} \begin{thm}\label{P5.3}Let $\mathscr X$ be a poset. Fix $x\in \mathscr X$. Then, $(ex_x^C,ev_x^C)$ is a pair of adjoint functors. \end{thm} \begin{proof} For any $\mathcal M\in \mathbf M_{\mathscr R_x}^C(\psi_x)$ and $\mathscr N\in Mod^C-\mathscr R$, we will show that \begin{equation} Mod^C-\mathscr R(ex_x^C(\mathcal M),\mathscr N)\cong \mathbf M_{\mathscr R_x}^C(\psi_x)(\mathcal M,ev_x^C(\mathscr N)) \end{equation} We begin with a morphism $f:\mathcal M\longrightarrow \mathscr N_x$ in $\mathbf M_{\mathscr R_x}^C(\psi_x)$. Corresponding to $f$, we define $\eta^f:ex_x^C(\mathcal M)\longrightarrow\mathscr N$ in $Mod^C-\mathscr R$ by setting \begin{equation} \eta^f_y:ex^C_x(\mathcal M)_y=\alpha^\ast\mathcal M\xrightarrow{\alpha^\ast f}\alpha^\ast\mathscr N_x\xrightarrow{\mathscr N^\alpha}\mathscr N_y \end{equation} whenever $x\leq y$ and $\alpha\in \mathscr X(x,y)$. Otherwise, we set $0=\eta^f_y:0=ex^C_x(\mathcal M)_y\longrightarrow \mathscr N_y$. For $\beta:y\longrightarrow y'$ in $\mathscr X$, we have to show that the following diagram is commutative. \begin{equation}\label{5.6cd} \begin{CD} \beta^\ast ex_x^C(\mathcal M)_y @>\beta^\ast\eta^f_y>> \beta^\ast\mathscr N_y \\ @Vex_x^C(\mathcal M)^\beta VV @VV\mathscr N^\beta V \\ ex_x^C(\mathcal M)_{y'} @>\eta^f_{y'}>> \mathscr N_{y'} \\ \end{CD} \end{equation} If $x\not\leq y$, then $ex_x^C(\mathcal M)_y=0$ and the diagram commutes. Otherwise, we consider $\alpha:x\longrightarrow y$ and $\alpha'=\beta\circ \alpha:x\longrightarrow y'$. 
Then, \eqref{5.6cd} reduces to the commutative diagram \begin{equation} \begin{CD} \beta^\ast \alpha^\ast\mathcal M @>\beta^\ast(\mathscr N^\alpha\circ \alpha^\ast f)>> \beta^\ast\mathscr N_y \\ @VidVV @VV\mathscr N^\beta V \\ \beta^\ast\alpha^\ast\mathcal M=\alpha'^\ast\mathcal M @>\mathscr N^{\alpha'}\circ \alpha'^\ast(f)=\mathscr N^\beta\circ\beta^\ast(\mathscr N^\alpha)\circ \beta^\ast\alpha^\ast f>>\mathscr N_{y'} \\ \end{CD} \end{equation} Conversely, we take $\eta:ex_x^C(\mathcal M)\longrightarrow \mathscr N$ in $Mod^C-\mathscr R$. In particular, this determines $f^\eta=\eta_x:\mathcal M\longrightarrow \mathscr N_x$ in $ \mathbf M_{\mathscr R_x}^C(\psi_x)$. It may be easily verified that these two associations are inverse to each other. This proves the result. \end{proof} \begin{cor}\label{C5.4} The functor $ex_x^C:\mathbf M_{\mathscr R_x}^C(\psi_x)\longrightarrow Mod^C-\mathscr R $ preserves projectives. \end{cor} \begin{proof} From Proposition \ref{P5.3}, we know that $(ex_x^C,ev_x^C)$ is a pair of adjoint functors. From Lemma \ref{L5.2}, we know that the right adjoint $ev_x^C$ is exact. It follows therefore that its left adjoint $ex_x^C$ preserves projective objects. \end{proof} \begin{Thm}\label{T5.5} Let $C$ be a right semiperfect coalgebra over a field $K$. Let $\mathscr X$ be a poset and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation of $\mathscr X$. Then, $Mod^C-\mathscr R$ has projective generators. \end{Thm} \begin{proof} We denote by $Proj^f(C)$ the set of isomorphism classes of finite dimensional projective $C$-comodules. We will show that the family \begin{equation}\mathcal G=\{\mbox{$ex_x^C(V\otimes H_r)$ $\vert$ $x\in \mathscr X$, $r\in \mathscr R_x$, $V\in Proj^f(C)$}\} \end{equation} is a set of projective generators for $Mod^C-\mathscr R$. From Proposition \ref{P3.4}, we know that $V\otimes H_r$ is projective in $\mathbf M_{\mathscr R_x}^C(\psi_x)$, where $r\in \mathscr R_x$ and $V\in Proj^f(C)$. It now follows from Corollary \ref{C5.4} that each $ex_x^C(V\otimes H_r)$ is projective in $Mod^C-\mathscr R$. \smallskip It remains to show that $\mathcal G$ is a set of generators for $Mod^C-\mathscr R$. For this, we consider a monomorphism $\iota:\mathscr N\hookrightarrow \mathscr M$ in $Mod^C-\mathscr R$ such that $\mathscr N\subsetneq\mathscr M$. Since kernels and cokernels in $Mod^C-\mathscr R$ are taken pointwise, it follows that there is some $x\in \mathscr X$ such that $\iota_x:\mathscr N_x\hookrightarrow\mathscr M_x$ is a monomorphism with $\mathscr N_x\subsetneq\mathscr M_x$. \smallskip From the proof of Theorem \ref{T3.5}, we know that $\{V\otimes H_r\}_{r\in \mathscr R_x,V\in Proj^f(C)}$ is a set of generators for $\mathbf M^C_{\mathscr R_x}(\psi_x)$. Accordingly, we can choose a morphism $f:V\otimes H_r\longrightarrow \mathscr M_x$ with $r\in \mathscr R_x$ and $V\in Proj^f(C)$ such that $f$ does not factor through $ev_x^C(\iota)=\iota_x:\mathscr N_x\hookrightarrow\mathscr M_x$. Applying the adjunction $(ex^C_x,ev_x^C)$, we now obtain a morphism $\eta:ex_x^C(V\otimes H_r)\longrightarrow \mathscr M$ corresponding to $f$, which does not factor through $\iota: \mathscr N\longrightarrow \mathscr M$. It now follows (see, for instance, \cite[$\S$ 1.9]{Tohoku}) that the family $\mathcal G$ is a set of generators for $Mod^C-\mathscr R$. 
\end{proof} \section{Cartesian modules over entwined representations} We continue with $\mathscr X$ being a poset, $C$ being a right semiperfect $K$-coalgebra and $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ being an entwined $C$-representation of $\mathscr X$. In this section, we will introduce the category of cartesian modules over $\mathscr R$. \smallskip Given a morphism $\alpha:(\mathcal R,C,\psi)\longrightarrow (\mathcal S,C,\psi')$ in $\mathscr Ent_C$, we already know that the left adjoint $\alpha^\ast$ is right exact. We will say that $\alpha:(\mathcal R,C,\psi)\longrightarrow (\mathcal S,C,\psi')$ is flat if $\alpha^\ast: \mathbf M^C_{\mathcal R}(\psi)\longrightarrow \mathbf M^C_{\mathcal S}(\psi')$ is exact. Accordingly, we will say that an entwined $C$-representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is flat if $\alpha^\ast=\mathscr R_\alpha^\ast:\mathbf M^C_{\mathscr R_x}(\psi_x) \longrightarrow \mathbf M^C_{\mathscr R_y}(\psi_y)$ is exact for each $\alpha:x\longrightarrow y$ in $\mathscr X$. \begin{defn}\label{D6.1} Let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation of $\mathscr X$. Suppose that $\mathscr R$ is flat. Let $\mathscr M$ be an entwined module over $\mathscr R$. We will say that $\mathscr M$ is cartesian if for each $\alpha :x\longrightarrow y$ in $\mathscr X$, the morphism $\mathscr M^\alpha:\alpha^\ast\mathscr M_x\longrightarrow\mathscr M_y$ in $\mathbf M^C_{\mathscr R_y}(\psi_y)$ is an isomorphism. \smallskip We will denote by $Cart^C-\mathscr R$ the full subcategory of $Mod^C-\mathscr R$ consisting of cartesian modules. \end{defn} It is clear that $Cart^C-\mathscr R$ is an abelian category, with filtered colimits and finite limits coming from $Mod^C-\mathscr R$. \smallskip We will now give conditions so that $Cart^C-\mathscr R$ is a Grothendieck category. For this, we will need some intermediate results. First, we recall (see, for instance, \cite{AR}) that an object $M$ in a Grothendieck category $\mathcal A$ is said to be finitely generated if the functor ${\mathcal A}(M,\_\_)$ satisfies \begin{equation} \underset{i\in I}{\varinjlim}\textrm{ } {\mathcal A}(M,M_i)={\mathcal A}(M,\underset{i\in I}{\varinjlim}\textrm{ } M_i) \end{equation} where $\{M_i\}_{i\in I}$ is any filtered system of objects in $\mathcal A$ connected by monomorphisms. \begin{thm}\label{P6.1} Let $(\mathcal R,C,\psi)$ be an entwining structure with $C$ a right semiperfect coalgebra. Let $V$ be a finite dimensional projective right $C$-comodule. Then, for any $r\in \mathcal R$, the module $V\otimes H_r$ is a finitely generated projective object in $\mathbf M_{\mathcal R}^C(\psi)$. \end{thm} \begin{proof} From Proposition \ref{P3.4}, we already know that $V\otimes H_r$ is a projective object in $\mathbf M_{\mathcal R}^C(\psi)$. To show that it is finitely generated, we consider a filtered system $\{\mathcal M_i\}_{i\in I}$ of objects in $\mathbf M_{\mathcal R}^C(\psi)$ connected by monomorphisms and set $\mathcal M:=\underset{i\in I}{\varinjlim}\textrm{ } \mathcal M_i$. Since $\mathbf M_{\mathcal R}^C(\psi)$ is a Grothendieck category, we note that we have an inclusion $\eta_i:\mathcal M_i\hookrightarrow \mathcal M$ for each $i\in I$. \smallskip We now take a morphism $\zeta:V\otimes H_r\longrightarrow \mathcal M$ in $\mathbf M^C_{\mathcal R}(\psi)$. We choose a basis $\{v_1,...,v_n\}$ for $V$.
For each $1\leq k\leq n$, we now have a morphism in $\mathbf M_{\mathcal R}$ given by \begin{equation} \zeta_k:H_r\longrightarrow V\otimes H_r \qquad H_r(s)=\mathcal R(s,r)\ni f\mapsto v_k\otimes f\in (V\otimes H_r)(s) \end{equation} Then, each composition $\zeta\circ \zeta_k:H_r\longrightarrow\mathcal M$ is a morphism in $\mathbf M_{\mathcal R}$. Since $H_r$ is a finitely generated object in $\mathbf M_{\mathcal R}$, we can now choose $j\in I$ such that every $\zeta\circ \zeta_k$ factors through $\eta_j:\mathcal M_j\hookrightarrow \mathcal M$. We now construct the following pullback diagram in $\mathbf M^C_{\mathcal R}(\psi)$ \begin{equation}\label{eq6.3} \begin{CD} \mathcal N @>>> \mathcal M_j \\ @V\iota VV @VV\eta_jV \\ V\otimes H_r @>\zeta>> \mathcal M\\ \end{CD} \end{equation} Then, being a pullback of the monomorphism $\eta_j$, the morphism $\iota:\mathcal N\longrightarrow V\otimes H_r$ is a monomorphism in $\mathbf M^C_{\mathcal R}(\psi)$. From the construction of finite limits in $\mathbf M^C_{\mathcal R}(\psi)$, it follows that for each $s\in \mathcal R$, we have a pullback diagram in $Vect_K$ \begin{equation}\label{eq6.4} \begin{CD} \mathcal N(s) @>>> \mathcal M_j(s) \\ @V\iota(s) VV @VV\eta_j(s)V \\ (V\otimes H_r)(s) @>\zeta(s)>> \mathcal M(s)\\ \end{CD} \end{equation} By assumption, we know that $\zeta(s)(v_k\otimes f)\in Im(\eta_j(s))$ for any basis element $v_k$ and any $f\in H_r(s)$. It follows that $Im(\zeta(s))\subseteq Im(\eta_j(s))$ and hence the pullback $\mathcal N(s)=(V\otimes H_r)(s)$. In other words, $\mathcal N=V\otimes H_r$, so that $\zeta$ factors through $\eta_j:\mathcal M_j\hookrightarrow \mathcal M$. The result is now clear. \end{proof} \begin{lem}\label{L6.3} Let $\alpha:(\mathcal R,C,\psi)\longrightarrow (\mathcal S,C,\psi')$ be a flat morphism in $\mathscr Ent_C$. Let $\mathcal M\in \mathbf M^C_{\mathcal R}(\psi)$. \smallskip (a) There exists a family $\{r_i\}_{i\in I}$ of objects of $\mathcal R$ and a family $\{V_i\}_{i\in I}$ of finite dimensional projective $C$-comodules such that there is an epimorphism in $\mathbf M^C_{\mathcal S}(\psi')$ \begin{equation} \eta: \underset{i\in I}{\bigoplus}\textrm{ } (V_i\otimes H_{\alpha(r_i)})\longrightarrow \alpha^\ast\mathcal M \end{equation} \smallskip (b) Let $s\in \mathcal S$ and let $W$ be a finite dimensional projective in $Comod-C$. Let $\zeta:W\otimes H_s\longrightarrow \alpha^\ast\mathcal M$ be a morphism in $\mathbf M^C_{\mathcal S}(\psi')$. Then, there exists a finite set $\{r_1,...,r_n\}$ of objects of $\mathcal R$, a finite family $\{V_1,...,V_n\}$ of finite dimensional projective $C$-comodules and a morphism $\eta'':\underset{k=1}{\overset{n}{\bigoplus}}V_k\otimes H_{r_k}\longrightarrow \mathcal M$ in $\mathbf M^C_{\mathcal R}(\psi)$ such that $\zeta$ factors through $\alpha^\ast\eta''$. \end{lem} \begin{proof} (a) From the proof of Theorem \ref{T3.5}, we know that there exists an epimorphism in $\mathbf M^C_{\mathcal R}(\psi)$ \begin{equation} \eta' : \underset{i\in I}{\bigoplus}V_i\otimes H_{r_i}\longrightarrow \mathcal M \end{equation} where each $r_i\in \mathcal R$ and each $V_i$ is a finite dimensional projective $C$-comodule. Since $\alpha^\ast:\mathbf M^C_{\mathcal R}(\psi)\longrightarrow \mathbf M^C_{\mathcal S}(\psi')$ is a left adjoint, it induces an epimorphism $\alpha^\ast(\eta')$ in $\mathbf M^C_{\mathcal S}(\psi')$. From the definition in \eqref{ke2.2} and the construction in Proposition \ref{P2.2}, it is clear that $\alpha^\ast(V_i\otimes H_{r_i})=V_i\otimes \alpha^\ast H_{r_i}=V_i\otimes H_{\alpha(r_i)}$. This proves (a).
\smallskip (b) We consider the epimorphism $\alpha^\ast\eta'=\eta: \underset{i\in I}{\bigoplus}\textrm{ } (V_i\otimes H_{\alpha(r_i)})\longrightarrow \alpha^\ast\mathcal M$ constructed in (a). From Proposition \ref{P6.1}, we know that $W\otimes H_s$ is a finitely generated projective object in $\mathbf M^C_{\mathcal S}(\psi')$. As such $\zeta:W\otimes H_s\longrightarrow \alpha^\ast\mathcal M$ can be lifted to a morphism $\zeta': W\otimes H_s\longrightarrow \underset{i\in I}{\bigoplus}\textrm{ } (V_i\otimes H_{\alpha(r_i)})$ and $\zeta'$ factors through a finite direct sum of objects from the family $\{V_i\otimes H_{\alpha(r_i)}\}_{i\in I}$. The result is now clear. \end{proof} \begin{lem}\label{L6.4} Let $\alpha:(\mathcal R,C,\psi)\longrightarrow (\mathcal S,C,\psi')$ be a flat morphism in $\mathscr Ent_C$. Let $\kappa_1$ be any cardinal such that \begin{equation}\kappa_1\geq max\{ \mbox{$\mathbb N$, $|Mor(\mathcal R)|$, $|C|$, $|K|$}\} \end{equation} Let $\mathcal M\in \mathbf M^C_{\mathcal R}(\psi)$ and let $A\subseteq el(\alpha^\ast\mathcal M)$ be a set of elements such that $|A|\leq \kappa_1$. Then, there is a submodule $\mathcal N\hookrightarrow \mathcal M$ in $\mathbf M^C_{\mathcal R}(\psi)$ with $|\mathcal N|\leq \kappa_1$ such that $A\subseteq el(\alpha^\ast\mathcal N)$. \end{lem} \begin{proof} We consider some element $a\in A\subseteq el(\alpha^\ast\mathcal M)$. Then, we can choose a morphism $\zeta^a:W^a\otimes H_{s^a} \longrightarrow \alpha^\ast\mathcal M$ in $\mathbf M_{\mathcal S}^C(\psi')$ such that $a\in el(Im(\zeta^a))$, where $s^a\in \mathcal S$ and $W^a$ is a finite dimensional projective in $Comod-C$. Using Lemma \ref{L6.3}(b), we can now choose a finite set $\{r^a_1,...,r^a_{n^a}\}$ of objects of $\mathcal R$, a finite family $\{V^a_1,...,V^a_{n^a}\}$ of finite dimensional projective $C$-comodules and a morphism $\eta^{a''}:\underset{k=1}{\overset{n^a}{\bigoplus}}V_k^a\otimes H_{r_k^a}\longrightarrow \mathcal M$ in $\mathbf M^C_{\mathcal R}(\psi)$ such that $\zeta^a$ factors through $\alpha^\ast\eta^{a''}$. We now set \begin{equation} \mathcal N:=Im\left(\eta'':=\underset{a\in A}{\bigoplus}\eta^{a''}:\underset{a\in A}{\bigoplus}\textrm{ }\underset{k=1}{\overset{n^a}{\bigoplus}}V_k^a\otimes H_{r_k^a}\longrightarrow \mathcal M\right) \end{equation} Since $\alpha$ is flat and $\alpha^\ast$ is a left adjoint, we obtain \begin{equation} \alpha^\ast\mathcal N=Im\left(\alpha^\ast\eta''=\underset{a\in A}{\bigoplus}\alpha^\ast\eta^{a''}:\underset{a\in A}{\bigoplus}\textrm{ }\underset{k=1}{\overset{n^a}{\bigoplus}}V_k^a\otimes H_{\alpha(r_k^a)}\longrightarrow\alpha^\ast \mathcal M\right) \end{equation} Since each $a\in el(Im(\zeta^a))$ and $\zeta^a$ factors through $\alpha^\ast\eta^{a''}$, we get $A\subseteq el(\alpha^\ast\mathcal N)$. \smallskip It remains to show that $|\mathcal N|\leq \kappa_1$. Since $\mathcal N$ is a quotient of $\underset{a\in A}{\bigoplus}\textrm{ }\underset{k=1}{\overset{n^a}{\bigoplus}}V_k^a\otimes H_{r_k^a}$ and $|A|\leq \kappa_1$, it suffices to show that each $|V_k^a\otimes H_{r_k^a}|\leq \kappa_1$. This is clear from the definition of $\kappa_1$, using the fact that each $V_k^a$ is finite dimensional. 
\end{proof} \begin{rem}\label{Rem6.5} \emph{By considering $\alpha=id$ in Lemma \ref{L6.4}, we obtain the following simple consequence: if $A\subseteq el(\mathcal M)$ is any subset with $|A|\leq \kappa_1$, there is a submodule $\mathcal N\hookrightarrow \mathcal M$ in $\mathbf M^C_{\mathcal R}(\psi)$ with $|\mathcal N|\leq \kappa_1$ such that $A\subseteq el(\mathcal N)$. } \end{rem} \begin{lem}\label{L6.6} Let $\alpha:(\mathcal R,C,\psi)\longrightarrow (\mathcal S,C,\psi')$ be a flat morphism in $\mathscr Ent_C$ and let $\mathcal M\in \mathbf M^C_{\mathcal R}(\psi)$. Let $\kappa_2$ be any cardinal such that $\kappa_2 \geq max\{ \mbox{$\mathbb N$, $|Mor(\mathcal R)|$, $|Mor(\mathcal S)|$, $|C|$, $|K|$}\}$ and let $A\subseteq el(\mathcal M)$ and $B\subseteq el(\alpha^\ast\mathcal M)$ be subsets with $|A|$, $|B|\leq \kappa_2$. Then, there exists a submodule $\mathcal N\subseteq \mathcal M$ in $\mathbf M^C_{\mathcal R}(\psi)$ such that \smallskip (1) $|\mathcal N|\leq \kappa_2$, $|\alpha^\ast\mathcal N|\leq \kappa_2$ \smallskip (2) $A\subseteq el(\mathcal N)$ and $B\subseteq el(\alpha^\ast\mathcal N)$. \end{lem} \begin{proof} Applying Lemma \ref{L6.4} (and Remark \ref{Rem6.5}), we obtain submodules $\mathcal N_1$, $\mathcal N_2\subseteq \mathcal M$ such that \smallskip (1) $|\mathcal N_1|, |\mathcal N_2|\leq \kappa_2$ \smallskip (2) $A\subseteq el(\mathcal N_1)$, $B\subseteq el(\alpha^\ast\mathcal N_2)$. \smallskip We set $\mathcal N:=(\mathcal N_1+\mathcal N_2)\subseteq \mathcal M$. Then, $(\mathcal N_1+\mathcal N_2)$ is a quotient of $\mathcal N_1\oplus\mathcal N_2$ and hence $|\mathcal N|\leq \kappa_2$. Also, it is clear that $A\subseteq el(\mathcal N_1) \subseteq el(\mathcal N)$. Since $\alpha$ is flat, we get $B\subseteq el(\alpha^\ast\mathcal N_2)\subseteq el(\alpha^\ast\mathcal N)$. \smallskip It remains to show that $|\alpha^\ast\mathcal N|\leq \kappa_2$. By the definition in \eqref{ke2.2}, we know that $\alpha^*(\mathcal N)(s)$ is a quotient of \begin{equation}\label{ke6.9} \left(\underset{r\in \mathcal R}{\bigoplus}\mathcal N(r)\otimes \mathcal S(s,\alpha(r))\right) \end{equation} for each $s\in \mathcal S$. Since $\kappa_2 \geq |Mor(\mathcal R)|, |Mor(\mathcal S)|$, it follows from \eqref{ke6.9} that $|\alpha^*(\mathcal N)(s)|\leq\kappa_2$. Again since $\kappa_2 \geq |Mor(\mathcal S)|$, we get $|\alpha^\ast\mathcal N|\leq \kappa_2$. \end{proof} We will now show that $Cart^C-\mathscr R$ has a generator when $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is a flat representation of the poset $\mathscr X$. This will be done using induction on $\mathbb N\times Mor(\mathscr X)$ in a manner similar to the proof of \cite[Proposition 3.25]{EV}. As in Section 4, we set \begin{equation} \kappa =sup\{ \mbox{$|\mathbb N|$, $|C|$, $|K|$, $|Mor(\mathscr X)|$, $|Mor(\mathscr R_x)|$, $x\in \mathscr X$}\} \end{equation} Let $\mathscr M$ be a cartesian module over $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$. We now consider an element $m\in el_{\mathscr X}(\mathscr M)$. Suppose that $m\in \mathscr M_x(r)$ for some $x\in \mathscr X$ and $r\in \mathscr R_x$. As in the proof of Theorem \ref{T4.9}, we fix a finite dimensional projective $C$-comodule $V$ and a morphism $\eta: V\otimes H_r \longrightarrow \mathscr M_x$ in $\mathbf M^C_{\mathscr R_x}(\psi_x)$ such that $m$ is an element of the image of $\eta$. Corresponding to $\eta$, we define $\mathscr N\subseteq \mathscr M$ as in \eqref{eq4.6}. It is clear that $m\in el_{\mathscr X}(\mathscr N)$. 
By Lemma \ref{L4.8}, we know that $|\mathscr N|\leq \kappa$. \smallskip Next, we choose a well ordering of the set $Mor(\mathscr X)$ and consider the induced lexicographic ordering of $\mathbb N\times Mor(\mathscr X)$. Corresponding to each pair $(n,\alpha:y\longrightarrow z)\in \mathbb N\times Mor(\mathscr X)$, we will now define a subobject $\mathscr P(n,\alpha) \hookrightarrow \mathscr M$ in $Mod^C-\mathscr R$ satisfying the following conditions. \smallskip (1) $m\in el_{\mathscr X}(\mathscr P(1,\alpha_0))$, where $\alpha_0$ is the least element of $Mor(\mathscr X)$. \smallskip (2) $\mathscr P(n,\alpha)\subseteq \mathscr P(m,\beta)$, whenever $(n,\alpha)\leq (m,\beta)$ in $\mathbb N\times Mor(\mathscr X)$ \smallskip (3) For each $(n,\alpha:y\longrightarrow z)\in \mathbb N\times Mor(\mathscr X)$, the morphism $\mathscr P(n,\alpha)^\alpha:\alpha^\ast \mathscr P(n,\alpha)_y \longrightarrow \mathscr P(n,\alpha)_z$ is an isomorphism in $\mathbf M^C_{\mathscr R_z}(\psi_z)$. \smallskip (4) $|\mathscr P(n,\alpha)|\leq \kappa$. \smallskip For $(n,\alpha:y\longrightarrow z)\in \mathbb N\times Mor(\mathscr X)$, we start the process of constructing the module $\mathscr P(n,\alpha)$ as follows: we set \begin{equation}\label{6.11dp} A^0_0(w):=\left\{ \begin{array}{ll} \mathscr N_w & \mbox{if $n=1$ and $\alpha=\alpha_0$}\\ \underset{(m,\beta)<(n,\alpha)}{\bigcup}\textrm{ }\mathscr P(m,\beta)_w& \mbox{otherwise} \\ \end{array}\right. \end{equation} for each $w\in \mathscr X$. It is clear that each $A^0_0(w)\subseteq el(\mathscr M_w)$ and $|A^0_0(w)|\leq \kappa$. \smallskip Since $\mathscr M$ is cartesian, we know that $\alpha^\ast\mathscr M_y=\mathscr M_z$. Since $\alpha:(\mathscr R_y,C,\psi_y)\longrightarrow (\mathscr R_z,C,\psi_z)$ is flat in $\mathscr Ent_C$, we use Lemma \ref{L6.6} with $A^0_0(y)\subseteq el(\mathscr M_y)$ and $A^0_0(z)\subseteq el(\alpha^\ast\mathscr M_y)= el( \mathscr M_z)$ to obtain $A^0_1(y)\hookrightarrow \mathscr M_y$ in $\mathbf M^C_{\mathscr R_y}(\psi_y)$ such that \begin{equation}\label{card6.12} |A^0_1(y)|\leq \kappa \qquad |\alpha^\ast A^0_1(y)|\leq \kappa \qquad A^0_0(y) \subseteq el(A^0_1(y))\qquad A^0_0(z)\subseteq el(\alpha^\ast A^0_1(y)) \end{equation} We now set $A^0_1(z):=\alpha^\ast A^0_1(y)$. Then, \eqref{card6.12} can be rewritten as \begin{equation}\label{card6.13} |A^0_1(y)|\leq \kappa \qquad |A^0_1(z)|\leq \kappa \qquad A^0_0(y) \subseteq el(A^0_1(y))\qquad A^0_0(z)\subseteq el(A^0_1(z)) \end{equation} We observe here that since $\mathscr X$ is a poset, then $y=z$ implies $\alpha:y\longrightarrow z$ is the identity and hence $A^0_1(y)= A^0_1(z)$. For any $w\ne y,z$ in $\mathscr X$, we set $A^0_1(w)=A^0_0(w)$. Combining with \eqref{card6.13}, we have $A^0_0(w)\subseteq A^0_1(w)$ for every $w\in \mathscr X$ and each $|A^0_1(w)|\leq \kappa$. \begin{lem}\label{L6.61} Let $B\subseteq el_{\mathscr X}(\mathscr M)$ with $|B|\leq \kappa$. Then, there is a submodule $\mathscr Q\hookrightarrow \mathscr M$ in $Mod^C-\mathscr R$ such that $B\subseteq el_{\mathscr X}(\mathscr Q)$ and $|\mathscr Q|\leq \kappa$. \end{lem} \begin{proof} For any $m\in B\subseteq el_{\mathscr X}(\mathscr M)$ we can choose, as in the proof of Theorem \ref{T4.9}, a subobject $\mathscr Q_m\subseteq \mathscr M$ such that $m\in el_{\mathscr X}(\mathscr Q_m)$ and $|\mathscr Q_m|\leq \kappa$. Then, we set $\mathscr Q:=\underset{m\in B}{\sum} \mathscr Q_m$. In particular, $\mathscr Q$ is a quotient of $ \underset{m\in B}{\bigoplus} \mathscr Q_m$. Since $|B|\leq \kappa$, the result follows. 
\end{proof} Using Lemma \ref{L6.61}, we now choose a submodule $\mathscr Q^0(n,\alpha)\hookrightarrow \mathscr M$ in $Mod^C-\mathscr R$ such that $\underset{w\in \mathscr X}{\bigcup}\textrm{ }A^0_1(w)\subseteq el_{\mathscr X}(\mathscr Q^0(n,\alpha))$ and $|\mathscr Q^0(n,\alpha)|\leq \kappa$. In particular, $A^0_1(w)\subseteq \mathscr Q^0(n,\alpha)_w$ for each $w\in \mathscr X$. \smallskip We now iterate this construction. Suppose we have constructed a submodule $\mathscr Q^l(n,\alpha)\hookrightarrow \mathscr M$ for every $l\leq m$ such that $\underset{w\in \mathscr X}{\bigcup}\textrm{ }A^l_1(w)\subseteq el_{\mathscr X}(\mathscr Q^l(n,\alpha))$ and $|\mathscr Q^l(n,\alpha)|\leq \kappa$. Then, we set $A^{m+1}_0(w):=\mathscr Q^m(n,\alpha)_w$ for each $w\in \mathscr X$. We then use Lemma \ref{L6.6} with $A^{m+1}_0(y)\subseteq el(\mathscr M_y)$ and $A^{m+1}_0(z)\subseteq el(\alpha^\ast\mathscr M_y)= el( \mathscr M_z)$ to obtain $A^{m+1}_1(y)\hookrightarrow \mathscr M_y$ in $\mathbf M^C_{\mathscr R_y}(\psi_y)$ such that \begin{equation}\label{card6.14} |A^{m+1}_1(y)|\leq \kappa \qquad |\alpha^\ast A^{m+1}_1(y)|\leq \kappa \qquad A^{m+1}_0(y) \subseteq el(A^{m+1}_1(y))\qquad A^{m+1}_0(z)\subseteq el(\alpha^\ast A^{m+1}_1(y)) \end{equation} We now set $A^{m+1}_1(z):=\alpha^\ast A^{m+1}_1(y)$. Then, \eqref{card6.14} can be rewritten as \begin{equation}\label{card6.15} |A^{m+1}_1(y)|\leq \kappa \qquad |A^{m+1}_1(z)|\leq \kappa \qquad A^{m+1}_0(y) \subseteq el(A^{m+1}_1(y))\qquad A^{m+1}_0(z)\subseteq el(A^{m+1}_1(z)) \end{equation} For any $w\ne y,z$ in $\mathscr X$, we set $A^{m+1}_1(w)=A^{m+1}_0(w)$. Combining with \eqref{card6.15}, we have $A^{m+1}_0(w)\subseteq A^{m+1}_1(w)$ for every $w\in \mathscr X$ and each $|A^{m+1}_1(w)|\leq \kappa$. \smallskip Using Lemma \ref{L6.61}, we now choose a submodule $\mathscr Q^{m+1}(n,\alpha)\hookrightarrow \mathscr M$ in $Mod^C-\mathscr R$ such that $\underset{w\in \mathscr X}{\bigcup}\textrm{ }A^{m+1}_1(w)\subseteq el_{\mathscr X}(\mathscr Q^{m+1}(n,\alpha))$ and $|\mathscr Q^{m+1}(n,\alpha)|\leq \kappa$. In particular, $A^{m+1}_1(w)\subseteq \mathscr Q^{m+1}(n,\alpha)_w$ for each $w\in \mathscr X$. \smallskip Finally, we set \begin{equation}\label{6.16ep} \mathscr P(n,\alpha):=\underset{m\geq 0}{\varinjlim}\textrm{ }\mathscr Q^m(n,\alpha) \end{equation} in $Mod^C-\mathscr R$. \begin{lem}\label{L6.62} The family $\{\mbox{$\mathscr P(n,\alpha)$ $\vert$ $(n,\alpha)\in \mathbb N\times Mor(\mathscr X)$}\}$ satisfies the following conditions. \smallskip (1) $m\in el_{\mathscr X}(\mathscr P(1,\alpha_0))$, where $\alpha_0$ is the least element of $Mor(\mathscr X)$. \smallskip (2) $\mathscr P(n,\alpha)\subseteq \mathscr P(m,\beta)$, whenever $(n,\alpha)\leq (m,\beta)$ in $\mathbb N\times Mor(\mathscr X)$ \smallskip (3) For each $(n,\alpha:y\longrightarrow z)\in \mathbb N\times Mor(\mathscr X)$, the morphism $\mathscr P(n,\alpha)^\alpha:\alpha^\ast \mathscr P(n,\alpha)_y \longrightarrow \mathscr P(n,\alpha)_z$ is an isomorphism in $\mathbf M^C_{\mathscr R_z}(\psi_z)$. \smallskip (4) $|\mathscr P(n,\alpha)|\leq \kappa$. \end{lem} \begin{proof} The conditions (1) and (2) are immediate from the definition in \eqref{6.11dp}. The condition (4) follows from \eqref{6.16ep} and the fact that each $|\mathscr Q^{m+1}(n,\alpha)|\leq \kappa$. 
\smallskip To prove (3), we notice that $\mathscr P(n,\alpha)_y$ may be expressed as the filtered union \begin{equation} A^0_1(y)\hookrightarrow \mathscr Q^0(n,\alpha)_y\hookrightarrow A^1_1(y)\hookrightarrow \mathscr Q^1(n,\alpha)_y\hookrightarrow \dots \hookrightarrow A^{m+1}_1(y)\hookrightarrow\mathscr Q^{m+1}(n,\alpha)_y\hookrightarrow ... \end{equation} of objects in $\mathbf M^C_{\mathscr R_y}(\psi_y)$. Since $\alpha^\ast$ is exact and a left adjoint, we can express $\alpha^\ast\mathscr P(n,\alpha)_y$ as the filtered union \begin{equation} \label{c6.18} \alpha^\ast A^0_1(y)\hookrightarrow \alpha^\ast\mathscr Q^0(n,\alpha)_y\hookrightarrow \alpha^\ast A^1_1(y)\hookrightarrow \alpha^\ast\mathscr Q^1(n,\alpha)_y\hookrightarrow \dots \hookrightarrow \alpha^\ast A^{m+1}_1(y)\hookrightarrow\alpha^\ast\mathscr Q^{m+1}(n,\alpha)_y\hookrightarrow ... \end{equation} in $\mathbf M^C_{\mathscr R_z}(\psi_z)$. Similarly, $\mathscr P(n,\alpha)_z$ may be expressed as the filtered union \begin{equation} \label{c6.19} A^0_1(z)\hookrightarrow \mathscr Q^0(n,\alpha)_z\hookrightarrow A^1_1(z)\hookrightarrow \mathscr Q^1(n,\alpha)_z\hookrightarrow \dots \hookrightarrow A^{m+1}_1(z)\hookrightarrow\mathscr Q^{m+1}(n,\alpha)_z\hookrightarrow ... \end{equation} in $\mathbf M^C_{\mathscr R_z}(\psi_z)$. By definition, we know that $A^m_1(z)=\alpha^\ast A^m_1(y)$ for each $m\geq 0$. From \eqref{c6.18} and \eqref{c6.19}, it is clear that the filtered colimit of the isomorphisms $\alpha^\ast A^m_1(y)=A^m_1(z)$ induces an isomorphism $\mathscr P(n,\alpha)^\alpha:\alpha^\ast \mathscr P(n,\alpha)_y \longrightarrow \mathscr P(n,\alpha)_z$. \end{proof} \begin{lem}\label{L6.7} Let $\mathscr M$ be a cartesian module over a flat representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$. Choose $m\in el_{\mathscr X}(\mathscr M)$. Let $\kappa =max\{ \mbox{$|\mathbb N|$, $|C|$, $|K|$, $|Mor(\mathscr X)|$, $|Mor(\mathscr R_x)|$, $x\in \mathscr X$}\}$. Then, there is a cartesian submodule $\mathscr P\subseteq \mathscr M$ with $m\in el_{\mathscr X}(\mathscr P)$ such that $|\mathscr P|\leq \kappa$. \end{lem} \begin{proof} It is clear that $\mathbb N\times Mor(\mathscr X)$ with the lexicographic ordering is filtered. We set \begin{equation} \mathscr P:=\underset{(n,\alpha)\in \mathbb N\times Mor(\mathscr X)}{\bigcup}\textrm{ }\mathscr P(n,\alpha)\subseteq \mathscr M \end{equation} in $Mod^C-\mathscr R$. It is immediate that $m\in el_{\mathscr X}(\mathscr P)$. Since each $|\mathscr P(n,\alpha)|\leq \kappa$, it is clear that $|\mathscr P|\leq \kappa$. \smallskip We now consider a morphism $\beta:z\longrightarrow w$ in $\mathscr X$. Then, the family $\{(m,\beta)\}_{m\geq 1}$ is cofinal in $\mathbb N\times Mor(\mathscr X)$ and hence it follows that \begin{equation} \mathscr P:=\underset{m\geq 1}{\varinjlim}\textrm{ }\mathscr P(m,\beta) \end{equation} Since each $\mathscr P(m,\beta)^\beta:\beta^\ast \mathscr P(m,\beta)_z \longrightarrow \mathscr P(m,\beta)_w$ is an isomorphism, the filtered colimit $\mathscr P^\beta:\beta^\ast\mathscr P_z\longrightarrow \mathscr P_w$ is an isomorphism. \end{proof} \begin{Thm}\label{T6.10} Let $C$ be a right semiperfect coalgebra over a field $K$. Let $\mathscr X$ be a poset and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation of $\mathscr X$. Suppose that $\mathscr R$ is flat. Then, $Cart^C-\mathscr R$ is a Grothendieck category. \end{Thm} \begin{proof} It is already clear that $Cart^C-\mathscr R$ satisfies the (AB5) condition. 
From Lemma \ref{L6.7}, it is clear that any $\mathscr M\in Cart^C-\mathscr R$ can be expressed as a sum of a family $\{\mathscr P_m\}_{m\in el_{\mathscr X}(\mathscr M)}$ of cartesian submodules such that each $|\mathscr P_m| \leq \kappa$. Accordingly, a set of representatives of the isomorphism classes of cartesian modules $\mathscr P$ with $|\mathscr P|\leq \kappa$ gives a set of generators for $Cart^C-\mathscr R$. \end{proof} \section{Separability of the forgetful functor} Let $(\mathcal R,C,\psi)$ be an entwining structure. We consider the forgetful functor $\mathcal F:\mathbf M^C_{\mathcal R}(\psi)\longrightarrow \mathbf M_{\mathcal R}$. By \cite[Lemma 2.4 \& Lemma 3.1]{BBR}, we know that $\mathcal F$ has a right adjoint $\mathcal G:\mathbf M_{\mathcal R}\longrightarrow \mathbf M^C_{\mathcal R}(\psi)$ given by setting $\mathcal G( \mathcal N):=\mathcal N\otimes C$, i.e. $\mathcal G(\mathcal N)(r):=\mathcal N(r)\otimes C$ for each $r\in \mathcal R$. The right $\mathcal R$-module structure on $\mathcal G(\mathcal N)$ is given by $(n\otimes c)\cdot f:=nf_\psi\otimes c^\psi$ for $f\in \mathcal R(r',r)$, $n\in \mathcal N(r)$ and $c\in C$. \smallskip We continue with $\mathscr X$ being a poset, $C$ being a right semiperfect coalgebra and $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ being an entwined $C$-representation. We denote by $\mathscr Lin$ the category of small $K$-linear categories. Then, for each $x\in \mathscr X$, we may replace the entwining structure $(\mathscr R_x,C,\psi_x)$ by the $K$-linear category $\mathscr R_x$ to obtain a functor that we continue to denote by $\mathscr R:\mathscr X\longrightarrow \mathscr Lin$. We consider modules over $\mathscr R:\mathscr X\longrightarrow \mathscr Lin$ in the sense of Estrada and Virili \cite[Definition 3.6]{EV} and denote their category by $Mod-\mathscr R$. Explicitly, an object $\mathscr N$ in $Mod-\mathscr R$ consists of a module $\mathscr N_x\in \mathbf M_{\mathscr R_x}$ for each $x\in \mathscr X$ as well as compatible morphisms $\mathscr N_\alpha:\mathscr N_x\longrightarrow \alpha_\ast\mathscr N_y$ (equivalently $\mathscr N^\alpha:\alpha^\ast\mathscr N_x\longrightarrow \mathscr N_y$) for each $\alpha:x\longrightarrow y$ in $\mathscr X$. The module $\mathscr N$ is said to be cartesian if each $\mathscr N^\alpha:\alpha^\ast\mathscr N_x\longrightarrow \mathscr N_y$ is an isomorphism. We denote by $Cart-\mathscr R$ the full subcategory of cartesian modules over $\mathscr R$. \smallskip For each $x\in \mathscr X$, we have a forgetful functor $\mathscr F_x:\mathbf M^C_{\mathscr R_x}(\psi_x)\longrightarrow \mathbf M_{\mathscr R_x}$ having right adjoint $\mathscr G_x: \mathbf M_{\mathscr R_x}\longrightarrow \mathbf M^C_{\mathscr R_x}(\psi_x)$.
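In particular, for $\mathcal N\in \mathbf M_{\mathscr R_x}$ we have $\mathscr G_x(\mathcal N)=\mathcal N\otimes C$, with the right $\mathscr R_x$-module structure recalled at the beginning of this section for the entwining structure $(\mathscr R_x,C,\psi_x)$, i.e.,
\begin{equation}
\mathscr G_x(\mathcal N)(r)=\mathcal N(r)\otimes C \qquad (n\otimes c)\cdot f=nf_{\psi_x}\otimes c^{\psi_x}
\end{equation}
for $r\in \mathscr R_x$, $f\in \mathscr R_x(r',r)$, $n\in \mathcal N(r)$ and $c\in C$.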
From the proofs of Propositions \ref{P2.2} and \ref{P2.3}, it is clear that we have commutative diagrams \begin{equation}\label{cd7.1} \begin{CD} \mathbf M^C_{\mathscr R_y}(\psi_y) @>\alpha_\ast >> \mathbf M^C_{\mathscr R_x}(\psi_x)\\ @V\mathscr F_yVV @VV\mathscr F_xV \\ \mathbf M_{\mathscr R_y}@>\alpha_\ast >> \mathbf M_{\mathscr R_x}\\ \end{CD} \qquad \begin{CD} \mathbf M^C_{\mathscr R_x}(\psi_x) @>\alpha^\ast >> \mathbf M^C_{\mathscr R_y}(\psi_y)\\ @V\mathscr F_xVV @VV\mathscr F_yV \\ \mathbf M_{\mathscr R_x}@>\alpha^\ast >> \mathbf M_{\mathscr R_y}\\ \end{CD}\qquad \begin{CD} \mathbf M_{\mathscr R_y} @>\alpha_\ast >> \mathbf M_{\mathscr R_x}\\ @V\mathscr G_yVV @VV\mathscr G_xV \\ \mathbf M_{\mathscr R_y}^C(\psi_y)@>\alpha_\ast >> \mathbf M^C_{\mathscr R_x}(\psi_x)\\ \end{CD} \end{equation} for each $\alpha:x\longrightarrow y$ in $\mathscr X$. \begin{thm} Let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. Then, the collection $\{\mathscr F_x:\mathbf M^C_{\mathscr R_x}(\psi_x)\longrightarrow \mathbf M_{\mathscr R_x}\}_{x\in \mathscr X}$ (resp. the collection $\{\mathscr G_x: \mathbf M_{\mathscr R_x}\longrightarrow \mathbf M^C_{\mathscr R_x}(\psi_x)\}_{x\in \mathscr X}$ ) together defines a functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ (resp. a functor $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$). \end{thm} \begin{proof} We consider $\mathscr M\in Mod^C-\mathscr R$ and set $\mathscr F(\mathscr M)_x:=\mathscr F_x(\mathscr M_x)\in \mathbf M_{\mathscr R_x}$. For a morphism $\alpha:x\longrightarrow y$, we obtain from \eqref{cd7.1} a morphism $\mathscr F(\mathscr M)_\alpha:=\mathscr F_x(\mathscr M_\alpha):\mathscr F_x(\mathscr M_x) \longrightarrow \mathscr F_x(\alpha_\ast\mathscr M_y)=\alpha_\ast\mathscr F_y(\mathscr M_y)$. This shows that $\mathscr F(\mathscr M)$ is an object of $Mod-\mathscr R$. Similarly, it follows from \eqref{cd7.1} that for any $\mathscr N\in Mod-\mathscr R$, we have $\mathscr G(\mathscr N)\in Mod^C-\mathscr R$ obtained by setting $\mathscr G(\mathscr N)_x:=\mathscr G_x(\mathscr N_x) =\mathscr N_x\otimes C$. \end{proof} \begin{thm}\label{P7.2} Let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. Then, the functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ has a right adjoint, given by $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$. \end{thm} \begin{proof} We consider $\mathscr M\in Mod^C-\mathscr R$ and $\mathscr N\in Mod-\mathscr R$ along with a morphism $\eta:\mathscr F(\mathscr M)\longrightarrow \mathscr N$ in $Mod-\mathscr R$. We will show how to construct a morphism $\zeta:\mathscr M\longrightarrow \mathscr G(\mathscr N)$ in $Mod^C-\mathscr R$ corresponding to $\eta$. \smallskip For each $x\in \mathscr X$, we consider $\eta_x:\mathscr F(\mathscr M)_x=\mathscr F_x(\mathscr M_x)\longrightarrow \mathscr N_x$ in $\mathbf M_{\mathscr R_x}$. By \cite[Lemma 3.1]{BBR}, we already know that $(\mathscr F_x,\mathscr G_x)$ is a pair of adjoint functors, which gives us $\mathbf M_{\mathscr R_x}(\mathscr F_x(\mathscr M_x),\mathscr N_x)\cong \mathbf M^C_{\mathscr R_x}( \mathscr M_x,\mathscr G_x(\mathscr N_x))$. Accordingly, we define $\zeta_x:\mathscr M_x\longrightarrow \mathscr G_x(\mathscr N_x)=\mathscr N_x\otimes C$ by setting $\zeta_x(m'):=\eta_x(r)(m'_0)\otimes m'_1$ for $m'\in \mathscr M_x(r)$, $r\in \mathscr R_x$. 
We now consider the diagrams \begin{equation}\label{cd7.2} \begin{CD} \mathscr F_x(\mathscr M_x) @>\eta_x>> \mathscr N_x \\ @V\mathscr F_x(\mathscr M_\alpha)VV @VV\mathscr N_\alpha V\\ \alpha_\ast\mathscr F_y(\mathscr M_y) @>\alpha_\ast(\eta_y)>> \alpha_\ast\mathscr N_y\\ \end{CD} \qquad \Rightarrow \qquad \begin{CD} \mathscr M_x @>\zeta_x>> \mathscr G_x(\mathscr N_x)\\ @V\mathscr M_\alpha VV @VV\mathscr G_x(\mathscr N_\alpha)V\\ \alpha_\ast\mathscr M_y @>\alpha_\ast(\zeta_y)>> \alpha_\ast \mathscr G_y(\mathscr N_y) \\ \end{CD} \end{equation} The left hand side diagram in \eqref{cd7.2} is commutative because $\eta:\mathscr F(\mathscr M)\longrightarrow \mathscr N$ is a morphism in $Mod-\mathscr R$. In order to prove that we have a morphism $\zeta:\mathscr M\longrightarrow \mathscr G(\mathscr N)$ in $Mod^C-\mathscr R$, it suffices to show that this implies the commutativity of the right hand side diagram in \eqref{cd7.2}. \smallskip We consider $m\in el(\mathscr M_x)$. Then, we have $\mathscr G_x(\mathscr N_\alpha)(\zeta_x(m))=\mathscr N_\alpha(\eta_x(m_0))\otimes m_1$. On the other hand, we have $\alpha_\ast(\zeta_y)(\mathscr M_\alpha(m))=\eta_y((\mathscr M_\alpha(m))_0)\otimes (\mathscr M_\alpha(m))_1$. Since $\mathscr M_\alpha$ is $C$-colinear, we have $(\mathscr M_\alpha(m))_0 \otimes (\mathscr M_\alpha(m))_1=\mathscr M_\alpha(m_0)\otimes m_1$. It follows that $\alpha_\ast(\zeta_y)(\mathscr M_\alpha(m))=\eta_y(\mathscr M_\alpha(m_0))\otimes m_1$. From the left hand side commutative diagram in \eqref{cd7.2}, we get $\eta_y(\mathscr M_\alpha(m_0))=\mathscr N_\alpha(\eta_x(m_0))$, which shows that the right hand diagram in \eqref{cd7.2} is commutative. \smallskip Similarly, we may show that a morphism $\zeta':\mathscr M\longrightarrow \mathscr G(\mathscr N)$ in $Mod^C-\mathscr R$ induces a morphism $\eta':\mathscr F(\mathscr M)\longrightarrow \mathscr N$ in $Mod-\mathscr R$ and that these two associations are inverse to each other. This proves the result. \end{proof} We now recall that a functor $F:\mathcal A\longrightarrow \mathcal B$ is said to be separable if the natural transformation $\mathcal A(\_\_,\_\_)\longrightarrow \mathcal B(F(\_\_),F(\_\_))$ is a split monomorphism (see \cite{NBO}, \cite{Raf}). If $F$ has a right adjoint $G:\mathcal B\longrightarrow \mathcal A$, then $F$ is separable if and only if there exists a natural transformation $\upsilon \in Nat(GF,1_{\mathcal A})$ satisfying $\upsilon\circ \mu=1_{\mathcal A}$, where $\mu$ is the unit of the adjunction (see \cite[Theorem 1.2]{Raf}). \smallskip We now consider the forgetful functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ as well as its right adjoint $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ constructed in Proposition \ref{P7.2}. We will need an alternate description for the natural transformations $\mathscr G\mathscr F\longrightarrow 1_{Mod^C-\mathscr R}$. 
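Before doing so, we record for later use an explicit description of the unit $\mu$ of the adjunction $(\mathscr F,\mathscr G)$, which may be read off from the proof of Proposition \ref{P7.2}: for any $\mathscr M\in Mod^C-\mathscr R$, $x\in \mathscr X$ and $r\in \mathscr R_x$, we have
\begin{equation}
\mu(\mathscr M)_x(r):\mathscr M_x(r)\longrightarrow (\mathscr G\mathscr F(\mathscr M))_x(r)=\mathscr M_x(r)\otimes C \qquad m\mapsto m_0\otimes m_1
\end{equation}
i.e., $\mu$ is induced by the $C$-coactions. This description will be used in the proof of Theorem \ref{T7.7}.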
\begin{thm}\label{P7.25} A natural transformation $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ corresponds to a collection of natural transformations $\{\upsilon_x\in Nat(\mathscr G_x\mathscr F_x,1_{\mathbf M^C_{\mathscr R_x}(\psi_x)})\}_{x\in \mathscr X}$ such that for any $\alpha:x\longrightarrow y$ in $\mathscr X$ and object $\mathscr M\in Mod^C-\mathscr R$, we have a commutative diagram \begin{equation}\label{cd7.3} \begin{CD} \mathscr G_x\mathscr F_x(\mathscr M_x) @>\upsilon_x(\mathscr M_x)>> \mathscr M_x\\ @V\mathscr G_x\mathscr F_x(\mathscr M_\alpha) VV @VV\mathscr M_\alpha V \\ \alpha_\ast\mathscr G_y\mathscr F_y(\mathscr M_y) @>\alpha_\ast\upsilon_y(\mathscr M_y)>> \alpha_\ast\mathscr M_y\\ \end{CD} \end{equation} in $\mathbf M^C_{\mathscr R_x}(\psi_x)$. \end{thm} \begin{proof} We consider $\upsilon \in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$. For $x\in \mathscr X$, we define the natural transformation $\upsilon_x\in Nat(\mathscr G_x\mathscr F_x,1_{\mathbf M^C_{\mathscr R_x}(\psi_x)})$ by setting \begin{equation}\label{eq7.35b} \upsilon_x(\mathcal M):=\upsilon(ex^C_x(\mathcal M))_x:\mathscr G_x\mathscr F_x(\mathcal M)=\mathscr G_x\mathscr F_x((ex^C_x(\mathcal M))_x)\longrightarrow (ex_x^C(\mathcal M))_x=\mathcal M\end{equation} for $\mathcal M\in \mathbf M^C_{\mathscr R_x}(\psi_x)$. We now consider $\mathscr M\in Mod^C-\mathscr R$. For $\alpha:x\longrightarrow y$ in $\mathscr X$, the morphism $\upsilon(\mathscr M):\mathscr G\mathscr F(\mathscr M)\longrightarrow \mathscr M$ in $Mod^C-\mathscr R$ leads to a commutative diagram \begin{equation}\label{7.36b} \begin{CD} (\mathscr G\mathscr F(\mathscr M))_x = \mathscr G_x\mathscr F_x(\mathscr M_x) @>\upsilon(\mathscr M)_x>> \mathscr M_x \\ @V\mathscr G_x\mathscr F_x(\mathscr M_\alpha) VV @VV\mathscr M_\alpha V \\ \alpha_\ast(\mathscr G\mathscr F(\mathscr M))_y=\alpha_\ast\mathscr G_y\mathscr F_y(\mathscr M_y) @> \alpha_\ast(\upsilon(\mathscr M)_y)>> \alpha_\ast\mathscr M_y\\ \end{CD} \end{equation} We now claim that $\upsilon(\mathscr M)_x=(\upsilon(ex^C_x(\mathscr M_x)))_x=\upsilon_x(\mathscr M_x)$ for each $x\in \mathscr X$. For this, we consider the canonical morphism $\zeta: ex_x^C(\mathscr M_x)=ex_x^C(ev_x^C(\mathscr M))\longrightarrow \mathscr M$ in $Mod^C-\mathscr R$ corresponding to the adjoint pair $(ex_x^C,ev_x^C)$ in Proposition \ref{P5.3}. It is clear that $ev_x^C(\zeta)=id$. Then, we have commutative diagrams \begin{equation}\label{7.37b} \begin{array}{ccc} \begin{CD} \mathscr G\mathscr F(ex^C_x(\mathscr M_x)) @>\upsilon(ex^C_x(\mathscr M_x))>> ex_x^C(\mathscr M_x)\\ @V\mathscr G\mathscr F(\zeta)VV @VV\zeta V\\ \mathscr G\mathscr F(\mathscr M) @>\upsilon(\mathscr M)>> \mathscr M \\ \end{CD} & \qquad \Rightarrow \qquad & \begin{CD} \mathscr G_x\mathscr F_x(\mathscr M_x) @>(\upsilon(ex^C_x(\mathscr M_x)))_x>> \mathscr M_x\\ @Vid VV @VVid V\\ \mathscr G_x\mathscr F_x(\mathscr M_x) @>\upsilon(\mathscr M)_x>> \mathscr M_x \\ \end{CD} \\ \end{array} \end{equation} This proves that $\upsilon(\mathscr M)_x=(\upsilon(ex^C_x(\mathscr M_x)))_x=\upsilon_x(\mathscr M_x)$ for each $x\in \mathscr X$. The commutativity of the diagram \eqref{cd7.3} now follows from \eqref{7.36b}. 
\smallskip Conversely, given a collection of natural transformations $\{\upsilon_x\in Nat(\mathscr G_x\mathscr F_x,1_{\mathbf M^C_{\mathscr R_x}(\psi_x)})\}_{x\in \mathscr X}$ satisfying \eqref{cd7.3} for each $\mathscr M \in Mod^C-\mathscr R$, we get $\upsilon(\mathscr M):\mathscr G\mathscr F(\mathscr M)\longrightarrow \mathscr M$ in $Mod^C-\mathscr R$ by setting $\upsilon(\mathscr M)_x= \upsilon_x(\mathscr M_x)$ for each $x\in \mathscr X$. From \eqref{cd7.3}, it is clear that $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$. \end{proof} More explicitly, the diagram in \eqref{cd7.3} shows that for each $\alpha:x\longrightarrow y$ in $\mathscr X$ and $r\in \mathscr R_x$, we have a commutative diagram \begin{equation}\label{cd7.4} \begin{CD} \mathscr M_x(r)\otimes C=(\mathscr G_x\mathscr F_x(\mathscr M_x))(r) @>(\upsilon_x(\mathscr M_x))(r)>> \mathscr M_x(r)\\ @V(\mathscr G_x\mathscr F_x(\mathscr M_\alpha))(r) VV @VV\mathscr M_\alpha(r) V \\ \mathscr M_y(\alpha(r))\otimes C=(\mathscr G_y\mathscr F_y(\mathscr M_y))(\alpha(r))=(\alpha_\ast\mathscr G_y\mathscr F_y(\mathscr M_y))(r) @>(\alpha_\ast\upsilon_y(\mathscr M_y))(r)>=(\upsilon_y(\mathscr M_y))(\alpha(r))> (\alpha_\ast\mathscr M_y)(r)=\mathscr M_y(\alpha(r))\\ \end{CD} \end{equation} We note that all morphisms in \eqref{cd7.4} are $C$-colinear. We now give another interpretation of the space $ Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$. For this, we consider a collection $\theta:=\{\theta_x(r):C\otimes C\longrightarrow \mathscr R_x(r,r)\}_{x\in \mathscr X,r\in \mathscr R_x}$ of $K$-linear maps satisfying the following conditions. \smallskip (1) Fix $x\in \mathscr X$ and $r\in \mathscr R_x$. Then, for $c$, $d\in C$, we have \begin{equation}\label{theta1} \theta_x(r)(c\otimes d_1)\otimes d_2=(\theta_x(r)(c_2\otimes d))_{\psi_x}\otimes {c_1}^{\psi_x} \end{equation} (2) Fix $x\in \mathscr X$ and $c$, $d\in C$. Then, for $f:s\longrightarrow r$ in $\mathscr R_x$, we have \begin{equation}\label{theta2} (\theta_x(r)(c\otimes d))\circ f=f_{{\psi_x}_{\psi_x}}\circ (\theta_x(s)(c^{\psi_x}\otimes d^{\psi_x})) \end{equation} (3) Fix $c$, $d\in C$. Then, for any $\alpha:x\longrightarrow y$ in $\mathscr X$ and $r\in \mathscr R_x$, we have \begin{equation}\label{theta3} \alpha(\theta_x(r)(c\otimes d))=\theta_y(\alpha(r))(c\otimes d) \end{equation} The space of all such $\theta$ will be denoted by $V_1$. \begin{thm}\label{P7.3} Let $\theta\in V_1$. Then, $\theta$ induces a natural transformation $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$, such that for each $x\in \mathscr X$, $\upsilon_x\in Nat(\mathscr G_x\mathscr F_x,1_{\mathbf M^C_{\mathscr R_x}(\psi_x)})$ is given by \begin{equation}\label{eq7.8} \upsilon_x(\mathcal M):\mathcal M\otimes C\longrightarrow \mathcal M\qquad (m\otimes c)\mapsto \mathcal M(\theta_x(r)(m_1\otimes c))(m_0) \end{equation} for any $\mathcal M\in \mathbf M^C_{\mathscr R_x}(\psi_x)$, $r\in \mathscr R_x$, $m\in \mathcal M(r)$ and $c\in C$. \end{thm} \begin{proof} From \cite[Proposition 3.6]{BBR}, it follows that each $\upsilon_x$ as defined in \eqref{eq7.8} by the collection $\theta_x:= \{\theta_x(r):C\otimes C\longrightarrow \mathscr R_x(r,r)\}_{r\in \mathscr R_x}$ gives a natural transformation $\upsilon_x\in Nat(\mathscr G_x\mathscr F_x,1_{\mathbf M^C_{\mathscr R_x}(\psi_x)})$. To prove the result, it therefore suffices to show the commutativity of the diagram \eqref{cd7.4} for any $\mathscr M\in Mod^C-\mathscr R$. 
Accordingly, for $\alpha: x\longrightarrow y$ in $\mathscr X$ and $r\in \mathscr R_x$, we have \begin{equation}\label{eq7.9} ((\mathscr M_\alpha(r))\circ (\upsilon_x(\mathscr M_x))(r))(m\otimes c)=(\mathscr M_\alpha(r))(\mathscr M_x(\theta_x(r)(m_1\otimes c))(m_0)) \end{equation} for $m\otimes c\in \mathscr M_x(r)\otimes C$. On the other hand, we have \begin{equation}\label{eq7.10} \begin{array}{ll} (((\upsilon_y(\mathscr M_y))(\alpha(r)))\circ ((\mathscr G_x\mathscr F_x(\mathscr M_\alpha))(r)))(m\otimes c)&=\mathscr M_y(\theta_y(\alpha(r))(\mathscr M_\alpha(m)_1\otimes c ))(\mathscr M_\alpha(r)(m))_0\\ &=\mathscr M_y(\theta_y(\alpha(r))(m_1\otimes c))(\mathscr M_\alpha(r)(m_0))\\ &=\mathscr M_y(\alpha(\theta_x(r)(m_1\otimes c)))(\mathscr M_\alpha(r)(m_0))\\ \end{array} \end{equation} The second equality in \eqref{eq7.10} follows from the $C$-colinearity of $\mathscr M_\alpha(r)$ and the third equality follows by applying condition \eqref{theta3}. We now notice that for any $f\in \mathscr R_x(r,r)$, we have a commutative diagram \begin{equation}\label{cd7.11} \begin{CD} \mathscr M_x(r) @>\mathscr M_\alpha(r)>> \mathscr M_y(\alpha(r))\\ @V\mathscr M_x(f)VV @VV\mathscr M_y(\alpha(f))V \\ \mathscr M_x(r) @>\mathscr M_\alpha(r)>> \mathscr M_y(\alpha(r))\\ \end{CD} \end{equation} Applying \eqref{cd7.11} to $f=\theta_x(r)(m_1\otimes c)\in \mathscr R_x(r,r)$, we obtain from \eqref{eq7.10} that \begin{equation}\label{eq7.12} (((\upsilon_y(\mathscr M_y))(\alpha(r)))\circ ((\mathscr G_x\mathscr F_x(\mathscr M_\alpha))(r)))(m\otimes c)=(\mathscr M_\alpha(r))(\mathscr M_x(\theta_x(r)(m_1\otimes c))(m_0)) \end{equation} This proves the result. \end{proof} Fix $x\in \mathscr X$ and $r\in \mathscr R_x$. We now set \begin{equation} \mathscr H_y^{(x,r)}:=\left\{\begin{array}{ll} \mathscr R_y(\_\_,\alpha(r)) \otimes C & \mbox{if $\alpha:x\longrightarrow y$} \\ 0 & \mbox{if $x\not\leq y$}\\ \end{array}\right. \end{equation} for each $y\in \mathscr X$. \begin{lem}\label{L7.4} For each $x\in \mathscr X$ and $r\in \mathscr R_x$, the collection $\mathscr H^{(x,r)}:=\{\mathscr H_y^{(x,r)}\}_{y\in \mathscr X}$ determines an object of $Mod^C-\mathscr R$. \end{lem} \begin{proof} For each $y\in \mathscr X$, it follows by \cite[Lemma 2.4]{BBR} that $\mathscr H_y^{(x,r)}$ is an object of $\mathbf M^C_{\mathscr R_y}(\psi_y)$. We consider $\beta:y\longrightarrow z$ in $\mathscr X$ and suppose we have $\alpha:x\longrightarrow y$, i.e., $x\leq y$. Then, for $r'\in \mathscr R_y$, we have an obvious morphism \begin{equation}\beta(\_\_)\otimes C: \mathscr H_y^{(x,r)}(r')= \mathscr R_y(r',\alpha(r))\otimes C\longrightarrow \beta_\ast(\mathscr R_z(\_\_,\beta\alpha(r))\otimes C)(r')=\mathscr R_z(\beta(r'),\beta\alpha(r))\otimes C \end{equation} which is $C$-colinear. 
To prove that $\mathscr H_y^{(x,r)}\longrightarrow \beta_\ast\mathscr H_z^{(x,r)}$ is a morphism in $\mathbf M^C_{\mathscr R_y}( \psi_y)$, it remains to show that for any $g:r''\longrightarrow r'$ in $\mathscr R_y$, the following diagram commutes \begin{equation}\label{cd7.15} \begin{CD} \mathscr R_y(r',\alpha(r))\otimes C @>\cdot g>> \mathscr R_y(r'',\alpha(r))\otimes C\\ @V\beta(\_\_)\otimes CVV @VV\beta(\_\_)\otimes CV \\ \mathscr R_z(\beta(r'),\beta\alpha(r))\otimes C @>\cdot \beta(g)>>\mathscr R_z(\beta(r''),\beta\alpha(r))\otimes C \\ \end{CD} \end{equation} For $f\otimes c\in \mathscr R_y(r',\alpha(r))\otimes C$, we have \begin{equation*} (\beta(\_\_)\otimes C)((f\otimes c)\cdot g)=(\beta(\_\_)\otimes C)(fg_{\psi_y}\otimes c^{\psi_y})=\beta(f)\beta(g_{\psi_y})\otimes c^{\psi_y}=\beta(f)\beta(g)_{\psi_z} \otimes c^{\psi_z}=(\beta(f)\otimes c)\cdot \beta(g) \end{equation*} This shows that \eqref{cd7.15} is commutative. Finally, if $x\not\leq y$, then $0=\mathscr H_y^{(x,r)}\longrightarrow \beta_\ast\mathscr H_z^{(x,r)}$ is obviously a morphism in $\mathbf M^C_{\mathscr R_y}( \psi_y)$. This proves the result. \end{proof} \begin{thm}\label{P7.5} Let $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$. For each $x\in \mathscr X$ and $r\in \mathscr R_x$, define $\theta_x(r): C\otimes C\longrightarrow \mathscr R_x(r,r)$ by setting \begin{equation}\label{eq7.16} \theta_x(r)(c\otimes d):=((id\otimes \varepsilon_C)\circ (\upsilon_x(\mathscr H^{(x,r)}_x)(r)))(id_r\otimes c\otimes d) \end{equation} for $c$, $d\in C$. Then, the collection $\theta:=\{\theta_x(r):C\otimes C\longrightarrow \mathscr R_x(r,r)\}_{x\in \mathscr X,r\in \mathscr R_x}$ is an element of $V_1$. \end{thm} \begin{proof} From the definition in \eqref{eq7.16}, we have explicitly that \begin{equation}\label{eq7.17} \theta_x(r)(c\otimes d)=((id\otimes \varepsilon_C)\circ (\upsilon_x(\mathscr R_x(\_\_,r)\otimes C)(r)))(id_r\otimes c\otimes d) \end{equation} Then, it follows from \cite[Proposition 3.5]{BBR} that $\theta_x(r)$ satisfies the conditions in \eqref{theta1} and \eqref{theta2}. It remains to verify the condition \eqref{theta3}. For this we take $\alpha:x\longrightarrow y$ in $\mathscr X$ and consider the commutative diagram \begin{equation}\label{cd7.18} \begin{CD} \mathscr R_x(r',r)\otimes C\otimes C @>\upsilon_x(\mathscr H^{(x,r)}_x)(r')>> \mathscr R_x(r',r)\otimes C @>id\otimes \varepsilon_C>> \mathscr R_x(r',r)\\ @V\alpha(\_\_)\otimes C\otimes CVV @VV\alpha(\_\_)\otimes CV @VV\alpha(\_\_)V\\ \mathscr R_y(\alpha(r'),\alpha(r))\otimes C\otimes C @>\upsilon_y(\mathscr H^{(x,r)}_y)(\alpha(r'))>>\mathscr R_y(\alpha(r'),\alpha(r))\otimes C @>id\otimes \varepsilon_C>>\mathscr R_y(\alpha(r'),\alpha(r))\\ \end{CD} \end{equation} for any $r,r'\in \mathscr R_x$. Since $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$, the commutativity of the left hand side square in \eqref{cd7.18} follows from \eqref{cd7.4}. It is clear that the right hand square in \eqref{cd7.18} is commutative. \smallskip We notice that $\mathscr H_y^{(y,\alpha(r))}=\mathscr H_y^{(x,r)}$ in $\mathbf M_{\mathscr R_y}^C(\psi_y)$. Applying \eqref{cd7.18} with $r'=r\in\mathscr R_x$ and $id_r\otimes c\otimes d\in \mathscr R_x(r,r)\otimes C\otimes C$, it follows from \eqref{eq7.17} that $\alpha(\theta_x(r)(c\otimes d))=\theta_y(\alpha(r))(c\otimes d)$. This proves \eqref{theta3}. \end{proof} \begin{thm}\label{P7.6} $Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ is isomorphic to $V_1$. 
\end{thm} \begin{proof} From Proposition \ref{P7.3} and Proposition \ref{P7.5}, we see that we have maps $\psi:V_1\longrightarrow Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ and $\phi:Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})\longrightarrow V_1$ in opposite directions. \smallskip We consider $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$. By Proposition \ref{P7.5}, $\upsilon$ induces an element $\theta\in V_1$. Applying Proposition \ref{P7.3}, $\theta$ induces an element in $Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$, which we denote by $\upsilon'$. Then, $\upsilon$ and $\upsilon'$ are determined respectively by natural transformations $\{\upsilon_x\in Nat(\mathscr G_x\mathscr F_x,1_{\mathbf M^C_{\mathscr R_x}(\psi_x)})\}_{x\in \mathscr X}$ and $\{\upsilon'_x\in Nat(\mathscr G_x\mathscr F_x,1_{\mathbf M^C_{\mathscr R_x}(\psi_x)})\}_{x\in \mathscr X}$ satisfying compatibility conditions as in \eqref{cd7.3}. From \cite[Proposition 3.7]{BBR}, it follows that $\upsilon'_x=\upsilon_x$ for each $x\in \mathscr X$. Hence, $\upsilon'=\upsilon$ and $\psi\circ \phi=id$. Similarly, we can show that $\phi\circ \psi=id$. \end{proof} \begin{Thm}\label{T7.7} Let $\mathscr X$ be a partially ordered set. Let $C$ be a right semiperfect $K$-coalgebra and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. Then, the functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ is separable if and only if there exists $\theta\in V_1$ such that \begin{equation}\label{eq7.19d} \theta_x(r)(c_1\otimes c_2)=\varepsilon_C(c)\cdot id_r \end{equation} for every $x\in \mathscr X$, $r\in\mathscr R_x$ and $c\in C$. \end{Thm} \begin{proof} We suppose that $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ is separable. As mentioned before, this implies that there exists $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ such that $\upsilon\circ \mu=1_{Mod^C-\mathscr R}$, where $\mu$ is the unit of the adjunction $(\mathscr F,\mathscr G)$. We set $\theta=\phi(\upsilon)$, where $\phi:Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})\longrightarrow V_1$ is the isomorphism described in the proof of Proposition \ref{P7.6}. In particular, for every $x\in \mathscr X$, $r\in\mathscr R_x$, we have $\upsilon(\mathscr H^{(x,r)})\circ \mu(\mathscr H^{(x,r)})=id$. From \eqref{eq7.16}, it now follows that for every $c\in C$, we have \begin{equation} \begin{array}{ll} \theta_x(r)(c_1\otimes c_2)&=((id\otimes \varepsilon_C)\circ (\upsilon_x(\mathscr H^{(x,r)}_x)(r)))(id_r\otimes c_1\otimes c_2)\\ &=((id\otimes \varepsilon_C)\circ (\upsilon_x(\mathscr H^{(x,r)}_x)(r))\circ \mu(\mathscr H^{(x,r)})_x(r))(id_r\otimes c)\\ &=(id\otimes \varepsilon_C)(id_r\otimes c)=\varepsilon_C(c)\cdot id_r\\ \end{array} \end{equation} Conversely, suppose that there exists $\theta\in V_1$ satisfying the condition in \eqref{eq7.19d}. We set $\upsilon:=\psi(\theta)$, where $\psi:V_1\longrightarrow Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ is the other isomorphism described in the proof of Proposition \ref{P7.6}. We consider $\mathscr M\in Mod^C-\mathscr R$. By \eqref{eq7.8}, we know that \begin{equation}\label{eq7.21} \upsilon_x(\mathscr M_x):\mathscr M_x\otimes C\longrightarrow \mathscr M_x\qquad (m\otimes c)\mapsto \mathscr M_x(\theta_x(r)(m_1\otimes c))(m_0) \end{equation} for any $x\in \mathscr X$, $r\in \mathscr R_x$, $m\in \mathscr M_x(r)$ and $c\in C$. We claim that $\upsilon\circ \mu=1_{Mod^C-\mathscr R}$. 
For this, we see that \begin{equation} \begin{array}{ll} ((\upsilon(\mathscr M)\circ \mu(\mathscr M))_x(r))(m)&=(\upsilon_x(\mathscr M_x)(r))(m_0\otimes m_1) \\ &= \mathscr M_x(\theta_x(r)(m_{01}\otimes m_1))(m_{00})\\ &= \mathscr M_x(\theta_x(r)(m_{11}\otimes m_{12}))(m_{0})\\ &= \varepsilon_C(m_1)m_0=m\\ \end{array} \end{equation} Here, the third equality follows from the coassociativity of the $C$-coaction on $\mathscr M_x$, and the final two equalities follow from \eqref{eq7.19d} and the counit property respectively. This proves the result. \end{proof} We now turn to cartesian modules over entwined $C$-representations. For this, we assume additionally that $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is flat. Then, it follows from Theorem \ref{T6.10} that $Cart^C-\mathscr R$ is a Grothendieck category. In particular, by taking $C=K$, we note that $Cart-\mathscr R$ is also a Grothendieck category. \begin{thm}\label{P7.8} Let $\mathscr X$ be a poset, $C$ be a right semiperfect $K$-coalgebra and $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation that is also flat. Then, the functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ restricts to a functor $\mathscr F^c:Cart^C-\mathscr R\longrightarrow Cart-\mathscr R$. Additionally, $\mathscr F^c$ has a right adjoint $\mathscr G^c:Cart-\mathscr R\longrightarrow Cart^C-\mathscr R$. \end{thm} \begin{proof} We consider $\mathscr M\in Cart^C-\mathscr R$. We claim that $\mathscr F(\mathscr M)\in Mod-\mathscr R$ actually lies in the subcategory $Cart-\mathscr R$. Indeed, for $\alpha:x\longrightarrow y$ in $\mathscr X$, we have $\mathscr F(\mathscr M)_\alpha:\mathscr F_x(\mathscr M_x)=\mathscr M_x\longrightarrow\alpha_\ast\mathscr M_y=\alpha_\ast \mathscr F_y(\mathscr M_y)$ in $\mathbf M_{\mathscr R_x}$. By adjunction, this corresponds to a morphism $\alpha^\ast\mathscr M_x\longrightarrow \mathscr M_y$ in $\mathbf M_{\mathscr R_y}$. But since $\mathscr M\in Cart^C-\mathscr R$, we already know that $\alpha^\ast\mathscr M_x\longrightarrow \mathscr M_y$ is an isomorphism. Hence, $\mathscr F^c(\mathscr M):= \mathscr F(\mathscr M)\in Cart-\mathscr R$. \smallskip We also notice that $Cart^C-\mathscr R$ and $Cart-\mathscr R$ are closed under taking colimits in $Mod^C-\mathscr R$ and $Mod-\mathscr R$ respectively. Since the left adjoint $\mathscr F$ preserves colimits, it follows that $\mathscr F^c:Cart^C-\mathscr R\longrightarrow Cart-\mathscr R$ preserves colimits, and we know from Theorem \ref{T6.10} that both $Cart^C-\mathscr R$ and $Cart-\mathscr R$ are Grothendieck categories. It now follows from \cite[Proposition 8.3.27]{KSch} that $\mathscr F^c$ has a right adjoint. \end{proof} \begin{thm}\label{P7.9} Let $\mathscr X$ be a poset, $C$ be a right semiperfect $K$-coalgebra and $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation that is also flat. Suppose there exists $\theta\in V_1$ such that \begin{equation}\label{eq7.19} \theta_x(r)(c_1\otimes c_2)=\varepsilon_C(c)\cdot id_r \end{equation} for every $x\in \mathscr X$, $r\in\mathscr R_x$ and $c\in C$. Then, $\mathscr F^c:Cart^C-\mathscr R\longrightarrow Cart-\mathscr R$ is separable. \end{thm} \begin{proof} From Theorem \ref{T7.7}, it follows that $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ is separable. In other words, for any $\mathscr M$, $\mathscr N\in Mod^C-\mathscr R$, the canonical morphism $Mod^C-\mathscr R(\mathscr M,\mathscr N)\longrightarrow Mod-\mathscr R(\mathscr F(\mathscr M),\mathscr F(\mathscr N))$ is a split monomorphism. Since $Cart^C-\mathscr R$ and $Cart-\mathscr R$ are full subcategories of $Mod^C-\mathscr R$ and $Mod-\mathscr R$ respectively and $\mathscr F^c$ is a restriction of $\mathscr F$, the result follows.
\end{proof} \section{Separability of the functor $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$} We continue with $\mathscr X$ being a poset, $C$ being a right semiperfect coalgebra and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. In this section, we will give conditions for the right adjoint $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ to be separable. \smallskip Putting $C=K$ in Proposition \ref{P5.3}, we see that for each $x\in \mathscr X$, there is a functor $ex_x:\mathbf M_{\mathscr R_x}\longrightarrow Mod-\mathscr R$ having right adjoint $ev_x:Mod-\mathscr R\longrightarrow \mathbf M_{\mathscr R_x}$. In a manner similar to Proposition \ref{P7.25}, we now can show that a natural transformation $\omega\in Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$ consists of a collection of natural transformations $\{\omega_x\in Nat(1_{\mathbf M_{\mathscr R_x}},\mathscr F_x\mathscr G_x)\}_{x\in \mathscr X}$ such that for any $\alpha:x\longrightarrow y$ in $\mathscr X$ and any $\mathscr N\in Mod-\mathscr R$, we have the following commutative diagram \begin{equation}\label{eq8.1} \begin{CD} \mathscr N_x @>\omega_x(\mathscr N_x)>> \mathscr F_x\mathscr G_x(\mathscr N_x)\\ @V\mathscr N_\alpha VV @VV\mathscr F_x\mathscr G_x(\mathscr N_\alpha)V \\ \alpha_\ast\mathscr N_y @>\alpha_\ast\omega_y(\mathscr N_y)>> \alpha_\ast\mathscr F_y\mathscr G_y(\mathscr N_y)\\ \end{CD} \end{equation} Here, $\omega_x\in Nat(1_{\mathbf M_{\mathscr R_x}},\mathscr F_x\mathscr G_x)$ is determined by setting \begin{equation}\label{Yeq7.35b} \omega_x(\mathcal N):=\omega(ex_x(\mathcal N))_x: (ex_x(\mathcal N))_x=\mathcal N\longrightarrow \mathscr F_x\mathscr G_x(\mathcal N)=\mathscr F_x\mathscr G_x((ex_x(\mathcal N))_x)\end{equation} for $\mathcal N\in \mathbf M_{\mathscr R_x}$. As in the proof of Proposition \ref{P7.25}, we can also show that \begin{equation}\label{Ye8.15} \omega_x(\mathscr N_x)=\omega(ex_x(\mathscr N_x))_x=\omega(\mathscr N)_x \end{equation} for any $\mathscr N\in Mod-\mathscr R$ and $x\in \mathscr X$. More explicitly, for each $x\in \mathscr X$ and $r\in \mathscr R_x$, we have a commutative diagram \begin{equation}\label{eq8.2} \begin{CD} \mathscr N_x(r) @>(\omega_x(\mathscr N_x))(r)>> (\mathscr F_x\mathscr G_x(\mathscr N_x))(r)=\mathscr N_x(r)\otimes C\\ @V\mathscr N_\alpha(r)VV @VV(\mathscr F_x\mathscr G_x(\mathscr N_\alpha))(r)V\\ \mathscr N_y(\alpha(r))=(\alpha_\ast\mathscr N_y)(r)@>(\alpha_\ast\omega_y(\mathscr N_y))(r)>=\omega_y(\mathscr N_y)(\alpha(r))> ( \alpha_\ast\mathscr F_y\mathscr G_y(\mathscr N_y))(r)=\mathscr N_y(\alpha(r))\otimes C \\ \end{CD} \end{equation} We will now give another interpretation for the space $Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$. For this, we consider a collection $\eta=\{\eta_x(s,r):H_r^x(s)=\mathscr R_x(s,r)\longrightarrow H_r^x(s)\otimes C=\mathscr R_x(s,r)\otimes C:f\mapsto \hat{f}\otimes c_f\}_{x\in \mathscr X,r,s\in \mathscr R_x}$ of $K$-linear maps satisfying the following conditions: \smallskip (1) Fix $x\in \mathscr X$. 
Then, for $s' \xrightarrow{h} s \xrightarrow{f} r \xrightarrow{g} r'$ in $\mathscr R_x$, we have \begin{equation}\label{8.3eta} \eta_x(s',r')(gfh)=\sum \widehat{gfh}\otimes c_{gfh}=g\hat{f}h_{\psi_x}\otimes c_f^{\psi_x}\in \mathscr R_x(s',r')\otimes C \end{equation} (2) For $\alpha:x\longrightarrow y$ in $\mathscr X$ and $f\in \mathscr R_x(s,r)$ we have \begin{equation}\label{8.4eta} \alpha(\hat{f})\otimes c_f=\widehat{\alpha(f)}\otimes c_{\alpha(f)} \in \mathscr R_y(\alpha(s),\alpha(r))\otimes C \end{equation} The space of all such $\eta$ will be denoted by $W_1$. We note that condition (1) is equivalent to saying that for each $x\in \mathscr X$, the element $\eta_x=\{\eta_x(s,r):\mathscr R_x(s,r)\longrightarrow \mathscr R_x(s,r)\otimes C:f\mapsto \hat{f}\otimes c_f\}_{r,s\in \mathscr R_x}\in Nat(H^x,H^x\otimes C)$, i.e., $\eta_x$ is a morphism in the category of $\mathscr R_x$-bimodules (functors $\mathscr R_x^{op} \otimes \mathscr R_x\longrightarrow Vect_K$). Here $H^x$ is the canonical $\mathscr R_x$-bimodule that takes a pair of objects $(s,r)\in Ob(\mathscr R_x^{op} \otimes \mathscr R_x)$ to $\mathscr R_x(s,r)$. Further, $H^x\otimes C$ is the $\mathscr R_x$-bimodule defined by setting \begin{equation} (H^x\otimes C)(s,r)=\mathscr R_x(s,r)\otimes C\qquad (H^x\otimes C)(h,g)(f\otimes c)=gfh_{\psi_x}\otimes c^{\psi_x} \end{equation} for $s' \xrightarrow{h} s \xrightarrow{f} r \xrightarrow{g} r'$ in $\mathscr R_x$ and $c\in C$. \begin{lem}\label{Lem8.1} There is a canonical morphism $Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)\longrightarrow W_1$. \end{lem} \begin{proof} As mentioned above, any $\omega\in Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$ corresponds to a collection of natural transformations $\{\omega_x\in Nat(1_{\mathbf M_{\mathscr R_x}},\mathscr F_x\mathscr G_x)\}_{x\in \mathscr X}$ satisfying \eqref{eq8.1}. From the proof of \cite[Proposition 3.10]{BBR}, we know that each $\omega_x\in Nat(1_{\mathbf M_{\mathscr R_x}},\mathscr F_x\mathscr G_x)$ corresponds to $\eta_x\in Nat(H^x,H^x\otimes C)$ determined by setting \begin{equation}\label{eqr8.6} \eta_x(s,r):H_r^x(s)=\mathscr R_x(s,r)\longrightarrow H_r^x(s)\otimes C=\mathscr R_x(s,r)\otimes C \qquad \eta_x(s,r):=\omega_x(H^x_r)(s) \end{equation} for $r$, $s\in \mathscr R_x$. Here, $H^x_r$ is the right $\mathscr R_x$-module $H_r^x:=\mathscr R_x(\_\_,r):\mathscr R_x^{op}\longrightarrow Vect_K$. We now consider $\alpha:x\longrightarrow y$ in $\mathscr X$ and some $f\in \mathscr R_x(s,r)$. By applying Lemma \ref{L5.1} with $C=K$, we have $ex_x(H^x_r)\in Mod-\mathscr R$ which satisfies $(ex_x(H^x_r))_y=\alpha^\ast H^x_r=H^y_{\alpha(r)}$. Setting $\mathscr N=ex_x(H^x_r)$ in \eqref{eq8.2}, we have \begin{equation}\label{eq8.7} \begin{CD} \mathscr N_x(s)=H^x_r(s) @>(\omega_x(H^x_r))(s)>=\eta_x(s,r)> (\mathscr F_x\mathscr G_x(\mathscr N_x))(s)=H^x_r(s)\otimes C\\ @V\mathscr N_\alpha(s)VV @VV(\mathscr F_x\mathscr G_x(\mathscr N_\alpha))(s)V\\ \mathscr N_y(\alpha(s))=H^y_{\alpha(r)}(\alpha(s))@>\eta_y(\alpha(s),\alpha(r))>=\omega_y(H^y_{\alpha(r)})(\alpha(s))>\mathscr N_y(\alpha(s))\otimes C =H^y_{\alpha(r)}(\alpha(s)) \otimes C\\ \end{CD} \end{equation} It follows that the collection $\{\eta_x(s,r)\}$ satisfies condition \eqref{8.4eta}. This proves the result. \end{proof} \begin{thm}\label{Pro8.2} The spaces $Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$ and $ W_1$ are isomorphic. \end{thm} \begin{proof} We consider an element $\eta\in W_1$.
As mentioned before, this gives a collection $\{\eta_x\in Nat(H^x,H^x\otimes C)\}_{x\in\mathscr X}$ satisfying the compatibility condition in \eqref{8.4eta}. From the proof of \cite[Proposition 3.10]{BBR}, it follows that each $\eta_x$ corresponds to a natural transformation $\omega_x\in Nat(1_{\mathbf M_{\mathscr R_x}},\mathscr F_x\mathscr G_x)$ which satisfies $\omega_x(H^x_r)(s)=\eta_x(s,r)$ for $r$, $s\in \mathscr R_x$. We claim that the collection $\{\omega_x\}_{x\in \mathscr X}$ satisfies the compatibility condition in \eqref{eq8.1} for each $\mathscr N\in Mod-\mathscr R$, thus determining an element $\omega\in Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$. \smallskip We start with $\mathscr N=ex_x(H^x_r)$ for some $x\in \mathscr X$ and $r\in \mathscr R_x$. We consider a morphism $\alpha:y\longrightarrow z$ in $\mathscr X$. If $x\not\leq y$, then $\mathscr N_y=0$ and the condition in \eqref{eq8.1} is trivially satisfied. Otherwise, let $\beta:x\longrightarrow y$ in $\mathscr X$ and set $s=\beta(r)$. In particular, $\mathscr N_y=\beta^\ast H^x_r=H^y_{\beta(r)}=H^y_s$ and $\mathscr N_z=H^z_{\alpha\beta(r)}=H^z_{\alpha(s)}$. Applying the condition \eqref{8.4eta}, we see that the following diagram is commutative for any $s'\in \mathscr R_y$: \begin{equation} \begin{CD} \mathscr R_y(s',s)=\mathscr N_y(s')@>\eta_y(s',s)>=\omega_y(\mathscr N_y)(s')> \mathscr N_y(s')\otimes C=\mathscr R_y(s',s)\otimes C\\ @V\mathscr N_\alpha(s')VV @VV(\mathscr F_y\mathscr G_y(\mathscr N_\alpha))(s')V \\ \mathscr R_z(\alpha(s'),\alpha(s))=\mathscr N_z(\alpha(s'))@>\eta_z(\alpha(s'),\alpha(s))>=\omega_z(\mathscr N_z)(\alpha(s'))> \mathscr N_z(\alpha(s'))\otimes C=\mathscr R_z(\alpha(s'), \alpha(s))\otimes C \\ \end{CD} \end{equation} In other words, the condition in \eqref{eq8.1} is satisfied for $\mathscr N=ex_x(H^x_r)$. From Theorem \ref{T5.5}, we know that the collection \begin{equation}\label{8gen} \{\mbox{$ex_x(H_r^x)$ $\vert$ $x\in \mathscr X$, $r\in \mathscr R_x$}\} \end{equation} is a set of generators for $Mod-\mathscr R$. Accordingly, for any $\mathscr N'\in Mod-\mathscr R$, we can choose an epimorphism $\phi:\mathscr N\longrightarrow \mathscr N'$ where $\mathscr N$ is a direct sum of copies of objects in \eqref{8gen}. 
Then, $\mathscr N$ satisfies \eqref{eq8.1} and we have commutative diagrams \begin{equation} \begin{CD} \mathscr N_y @>\mathscr N_\alpha>> \alpha_\ast\mathscr N_z @>\alpha_\ast\omega_z(\mathscr N_z)>> \alpha_\ast\mathscr F_z\mathscr G_z(\mathscr N_z) \\ @V\phi_yVV @V\alpha_\ast\phi_zVV @VV\alpha_\ast\mathscr F_z\mathscr G_z(\phi_z)V \\ \mathscr N'_y @>\mathscr N'_\alpha>> \alpha_\ast\mathscr N'_z @>\alpha_\ast\omega_z(\mathscr N'_z)>> \alpha_\ast\mathscr F_z\mathscr G_z(\mathscr N'_z) \\ \end{CD} \end{equation} \begin{equation} \begin{array}{c} \begin{CD} \mathscr N_y @>\omega_y(\mathscr N_y)>> \mathscr F_y\mathscr G_y(\mathscr N_y) @>\mathscr F_y\mathscr G_y(\mathscr N_\alpha)>> \alpha_\ast\mathscr F_z\mathscr G_z(\mathscr N_z) \\ @V\phi_yVV @V\mathscr F_y\mathscr G_y(\phi_y)VV @VV\alpha_\ast\mathscr F_z\mathscr G_z(\phi_z)V \\ \mathscr N'_y @>\omega_y(\mathscr N'_y)>> \mathscr F_y\mathscr G_y(\mathscr N'_y) @>\mathscr F_y\mathscr G_y(\mathscr N'_\alpha)>> \alpha_\ast\mathscr F_z\mathscr G_z(\mathscr N'_z) \\ \end{CD}\qquad \qquad \begin{CD} \mathscr N_y @>\omega_y(\mathscr N_y)>> \mathscr F_y\mathscr G_y(\mathscr N_y)\\ @V\mathscr N_\alpha VV @VV\mathscr F_y\mathscr G_y(\mathscr N_\alpha)V \\ \alpha_\ast\mathscr N_z @>\alpha_\ast\omega_z(\mathscr N_z)>> \alpha_\ast\mathscr F_z\mathscr G_z(\mathscr N_z)\\ \end{CD}\\ \end{array} \end{equation} for any $\alpha:y\longrightarrow z$ in $\mathscr X$. Since $\phi_y:\mathscr N_y\longrightarrow \mathscr N'_y$ is an epimorphism, it follows that $\mathscr N'$ also satisfies the condition in \eqref{eq8.1}. This gives a morphism $W_1\longrightarrow Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$. It may be verified that this is inverse to the morphism $Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)\longrightarrow W_1$ in Lemma \ref{Lem8.1}, which proves the result. \end{proof} We will now give conditions for the functor $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ to be separable. Since $\mathscr G$ has a left adjoint, it follows (see \cite[Theorem 1.2]{Raf}) that $\mathscr G$ is separable if and only if there exists a natural transformation $\omega\in Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$ such that $\nu\circ \omega=1_{Mod-\mathscr R}$, where $\nu$ is the counit of the adjunction. \begin{Thm}\label{T8.3} Let $\mathscr X$ be a partially ordered set, $C$ be a right semiperfect $K$-coalgebra and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. Then, the functor $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ is separable if and only if there exists $\eta\in W_1$ such that \begin{equation}\label{cond8.3} \begin{CD} id=(id\otimes \varepsilon_C)\circ \eta_x(s,r):\mathscr R_x(s,r)@>\eta_x(s,r)>> \mathscr R_x(s,r)\otimes C@>(id\otimes \varepsilon_C)>> \mathscr R_x(s,r)\end{CD} \end{equation} for each $x\in \mathscr X$ and $s$, $r\in \mathscr R_x$. \end{Thm} \begin{proof} First, we suppose that $\mathscr G$ is separable, i.e., there exists a natural transformation $\omega\in Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$ such that $\nu\circ \omega=1_{Mod-\mathscr R}$. Using Proposition \ref{Pro8.2}, we consider $\eta\in W_1$ corresponding to $\omega$. 
\smallskip By definition, the counit $\nu$ of the adjunction $(\mathscr F,\mathscr G)$ is described as follows: for any $\mathscr N\in Mod-\mathscr R$, we have \begin{equation} \nu(\mathscr N)_x(s):\mathscr N_x(s)\otimes C\longrightarrow \mathscr N_x(s) \qquad n\otimes c\mapsto n\varepsilon_C(c) \end{equation} for each $x\in \mathscr X$, $s\in \mathscr R_x$. We choose $x\in\mathscr X$, $r\in \mathscr R_x$ and set $\mathscr N=ex_x(H^x_r)$. Since $\nu\circ \omega=1_{Mod-\mathscr R}$, it now follows from \eqref{eqr8.6} that \begin{equation}\label{cond8.13} id=\nu(ex_x(H^x_r))_x(s)\circ \omega(ex_x(H^x_r))_x(s)=(id\otimes \varepsilon_C)\circ \omega_x(H^x_r)(s)=(id\otimes \varepsilon_C)\circ \eta_x(s,r) \end{equation} Conversely, suppose that we have $\eta\in W_1$ such that the condition in \eqref{cond8.3} is satisfied. Using the isomorphism in Proposition \ref{Pro8.2}, we obtain the natural transformation $\omega\in Nat(1_{Mod-\mathscr R},\mathscr F\mathscr G)$ corresponding to $\eta$. Then, it is clear from \eqref{cond8.13} that $\nu(\mathscr N)\circ \omega(\mathscr N)=id$ for $\mathscr N=ex_x(H^x_r)$. Since $\{\mbox{$ex_x(H_r^x)$ $\vert$ $x\in \mathscr X$, $r\in \mathscr R_x$}\}$ is a set of generators for $Mod-\mathscr R$, it follows that for any $\mathscr N'\in Mod-\mathscr R$, there is an epimorphism $\phi:\mathscr N\longrightarrow \mathscr N'$ such that $\nu(\mathscr N)\circ \omega(\mathscr N)=id$. We now consider the commutative diagram \begin{equation}\label{cd8.14} \begin{CD} \mathscr N @>\omega(\mathscr N)>> \mathscr F\mathscr G(\mathscr N) @>\nu(\mathscr N)>> \mathscr N \\ @V\phi VV @V\mathscr F\mathscr G(\phi) VV @VV\phi V\\ \mathscr N' @>\omega(\mathscr N')>> \mathscr F\mathscr G(\mathscr N') @>\nu(\mathscr N')>> \mathscr N' \\ \end{CD} \end{equation} Since the upper horizontal composition in \eqref{cd8.14} is the identity and $\phi$ is an epimorphism, it follows that $\nu(\mathscr N')\circ \omega(\mathscr N')=id$. This proves the result. \end{proof} \section{$(\mathscr F,\mathscr G)$ as a Frobenius pair} In Sections 7 and 8, we have given conditions for the functor $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ and its right adjoint $\mathscr G:Mod-\mathscr R \longrightarrow Mod^C-\mathscr R$ to be separable. In this section, we will give necessary and sufficient conditions for $(\mathscr F,\mathscr G)$ to be a Frobenius pair, i.e., $\mathscr G$ is both a right and a left adjoint of $\mathscr F$. First, we note that it follows from the characterization of Frobenius pairs (see for instance, \cite[$\S$ 1]{uni}) that $(\mathscr F,\mathscr G)$ is a Frobenius pair if and only if there exist $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ and $\omega\in Nat(1_{Mod-\mathscr R},\mathscr F \mathscr G)$ such that \begin{equation}\label{eq9.1} \mathscr F(\upsilon(\mathscr M))\circ\omega(\mathscr F(\mathscr M))= id_{\mathscr F(\mathscr M)}\qquad \upsilon(\mathscr G(\mathscr N))\circ \mathscr G(\omega(\mathscr N))=id_{\mathscr G(\mathscr N)} \end{equation} for any $\mathscr M\in Mod^C-\mathscr R$ and $\mathscr N\in Mod-\mathscr R$. 
Equivalently, for each $x\in \mathscr X$, we must have \begin{equation}\label{eq9.15} \begin{array}{c} (\mathscr F(\upsilon(\mathscr M)))_x\textrm{ }\circ\textrm{ }\omega(\mathscr F(\mathscr M))_x= \mathscr F_x(\upsilon_x(\mathscr M_x))\textrm{ }\circ\textrm{ }\omega_x(\mathscr F_x(\mathscr M_x))= id_{\mathscr F_x(\mathscr M_x)}\\ \upsilon(\mathscr G(\mathscr N))_x\textrm{ }\circ\textrm{ } \mathscr G(\omega(\mathscr N))_x=\upsilon_x(\mathscr G_x(\mathscr N_x))\textrm{ }\circ\textrm{ }\mathscr G_x(\omega_x(\mathscr N_x))=id_{\mathscr G_x(\mathscr N_x)} \\ \end{array} \end{equation} for any $\mathscr M\in Mod^C-\mathscr R$ and $\mathscr N\in Mod-\mathscr R$. \begin{Thm}\label{T9.1} Let $\mathscr X$ be a partially ordered set, $C$ be a right semiperfect $K$-coalgebra and let $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ be an entwined $C$-representation. Let $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ be the forgetful functor and $\mathscr G:Mod-\mathscr R \longrightarrow Mod^C-\mathscr R$ its right adjoint. Then, $(\mathscr F,\mathscr G)$ is a Frobenius pair if and only if there exist $\theta\in V_1$ and $\eta\in W_1$ such that \begin{equation}\label{eq9.2} \varepsilon_C(d)f=\sum \widehat{f}\circ \theta_x(r)(c_f\otimes d)\qquad \varepsilon_C(d)f=\sum \widehat{f_{\psi_x}} \circ \theta_x(r)(d^{\psi_x}\otimes c_f) \end{equation} for every $x\in \mathscr X$, $r\in \mathscr R_x$, $f\in \mathscr R_x(r,s)$ and $d\in C$, where $\eta_x(r,s)(f)=\widehat{f}\otimes c_f$. \end{Thm} \begin{proof} We suppose there exist $\theta\in V_1$ and $\eta\in W_1$ satisfying \eqref{eq9.2} and consider $\mathscr M\in Mod^C-\mathscr R$, $\mathscr N\in Mod-\mathscr R$. Using the isomorphisms in Proposition \ref{P7.6} and Proposition \ref{Pro8.2}, we obtain $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ and $\omega\in Nat(1_{Mod-\mathscr R},\mathscr F \mathscr G)$ corresponding to $\theta$ and $\eta$ respectively. \smallskip For fixed $x\in \mathscr X$, it follows that $\theta_x=\{\theta_x(r):C\otimes C \longrightarrow \mathscr R_x(r,r)\}_{r\in \mathscr R_x}$ and the $\mathscr R_x$-bimodule morphism $\eta_x\in Nat(H^x,H^x\otimes C)$ satisfy the conditions in \cite[Theorem 3.14]{BBR}. Hence, we have \begin{equation}\label{eq9.4} \mathscr F_x(\upsilon_x(\mathcal M))\textrm{ }\circ\textrm{ }\omega_x(\mathscr F_x(\mathcal M))= id_{\mathscr F_x(\mathcal M)}\qquad \upsilon_x(\mathscr G_x(\mathcal N))\textrm{ }\circ\textrm{ }\mathscr G_x(\omega_x(\mathcal N))=id_{\mathscr G_x(\mathcal N)} \end{equation} for any $\mathcal M\in \mathbf M^C_{\mathscr R_x}(\psi_x)$ and $\mathcal N\in \mathbf M_{\mathscr R_x}$. In particular, \eqref{eq9.15} holds for $\mathscr M_x\in \mathbf M^C_{\mathscr R_x}(\psi_x)$ and $\mathscr N_x\in \mathbf M_{\mathscr R_x}$. \smallskip Conversely, suppose that $(\mathscr F,\mathscr G)$ is a Frobenius pair. Then, there exist $\upsilon\in Nat(\mathscr G\mathscr F,1_{Mod^C-\mathscr R})$ and $\omega\in Nat(1_{Mod-\mathscr R},\mathscr F \mathscr G)$ satisfying \eqref{eq9.15} for each $x\in \mathscr X$. Again using the isomorphisms in Proposition \ref{P7.6} and Proposition \ref{Pro8.2}, we obtain corresponding $\theta\in V_1$ and $\eta\in W_1$. \smallskip We now consider $\mathcal M\in \mathbf M^C_{\mathscr R_x}(\psi_x)$ and $\mathcal N\in \mathbf M_{\mathscr R_x}$. 
Applying \eqref{eq9.15} with $\mathscr M=ex^C_x(\mathcal M)$ and $\mathscr N=ex_x(\mathcal N)$, we have \begin{equation} \mathscr F_x(\upsilon_x(\mathcal M))\textrm{ }\circ\textrm{ }\omega_x(\mathscr F_x(\mathcal M))= id_{\mathscr F_x(\mathcal M)}\qquad \upsilon_x(\mathscr G_x(\mathcal N))\textrm{ }\circ\textrm{ }\mathscr G_x(\omega_x(\mathcal N))=id_{\mathscr G_x(\mathcal N)} \end{equation} It now follows from \cite[Theorem 3.14]{BBR} that $\theta_x=\{\theta_x(r):C\otimes C \longrightarrow \mathscr R_x(r,r)\}_{r\in \mathscr R_x}$ and the $\mathscr R_x$-bimodule morphism $\eta_x\in Nat(H^x,H^x\otimes C)$ satisfy \eqref{eq9.2}. This proves the result. \end{proof} \begin{cor}\label{C9.2} Let $(\mathscr F,\mathscr G)$ be a Frobenius pair. Then, for each $x\in \mathscr X$, $(\mathscr F_x,\mathscr G_x)$ is a Frobenius pair of adjoint functors. \end{cor} \begin{proof} This is immediate from \eqref{eq9.4}. \end{proof} We consider $\alpha:x\longrightarrow y$ in $\mathscr X$. In \eqref{cd7.1}, we observed directly that the functors $\{\mathscr F_x:\mathbf M^C_{\mathscr R_x}(\psi_x)\longrightarrow \mathbf M_{\mathscr R_x}\}_{x\in \mathscr X}$ commute with both $\alpha^\ast$ and $\alpha_\ast$, while the functors $\{\mathscr G_x: \mathbf M_{\mathscr R_x}\longrightarrow \mathbf M^C_{\mathscr R_x}(\psi_x)\}_{x\in \mathscr X}$ commute only with $\alpha_\ast$. We will now give a sufficient condition for the functors $\{\mathscr G_x: \mathbf M_{\mathscr R_x}\longrightarrow \mathbf M^C_{\mathscr R_x}(\psi_x)\}_{x\in \mathscr X}$ to commute with $\alpha^\ast$. \begin{lem}\label{L9.3} Let $(\mathscr F,\mathscr G)$ be a Frobenius pair. Then, for any $\alpha:x\longrightarrow y$ in $\mathscr X$, we have a commutative diagram \begin{equation}\label{eq9.6} \begin{CD} \mathbf M _{\mathscr R_x}@>\alpha^\ast >> \mathbf M_{\mathscr R_y}\\ @V\mathscr G_xVV @VV\mathscr G_yV \\ \mathbf M^C_{\mathscr R_x}(\psi_x)@>\alpha^\ast >> \mathbf M^C_{\mathscr R_y}(\psi_y)\\ \end{CD} \end{equation} \end{lem} \begin{proof} For $\mathcal M\in \mathbf M _{\mathscr R_x}$, we will show that $\mathscr G_y\alpha^\ast(\mathcal M)=\alpha^\ast\mathscr G_x(\mathcal M)\in \mathbf M^C_{\mathscr R_y}(\psi_y)$. From Corollary \ref{C9.2} we know that each $(\mathscr F_x,\mathscr G_x)$ is a Frobenius pair of adjoint functors. Using this fact and the commutative diagrams in \eqref{cd7.1}, we now have that for any $\mathcal N\in \mathbf M^C_{\mathscr R_y}(\psi_y)$: \begin{equation} \mathbf M^C_{\mathscr R_y}(\psi_y)(\mathscr G_y\alpha^\ast(\mathcal M),\mathcal N)=\mathbf M_{\mathscr R_x}(\mathcal M,\alpha_\ast\mathscr F_y(\mathcal N))=\mathbf M_{\mathscr R_x}(\mathcal M,\mathscr F_x\alpha_\ast(\mathcal N))=\mathbf M^C_{\mathscr R_y}(\psi_y)(\alpha^\ast\mathscr G_x(\mathcal M),\mathcal N) \end{equation} \end{proof} \begin{thm}\label{P9.4} Let $(\mathscr F,\mathscr G)$ be a Frobenius pair. Suppose that $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is flat. Then, $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ restricts to a functor $\mathscr G^c:Cart-\mathscr R\longrightarrow Cart^C-\mathscr R$. \end{thm} \begin{proof} For any $\mathscr N\in Cart-\mathscr R$, we claim that $\mathscr G(\mathscr N)\in Mod^C-\mathscr R$ actually lies in $Cart^C-\mathscr R$. 
By definition of $\mathscr G$, we have for any $\alpha:x\longrightarrow y$, a morphism $\mathscr G(\mathscr N)_\alpha=\mathscr G_x(\mathscr N_\alpha):\mathscr G_x(\mathscr N_x)\longrightarrow \mathscr G_x(\alpha_\ast(\mathscr N_y))=\alpha_\ast(\mathscr G_y(\mathscr N_y))$ in $\mathbf M^C_{\mathscr R_x}(\psi_x)$ which corresponds to a morphism $\mathscr G(\mathscr N)^\alpha:\alpha^\ast(\mathscr G_x(\mathscr N_x))\longrightarrow \mathscr G_y(\mathscr N_y)$ in $\mathbf M^C_{\mathscr R_y}(\psi_y)$. Since $(\mathscr F,\mathscr G)$ is a Frobenius pair, it follows from Lemma \ref{L9.3} that $\mathscr G_y\alpha^\ast(\mathscr N_x)=\alpha^\ast\mathscr G_x(\mathscr N_x)\in \mathbf M^C_{\mathscr R_y}(\psi_y)$. Since $\mathcal N$ is cartesian, we know that $\alpha^\ast\mathscr N_x$ is isomorphic to $\mathscr N_y$ and hence $\mathscr G(\mathscr N)^\alpha=\mathscr G_y(\mathscr N^\alpha)$ is an isomorphism. \end{proof} \begin{cor}\label{C9.5} Let $(\mathscr F,\mathscr G)$ be a Frobenius pair. Suppose that $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ is flat. Then, $(\mathscr F^c,\mathscr G^c)$ is a Frobenius pair of adjoint functors between $Cart^C-\mathscr R$ and $Cart-\mathscr R$. \end{cor} \begin{proof} From Proposition \ref{P7.8}, we know that $\mathscr F:Mod^C-\mathscr R\longrightarrow Mod-\mathscr R$ restricts to a functor $\mathscr F^c:Cart^C-\mathscr R\longrightarrow Cart-\mathscr R$. From Proposition \ref{P9.4}, we know that $\mathscr G:Mod-\mathscr R\longrightarrow Mod^C-\mathscr R$ restricts to a functor $\mathscr G^c:Cart-\mathscr R\longrightarrow Cart^C-\mathscr R$ on the full subcategories of cartesian modules. Since $\mathscr G$ is both right and left adjoint to $\mathscr F$, it is clear that $\mathscr G^c$ is both right and left adjoint to $\mathscr F^c$. \end{proof} \section{Constructing entwined representations} In this final section, we will give examples of how to construct entwined representations and describe modules over them. Let $(\mathcal R,C,\psi)$ be an entwining structure. Then, we consider the $K$-linear category $(C,\mathcal R)_\psi$ defined as follows \begin{equation} Ob((C,\mathcal R)_\psi)=Ob(\mathcal R)\qquad (C,\mathcal R)_\psi(s,r):=Hom_K(C,\mathcal R(s,r)) \end{equation} for $s$, $r\in \mathcal R$. The composition in $(C,\mathcal R)_\psi$ is as follows: given $\phi:C\longrightarrow \mathcal R(s,r)$ and $\phi':C\longrightarrow \mathcal R(t,s)$ respectively in $(C,\mathcal R)_\psi(s,r)$ and $(C,\mathcal R)_\psi(t,s)$, we set \begin{equation} \phi\ast\phi': C\longrightarrow \mathcal R(t,r)\qquad c\mapsto \sum \phi(c_2)_\psi\circ \phi'(c_1^\psi) \end{equation} \begin{lem}\label{L99.1} Let $(\mathcal R,C,\psi)$ be an entwining structure. Then, there is a canonical functor $P_\psi: \mathbf M_{\mathcal R}^C(\psi)\longrightarrow \mathbf M_{(C,\mathcal R)_\psi}$. \end{lem} \begin{proof} We consider $\mathcal M\in \mathbf M_{\mathcal R}^C(\psi)$. We will define $\mathcal N=P_\psi(\mathcal M)\in \mathbf M_{(C,\mathcal R)_\psi}$ by setting $\mathcal N(r):=\mathcal M(r)$ for each $r\in (C,\mathcal R)$. Given $\phi:C\longrightarrow \mathcal R(s,r)$ in $(C,\mathcal R)_\psi(s,r)$, we define $m\ast \phi\in \mathcal N(s)=\mathcal M(s)$ by setting $m\ast \phi= \sum m_0\phi(m_1)$. Here, $\rho_{\mathcal M(r)}(m)=\sum m_0\otimes m_1$ is the right $C$-comodule structure on $\mathcal M(r)$. 
\smallskip For $\phi':C\longrightarrow \mathcal R(t,s)$ in $(C,\mathcal R)_\psi(t,s)$, we now have \begin{equation} \begin{array}{ll} m\ast (\phi\ast\phi') & = \sum m_0(\phi\ast \phi')(m_1) =\sum m_0\phi(m_{12})_\psi\phi'(m_{11}^\psi)=\sum m_0\phi(m_{2})_\psi\phi'(m_{1}^\psi) \\ &\\ & \\ (m\ast \phi)\ast \phi' & =\sum (m\ast \phi)_0\phi'((m\ast \phi)_1 ) =\sum (m_0 \phi(m_1))_0\phi'((m_0 \phi(m_1))_1 ) \\ &=\sum (m_{00}\phi(m_1)_\psi)\phi'(m_{01}^\psi)=\sum m_0\phi(m_{2})_\psi\phi'(m_{1}^\psi) \\ \end{array} \end{equation} This proves the result. \end{proof} \begin{lem}\label{L99.2} Let $(\alpha,id):(\mathcal R,C,\psi)\longrightarrow (\mathcal S,C,\psi')$ be a morphism of entwining structures. Then, $P_{\psi}\circ (\alpha,id)_\ast=\alpha_\ast \circ P_{\psi'}:\mathbf M_{\mathcal S}^C(\psi')\longrightarrow \mathbf M_{(C,\mathcal R)}$. \end{lem} \begin{proof} We begin with $\mathcal N\in \mathbf M_{\mathcal S}^C(\psi')$. From the construction in Lemma \ref{L99.1}, it is clear that for any $r\in (C,\mathcal R)_\psi$, we have $(P_{\psi}\circ (\alpha,id)_\ast)(\mathcal N)(r)=(\alpha_\ast \circ P_{\psi'})(\mathcal N)(r)=\mathcal N(\alpha(r))$. We set $\mathcal N_1:=(P_{\psi}\circ (\alpha,id)_\ast)(\mathcal N)$ and $\mathcal N_2:=(\alpha_\ast \circ P_{\psi'})(\mathcal N)$ and consider $n\in \mathcal N_1(r)=\mathcal N_2(r)$ as well as $\phi:C\longrightarrow \mathcal R(s,r)$ in $(C,\mathcal R)_\psi(s,r)$. Then, in both $\mathcal N_1(s)$ and $\mathcal N_2(s)$, we have $n\ast \phi=\sum n_0 \alpha(\phi(n_1))$. This proves the result. \end{proof} Now let $\mathscr X$ be a small category and $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ an entwined $C$-representation. By replacing each entwining structure $(\mathscr R_x,C,\psi_x)$ with the category $(C,\mathscr R_x)_{\psi_x}$, we obtain an induced representation $(C,\mathscr R)_\psi:\mathscr X\longrightarrow \mathscr Lin$ (we recall that $\mathscr Lin$ is the category of small $K$-linear categories). \begin{thm}\label{P99.3} There is a canonical functor $Mod^C-\mathscr R\longrightarrow Mod-(C,\mathscr R)_\psi$. \end{thm} \begin{proof} By definition, an object $\mathscr M\in Mod^C-\mathscr R$ consists of a collection $\{\mathscr M_x\in \mathbf M^C_{\mathscr R_x}(\psi_x)\}_{x\in \mathscr X}$ and for each $\alpha:x\longrightarrow y$ in $\mathscr X$, a morphism $\mathscr M_\alpha:\mathscr M_x\longrightarrow \alpha_\ast\mathscr M_y$ in $\mathbf M^C_{\mathscr R_x}(\psi_x)$. Applying the functors $P_{\psi_x}:\mathbf M^C_{\mathscr R_x}(\psi_x)\longrightarrow \mathbf M_{(C,\mathscr R_x)_{\psi_x}}$ for $x\in \mathscr X$ and using Lemma \ref{L99.2}, the result is now clear. \end{proof} Now let $C$ be finitely generated as a $K$-vector space and let $C^\ast$ denote its $K$-linear dual. Then, the canonical map $C^\ast\otimes V\longrightarrow Hom_K(C,V)$ is an isomorphism for any vector space $V$. For an entwining structure $(\mathcal R,C,\psi)$, the category $(C,\mathcal R)_\psi$ can now be rewritten as $(C^\ast\otimes \mathcal R)_\psi$ where $(C^\ast\otimes \mathcal R)_\psi (s,r)=C^\ast\otimes \mathcal R(s,r)$ for $s$, $r\in Ob((C^\ast\otimes \mathcal R)_\psi)=Ob(\mathcal R)$. Given $c^\ast\otimes f\in C^\ast\otimes \mathcal R(s,r)$ and $d^\ast\otimes g\in C^\ast\otimes \mathcal R(t,s)$, the composition in $(C^\ast\otimes \mathcal R)_\psi$ is expressed as \begin{equation}\label{eq99.47} (c^\ast\otimes f)\circ (d^\ast \otimes g):C\longrightarrow \mathcal R(t,r)\qquad x\mapsto \sum c^\ast(x_2)d^\ast (x_1^\psi)(f_\psi\circ g) \end{equation} for $x\in C$. 
It is important to note that when $f$ and $g$ are identity maps, the composition in \eqref{eq99.47} simplifies to \begin{equation}\label{eq99.48} (c^\ast\otimes id_r)\circ (d^\ast \otimes id_r):C\longrightarrow \mathcal R(r,r)\qquad x\mapsto \sum c^\ast(x_2)d^\ast (x_1)id_r \end{equation} In other words, for the canonical morphism $C^\ast\longrightarrow C^\ast\otimes\mathcal R(r,r)$ given by $c^\ast\mapsto c^\ast\otimes id_r$ to be a morphism of algebras, we must use the opposite of the usual convolution product on $C^\ast$. \smallskip Similarly, given an entwined $C$-representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ with $C$ finitely generated as a $K$-vector space, we can replace the induced representation $(C,\mathscr R)_\psi:\mathscr X\longrightarrow \mathscr Lin$ by $(C^\ast\otimes\mathscr R)_\psi$. Then, $Mod-(C,\mathscr R)_\psi$ may be replaced by $Mod-(C^\ast\otimes \mathscr R)_\psi$. \begin{thm}\label{P99.4} Let $\mathscr X$ be a small category and $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$ an entwined $C$-representation. Suppose that $C$ is finitely generated as a $K$-vector space. Then, the categories $Mod^C-\mathscr R$ and $Mod-(C^\ast\otimes \mathscr R)_\psi$ are equivalent. \end{thm} \begin{proof} By Proposition \ref{P99.3}, we already know that any object in $Mod^C-\mathscr R$ may be equipped with a $(C^\ast\otimes\mathscr R)_\psi$-module structure. For the converse, we consider some $\mathscr M\in Mod-(C^\ast\otimes\mathscr R)_\psi$ and choose some $x\in \mathscr X$. \smallskip We make $\mathscr M_x$ into an $\mathscr R_x$-module as follows: for $f\in \mathscr R_x(s,r)$ and $m\in \mathscr M_x(r)$, we set $mf\in \mathscr M_x(s)$ to be $mf:=m(\varepsilon_C\otimes f)$. By considering the canonical morphism $C^\ast\longrightarrow C^\ast\otimes\mathscr R_x(r,r)$, it follows that the right $(C^\ast\otimes \mathscr R_x)_{\psi_x}(r,r)$-module $\mathscr M_x(r)$ carries a right $C^\ast$-module structure. As observed in \eqref{eq99.48}, here the product on $C^\ast$ happens to be the opposite of the usual convolution product. Hence, the right $C^\ast$-module structure on $\mathscr M_x(r)$ leads to a left $C^\ast$-module structure on $\mathscr M_x(r)$ when $C^\ast$ is equipped with the usual product. Since $C$ is finite dimensional, it is well known (see, for instance, \cite[$\S$ 2.2]{book3}) that we have an induced right $C$-comodule structure on $\mathscr M_x(r)$. It may be verified by direct computation that $\mathscr M_x\in \mathbf M^C_{\mathscr R_x}(\psi_x)$. Finally, for a morphism $\alpha: x\longrightarrow y$ in $\mathscr X$, the map $\mathscr M_\alpha:\mathscr M_x \longrightarrow \alpha_\ast\mathscr M_y$ in $\mathbf M_{(C^\ast\otimes \mathscr R_x)_{\psi_x}}$ induces a morphism in $\mathbf M^C_{\mathscr R_x}(\psi_x)$. Hence, $\mathscr M\in Mod-(C^\ast\otimes\mathscr R)_\psi$ may be treated as an object of $Mod^C-\mathscr R$. It may be directly verified that this structure is the inverse of the one defined by Proposition \ref{P99.3}. \end{proof} \smallskip Finally, we will give an example of constructing entwined representations starting from $B$-comodule categories, where $B$ is a bialgebra. So let $B$ be a bialgebra over $K$, having multiplication $\mu_B$, unit map $u_B$ as well as comultiplication $\Delta_B$ and counit map $\varepsilon_B$. Then, the notion of a ``$B$-comodule category,'' which behaves like a $B$-comodule algebra with many objects, is implicit in the literature. \begin{defn}\label{D100.1} Let $B$ be a $K$-bialgebra.
We will say that a small $K$-linear category $\mathcal R$ is a right $B$-comodule category if it satisfies the following conditions: \smallskip (i) For any $r$, $s\in \mathcal R$, there is a coaction $\rho=\rho(r,s):\mathcal R(r,s)\longrightarrow \mathcal R(r,s)\otimes B$, $f\mapsto \sum f_0\otimes f_1$, making $\mathcal R(r,s)$ a right $B$-comodule. Further, $\rho(id_r)=id_r\otimes 1_B$ for each $r\in \mathcal R$. \smallskip (ii) For $f\in \mathcal R(r,s)$ and $g\in \mathcal R(s,t)$, we have \begin{equation}\label{eq100.1} \rho(g\circ f)=(g\circ f)_0\otimes (g\circ f)_1= (g_0\circ f_0)\otimes (g_1f_1) \end{equation} We have suppressed the summation signs in \eqref{eq100.1}. We will always refer to a right $B$-comodule category as a co-$B$-category. We will only consider those $K$-linear functors between co-$B$-categories whose action on morphisms is $B$-colinear. Together, the co-$B$-categories form a new category, which we will denote by $Cat^B$. \end{defn} \begin{lem}\label{L100.2} Let $B$ be a bialgebra over $K$. Let $\mathcal R$ be a co-$B$-category and let $C$ be a right $B$-module coalgebra. The collection $\psi:=\psi_{\mathcal R}=\{\psi_{rs}:C\otimes \mathcal R(r,s)\longrightarrow \mathcal R(r,s)\otimes C\}_{r,s \in \mathcal R}$ defined by setting \begin{equation} \psi_{rs}(c\otimes f)=f_\psi\otimes c^\psi=f_0\otimes cf_1 \qquad f\in \mathcal R(r,s), \textrm{ }c\in C \end{equation} makes $(\mathcal R,C,\psi)$ an entwining structure. \end{lem} \begin{proof} We consider morphisms $f$, $g$ in $\mathcal R$ so that $gf$ is defined. Then, for $c\in C$, we see that \begin{equation} \begin{array}{c} (gf)_\psi\otimes c^\psi=(gf)_0\otimes c(gf)_1=(g_0f_0)\otimes c(g_1f_1)=g_\psi f_\psi\otimes c^{\psi\psi}\\ f_\psi\otimes \Delta_C(c^\psi)=f_0\otimes \Delta_C(cf_1)=f_0\otimes c_1f_1\otimes c_2f_2=f_{00}\otimes c_1f_{01}\otimes c_2f_1=f_{\psi\psi}\otimes c_1^\psi\otimes c_2^\psi\\ \varepsilon_C(c^\psi)f_\psi=\varepsilon_C(c)\varepsilon_B(f_1)f_0=\varepsilon_C(c)f\qquad \psi(c\otimes id_r)=id_r\otimes c1_B\\ \end{array} \end{equation} This proves the result. \end{proof} \begin{thm}\label{P100.3} Let $B$ be a $K$-bialgebra and let $C$ be a right $B$-module coalgebra. If $\mathscr X$ is a small category, a functor $\mathscr R':\mathscr X\longrightarrow Cat^B$ induces an entwined $C$-representation of $\mathscr X$ \begin{equation} \mathscr R:\mathscr X\longrightarrow \mathscr Ent_C\qquad x\mapsto (\mathscr R_x,C,\psi_x):=(\mathscr R'_x,C,\psi_{\mathscr R'_x}) \end{equation} \end{thm} \begin{proof} It may be easily verified that the entwining structures constructed in Lemma \ref{L100.2} are functorial with respect to $B$-colinear functors between $B$-comodule categories. This proves the result. \end{proof} We now consider a representation $\mathscr R':\mathscr X\longrightarrow Cat^B$ as in Proposition \ref{P100.3} and the corresponding entwined $C$-representation $\mathscr R:\mathscr X\longrightarrow \mathscr Ent_C$. By considering the underlying $K$-linear category of any co-$B$-category, we obtain an induced representation that we continue to denote by $\mathscr R':\mathscr X\longrightarrow Cat^B\longrightarrow \mathscr Lin$. We conclude by showing how entwined modules over $\mathscr R$ are related to modules over $\mathscr R'$ in the sense of Estrada and Virili \cite{EV}. \begin{thm} \label{P100.4} Let $B$ be a $K$-bialgebra and let $C$ be a right $B$-module coalgebra. 
Let $\mathscr X$ be a small category, $\mathscr R':\mathscr X\longrightarrow Cat^B$ a functor and let $\mathscr R:\mathscr X\longrightarrow\mathscr Ent_C$ be the corresponding entwined $C$-representation. Then, a module $\mathscr M$ over $\mathscr R$ consists of the following data: \smallskip (1) A module $\mathscr M$ over the induced representation $\mathscr R':\mathscr X\longrightarrow Cat^B\longrightarrow \mathscr Lin$. \smallskip (2) For each $x\in \mathscr X$ and $r\in \mathscr R_x$ a right $C$-comodule structure $\rho_r^x:\mathscr M_x(r)\longrightarrow \mathscr M_x(r)\otimes C$ such that \begin{equation*} \rho_{s}^x(mf)= \big(mf\big)_0 \otimes \big(mf\big)_{1}=m_0f_0\otimes {m_1}f_1 \end{equation*} for every $f \in \mathscr{R}_x(s,r)$ and $m \in \mathscr{M}_x(r).$ \smallskip (3) For each morphism $\alpha:x\longrightarrow y$ in $\mathscr X$, the morphism $\mathscr M_\alpha(r):\mathscr M_x(r)\longrightarrow (\alpha_\ast\mathscr M_y)(r)$ is $C$-colinear for each $r\in \mathscr R_x$. \end{thm} \begin{proof} We consider a datum as described by the three conditions above. The conditions (1) and (2) ensure that each $\mathscr M_x\in \mathbf M^C_{\mathscr R_x}(\psi_x)$. For each $x\in \mathscr X$, there is a forgetful functor $\mathscr F_x:\mathbf M^C_{\mathscr R_x}(\psi_x)\longrightarrow \mathbf M_{\mathscr R_x}$. Let $\alpha:x\longrightarrow y$ be a morphism in $\mathscr X$. From \eqref{cd7.1}, we know that $(\alpha,id)_\ast: \mathbf M_{\mathscr R_y}^C(\psi_y) \longrightarrow \mathbf M_{\mathscr R_x}^C(\psi_x)$ and $\alpha_\ast: \mathbf M_{\mathscr R_y} \longrightarrow \mathbf M_{\mathscr R_x}$ are well behaved with respect to these forgetful functors. For each $r\in \mathscr R_x$, if $\mathscr M_\alpha(r):\mathscr M_x(r)\longrightarrow (\alpha_\ast\mathscr M_y)(r)$ is also $C$-colinear, it follows that $\mathscr M_\alpha$ is a morphism in $\mathbf M_{\mathscr R_x}^C(\psi_x)$. The result is now clear. \end{proof} \small \begin{bibdiv} \begin{biblist} \bib{Abu}{article}{ author={Abuhlail, J. Y.}, title={Dual entwining structures and dual entwined modules}, journal={Algebr. Represent. Theory}, volume={8}, date={2005}, number={2}, pages={275--295}, } \bib{AR}{book}{ author={Ad\'{a}mek, J.}, author={Rosick\'{y}, J.}, title={Locally presentable and accessible categories}, series={London Mathematical Society Lecture Note Series}, volume={189}, publisher={Cambridge University Press, Cambridge}, date={1994}, pages={xiv+316}, } \bib{BBR0}{article}{ author={Balodi, M.}, author={Banerjee, A.}, author={Ray, S.} title={Cohomology of modules over H-categories and co-H-categories}, journal={Canadian Journal of Mathematics}, doi={https://doi.org/10.4153/S0008414X19000403}, } \bib{BBR}{article}{ author={Balodi, M.}, author={Banerjee, A.}, author={Ray, S.} title={Entwined modules over linear categories and Galois extensions}, journal={Israel Journal of Mathematics (to appear), arXiv:1901.00323 [math.CT]}, } \bib{Brx1}{article}{ author={Brzezi\'{n}ski, T.}, title={On modules associated to coalgebra Galois extensions}, journal={J. Algebra}, volume={215}, date={1999}, number={1}, pages={290--317}, } \bib{Brx5}{article}{ author={Brzezi\'{n}ski, T.}, title={Frobenius properties and Maschke-type theorems for entwined modules}, journal={Proc. Amer. Math. Soc.}, volume={128}, date={2000}, number={8}, pages={2261--2270}, } \bib{Brx2}{article}{ author={Brzezi\'{n}ski, T.}, title={The cohomology structure of an algebra entwined with a coalgebra}, journal={J. 
Algebra}, volume={235}, date={2001}, number={1}, pages={176--202}, } \bib{Brx3}{article}{ author={Brzezi\'{n}ski, T.}, title={The structure of corings: induction functors, Maschke-type theorem, and Frobenius and Galois-type properties}, journal={Algebr. Represent. Theory}, volume={5}, date={2002}, number={4}, pages={389--410}, } \bib{BrMj}{article}{ author={Brzezi\'{n}ski, T.}, author={Majid, S.}, title={Coalgebra bundles}, journal={Comm. Math. Phys.}, volume={191}, date={1998}, number={2}, pages={467--492}, } \bib{uni}{article}{ author={Brzezi\'{n}ski, T.}, author={Caenepeel, S.}, author={Militaru, G.}, author={Zhu, S.}, title={Frobenius and Maschke type theorems for Doi-Hopf modules and entwined modules revisited: a unified approach}, conference={ title={Ring theory and algebraic geometry}, address={Le\'{o}n}, date={1999}, }, book={ series={Lecture Notes in Pure and Appl. Math.}, volume={221}, publisher={Dekker, New York}, }, date={2001}, pages={1--31}, } \bib{Wibook}{book}{ author={Brzezinski, T.}, author={Wisbauer, R.}, title={Corings and comodules}, series={London Mathematical Society Lecture Note Series}, volume={309}, publisher={Cambridge University Press, Cambridge}, date={2003}, pages={xii+476}, } \bib{BuTa2}{article}{ author={Bulacu, D.}, author={Caenepeel, S.}, author={Torrecillas, B.}, title={Frobenius and separable functors for the category of entwined modules over cowreaths, II: applications}, journal={J. Algebra}, volume={515}, date={2018}, pages={236--277}, } \bib{BuTa1}{article}{ author={Bulacu, D.}, author={Caenepeel, S.}, author={Torrecillas, B.} title={Frobenius and Separable Functors for the Category of Entwined Modules over Cowreaths, I: General Theory}, journal={Algebra and Representation Theory}, volume={23}, date={2020}, pages={1119-1157}, } \bib{CaDe}{article}{ author={Caenepeel, S.}, author={De Groot, E.}, title={Modules over weak entwining structures}, conference={ title={New trends in Hopf algebra theory}, address={La Falda}, date={1999}, }, book={ series={Contemp. Math.}, volume={267}, publisher={Amer. Math. Soc., Providence, RI}, }, date={2000}, pages={31--54}, } \bib{X13}{article}{ author={Caenepeel, S.}, author={Militaru, G.}, author={Ion, Bogdan}, author={Zhu, Shenglin}, title={Separable functors for the category of Doi-Hopf modules, applications}, journal={Adv. Math.}, volume={145}, date={1999}, number={2}, pages={239--290}, } \bib{X14}{article}{ author={Caenepeel, S.}, author={Militaru, G.}, author={Zhu, Shenglin}, title={A Maschke type theorem for Doi-Hopf modules and applications}, journal={J. Algebra}, volume={187}, date={1997}, number={2}, pages={388--412}, issn={0021-8693}, review={\MR{1430990}}, doi={10.1006/jabr.1996.6794}, } \bib{X15}{article}{ author={Caenepeel, S.}, author={Militaru, G.}, author={Zhu, S.}, title={Doi-Hopf modules, Yetter-Drinfel\cprime d modules and Frobenius type properties}, journal={Trans. Amer. Math. Soc.}, volume={349}, date={1997}, number={11}, pages={4311--4342}, } \bib{book3}{book}{ author={D\u{a}sc\u{a}lescu, S.}, author={N\u{a}st\u{a}sescu, C.}, author={Raianu, \c{S}.}, title={Hopf algebras}, series={Monographs and Textbooks in Pure and Applied Mathematics}, volume={235}, note={An introduction}, publisher={Marcel Dekker, Inc., New York}, date={2001}, pages={x+401}, } \bib{EV}{article}{ author={Estrada, S.}, author={Virili, S.}, title={Cartesian modules over representations of small categories}, journal={Adv. 
Math.}, volume={310}, date={2017}, pages={557--609}, } \bib{Tohoku}{article}{ author={Grothendieck, A.}, title={Sur quelques points d'alg\`ebre homologique}, language={French}, journal={Tohoku Math. J. (2)}, volume={9}, date={1957}, pages={119--221}, } \bib{HP}{article}{ author={Hobst, D.}, author={Pareigis, B.}, title={Double quantum groups}, journal={J. Algebra}, volume={242}, date={2001}, number={2}, pages={460--494}, } \bib{Jia}{article}{ author={Jia, L.}, title={The sovereign structure on categories of entwined modules}, journal={J. Pure Appl. Algebra}, volume={221}, date={2017}, number={4}, pages={867--874}, } \bib{KSch}{book}{ author={Kashiwara, M.}, author={Schapira, P.}, title={Categories and sheaves}, series={Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]}, volume={332}, publisher={Springer-Verlag, Berlin}, date={2006}, pages={x+497}, } \bib{Mit}{article}{ author={Mitchell, B.}, title={Rings with several objects}, journal={Advances in Math.}, volume={8}, date={1972}, pages={1--161}, } \bib{MF}{book}{ author={Mumford, D.}, author={Fogarty, J.}, title={Geometric invariant theory}, series={Ergebnisse der Mathematik und ihrer Grenzgebiete [Results in Mathematics and Related Areas]}, volume={34}, edition={2}, publisher={Springer-Verlag, Berlin}, date={1982}, pages={xii+220}, } \bib{NBO}{article}{ author={N\u{a}st\u{a}sescu, C.}, author={Van den Bergh, M.}, author={Van Oystaeyen, F.}, title={Separable functors applied to graded rings}, journal={J. Algebra}, volume={123}, date={1989}, number={2}, pages={397--413}, } \bib{Raf}{article}{ author={Rafael, M. D.}, title={Separable functors revisited}, journal={Comm. Algebra}, volume={18}, date={1990}, number={5}, pages={1445--1459}, } \bib{Schb}{article}{ author={Schauenburg, Peter}, title={Doi-Koppinen Hopf modules versus entwined modules}, journal={New York J. Math.}, volume={6}, date={2000}, pages={325--329}, } \bib{Schn}{article}{ author={Schneider, H.-J}, title={Principal homogeneous spaces for arbitrary Hopf algebras}, note={Hopf algebras}, journal={Israel J. Math.}, volume={72}, date={1990}, number={1-2}, pages={167--195}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} After the rediscovery in the early 1950s of spatial coincidences between Cepheids and open clusters by \citet{ir55,ir58}, Eggen \citep[see][]{sa58}, and \citet{kh56}, a number of searches for additional coincidences were made by \citet{kr57}, \citet{vb57}, and \citet{ti59}, among others. Tifft's search resulted in the discovery of a near spatial coincidence between the 4$^{\rm d}$.37 Cepheid CG Cassiopeiae and an anonymous open cluster, subsequently catalogued as Berkeley 58 \citep{sw62}, which lies less than one cluster diameter to the west. The field is coincident with a portion of the Perseus spiral arm that is relatively rich in open clusters, and the cluster NGC 7790 with its three Cepheid members lies in close proximity. The possibility that CG Cas might be an outlying member of NGC 7790 was raised at one time by \citet{ef64a,ef64b}, and found some support in a star count analysis by \citet{ko68}. More detailed star counts in the field \citep{tu85} indicate otherwise, as do the available proper motion data \citep{fr74,fr77}. The Cepheid does lie in the corona of Berkeley 58 \citep{tu85}, although Frolov has argued that it is not a probable cluster member. Given a probable distance of ~3 kpc to both CG Cas and Berkeley 58 \citep[e.g.,][]{fr79,pj94}, it is not clear that existing proper motion data are precise enough to provide conclusive evidence pertaining to the cluster membership of CG Cas. The present study was therefore initiated in order to examine the case in more detail. As demonstrated here, there is strong evidence that CG Cas is a likely member of Berkeley 58 and that it can serve as a calibrator for the Cepheid period-luminosity (PL) relation. \section{Observational Data} A variety of observations were obtained for the present investigation. Table 1 presents photoelectric {\it UBV} photometry for bright members of Berkeley 58, obtained during observing runs at Kitt Peak National Observatory in 1981 September, 1982 August, and 1984 August. The data, acquired using 1P21 photomultipliers and standard {\it UBV} filter sets used in conjunction with pulse-counting photometers on the No. 4--0.4-m, No. 2--0.9-m, and 1.3-m telescopes at Kitt Peak, have associated uncertainties typical of our previous investigations of Cepheid clusters \citep{te92,tu92,te94}, namely standard internal errors for a single observation of $\pm 0.01$ in $V$ and {\it B--V}, and $\pm 0.02$ in {\it U--B}, for stars brighter than $V=13$. The estimated external errors for all but the faintest stars are similar in magnitude. The stars are identified by their numbering in Fig. 1, as well as by their 2000 co-ordinates in the 2MASS survey \citep{cu03}; the number of individual observations for each star is given in column 7 of Table 1. \begin{figure} \begin{center} \includegraphics[width=6cm]{turnerf1.eps} \end{center} \caption{A finder chart for the field of Berkeley 58 from the red image of the Palomar Observatory Sky Survey. The field of view measures $20^\prime \times 20^\prime$ and is centred at 2000 co-ordinates: RA = $00^{\rm h} 00^{\rm m} 12^{\rm s}.9$, DEC = +60$^{\circ}$ 56${\arcmin}$ 07${\arcsec}$. The top image depicts the location of CG Cas relative to the cluster core, the lower image identifies photoelectrically observed stars. 
[The National Geographic Society-Palomar Observatory Sky Atlas (POSS-I) was made by the California Institute of Technology with grants from the National Geographic Society.]} \label{fig1} \end{figure} \setcounter{table}{0} \begin{table*} \begin{minipage}{12cm} \caption[]{Photoelectric {\it UBV} Data for Stars in Berkeley 58.} \label{tab1} \begin{tabular}{cccccccl} \hline Star &RA(2000) &DEC(2000) &$V$ &$B-V$ &$U-B$ &n &Notes \\ \hline CG Cas &00 00 59.24 &60 57 32.5 &11.20 &1.30 &+1.00 &15 &F5--G1 I \\ 1 &00 01 21.61 &60 50 21.2 &7.28 &0.14 &$-0.39$ &4 & \\ 2 &00 00 47.68 &60 48 49.8 &9.80 &0.27 &$-0.10$ &4 & \\ 3 &00 00 46.10 &60 58 46.5 &9.85 &1.35 &+1.20 &4 & \\ 4 &00 00 42.77 &61 03 26.1 &10.02 &0.58 &+0.02 &4 & \\ 5 &00 00 32.75 &60 54 11.8 &10.79 &2.04 &+2.44 &2 &(K II)? \\ 6 &00 00 20.63 &60 59 43.1 &10.95 &0.50 &$-0.12$ &4 &B3--5 V$^{\rm a}$ \\ 7 &00 00 48.46 &61 02 49.5 &11.49 &0.46 &$-0.44$ &4 &B3--5 Vnn \\ 8 &00 00 10.99 &61 01 53.6 &12.04 &0.59 &+0.26 &1 & \\ 9 &23 59 45.42 &60 56 28.1 &12.11 &0.57 &+0.48 &3 & \\ 10 &00 00 52.47 &60 56 14.1 &12.16 &2.25 &+2.51 &3 & \\ 11 &00 00 40.07 &61 03 21.9 &12.35 &0.50 &$-0.25$ &4 &B2.5 V \\ 12 &00 00 48.77 &60 59 17.2 &12.55 &1.41 &+1.10 &1 & \\ 13 &00 00 33.91 &60 57 58.7 &12.78 &0.62 &+0.09 &4 & \\ 14 &00 00 13.07 &60 56 25.4 &12.82 &0.57 &+0.04 &4 & \\ 15 &00 00 25.25 &61 00 29.8 &13.12 &1.59 &+1.50 &3 & \\ 16 &00 00 22.63 &60 59 20.7 &13.22 &0.52 &+0.21 &5 & \\ 17 &00 00 25.83 &60 57 58.2 &13.30 &0.83 &+0.55 &4 & \\ 18 &00 00 15.03 &60 57 05.0 &13.35 &0.57 &+0.03 &4 &B6 Vn \\ 19 &00 00 09.46 &60 57 47.6 &13.36 &0.72 &+0.44 &2 & \\ 20 &00 00 06.89 &60 57 37.6 &13.41 &0.60 &+0.11 &2 & \\ 21 &00 00 36.95 &61 02 55.7 &13.41 &0.66 &+0.49 &3 & \\ 22 &00 00 19.09 &60 57 29.8 &13.54 &1.55 &+1.39 &4 & \\ 23 &00 00 16.44 &60 57 08.8 &13.60 &0.56 &+0.00 &4 &B5:: Vnn \\ 23 &00 00 03.17 &60 55 57.1 &13.69 &0.57 &+0.10 &4 &B7 V \\ 25 &00 00 56.70 &61 01 12.0 &13.71 &1.43 &+1.23 &2 \\ 26 &00 00 38.39 &60 58 26.2 &14.11 &0.73 &+0.59 &4 & \\ 27 &00 00 57.31 &60 58 54.9 &14.14 &0.84 &+0.26 &4 & \\ 28 &00 00 22.18 &60 56 39.7 &14.20 &0.62 &+0.19 &5 & \\ 29 &00 00 15.73 &60 56 08.6 &14.70 &0.57 &+0.18 &4 & \\ 30 &00 00 16.73 &60 55 55.6 &14.71 &0.72 &+0.35 &5 &double \\ .. &... &... &15.46 &0.76 &... &CCD & \\ 31 &00 00 10.33 &60 56 25.4 &14.74 &1.57 &... &2 & \\ 32 &00 00 19.10 &60 57 44.5 &14.75 &0.62 &+0.44 &3 & \\ 33 &00 00 09.64 &60 57 10.1 &14.91 &1.02 &+0.54 &1 & \\ 34 &00 00 11.54 &60 55 19.5 &15.06 &1.09 &... &2 & \\ .. &... &... &14.88 &0.66 &+0.13 &CCD & \\ 35 &00 00 18.88 &60 56 24.1 &15.09 &0.61 &+0.25 &4 & \\ 37 &00 00 14.44 &60 55 43.3 &15.16 &1.17 &... &1 & \\ 37 &00 00 23.28 &60 57 27.8 &15.63 &0.81 &+0.79 &3 & \\ \hline \end{tabular} $^{\rm a}$V654 Cas \citep{be93}. \end{minipage} \end{table*} Star 6 is the eclipsing system V654 Cas, for which \citet{be93} cites photoelectric values of {\it V} and {\it B--V} outside of eclipse that are close to the values given here. Star 30 is a close optical double with components of nearly identical brightness. The photoelectric values apply to the combined light from both stars, whereas CCD observations provide uncontaminated data for the southwestern star of the pair, as established by its CCD magnitude being 0.75 mag. fainter. By contrast, the CCD {\it V} magnitude for star 35 is 0.21 mag. brighter, which suggests possible variability in the object. Individual photoelectric observations for CG Cas are presented in Table 2. 
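For two components of nearly equal brightness, the combined light exceeds that of either star alone by
\begin{displaymath}
\Delta m = 2.5 \log_{10} 2 \simeq 0.75 \; {\rm mag},
\end{displaymath}
so the 0.75 mag. offset of the CCD measure for star 30 is what is expected if it refers to a single component of the pair.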
Photographic {\it UBV} photometry was also obtained for stars in the nuclear and coronal regions of Berkeley 58 from plates of the cluster field taken in 1984 September with the 1.2-m Elginfield telescope of the University of Western Ontario. The star images were measured using the iris diaphragm photometer at Saint Mary's University, and were reduced to the {\it UBV} system and calibrated with reference to the photoelectric standards identified in Table 1 using the techniques discussed by \citet{tw89}. The resulting data are presented in Table 3 (Appendix) in similar format to the data of Table 1, and the stars are identified by their 2000 co-ordinates. The photographic values for cluster stars in common with the CCD survey \citep{pj94} agree very closely with the CCD values, when the latter are adjusted to the present system. However, earlier photographic {\it UBV} photometry of cluster stars by \citet{fr79} displays systematic differences relative to the present data. Since the present survey samples a much larger number of cluster stars, no attempt was made to combine Frolov's data with the present photometry. CCD {\it UBV} photometry for stars in the nuclear region of Berkeley 58 was published previously by \citet{pj94}, but for this study was recalibrated using the Table 1 stars as standards. The revised photometry for cluster stars is presented in Table 4 (Appendix), where the star numbers correspond to the scheme adopted by \citet{pj94}, incremented by 1000. The stars are also identified by their 2000 co-ordinates. Since the $U$-band measurements have a much brighter limit than the $B$ and $V$ measures, the CCD photometry is less useful for studying the reddening in the field, but it is valuable for identifying the faint portion of the cluster main sequence. Spectroscopic observations of bright stars in Berkeley 58 were made in 1984 July and 1985 September using the Cassegrain spectrograph on the 1.8-m Plaskett telescope of the Dominion Astrophysical Observatory. The observations, at a dispersion of 15 \AA\ mm$^{-1}$ and centred in the blue spectral region, were recorded photographically and later scanned for radial velocity measurement with the PDS microdensitometer at the David Dunlap Observatory of the University of Toronto \citep[see][]{td84}. It was also possible to estimate spectral types for the stars from the photographic spectra, with results presented in Table 1. The field of the Cepheid CG Cas was also examined on archival images in the collections of Harvard College Observatory and Sternberg Astronomical Institute in order to obtain brightness estimates for the star and to construct seasonal light curves for comparison with a standard light curve constructed from photoelectric observations \citep{be07}. The resulting data were used to estimate times of light maximum for the Cepheid and to track its O--C changes, the differences between observed (O) and computed (C) times of light maximum. Rate of period change, in conjunction with light amplitude, is an excellent diagnostic of the location of individual Cepheids in the instability strip \citep{te06a}, and provides a parameter that can be compared with the age of the surrounding stars inferred from the cluster H-R diagram. \section{Star Counts} The first step in studying Berkeley 58 involved star counts made using a photographic enlargement from a glass copy of the POSS-E plate for the field.
Strip counts in several different orientations delineated the cluster centre, followed by ring counts illustrated in Fig. 2; the centre of symmetry is located at RA = $00^{\rm h} 00^{\rm m} 12^{\rm s}.9$, DEC = $+60^{\circ} 56{\arcmin} 07{\arcsec}$ (2000). The upper portion of Fig. 2 illustrates ring counts for stars detected on the 2MASS survey \citep{cu03} to the survey limit , whereas the lower portion shows star counts from the Palomar Observatory Sky Survey (POSS) E-plate to two different magnitude limits. \setcounter{table}{1} \begin{table} \caption[]{Photoelectric {\it UBV} Observations for CG Cassiopeiae.} \label{tab2} \begin{center} \begin{tabular}{cccc} \hline HJD &$V$ &$B-V$ &$U-B$ \\ \hline 2444849.8938 &11.37 &1.28 &... \\ 2444854.8655 &11.53 &1.36 &... \\ 2444856.8539 &11.22 &1.14 &... \\ 2444857.8358 &11.08 &1.14 &... \\ 2444857.8689 &11.09 &1.16 &... \\ 2445197.9177 &10.92 &1.04 &0.74 \\ 2445205.9366 &11.45 &1.25 &0.84 \\ 2445206.8851 &11.04 &1.10 &0.76 \\ 2445933.8457 &11.73 &1.40 &1.04 \\ 2445935.8748 &10.99 &1.08 &0.85 \\ 2445937.8601 &11.59 &1.37 &0.96 \\ 2445938.8420 &11.74 &1.38 &1.02 \\ 2445939.8315 &10.85 &0.99 &0.72 \\ 2445941.8773 &11.55 &1.35 &0.93 \\ 2445942.7724 &11.76 &1.42 &1.04 \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \begin{center} \includegraphics[width=6cm]{turnerf2.eps} \end{center} \caption{Star densities for the field of Berkeley 58, as measured in rings relative to the adopted cluster centre. The upper diagram contains ring counts made from the 2MASS survey, the lower two diagrams ring counts from the POSS E-plate of the field for a faint limit (middle) and a brighter limit (lower). The location of CG Cas relative to the cluster centre is indicated by an arrow.} \label{fig2} \end{figure} The counts from the 2MASS survey were made without regard for overlap with the star cluster NGC 7790, which lies $23\arcmin$ to the northwest of Berkeley 58, whereas the counts from the POSS-E plate were restricted beyond 11$\arcmin$ from the cluster centre to sectors that avoided overlap with the outlying regions of NGC 7790. The effect of contamination from the coronal region of NGC 7790 is detectable in the 2MASS star counts beyond roughly 12$\arcmin$ from the cluster centre, but because of restrictions imposed by the location of Berkeley 58 on the POSS, we were unable to establish uncontaminated star counts from the POSS-E plate beyond about 15$\arcmin$ from the cluster centre. Nevertheless, the two sets of counts appear to yield similar parameters for the inner regions of the cluster. Berkeley 58 is estimated to have a nuclear radius of $r_n \simeq 4\arcmin.5$ (4.0 pc) in the notation of \citet{kh69}, whereas the coronal (or tidal) radius is estimated to be $R_c \simeq 11\arcmin$ (9.7 pc) from the trends in the 2MASS star densities as well as the apparent flattening of the POSS-E star densities in the outermost rings. Star counts predict a total of $197 \pm27$ members brighter than the limit of the 2MASS survey lying within 5$\arcmin$ of the cluster centre, $487 \pm82$ members within 11$\arcmin$ of the cluster centre, field stars within the same regions being 715 and 4835, respectively. Field stars clearly outnumber cluster members in both regions. CG Cas is located 5$\arcmin$.8 from the centre of Berkeley 58, in the cluster coronal region just beyond its nuclear boundaries. Although not projected on the core of Berkeley 58, CG Cas is spatially coincident with the cluster, which occupies most of the field of Fig. 1. 
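The linear dimensions quoted above correspond to the $\sim3$ kpc distance of the cluster (Section 4); for example,
\begin{displaymath}
r_n \simeq d \: \theta_n \simeq 3026 \; {\rm pc} \times \frac{4.5}{3438} \simeq 4.0 \; {\rm pc},
\end{displaymath}
where $\theta_n$ is expressed in radians (3438 arcminutes per radian), with the $11\arcmin$ coronal radius similarly corresponding to $\simeq 9.7$ pc.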
\begin{figure} \begin{center} \includegraphics[width=6cm]{turnerf3.eps} \end{center} \caption{A {\it UBV} colour-colour diagram for observed Berkeley 58 stars: photoelectric observations (filled circles), photographic observations supplemented by CCD observations (open circles), CCD observations (filled triangles), and CG Cas (circled point). The intrinsic relation for main sequence stars is plotted as a solid line, with the same relation reddened by E$_{B-V} = 0.38$ and E$_{B-V} = 0.70$ shown by dotted lines. The reddening relations for stars of spectral type B6.5 V and A2 V are shown as dashed lines.} \label{fig3} \end{figure} \section{Berkeley 58} Fig. 3 is a {\it UBV} colour-colour diagram for the field of Berkeley 58 surveyed in this study, as constructed from the data of Tables 1, 3, and 4. The phase-averaged data for CG Cas are from \citet{be07}. A reddened sequence of B and A-type cluster members can be detected in the data, but a cluster reddening of ${\rm E}_{B-V} \simeq 0.7$ places them in a section of the colour-colour diagram where they can be confused photometrically with unreddened, foreground, G-type stars. For that reason it becomes essential to make the process of photometric identification of likely spectral classes for individual stars as reliable as possible, through the use of a well-established interstellar extinction relation. The spectral types obtained for six of the B-type, photoelectrically-observed, cluster stars imply a reddening law for Berkeley 58 described by ${\rm E}_{U-B}/{\rm E}_{B-V} = 0.75$, along with a small curvature term \citep{tu89}, identical to the reddening slope found previously for star clusters spatially adjacent to Berkeley 58 \citep{tu76b}. Berkeley 58 stars were therefore dereddened with such a relationship, except for late-type stars where a steeper relationship was adopted, dependent upon the likely intrinsic colours of the stars. \begin{figure} \begin{center} \includegraphics[width=6cm]{turnerf4.eps} \end{center} \caption{A 2MASS colour-colour diagram, {\it H--K} versus {\it J--H}, for stars examined in the field of Berkeley 58, without regard to the uncertainties in the observations \citep{cu03}. The intrinsic relation for main sequence stars is plotted as a solid line, as derived from the observed colours of standard stars and stars in clusters of uniform reddening. The direction of reddening in the 2MASS system is indicated.} \label{fig4} \end{figure} The Fig. 3 data indicate an absence of any unreddened O, B, or A-type stars in the observed sample. That feature is confirmed by available 2MASS data for the observed stars \citep{cu03}, which are depicted in the {\it JHK} colour-colour diagram of Fig. 4. An intrinsic relation for main-sequence stars in the 2MASS system was constructed from 2MASS observations of unreddened standard stars and stars in open clusters of uniform reddening \citep[e.g.,][]{tu96b}, adjusted with a reddening slope ${\rm E}_{H-K}/{\rm E}_{J-H} = 0.55$, as derived from reddened stars of known spectral type. The number of cluster stars with {\it U}-band observations is a small fraction of the total sample, so Fig. 4 contains many more stars than Fig. 3. The selection of 2MASS data was also not restricted according to the magnitude of cited uncertainties in the data, so several points in Fig. 4 display unusually large scatter. It seems clear, however, that the sample of cluster stars surveyed consists mainly of stars reddened by ${\rm E}_{J-H} \ge 0.1$, which corresponds to ${\rm E}_{B-V} \ge 0.36$. 
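The correspondence adopted here between the near-infrared and optical colour excesses amounts to
\begin{displaymath}
{\rm E}_{J-H} \simeq 0.28 \: {\rm E}_{B-V},
\end{displaymath}
so the observed lower bound of ${\rm E}_{J-H} \simeq 0.1$ for the sample translates into ${\rm E}_{B-V} \simeq 0.36$.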
The correlation of reddening with distance towards Berkeley 58 was established from the available {\it UBV} photometry by dereddening the colours of individual stars, with the derived colour excesses E$_{B-V}$ recorded on a copy of the POSS field as they were obtained; multiple solutions were resolved by reference to the reddenings of spatially adjacent stars as well as the reddenings derived for the stars from their 2MASS colours (Fig. 4). In most cases the smaller {\it JHK} reddening of a star relative to that obtained from {\it UBV} colours was sufficient to resolve questions about likely intrinsic colours for the stars, but there were a number of ambiguous cases where the data from the two surveys yielded disparate solutions, {\it e.g.} 2MASS colours implying an early spectral type and {\it UBV} colours implying a late spectral type. Such cases were unimportant in the final analysis, but are curious nevertheless. \begin{figure} \begin{center} \includegraphics[width=6cm]{turnerf5.eps} \end{center} \caption{A variable-extinction diagram for observed Berkeley 58 stars, with symbols as in Fig. 3. Reddening relations of slope $R=A_V/{\rm E}_{B-V}=2.95$ are shown corresponding to distances of $d\simeq 600$ pc ($V_0-M_V=8.9$) and $d\simeq 2700$ pc ($V_0-M_V=12.16$).} \label{fig5} \end{figure} Distance moduli were calculated for individual stars by adoption of zero-age main sequence (ZAMS) values of $M_V$ \citep{tu76a,tu79}, so the values systematically underestimate {\it V--M}$_V$ for unresolved binaries and evolved stars. The resulting scatter in the variable-extinction diagram of Fig. 5 therefore contains a systematic component towards small values of {\it V--M}$_V$. Within such constraints, it is possible to discern certain trends in the data, such as the lack of any significant reddening out to distances of $\sim600$ pc ($V_0-M_V=8.9$), with a reddening of E$_{B-V} \ge 0.4$ beyond that to distances of $\sim2700$ pc ($V_0-M_V=12.16$) or more. At the Galactic location of CG Cas ($l = 116^{\circ}.845$, $b = -1^{\circ}.315$), a more encompassing survey by \citet{nk80} implies a similar trend, with the reddening beginning at distances of $\sim400-900$ pc. Apparently the main extinction for stars in the direction of Berkeley 58 occurs near the far side of the local spiral arm feature. But the picture is not that simple. When the derived reddenings are compared star-for-star in the field of Berkeley 58, there are no obvious trends with spatial location, and trends with distance are difficult to establish without highly accurate luminosities for the observed stars. It can be surmised that there is additional reddening occurring on the near side of the Perseus spiral arm, given the nature of the scatter in the colour excesses. Likely members of Berkeley 58 generally have reddenings of E$_{B-V} \simeq 0.70$, with larger values possibly arising from circumstellar extinction, particularly for late B-type stars where rapid rotation is common \citep[e.g.,][]{tu93,tu96a}. An identical feature is observed in the adjacent cluster NGC 7790 \citep{ta88}. A lower envelope trend for the reddened stars in Fig. 5 implies a ratio of total-to-selective extinction for the field of $R=A_V/{\rm E}_{B-V}=2.95\pm0.30$ from least squares and non-parametric analyses. The value is consistent with previous studies of clusters in this region of the Galaxy \citep{tu76b}, as well as with a value of $R\simeq2.95$ expected for local extinction described by a reddening slope of 0.75 \citep{tu96a}.
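The distances cited in the caption to Fig. 5 follow from the corresponding true distance moduli through
\begin{displaymath}
d = 10^{(V_0 - M_V + 5)/5} \; {\rm pc},
\end{displaymath}
so that $V_0-M_V = 8.9$ and $12.16$ correspond to $d \simeq 600$ pc and $d \simeq 2700$ pc, respectively.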
For subsequent calculations a value of $R=2.95$ was adopted, the exact choice affecting estimates of distance but not the derived luminosity for CG Cas as a cluster member. An observational colour-magnitude diagram for the sampled stars is presented in Fig. 6, with a ZAMS plotted for {\it V--M}$_V$ = 14.29, the apparent distance modulus at E$_{B-V}= 0.70$ for points on the lower relation of Fig. 5. Such parameters provide a reasonable fit to the data, but there remain anomalies requiring further examination. For example, Fig. 6 contains reddened B-type stars more luminous than the turnoff magnitude for a cluster containing CG Cas, a point also indicated in Fig. 3, where dashed relations indicate reddening lines for B6.5 V and A2 V stars, the former corresponding to the expected turnoff colour [$(B-V)_0=-0.13$] for stars associated with a 4$^{\rm d}$.37 Cepheid \citep{tu96c}. Clearly the field contains a number of stars younger than the expected evolutionary age of CG Cas. \begin{figure} \begin{center} \includegraphics[width=6cm]{turnerf6.eps} \end{center} \caption{A colour-magnitude diagram for Berkeley 58 from all observations: photoelectric (filled circles), photographic (open circles), and CCD (triangles) data. CG Cas is the circled point. The ZAMS is depicted for E$_{B-V}= 0.70$ and {\it V--M}$_V$ = 14.29.} \label{fig6} \end{figure} \setcounter{table}{4} \begin{table} \caption[]{Radial Velocity Data for Berkeley 58 Stars.} \label{tab5} \begin{center} \begin{tabular}{lccc} \hline Star &HJD &V$_{\rm R}$ &Adopted V$_{\rm R}$ \\ & &(km s$^{-1}$) &(km s$^{-1}$) \\ \hline CG Cas &2445906.955 &$-74.5 \pm 3.8$ & \\ &2445908.942 &$-78.3 \pm1.4$ & \\ &2445909.944 &$-98.4 \pm3.0$ & \\ &2445910.933 &$-84.8 \pm1.3$ & \\ &2445911.935 &$-73.0 \pm1.8$ & \\ &2445912.923 &$-63.9 \pm1.7$ & \\ &2446326.874 &$-72.4 \pm2.3$ & \\ &2446327.907 &$-65.5 \pm3.7$ & \\ &2446328.910 &$-100.1 \pm1.2$ & \\ &2446330.890 &$-70.5 \pm1.2$ & \\ &2446331.881 &$-60.5 \pm2.3$ &$-78.8$ \\ \\ 6 &2445908.961 &$-13.3 \pm3.5$ & \\ &2446326.940 &$-88.3 \pm4.1$ & \\ &2446327.955 &$-47.7 \pm6.5$ &$-52.3$ \\ \\ 7 &2445909.963 &$-57.6 \pm5.3$ & \\ &2446327.942 &$-61.3 \pm2.9$ & \\ &2446331.020 &$-68.7 \pm4.2$ &$-62.7$ \\ \\ 11 &2445910.956 &$-80.7 \pm1.2$ & \\ &2446330.919 &$-70.9 \pm5.1$ & \\ &2446331.909 &$-70.4 \pm3.3$ &$-79.1$ \\ \\ 18 &2445911.760 &$-82.1 \pm10.1$ & \\ &2446326.914 &$-81.6 \pm3.3$ &$-81.6$ \\ \\ 23 &2446328.955 &$-77.8 \pm13.0$ &$-77.8$ \\ \\ 24 &2446330.972 &$-69.8 \pm8.5$ &$-69.8$ \\ \\ & &Cluster Mean = &$-79.4 \pm1.0$ \\ \hline \end{tabular} \end{center} \end{table} Such complications may be endemic to the field of both Berkeley 58 and NGC 7790, where the line of sight crosses the interarm region between the Sun and portions of the local spiral feature, then intercepts the Perseus spiral arm with a marked increase in space density for young B-type stars and young-to-intermediate age star clusters. The separation of spiral arm stars from cluster members is difficult but achievable, since the radial velocities for CG Cas and Berkeley 58 stars listed in Table 5 imply a conspicuous velocity difference between the cluster and spiral arm stars. The anomalously young B stars noted above are objects like stars 6 (V654 Cas), 7, and possibly 24, which have systematically more positive velocities than likely cluster members: stars 11, 18, and 23, which have radial velocities close to the systemic velocity of CG Cas \citep[see Fig. 7, which includes radial velocity measurements from][]{jo37,me91,go98}.
Except for star 11, which may be anomalous, stars with radial velocities close to that of CG Cas also have spectral types near the expected B6.5 V turnoff. Unfortunately it is not possible to identify fainter cluster members by the same technique, given the bright limit for the present radial velocity survey. Follow-up observations would be useful in that regard. \begin{figure} \begin{center} \includegraphics[width=6cm]{turnerf7.eps} \end{center} \caption{The radial velocity variations of CG Cas, with cited uncertainties, as measured in this paper (open circles), from \citet{me91} and \citet{go98} (filled circles), and from \citet{jo37} (open squares). The curve is a simple spectroscopic binary solution to the data from the first three data sets, the data from \citet{jo37} exhibiting systematic deviations near velocity minimum.} \label{fig7} \end{figure} The complications arising from contamination of the cluster field by young stars in the Perseus arm and likely circumstellar reddening for late B-type members were addressed by identifying unaffected cluster stars from their reddenings, which are close to E$_{B-V}{\rm (B0)} = 0.70$. The field of the CCD survey near the cluster centre was found to exhibit a mean reddening of E$_{B-V}{\rm (B0)} = 0.697 \pm0.025$, that for the region of CG Cas a mean reddening of E$_{B-V}{\rm (B0)} = 0.685 \pm0.022$. Stars with full {\it UBV} data were identified as likely cluster members on the basis of reddenings comparable to or larger than those values, while stars near the cluster centre lacking {\it U}-band data were assumed to have B0 star colour excesses as above, but intrinsic colours adjusted for the spectral type dependence of reddening \citep[see][]{fe63}. A-type dwarfs can suffer complications arising from the effects of rotation on their stellar continua and {\it UBV} colours \citep{te06b}, so the adoption of space reddenings for such stars may circumvent potential biases introduced by dereddening their colours to the intrinsic relation for zero-age zero-rotation main sequence stars. The resulting reddening-corrected colour-magnitude diagram for the cluster is plotted in Fig. 8 for 145 likely members, along with CG Cas and its light variations and star 5, which is considered to be a potential K giant member. The reddening for CG Cas corrected for its colour is E$_{B-V} = 0.64 \pm0.02$. A photometric reddening could be obtained from the {\it BVI$_c$} observations of \citet{he96} \citep[see][]{la07}, but a field reddening was adopted as a precaution against potential bias towards large-amplitude Cepheids lying near the centre of the instability strip (unnecessary in the present case, as it turns out). The distance to Berkeley 58 is established by 40 of its A-type ZAMS members, which yield a value of $V_0 - {\rm M}_V = 12.40 \pm 0.12$ s.d., corresponding to a distance of $3026 \pm166$ pc. Except for star 11, which is conceivably a rapid rotator observed nearly pole-on, the bluest cluster stars correspond to spectral type B6 with ({\it B--V})$_0 = -0.16$. A comparison with stellar evolutionary models \citep{me93} implies a cluster age of $(10 \pm 1) \times 10^7$ years ($\log \tau = 8.0 \pm0.05$). The corresponding mass of cluster stars falling at the tip of the main-sequence red turnoff (RTO) is $5.4 M_{\sun}$ \citep{me93}. \begin{figure} \begin{center} \includegraphics[width=6cm]{turnerf8.eps} \end{center} \caption{A reddening-free colour-magnitude diagram for Berkeley 58. 
The dashed curve represents the ZAMS for $V_0-M_V=12.40$, and the solid line with dotted lines on either side represents an isochrone from \citet{me93} for $\log \tau = 8.0 \pm0.1$. The range of light variations for CG Cas is depicted, as are the observational boundaries for the Cepheid instability strip. The red object on the evolved giant sequence is star 5.} \label{fig8} \end{figure} \section{CG Cassiopeiae} The systemic radial velocity of CG Cas (Table 5) is a close match to the mean velocity of Berkeley 58 derived from likely cluster members 11, 18, and 23, and the evolutionary age of the cluster closely matches what is predicted for the pulsation period of the Cepheid \citep{tu96c}. The luminosity of CG Cas as a likely member of Berkeley 58 is $\langle M_V \rangle = -3.06 \pm0.12$, which matches a value of $\langle M_V \rangle = -3.04$ predicted with a Cepheid period-radius relation and the inferred effective temperature of CG Cas ($\log T_{\rm eff} = 3.775$) from its derived intrinsic colour \citep{tb02}. The case for membership of CG Cas in Berkeley 58 is very strong. The exact evolutionary status of CG Cas can be established from the direction and rate of its period changes \citep{te06a}, in conjunction with its large blue light amplitude of $\Delta B = 1.22$ \citep{be07}. The period changes for CG Cas were established here from examination of archival photographic plates in the Harvard and Sternberg collections, as well as from an analysis of new and existing photometry for the star. A working ephemeris for CG Cas based upon the available data was: \begin{displaymath} {\rm JD}_{\rm max} = 2432436.94 + 4.3656292 \: E , \end{displaymath} where $E$ is the number of elapsed cycles. An extensive analysis of all available observations produced the data summarized in Table 6, which lists the results for different epochs, the type of data analyzed (PG = photographic, VIS = visual telescopic observations, B = photoelectric {\it B}, and V = photoelectric {\it V}), the number of observations used to establish the times of light maximum, and the source of the observations, in addition to the temporal parameters. The data are plotted in Fig. 9. A regression analysis of the O--C data of Table 6 produced a parabolic solution for the ephemeris defined by: \begin{displaymath} {\rm JD}_{\rm max} = 2432436.9493(\pm0.0080) \end{displaymath} \begin{displaymath} + 4.3656289(\pm0.0000024) \: E + 1.1757(\pm0.0983) \times 10^{-7} \:E^2 , \end{displaymath} which is plotted in Fig. 9. The parabolic trend corresponds to a period increase of $+0.170 \pm 0.014$ s yr$^{-1}$ ($ \log{\dot{P}}= -0.770 \pm 0.036$), a value typical of Cepheids lying slightly blueward of the centre of the instability strip and in the third crossing. The location of CG Cas in Fig. 8 relative to the observational boundaries of the Cepheid instability strip \citep{te06b} is consistent with that conclusion, although the stellar evolutionary models seem to require adjustments (metallicity, mixing of surface layers?) to match the observations. \begin{figure} \begin{center} \includegraphics[width=6cm]{turnerf9.eps} \end{center} \caption{The differences between observed (O) and computed (C) times of light maximum for CG Cas, computed in units of pulsation phase. 
The upper diagram shows the actual O--C variations with their uncertainties, the lower diagram the residuals from the calculated parabolic evolutionary trend.} \label{fig9} \end{figure} \setcounter{table}{5} \begin{table*} \begin{minipage}{13cm} \caption[]{Times of Maximum Light for CG Cas.} \label{tab6} \begin{tabular}{ccccccl} \hline HJD$_{\rm max}$ &$\pm \sigma$ &Band &Epoch &O--C &Observations &Reference \\ & & &(E) &(phase) &(n) & \\ \hline 2413407.3442 &0.0292 &PG &--4359 &+0.1714 &55 &This paper (Harvard) \\ 2415144.8677 &0.0428 &PG &--3961 &+0.1746 &7 &This paper (SAI )\\ 2416314.8382 &0.0338 &PG &--3693 &+0.1566 &72 &This paper (Harvard) \\ 2417572.0492 &0.1070 &PG &--3405 &+0.0664 &11 &This paper (SAI) \\ 2419794.1315 &0.0336 &PG &--2896 &+0.0436 &63 &This paper (Harvard) \\ 2423788.6688 &0.0271 &PG &--1981 &+0.0304 &98 &This paper (Harvard) \\ 2426102.4299 &0.0196 &PG &--1451 &+0.0082 &128 &This paper (Harvard) \\ 2426940.6916 &0.0567 &VIS &--1259 &+0.0691 &46 &\citet{la33} \\ 2428023.3553 &0.0571 &PG &--1011 &+0.0569 &19 &This paper (SAI) \\ 2428455.5134 &0.0271 &PG &--912 &+0.0177 &92 &This paper (Harvard) \\ 2429568.7382 &0.0559 &PG &--657 &+0.0071 &28 &This paper (SAI) \\ 2430847.7814 &0.0321 &PG &--364 &--0.0790 &81 &This paper (Harvard) \\ 2431576.8823 &0.0854 &PG &--197 &--0.0381 &17 &\citet{er61} \\ 2433100.4046 &0.0430 &PG &+152 &--0.1203 &59 &This paper (Harvard) \\ 2433183.4566 &0.0308 &PG &+171 &--0.0152 &37 &This paper (SAI) \\ 2433371.0678 &0.0848 &PG &+214 &--0.1261 &23 &\citet{er61} \\ 2434117.6643 &0.0350 &PG &+385 &--0.0521 &25 &This paper (SAI) \\ 2435174.1804 &0.0285 &PG &+627 &--0.0182 &74 &This paper (SAI) \\ 2435379.4291 &0.0443 &PG &+674 &+0.0459 &10 &\citet{ro59} \\ 2435619.5498 &0.1388 &PG &+729 &+0.0570 &19 &\citet{er61} \\ 2435837.7841 &0.0168 &PG &+779 &+0.0099 &18 &\citet{zs59} \\ 2436802.5876 &0.0070 &B &+1000 &+0.0094 &13 &\citet{oo60} \\ 2436802.6183 &0.0119 &V &+1000 &+0.0401 &15 &\citet{oo60} \\ 2436933.5492 &0.0054 &B &+1030 &+0.0021 &22 &\citet{ba62} \\ 2436937.9440 &0.0085 &V &+1031 &+0.0313 &23 &\citet{ba62} \\ 2438666.6957 &0.0174 &PG &+1427 &--0.0061 &41 &This paper (SAI) \\ 2439077.0406 &0.0299 &PG &+1521 &--0.0303 &16 &This paper (SAI) \\ 2440268.8346 &0.0241 &PG &+1794 &--0.0530 &24 &This paper (SAI) \\ 2441146.3548 &0.0121 &PG &+1995 &--0.0242 &95 &This paper (SAI) \\ 2441866.7282 &0.0142 &PG &+2160 &+0.0204 &55 &This paper (SAI) \\ 2442355.6761 &0.0178 &PG &+2272 &+0.0178 &47 &This paper (SAI) \\ 2442862.0722 &0.0159 &PG &+2388 &+0.0010 &74 &This paper (SAI) \\ 2443045.5091 &0.0058 &V &+2430 &+0.0815 &71 &\citet{ch82} \\ 2443957.9197 &0.0206 &PG &+2639 &+0.0756 &25 &This paper (SAI) \\ 2444844.1310 &0.0099 &B &+2842 &+0.0643 &9 &\citet{be86} \\ 2444852.8817 &0.0150 &V &+2844 &+0.0837 &11 &\citet{be86} \\ 2445189.0177 &0.0117 &B &+2921 &+0.0663 &8 &\citet{be86} \\ 2445189.0509 &0.0074 &V &+2921 &+0.0995 &8 &\citet{be86} \\ 2445394.1872 &0.0098 &B &+2968 &+0.0512 &14 &This paper \\ 2445429.1355 &0.0115 &V &+2976 &+0.0745 &15 &This paper \\ 2445883.1690 &0.0061 &B &+3080 &+0.0826 &8 &\citet{be86} \\ 2445883.1870 &0.0086 &V &+3080 &+0.1006 &8 &\citet{be86} \\ 2447760.4530 &0.0042 &B &+3510 &+0.1461 &39 &\citet{be92a} \\ 2447760.4823 &0.0059 &V &+3510 &+0.1754 &39 &\citet{be92a} \\ 2448118.4162 &0.0060 &B &+3592 &+0.1277 &18 &\citet{be92b} \\ 2448118.4546 &0.0085 &V &+3592 &+0.1661 &18 &\citet{be92b} \\ 2448515.7127 &0.0043 &B &+3683 &+0.1520 &20 &\citet{be92c} \\ 2448515.7328 &0.0052 &V &+3683 &+0.1721 &20 &\citet{be92c} \\ 2451458.2287 
&0.0152 &V &+4357 &+0.2341 &27 &\citet{wo04} \\ \hline \end{tabular} \end{minipage} \end{table*} \section{Discussion} The case for potential membership of the Cepheid CG Cas in the sparse open cluster Berkeley 58 has been studied using photometric (pe, pg, CCD) observations, spectroscopy (V$_{\rm R}$, spectral types), star counts, and O--C data for the Cepheid. The cluster Berkeley 58 is particularly difficult to separate from the young stars of the Perseus spiral arm, which raises concerns about future studies of distant open cluster calibrators for the Cepheid PL relation. Careful analysis of the available data leads to a cluster reddening of E$_{B-V}{\rm (B0)} = 0.70$, a distance of $3.03 \pm0.17$ kpc, and an age of $(10 \pm 1) \times 10^7$ years. CG Cas is a likely member on the basis of radial velocity, location outside the cluster nucleus within the cluster coronal region, evolutionary status indicated by its period changes and light amplitude, and implied luminosity. It becomes an important Cepheid calibrator lying near the centre of the instability strip. It may seem unusual that many potential Cepheid calibrators lie in cluster coronae rather than cluster nuclear regions \citep{tu85}, but a possible explanation relates to two dynamical lines of evidence. First, massive cluster members lie preferentially in outer regions of clusters \citep{bu78}, possibly because of how proto-cluster interstellar clouds fragment into proto-stars. Second, as indicated by colour-magnitude diagrams for NGC 654 \citep{st80} and other young clusters \citep{tu96b}, cluster nuclear regions tend to be dominated by rapidly rotating stars, possibly the result of merged binary systems, and other close binaries, in which case potential Cepheid progenitors are less likely to evolve to the dimensions typical of pulsating variables because of restrictions on their dimensions engendered by potential physical companions. The case of CG Cas in Berkeley 58 appears to be yet another example of the effect. \subsection*{ACKNOWLEDGEMENTS} The present study was supported by research funding awarded through the Natural Sciences and Engineering Research Council of Canada (NSERC), through the Small Research Grants program of the American Astronomical Society, through the Russian Foundation for Basic Research (RFBR), and through the program of Support for Leading Scientific Schools of Russia. We are indebted to Ron Lyons for scanning the radial velocity plates used in this study, and to the director of Harvard College Observatory for access to the plate stacks.
\section{Introduction} In the early eighties tools from algebraic geometry were applied by V. Goppa to construct linear codes using algebraic curves over finite fields, see \cite{G1982}. Nowadays these codes are called algebraic-geometric codes, AG codes for short. The starting point in the construction of an AG code is a projective, absolutely irreducible, nonsingular algebraic curve $\mathcal X$ of genus $g \geq 1$ defined over the finite field $\mathbb {F}_q$ with cardinality $q$. Let $F=\mathbb {F}_q(\mathcal X)$ be its function field with $\mathbb {F}_q$ being the field of constants. Consider $Q_1, \dots , Q_n$ pairwise distinct rational places on $F$. Let $D=Q_1+ \dots +Q_n$ and $G$ be divisors such that $Q_i$ is not in the support of $G$ for $i=1, \dots, n$. The linear code $C_{\Omega}(D,G)$ is defined by $$ C_{\Omega}(D,G)=\{(\mbox{res}_{Q_1}(\eta),\ldots,\mbox{res}_{Q_n}(\eta))\mid \eta\in \Omega(G-D)\}\subseteq \mathbb {F}_q^n, $$ where $\Omega(G-D)$ is the space of $\mathbb {F}_q$-rational differentials $\eta$ on $\mathcal{X}$ such that either $\eta=0$ or $\mbox{div}(\eta)\succeq G-D$ and $\mbox{res}_{Q_j}\eta$ is the residue of $\eta$ at $Q_j$. The code $C_{\Omega}(D,G)$ has length $n$ and dimension $k = i(G-D) - i(G)$ where $i(G)$ denotes the speciality index of the divisor $G$. We say that $C_\Omega(D,G)$ is an $[n, k, d]$-code where $d$ denotes the minimum distance of the code. One of the main features of this code is that its minimum distance $d$ satisfies the classical Goppa bound, namely $$d \geq \deg G - (2g-2).$$ The integer $d^*=\deg G - (2g-2)$ is usually called the \emph{designed minimum distance}. One way to obtain codes with good parameters is to find codes that improve the designed minimum distance. If $G=\alpha P$ for some rational place $P$ on $F$ and $D$ is the sum of other rational places on $\mathcal X $, then the code $C_{\Omega}(D,G)$ is called a {\it one-point AG code}. Analogously, if $G= \alpha_1 P_1+ \cdots +\alpha_nP_n$ for $n$ distinct rational places $P_1,\ldots,P_n$ on $\mathcal X$, then $C_{\Omega}(D,G)$ is called an {\it $n$-point AG code}. For a more detailed introduction to AG codes, see \cite{HLP1998,S2009}. For a one-point divisor $G=\alpha P$ on the function field $F$, Garcia, Kim, and Lax \cite{GL1992,GKL1993} improved the designed minimum distance using the arithmetical structure of the Weierstrass semigroup at the rational place $P$. For a two-point divisor $G = \alpha_1P_1 + \alpha_2P_2$, Homma and Kim \cite{HK2001} introduced the notion of pure gaps and obtained similar results. By choosing $\alpha_1$ and $\alpha_2$ satisfying certain arithmetical conditions depending on the structure of the Weierstrass semigroup at $P_1$ and $P_2$, they improved the designed minimum distance. Matthews \cite{G2001} showed that for an arbitrary curve there exist two-point AG codes that have better parameters than any comparable one-point AG code constructed from the same curve. Finally, for divisors $G= \alpha_1 P_1+ \cdots +\alpha_nP_n$ at $n$ distinct rational places on $\mathcal X$, results from the theory of generalized Weierstrass semigroups and pure gaps were obtained by Carvalho and Torres \cite{CT2005}. They have been used to obtain AG codes whose minimum distance beats the classical Goppa bound on the minimum distance, see Theorem \ref{distmanypoints}. Many applications and results on AG codes can be found for one- and two-point codes in \cite{HK2001, DK2011, ST2014, MQS2016}, and for $n$-point codes in \cite{G2004, CT2005, CK2007}. 
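To make the designed parameters concrete, the short sketch below is an editorial illustration (the numerical inputs are hypothetical, not taken from the references above): it evaluates the Goppa bound $d^*=\deg G-(2g-2)$ together with the dimension $k=n-\deg G+g-1$, which holds whenever $2g-2<\deg G<n$.
\begin{verbatim}
# Editorial sketch: designed parameters of C_Omega(D,G) for hypothetical inputs.
def designed_parameters(n, g, deg_G):
    assert 2 * g - 2 < deg_G < n       # range in which both formulas below hold
    k = n - deg_G + g - 1              # dimension of C_Omega(D,G)
    d_star = deg_G - (2 * g - 2)       # Goppa designed minimum distance
    return k, d_star

print(designed_parameters(64, 6, 30))  # hypothetical genus-6 example: (39, 20)
\end{verbatim}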
The minimum distances of several AG codes have been studied in the case when $\mathcal X$ is a Kummer curve. For instance, when $\mathcal X$ is the Hermitian curve, results can be found in \cite{HK2001, G2001, HK2006}, or a subcover of the Hermitian curve in \cite{M2004}, or a generalization of the Hermitian curve in \cite{ST2014}. In this paper we analyze $n$-point codes when $F$ is a Kummer extension defined by $y^m= f(x),$ where $ f(x) \in \mathbb {F}_q[x]$ is a separable polynomial of degree $r$ coprime to $m$. We extend results by Castellanos, Masuda, and Quoos \cite{MQS2016} on the Weierstrass semigroup at two rational places $P_1$ and $P_2$. In particular, for a class of Kummer curves we explicitly compute the number of gaps at $P_1, P_2$, see Theorem \ref{twopoints}, generalizing a result by Matthews {\rm \cite[Theorem 3.6]{G2001}}. For Kummer extensions, we also study the Weierstrass semigroup at many rational places under some hypotheses on the places. We give an arithmetic characterization of pure gaps (Propositions \ref{puregapsmanypoints} and \ref{puregapsinfty}) and apply it to a large family of Kummer extensions to provide families of pure gaps (Propositions \ref{puregapstwopoints}, \ref{manypoints1}, and \ref{manypoints2}). We obtain codes such that the Singleton defect $\delta=n+1-k-d$ is improved, see Remarks \ref{applications1} and \ref{applications2}. We illustrate our results by constructing AG codes on many points from the Hermitian function field, and observe that the best improvements on the minimum distance with respect to the corresponding ones in the MinT's Tables~\cite{MinT} are obtained by two- or three-point codes. The paper is organized as follows. In Section \ref{Sec:Preliminary} we set the notations and present the preliminary results on the Weierstrass semigroup at two and many points. In Sections \ref{Sec:TwoPoints} and \ref{Sec:ManyPoints} we consider a large class of Kummer curves. In particular, in Section \ref{Sec:TwoPoints} we study the Weierstrass semigroup at two totally ramified rational points and compute the number of gaps at them. In Section \ref{Sec:ManyPoints} we give an arithmetic characterization of pure gaps at many points which provides families of pure gaps. We apply them to the construction of AG codes improving the Singleton defect. We illustrate our results by constructing AG codes on many points from the Hermitian curve, see Example \ref{ExHerm}. \section{Preliminary results}\label{Sec:Preliminary} Let $\mathcal X$ be a projective, absolutely irreducible, nonsingular algebraic curve of genus $g$ defined over the finite field $\mathbb {F}_q$. Let $F=\mathbb {F}_q(\mathcal X)$ be its function field with the field of constants $\mathbb {F}_q$. For a function $z$ in $F$, $(z)$ and $(z)_\infty$ stand for its principal and polar divisor, respectively. We denote by $\mathbb {P}(F)$ the set of places of $F$ and by $\mathcal D_F$ the free abelian group generated by the places of $F$. The elements $D$ of $\mathcal D_F$ are called {\it divisors} and can be written as $$ D=\sum_{P\in \mathbb {P}(F)}n_P\,P\quad \text{ with } n_P\in \mathbb {Z},\; n_P=0\text{ for almost all }P\in\mathbb {P}(F). $$ The degree of a divisor $D$ is $\deg(D)=\sum\limits_{P\in \mathbb {P}(F)}n_P \cdot\deg P$, where $\deg P$ is the degree of the place $P$ over $\mathbb {F}_q$. Given a divisor $D\in\mathcal D_F$, the Riemann-Roch vector space associated to $D$ is defined by $ \mathcal L(D):=\{z\in F\,|\, (z)\ge -D\}\cup \{0\}. 
$ We denote by $\ell(D)$ the dimension of $\mathcal L(D)$ as a vector space over the field of constants $\mathbb {F}_q$. From the Riemann-Roch Theorem, it follows that, for divisors $D$ such that $2g-1 < \deg D$, we have $\ell(D) = \deg(D)+1-g$, see \cite[Th. 1.5.17]{S2009}. Let $\mathbb {N}$ be the set of non-negative integers. For distinct rational places $P_1, \dots, P_s$ on $\mathbb {P}(F)$, let $$ H(P_1, \dots, P_s)= \{(n_1, \dots , n_s) \in \mathbb {N}^s \ | \ \exists \ z \in F \text{ with } (z)_\infty = n_1P_1 + \cdots + n_sP_s \} $$ be the Weierstrass semigroup at $P_1, \dots , P_s$. The complement $G(P_1, \dots, P_s)=\mathbb {N}^s \setminus H(P_1, \dots, P_s)$ is always a finite set and its elements are called {\it Weierstrass gaps} at $P_1, \dots, P_s$. A gap can be characterized in terms of the dimension of certain Riemann-Roch spaces, more specifically, an $s$-tuple $(n_1, \dots , n_s) \in \mathbb {N}^s$ is a gap at $P_1, \dots, P_s$ if and only if $\ell\big(\sum_{i=1}^s n_iP_i\big)= \ell\big((\sum_{i=1}^s n_iP_i)- P_j\big)$ for some $j \in \{1, \dots, s\}$. For $s=1$, the semigroup $H(P_1)$ is the well-known Weierstrass semigroup at one point on the curve and $G(P_1)$ has exactly $g$ gaps. For $s \geq 2$, the number of gaps may vary depending on the choice of the points. When $s=2$, the size of $G(P_1,P_2)$ was given by M. Homma \cite{H1996} in terms of $G(P_1)$ and $G(P_2)$ as follows. Let $1=a_1<a_2<\cdots<a_g$ and $1=b_1<b_2<\cdots<b_g$ be the gap sequences at $P_1$ and $P_2$, respectively. For $i=1,\ldots, g$, let $\gamma(a_i) = \min \{ b \in G(P_2) \mid (a_i, b) \in H(P_{1}, P_{2}) \}$. By \cite[Lemma 2.6]{K1994}, $\{ \gamma(a_i) \mid i=1, \dots, g \} = G(P_{2})$. Therefore, there exists a permutation $\sigma$ of the set $\{1, \ldots , g\}$ such that $\gamma(a_i)=b_{\sigma(i)}$, and $$\Gamma(P_{1}, P_{2})=\{ (a_{i} , b_{\sigma(i)}) \mid i=1, \ldots ,g\}$$ is the graph of a bijective map $\gamma $ between $G(P_{1})$ and $G(P_{2})$. Define $$r(P_1,P_2)=|\{(x,y)\in\Gamma(P_1,P_2)\mid x<y,\gamma(x)>\gamma(y)\}|$$ the number of inversions for $\gamma$. \begin{theorem}[{\!\!\cite[Theorem 1]{H1996}}]\label{numberofgaps} Under the above notation, the number of gaps at $P_1,P_2$ is $$ |G(P_1,P_2)|=\sum_{i=1}^g a_i + \sum_{i=1}^g b_i - r(P_1,P_2). $$ \end{theorem} A characterization of $\Gamma(P_1,P_2)$ is the following. \begin{lemma}[{\!\!\cite[Lemma 2]{H1996}}] \label{lemmagamma} Let $\Gamma '$ be a subset of $(G(P_{1}) \times G(P_{2})) \cap H(P_{1},P_{2})$. If there exists a permutation $\tau$ of $\{ 1, \ldots , g\}$ such that $\Gamma ' = \{ (a_{i} , b_{\tau(i)}) \mid i=1, \ldots ,g \}$, then $\Gamma ' = \Gamma(P_{1}, P_{2})$. \end{lemma} The Weierstrass semigroup $H(P_{1}, P_{2})$ can be recovered from $\Gamma (P_{1}, P_{2})$ as follows. For $\mathbf{x} = (a_1, b_1),\mathbf{y} = (a_2, b_2) \in \mathbb{N}^2$, define the \textit{least upper bound} of $\mathbf{x}$ and $\mathbf{y}$ as $\mathrm{lub}(\mathbf{x},\mathbf{y})= \left(\max\{a_1, a_2\}, \max\{b_1, b_2\}\right)$. Then, by \cite[Lemma 2.2]{K1994}, \begin{equation}\label{lub} H(P_{1},P_{2}) = \{ \mathrm{lub} (\mathbf{x},\mathbf{y}) \mid \mathbf{x},\mathbf{y} \in \Gamma(P_{1}, P_{2}) \cup (H(P_{1}) \times \{0\}) \cup (\{0\} \times H(P_{2})) \}. \end{equation} We now introduce the important concept of pure gaps that will be used in the construction of AG codes. 
An $s$-tuple $(n_1, \dots, n_s)\in\mathbb {N}^s$ is a {\it pure gap} at $P_1, \dots , P_s$ if $$\ell\Big(\sum_{i=1}^s n_iP_i\Big)= \ell\Big(\big(\sum_{i=1}^s n_iP_i\big)- P_j\Big) \text{ for all } j=1, \dots, s.$$ The set of pure gaps at $P_1,\ldots,P_s$ is denoted by $G_0(P_1,\ldots,P_s)$. Clearly, a pure gap is always a gap. \begin{lemma}[{\!\!\cite[Lemma 2.5]{CT2005}}]\label{purecondition} An $s$-tuple $(n_1, \dots, n_s)$ is a pure gap at $P_1, \dots ,P_s$ if and only if $\ell\big(\sum_{i=1}^s n_iP_i\big)= \ell\big(\sum_{i=1}^s (n_i-1)P_i\big)$. \end{lemma} Pure gaps can be used to improve the designed minimum distance of AG codes. \begin{theorem}[{\!\!\cite[Theorem 3.4]{CT2005}}] \label{distmanypoints} Let $P_1, \dots , P_s,Q_1, \dots, Q_n$ be pairwise distinct $\mathbb {F}_q$-rational points on $\mathcal X$ and $(a_1, \dots , a_s),(b_1, \dots , b_s)\in\mathbb {N}^s$ be two pure gaps at $P_1, \dots, P_s$. Consider the divisors $D= Q_1+ \cdots + Q_n$ and $G= \sum_{i=1}^s (a_i+b_i-1)P_i$. Suppose that $a_i \leq b_i$ for all $i=1, \dots , s$, and that each $s$-tuple $(c_1, \dots , c_s)\in\mathbb {N}^s$ with $a_i \leq c_i \leq b_i$ for $i=1,\ldots,s$, is also a pure gap at $P_1, \dots , P_s$. Then the minimum distance $d$ of $C_{\Omega}(D,G)$ satisfies $$ d \geq \deg(G) -(2g-2)+s+\sum_{i=1}^s (b_i-a_i).$$ \end{theorem} Hereafter we work on a Kummer extension $F=\mathbb {F}_q(x,y)/\mathbb {F}_q(x)$ defined by $y^m=f(x)$, where $m\geq2$, $p\nmid m$ for $p$ the characteristic of $\mathbb{F}_q$, $f(x)$ is a separable polynomial of degree $r$ in $\mathbb{F}_q[x]$, and $\gcd(m,r)=1$. We denote by $P_1,\ldots ,P_s, P_\infty$ ($s \leq r$) the rational places of $F$ which are totally ramified in the extension $F/ \mathbb {F}_q(x)$, where $P_\infty$ is the pole of $x$. The genus $g$ of $F$ is $(m-1)(r-1)/2$. We use a result by Maharaj \cite{M2004} to build up an arithmetic characterization of pure gaps at many points in a \emph{Kummer extension}. Firstly we need the definition of the restriction of a divisor in a function field extension $F/K$. For any divisor $D$ of $F$ and any intermediate field $K \subseteq E\subseteq F$, write $D = \sum_{R\in \mathbb {P}(E)}\;\sum_{Q\in \mathbb {P}(F),\,Q|R}\, n_Q\, Q$. We define the restriction of $D$ to $E$ as $$ D\Big|_{E}= \sum\limits_{R\in \mathbb {P}(E)} \min\,\left\{\left\lfloor\frac{n_Q}{e(Q|R)}\right\rfloor\colon {Q|R}\right\}\,R, $$ where $e(Q|R)$ is the ramification index of $Q$ over $R$. \begin{theorem}[{\!\!\cite[Theorem 2.2]{M2004}}] \label{ThMaharaj} Let $F/\mathbb {F}_q(x)$ be a Kummer extension of degree $m$ defined by $y^m=f(x)$. Then, for any divisor $D$ of $F$ that is invariant under the action of $Gal(F/\mathbb {F}_q(x))$, we have that $$ \mathcal{L}(D)= \bigoplus\limits_{t=0}^{m-1} \mathcal L\left(\left[D+(y^t)\right]\Big|_{\mathbb {F}_q(x)}\right)\,y^t,$$ where $\left[D+(y^t)\right]\Big|_{\mathbb {F}_q(x)}$ denotes the restriction of the divisor $D+(y^t)$ to $\mathbb {F}_q(x)$. \end{theorem} \section{The Weierstrass semigroup at two points}\label{Sec:TwoPoints} Let $F/\mathbb {F}_q(x)$ be a Kummer extension defined by $y^m=f(x)$, where $f(x)\in\mathbb {F}_q[x]$ is separable of degree $r$ coprime with $m$, and consider the Weierstrass semigroup $H(P,Q)$ at two rational places of $F$ which are totally ramified in $F/\mathbb {F}_q(x)$. 
As pointed out in Equation \eqref{lub}, the semigroup $H(P,Q)$ is related to the set $\Gamma(P,Q)$, and \cite[Theorem 4.3]{MQS2016} yields \begin{equation*}\label{gammainfty} \Gamma(P_\infty, P_1)= \left\{ \left(mr-mj-ri, i+m(j-1)\right) \mid 1 \leq i \leq m-1-\left\lfloor \frac{m}{r}\right\rfloor, 1\leq j \leq r-1- \left\lfloor \frac{ri}{m}\right\rfloor \right\}, \end{equation*} where $P_\infty$ is the unique pole of $x$ and $P_1$ is another totally ramified place. We now compute $\Gamma(P_1,P_2)$, where $P_1$ and $P_2$ are two distinct rational places of $F$ different from $P_\infty$ and totally ramified in the extension $F/\mathbb {F}_q(x)$. \begin{proposition}\label{gammaP1P2} Let $F/\mathbb {F}_q(x)$ be a Kummer extension defined by $y^m=f(x)$, where $f(x)\in\mathbb {F}_q[x]$ is separable of degree $r$ and $\gcd(r,m)=1$. If $P_1$ and $P_2$ are two distinct totally ramified places of $F$ different from $P_\infty$, then $$ \Gamma(P_1,P_2)=\left\{\left(mi-j, m\left(\left\lceil\frac{rj}{m}\right\rceil-i\right)-j\right) \mid 1+\left\lfloor\frac{m}{r}\right\rfloor \leq j \leq m-1, 1\leq i \leq \left\lceil \frac{rj}{m}\right\rceil -1\right\}. $$ \end{proposition} \begin{proof} For $\iota\in\{1,2\}$ let $\alpha_\iota\in\mathbb {F}_q$ be such that $P_\iota$ is the unique zero of $x-\alpha_\iota$ in $F$. Let $i,j$ be positive integers and $k =\left\lceil\frac{jr}{m}\right\rceil-i$, so that $(i+k)m\geq jr$. By \cite[Prop. 3.1]{MQS2016}, the pole divisor of $\frac{y^{j}}{(x-\alpha_1)^i(x-\alpha_2)^k}$ is $(mi-j)P_1+(mk-j)P_2$. Also, for $j\in\left\{ 1+\left\lfloor\frac{m}{r}\right\rfloor,\ldots, m-1\right\}$ and $h\in\left\{ 1,\ldots, \left\lceil \frac{rj}{m}\right\rceil -1\right\}$, we have that $(mh-j)\in G(P_1)\cap G(P_2)$ by \cite[Th. 3.2]{MQS2016}. Hence, the set $$ \Gamma^{\prime} = \left\{\left(mi-j, m\left(\left\lceil\frac{rj}{m}\right\rceil-i\right)-j\right)\ \mid \ 1+\left\lfloor\frac{m}{r}\right\rfloor \leq j \leq m-1, 1\leq i \leq \left\lceil \frac{rj}{m}\right\rceil -1\right\} $$ is a subset of $(G(P_1)\times G(P_2)) \cap H(P_1,P_2)$. The cardinality of $\Gamma^{\prime}$ is $$ |\Gamma^{\prime}| = \sum_{k=1+ \left\lfloor\frac{m}{r} \right\rfloor }^{m-1} \left( \left\lceil \frac{rk}{m} \right\rceil -1 \right)= \left(\sum_{k=1+ \left\lfloor\frac{m}{r} \right\rfloor }^{m-1} \left\lceil \frac{rk}{m} \right\rceil\right) -\left(m- \left\lfloor \frac{m}{r} \right\rfloor -1\right)$$ $$ = \left( \sum_{k=0}^{m-1} \left\lceil \frac{rk}{m} \right\rceil \right) - \left\lfloor\frac{m}{r} \right\rfloor -\left(m- \left\lfloor \frac{m}{r} \right\rfloor -1\right) =- \sum_{k=0}^{m-1} \left\lfloor \frac{-rk}{m} \right\rfloor -m+1$$ $$ = -\left(m-1\right)\left(-r-1\right)/2 -m+1= \left(m-1\right)\left(r-1\right)/2 = g, $$ using \cite[Page 94]{GKP}. Therefore $\Gamma'=\Gamma(P_1,P_2)$ by Lemma \ref{lemmagamma}. \end{proof} From Proposition \ref{gammaP1P2} we are able to compute the number of gaps at two totally ramified places in the case $m\equiv1\pmod r$. \begin{theorem}\label{twopoints} Let $F/\mathbb {F}_q(x)$ be a Kummer extension defined by $y^m=f(x)$, where $f(x)\in\mathbb {F}_q[x]$ is separable of degree $r$ and $\gcd(r,m)=1$. Let $P_\infty\in\mathbb {P}(F)$ be the pole of $x$ and $P_1\ne P_2$ be two other totally ramified rational places in $F/\mathbb {F}_q(x)$. If $m=ur+1$ for some integer $u$, then \begin{align*} & | G(P_1,P_2) | = \frac{ur(r-1)(3ur^2-5ur+4r+4u-2)}{12}\text{, and }\\ & | G(P_\infty,P_1) | = \frac{ur(r-1)(3ur^2-3ur+2r+2)}{12}. 
\end{align*} \end{theorem} \begin{proof} By Proposition \ref{gammaP1P2}, $$ \Gamma(P_1,P_2)=\left\{\left(mi-j, m\left(\left\lceil\frac{rj}{m}\right\rceil-i\right)-j\right)\ \mid \ 1+u \leq j \leq m-1, 1\leq i \leq \left\lceil \frac{rj}{m}\right\rceil -1\right\}. $$ Setting $(i_0, j_0)\in\mathbb {N}^2$ with $1+u \leq j_0 \leq m-1$ and $1\leq i_0 \leq \left \lceil \frac{rj_0}{m}\right\rceil -1$; by Theorem \ref{numberofgaps}, we need to count the number $r_{i_0,j_0}$ of pairs $(i_1, j_1)\in\mathbb {N}^2$ such that \begin{small} \begin{equation}\label{conditions} 1+u \leq j_1 \leq ru,\, 1\leq i_1 \leq \left \lceil \frac{rj_1}{m}\right\rceil -1,\, m\left(i_0-i_1\right) < j_0-j_1,\, m\left(\left\lceil \frac{rj_1}{m} \right\rceil - \left\lceil \frac{rj_0}{m} \right\rceil +i_0-i_1 \right) < j_1-j_0 . \end{equation} \end{small} For $h\in\{0,1\}$ write $j_h=k_h u+t_h$ with $k_h \in \left\{ 1, \dots, r-1\right\}$ and $t_h \in \left\{ 1, \dots , u\right\}$. Then $\left \lceil \frac{rj_h}{m} \right\rceil = k_h+1$. We split $r_{i_0,j_0}$ in a number of cases: \begin{itemize} \item $j_1=j_0$. Then \eqref{conditions} implies $i_0+1 \leq i_1 \leq k_1$. \item $j_1 > j_0$ and $k_1=k_0$. Then \eqref{conditions} implies $1 \leq t_0 \leq u-1$, $t_1 \geq t_0+1$, and $i_0+1 \leq i_1 \leq k_1$. \item $j_1 > j_0$ and $k_1>k_0$. Then \eqref{conditions} implies $i_0+k_1-k_0 \leq i_1 \leq k_1$. \item $j_1 < j_0$ and $k_1 < k_0$. Then \eqref{conditions} implies $1\leq t_0,t_1\leq u$, $1\leq i_0\leq k_1$, and $i_0\leq i_1 \leq k_1$. \item $j_1 < j_0$ and $k_1 = k_0$. Then \eqref{conditions} implies $2 \leq t_0 \leq u$, $t_1 \leq t_0-1$, and $i_0+1 \leq i_1 \leq k_1$. \end{itemize} By direct computation, this yields \begin{align*} r(P_1,P_2) & =\sum_{(i_0,j_0)\in\Gamma(P_1,P_2)} r_{i_0,j_0} =\sum_{k_0=1}^{r-1} \sum_{t_0=1}^u \sum_{i_0=1}^{k_0} (k_0-i_0)+ \sum_{k_0=1}^{r-1} \sum_{t_0=1}^{u-1} \sum_{t_1=t_0+1}^u\sum_{i_0=1}^{k_0} (k_0-i_0)\\ &+ \sum_{k_0=1}^{r-2} \sum_{t_0=1}^{u} \sum_{k_1=k_0+1}^{r-1}\sum_{t_1=1}^u \sum_{i_0=1}^{k_0} (k_0-i_0+1)+ \sum_{k_0=2}^{r-1} \sum_{t_0=1}^{u} \sum_{k_1=1}^{k_0-1}\sum_{t_1=1}^u\sum_{i_0=1}^{k_1} (k_1-i_0+1)\\ &+ \sum_{k_0=1}^{r-1} \sum_{t_0=2}^{u} \sum_{t_1=1}^{t_0-1}\sum_{i_0=1}^{k_0} (k_0-i_0) = \frac{u^2(r-2)(r-1)r(r+3)}{12}. \end{align*} Also, by \cite[Th. 3.2]{MQS2016}, we have \begin{align} \sum_{n \in G(P_1)} n & = \sum_{n \in G(P_2)} n = \sum_{j=1+u}^{m-1} \sum_{i=1}^{\left\lceil \frac{rj}{m} \right\rceil-1} \left(mi-j\right) =\sum_{k=1}^{r-1} \sum_{t=1}^{u} \sum_{i=1}^{k-1} \left((ur+1)i-(ku+t)\right) \\ &= \frac{ur(r-1)(2r^2u-2ru+2r-u-1)}{12}. \label{gapshere} \end{align} Therefore we obtain $$| G(P_1, P_2)| =\sum_{n \in G(P_1)} n + \sum_{n \in G(P_2)} n - r(P_1,P_2) =\frac{ur(r-1)(3r^2u-5ru+4r+4u-2)}{12}.$$ By \cite[Theorem 4.3]{MQS2016}, $$\Gamma(P_\infty,P_1)=\left\{ \left(mr-mj-ri, m\left(j-1\right)+i\right) \mid 1 \leq i \leq m-1- u , 1\leq j \leq r-1-\left\lfloor \frac{ri}{m} \right\rfloor \right\}.$$ For $(i_0, j_0)\in\mathbb {N}^2$ with $1 \leq i_0 \leq m-1- u$ and $1\leq j_0 \leq r-1-\left\lfloor \frac{ri_0}{m} \right\rfloor$, as above we need to count the number $s_{i_0,j_0}$ of pairs $(i_1,j_1)\in\mathbb {N}^2$ such that \begin{small} \begin{equation}\label{eq1} 1 \leq i_1 \leq m-1- u,\, 1\leq j_1 \leq r-1-\left\lfloor \frac{ri_1}{m} \right\rfloor,\, m\left(j_1-j_0\right) < r\left(i_0-i_1\right),\, m\left(j_1-j_0\right) < \left(i_0-i_1\right). 
\end{equation} \end{small} For $h\in\{0,1\}$ write $i_h=k_h u + t_h$, with $k_h\in\{0,\ldots,r-2\}$ and $t_h\in\{1,\ldots,u\}$. Then $\left \lfloor \frac{ri_h}{m} \right\rfloor = k_h$. We split $s_{i_0,j_0}$ in a number of cases: \begin{itemize} \item $i_1=i_0$. Then \eqref{eq1} implies $1\leq j_1\leq j_0-1$. \item $i_1>i_0$, $k_1>k_0$, and $t_1\leq t_0$. Then \eqref{eq1} implies $k_1-k_0+1\leq j_0\leq r-1-k_0$ and $1\leq j_1\leq k_0-k_1+j_0$. \item $i_1>i_0$, $k_1\geq k_0$, and $t_1> t_0$. Then \eqref{eq1} implies $k_1-k_0+2\leq j_0\leq r-1-k_0$ and $1\leq j_1\leq k_0-k_1-1+j_0$. \item $i_1<i_0$ and $k_1<k_0$. Then \eqref{eq1} implies $1\leq j_1\leq j_0$. \item $i_1<i_0$, $k_1=k_0$ and $t_1<t_0$. Then \eqref{eq1} implies $1\leq j_1\leq j_0$. \end{itemize} By direct computation, this yields \begin{align*} r(P_\infty, P_1)& =\sum_{(i_0,j_0)\in\Gamma(P_\infty,P_1)} s_{i_0,j_0} = \sum_{k_0=0}^{r-2}\sum_{t_0=1}^{u} \sum_{j_0=1}^{r-1-k_0}(j_0-1)\\ &+\sum_{k_0=0}^{r-2}\sum_{t_0=1}^{u} \sum_{k_1=k_0+1}^{r-2} \sum_{t_1=1}^{t_0} \sum_{j_0=k_1-k_0+1}^{r-1-k_0}(k_0-k_1+j_0) \\ &+ \sum_{k_0=0}^{r-2}\sum_{t_0=1}^{u} \sum_{k_1=k_0}^{r-2} \sum_{t_1=t_0+1}^{u} \sum_{j_0=k_1-k_0+2}^{r-1-k_0}(k_0-k_1-1+j_0)\\ & + \sum_{k_0=0}^{r-2}\sum_{t_0=1}^{u} \sum_{k_1=0}^{k_0-1} \sum_{t_1=1}^{u} \sum_{j_0=1}^{r-1-k_0}j_0+ \sum_{k_0=0}^{r-2}\sum_{t_0=1}^{u}\sum_{t_1=1}^{t_0-1} \sum_{j_0=1}^{r-1-k_0}j_0 =\frac{u(r-1)r(ur^2+r-u-5)}{12}. \end{align*} Also, by \cite[Th. 3.2]{MQS2016}, we have \begin{align*} \sum_{n \in G(P_\infty)} n & = \sum_{i=1}^{m-1-u} \sum_{j=1}^{r-1-\left\lfloor \frac{ri}{m} \right\rfloor} \left(mr-mj-ri\right)\\ & = \sum_{k=0}^{r-2} \sum_{t=1}^{u}\sum_{j=1}^{r-1-k} \left(mr-mj-r(ku+t)\right) = \frac{ur(r-1)(2ur^2-ur+r-2)}{12}, \end{align*} and $\sum_{n \in G(P_1)} n$ was computed in \eqref{gapshere}. Therefore we obtain $$| G(P_\infty, P_1)| =\sum_{n \in G(P_\infty)} n + \sum_{n \in G(P_1)} n - r(P_\infty,P_1) =\frac{ur(r-1)(3ur^2-3ur+2r+2)}{12}.$$ \end{proof} \begin{remark} If $\mathcal H$ is the function field of the Hermitian curve defined by $y^{q+1}=x^q+x$ over $\mathbb{F}_{q^2}$, then Theorem {\rm \ref{twopoints}} was already obtained in {\rm \cite[Th. 3.6]{G2001}}. In fact, the places of $\mathcal H$ which are totally ramified in $\mathcal H/\mathbb F_{q^2}(x)$ are centered at Weierstrass points of $\mathcal H$. \end{remark} \section{Pure gaps at many points and codes}\label{Sec:ManyPoints} Throughout this section, $F/\mathbb {F}_q(x)$ is a Kummer extension defined by $y^m=f(x)$, where $f(x)\in\mathbb {F}_q[x]$ is separable of degree $r$ and $\gcd(r,m)=1$. Let $P_\infty\in\mathbb {P}(F)$ denote the unique pole of $x$, while $P_1,\ldots,P_s$ ($s\geq1$) are other totally ramified places in $F/\mathbb {F}_q(x)$ different from $P_\infty$. In this section we give arithmetic conditions which characterize the pure gaps at $P_1,\ldots,P_s$ and at $P_\infty,P_1,\ldots,P_s$. We use this characterization to determine explicit families of pure gaps at many points and apply it to construct AG codes with good parameters. \begin{proposition}\label{puregapsmanypoints} Under the above notation, let $s\leq r$. 
The $s$-tuple $(a_1, \dots, a_s) \in\mathbb {N}^s$ is a pure gap at $P_1, \dots, P_s$ if and only if, for every $t \in \{0,\ldots ,m - 1\}$, exactly one of the following two conditions is satisfied: \begin{enumerate} \item[i)] $\sum_{i=1}^s \left\lfloor \frac{a_i+t}{m}\right\rfloor +\left\lfloor \frac{-rt}{m}\right\rfloor<0$; \item[ii)] $\sum_{i=1}^s \left\lfloor \frac{a_i+t}{m}\right\rfloor +\left\lfloor \frac{-rt}{m}\right\rfloor \geq 0$ and $\left\lfloor \frac{a_i+t}{m}\right\rfloor=\left\lfloor \frac{a_i-1+t}{m}\right\rfloor$, for all $i=1, \dots ,s$. \end{enumerate} \end{proposition} \begin{proof} Let $P_1,\ldots,P_r$, be all the places of $F$ which are totally ramified in $F/\mathbb {F}_q(x)$ except $P_\infty$, that is, $P_i$ is the zero of $x-\alpha_i$, where $f(x)=\prod_{i=1}^r (x-\alpha_i)$ is the separable polynomial defining $F$ by $y^m=f(x)$. Then the divisor of $y$ in $F$ is $(y)=\sum_{i=1}^{r}P_i-rP_{\infty}$, and hence, for any $t\in\{0,\ldots,m-1\}$, $$\sum_{i=1}^s a_iP_i+(y^t)=\sum_{i=1}^s (a_i+t)P_i+\sum_{i=s+1}^{r}tP_i- rtP_{\infty}\,.$$ Let $Q_1,\ldots,Q_r,Q_\infty$ be the places of $\mathbb {F}_q(x)$ lying under $P_1,\ldots,P_r,P_\infty$, respectively. Then $$ \left[\sum_{i=1}^s a_iP_i+\left(y^t\right) \right]\Big|_{{K(x)}} = \sum_{i=1}^s \left \lfloor \frac{a_i+t}{m}\right\rfloor Q_i +\left \lfloor \frac{-rt}{m}\right\rfloor Q_{\infty}. $$ Since $$ \mathcal{L}(\sum_{i=1}^s a_iP_i)=\bigoplus_{t=0}^{m-1}\mathcal{L}\left(\left[\sum_{i=1}^s a_iP_i+\left(y^t\right) \right]\Big |_{K(x)} \right)y^t, $$ by Theorem \ref{ThMaharaj}, we have $$ \ell\left(\sum_{i=1}^s a_iP_i\right)=\sum_{t=0}^{m-1}\ell\left( \sum_{i=1}^s \left \lfloor \frac{a_i+t}{m}\right\rfloor Q_i+\left \lfloor \frac{-rt}{m}\right\rfloor Q_{\infty}\right), $$ $$ \ell\left(\sum_{i=1}^s (a_i-1)P_i\right)=\sum_{t=0}^{m-1} \ell\left( \sum_{i=1}^s \left \lfloor \frac{a_i-1+t}{m}\right\rfloor Q_i+\left \lfloor \frac{-rt}{m}\right\rfloor Q_{\infty}\right). $$ By Lemma \ref{purecondition}, $(a_1, \dots, a_s)$ is a pure gap at $P_1,\ldots,P_s$ if and only if $$ \ell\left( \sum_{i=1}^s \left \lfloor \frac{a_i+t}{m}\right\rfloor Q_i +\left \lfloor \frac{-rt}{m}\right\rfloor Q_{\infty}\right)-\ell\left( \sum_{i=1}^s \left \lfloor \frac{a_i-1+t}{m}\right\rfloor Q_i+\left \lfloor \frac{-rt}{m}\right\rfloor Q_{\infty}\right)=0 $$ for all $t\in \{0,\ldots,m-1\}$. 
Since $\mathbb {F}_q(x)$ has genus $0$, this happens if and only if, for all $t\in \{0,\ldots,m-1\}$, either $$\sum_{i=1}^s \left \lfloor \frac{a_i+t}{m}\right\rfloor+\left \lfloor \frac{-rt}{m}\right\rfloor<0$$ or $$\sum_{i=1}^s \left \lfloor \frac{a_i+t}{m}\right\rfloor+\left \lfloor \frac{-rt}{m}\right\rfloor \geq 0 \quad{\rm and }\quad \sum_{i=1}^s \left \lfloor \frac{a_i+t}{m}\right\rfloor = \sum_{i=1}^s \left \lfloor \frac{a_i-1+t}{m}\right\rfloor.$$ \end{proof} \begin{proposition}\label{puregapsinfty} Let $s\leq r$, then an $(s+1)$-tuple $(a_0, a_1, \dots, a_s)\in \mathbb{N}^{s+1}$ is a pure gap at $P_\infty, P_1, \dots, P_s$ if and only if, for every $t \in \{0,\ldots ,m - 1\}$, exactly one of the following two conditions is satisfied: \begin{enumerate} \item[i)] $\sum_{i=1}^s \left\lfloor \frac{a_i+t}{m} \right\rfloor + \left\lfloor \frac{a_0-rt}{m}\right \rfloor <0 $; \item[ii)] $\sum_{i=1}^s \left\lfloor \frac{a_i+t}{m} \right\rfloor + \left\lfloor \frac{a_0-rt}{m}\right \rfloor \geq 0 $, $\left\lfloor \frac{a_0-rt}{m}\right \rfloor= \left\lfloor \frac{a_0-1-rt}{m} \right\rfloor $ and $ \left\lfloor \frac{a_i+t}{m} \right\rfloor = \left\lfloor \frac{a_i-1+t}{m} \right\rfloor$ for $i=1, \dots, s$. \end{enumerate} \end{proposition} \begin{proof} The proof is very similar to the proof of Proposition \ref{puregapsmanypoints} and it is omitted. \end{proof} We now present three families of pure gaps at two points for $m\equiv1\pmod r$. \begin{proposition}\label{puregapstwopoints} Suppose that $m=ur+1$ for some integer $u$. Then \begin{enumerate} \item[i)] $((r-1)m-2r,1)$ is a pure gap at $P_{\infty},P_1$; \item[ii)] $((r-2)m-r,b)$, with $b\in \{1,\ldots,u+1\}$ are pure gaps at $P_{\infty},P_1$; \item[iii)] $((r-3)m+1+\alpha ,1+\beta)$, with $\alpha\in \{0,\ldots,2u-1\}$ and $\beta \in \{0,\ldots,u-1\}$ are pure gaps at $P_1,P_2$. \end{enumerate} \end{proposition} \begin{proof} Let $a= rm-m-2r$ and $t\in \{0,\ldots,m-1\}$. We have $\left\lfloor \frac{a-rt}{m}\right\rfloor \neq \left\lfloor \frac{a-1-rt}{m}\right\rfloor$ if and only if $m$ divides $a-rt=(r-1)m-r(t+2)$, that is $t=m-2$. Also, $t=m-2$ implies $\left\lfloor \frac{a-rt}{m}\right\rfloor=-1$. For any $t\in \{0,\ldots,m-2\}$ we have $\left\lfloor \frac{1+t}{m}\right\rfloor =\left\lfloor \frac{t}{m}\right\rfloor=0$. We conclude that for any $t\in\{0,\ldots,m-2\}$ either $\left\lfloor \frac{a-rt}{m}\right\rfloor+\left\lfloor \frac{1+t}{m}\right\rfloor<0$ or $\left\lfloor \frac{a-rt}{m}\right\rfloor+\left\lfloor \frac{1+t}{m}\right\rfloor=\left\lfloor \frac{a-1-rt}{m}\right\rfloor+\left\lfloor \frac{t}{m}\right\rfloor.$ For $t=m-1$, $\left\lfloor \frac{a-rt}{m}\right\rfloor+\left\lfloor \frac{1+t}{m}\right\rfloor=-2+1=-1<0.$ By Proposition \ref{puregapsinfty}, $(a,1)$ is a pure gap at $P_\infty, P_1$. Now let $a=rm-2m-r$, $b\in\{1,\ldots,u+1\}$, and $t\in \{0,\ldots,m-1\}$. We have that $\left\lfloor \frac{b+t}{m}\right\rfloor\in\{0,1\}$, and $\left\lfloor \frac{b+t}{m}\right\rfloor=1$ if and only if $t+b\geq m$, that is $t \in \{m-b,\dots, m-1\}$. In this case, $$\left\lfloor \frac{a-rt}{m}\right\rfloor = \left\lfloor \frac{rm-2m-r-rt}{m}\right\rfloor =-2 +\left\lfloor \frac{rm-r-rt}{m}\right\rfloor =-2,$$ since $0\leq rm-r-rt\leq r(b-1) \leq ru < m $. 
Hence, for all $t \in \{m-b,\dots, m-1\}$, $$\left\lfloor \frac{a-rt}{m}\right\rfloor+\left\lfloor \frac{b+t}{m}\right\rfloor=-2+1<0.$$ For $t \in \{0,\dots, m-b-1\}$, we have that $$\left\lfloor \frac{a-rt}{m}\right\rfloor+\left\lfloor \frac{b+t}{m}\right\rfloor= \left\lfloor \frac{a-rt}{m}\right\rfloor= \left\lfloor \frac{a-1-rt}{m}\right\rfloor=\left\lfloor \frac{a-1-rt}{m}\right\rfloor+\left\lfloor \frac{b-1+t}{m}\right\rfloor.$$ By Proposition \ref{puregapsinfty}, $(a,b)$ is a pure gap at $P_\infty, P_1$. Finally let $t \in \{ 0, \dots, m-1 \}$ and $(a_\alpha, b_\beta)=((r-3)m+1+\alpha,1+\beta)$ with $ \alpha\in \{0,\ldots,2u-1\}$ and $\beta\in \{0,\ldots,u-1\}$. Note that $\left\lfloor \frac{a_{\alpha}+t}{m}\right\rfloor \neq \left\lfloor \frac{a_{\alpha}-1+t}{m}\right\rfloor$ if and only if $t=m-1-\alpha$, and $\left\lfloor \frac{b_{\alpha}+t}{m}\right\rfloor \neq \left\lfloor \frac{b_{\alpha}-1+t}{m}\right\rfloor$ if and only if $t=m-1-\beta$. Therefore, $$\left\lfloor \frac{a_{\alpha}+t}{m}\right\rfloor +\left\lfloor \frac{b_{\alpha}+t}{m}\right\rfloor \neq \left\lfloor \frac{a_{\alpha}-1+t}{m}\right\rfloor+\left\lfloor \frac{b_{\alpha}-1+t}{m}\right\rfloor$$ if and only if $t=m-1-\alpha$ or $t=m-1-\beta$. Suppose $t=m-1-\alpha$. Then \begin{align*} & \left\lfloor \frac{-rt}{m}\right\rfloor=-r+\left\lfloor \frac{r(1+\alpha)}{m}\right\rfloor = \left\{\begin{array}{ll} -r,& \alpha \leq u-1 \\ -r +1,& \alpha\geq u\\ \end{array} \right. ,\\ &\left\lfloor \frac{a_{\alpha}+t}{m}\right\rfloor=r-2, \quad \left\lfloor \frac{b_{\beta}+t}{m}\right\rfloor= 1+ \left\lfloor \frac{\beta - \alpha}{m}\right\rfloor = \left\{ \begin{array}{ll} 1,& \text{ for }\beta\geq \alpha\\ 0, & \text{ for } \beta<\alpha\\ \end{array} \right.. \end{align*} If $\alpha\geq u$, then $$\left\lfloor \frac{-rt}{m}\right\rfloor+\left\lfloor \frac{a_{\alpha}+t}{m}\right\rfloor+\left\lfloor \frac{b_{\beta}+t}{m}\right\rfloor=(-r+1)+(r-2)+0<0;$$ if $\alpha\leq u-1$, then $$\left\lfloor \frac{-rt}{m}\right\rfloor+\left\lfloor \frac{a_{\alpha}+t}{m}\right\rfloor+\left\lfloor \frac{b_{\beta}+t}{m}\right\rfloor\leq-r+(r-2)+1<0.$$ Suppose $t=m-1-\beta$. Then \begin{align*} &\left\lfloor \frac{-rt}{m}\right\rfloor= -r+\left\lfloor \frac{r(1+\beta)}{m}\right\rfloor=-r,\\ & \left\lfloor \frac{a_{\alpha}+t}{m}\right\rfloor= r-2+\left\lfloor \frac{\alpha-\beta}{m}\right\rfloor= \left\{ \begin{array}{ll} r-3, & \text{ for } \alpha<\beta\\ r-2, & \text{ for } \alpha\geq \beta\\\end{array}\right., \qquad \left\lfloor \frac{b_{\beta}+t}{m}\right\rfloor=1. \end{align*} Hence, \begin{align*} \left\lfloor \frac{-rt}{m}\right\rfloor+\left\lfloor \frac{a_{\alpha}+t}{m}\right\rfloor+\left\lfloor \frac{b_{\beta}+t}{m}\right\rfloor\leq -r +(r-2)+1<0. \end{align*} The thesis follows from Proposition \ref{puregapsmanypoints}. \end{proof} The following results present two families of pure gaps at many points for $m\equiv1\pmod r$. \begin{proposition}\label{manypoints1} Suppose that $m=ur+1$ for some integer $u$, $s<r$, and $\alpha_i\in\{0,\ldots,(s+1-i)u-1\}$ for $i=1,\ldots,s$. Then $(a_1,\ldots,a_s)=((r-s-1)m+1+\alpha_1,1+\alpha_2,\ldots,1+\alpha_s)$ is a pure gap at $P_1,\ldots,P_s$. \end{proposition} \begin{proof} Suppose there exist $t\in\{0,\ldots,m-1\}$ and $j\in\{1,\ldots,s\}$ such that $\left\lfloor\frac{a_j+t}{m}\right\rfloor\ne\left\lfloor\frac{a_j-1+t}{m}\right\rfloor$. Thus $t=m-1-\alpha_j$. Let $h\in\{0,\ldots,r-2\}$ be such that $hu\leq\alpha_j< (h+1)u$. 
We have \begin{align*} &\left\lfloor \frac{-rt}{m}\right\rfloor=\left\lfloor \frac{-r(m-1-\alpha_j)}{m}\right\rfloor=-r+\left\lfloor \frac{r(1+\alpha_j)}{m}\right\rfloor = -r+h,\\ &\left\lfloor \frac{a_{1}+t}{m}\right\rfloor=\left\lfloor \frac{(r-s-1)m+1+\alpha_1+m-1-\alpha_j}{m}\right\rfloor=\left\{ \begin{array}{ll} r-s,& \alpha_1\geq \alpha_j\\ r-s-1, & \alpha_1< \alpha_j\\ \end{array} \right., \end{align*} and, for $i>1$, $$\left\lfloor \frac{a_{i}+t}{m}\right\rfloor=\left\lfloor \frac{1+\alpha_i+m-1-\alpha_j}{m}\right\rfloor=\left\{ \begin{array}{ll} 0,& \alpha_i< \alpha_j\\ 1, & \alpha_i\geq \alpha_j\\ \end{array} \right..$$ Since $$ |\{i\in\{2,\ldots,s\}:\alpha_i\geq\alpha_j\}|\leq s-1-|\{i\in\{2,\ldots,s\}: (s+1-i)u-1<hu\}|=s-1-h, $$ this implies that $$\left\lfloor \frac{-rt}{m}\right\rfloor+\left\lfloor \frac{a_{1}+t}{m}\right\rfloor+\sum_{i=2}^s\left\lfloor \frac{a_{i}+t}{m}\right\rfloor \leq (-r+h)+(r-s)+(s-1-h)<0. $$ Hence, the thesis follows by Proposition \ref{puregapsmanypoints}. \end{proof} \begin{proposition}\label{manypoints2} Suppose that $m=ur+1$ for some integer $u$, $s<r-1$, $\alpha\in \{0,\ldots,s\}$, and $\beta_i\in \{0,\ldots,iu-1\}$ for $i\in\{1,\ldots,s\}$. Then $(a_0,a_1,\ldots,a_s)=((r-s-1)m-r+\alpha,1+\beta_1,\ldots,1+\beta_s)$ is a pure gap at $P_{\infty},P_1,\ldots,P_s$. \end{proposition} \begin{proof} Let $t\in\{0,\ldots,m-2\}$, so that $t=ku+z$ with $k\in\{0,\ldots,r-1\}$ and $z\in\{0,\ldots,u-1\}$. Suppose $\left\lfloor \frac{a_0-rt}{m} \right\rfloor \ne \left\lfloor \frac{a_0-1-rt}{m} \right\rfloor$. Then $m\mid(a_0-rt)=(r-s-k-1)m+\alpha+k-r(z+1)$. Since $|\alpha+k-r(z+1)|<m$, this implies $\alpha+k=r(z+1)$, whence $r\mid(\alpha+k)$. As $0\leq\alpha,k\leq r-1$, and $r(z+1)>0$, we have that $\alpha+k=r$ and $z=0$. Hence, $t=m-1-\alpha u$. Then $$ \left\lfloor\frac{a_0-rt}{m}\right\rfloor=r-s-k-1=\alpha-s-1 . $$ Also, $1+\beta_i+t\leq m-1-(\alpha-i)u$ for all $i$. Thus $a_j+t\leq m-1$ for all $j\in\{1,\ldots,\alpha\}$, so $$ \sum_{i=1}^s \left\lfloor\frac{a_i+t}{m}\right\rfloor \leq s-\alpha. $$ Therefore, $$ \sum_{i=1}^s \left\lfloor\frac{a_i+t}{m}\right\rfloor + \left\lfloor\frac{a_0-rt}{m}\right\rfloor <0. $$ Now suppose $\left\lfloor\frac{a_j+t}{m}\right\rfloor\ne\left\lfloor\frac{a_j-1+t}{m}\right\rfloor$ for some $j\in\{1,\ldots,s\}$. Since $1\leq a_j+t<2m$, this implies $t=m-a_j=m-1-\beta_j$. Let $h\in\{0,\ldots,r-3\}$ be such that $hu\leq\beta_j<(h+1)u$. We have $$ \left\lfloor\frac{a_0-rt}{m}\right\rfloor=-s-1+\left\lfloor\frac{\alpha+r\beta_j}{m}\right\rfloor=-s-1+h $$ and, for $i>0$, $$ \left\lfloor\frac{a_i+t}{m}\right\rfloor=1+\left\lfloor\frac{\beta_i-\beta_j}{m}\right\rfloor=\left\{ \begin{array}{ll} 0,& \beta_i< \beta_j\\ 1, & \beta_i\geq \beta_j\\ \end{array} \right.. $$ Since $$ |\{i\in\{1,\ldots,s\}:\beta_i\geq\beta_j\}|\leq s-|\{i\in\{1,\ldots,s\}: iu-1<hu\}|=s-h, $$ this implies that $$\left\lfloor \frac{a_0-rt}{m}\right\rfloor+\sum_{i=1}^s\left\lfloor \frac{a_{i}+t}{m}\right\rfloor \leq (-s-1+h)+(s-h)<0. $$ Finally, let $t=m-1$. Then $\left\lfloor\frac{a_0-rt}{m}\right\rfloor=-s-1$ and $\left\lfloor\frac{a_i+t}{m}\right\rfloor=1$ for all $i>0$. Hence, $$\left\lfloor \frac{a_0-rt}{m}\right\rfloor+\sum_{i=1}^s\left\lfloor \frac{a_{i}+t}{m}\right\rfloor = (-s-1)+s<0. $$ The thesis follows by Proposition \ref{puregapsinfty}. \end{proof} By means of Theorem \ref{distmanypoints}, the results on pure gaps of this section can be used in order to obtain AG codes with good parameters. 
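As a side check, the floor-function criterion of Proposition \ref{puregapsmanypoints} can also be verified numerically. The sketch below is an editorial illustration (the parameters $u=2$, $r=4$, hence $m=9$, and $s=2$ are hypothetical); it confirms that every tuple produced by Proposition \ref{manypoints1} satisfies the criterion.
\begin{verbatim}
# Editorial sketch: the criterion of Proposition "puregapsmanypoints" in Python.
# Python's // is floor division, matching the floor functions in the statement.
def is_pure_gap(a, m, r):
    for t in range(m):
        total = sum((ai + t) // m for ai in a) + (-r * t) // m
        if total < 0:
            continue                                  # condition i)
        if all((ai + t) // m == (ai - 1 + t) // m for ai in a):
            continue                                  # condition ii)
        return False
    return True

u, r, s = 2, 4, 2                                     # hypothetical parameters
m = u * r + 1
family = [((r - s - 1) * m + 1 + a1, 1 + a2)          # Proposition "manypoints1"
          for a1 in range(s * u) for a2 in range((s - 1) * u)]
print(all(is_pure_gap(a, m, r) for a in family))      # True
\end{verbatim}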
This is pointed out in the next remarks where we compute an upper bound for the Singleton defect of some codes. \begin{remark}\label{applications1} For a Kummer extension $y^m=f(x)$, where $m=ur+1$ and $s\leq r-1$, consider the pure gaps $(a_1,\ldots,a_s)=((r-s-1)m+1,1,\ldots,1)$ and $(b_1,\ldots,b_s)=((r-s-1)m+su,(s-1)u,\ldots,u)$. Define the divisors $G=\sum_{i=1}^s (a_i+b_i-1)P_i$ and $D$ as the sum of $n$ rational places of $F$ different from $P_1,\ldots,P_s$. Consider the $[n,k,d]$-code $C_{\Omega}(D,G)$. Suppose $2g-2 < \deg G < n$, then $k=n+g-1-\deg G$. Since $F$ has genus $g=ur(r-1)/2$ we have by Proposition {\rm\ref{manypoints1}} and Theorem {\rm\ref{distmanypoints}} that the Singleton defect $\delta=n+1-k-d$ satisfies $$ \delta\leq \frac{ur(r-1)-us(s+1)}{2}. $$ \end{remark} \begin{remark}\label{applications2} For a Kummer extension $y^m=f(x)$, where $m=ur+1$ and $s\leq r-2$ consider the pure gaps $(a_0,a_1,\ldots,a_s)=((r-s-1)m-r,1,\ldots,1)$ and $(b_0,b_1,\ldots,b_s)=((r-s-1)m-r+s,u,\ldots,su)$. Define the divisors $G=(a_0+b_0-1)P_\infty+\sum_{i=1}^s (a_i+b_i-1)P_i$ and $D$ as the sum of $n$ rational places of $F$ different from $P_\infty,P_1,\ldots,P_s$ and consider the $[n,k,d]$-code $C_{\Omega}(D,G)$. Suppose $2g-2 < \deg G < n$, then $k=n+g-1-\deg G$. Since $F$ has genus $g=ur(r-1)/2$ we have by Proposition {\rm \ref{manypoints2}} and Theorem {\rm\ref{distmanypoints}} that the Singleton defect $\delta$ satisfies $$ \delta\leq \frac{ur(r-1)-us(s+1)}{2}-s-1.$$ \end{remark} We illustrate the results obtained by constructing codes on many points over the Hermitian function field. \begin{example}\label{ExHerm} The Hermitian function field $\mathcal H$ is defined by the affine equation $y^{q+1}=x^q+x$, it is maximal over $\mathbb{F}_{q^2}$ and has genus $g=q(q-1)/2$. We apply Remark \ref{applications1} to construct $[n,k,d]$-codes $C_{\Omega}(D,G)$ from $\mathcal H$. In this case we have $r=q, u=1, 1 \leq s \leq q-1$ and $ \deg G= 2(q-s-1)(q+1)+s(s+1)/2$. We choose $s$ such that $2g-2 < \deg G < n$ with $n=q^3+1-s$. Then \begin{align*} & k=n+g-1-\deg G=q^3-\frac{3}{2}q^2+\Big(2s-\frac{1}{2}\Big)q-\frac{s^2-s}{2}+2,\\ & d \geq \deg G -(2g-2)+s+\sum_{i=1}^s (b_i-a_i)=q^2-(2s-1)q+s^2-s. \end{align*} \end{example} Table \ref{tabellina} summarizes results from Example \ref{ExHerm}. We list AG codes with the same or better parameters with respect to the corresponding ones in the MinT's Tables \cite{MinT}. \begin{center} \begin{table} \caption{Results from Example \ref{ExHerm}}\label{tabellina} \vspace*{0.3 cm} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $q^2$ & $s$ & $n$ & $k$ & $d\geq$ & \text{improvement on $d$ compared with ~\cite{MinT}} \\ \hline 16 & 1 & 64 & 48 & 12 & 1\\ \hline 16 & 2 & 63 & 55 & 6 & 0 \\ \hline 25 & 1 & 125 & 97 & 20 & 1 \\ \hline 25 & 2 & 124 & 106 & 12 & 1 \\ \hline 49& 2 & 342 & 295 & 30 & 3 \\ \hline 49 & 3 & 341 & 307 & 20 & 1 \\ \hline 64 & 1 & 512 & 430 & 56 & 1\\ \hline 64 & 2 & 511 & 445 & 42 & 3\\ \hline 64 & 3 & 510 & 459 & 30 & 2\\ \hline 64 & 4 & 509 & 472 & 20 & 0 \\ \hline 81 & 3 & 727 & 656 & 42 & 3\\ \hline 81 & 4 & 726 & 671 & 30 & 0 \\ \hline \end{tabular} \end{center} \end{table} \end{center} \section{Acknowledgments} The research of D. Bartoli and G. Zini was partially supported by Ministry for Education, University and Research of Italy (MIUR) (Project PRIN 2012 ``Geometrie di Galois e strutture di incidenza'' - Prot. N. 
2012XZE22K$_-$005) and by the Italian National Group for Algebraic and Geometric Structures and their Applications (GNSAGA - INdAM). The second author, L. Quoos, was partially supported by CNPq, PDE grant number 200434/2015-2. This work was done while the author enjoyed a sabbatical leave from the Universidade Federal do Rio de Janeiro at the Universit\`a degli Studi di Perugia.
\section{Introduction} \label{sec:Introduction} Recently, control theoretic tools have gained popularity in analyzing optimization and machine learning algorithms \cite{lessard2016analysis,hu2017dissipativity,hu2017unified,pmlr-v80-hu18b,fazlyab2018analysis,wilson2016lyapunov,nishihara2015general,sundararajan2017robust,cherukuri2017role,hu2017analysis}. Typically, convergence analysis in optimization is performed in a case-by-case manner and the corresponding techniques highly depend on the structure of algorithms and assumptions of objective functions. However, by representing iterative algorithms and prior information of objective functions as feedback dynamical systems, we can apply tools from control theory to carry out the convergence analysis in a more systematic way. Such a framework was pioneered by \cite{lessard2016analysis}, where the authors used semidefinite programming to analyze the convergence of a class of optimization algorithms including gradient descent (GD), Heavy-ball (HB) and Nesterov's accelerated gradient (NAG), by assuming the gradient of the loss function satisfies some integral quadratic constraints (IQCs). Indeed, the standard smoothness and convexity assumptions can be well rephrased as IQCs. Afterwards, similar approaches have been developed to analyze various optimization algorithms such as ADMM \cite{nishihara2015general}, distributed methods \cite{sundararajan2017robust}, proximal algorithms \cite{fazlyab2018analysis}, and stochastic finite-sum methods \cite{hu2017unified,pmlr-v80-hu18b}. Exploiting the connection between control and optimization also provides new insights into momentum methods \cite{hu2017dissipativity,wilson2016lyapunov,HuACC2017} and facilitates the design of new algorithms \cite{van2018fastest,cyrus2018robust,dhingra2018proximal,pmlr-v80-kolarijani18a}. Moreover, control tools are also useful in analyzing the robustness of algorithms against computation inexactness \cite{lessard2016analysis,cherukuri2017role,hu2017analysis,aybat2018robust}. This paper considers a class of nonconvex optimization problems whose objective functions satisfy the so-called Regularity Condition (RC)~\cite{candes2015phase,chi2019nonconvex}, which is a geometric condition characterizing the curvatures of nonconvex functions. Such a condition has appeared in many important machine learning and signal processing applications including phase retrieval \cite{candes2015phase,ChenCandes15solving,zhang2017reshaped,zhang2016provable,wang2017solving}, deep linear neural networks \cite{zhou2017characterization}, shallow nonlinear neural networks \cite{li2017convergence}, matrix sensing \cite{tu2016low,li2018median}, to name a few. While it is straightforward to show that GD converges linearly for problems satisfying RC \cite{candes2015phase}, the behavior of AGD under RC remains poorly understood in theory, despite its empirical success \cite{pauwels2017fienup}. Our work is motivated by the need to deepen the understanding of the convergence of AGD under RC. The main contribution of this paper lies in the theoretical convergence guarantee of AGD algorithms under RC. In particular, we provide an analytical characterization of hyperparameter choices that ensure linear convergence of AGD for all functions that satisfy RC. In addition, the framework and tools developed herein may inspire more research on analyzing the convergence of sophisticated algorithms under nonconvex settings. 
Specifically, the analysis for momentum methods typically involves subtle constructions of Hamiltonians (or Lyapunov functions). Our frequency domain approach can implicitly ensure the existence of such Lyapunov functions without explicit constructions. This sheds new light on the analysis of momentum methods for nonconvex optimization problems. It is worth noting that, RC is essentially the same as the sector bound condition in \cite{lessard2016analysis} when it holds globally. In \cite[Sections 4.5 and 4.6]{lessard2016analysis}, LMI conditions have been implemented to analyze momentum methods under the sector bound condition. The results, however, are numerical validations of convergence for given hyperparameters by running a small semidefinite program, and such validation needs to be done whenever the hyperparameters are changed. Built on this prior work, we focus on how to obtain analytical convergence regions from these LMIs without solving semidefinite programs. Our analytical results for momentum methods under RC provide deeper understandings on the connection between control theory and nonconvex optimization. The rest of the paper is organized as follows. In Section 2, we state the problem and introduce the AGD methods of interest. Section 3 presents how to incorporate the algorithms and RC into a dynamical system and transfer the convergence analysis into the stability analysis. Section 4 derives the analytical convergence conditions of AGD methods under RC via a frequency domain approach by applying the KYP lemma. In Section 5, we discuss how to extend the results to the general case where RC only holds locally. \section{Problem background} A general optimization problem can be described as \begin{equation}\label{eq:loss} \underset{z\in\mathbb{R}^{n}}{\text{minimize}}\quad f(z), \end{equation} where $f(z)$ may be both nonconvex and nonsmooth. \subsection{Regularity Condition} This paper focuses on a special yet important case of nonconvex optimization, where the objective function $f(\cdot)$ satisfies the Regularity Condition (RC) \cite{candes2015phase}, defined as follows. \begin{defn}[Regularity Condition]\label{def:RC} A function $f(\cdot)$ is said to satisfy the Regularity Condition RC($ \mu,\lambda, \epsilon $) with positive constants $ \mu,\lambda$ and $\epsilon $, if \begin{equation}\label{eq:locRC} \langle \nabla f(z), z- x^{\star} \rangle\geq\frac{\mu}{2}\| \nabla f(z)\|^2+\frac{\lambda}{2} \left\| z- x^{\star} \right\|^2 \end{equation} for all $z\in \mathcal{N}_\epsilon(x^{\star}): = \left\{ z: \| z - x^{\star}\|\leq \epsilon \|x^{\star}\| \right\}$, where $x^{\star}$ is a local minimizer of $f(z)$. \end{defn} It is straightforward to check that one must have $\mu\lambda\leq 1$ by Cauchy-Schwartz inequality. RC can be regarded as a combination of one-point strong convexity and smoothness \cite{chi2019nonconvex}, and does not require the function $f(\cdot)$ to be convex. RC has appeared in a wide range of applications, a partial list of which includes phase retrieval \cite{candes2015phase}, deep linear neural networks \cite{zhou2017characterization}, shallow nonlinear neural networks \cite{li2017convergence} and matrix sensing \cite{tu2016low,li2018median}. Our framework can handle the general case where RC only holds locally as defined. To simplify the presentation, we will first assume RC holds globally, i.e. $\epsilon=\infty$ in Definition \ref{def:RC}, in which case $x^\star$ becomes the global minimizer correspondingly. 
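As a quick illustration of Definition \ref{def:RC} (an editorial sketch, not part of the development of this paper), the inequality \eqref{eq:locRC} can be checked numerically at sampled points. The toy objective $f(z)=\tfrac{1}{2}\|z-x^\star\|^2$ used below is hypothetical; for it, RC($\mu,\lambda$) holds globally whenever $\mu+\lambda\le 2$.
\begin{verbatim}
import numpy as np

# Editorial sketch: sample-based check of RC(mu, lam) for a toy objective
# f(z) = 0.5*||z - x_star||^2, whose gradient is z - x_star.
rng = np.random.default_rng(1)
n, mu, lam = 5, 1.0, 1.0
x_star = rng.standard_normal(n)
grad = lambda z: z - x_star

def rc_holds_at(z):
    g, e = grad(z), z - x_star
    return g @ e >= 0.5 * mu * (g @ g) + 0.5 * lam * (e @ e) - 1e-12

print(all(rc_holds_at(rng.standard_normal(n)) for _ in range(1000)))  # True
\end{verbatim}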
It turns out that our main results can be directly applied to the case when RC holds locally, using proper initializations, which will be explained in detail in Section \ref{sec:LRC}. Without ambiguity, we will omit the neighborhood radius $\epsilon$ and use the notation $RC(\mu,\lambda)$ to denote the global RC in the derivation of the main results. \subsection{Accelerated gradient descent methods\label{subsec:acceleratedGD}} In practice, AGD methods are widely adopted for their ability to accelerate convergence. Two widely-used acceleration schemes are Nesterov's accelerated gradient (NAG) method~\cite{nesterov2003introductory}, given as \begin{align}\label{Nesterov} y_k &= (1+\beta)z_k-\beta z_{k-1}, \nonumber \\ z_{k+1} &= y_k-\alpha\nabla f(y_k), \quad k=0,1,\ldots, \end{align} where $\alpha>0$ is the step size and $0\leq\beta<1$ is the momentum parameter; and the Heavy-Ball (HB) method \cite{polyak1964some}, given as \begin{align}\label{HB} y_k &= (1+\beta)z_k-\beta z_{k-1}, \nonumber \\ z_{k+1} &= y_k-\alpha\nabla f(z_k), \quad k=0,1,\ldots, \end{align} where $\alpha>0$ is the step size and $0\leq\beta<1$ is the momentum parameter. In fact, we can describe a general AGD method that subsumes HB and NAG as special cases: \begin{equation}\label{accAlg} \begin{aligned} y_k &= (1+\beta_2)z_{k}-\beta_2 z_{k-1},\\ z_{k+1} &= (1+\beta_1)z_{k}-\beta_1 z_{k-1}-\alpha \nabla f(y_k). \end{aligned} \end{equation} Despite the empirical success, the convergence of AGD in the nonconvex setting remains unclear to a large extent. For example, it is not known whether AGD converges under RC, whether it converges linearly if it does, and how to set the step size and the momentum parameters to guarantee its convergence. These challenging questions motivate us to look for new tools to better understand AGD for nonconvex optimization. \section{A control view on the convergence analysis of AGD under RC} Robust control theory has been tailored to the convergence analysis of optimization methods \cite{lessard2016analysis,nishihara2015general,hu2017dissipativity,hu2017unified}. The proofs of our main theorems also rely on such techniques. In this section, we will discuss how to transform the convergence analysis of AGD under RC into the robust stability analysis of dynamical systems and derive LMI conditions that guarantee the convergence. \subsection{AGD as feedback dynamical system} Observe that a general AGD method \eqref{accAlg} can be viewed as a linear dynamical system subject to nonlinear feedback: \begin{equation}\label{dysys} \begin{aligned} z_{k+1}^{(1)} &= (1+\beta_1)z_{k}^{(1)}-\beta_1 z_{k}^{(2)}-\alpha u_k,\\ z_{k+1}^{(2)} &= z_{k}^{(1)},\\ y_k &= (1+\beta_2)z_{k}^{(1)}-\beta_2 z_{k}^{(2)},\\ u_k &= \nabla f(y_k). \end{aligned} \end{equation} To see this, let $z_{k}^{(1)}=z_{k}$, and $z_{k}^{(2)}=z_{k-1}$. Then it can be easily verified that \eqref{dysys} represents HB when $(\beta_1,\beta_2)=(\beta,0)$, and NAG when $(\beta_1,\beta_2)=(\beta,\beta)$. Let $\otimes$ denote the Kronecker product. We use the notation $G(A, B, C, D)$ to denote a dynamical system $G$ governed by the following iterative state-space model: \begin{align*} \phi_{k+1}&=(A\otimes I_n) \phi_k+(B\otimes I_n) u_k,\\ y_k&=(C\otimes I_n) \phi_k+(D\otimes I_n) u_k, \end{align*} where $I_n$ is the identity matrix of size $n$. 
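Before specializing the matrices $(A,B,C,D)$ to \eqref{dysys}, we spell out, for the reader's convenience, the routine verification behind the claim that \eqref{dysys} subsumes HB and NAG. With $z_{k}^{(1)}=z_{k}$ and $z_{k}^{(2)}=z_{k-1}$, setting $(\beta_1,\beta_2)=(\beta,0)$ in \eqref{dysys} gives $y_k=z_k$ and
\begin{equation*}
z_{k+1}=(1+\beta)z_{k}-\beta z_{k-1}-\alpha\nabla f(z_k),
\end{equation*}
which coincides with the HB update \eqref{HB} after eliminating the intermediate variable $y_k$; setting $(\beta_1,\beta_2)=(\beta,\beta)$ gives
\begin{equation*}
y_k=(1+\beta)z_{k}-\beta z_{k-1},\qquad z_{k+1}=(1+\beta)z_{k}-\beta z_{k-1}-\alpha\nabla f(y_k)=y_k-\alpha\nabla f(y_k),
\end{equation*}
which is exactly the NAG update \eqref{Nesterov}.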
If we define $\phi_k=\left[\begin{array}{c} z_{k}^{(1)}\\z_{k}^{(2)} \end{array}\right]$ as the state, $u_k$ as the input and $y_k$ as the output, then \eqref{dysys} can be regarded as a dynamical system shown in Figure~\ref{sysdiag}, where the feedback $\nabla f(y_k)$ is a static nonlinearity that depends on the gradient of the loss function, and $G(A, B, C, D)$ is a linear system specified by \begin{equation}\label{eq:unisys} \left[\begin{array}{c|c} A &B\\ \hline C &D \\ \end{array} \right] = \left[\begin{array}{cc|c} 1+\beta_1 &-\beta_1 &-\alpha \\ 1 & 0 & 0\\ \hline 1+\beta_2 &-\beta_2 & 0 \end{array} \right]. \end{equation} \begin{figure} \centering \tikzstyle{thickarrow}=[line width=2mm,draw=osu,-triangle 45] \begin{tikzpicture}[scale=2] \node[block] (sys) {G}; \node[block, below=1em of sys] (feedback) {$ \nabla f(y_k) $}; \draw[->,thick] (sys.east) -- ++ (2em,0) coordinate[yshift=1em](l){} |-node[near start, right]{\small $y_k$} (feedback.east); \draw[->,thick] (feedback.west) -- ++ (-1.5em,0) coordinate[yshift=-1em](l){} |-node[near start, left]{\small $u_k$} (sys.west); \end{tikzpicture} \caption{\small The dynamical system representation of first-order methods.} \label{sysdiag} \end{figure} \subsection{Convergence analysis of AGD under RC via stability analysis of a feedback system}\label{subsec:lyFunc} In the following, we will illustrate how to show the convergence of AGD. First, define $\phi_*=\left[\begin{array}{c} x^\star\\x^\star \end{array}\right]$ as the equilibrium of the dynamical system \eqref{dysys}. If the system is (globally) asymptotically stable, then $\phi_k \xrightarrow{k\rightarrow\infty}\phi_*$. This further implies $z_{k} \xrightarrow{k\rightarrow\infty}x^\star$. In other words, the asymptotic stability of the dynamical system implies the convergence of the iterates to a fixed point. From now on, we focus on the stability analysis of the feedback control system \eqref{dysys}, which can be carried out using robust control theory. The main challenge lies in the nonlinear feedback term $u_k = \nabla f(y_k)$. Our key observation is that RC can be rewritten as the following quadratic constraint: \begin{equation}\label{quadrabound} \left[ \begin{array}{c} y_k-y_*\\u_k-u_*\end{array} \right]^T \left[ \begin{array}{c c} -\lambda I_n &I_n\\ I_n &-\mu I_n \end{array} \right] \left[ \begin{array}{c} y_k-y_*\\u_k-u_*\end{array} \right] \geq 0, \end{equation} where $y_*=x^\star$ and $u_*=\nabla f(y_*)=0$. Indeed, expanding the quadratic form in \eqref{quadrabound} gives $2\langle y_k-x^\star, u_k\rangle-\lambda\|y_k-x^\star\|^2-\mu\|u_k\|^2\geq0$, which is exactly \eqref{eq:locRC} (multiplied by two) evaluated at $z=y_k$. Applying the quadratic constraint framework in \cite{lessard2016analysis}, we can derive an LMI as a sufficient stability condition as stated in the following proposition. A formal proof of this result can be found in the appendix. \begin{prop}\label{thm:convergence} Let $x^\star\in\mathbb{R}^n$ be the global minimizer of the loss function $f(\cdot)$ which satisfies RC($ \mu,\lambda $). 
For a given first-order method characterized by $G(A,B,C,D)$, if there exists a matrix $P\succ0$ and $\rho\in (0,1)$ such that the following linear matrix inequality (LMI) \eqref{eq:maintheo} holds, \begin{equation}\label{eq:maintheo} \left[\begin{array}{cc} A^TPA-\rho^2 P &A^TPB \\ B^TPA &B^TPB \end{array} \right]+ \left[ \begin{array}{c c} C & D\\ 0_{1\times 2} &1 \end{array} \right]^T\left[ \begin{array}{c c} -\lambda &1\\ 1 &-\mu \end{array} \right] \left[ \begin{array}{c c} C & D\\ 0_{1\times 2} &1 \end{array} \right]\preceq0, \end{equation} then the state $\phi_k$ generated by the first-order algorithm $G(A,B,C,D)$ converges to the fixed point $\phi_*$ linearly, i.e., \begin{equation} \label{eq:linear_convergence} \|\phi_k-\phi_*\|\leq \sqrt{\mathrm{cond}(P)}\rho^k\|\phi_0-\phi_*\| \text{ for all } k, \end{equation} where $\mathrm{cond}(P)$ is the condition number of $P$. \end{prop} \begin{rem} For fixed $(A,B,C,D)$ and $\rho$, the matrix inequality \eqref{eq:maintheo} is linear in $P$ and hence an LMI. The size of this LMI is $3 \times 3$, and the decision variable $P$ is a $2\times 2$ matrix. The size of the LMI \eqref{eq:maintheo} is independent of the state dimension $n$. \end{rem} \begin{rem} The LMI \eqref{eq:maintheo} is similar to the one derived in~\cite{lessard2016analysis} under the so-called sector bound condition. The relationship between RC and the sector bound is discussed in detail in the appendix. Different from~\cite{lessard2016analysis}, we focus on deriving analytical convergence regions (see Section \ref{sec:anaResultMain}), in contrast to verifying convergence numerically for specific parameters, offering deeper insight regarding the convergence behavior of AGD methods under RC. In addition, we also extend the results to the case where RC holds only locally around the fixed point (see Section \ref{sec:LRC}). \end{rem} \section{Convergence conditions of AGD}\label{sec:anaResultMain} In this section, we focus on how to obtain analytical convergence conditions of AGD under RC based on \eqref{eq:maintheo}. Analytically solving \eqref{eq:maintheo} is challenging since one typically needs to express $P$ explicitly as a function of $(A,B,C,D)$ and $(\lambda, \mu)$. Our main idea is to transform the LMI \eqref{eq:maintheo} into equivalent frequency domain inequalities (FDIs), which reduce the number of unknown parameters, using the classical KYP lemma~\cite{rantzer1996kalman}. Then we can derive the main convergence results by solving the FDIs analytically. \subsection{The Kalman-Yakubovich-Popov (KYP) lemma} We first introduce the KYP lemma; the reader is referred to \cite{rantzer1996kalman} for an elegant proof. \begin{lem} (\cite[Theorem 2]{rantzer1996kalman}) Given $A$, $B$, $M$, with $\text{det}(e^{j\omega}I-A)\neq 0 $ for $\omega\in\mathbb{R}$, the following two statements are equivalent: \begin{enumerate} \vspace*{-0.5em} \item $\forall \omega\in\mathbb{R}$, \begin{equation} \hspace*{-0.2cm}\left[\!\begin{array}{c} (e^{j\omega}I-A)^{-1}B\\I \end{array}\! \right]^*\!M\!\left[\!\begin{array}{c} (e^{j\omega}I-A)^{-1}B\\I \end{array}\! \right]\!\prec\! 0. \end{equation} \item There exists a matrix $P\!\in\!\mathbb{R}^{n\times n}$ such that $P\!=\!P^T$ and \begin{equation}\label{LMIK} M+\left[\begin{array}{cc} A^TPA-P &A^TPB \\ B^TPA &B^TPB \end{array} \right]\prec 0. \end{equation} \end{enumerate} \end{lem} The general KYP lemma only asks $P$ to be symmetric instead of being positive definite (PD) as in our problem. 
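To see concretely why the KYP lemma cannot be applied to \eqref{eq:maintheo} as is (this short computation is ours, but it is implicit in the adjustments made below), note that for the AGD system \eqref{eq:unisys} one has
\begin{equation*}
\det(e^{j\omega}I-A)=\left(e^{j\omega}-1\right)\left(e^{j\omega}-\beta_1\right),
\end{equation*}
which vanishes at $\omega=0$; in particular, $A$ has an eigenvalue at $1$ and is therefore not Schur stable.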
To ensure that the KYP lemma can be applied to solve \eqref{eq:maintheo}, some adjustments of the lemma are necessary. In fact, we observe that if $A$ of the dynamical system is Schur stable and the upper left corner of $M$, denoted as $M_{11}$, is positive semidefinite (PSD), then by checking the principal minor $M_{11}+A^TPA-P\prec0$, we know $P$ satisfying \eqref{LMIK} must be PD. We define these conditions on $A$ and $M$ as KYP Conditions, which are restrictions to make sure that all solutions of symmetric $P$ for \eqref{LMIK} are PD. \begin{defn}[KYP Conditions] The KYPC($A,M$) are listed as: \vspace*{-0.5em} \begin{enumerate} \item $\text{det}(e^{j\omega}I-A)\neq 0 $ for $\omega\in\mathbb{R}$; \item $A$ is Schur stable; \item The left upper corner of $M$ in \eqref{LMIK} is PSD. \end{enumerate} \end{defn} Thus we can conclude the following corollary. \begin{cor}\label{cor:kyp} Under KYPC($A,M$), the following two statements are equivalent: \begin{enumerate} \vspace*{-0.5em} \item $\forall \omega\in\mathbb{R}$, \begin{equation} \hspace*{-0.2cm}\left[\!\begin{array}{c} (e^{j\omega}I-A)^{-1}B\\I \end{array}\! \right]^*\!M\!\left[\!\begin{array}{c} (e^{j\omega}I-A)^{-1}B\\I \end{array}\! \right]\!\prec\! 0. \end{equation} \item There exists a matrix $P\in\mathbb{R}^{n\times n}$ such that $P\succ0$ and \begin{equation}\label{LMIKYP} M+\left[\begin{array}{cc} A^TPA-P &A^TPB \\ B^TPA &B^TPB \end{array} \right]\prec 0. \end{equation} \end{enumerate} \end{cor} One can easily check, however, that $A$ and $M$ of a general AGD in \eqref{eq:maintheo} do not satisfy the KYPC($A,M$). Therefore, we need to rewrite the dynamical system \eqref{dysys} in a different way to satisfy the KYPC($A,M$), so that its stability analysis can be done by combining Proposition \ref{thm:convergence} and Corollary \ref{cor:kyp}. In the following, we first introduce a way to rewrite the dynamical system to satisfy the KYPC($A,M$). \subsection{How to satisfy the KYP Conditions?} Recall that a general AGD can be written as \eqref{accAlg}. Here we introduce a slack variable $\delta$ to rewrite the algorithm: \begin{equation}\label{accAlgDelta} \begin{aligned} z_{k+1}\! = &\left(1+\delta+\beta_1+\delta\beta_2\right) z_{k}\!-\!(\beta_1+\delta\beta_2) z_{k-1}\\ &\quad -\alpha \nabla f(y_k)-\delta y_k,\\ y_k\! = &(1+\beta_2)z_{k}-\beta_2 z_{k-1}. \end{aligned} \end{equation} Observe that for any value of $\delta$, \eqref{accAlgDelta} provides the same update rule as \eqref{accAlg}. It can be viewed as a generalized representation of the dynamical systems corresponding to the targeted AGD. Similar to \eqref{dysys}, we rewrite \eqref{accAlgDelta} as a dynamical system $G(A',B',C',D')$: \begin{equation}\label{systemshift} \hspace*{-0.2cm}\begin{aligned} z_{k+1}^{(1)}\! &=\! \left(1\!+\!\beta_1\!+\!\delta\!+\!\delta\beta_2\right)z_{k}^{(1)}\! -\! (\beta_1\!+\!\delta\beta_2) z_{k}^{(2)}\! +\! u_k,\\ z_{k+1}^{(2)}\! &= \!z_{k}^{(1)},\\ y_k\! &= \!(1+\beta_2)z_{k}^{(1)}-\beta_2 z_{k}^{(2)},\\ u_k\! &= \!-\alpha\nabla f(y_k) - \delta y_k. \end{aligned} \end{equation} Correspondingly, $$\left[\begin{array}{c|c} A' &B'\\ \hline C' &D'\\ \end{array} \right] = \left[\begin{array}{cc|c} 1+\beta_1+\delta+\delta\beta_2 &-(\beta_1+\delta\beta_2) &1\\ 1 &0 &0\\ \hline 1+\beta_2 &-\beta_2 &0 \end{array} \right]. $$ In addition to the adjustment of the dynamics, the feedback of $G(A',B',C',D')$ also differs from that in \eqref{dysys}. 
As a consequence, the quadratic bound for the new feedback $u_k = -\alpha\nabla f(y_k) - \delta y_k$ is shifted as stated in the following lemma. \begin{lem}\label{lem:feedbshift} Let $f$ be a loss function which satisfies RC($ \mu,\lambda $) and $y_*=x^{\star}$ be a minimizer. If $u_k = -\alpha\nabla f(y_k) - \delta y_k$, then $y_k$ and $u_k$ can be quadratically bounded as \begin{equation}\label{RCshift} \left[\begin{array}{c} y_{k}-y_*\\u_k-u_*\end{array} \right]^T M' \left[\begin{array}{c} y_{k}-y_*\\u_k-u_*\end{array} \right]\geq0. \end{equation} where $M'=\left[\begin{array}{cc} -\left(2\alpha\delta+\lambda\alpha^2+\mu\delta^2\right) &-\alpha-\mu\delta\\ -\alpha-\mu\delta &-\mu \end{array}\right]$. \end{lem} Now we have general representations of $A'$, $M'$ with one unknown parameter $\delta$. We need to certify the region of $(\alpha,\beta_1,\beta_2)$ such that its corresponding ($A',M'$) has at least one $\delta$ satisfying the KYPC($A',M'$). \begin{lem}\label{lem:kypcalg} Let $f$ be a loss function which satisfies RC($ \mu,\lambda $). There is at least one representation of the dynamical system \eqref{systemshift} satisfying KYPC($A',M'$), if and only if the step size $\alpha$ and the momentum parameters $\beta_1,\beta_2$ obey the following restriction: \begin{equation}\label{HbshiftLem} 0<\alpha<\frac{2(1+\beta_1)(1+\sqrt{1-\mu\lambda})}{\lambda(1+2\beta_2)}. \end{equation} \end{lem} By Lemma \ref{lem:kypcalg}, if the parameters of a fixed AGD with $(\alpha,\beta_1,\beta_2)$ satisfy \eqref{HbshiftLem}, then all feasible symmetric $P$'s for \eqref{LMIKYP} can be guaranteed to be PD. Now we are ready to use the KYP lemma to complete the convergence analysis of an accelerated algorithm. \subsection{Stability region of AGD under RC} By Proposition \ref{thm:convergence}, we can solve the stability of the new system \eqref{systemshift} by finding some feasible $P\succ 0$ to the key LMI~\eqref{eq:maintheo} with respect to the corresponding $(A',B',C',D',M')$ for some rate $0<\rho<1$. We are interested in obtaining the analytical region of $(\alpha, \beta_1, \beta_2)$ that guarantees the linear convergence of AGD under RC. We use the following strict matrix inequality without caring about a specific rate $\rho$, \begin{equation}\label{LMIshift} \left[\begin{array}{cc} A'^TPA'- P &A'^TPB' \\ B'^TPA' &B'^TPB' \end{array} \right]+ \left[\begin{array}{c c} C' &0\\ 0_{1\times 2} &1 \end{array} \right]^TM' \left[\begin{array}{c c} C' &0\\ 0_{1\times 2} &1 \end{array} \right]\prec0. \end{equation} \begin{figure*}[ht] \centering \captionsetup[subfigure]{labelformat=empty} \subfloat[\small (a) Fixing $\lambda=0.5$ and varying $\mu$]{{\includegraphics[width=6cm]{HB3D_mu_volume} }}% \qquad \subfloat[\small (b) Fixing $\mu=0.5$ and varying $\lambda$]{{\includegraphics[width=6cm]{HB3D_lambda_volume} }}% \caption{\small Visualization of the convergence regions of HB when perturbing the RC parameters.}% \label{regionplot_3D} \end{figure*} \begin{rem} Our arguments can also be modified to derive the parameter region which guarantees the convergence with a fixed rate $\rho$. For such an analysis, we can modify the LMI \eqref{LMIshift} by rescaling the matrices$(A',B')$ as $\tilde{A}=A'/\rho$ and $\tilde{B}=B'/\rho$. Then the resultant LMI can be converted to an FDI by the KYP lemma, and a similar analysis can be carried forward. Such analytical analysis of the convergence rate is even more difficult to interpret due to the presence of $\rho$ in $(\tilde{A},\tilde{B})$. 
For simplicity, this paper focuses on the derivation of stability regions. \end{rem} Observe that now \eqref{LMIshift} is of the same form as \eqref{LMIKYP}. By the KYP lemma (Corollary \ref{cor:kyp}), under KYPC($A',M'$), \eqref{LMIshift} can be equivalently solved by studying the following FDI: \begin{equation}\label{FDIshift} \left[\begin{array}{c} (e^{j\omega}I-A')^{-1}B'\\I \end{array} \right]^*\left[\begin{array}{c c} C' &0\\ 0_{1\times 2} &1 \end{array} \right]^TM' \left[\begin{array}{c c} C' &0\\ 0_{1\times 2} &1 \end{array} \right]\left[\begin{array}{c} (e^{j\omega}I-A')^{-1}B'\\I \end{array} \right]< 0,\quad \forall \omega\in\mathbb{R}. \end{equation} By simplifying \eqref{FDIshift}, we observe that all uncertain terms cancel out, and we conclude the following lemma. \begin{lem}\label{lem:fdiSimp} To find the stability region of a general AGD method under RC($ \mu,\lambda $), or equivalently, to find the region of $(\alpha,\beta_1,\beta_2)$ such that there exists a feasible $P\!\succ\! 0$ satisfying \eqref{LMIshift}, it is equivalent to find $(\alpha,\beta_1,\beta_2)$ which simultaneously obeys \eqref{HbshiftLem} and guarantees the following FDI: \begin{equation}\label{FDIgeneral} \hspace*{-0.35cm}\begin{aligned} &\!4(\alpha\beta_2\!-\!\mu\beta_1)\cos^2\omega\! +\! 2\!\left[\! \mu(1\!+\!\beta_1)^2\!+\!\lambda\alpha^2\beta_2(1\!+\!\beta_2)\right.\!\\ &\!\left.-\alpha(1\!+\!\beta_1)(1\!+\!2\beta_2) \right]\cos\omega\! +\!2\alpha( 1\!+\!\beta_1\!+\!2\beta_1\beta_2 )\!\\ &\!-2\mu(1\!+\!\beta_1^2)\!-\!\lambda\alpha^2\left[ \beta_2^2\!+\!(1\!+\!\beta_2)^2 \right]< 0, \quad \forall \omega\in\mathbb{R}.\! \end{aligned}\! \end{equation} \end{lem} We omit the proof of Lemma \ref{lem:fdiSimp} since it follows easily from Corollary \ref{cor:kyp} and some simple calculations to simplify \eqref{FDIshift}. By setting different $\beta_1,\beta_2$ in \eqref{FDIgeneral}, we can obtain the convergence condition of a general AGD method using the KYP lemma. In the following, we focus on the two most important cases, HB and NAG; other cases can be discussed in a similar way. The stability regions of HB and NAG can be obtained by letting $\beta_2=0$ and $\beta_1=\beta_2=\beta$ in \eqref{FDIgeneral}, respectively. Then we can obtain Theorem \ref{thm:HBregionInf} and Theorem \ref{thm:NesregionInf}. \begin{thm}\label{thm:HBregionInf} Let $x^\star\in\mathbb{R}^n$ be the global minimizer of the loss function $f(\cdot)$ which satisfies RC($ \mu,\lambda $). For any step size $\alpha>0$ and momentum parameter $\beta\in(0,1)$ lying in the region: $$ \Big\{(\alpha,\beta): H_1(\beta)\leq\alpha \leq\frac{2(\beta+1)(1-\sqrt{1-\mu\lambda})}{\lambda}\Big\} \cup \Big\{(\alpha,\beta): 0<\alpha\leq\min \{H_1(\beta), H_2(\beta) \} \Big\}, $$ where $H_1(\beta) = \frac{\mu \beta^2\!+\!6\mu\beta\!+\!\mu}{\beta+1}$ and \begin{align*} H_2(\beta) & =\frac{P_2(\beta)\!-\!\sqrt{P_2(\beta)^2\!-\!4P_1(\beta)P_3(\beta)}}{2P_1(\beta)} \end{align*} with $P_1(\beta)=4\mu\lambda\beta-\beta^2-1-2\beta$, $P_2(\beta)= 2\mu\beta+2\mu\beta^2-2\mu\beta^3-2\mu$, and $P_3(\beta)=4\mu^2\beta^3+4\mu^2\beta-6\mu^2\beta^2-\mu^2\beta^4-\mu^2$, the iterates $z_{k}$ generated by HB~\eqref{HB} converge linearly to $x^\star$ as $k\rightarrow\infty$. \end{thm} \begin{thm}\label{thm:NesregionInf} Let $x^\star\in\mathbb{R}^n$ be the global minimizer of the loss function $f(\cdot)$ which satisfies RC($ \mu,\lambda $). 
For any step size $\alpha>0$ and momentum parameter $\beta\in(0,1)$ lying in the region: $$ \left\{(\alpha,\beta):N_1(\beta)\leq\alpha<\frac{2(\beta+1)(1-\sqrt{1-\mu\lambda})}{\lambda(1+2\beta)} \right\} \cup \Big\{(\alpha,\beta):0<\alpha\leq\min \left\{N_1(\beta), N_2(\beta) \right\} \Big\}. $$ where $$ N_1(\beta)=\frac{Q_1(\beta)-\sqrt{Q_1(\beta)^2-(1+6\beta+\beta^2)Q_2(\beta)}}{2\lambda\beta(\beta+1)},$$ $$ N_2(\beta)=\left\{\! \beta: \frac{Q_3(\beta)\!-\!\sqrt{Q_3(\beta)^2\!-\!(1-\beta)^2Q_2(\beta)}}{2\lambda\beta(\beta+1)}\!\leq\alpha, \; g\left(\frac{(\mu-\alpha)(1+\beta)^2+(\lambda\alpha^2-\alpha)(\beta+\beta^2)}{4\mu\beta-4\alpha\beta}\right)\!=\!0\!\right\}, $$ $Q_1(\beta)=1+7\beta+2\beta^2$,$Q_2(\beta)=4\mu\lambda\beta(1+\beta)$,$Q_3(\beta)=1-\beta+2\beta^2$, the iterates $z_{k}$ generated by NAG~\eqref{Nesterov} converge linearly to $x^\star$ as $k\rightarrow\infty$. \end{thm} \begin{rem}\label{rem:Nes} The bound $N_2(\beta)$ is an implicit function of $\beta$. It is hard to derive an explicit expression since $g(\cdot)=0$ is a 4th-order equation of $\beta$. The function $g(\eta)$ is: $$ g(\eta):=4\mu\beta \eta^2-2(2\mu\beta+\mu\beta^2-\alpha\beta+\mu-\alpha)\eta +2\mu+2\mu\beta^2-2\alpha-2\alpha\beta+\lambda\alpha^2. $$ \end{rem} \begin{figure*}[ht] \centering \captionsetup[subfigure]{labelformat=empty} \subfloat[\small (a) HB]{{\includegraphics[width=6cm]{HB_region} }}% \qquad \subfloat[\small (b) NAG]{{\includegraphics[width=6cm]{Nes_region} }}% \caption{\small Visualization of the convergence regions of two AGD methods taking RC parameters as $\mu=0.5,\lambda=0.5$.}% \label{regionplot} \end{figure*} The analytical results stated in the above theorems can provide rich insights on the convergence behaviors of AGD. Take the convergence region of HB as an example. In Figure \ref{regionplot_3D} (a), we fix the RC parameter $\lambda=0.5$ and vary $\mu$ within $[0.01,1.9]$, while in Figure \ref{regionplot_3D} (b), we fix $\mu=0.5$ and vary the value of $\lambda$ within $[0.01,1.9]$. Observe that when we fix one of the RC parameter and increase the other, the stability region of ($\alpha,\beta$) gets larger. Notice that $\mu$ plays a role similar to the inverse of the smoothness parameter, and therefore, it dominantly determines the step size, which is clearly demonstrated in Figure~\ref{regionplot_3D} (a). In addition, when we fix the values of a pair of ($\mu,\lambda$) (e.g. Figure~\ref{regionplot}), we can see that even when $\alpha$ exceeds the value of the bound of GD (the maximal feasible $\alpha$ when $\beta=0$), the Heavy-ball method can still ensure convergence when we choose $\beta$ properly. This property has not been discussed in the literature. We emphasize that our theoretic analysis is a complement rather than a replacement for the numerical LMI approach in \cite{lessard2016analysis}. Our closed-form expressions for the stability region do provide some complementary benefits to the numerical approach in \cite{lessard2016analysis}. First, from our closed-form formulas, one can tell that the stability region of HB is well described by the feasible set of some relatively simple quadratic inequalities while the characterization of the stability region boundary of NAG partially involves a fourth-order polynomial. Such a difference is not directly reflected by the numerical LMI approach in \cite{lessard2016analysis}. Actually our closed-form expression for the stability region of HB is quite simple. 
Our study on HB and NAG just illustrates that the interpretability of the analytical formulas for the stability region depends on the specific momentum method being analyzed. Second, the stability region is easier and faster to visualize from analytical forms than from numerical results. When given a pair of $(\mu,\lambda)$, one needs to make a small grid of $(\alpha,\beta_1,\beta_2)$ and solve an LMI for each single pair, which is computationally complex but can be avoided if we have closed-form analytical results. More importantly, the LMI conditions in~\cite{lessard2016analysis} can only certify convergence numerically for fixed $(\alpha,\beta_1,\beta_2)$ under a given pair of RC parameters $(\mu,\lambda)$. However, our analytical results provide continuous stability regions with respect to $(\mu,\lambda)$, which is hard to achieve using numerical results. \subsection{Numerical example} In this subsection, we will use a simple example satisfying RC globally to show how our results help to choose parameters of different first-order methods in practice. Consider the loss function shown in Figure~\ref{Fig:numRes}(a), given by $$f(x)=\left\{\begin{aligned} &x^2, \quad x\in [-6,6]\\ &x^2 + 1.5|x|\left( \cos(|x|-6) -1 \right),\quad \text{otherwise}. \end{aligned} \right. $$ This nonconvex loss function was also discussed in \cite{ChenCandes15solving,chi2019nonconvex}. One can check that $f$ satisfies $RC(0.5,0.5)$. We initialize at $x_0=x_1=24$ and choose $\alpha=0.1$. By Theorem \ref{thm:HBregionInf}, HB can converge when $\beta<0.5942$. For NAG, we choose $\beta<0.6950$ according to Theorem \ref{thm:NesregionInf}. Furthermore, it is common to choose the hyper-parameter $\beta$ as large as possible to obtain better performance. Therefore, the corresponding $\beta$'s for HB and NAG are chosen as $0.59$ and $0.69$, respectively. In Figure \ref{Fig:numRes}(b), we see that all three algorithms converge and the two accelerated methods HB and NAG clearly outperform GD. \begin{figure*}[ht] \centering \captionsetup[subfigure]{labelformat=empty} \subfloat[\small (a) Loss function $f$]{{\includegraphics[width=6cm]{numResFunc} }}% \qquad \subfloat[\small (b) Convergence of three algorithms]{{\includegraphics[width=6.18cm]{numResSemilog} }}% \caption{\small An example satisfying RC and numerical experiments}% \label{Fig:numRes} \end{figure*} \section{Local Regularity Condition}\label{sec:LRC} So far, all the above derivations assume RC holds globally. In addition, the existing control framework for optimization methods \cite{lessard2016analysis,fazlyab2018analysis,hu2017unified,hu2017dissipativity,van2018fastest,cyrus2018robust,dhingra2018proximal} all require global constraints. In certain problems, however, RC may only hold locally around the fixed point. In this section, we explain how our framework can accommodate such cases, as long as the algorithms are initialized properly as stated in the following theorem, whose proof can be found in the appendix. \begin{thm}\label{thm:localIni} Let $x^\star\in\mathbb{R}^n$ be a local minimizer of the loss function $f(\cdot)$ which satisfies RC($ \mu,\lambda,\epsilon $) with some positive constants $\mu,\lambda,\epsilon$. Assume that $P\succ 0$ is a feasible solution of the LMI \eqref{eq:maintheo}. If the first two iterates are initialized properly according to $z_{-1},z_0\in\mathcal{N}_{\epsilon/\sqrt{10\mathrm{cond}(P)}}(x^\star)$, then $y_k\in \mathcal{N}_\epsilon(x^{\star}),\forall k$. 
\end{thm} Theorem~\ref{thm:localIni} ensures that all subsequent iterates will not leave the local neighborhood where RC holds, since $y_k\in \mathcal{N}_\epsilon(x^{\star})$ for all $k$; hence we can still translate RC into a quadratic bound at each iteration, and thus all the previous results still hold for the convergence analysis of AGD under the general setting where RC only holds locally. In practice, spectral methods can be used as an initialization scheme to locate an initial estimate in a desired neighborhood of the fixed point. For example, we consider a popular inverse problem in signal processing called phase retrieval, where the goal is to recover a signal $x^\star\in\mathbb{R}^n$ from the magnitudes of its linear measurements, $y_r=|a_r^Tx^\star|^2$, $r=1,\ldots, m$, where $a_r\in\mathbb{R}^{n}$ is the $r$th sampling vector, and $m$ is the number of samples. If the $a_r$'s are drawn with i.i.d. standard Gaussian entries, it is shown that the loss function $f(z)=\frac{1}{2m}\sum_{r=1}^{m}\left(y_r - |a_r^Tz|^2 \right)^2$ satisfies RC locally in $\mathcal{N}_{\epsilon}(x^{\star})$ (ignoring the sign ambiguity in identifying $x^\star$), where $\epsilon$ is a small constant (e.g., $1/10$) \cite{candes2015phase}. On the other hand, the spectral method proposed in \cite{ChenCandes15solving} returns an initial estimate in $\mathcal{N}_{\epsilon}(x^{\star})$ as soon as the sample complexity $m$ is above the order of $O(n/\epsilon)$. Therefore, as long as $m=O(n)$, the spectral method can successfully land an initialization in the region satisfying RC. In addition, the quality of the initialization also impacts the iteration complexity logarithmically as suggested by \eqref{eq:linear_convergence}. We refer the readers to \cite{candes2015phase,KesMonSew2010} for more details of initialization techniques. \section{Conclusions} In this paper, we apply control tools to analyze the convergence of AGD (including HB and NAG) under the Regularity Condition. Our main contribution lies in the {\em analytical} characterization of the convergence regions in terms of the algorithm parameters $(\alpha,\beta)$ and the RC parameters $(\mu, \lambda)$. Such convergence results do not exist in the current literature and offer useful insights into the analysis and design of AGD for a class of nonconvex optimization problems. \section*{Acknowledgement} The work of H. Xiong and W. Zhang is partly supported by the National Science Foundation under grant CNS-1552838. The work of Y. Chi is supported in part by ONR under grant N00014-18-1-2142, by ARO under grant W911NF-18-1-0303, and by NSF under grants CCF-1806154 and ECCS-1818571.
\section*{Introduction} Lie bialgebras were introduced by V. Drinfel'd in \cite{Drinfel'd V.G.2, Drinfel'd V.G.4}; they are infinitesimal versions of compatible Poisson structures on Lie groups and may be viewed as the Lie-theoretic case of a bialgebra. He raised various problems related to quantum groups and quantization. The study of quasi-triangular quantum groups involves the solutions of the quantum Yang-Baxter equations. In the classical limit, the solutions of the classical Yang-Baxter equations provide examples of Lie bialgebras. Since then, a great deal of research activity has been dedicated to these kinds of algebraic structures. The aim of this paper is to define and study Hom-Lie superbialgebras, which are a Hom-type generalization of Lie superbialgebras. Hom-Lie superbialgebras are Hom-Lie superalgebras provided with a cobracket and a compatibility condition. Motivated by examples of $q$-deformations of algebras of vector fields, J. Hartwig, D. Larsson, and S. Silvestrov introduced the notion of Hom-Lie algebra in \cite{HartwigLarssonSilvestrov}, as a generalization of Lie algebras where the Jacobi condition is twisted by a homomorphism. The graded case of Hom-Lie algebras was first studied by F. Ammar and the last author in \cite{Ammar F Makhlouf A 1}, while Hom-Lie bialgebras were discussed by D. Yau, then by C. Bai and Y. Sheng. Recently, L. Cai and Y. Sheng presented a slightly different approach to Hom-Lie bialgebras, called purely Hom-Lie bialgebras, in \cite{Sheng2}. The paper is organized as follows. In the first section, we provide the relevant definitions and some properties of Hom-Lie superbialgebras. Moreover, we give some key constructions and a classification of 3-dimensional Hom-Lie superbialgebras with 2-dimensional even part. In Section 2, we define matched pairs and Manin supertriples, then we establish their relationships with Hom-Lie superbialgebras. We construct a Hom-Lie superalgebra structure on the direct sum of two Hom-Lie superalgebras $(\mathfrak{g},[\cdot,\cdot],\phi)$ and $(\mathfrak{g}',[\cdot,\cdot]',\phi')$, such that $\mathfrak{g}$ is a $\mathfrak{g}'$-module and $\mathfrak{g}'$ is a $\mathfrak{g}$-module; we also construct a Hom-Lie superbialgebra structure on the direct sum $\mathfrak{g}\oplus\mathfrak{g}^*$, where $\mathfrak{g}^* $ is the dual superspace of $\mathfrak{g}$. Section 3 is dedicated to coboundary Hom-Lie superbialgebras and quasi-triangular Hom-Lie superbialgebras. We show how a coboundary or quasi-triangular Hom-Lie superbialgebra can be constructed from a Hom-Lie superalgebra and an $r$-matrix. In the last section, we study perturbations of cobrackets in Hom-Lie superbialgebras, following Drinfel'd's perturbation theory of quasi-Hopf algebras. We describe Hom-Lie superbialgebras obtained by infinitesimal deformations of the cobracket. \section{\bf Basics and Classification of Hom-Lie superbialgebras} In this section, we introduce and study Hom-Lie superbialgebras, which are a Hom-type version of Lie superbialgebras, see \cite{Drinfel'd V.G.2,Drinfel'd V.G.4}. We extend to the graded case the definition of Hom-Lie bialgebra introduced in \cite{Yau2}. We show that the dual of a finite-dimensional Hom-Lie superbialgebra is also a Hom-Lie superbialgebra (Theorem \ref{amineeeee}), generalizing the self-dual property of Hom-Lie bialgebras. First, let us start by fixing some definitions and notations. 
Let $\mathfrak{L}=\mathfrak{L}_{\bar{0}}\oplus\mathfrak{L}_{\bar{1}}$ be a $\mathbb{Z}_2$-graded vector space over an arbitrary field $\mathbb{K}$ of characteristic 0. In the sequel, we will consider only elements which are $\mathbb{Z}_{2}$-homogeneous. For $x\in\mathfrak{L}$, we denote by ${|x|}\in\mathbb{Z}_{2}$ its parity, i.e., $x\in \mathfrak{L}_{|x|}$. We denote by $\tau$ the super-twist map of $\mathfrak{L}\otimes\mathfrak{L}$, namely $ \tau(x\otimes y)=(-1)^{{|x|}{|y|}}y\otimes x $ for $x,y\in \mathfrak{L}.$ The super-cyclic map $\xi$ permutes the coordinates of $\mathfrak{L}\otimes\mathfrak{L}\otimes\mathfrak{L}$; it is defined as \begin{equation*} \xi=(\mathbf{1}\otimes\tau)\cdot(\tau\otimes\mathbf{1}) : x\otimes y\otimes z\mapsto (-1)^{{|x|}({|y|}+{|z|})}y\otimes z\otimes x, \end{equation*} for $x, y, z\in\mathfrak{L}$, where $\mathbf{1}$ is the identity map on $\mathfrak{L} $. We denote by $\mathfrak{L^\ast}=$Hom$(\mathfrak{L}, \mathbb{K})$ the linear dual of $\mathfrak{L}$. For $\phi\in\mathfrak{L^\ast}$ and $x\in\mathfrak{L}$, we often use the adjoint notation $\langle\phi, x\rangle$ for $\phi(x)\in\mathbb{K}$.\\ For a linear map $\Delta: \mathfrak{L}\rightarrow \mathfrak{L}\otimes\mathfrak{L}$ (comultiplication), we use Sweedler's notation $\Delta(x)=\sum_{(x)} x_{1}\otimes x_{2}$ for $x\in\mathfrak{L}$. We will often omit the summation sign $\sum_{(x)}$ for simplicity. The parity $|r|$ of $r\in\mathfrak{L}^{\otimes2}$ is defined as follows: since we assume $r$ homogeneous, there exists $|r|\in\mathbb{Z}_{2}$ such that $r$ can be written as $r=\sum r_{1}\otimes r_{2}\in\mathfrak{L}^{\otimes2}$, where $r_{1}, r_{2}$ are homogeneous elements with $|r|=|r_{1}|+|r_{2}|$. \begin{definition}(\cite{Ammar F Makhlouf A 1}). A \emph{Hom-Lie superalgebra } is a triple $(\mathfrak{L}, [\cdot ,\cdot ], \alpha)$ consisting of a superspace $\mathfrak{L}$, an even bilinear map $[\cdot ,\cdot ]:\mathfrak{L} \times \mathfrak{L} \rightarrow \mathfrak{L}$ and an even superspace homomorphism $\alpha:\mathfrak{L} \rightarrow \mathfrak{L}$ satisfying \begin{equation}\label{701} [x,y]=-(-1)^{{|x|}{|y|}}[y,x], \end{equation} \begin{equation}\label{702} (-1)^{{|x|}{|z|}}[\alpha(x),[y,z]]+(-1)^{{|z|}{|y|}}[\alpha(z),[x,y]]+(-1)^{{|y|}{|x|}}[\alpha(y),[z,x]]=0 \end{equation} for all homogeneous elements $x,y,z$ in $\mathfrak{L}$.\\ It is multiplicative if, in addition, $\alpha\circ[\cdot ,\cdot ]=[\cdot ,\cdot ]\circ\alpha^{\otimes2}$ (i.e., $\alpha([x, y])=[\alpha(x), \alpha(y)]$, $\forall x,y \in \mathfrak{L}$). \end{definition} \begin{definition}\label{00001} (\cite{Hengyun Y and Yucai S}, \cite{MakhloufSilvestrov2}). A \emph{Hom-Lie supercoalgebra} is a triple $(\mathfrak{L}, \Delta, \alpha)$ consisting of a superspace $\mathfrak{L}$, an even superspace homomorphism $\alpha:\mathfrak{L} \rightarrow \mathfrak{L}$ and a linear map $\Delta:\mathfrak{L} \rightarrow \mathfrak{L}\otimes\mathfrak{L}$ (the cobracket) such that \begin{equation}\label{2007} \Delta(\mathfrak{L}^i) \subset \sum_{i=j+k} \mathfrak{L}^j\otimes\mathfrak{L}^k \ \ \text{for} \ \ i\in\mathbb{Z}_{2}, \end{equation} \begin{equation} Im\Delta\subset Im(1\otimes1-\tau), \ \ \text{i.e., $\Delta$ is skew-supersymmetric}, \end{equation} \begin{equation}\label{jacobi} (1\otimes1\otimes1+\xi+\xi^2)\circ(\alpha\otimes\Delta)\circ\Delta=0 : \mathfrak{L}\rightarrow \mathfrak{L}\otimes\mathfrak{L}\otimes\mathfrak{L}. 
\end{equation} If, in addition, $\Delta\circ\alpha=\alpha^{\otimes2}\circ\Delta$, then $\mathfrak{L}$ is called co-multiplicative.\\ A Hom-Lie supercoalgebra with $\alpha=Id$ is exactly a Lie supercoalgebra \cite{Hengyun Y and Yucai S}. \end{definition} \begin{remark} Let $Im\Delta\subset Im(1\otimes1-\tau)\subset ker(1\otimes1+\tau)$, then $(1\otimes1+\tau)\circ\Delta=0$ \ \ i.e.\ \ $\Delta$ is skew-supersymmetric (\cite{Walter}). \end{remark} \begin{definition} (1) For an element $x$ in a Hom-Lie superalgebra $(\mathfrak{L}, [\cdot ,\cdot ], \alpha)$ and $n\geq 2$, define the adjoint map $ad_{x}:\mathfrak{L}^{\otimes n}\rightarrow \mathfrak{L}^{\otimes n}$ by \begin{equation}\label{action} ad_{x}(y_{1}\otimes\cdot\cdot\cdot\otimes y_{n})=\sum_{i=1}^{n} (-1)^{{|x|}({|y_{1}|+{|y_{2}|}}+\cdot\cdot\cdot+{|y_{i-1}|})}\alpha(y_{1})\otimes\cdot\cdot\cdot\otimes\alpha(y_{i-1})\otimes[x, y_{i}]\otimes\alpha(y_{i+1})\cdot\cdot\cdot\otimes\alpha(y_{n}). \end{equation} For $n=2$, $ad_{x}(y_{1}\otimes y_{2})=[x, y_{1}]\otimes\alpha(y_{2})+(-1)^{{|x|}{|y_{1}|}}\alpha(y_{1})\otimes[x, y_{2}]$.\\ Conversely, given $\gamma=y_{1}\otimes\cdot\cdot\cdot\otimes y_{n}$, we define the map $ad(\gamma):\mathfrak{L}\rightarrow \mathfrak{L}^{\otimes n}$ by $ad(\gamma)(x)=ad_{x}(\gamma)$, for $x\in\mathfrak{L}$.\\ (2) For an element $x$ in a Hom-Lie superalgebra $(\mathfrak{L}, [\cdot ,\cdot ], \alpha)$ and $n\geq 2$, define the adjoint map \\ $ad_{\alpha(x)}:\mathfrak{L}^{\otimes n}\rightarrow \mathfrak{L}^{\otimes n}$ by \begin{equation} ad_{\alpha(x)}(y_{1}\otimes\cdot\cdot\cdot\otimes y_{n})=\sum_{i=1}^{n} (-1)^{{|x|}({|y_{1}|+{|y_{2}|}}+\cdot\cdot\cdot+{|y_{i-1}|})}\alpha(y_{1})\otimes\cdot\cdot\cdot\otimes\alpha(y_{i-1})\otimes[\alpha(x), y_{i}]\otimes\alpha(y_{i+1})\cdot\cdot\cdot\otimes\alpha(y_{n}). \end{equation} \end{definition} \begin{definition}\label{ae}A \emph{(multiplicative) Hom-Lie superbialgebra } is a quadruple $(\mathfrak{L}, [\cdot ,\cdot ], \Delta, \alpha)$ such that \begin{enumerate} \item $(\mathfrak{L}, [\cdot ,\cdot ], \alpha)$ is a (multiplicative) Hom-Lie superalgebra. \item $(\mathfrak{L}, \Delta, \alpha)$ is a (co-multiplicative) Hom-Lie supercoalgebra. \item The following compatibility condition holds for all $x, y \in\mathfrak{L}$ : \begin{equation}\label{a} \Delta([x,y])=ad_{\alpha(x)}(\Delta(y))-(-1)^{{|x|}{|y|}}ad_{\alpha(y)}(\Delta(x)). \end{equation} \end{enumerate} \end{definition} \begin{definition} The map $f:\mathfrak{L} \rightarrow \mathfrak{L'}$ is called even (resp. odd) map if $f(\mathfrak{L}_{i})\subset \mathfrak{L'}_{i}$ (resp. $f(\mathfrak{L}_{i})\subset \mathfrak{L'}_{i+1}),$ for $i=0, 1$. A morphism of Hom-Lie superbialgebras is an even linear map such that \\ $$\alpha\circ f=f\circ\alpha,\ \ \ \ f\circ[\cdot ,\cdot ]=[\cdot ,\cdot ]\circ f^{\otimes2}\ \ \ \ \text{and }\ \ \ \Delta\circ f=f^{\otimes2}\circ\Delta.$$\\ An isomorphism of Hom-Lie superbialgebras is an invertible morphism of Hom-Lie superbialgebras. Two Hom-Lie superbialgebras are said to be isomorphic if there exists an isomorphism between them. \end{definition} \begin{remark} A Hom-Lie superbialgebra with $\alpha=Id$ is exactly a Lie superbialgebra, as defined in \cite{Hengyun Y and Yucai S,Drinfel'd V.G.2,Drinfel'd V.G.4}. \end{remark} \begin{remark}\label{2000}The compatibility condition (\ref{a}) is, in fact, a cocycle condition in Hom-Lie superalgebra cohomology \cite{MakhloufSilvestrov3}, just as it is the case in a Lie superbialgebra with Lie superalgebra cohomology \cite{Drinfel'd V.G.4}. 
Indeed, we can regard $\mathfrak{L}^{\otimes2}$ as an $\mathfrak{L}$-module via the $\alpha$-twisted adjoint action (\ref{a}): \begin{align}\label{2001} x\cdot(y_{1}\otimes y_{2})&=ad_{\alpha(x)}(y_{1}\otimes y_{2})\\ &=[\alpha(x), y_{1}]\otimes\alpha(y_{2})+(-1)^{{|x|}{|y_{1}|}}\alpha(y_{1})\otimes[\alpha(x), y_{2}],\nonumber\end{align} for $x\in\mathfrak{L}$ and $y_{1}\otimes y_{2}\in\mathfrak{L}^{\otimes2}$.\\ Then we can think of the cobracket $\Delta:\mathfrak{L} \rightarrow \mathfrak{L}\otimes\mathfrak{L}$ as a 1-cochain $\Delta\in C^1(\mathfrak{L},\mathfrak{L}^{\otimes2})$. Here $ C^1(\mathfrak{L},\mathfrak{L}^{\otimes2})$ is defined as the linear super-subspace of Hom$(\mathfrak{L},\mathfrak{L}^{\otimes2})$ consisting of maps that commute with $\alpha$. Generalizing \cite{MakhloufSilvestrov3} to include coefficients in $\mathfrak{L}^{\otimes2}$, the differential on $\Delta$ is given by \begin{equation}\label{fff} (\delta^{1}_{HL}\Delta)(x, y)=\Delta([x,y])-x\cdot\Delta(y)+(-1)^{{|x|}{|y|}}y\cdot\Delta(x) =\Delta([x,y])-ad_{\alpha(x)}(\Delta(y))+(-1)^{{|x|}{|y|}}ad_{\alpha(y)}(\Delta(x)). \end{equation} Therefore, (\ref{a}) says exactly that $\Delta\in C^1(\mathfrak{L},\mathfrak{L}^{\otimes2})$ is a 1-cocycle. \end{remark} \begin{example} \emph{Classification of 2-dimensional Hom-Lie superbialgebras with 1-dimensional even part}. \\ Let $\mathfrak{L}=\mathfrak{L}_{\bar{0}}\oplus\mathfrak{L}_{\bar{1}}$ be a 2-dimensional superspace where $\mathfrak{L}_{\bar{0}}$ is generated by $e_{1}$ and $\mathfrak{L}_{\bar{1}}$ is generated by $e_{2}$. The triple $(\mathfrak{L}, [\cdot ,\cdot ], \alpha)$ is a Hom-Lie superalgebra when $[e_{1}, e_{1}]=0$, $[e_{1}, e_{2}]=b e_{2}$ and $[e_{2}, e_{2}]=c e_{1}$ with $\alpha(e_{1})=a_{1}e_{1}$, $\alpha(e_{2})=a_{2}e_{2}$ and $a_{2}bc=0$, where $b,c, a_{1}, a_{2}$ are parameters in $\mathbb{K}$.\\ The triple $(\mathfrak{L}, \Delta, \alpha)$ is a Hom-Lie supercoalgebra if $\Delta(e_{1})=0$ and $\Delta(e_{2})=d(e_{1}\otimes e_{2}-e_{2}\otimes e_{1})$, where $d\in\mathbb{K}$.\\ The quadruple $(\mathfrak{L}, [\cdot ,\cdot ], \Delta, \alpha)$ is a Hom-Lie superbialgebra if ($a_{1}=1$ or $a_{1}=-1$) and $a_{2}bd=0$. \end{example} \begin{remark} Recently, Cai and Sheng introduced a different notion of Hom-Lie bialgebras called \emph{purely Hom-Lie bialgebras}, see \cite{Sheng2}, which we can extend to the super case as follows. Let $(\mathfrak{L}, [\cdot ,\cdot ], \alpha)$, where $\alpha$ is invertible, and $(\mathfrak{L}^*, [\cdot ,\cdot ], (\alpha^{-1})^*)$ be two Hom-Lie superalgebras. The pair $(\mathfrak{L},\mathfrak{L}^*) $ is a purely Hom-Lie superbialgebra if the following compatibility condition holds \begin{equation} \Delta([x,y])=ad_{\alpha^{-1}(x)}(\Delta(y))-(-1)^{{|x|}{|y|}}ad_{\alpha^{-1}(y)}(\Delta(x)). \end{equation} Notice that this condition is different from condition \eqref{a}. Purely Hom-Lie bialgebras in the graded case will be studied in a forthcoming paper. \end{remark} \textbf{Classification of 3-dimensional Hom-Lie superbialgebras with 2-dimensional even part.} \\ Let $\mathfrak{L}=\mathfrak{L}_{\bar{0}}\oplus\mathfrak{L}_{\bar{1}}$ be a 3-dimensional superspace where $\mathfrak{L}_{\bar{0}}$ is generated by $e_{1}$, $e_{2}$ and $\mathfrak{L}_{\bar{1}}$ is generated by $e_{3}$. We aim to construct Hom-Lie superbialgebras. 
We set for the linear map $\alpha$ $$\alpha(e_{1})=a_{1}e_{1}+a_{2}e_{2},\ \ \ \ \alpha(e_{2})=a_{3}e_{1}+a_{4}e_{2}, \ \ \ \ \ \alpha(e_{3})=a_{5}e_{3},$$ where $a_{1}, a_{2}, a_{3}, a_{4}, a_{5}$ are parameters in $\mathbb{K}.$\\ The structure of the bracket $[\cdot ,\cdot ]$ is of the form $$[e_{3},e_{3}]=b_{1}e_{1}+b_{2}e_{2},\ \ \ \ [e_{1},e_{2}]=b_{3}e_{1}+b_{4}e_{2},\ \ \ \ [e_{1},e_{3}]=b_{5}e_{3},\ \ \ \ [e_{2},e_{3}]=b_{6}e_{3},$$ where $b_{1}, b_{2}, b_{3}, b_{4}, b_{5}, b_{6}$ are parameters in $\mathbb{K}.$\\ The structure of the cobracket $\Delta$ is of the form \begin{eqnarray*}&\Delta(e_{1})=c_{1}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1})+c_5 e_{3}\otimes e_{3} ,\ \ \ \Delta(e_{2})=c_{2}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1})+c_6 e_{3}\otimes e_{3},\\ & \Delta(e_{3})=c_{3}(e_{1}\otimes e_{3}-e_{3}\otimes e_{1})+c_{4}(e_{2}\otimes e_{3}-e_{3}\otimes e_{2}), \end{eqnarray*} where $c_{1}, c_{2}, c_{3}, c_{4}$ are parameters in $\mathbb{K}.$\\ In the sequel, we will consider the linear map under the Jordan forms with respect to a suitable basis. We split the calculations in two cases. First, we deal with diagonal case and then with Jordan case.\\ \textbf{1) Diagonal case }: We consider the linear map $\alpha$, where with respect to a suitable basis the matrix is of the form: $\alpha=\left(\begin{array}{lll} a_{1} \ \ 0 \ \ \ 0\\ 0 \ \ \ a_{4} \ \ 0\\ 0 \ \ \ 0 \ \ \ a_{5}\\ \end{array}\right)$, which corresponds to $a_{2}=0$, $a_{3}=0$, and the eigenvalues are pairwise non-equal. \\ We obtained, when the eigenvalues are nonzero, the following corresponding (multiplicative) Hom-Lie superbialgebras :\\ \begin{small} \begin{tabular}{|l|l|l|} \hline \ \ \ \ \ \ \ \ \ \ Linear map &\ \ \ \ \ \ \ \ bracket &\ \ \ \ \ \ \ \ cobracket \\ \hline $\alpha(e_{1})=e_{1}, \ \ \alpha(e_{2})=a_{4}e_{2},\ \ \alpha(e_{3})=-e_{3}$ & $[e_{3}, e_{3}]=0, [e_{1}, e_{3}]=b_{5}e_{3} \ or \ (b_{5}=0),$& $\Delta(e_{1})=c_{5}e_{3}\otimes e_{3},$ \\ & &$\Delta(e_{2})=0,$ \\ &$[e_{1}, e_{2}]=b_{4}e_{2}, [e_{2}, e_{3}]=0$& $\Delta(e_{3})=0$ \\ \hline $\alpha(e_{1})=e_{1}, \ \ \alpha(e_{2})=a_{4}e_{2}, \ \ \alpha(e_{3})=a_{5}e_{3}$ & \ \ \ All bracket are zero& $\Delta(e_{1})=0,$ \\ & &$\Delta(e_{2})=c_{2}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1}),$ \\ && $\Delta(e_{3})=c_{3}(e_{1}\otimes e_{3}-e_{3}\otimes e_{1})$ \\ \hline $\alpha(e_{1})=e_{1}, \ \ \alpha(e_{2})=a_{4}e_{2}, \ \alpha(e_{3})=-e_{3} $ & $[e_{3}, e_{3}]=b_{1}e_{1}, $& $\Delta(e_{1})=0,$ \\ &$$ &$\Delta(e_{2})=c_{2}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1}),$ \\ &$[e_{1}, e_{2}]=[e_{1}, e_{3}]=[e_{2}, e_{3}]=0$& $\Delta(e_{3})=c_{3}(e_{1}\otimes e_{3}-e_{3}\otimes e_{1})$ \\ \hline $\alpha(e_{1})=a_{1}e_{1}, \ \ \alpha(e_{2})=e_{2}, \ \alpha(e_{3})=-e_{3} $ & $[e_{3}, e_{3}]=[e_{1}, e_{3}]=0, [e_{1}, e_{2}]=b_{3}e_{1}$& $\Delta(e_{1})=0,$ \\ & &$\Delta(e_{2})=c_{6}e_{3}\otimes e_{3},$ \\ &$[e_{2}, e_{3}]=b_{6}e_{3} \ or \ (b_{6}=0 \ and \ b_{3}=0)$& $\Delta(e_{3})=0$ \\ \hline $\alpha(e_{1})=a_{1}e_{1}, \ \ \alpha(e_{2})=e_{2}, \ \alpha(e_{3})=-e_{3} $ & $[e_{1}, e_{2}]=[e_{1}, e_{3}]=[e_{2}, e_{3}]=0,$& $\Delta(e_{1})=c_{1}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1}),$ \\ & &$\Delta(e_{2})=0,$ \\ &$[e_{3}, e_{3}]=b_{2}e_{2} \ or \ (b_{2}=0)$& $\Delta(e_{3})=c_{4}(e_{2}\otimes e_{3}-e_{3}\otimes e_{2})$ \\ \hline $\alpha(e_{1})=e_{1}, \ \alpha(e_{2})=a_{4}e_{2}, \ \alpha(e_{3})=a_{5}e_{3} $ & $[e_{3}, e_{3}]=[e_{1}, e_{3}]=[e_{2}, e_{3}]=0, $& $\Delta(e_{1})=0,$ \\ &$$&$\Delta(e_{2})=c_{2}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1}), $ \\ &$[e_{1}, 
e_{2}]=b_{4}e_{2}$& $\Delta(e_{3})=0$ \\ \hline $\alpha(e_{1})=e_{1},\ \alpha(e_{2})=a_{4}e_{2},\ \alpha(e_{3})=a_{5}e_{3}$ & $[e_{3}, e_{3}]=[e_{2}, e_{3}]=0$ & All cobracket are zero \\ $$ & $[e_{1}, e_{2}]=b_{4}e_{2}, [e_{1}, e_{3}]=b_{5}e_{3}$ & \\ \hline $\alpha(e_{1})=a_{1}e_{1}, \ \ \alpha(e_{2})=a_{4}e_{2}, \ \ \alpha(e_{3})=a_{5}e_{3}$ & \ \ \ All bracket are zero& $\Delta(e_{1})=c_{5}e_{3}\otimes e_{3},$ \\ $a_{5}=\pm\sqrt{a_{1}} \ or \ (\alpha(e_{1})=e_{1} \ and \ \alpha(e_{3})=-e_{3})$& &$\Delta(e_{2})=0,$ \\ && $\Delta(e_{3})=0$ \\ \hline $\alpha(e_{1})=a_{1}e_{1},\ \alpha(e_{2})=e_{2},\ \alpha(e_{3})=a_{5}e_{3}$ & $[e_{3}, e_{3}]=[e_{1}, e_{3}]=0$ & All cobracket are zero \\ $$ & $[e_{1}, e_{2}]=b_{3}e_{1}, [e_{2}, e_{3}]=b_{6}e_{3}$ & \\ \hline $\alpha(e_{1})=e_{1}, \ \ \alpha(e_{2})=a_{4}e_{2}, \ \ \alpha(e_{3})=a_{5}e_{3}$ & $[e_{3}, e_{3}]=b_{2}e_{2},\ [e_{1}, e_{2}]=b_{4}e_{2},$& $\Delta(e_{1})=0, \Delta(e_{2})=c_{2}(e_{1}\otimes e_{2}-$ \\ $(a_{5}=\pm \sqrt{a_{4}})$ &$(\ b_{5}=\frac{a_{5}b_{4}}{2a_{4}}, c_{3}=\frac{a_{5}c_{2}}{2a_{4}}, c_{6}=-\frac{b_{4}c_{2}}{b_{2}})$ &$e_{2}\otimes e_{1})+c_{6}e_{3}\otimes e_{3}, $ \\ &$[e_{2}, e_{3}]=0, \ [e_{1}, e_{3}]=b_{5}e_{3},$& $\Delta(e_{3})=c_{3}(e_{1}\otimes e_{3}-e_{3}\otimes e_{1})$ \\ \hline $\alpha(e_{1})=a_{1}e_{1}, \ \ \alpha(e_{2})=e_{2}, \ \ \alpha(e_{3})=a_{5}e_{3}$ & $[e_{3}, e_{3}]=b_{1}e_{1},\ [e_{1}, e_{2}]=b_{3}e_{1},$& $\Delta(e_{1})=c_{1}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1})$ \\ $(a_{5}=\pm \sqrt{a_{1}})$ &$(\ b_{6}=-\frac{a_{5}b_{3}}{2a_{1}}, c_{4}=-\frac{a_{5}c_{1}}{2a_{1}}, c_{5}=-\frac{b_{3}c_{1}}{b_{1}})$ &$+c_{5}e_{3}\otimes e_{3}, \Delta(e_{2})=0, $ \\ &$[e_{1}, e_{3}]=0, \ [e_{2}, e_{3}]=b_{6}e_{3},$& $\Delta(e_{3})=c_{4}(e_{2}\otimes e_{3}-e_{3}\otimes e_{2})$ \\ \hline $\alpha(e_{1})=a_{1}e_{1}, \ \ \alpha(e_{2})=e_{2}, \ \ \alpha(e_{3})=a_{5}e_{3}$ & $[e_{3}, e_{3}]=0,\ [e_{1}, e_{2}]=b_{3}e_{1},$& $\Delta(e_{1})=c_{5}e_{3}\otimes e_{3},$ \\ $(a_{5}=\pm \sqrt{a_{1}})$ &$(\ b_{6}=-\frac{a_{5}b_{3}}{2a_{1}})$ &$\Delta(e_{2})=0, $ \\ &$[e_{1}, e_{3}]=0, \ [e_{2}, e_{3}]=b_{6}e_{3},$& $\Delta(e_{3})=0$ \\ \hline $\alpha(e_{1})=a_{1}e_{1}, \ \ \alpha(e_{2})=e_{2}, \ \ \alpha(e_{3})=a_{5}e_{3}$ & \ \ \ All bracket are zero & $\Delta(e_{1})=c_{1}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1})$ \\ $(a_{5}=\pm\sqrt{a_{1}}, c_{4}=-\frac{a_{5}c_{1}}{2a_{1}})$& &$+c_{5}e_{3}\otimes e_{3}, \Delta(e_{2})=0,$ \\ && $\Delta(e_{3})=c_{4}(e_{2}\otimes e_{3}-e_{3}\otimes e_{2})$ \\ \hline \end{tabular}\\ \end{small} \begin{small} \begin{tabular}{|l|l|l|} \hline $\alpha(e_{1})=e_{1}, \ \ \alpha(e_{2})=a_{4}e_{2}, \ \ \alpha(e_{3})=a_{5}e_{3}$ & \ \ \ All bracket are zero & $\Delta(e_{1})=0, \Delta(e_{2})=c_{2}(e_{1}\otimes e_{2}$ \\ $(a_{5}=\pm\sqrt{a_{4}}, c_{3}=\frac{a_{5}c_{2}}{2a_{4}})$& &$-e_{2}\otimes e_{1})+c_{6}e_{3}\otimes e_{3},$ \\ && $\Delta(e_{3})=c_{3}(e_{1}\otimes e_{3}-e_{3}\otimes e_{1})$ \\ \hline $\alpha(e_{1})=e_{1}, \ \ \alpha(e_{2})=a_{4}e_{2}, \ \ \alpha(e_{3})=a_{5}e_{3}$ & \ \ \ All bracket are zero & $\Delta(e_{1})=0, $ \\ $(a_{5}=\pm\sqrt{a_{4}})$& &$\Delta(e_{2})=c_{2}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1}),$ \\ && $\Delta(e_{3})=c_{3}(e_{1}\otimes e_{3}-e_{3}\otimes e_{1})$ \\ \hline $\alpha(e_{1})=e_{1}, \ \alpha(e_{2})=a_{4}e_{2}, \ \alpha(e_{3})=a_{5}e_{3} $ & $[e_{1}, e_{2}]=b_{4}e_{2},$& $\Delta(e_{1})=0,$ \\ $(a_{5}=\pm\sqrt{a_{4}})$&&$\Delta(e_{2})=c_{2}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1}), $ \\ &$[e_{3}, e_{3}]=[e_{1}, e_{3}]=[e_{2}, e_{3}]=0$& $\Delta(e_{3})=0$ \\ \hline 
$\alpha(e_{1})=e_{1}, \ \ \alpha(e_{2})=a_{4}e_{2}, \ \ \alpha(e_{3})=a_{5}e_{3}$ & $[e_{1}, e_{2}]=b_{4}e_{2},\ [e_{1}, e_{3}]=b_{5}e_{3},$& $\Delta(e_{1})=0,$ \\ $(a_{5}=\pm \sqrt{a_{4}})$ &$(\ b_{5}=\frac{a_{5}b_{4}}{2a_{4}})$ &$ \Delta(e_{2})=c_{6}e_{3}\otimes e_{3}, $ \\ &$[e_{2}, e_{3}]=[e_{3}, e_{3}]=0,$& $\Delta(e_{3})=0$ \\ \hline $\alpha(e_{1})=a_{1}e_{1},\ \alpha(e_{2})=a_{4}e_{2},\ \alpha(e_{3})=a_{5}e_{3}$ &\ \ \ All bracket are zero & All cobracket are zero \\ & & \\ \hline $\alpha(e_{1})=a_{1}e_{1}, \ \alpha(e_{2})=e_{2}, \ \alpha(e_{3})=a_{5}e_{3} $ & $[e_{1}, e_{2}]=b_{3}e_{1},$& $\Delta(e_{1})=c_{1}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1}),$ \\ $(a_{5}=\pm\sqrt{a_{1}})$&&$\Delta(e_{2})=0, $ \\ &$[e_{1}, e_{3}]=[e_{2}, e_{3}]=[e_{3}, e_{3}]=0$& $\Delta(e_{3})=0$ \\ \hline $\alpha(e_{1})=a_{1}e_{1},\ \alpha(e_{2})=a_{4}e_{2},\ \alpha(e_{3})=a_{5}e_{3}$ & $[e_{3}, e_{3}]=b_{1}e_{1},$ & All cobracket are zero \\ $(a_{5}=\pm\sqrt{a_{1}})$ & $[e_{1}, e_{2}]=[e_{1}, e_{3}]=[e_{2}, e_{3}]=0$ & \\ \hline $\alpha(e_{1})=a_{1}e_{1},\ \alpha(e_{2})=a_{4}e_{2},\ \alpha(e_{3})=a_{5}e_{3}$ & $[e_{3}, e_{3}]=b_{2}e_{2},$ & All cobracket are zero \\ $(a_{5}=\pm\sqrt{a_{4}})$ & $[e_{1}, e_{2}]=[e_{1}, e_{3}]=[e_{2}, e_{3}]=0$ & \\ \hline $\alpha(e_{1})=e_{1}, \ \ \alpha(e_{2})=a_{4}e_{2}, \ \ \alpha(e_{3})=-e_{3}$ & \ \ \ All bracket are zero & $\Delta(e_{1})=0, $ \\ $$& &$\Delta(e_{2})=0,$ \\ && $\Delta(e_{3})=c_{3}(e_{1}\otimes e_{3}-e_{3}\otimes e_{1})$\\ \hline $\alpha(e_{1})=a_{1}e_{1}, \ \ \alpha(e_{2})=e_{2}, \ \ \alpha(e_{3})=-e_{3}$ & \ \ \ All bracket are zero & $\Delta(e_{1})=0, $ \\ $$& &$\Delta(e_{2})=0,$ \\ && $\Delta(e_{3})=c_{4}(e_{2}\otimes e_{3}-e_{3}\otimes e_{2})$\\ \hline $\alpha(e_{1})=e_{1}, \ \ \alpha(e_{2})=a_{4}e_{2},\ \ \alpha(e_{3})=-e_{3}$ & $[e_{1}, e_{2}]=b_{4}e_{2}, $& $\Delta(e_{1})=0,$ \\ & &$\Delta(e_{2})=c_{2}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1}),$ \\ &$[e_{1}, e_{3}]=[e_{2}, e_{3}]=[e_{3}, e_{3}]=0$& $\Delta(e_{3})=0$ \\ \hline \end{tabular}\\ \end{small} \textbf{2) Jordan case} : Now, we consider the linear map $\alpha$ where the corresponding matrix is of the form $\alpha=\left(\begin{array}{lll} a_{1} \ \ 1 \ \ \ 0\\ 0 \ \ \ a_{1} \ \ 0\\ 0 \ \ \ 0 \ \ \ a_{5}\\ \end{array}\right)$, that is $a_{2}=1$, $a_{3}=0$, $a_{4}=a_{1}$.\\ We obtained the following corresponding (multiplicative) Hom-Lie superbialgebras :\\ \begin{small} \begin{tabular}{|l|l|l|} \hline \ \ \ \ \ \ \ \ \ \ Linear map &\ \ \ \ \ \ \ \ bracket &\ \ \ \ \ \ \ \ cobracket \\ \hline $\alpha(e_{1})=e_{2}, \ \ \alpha(e_{2})=\alpha(e_{3})=0$ & $[e_{3}, e_{3}]=b_{2}e_{2},\ [e_{1}, e_{2}]=b_{4}e_{2},$& $\Delta(e_{1})=c_{1}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1})$ \\ & &$+c_{5} e_{3}\otimes e_{3}, \Delta(e_{2})=0,$ \\ &$[e_{1}, e_{3}]=[e_{2}, e_{3}]=0$& $\Delta(e_{3})=c_{4}(e_{2}\otimes e_{3}-e_{3}\otimes e_{2})$ \\ \hline $\alpha(e_{1})=e_{2}, \ \ \alpha(e_{2})=\alpha(e_{3})=0$ & $[e_{3}, e_{3}]=b_{2}e_{2},\ [e_{1}, e_{2}]=b_{4}e_{2}, $& $\Delta(e_{1})=c_{1}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1})$ \\ & &$+c_{5} e_{3}\otimes e_{3}, \Delta(e_{2})=0,$ \\ &$[e_{1}, e_{3}]=b_{5}e_{3}, [e_{2}, e_{3}]=0, \ or \ (b_{5}=0)$& $\Delta(e_{3})=0$ \\ \hline $\alpha(e_{1})=e_{1}+e_{2}, \ \alpha(e_{2})=e_{2}, \ \ \alpha(e_{3})=0$ & $[e_{3}, e_{3}]=[e_{2}, e_{3}]=0,$& $\Delta(e_{1})=c_{1}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1}),$ \\ & &$\Delta(e_{2})=0,$ \\ &$[e_{1}, e_{2}]=b_{4}e_{2}, [e_{1}, e_{3}]=b_{5}e_{3} \ or \ (b_{5}=0)$& $\Delta(e_{3})=c_{4}(e_{2}\otimes e_{3}-e_{3}\otimes e_{2})$ \\ \hline 
$\alpha(e_{1})=e_{2},\ \alpha(e_{2})=0,\ \alpha(e_{3})=a_{5}e_{3},$ & $[e_{3}, e_{3}]= [e_{1}, e_{3}]=[e_{2}, e_{3}]=0,$ & $\Delta(e_{1})=c_{1}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1}),$ \\ $(a_{5}\neq0)$ & $[e_{1}, e_{2}]=b_{4}e_{2}$ & $\Delta(e_{2})=\Delta(e_{3})=0$ \\ \hline $\alpha(e_{1})=e_{1}+e_{2}, \ \ \alpha(e_{2})=e_{2}, \ \alpha(e_{3})=a_{5}e_{3} $ &\ \ \ All bracket are zero& $\Delta(e_{1})=c_{1}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1}),$ \\ $$& &$\Delta(e_{2})=0,$ \\ && $\Delta(e_{3})=c_{4}(e_{2}\otimes e_{3}-e_{3}\otimes e_{2})$ \\ \hline $\alpha(e_{1})=e_{1}+e_{2}, \ \alpha(e_{2})=e_{2}, \ \alpha(e_{3})=a_{5}e_{3} $ & $[e_{1}, e_{2}]=b_{4}e_{2},\ [e_{1}, e_{3}]=b_{5}e_{3},$& $\Delta(e_{1})=c_{1}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1}),$ \\ $$ &$$&$\Delta(e_{2})=0, \ for\ c_{4}=\frac{b_{5}c_{1}}{b_{4}}$ \\ &$[e_{3}, e_{3}]=[e_{2}, e_{3}]=0$& $\Delta(e_{3})=c_{4}(e_{2}\otimes e_{3}-e_{3}\otimes e_{2})$ \\ \hline $\alpha(e_{1})=e_{1}+e_{2}, \ \ \alpha(e_{2})=e_{2}, \ \alpha(e_{3})=a_{5}e_{3} $ &\ \ \ All bracket are zero& $\Delta(e_{1})=c_{1}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1})$ \\ $(a_{5}=\pm1, c_{4}=-\frac{a_{5}c_{1}}{2})$& &$+c_{5}e_{3}\otimes e_{3}, \Delta(e_{2})=0,$ \\ && $\Delta(e_{3})=c_{4}(e_{2}\otimes e_{3}-e_{3}\otimes e_{2})$ \\ \hline \end{tabular} \end{small} \begin{small} \begin{tabular}{|l|l|l|} \hline $\alpha(e_{1})=e_{1}+e_{2}, \ \ \alpha(e_{2})=e_{2}, \ \ \alpha(e_{3})=a_{5}e_{3}$ & $[e_{3}, e_{3}]=[e_{2}, e_{3}]=0,$& $\Delta(e_{1})=c_{1}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1})$ \\ $(a_{5}=\pm1, c_{4}=-\frac{a_{5}c_{1}}{2})$ &$(\ b_{5}=-\frac{a_{5}b_{4}}{2})$ &$+c_{5}e_{3}\otimes e_{3}, \Delta(e_{2})=0, $ \\ &$[e_{1}, e_{2}]=b_{4}e_{2}, [e_{1}, e_{3}]=b_{5}e_{3},$& $\Delta(e_{3})=c_{4}(e_{2}\otimes e_{3}-e_{3}\otimes e_{2})$ \\ \hline $\alpha(e_{1})=e_{1}+e_{2}, \ \ \alpha(e_{2})=e_{2}, \ \ \alpha(e_{3})=a_{5}e_{3}$ & $[e_{1}, e_{2}]=[e_{1}, e_{3}]=[e_{2}, e_{3}]=0,$& $\Delta(e_{1})=c_{1}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1}),$ \\ $(a_{5}=\pm1)$ &$$ &$\Delta(e_{2})=0, $ \\ &$[e_{3}, e_{3}]=b_{2}e_{2}$& $\Delta(e_{3})=c_{4}(e_{2}\otimes e_{3}-e_{3}\otimes e_{2})$ \\ \hline $\alpha(e_{1})=-e_{1}+e_{2},\ \alpha(e_{2})=-e_{2},\ \alpha(e_{3})=0,$ & $[e_{3}, e_{3}]=[e_{1}, e_{2}]=[e_{2}, e_{3}]=0,$ & $\Delta(e_{1})=\Delta(e_{2})=0,$ \\ $$ & $[e_{1}, e_{3}]=b_{5}e_{3}, \ or \ (b_{5}=0)$ & $\Delta(e_{3})=c_{4}(e_{2}\otimes e_{3}-e_{3}\otimes e_{2})$ \\ \hline $\alpha(e_{1})=a_{1}e_{1}+e_{2},\ \alpha(e_{2})=a_{4}e_{2},\ \alpha(e_{3})=a_{5}e_{3}$ &\ \ \ All bracket are zero & All cobracket are zero \\ $(a_{1}\neq0, \ a_{5}\neq0)$ & & \\ \hline $\alpha(e_{1})=e_{1}+e_{2}, \ \ \alpha(e_{2})=e_{2}, \ \ \alpha(e_{3})=a_{5}e_{3}$ & $[e_{3}, e_{3}]=[e_{2}, e_{3}]=0,$& $\Delta(e_{1})=c_{5}e_{3}\otimes e_{3},$ \\ $(a_{5}=\pm1)$ &$$ &$$ \\ &$[e_{1}, e_{3}]=b_{5}e_{3}, [e_{1}, e_{2}]=b_{4}e_{2}, \ or \ (b_{4}=0)$& $\Delta(e_{2})=\Delta(e_{3})=0$ \\ \hline $\alpha(e_{1})=e_{1}+e_{2}, \ \ \alpha(e_{2})=e_{2}, \ \ \alpha(e_{3})=a_{5}e_{3}$ & $[e_{3}, e_{3}]=b_{2}e_{2}, [e_{1}, e_{2}]=b_{4}e_{2},$& $\Delta(e_{1})=c_{1}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1})$ \\ $(a_{5}=\pm1)$ &$(b_{5}=\frac{a_{5}b_{4}}{2}, c_{5}=\frac{-b_{4}c_{1}+2a_{5}b_{4}c_{4}}{2b_{2}}, c_{4}=\pm \frac{c_{1}}{2})$ &$+c_{5}e_{3}\otimes e_{3}, \Delta(e_{2})=0, $ \\ &$[e_{1}, e_{3}]=b_{5}e_{3}, [e_{2}, e_{3}]=0,$& $\Delta(e_{3})=c_{4}(e_{2}\otimes e_{3}-e_{3}\otimes e_{2})$ \\ \hline $\alpha(e_{1})=a_{1}e_{1}+e_{2}, \ \ \alpha(e_{2})=a_{4}e_{2}, \ \alpha(e_{3})=0 $ &\ \ \ All bracket are zero& $\Delta(e_{1})= 
\Delta(e_{2})=0,$ \\ $(a_{1}=a_{4})$& &$\Delta(e_{3})=c_{4}(e_{2}\otimes e_{3}-e_{3}\otimes e_{2})$ \\ && $$ \\ \hline $\alpha(e_{1})=e_{1}+e_{2},\ \alpha(e_{2})=e_{2},\ \alpha(e_{3})=a_{5}e_{3}$ & $[e_{1}, e_{3}]=b_{5}e_{3}, \ $ & $\Delta(e_{1})=\Delta(e_{2})=0,$ \\ $(a_{5}=\pm1)$ & $[e_{3}, e_{3}]=[e_{1}, e_{2}]=[e_{2}, e_{3}]=0$ & $\Delta(e_{3})=c_{4}(e_{2}\otimes e_{3}-e_{3}\otimes e_{2})$ \\ \hline $\alpha(e_{1})=a_{1}e_{1}+e_{2},\ \alpha(e_{2})=a_{4}e_{2},\ \alpha(e_{3})=a_{5}e_{3}$ & $[e_{3}, e_{3}]=b_{2}e_{2},$ & All cobracket are zero \\ $(a_{5}=\pm\sqrt{a_{1}}, a_{1}=a_{4})$ & $[e_{1}, e_{2}]=[e_{1}, e_{3}]=[e_{2}, e_{3}]=0$ & \\ \hline $\alpha(e_{1})=a_{1}e_{1}+e_{2},\ \alpha(e_{2})=a_{4}e_{2},\ \alpha(e_{3})=0$ & $[e_{1}, e_{3}]=b_{5}e_{3},$ & All cobracket are zero \\ $(a_{1}=a_{4})$ & $[e_{1}, e_{2}]=[e_{3}, e_{3}]=[e_{2}, e_{3}]=0$ & \\ \hline \end{tabular}\\ \end{small} \ The following result shows that a Hom-Lie superbialgebra deforms into another Hom-Lie superbialgebra along any endomorphism. \begin{thm}\label{aaa} Let $(\mathfrak{L}, [\cdot ,\cdot ], \Delta, \alpha)$ be a Hom-Lie superbialgebra and an even map $\beta:\mathfrak{L}\rightarrow\mathfrak{L}$ be a Hom-Lie superbialgebra morphism. Then $$\mathfrak{L}_{\beta}=(\mathfrak{L}, [\cdot ,\cdot ]_{\beta}=\beta\circ[\cdot ,\cdot ], \Delta_{\beta}=\Delta\circ\beta,\beta\alpha)$$ is also a Hom-Lie superbialgebra, which is multiplicative if $\mathfrak{L}$ is. \end{thm} \begin{proof} It is immediate that $[\cdot ,\cdot ]_{\beta}$ is skew-supersymmetric (\ref{701}) because $$ [x,y]_{\beta}=\beta([x,y])=\beta(-(-1)^{{|x|}{|y|}}[y,x])=-(-1)^{{|x|}{|y|}}\beta([y,x])=-(-1)^{{|x|}{|y|}}[y,x]_{\beta}.$$ The Hom super-Jacobi identity holds in $\mathfrak{L}_{\beta}$ because $$(-1)^{{|x|}{|z|}} [\beta\alpha(x),[y,z]_{\beta}]_{\beta} =(-1)^{{|x|}{|z|}}\beta^{2}[\alpha(x),[y,z]]=-(-1)^{{|z|}{|y|}}[\beta\alpha(z),[x,y]_{\beta}]_{\beta}-(-1)^{{|y|}{|x|}}[\beta\alpha(y),[z,x]_{\beta}]_{\beta},$$ i.e., $$(-1)^{{|x|}{|z|}} [\beta\alpha(x),[y,z]_{\beta}]_{\beta}+(-1)^{{|z|}{|y|}}[\beta\alpha(z),[x,y]_{\beta}]_{\beta}+ (-1)^{{|y|}{|x|}}[\beta\alpha(y),[z,x]_{\beta}]_{\beta}=0.$$ $\Delta_{\beta}$ is skew-supersymmetric because $$\tau\circ\Delta_{\beta}=\tau\circ\Delta\circ\beta=-\Delta\circ\beta=-\Delta_{\beta}.$$ Likewise, the Hom-super-co-jacobi identity holds in $\mathfrak{L}_{\beta}$ because $$(1\otimes1\otimes1+\xi+\xi^{2})\circ(\beta\alpha\otimes\Delta_{\beta})\circ\Delta_{\beta}=(\beta^{\otimes{3}})^{2}(1\otimes1\otimes1+\xi+\xi^{2})\circ(\alpha\otimes\Delta)\circ\Delta=0.$$ To check the compatibility condition (\ref{a}) in $\mathfrak{L}_{\beta}$, we compute as follows : \begin{align*} \Delta_{\beta}([x,y]_{\beta})&= (\beta^{\otimes{2}})^{2}\Delta([x,y])\\ &=(\beta^{\otimes{2}})^{2}([\alpha(x), y_{1}]\otimes\alpha(y_{2}))+(\beta^{\otimes{2}})^{2}((-1)^{{|x|}{|y_{1}|}}\alpha(y_{1})\otimes[\alpha(x), y_{2}])\\ &-(\beta^{\otimes{2}})^{2}((-1)^{{|x|}{|y|}}[\alpha(y), x_{1}]\otimes\alpha(x_{2}))-(\beta^{\otimes{2}})^{2}((-1)^{{|x|}{|y|}}(-1)^{{|y|}{|x_{1}|}}\alpha(x_{1})\otimes[\alpha(y), x_{2}])\\ &=[\beta\alpha(x),\beta(y_{1})]_{\beta}\otimes\beta\alpha(\beta(y_{2}))+(-1)^{{|x|}{|y_{1}|}}\beta\alpha(\beta(y_{1}))\otimes[\beta\alpha(x),\beta(y_{2})]_{\beta}\\ &-(-1)^{{|x|}{|y|}}[\beta\alpha(y),\beta(x_{1})]_{\beta}\otimes\beta\alpha(\beta(x_{2}))-(-1)^{{|x|}{|y|}}(-1)^{{|y|}{|x_{1}|}}\beta\alpha(\beta(x_{1}))\otimes[\beta\alpha(y),\beta(x_{2})]_{\beta}\\ 
&=[\beta\alpha(x),\beta(y_{1})]_{\beta}\otimes\beta\alpha(\beta(y_{2}))+(-1)^{{|x|}{|\beta(y_{1})|}}\beta\alpha(\beta(y_{1}))\otimes[\beta\alpha(x),\beta(y_{2})]_{\beta}\\ &-(-1)^{{|x|}{|y|}}[\beta\alpha(y),\beta(x_{1})]_{\beta}\otimes\beta\alpha(\beta(x_{2}))-(-1)^{{|x|}{|y|}}(-1)^{{|y|}{|\beta(x_{1})|}}\beta\alpha(\beta(x_{1}))\otimes[\beta\alpha(y),\beta(x_{2})]_{\beta}\\ &=ad_{\beta\alpha(x)}(\Delta_{\beta}(y))-(-1)^{{|x|}{|y|}}ad_{\beta\alpha(y)}(\Delta_{\beta}(x)). \end{align*} Here we used that $|\beta(x_{1})|=|x_{1}|$ and $|\beta(y_{1})|=|y_{1}|$, since $\beta$ is an even map. We have shown that $\mathfrak{L}_{\beta}$ is a Hom-Lie superbialgebra. The multiplicativity assertion is obvious. \end{proof} Now we discuss two special cases of Theorem \ref{aaa}. The next result says that one can obtain multiplicative Hom-Lie superbialgebras from Lie superbialgebras and their endomorphisms. A construction result of this form for Hom-type algebras was first given in \cite{Yau1}. \begin{cor}\label{aaaa} Let $(\mathfrak{L}, [\cdot ,\cdot ], \Delta)$ be a Lie superbialgebra and let $\beta:\mathfrak{L}\rightarrow\mathfrak{L}$ be an even Lie superbialgebra morphism. Then $$ \mathfrak{L}_{\beta}=(\mathfrak{L}, [\cdot ,\cdot ]_{\beta}=\beta\circ[\cdot ,\cdot ], \Delta_{\beta}=\Delta\circ\beta,\beta)$$ is a multiplicative Hom-Lie superbialgebra. \end{cor} \begin{proof} This is the $\alpha=Id$ special case of Theorem \ref{aaa}. \end{proof} The next result says that every multiplicative Hom-Lie superbialgebra gives rise to an infinite sequence of multiplicative Hom-Lie superbialgebras. \begin{cor}\label{www} Let $(\mathfrak{L}, [\cdot ,\cdot ], \Delta, \alpha)$ be a multiplicative Hom-Lie superbialgebra. Then $$ \mathfrak{L}_{\alpha^{n}}=(\mathfrak{L}, [\cdot ,\cdot ]_{\alpha^{n}}=\alpha^{n}\circ[\cdot ,\cdot ], \Delta_{\alpha^{n}}=\Delta\circ\alpha^{n},\alpha^{n+1})$$ is also a multiplicative Hom-Lie superbialgebra for each integer $n\geq0$. \end{cor} \begin{proof} This is the $\beta=\alpha^{n}$ special case of Theorem \ref{aaa}. \end{proof} Next we consider when Hom-Lie superbialgebras of the form $\mathfrak{L}_{\beta}$, as in Corollary \ref{aaaa}, are isomorphic. \begin{thm} \label{bb} Let $\mathfrak{g}$ and $\mathfrak{h}$ be Lie superbialgebras. Let $\alpha:\mathfrak{g}\rightarrow\mathfrak{g}$ and $\beta:\mathfrak{h}\rightarrow\mathfrak{h}$ be Lie superbialgebra morphisms with $\beta$ and $\beta^{\otimes{2}}$ injective. Then the following statements are equivalent:\\ 1) The Hom-Lie superbialgebras $\mathfrak{g}_{\alpha}$ and $\mathfrak{h}_{\beta}$, as in Corollary \ref{aaaa}, are isomorphic.\\ 2) There exists a Lie superbialgebra isomorphism $\gamma:\mathfrak{g}\rightarrow\mathfrak{h}$ such that $\gamma\alpha=\beta\gamma.$ \end{thm} \begin{proof} To show that the first statement implies the second statement, suppose that $\gamma:\mathfrak{g}_{\alpha}\rightarrow\mathfrak{h}_{\beta}$ is an isomorphism of Hom-Lie superbialgebras. Then $\gamma\alpha=\beta\gamma$ automatically.\\ To see that $\gamma$ is a Lie superbialgebra isomorphism, first we check that it commutes with the Lie brackets.
For any two elements $x$ and $y$ in $\mathfrak{g}$, we have $$\beta\gamma[x,y]=\gamma\alpha[x,y] =\gamma([x,y]_{\alpha}) =[\gamma(x),\gamma(y)]_{\beta} =\beta[\gamma(x),\gamma(y)].$$ Since $\beta$ is injective, we conclude that $\gamma[x,y]=[\gamma(x),\gamma(y)],$ i.e., $\gamma$ is a morphism of Lie superalgebras.\\ To check that $\gamma$ commutes with the Lie cobrackets, we compute as follows: \begin{align*} \beta^{\otimes{2}}(\gamma^{\otimes{2}}(\Delta(x)))&=(\beta\gamma)^{\otimes{2}}(\Delta(x)) =(\gamma\alpha)^{\otimes{2}}(\Delta(x)) =\gamma^{\otimes{2}}(\alpha^{\otimes{2}}(\Delta(x))) =\gamma^{\otimes{2}}(\Delta_{\alpha}(x)) =\Delta_{\beta}(\gamma(x))\\ &=\beta^{\otimes{2}}(\Delta(\gamma(x))). \end{align*} The injectivity of $\beta^{\otimes{2}}$ now implies that $\gamma$ commutes with the Lie cobrackets. Therefore, $\gamma$ is a Lie superbialgebra isomorphism. The other implication is proved by a similar argument, much of which is already given above. \end{proof} For a Lie superbialgebra $\mathfrak{g}$, let Aut$(\mathfrak{g})$ be the group of Lie superbialgebra isomorphisms from $\mathfrak{g}$ to $\mathfrak{g}$. In Theorem \ref{bb}, restricting to the case $\mathfrak{g}=\mathfrak{h}$ with $\alpha$ and $\beta$ both invertible, we obtain the following special case. \begin{cor}\label{bbb} Let $\mathfrak{g}$ be a Lie superbialgebra and $\alpha, \beta\in$ Aut$(\mathfrak{g})$. Then the Hom-Lie superbialgebras $\mathfrak{g}_{\alpha}$ and $\mathfrak{g}_{\beta}$, as in Corollary \ref{aaaa}, are isomorphic if and only if $\alpha$ and $\beta$ are conjugate in Aut$(\mathfrak{g})$. \end{cor} Corollary \ref{bbb} can be restated as follows. \begin{cor} Let $\mathfrak{g}$ be a Lie superbialgebra. Then there is a bijection between the following two sets:\\ 1) The set of isomorphism classes of Hom-Lie superbialgebras $\mathfrak{g}_{\alpha}$ with $\alpha$ invertible.\\ 2) The set of conjugacy classes in the group Aut$(\mathfrak{g})$. \end{cor} The next result shows that finite dimensional Hom-Lie superbialgebras, like Lie superbialgebras, can be dualized. A proof of this self-dual property for the special case of Lie bialgebras can be found in \cite{Majid}. \begin{remark}\label{01} $\bullet$ If $(\mathfrak{L}, \Delta, \alpha)$ is a Hom-Lie supercoalgebra, then $(\mathfrak{L}^{\ast}, [\cdot ,\cdot ] , \alpha)$ is a Hom-Lie superalgebra. Here $[\cdot ,\cdot ]$ and $\alpha$ in $\mathfrak{L}^{\ast}$ are dual to $\Delta$ and $\alpha$, respectively, in $\mathfrak{L}$.\\ $\bullet$ Conversely, if $(\mathfrak{L}, [\cdot ,\cdot ] , \alpha)$ is a finite dimensional Hom-Lie superalgebra, then $(\mathfrak{L}^{\ast}, \Delta , \alpha)$ is a Hom-Lie supercoalgebra, where $\Delta$ and $\alpha$ in $\mathfrak{L}^{\ast}$ are dual to $[\cdot ,\cdot ]$ and $\alpha$, respectively, in $\mathfrak{L}$. \end{remark} \begin{thm}\label{amineeeee} Let $(\mathfrak{L}, [\cdot ,\cdot ], \Delta, \alpha)$ be a finite dimensional (multiplicative) Hom-Lie superbialgebra. Then its linear dual $\mathfrak{L^\ast}=$Hom$(\mathfrak{L}, \mathbb{K})$ is also a (multiplicative) Hom-Lie superbialgebra with the dual structure maps: \begin{equation}\label{amine} \alpha(\phi)= \phi\circ\alpha,\ \ \ \langle[\phi, \psi], x \rangle=\langle \phi\otimes\psi, \Delta(x)\rangle, \ \ \ \langle\Delta(\phi), x\otimes y\rangle=\langle \phi, [x,y]\rangle,\\ \end{equation} for $x,y\in\mathfrak{L}$ and $\phi,\psi\in\mathfrak{L^\ast}$.
\end{thm} \begin{proof} As we mentioned right after Remark \ref{01}, $(\mathfrak{L}^{\ast}, [\cdot ,\cdot ] , \alpha)$ is a Hom-Lie superalgebra, which is true even if $\mathfrak{L}$ is not finite dimensional. Moreover, $(\mathfrak{L}^{\ast}, \Delta, \alpha)$ is a Hom-Lie supercoalgebra, whose validity depends on the finite dimensionality of $\mathfrak{L}$. Thus, it remains to check the compatibility condition (\ref{a}) between the bracket and the cobracket in $\mathfrak{L}^{\ast}$, i.e., \begin{eqnarray}\label{cc} \langle\Delta([\phi, \psi]),x\otimes y\rangle=\langle ad_{\alpha(\phi)}(\Delta(\psi))-(-1)^{{|\phi|}{|\psi|}}ad_{\alpha(\psi)}(\Delta(\phi)), x\otimes y\rangle \end{eqnarray} for $x,y\in\mathfrak{L}$ and $\phi,\psi\in\mathfrak{L^\ast}$.\\ Using Definition \ref{amine}, the compatibility condition (\ref{a}) in $\mathfrak{L}$, we compute the left-hand side of (\ref{cc}) as follows:\\ \begin{align*} \langle\Delta([\phi, \psi]),x\otimes y\rangle&= \langle[\phi, \psi], [x, y]\rangle\\ &=\langle \phi\otimes\psi, \Delta([x, y])\rangle\\ &=\langle \phi\otimes\psi, ad_{\alpha(x)}(\Delta(y))-(-1)^{{|x|}{|y|}}ad_{\alpha(y)}(\Delta(x))\rangle\\ &=\langle \phi\otimes\psi,[\alpha(x), y_{1}]\otimes\alpha(y_{2})\rangle + (-1)^{{|x|}{|y_{1}|}}\langle \phi\otimes\psi, \alpha(y_{1})\otimes[\alpha(x), y_{2}]\rangle\\ &-(-1)^{{|x|}{|y|}}\langle\phi\otimes\psi, [\alpha(y), x_{1}]\otimes\alpha(x_{2})\rangle - (-1)^{{|x|}{|y|}}(-1)^{{|y|}{|x_{1}|}}\langle \phi\otimes\psi, \alpha(x_{1})\otimes[\alpha(y), x_{2}]\rangle\\ &=\langle \Delta(\phi)\otimes\psi,\alpha(x)\otimes y_{1}\otimes\alpha(y_{2})\rangle + (-1)^{{|x|}{|y_{1}|}}\langle \phi\otimes\Delta(\psi), \alpha(y_{1})\otimes\alpha(x)\otimes y_{2}\rangle\\ &-(-1)^{{|x|}{|y|}}\langle\Delta(\phi)\otimes\psi, \alpha(y)\otimes x_{1}\otimes\alpha(x_{2})\rangle\\ &-(-1)^{{|x|}{|y|}}(-1)^{{|y|}{|x_{1}|}}\langle \phi\otimes\Delta (\psi), \alpha(x_{1})\otimes\alpha(y)\otimes x_{2}\rangle\\ &=\langle \phi_{1}\otimes\phi_{2}\otimes\psi,\alpha(x)\otimes y_{1}\otimes\alpha(y_{2})\rangle + (-1)^{{|x|}{|y_{1}|}}\langle \phi\otimes\psi_{1}\otimes\psi_{2}, \alpha(y_{1})\otimes\alpha(x)\otimes y_{2}\rangle\\ &-(-1)^{{|x|}{|y|}}\langle\phi_{1}\otimes\phi_{2}\otimes\psi, \alpha(y)\otimes x_{1}\otimes\alpha(x_{2})\rangle\\ &-(-1)^{{|x|}{|y|}}(-1)^{{|y|}{|x_{1}|}}\langle\phi\otimes\psi_{1}\otimes\psi_{2}, \alpha(x_{1})\otimes\alpha(y)\otimes x_{2}\rangle\\ &=\langle \alpha(\phi_{1})\otimes\phi_{2}\otimes\alpha(\psi), x\otimes y_{1}\otimes y_{2}\rangle + (-1)^{{|x|}{|y_{1}|}}\langle \alpha(\phi)\otimes\alpha(\psi_{1})\otimes\psi_{2}, y_{1}\otimes x\otimes y_{2}\rangle \\ &-(-1)^{{|x|}{|y|}}\langle\alpha(\phi_{1})\otimes\phi_{2}\otimes\alpha(\psi), y\otimes x_{1}\otimes x_{2}\rangle\\ &-(-1)^{{|x|}{|y|}}(-1)^{{|y|}{|x_{1}|}}\langle\alpha(\phi)\otimes\alpha(\psi_{1})\otimes\psi_{2}, x_{1}\otimes y\otimes x_{2}\rangle. 
\end{align*} Using, in addition, the skew-supersymmetric of the bracket and the cobracket in $\mathfrak{L}^{\ast}$,\\ $([\phi_{1}, \alpha(\psi)]=-(-1)^{{|\phi_{1}|}{|\psi|}}[\alpha(\psi), \phi_{1}]$ and $\phi_{1}\otimes\phi_{2}=-(-1)^{{|\phi_{1}|}{|\phi_{2}|}}\phi_{2}\otimes\phi_{1}$).\\ The above four terms become: \begin{align*} &=-(-1)^{{|\phi|}{|\psi|}}(-1)^{{|\psi|}{|\phi_{1}|}}\langle\alpha(\phi_{1})\otimes[\alpha(\psi), \phi_{2}], x\otimes y\rangle + (-1)^{{|\phi|}{|\psi_{1}|}}\langle\alpha(\psi_{1})\otimes[\alpha(\phi), \psi_{2}], x\otimes y\rangle\\ &-(-1)^{{|\phi|}{|\psi|}}\langle[\alpha(\psi), \phi_{1}]\otimes\alpha(\phi_{2}), x\otimes y\rangle + \langle[\alpha(\phi), \psi_{1}]\otimes\alpha(\psi_{2}), x\otimes y\rangle.\\ &=\langle ad_{\alpha(\phi)}(\Delta(\psi))-(-1)^{{|\phi|}{|\psi|}}ad_{\alpha(\psi)}(\Delta(\phi)), x\otimes y\rangle \end{align*} This is exactly the right-hand side of (\ref{cc}). \end{proof} \section{Matched pairs, Hom-Lie superbialgebras and Manin supertriples} In this section, we introduce the notions of matched pair of Hom-Lie superalgebras and a Manin supertriple of Hom-Lie superalgebras. First, we recall the basics about representations of a Hom-Lie superalgebras. \begin{definition}\label{samak} (\cite{Ammar F Makhlouf A 2,Sheng,Sheng1}) Let $(\mathfrak{L}, [\cdot ,\cdot ], \alpha)$ be a Hom-Lie superalgebra and $M=M_{\bar{0}}\oplus M_{\bar{1}}$ an arbitrary vector superspace. A representation of the Hom-Lie superalgebra with respect to $A\in \mathfrak{gl}(M)$, is an even linear map $\rho:\mathfrak{L} \rightarrow End(M)$, such that $\rho(\mathfrak{L_{i}})(M_{j})\subset M_{i+j}$ where $i, j\in\mathbb{Z}_{2}$, and satisfying \begin{equation}\label{1 condition} \rho(\alpha(x))\circ A=A\circ\rho(x) \end{equation} and \begin{equation}\label{2 condition} \rho([x, y])\circ A=\rho(\alpha(x))\circ\rho(y)-(-1)^{{|x|}{|y|}}\rho(\alpha(y))\circ\rho(x). \end{equation} for all homogeneous elements $x, y \in\mathfrak{L}$.\\ We denote a representation by $(M, \rho,A)$. It is straightforward to see that $(\mathfrak{L}, ad, \alpha)$ is a representation, called the adjoint representation, see \cite{Benayadi Makhlouf,Sheng}.\\ Given a representation $(M, \rho,A)$, define $\rho^{\ast}:\mathfrak{L}\rightarrow End(M^{\ast})$ by \begin{equation} \langle\rho^{\ast}(x)(\xi), \nu\rangle=-(-1)^{{|x|}{|\xi|}}\langle\xi,\rho(x)(\nu)\rangle, \end{equation} $\forall$ $x\in\mathfrak{L}$, $\xi\in M^{\ast}$, $\nu\in M.$ This representation is called admissible representation with respect to $(M, \rho,A)$. \end{definition} Now, let $(\mathfrak{g}, [\cdot ,\cdot ]_{\mathfrak{g}}, \alpha_{\mathfrak{g}})$ and $(\mathfrak{g'}, [\cdot ,\cdot ]_{\mathfrak{g'}}, \alpha_{\mathfrak{g'}})$ be two multiplicative Hom-Lie superalgebras. Set $\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}$ and $\mathfrak{g'}=\mathfrak{g'}_{\bar{0}}\oplus\mathfrak{g'}_{\bar{1}}$. Let $\rho:\mathfrak{g} \rightarrow \mathfrak{gl}(\mathfrak{g'})$ and $\rho':\mathfrak{g'} \rightarrow \mathfrak{gl}(\mathfrak{g})$ be two linear maps. Define a skew-supersymmetric bracket $[\cdot ,\cdot ]_{\widetilde{G}}: \widetilde{G}\times\widetilde{G}\rightarrow \widetilde{G}$, where $\widetilde{G}$ is given by : $\widetilde{G}=\widetilde{G}_{\bar{0}}\oplus \widetilde{G}_{\bar{1}}$, where $\widetilde{G}_{\bar{0}}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g'}_{\bar{0}}$ and $\widetilde{G}_{\bar{1}}=\mathfrak{g}_{\bar{1}}\oplus\mathfrak{g'}_{\bar{1}}$. 
We set $|(x,x')|=|x|=|x'|$ and $|(y,y')|=|y|=|y'|$, for all homogeneous elements $x, y$ in $\mathfrak{g}$ and $x', y'$ in $\mathfrak{g'}$. Define $\widetilde{\alpha}:\widetilde{G}\rightarrow\widetilde{G}$ by $$\widetilde{\alpha}(x, x')=(\alpha_{\mathfrak{g}}(x), \alpha_{\mathfrak{g'}}(x')),$$ and the bracket $\widetilde{G}$ by \begin{equation}\label{zara} \textbf{[}(x, x'), (y, y')\textbf{]}_{\widetilde{G}}=\textbf{(}[x, y]_{\mathfrak{g}}-(-1)^{{|x|}{|y|}}\rho'(y')(x)+\rho'(x')(y), [x', y']_{\mathfrak{g'}}+\rho(x)(y')-(-1)^{{|x|}{|y|}}\rho(y)(x')\textbf{)}. \end{equation} \begin{thm}\label{nouveau} The triple $(\widetilde{G}=\mathfrak{g}\oplus\mathfrak{g'}, [\cdot ,\cdot ]_{\widetilde{G}}, \widetilde{\alpha})$, where $\widetilde{G}$, $[\cdot ,\cdot ]_{\widetilde{G}}$, $\widetilde{\alpha}$ are defined above is a multiplicative Hom-Lie superalgebra if and only if $\rho$ and $\rho'$ are representations of $\mathfrak{g}$ and $\mathfrak{g'}$ respectively and the following conditions are satisfied \begin{align}\label{seff} \rho(\alpha_{\mathfrak{g}}(z))([x', y']_{\mathfrak{g'}})&=[\rho(z)(x'), \alpha_{\mathfrak{g'}}(y')]_{\mathfrak{g'}}+(-1)^{{|x|}{|z|}}[\alpha_{\mathfrak{g'}}(x'), \rho(z)(y')]_{\mathfrak{g'}} \\&+(-1)^{{{|x|}{|y|}}+{{|y|}{|z|}}}\rho(\rho'(y')(z))(\alpha_{\mathfrak{g'}}(x'))-(-1)^{{|x|}{|z|}}\rho(\rho'(x')(z))(\alpha_{\mathfrak{g'}}(y')).\nonumber\end{align} \begin{align}\label{chevro} \rho'(\alpha_{\mathfrak{g'}}(z'))([x, y]_{\mathfrak{g}})&=[\rho'(z')(x), \alpha_{\mathfrak{g}}(y)]_{\mathfrak{g}}+(-1)^{{|x|}{|z|}}[\alpha_{\mathfrak{g}}(x), \rho'(z')(y)]_{\mathfrak{g}}\\ &+(-1)^{{{|x|}{|y|}}+{{|y|}{|z|}}}\rho'(\rho(y)(z'))(\alpha_{\mathfrak{g}}(x))-(-1)^{{|x|}{|z|}}\rho'(\rho(x)(z'))(\alpha_{\mathfrak{g}}(y)).\nonumber\end{align} \end{thm} \begin{proof} Assume $(\widetilde{G}=\mathfrak{g}\oplus\mathfrak{g'}, [\cdot ,\cdot ]_{\widetilde{G}}, \widetilde{\alpha})$ is a multiplicative Hom-Lie superalgebra. The multiplicativity condition writes \begin{equation}\label{multiplicative} \widetilde{\alpha}\textbf{(}\textbf{[}(x, x'), (y, y')\textbf{]}_{\widetilde{G}}\textbf{)}=\textbf{[}\widetilde{\alpha}(x, x'), \widetilde{\alpha}(y, y')\textbf{]}_{\widetilde{G}}. \end{equation} Developing (\ref{multiplicative}), leads to the first condition (\ref{1 condition}). 
Indeed, \begin{align*}\widetilde{\alpha}\textbf{(}\textbf{[}(x, x'), (y, y')\textbf{]}_{\widetilde{G}}\textbf{)}&=\textbf{(}\alpha_{\mathfrak{g}}([x, y]_{\mathfrak{g}})-(-1)^{{|x|}{|y|}}\alpha_{\mathfrak{g}}(\rho'(y')(x))+\alpha_{\mathfrak{g}}(\rho'(x')(y)),\\& \alpha_{\mathfrak{g'}}([x', y']_{\mathfrak{g'}})+\alpha_{\mathfrak{g'}}(\rho(x)(y'))-(-1)^{{|x|}{|y|}}\alpha_{\mathfrak{g'}}(\rho(y)(x'))\textbf{)}.\end{align*} \begin{align*}\textbf{[}\widetilde{\alpha}(x, x'), \widetilde{\alpha}(y, y')\textbf{]}_{\widetilde{G}}& =\textbf{(}[\alpha_{\mathfrak{g}}(x), \alpha_{\mathfrak{g}}(y)]_{\mathfrak{g}}-(-1)^{{|x|}{|y|}}\rho'(\alpha_{\mathfrak{g'}}(y'))(\alpha_{\mathfrak{g}}(x))+ \rho'(\alpha_{\mathfrak{g'}}(x'))(\alpha_{\mathfrak{g}}(y)),\\& [\alpha_{\mathfrak{g'}}(x'), \alpha_{\mathfrak{g'}}(y')]_{\mathfrak{g'}}+\rho(\alpha_{\mathfrak{g}}(x))(\alpha_{\mathfrak{g'}}(y')) -(-1)^{{|x|}{|y|}}\rho(\alpha_{\mathfrak{g}}(y))(\alpha_{\mathfrak{g'}}(x'))\textbf{)}.\end{align*} Since $\alpha_{\mathfrak{g}}$ and $\alpha_{\mathfrak{g'}}$ are multiplicative, comparing the two expressions above yields \begin{equation}\label{57} \rho(\alpha_{\mathfrak{g}}(x))\circ\alpha_{\mathfrak{g'}}=\alpha_{\mathfrak{g'}}\circ\rho(x), \end{equation} \begin{equation}\label{058} \rho'(\alpha_{\mathfrak{g'}}(x'))\circ\alpha_{\mathfrak{g}}=\alpha_{\mathfrak{g}}\circ\rho'(x'). \end{equation} Developing the Hom-super-Jacobi identity (\ref{702}), which states that for $x, y, z \in\mathfrak{g}$ and $x', y', z'\in\mathfrak{g'}$,\\ $$(-1)^{{|(x,x')|}{|(z, z')|}}[\widetilde{\alpha}(x, x'), [(y, y'), (z, z')]_{\widetilde{G}}]_{\widetilde{G}}+(-1)^{{|(y,y')|}{|(x, x')|}}[\widetilde{\alpha}(y, y'), [(z, z'), (x, x')]_{\widetilde{G}}]_{\widetilde{G}}$$ $$+(-1)^{{|(z,z')|}{|(y, y')|}}[\widetilde{\alpha}(z, z'), [(x, x'), (y, y')]_{\widetilde{G}}]_{\widetilde{G}}=0,$$ leads to the second condition (\ref{2 condition}).
Indeed, \begin{align*} \bullet &(-1)^{{|(x,x')|}{|(z, z')|}}[\widetilde{\alpha}(x, x'), [(y, y'), (z, z')]_{\widetilde{G}}]_{\widetilde{G}}= (-1)^{{|x|}{|z|}}\textbf{(}[\alpha_{\mathfrak{g}}(x), [y, z]_{\mathfrak{g}}]_{\mathfrak{g}}-(-1)^{{|y|}{|z|}}[\alpha_{\mathfrak{g}}(x), \rho'(z')(y)]_{\mathfrak{g}}\\ &+[\alpha_{\mathfrak{g}}(x), \rho'(y')(z)]_{\mathfrak{g}} -(-1)^{{|x|}({|y|}+{|z|})}\rho'([y', z']_{\mathfrak{g'}})(\alpha_{\mathfrak{g}}(x))-(-1)^{{|x|}({|y|}+{|z|})}\rho'(\rho(y)(z'))(\alpha_{\mathfrak{g}}(x))\\ &+(-1)^{{|x|}({|y|}+{|z|})}(-1)^{{|y|}{|z|}}\rho'(\rho(z)(y'))(\alpha_{\mathfrak{g}}(x)) +\rho'(\alpha_{\mathfrak{g'}}(x'))([y, z]_{\mathfrak{g}})\\ &-(-1)^{{|y|}{|z|}}\rho'(\alpha_{\mathfrak{g'}}(x'))(\rho'(z')(y)) +\rho'(\alpha_{\mathfrak{g'}}(x'))(\rho'(y')(z))\textbf{,} \ \ [\alpha_{\mathfrak{g'}}(x'), [y', z']_{\mathfrak{g'}}]_{\mathfrak{g'}}\\ &+[\alpha_{\mathfrak{g'}}(x'),\rho(y)(z')]_{\mathfrak{g'}} -(-1)^{{|y|}{|z|}}[\alpha_{\mathfrak{g'}}(x'), \rho(z)(y')]_{\mathfrak{g'}} +\rho(\alpha_{\mathfrak{g}}(x))([y',z']_{\mathfrak{g'}})\\ &+\rho(\alpha_{\mathfrak{g}}(x))(\rho(y)(z')) -(-1)^{{|y|}{|z|}}\rho(\alpha_{\mathfrak{g}}(x))(\rho(z)(y')) -(-1)^{{|x|}({|y|}+{|z|})}\rho([y,z]_{\mathfrak{g}})(\alpha_{\mathfrak{g'}}(x'))\\ &+(-1)^{{|x|}({|y|}+{|z|})}(-1)^{{|y|}{|z|}}\rho(\rho'(z')(y))(\alpha_{\mathfrak{g'}}(x')) -(-1)^{{|x|}({|y|}+{|z|})}\rho(\rho'(y')(z))(\alpha_{\mathfrak{g'}}(x'))\textbf{)}.\\ \end{align*} Similarly, we compute : $$(-1)^{{|(y,y')|}{|(x, x')|}}[\widetilde{\alpha}(y, y'), [(z, z'), (x, x')]_{\widetilde{G}}]_{\widetilde{G}} \ \ and \ \ (-1)^{{|(z,z')|}{|(y, y')|}}[\widetilde{\alpha}(z, z'), [(x, x'), (y, y')]_{\widetilde{G}}]_{\widetilde{G}}.$$ Setting ($|x|=|x'|$, $|y|=|y'|$, and $|z|=|z'|$ ) give \begin{equation}\label{1122a} \rho([x, y]_{\mathfrak{g}})\circ\alpha_{\mathfrak{g'}}=\rho(\alpha_{\mathfrak{g}}(x))\circ\rho(y)-(-1)^{{|x|}{|y|}}\rho(\alpha_{\mathfrak{g}}(y))\circ\rho(x), \end{equation} \begin{equation}\label{6060} \rho'([x', y']_{\mathfrak{g'}})\circ\alpha_{\mathfrak{g}}=\rho'(\alpha_{\mathfrak{g'}}(x'))\circ\rho'(y')-(-1)^{{|x'|}{|y'|}}\rho'(\alpha_{\mathfrak{g'}}(y'))\circ\rho'(x'), \end{equation} which implies (\ref{seff}), and (\ref{chevro}).\\ By Eqs. (\ref{57}) and (\ref{1122a}), we deduce that $\rho$ is a representation of the Hom-Lie superalgebra $(\mathfrak{g}, [\cdot ,\cdot ]_{\mathfrak{g}}, \alpha_{\mathfrak{g}})$ on $\mathfrak{g'}$ with respect to $\alpha_{\mathfrak{g'}}$. By Eqs. (\ref{058}) and (\ref{6060}), we deduce that $\rho'$ is a representation of the Hom-Lie superalgebra $(\mathfrak{g'}, [\cdot ,\cdot ]_{\mathfrak{g'}}, \alpha_{\mathfrak{g'}})$ on $\mathfrak{g}$ with respect to $\alpha_{\mathfrak{g}}$. \end{proof} \begin{example}[See \cite{Ammar F Makhlouf A 2}] Given a representation $(V, [., .]_{V}, \beta)$ of a Hom-Lie superalgebra $(G, [\cdot ,\cdot ], \alpha)$. Set $\widetilde{G}=G\oplus V$ and $\widetilde{G_{k}}=G_{k}\oplus V_{k}$. If $x\in G_{i}$ and $v\in V_{i}$ $(i\in\mathbb{Z}_{2})$, we denote $|(x, v)|=|x|$.\\ Define a skew-supersymmetric bracket $[\cdot ,\cdot ]_{\widetilde{G}}:\wedge^{2}(G\oplus V)\rightarrow G\oplus V$ by\\ $$[(x, \mu), (y, v)]_{\widetilde{G}}=([x, y], [x, v]_{V}-(-1)^{{|x|}{|y|}}[y, \mu]_{V}).$$\\ Define $\widetilde{\alpha}:G\oplus V\rightarrow G\oplus V$ by $\widetilde{\alpha}(x, v)=(\alpha(x), \beta(v))$. 
Then $(G\oplus V, [-, -]_{\widetilde{G}}, \widetilde{\alpha})$ is a Hom-Lie superalgebra, which we call the semi-direct product of the Hom-Lie superalgebra $(G, [\cdot ,\cdot ], \alpha)$ by $V$. \end{example} \begin{definition} For any $x\in\mathfrak{g}$, define $ad_{x}:\wedge^{n}\mathfrak{g}\rightarrow\wedge^{n}\mathfrak{g}$ by $ad_{x}P=[x, P]_{\mathfrak{g}}$. Its dual map $ad^{\ast}:\wedge^{n}\mathfrak{g}^{\ast}\rightarrow\wedge^{n}\mathfrak{g}^{\ast}$ is defined as \begin{equation} \langle ad_{x}(y), \xi \rangle=-(-1)^{{|x|}{|y|}}\langle y, ad^{\ast}_{x}(\xi)\rangle, \end{equation} for all $x, y \in\mathfrak{g}$, and $\xi\in\mathfrak{g}^{\ast}$, where the pairing $\langle \ , \ \rangle$ is supersymmetric, i.e. $\langle x, \xi \rangle=(-1)^{{|x|}{|\xi|}}\langle \xi, x\rangle,$ for $x\in\mathfrak{g}$,\\ and $\xi\in\mathfrak{g}^{\ast}$. More precisely, for any $\xi_{1},\cdot\cdot\cdot,\xi_{n}\in\mathfrak{g}^{\ast}$, we have \begin{equation} ad^{\ast}_{x}(\xi_{1}\wedge\cdot\cdot\cdot\wedge\xi_{n})=\sum_{i=1}^{n} (-1)^{{|x|}({|\xi_{1}|+{|\xi_{2}|}}+\cdot\cdot\cdot+{|\xi_{i-1}|})}\alpha^{\ast}_{\mathfrak{g}}(\xi_{1})\wedge\cdot\cdot\cdot\wedge ad^{\ast}_{x}(\xi_{i})\wedge\cdot\cdot\cdot\wedge\alpha^{\ast}_{\mathfrak{g}}(\xi_{n}). \end{equation} \end{definition} \begin{definition} A \emph{matched pair of Hom-Lie superalgebras}, which we denote by $(\mathfrak{g}, \mathfrak{g'}, \rho, \rho')$, consists of two Hom-Lie superalgebras $(\mathfrak{g}, [\cdot ,\cdot ]_{\mathfrak{g}}, \alpha_{\mathfrak{g}})$ and $(\mathfrak{g'}, [\cdot ,\cdot ]_{\mathfrak{g'}}, \alpha_{\mathfrak{g'}})$, together with representations $\rho:\mathfrak{g} \rightarrow \mathfrak{gl}(\mathfrak{g'})$ and $\rho':\mathfrak{g'} \rightarrow \mathfrak{gl}(\mathfrak{g})$ with respect to $\alpha_{\mathfrak{g'}}$ and $\alpha_{\mathfrak{g}}$ respectively, such that the compatibility conditions (\ref{seff}) and (\ref{chevro}) are satisfied. \end{definition} In the following, we concentrate on the case where $\mathfrak{g'}$ is $\mathfrak{g}^{\ast}$, the dual space of $\mathfrak{g}$, and $ \alpha_{\mathfrak{g'}}= \alpha^{\ast}_{\mathfrak{g}}$, $\rho=ad^{\ast}$, $\rho'=\mathfrak{ad}^{\ast}$, where $\mathfrak{ad}^{\ast}$ is the dual map of $\mathfrak{ad}$. Notice that $ad$ and $\mathfrak{ad}$ are the adjoint representations associated to the Hom-Lie superalgebras $\mathfrak{g}$ and $\mathfrak{g}^{\ast}$ respectively. Let $x, y, z$ be elements in $\mathfrak{g}$ and $\xi, \eta$ elements in $\mathfrak{g}^{\ast}$. \\ For a Hom-Lie superalgebra $(\mathfrak{g}, [\cdot ,\cdot ]_{\mathfrak{g}}, \alpha_{\mathfrak{g}})$ (resp. $(\mathfrak{g}^{\ast}, [\cdot ,\cdot ]_{\mathfrak{g}^{\ast}}, \alpha_{\mathfrak{g}^{\ast}})$), let $\Delta^{\ast}:\mathfrak{g}^{\ast}\rightarrow\wedge^{2}\mathfrak{g}^{\ast}$ (resp. $\Delta:\mathfrak{g}\rightarrow\wedge^{2}\mathfrak{g}$) be the dual map of $[\cdot ,\cdot ]_{\mathfrak{g}}:\wedge^{2}\mathfrak{g}\rightarrow\mathfrak{g}$ (resp. $[\cdot ,\cdot ]_{\mathfrak{g}^{\ast}}:\wedge^{2}\mathfrak{g}^{\ast}\rightarrow\mathfrak{g}^{\ast}$), i.e. $$\langle\Delta^{\ast}(\xi), x\wedge y \rangle =\langle \xi, [x, y]_{\mathfrak{g}}\rangle, \ \ \ \ \ \langle\Delta(x), \xi\wedge\eta \rangle =\langle x, [\xi, \eta]_{\mathfrak{g}^{\ast}}\rangle.$$ A Hom-Lie superalgebra $(\mathfrak{L}, [\cdot ,\cdot ], \alpha)$ is called \emph{admissible} if its adjoint representation is admissible, that is $[(Id-\alpha^{2})(x), \alpha(y)]=0$ for all $x, y\in\mathfrak{L}$.
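Equivalently, unpacking this condition by bilinearity of the bracket, admissibility amounts to the requirement
$$[\alpha^{2}(x), \alpha(y)]=[x, \alpha(y)] \qquad \text{for all homogeneous } x, y\in\mathfrak{L}.$$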
In particular, if a Hom-Lie superalgebra $(\mathfrak{L}, [\cdot ,\cdot ], \alpha)$ is \emph{involutive}, that is $\alpha^{2}=Id$, then it is admissible. \begin{proposition}\label{ahmedd} A pair of admissible Hom-Lie superalgebras $(\mathfrak{g}, [\cdot ,\cdot ]_{\mathfrak{g}}, \alpha_{\mathfrak{g}})$ and $(\mathfrak{g}^{\ast}, [\cdot ,\cdot ]_{\mathfrak{g}^{\ast}}, \alpha_{\mathfrak{g}}^{\ast})$ determines a Hom-Lie superbialgebra $(\mathfrak{g}, [\cdot ,\cdot ]_{\mathfrak{g}},\Delta, \alpha_{\mathfrak{g}})$, where $\Delta$ is the dual operation of $ [\cdot ,\cdot ]_{\mathfrak{g}^{\ast}}$, if \begin{equation}\label{sure} \langle\Delta([x, y]_{\mathfrak{g}}), \alpha_{\mathfrak{g}}^{\ast}(\xi)\wedge\eta \rangle = \langle ad_{\alpha_{\mathfrak{g}}(x)}(\Delta(y)), \alpha_{\mathfrak{g}}^{\ast}(\xi)\wedge\eta \rangle - (-1)^{{|x|}{|y|}} \langle ad_{\alpha_{\mathfrak{g}}(y)}(\Delta(x)), \alpha_{\mathfrak{g}}^{\ast}(\xi)\wedge\eta \rangle, \end{equation} \begin{equation}\label{2bia} \langle\Delta^{\ast}([\xi, \eta]_{\mathfrak{g}^{\ast}}), \alpha_{\mathfrak{g}}(x)\wedge y \rangle = \langle \mathfrak{ad}_{{\alpha_{\mathfrak{g}}^{\ast}(\xi)}}(\Delta^{\ast}(\eta)), \alpha_{\mathfrak{g}}(x)\wedge y \rangle - (-1)^{{|\xi|}{|\eta|}} \langle \mathfrak{ad}_{{\alpha_{\mathfrak{g}}^{\ast}(\eta)}}(\Delta^{\ast}(\xi)), \alpha_{\mathfrak{g}}(x)\wedge y \rangle. \end{equation} \end{proposition} \begin{remark} Following Bai and Sheng, we may denote this Hom-Lie superbialgebra by $(\mathfrak{g}, \mathfrak{g}^{\ast})$. \end{remark} \begin{thm}\label{bnbn10} Let $(\mathfrak{g}, [\cdot ,\cdot ]_{\mathfrak{g}}, \alpha_{\mathfrak{g}})$ and $(\mathfrak{g}^{\ast}, [\cdot ,\cdot ]_{\mathfrak{g}^{\ast}}, \alpha_{\mathfrak{g}}^{\ast})$ be a pair of admissible Hom-Lie superalgebras. Then $(\mathfrak{g}, [\cdot ,\cdot ]_{\mathfrak{g}},\Delta, \alpha_{\mathfrak{g}})$, where $\Delta$ is the dual of $[\cdot ,\cdot ]_{\mathfrak{g}^{\ast}}$, is a Hom-Lie superbialgebra if and only if $(\mathfrak{g}, [\cdot ,\cdot ]_{\mathfrak{g}}, \alpha_{\mathfrak{g}})$ and $(\mathfrak{g}^{\ast}, [\cdot ,\cdot ]_{\mathfrak{g}^{\ast}}, \alpha_{\mathfrak{g}}^{\ast})$ form a matched pair of Hom-Lie superalgebras, i.e. $(\mathfrak{g}\oplus\mathfrak{g}^{\ast}, [\cdot ,\cdot ]_{\widetilde{G}}, \alpha_{\mathfrak{g}}\oplus\alpha_{\mathfrak{g}}^{\ast})$ is a multiplicative Hom-Lie superalgebra, where $[\cdot ,\cdot ]_{\widetilde{G}}$ is given by Eq.
(\ref{zara}), in which $\rho=ad^{\ast}$ and $\rho'=\mathfrak{ad}^{\ast}.$ \end{thm} \begin{proof} By Theorem \ref{nouveau}, two admissible Hom-Lie superalgebras $(\mathfrak{g}, [\cdot ,\cdot ]_{\mathfrak{g}}, \alpha_{\mathfrak{g}})$ and $(\mathfrak{g}^{\ast}, [\cdot ,\cdot ]_{\mathfrak{g}^{\ast}}, \alpha_{\mathfrak{g}}^{\ast})$ form a matched pair of Hom-Lie superalgebras if and only if \begin{equation}\label{superrr} ad^{\ast}_{\alpha_{\mathfrak{g}}(z)}([\xi, \eta]_{\mathfrak{g}^{\ast}})=[ad^{\ast}_{z}(\xi), \alpha_{\mathfrak{g}}^{\ast}(\eta)]_{\mathfrak{g}^{\ast}}+(-1)^{{|\xi|}{|z|}}[\alpha_{\mathfrak{g}}^{\ast}(\xi), ad^{\ast}_{z}(\eta)]_{\mathfrak{g}^{\ast}} \end{equation} \hspace{6cm} $+(-1)^{{{|\xi|}{|\eta|}}+{{|\eta|}{|z|}}}ad^{\ast}_{\mathfrak{ad}^{\ast}_{\eta}(z)}(\alpha_{\mathfrak{g}}^{\ast}(\xi)) -(-1)^{{|\xi|}{|z|}}ad^{\ast}_{\mathfrak{ad}^{\ast}_{\xi}(z)}(\alpha_{\mathfrak{g}}^{\ast}(\eta)).$\\ \begin{equation}\label{supeddddd} \mathfrak{ad}^{\ast}_{\alpha_{\mathfrak{g}}^{\ast}(\xi)}([x,y]_{\mathfrak{g}})=[\mathfrak{ad}^{\ast}_{\xi}(x), \alpha_{\mathfrak{g}}(y)]_{\mathfrak{g}}+(-1)^{{|x|}{|\xi|}}[\alpha_{\mathfrak{g}}(x), \mathfrak{ad}^{\ast}_{\xi}(y)]_{\mathfrak{g}} \end{equation} \hspace{6cm} $+(-1)^{{{|x|}{|y|}}+{{|y|}{|\xi|}}}\mathfrak{ad}^{\ast}_{ad^{\ast}_{y}(\xi)}(\alpha_{\mathfrak{g}}(x)) -(-1)^{{|x|}{|\xi|}}\mathfrak{ad}^{\ast}_{ad^{\ast}_{x}(\xi)}(\alpha_{\mathfrak{g}}(y)).$\\ Since $(\mathfrak{g}, [\cdot ,\cdot ]_{\mathfrak{g}}, \alpha_{\mathfrak{g}})$ and $(\mathfrak{g}^{\ast}, [\cdot ,\cdot ]_{\mathfrak{g}^{\ast}}, \alpha_{\mathfrak{g}}^{\ast})$ are admissible Hom-Lie superalgebras i.e. ($\alpha_{\mathfrak{g}}^{2}=Id$, and $(\alpha_{\mathfrak{g}}^{\ast})^{2}=Id$), see Lemma 2.9 in \cite{Sheng1}. By Eq. (\ref{supeddddd}), we get \begin{align*} 0&=\langle -\mathfrak{ad}^{\ast}_{\alpha_{\mathfrak{g}}^{\ast}(\xi)}([x,y]_{\mathfrak{g}})+[\mathfrak{ad}^{\ast}_{\xi}(x), \alpha_{\mathfrak{g}}(y)]_{\mathfrak{g}}+(-1)^{{|x|}{|\xi|}}[\alpha_{\mathfrak{g}}(x), \mathfrak{ad}^{\ast}_{\xi}(y)]_{\mathfrak{g}} +(-1)^{{{|x|}{|y|}}+{{|y|}{|\xi|}}}\mathfrak{ad}^{\ast}_{ad^{\ast}_{y}(\xi)}(\alpha_{\mathfrak{g}}(x))\\ &-(-1)^{{|x|}{|\xi|}}\mathfrak{ad}^{\ast}_{ad^{\ast}_{x}(\xi)}(\alpha_{\mathfrak{g}}(y))\textbf{,}\ \eta \rangle\\ &=(-1)^{{|\xi|}({|x|}+{|y|})}\langle [x,y]_{\mathfrak{g}}, [\alpha_{\mathfrak{g}}^{\ast}(\xi), \eta]_{\mathfrak{g}^{\ast}} \rangle-(-1)^{{|y|}({|x|}+{|\xi|})}\langle ad_{\alpha_{\mathfrak{g}}(y)}(\mathfrak{ad}^{\ast}_{\xi}(x)), \eta \rangle +(-1)^{{|x|}{|\xi|}}\langle ad_{\alpha_{\mathfrak{g}}(x)}(\mathfrak{ad}^{\ast}_{\xi}(y)),\eta \rangle\\ &-(-1)^{{|\xi|}({|x|}+{|y|})}\langle\alpha_{\mathfrak{g}}(x), [ad^{\ast}_{y}(\xi), \eta]_{\mathfrak{g}^{\ast}} \rangle +(-1)^{{|x|}{|\xi|}}(-1)^{{|y|}({|x|}+{|\xi|})}\langle\alpha_{\mathfrak{g}}(y), [ad^{\ast}_{x}(\xi), \eta]_{\mathfrak{g}^{\ast}}\rangle\\ &=(-1)^{{|\xi|}({|x|}+{|y|})}\langle [x,y]_{\mathfrak{g}}, [\alpha_{\mathfrak{g}}^{\ast}(\xi), \eta]_{\mathfrak{g}^{\ast}} \rangle-(-1)^{{|x|}{|\xi|}}\langle x, [\xi, ad^{\ast}_{\alpha_{\mathfrak{g}}(y)}(\eta)]_{\mathfrak{g}^{\ast}}\rangle+(-1)^{{|y|}({|x|}+{|\xi|})}\langle y, [\xi, ad^{\ast}_{\alpha_{\mathfrak{g}}(x)}(\eta)]_{\mathfrak{g}^{\ast}}\rangle\\ &-(-1)^{{|\xi|}({|x|}+{|y|})}\langle x, [\alpha_{\mathfrak{g}}^{\ast}(ad^{\ast}_{y}(\xi)), \alpha_{\mathfrak{g}}^{\ast}(\eta)]_{\mathfrak{g}^{\ast}} \rangle +(-1)^{{|x|}{|\xi|}}(-1)^{{|y|}({|x|}+{|\xi|})}\langle y, [\alpha_{\mathfrak{g}}^{\ast}(ad^{\ast}_{x}(\xi)), 
\alpha_{\mathfrak{g}}^{\ast}(\eta)]_{\mathfrak{g}^{\ast}}\rangle, \end{align*} \begin{align*} &=(-1)^{{|\xi|}({|x|}+{|y|})}\langle [x,y]_{\mathfrak{g}}, [\alpha_{\mathfrak{g}}^{\ast}(\xi), \eta]_{\mathfrak{g}^{\ast}} \rangle-(-1)^{{|x|}{|\xi|}}\langle x, [(\alpha_{\mathfrak{g}}^{\ast})^{2}(\xi), ad^{\ast}_{\alpha_{\mathfrak{g}}(y)}(\eta)]_{\mathfrak{g}^{\ast}}\rangle\\ &+(-1)^{{|y|}({|x|}+{|\xi|})}\langle y, [(\alpha_{\mathfrak{g}}^{\ast})^{2}(\xi), ad^{\ast}_{\alpha_{\mathfrak{g}}(x)}(\eta)]_{\mathfrak{g}^{\ast}}\rangle-(-1)^{{|\xi|}({|x|}+{|y|})}\langle x, [ad^{\ast}_{\alpha_{\mathfrak{g}}(y)}(\alpha_{\mathfrak{g}}^{\ast}(\xi)), \alpha_{\mathfrak{g}}^{\ast}(\eta)]_{\mathfrak{g}^{\ast}} \rangle\\ &+(-1)^{{|x|}{|\xi|}}(-1)^{{|y|}({|x|}+{|\xi|})}\langle y, [ad^{\ast}_{\alpha_{\mathfrak{g}}(x)}(\alpha_{\mathfrak{g}}^{\ast}(\xi)), \alpha_{\mathfrak{g}}^{\ast}(\eta)]_{\mathfrak{g}^{\ast}}\rangle\\ &=(-1)^{{|\xi|}({|x|}+{|y|})}\langle \Delta([x,y]_{\mathfrak{g}}), \alpha_{\mathfrak{g}}^{\ast}(\xi)\wedge\eta \rangle-(-1)^{{|x|}{|\xi|}}\langle \Delta(x), (\alpha_{\mathfrak{g}}^{\ast})^{2}(\xi)\wedge ad^{\ast}_{\alpha_{\mathfrak{g}}(y)}(\eta)\rangle\\ &+(-1)^{{|y|}({|x|}+{|\xi|})}\langle\Delta(y), (\alpha_{\mathfrak{g}}^{\ast})^{2}(\xi)\wedge ad^{\ast}_{\alpha_{\mathfrak{g}}(x)}(\eta)\rangle-(-1)^{{|\xi|}({|x|}+{|y|})}\langle \Delta(x), ad^{\ast}_{\alpha_{\mathfrak{g}}(y)}(\alpha_{\mathfrak{g}}^{\ast}(\xi))\wedge \alpha_{\mathfrak{g}}^{\ast}(\eta)\rangle\\ &+(-1)^{{|x|}{|\xi|}}(-1)^{{|y|}({|x|}+{|\xi|})}\langle\Delta(y), ad^{\ast}_{\alpha_{\mathfrak{g}}(x)}(\alpha_{\mathfrak{g}}^{\ast}(\xi))\wedge \alpha_{\mathfrak{g}}^{\ast}(\eta)\rangle, \end{align*} which implies that \begin{align*} \langle\Delta([x,y]_{\mathfrak{g}}), \alpha_{\mathfrak{g}}^{\ast}(\xi)\wedge\eta \rangle&=-(-1)^{{|x|}{|y|}}\langle \Delta(y), ad^{\ast}_{\alpha_{\mathfrak{g}}(x)}(\alpha_{\mathfrak{g}}^{\ast}(\xi))\wedge \alpha_{\mathfrak{g}}^{\ast}(\eta)+(-1)^{{|x|}{|\xi|}}(\alpha_{\mathfrak{g}}^{\ast})^{2}(\xi)\wedge ad^{\ast}_{\alpha_{\mathfrak{g}}(x)}(\eta)\rangle\\ &+\langle\Delta(x),ad^{\ast}_{\alpha_{\mathfrak{g}}(y)}(\alpha_{\mathfrak{g}}^{\ast}(\xi))\wedge \alpha_{\mathfrak{g}}^{\ast}(\eta)+(-1)^{{|y|}{|\xi|}}(\alpha_{\mathfrak{g}}^{\ast})^{2}(\xi)\wedge ad^{\ast}_{\alpha_{\mathfrak{g}}(y)}(\eta)\rangle\\ &=-(-1)^{{|x|}{|y|}}\langle\Delta(y), ad^{\ast}_{\alpha_{\mathfrak{g}}(x)}(\alpha_{\mathfrak{g}}^{\ast}(\xi)\wedge \eta)\rangle+\langle\Delta(x), ad^{\ast}_{\alpha_{\mathfrak{g}}(y)}(\alpha_{\mathfrak{g}}^{\ast}(\xi)\wedge \eta)\rangle\\ &=\langle ad_{\alpha_{\mathfrak{g}}(x)}(\Delta(y)), \alpha_{\mathfrak{g}}^{\ast}(\xi)\wedge\eta \rangle - (-1)^{{|x|}{|y|}} \langle ad_{\alpha_{\mathfrak{g}}(y)}(\Delta(x)), \alpha_{\mathfrak{g}}^{\ast}(\xi)\wedge\eta\rangle , \end{align*} which is exactly Eq. (\ref{sure}). Similarly, one deduces that Eq. (\ref{superrr}) is equivalent to Eq. (\ref{2bia}). \end{proof} Let $V$ be a superspace and $\langle\ , \ \rangle:V^{\ast}\times V\rightarrow \mathbb{K}$ be the canonical pairing. Then we identify $V$ with $V^{\ast}$ by the pairing $\langle x, \xi \rangle=(-1)^{{|x|}{|\xi|}}\langle\xi, x \rangle$, $x\in V$ and $\xi\in V^{\ast}$. On the other hand, we shall say that a bilinear form $(|):V\times V\rightarrow \mathbb{K}$ is supersymmetric if $(\upsilon|\omega)=(-1)^{{|\upsilon|}{|\omega|}}(\omega|\upsilon)$. 
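Concretely, this sign rule says that a supersymmetric bilinear form is symmetric as soon as one of its two arguments is even, and antisymmetric on a pair of odd arguments:
$$(\upsilon|\omega)=(\omega|\upsilon)\ \ \text{if } |\upsilon||\omega|=\bar{0}, \qquad (\upsilon|\omega)=-(\omega|\upsilon)\ \ \text{if } |\upsilon|=|\omega|=\bar{1}.$$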
\begin{definition} A \emph{Manin supertriple of Hom-Lie superalgebras} is a triple of Hom-Lie superalgebras $(\mathfrak{M}, \mathfrak{g}, \mathfrak{g'})$ together with a nondegenerate supersymmetric bilinear form $S$ on $\mathfrak{M}$ such that \begin{enumerate} \item $S$ is invariant, i.e. for any $x, y, z \in\mathfrak{M}$, we have \begin{equation}\label{manin 1} S([x, y]_{\mathfrak{M}}, z)=S(x, [y, z]_{\mathfrak{M}}), \end{equation} \begin{equation}\label{manin 02} S(\phi_{\mathfrak{M}}(x), y)=S(x, \phi_{\mathfrak{M}}(y)). \end{equation} \item $ \mathfrak{g}$ and $\mathfrak{g'}$ are isotropic Hom-Lie sub-superalgebra of $\mathfrak{M}$, such that $\mathfrak{M}=\mathfrak{g}\oplus\mathfrak{g'}$ as vector superspace. \end{enumerate} \end{definition} \begin{proposition}\label{russe} Let $(\mathfrak{g}, \mathfrak{g}^{\ast})$ be a Hom-Lie superbialgebra in the sense of Proposition \ref{ahmedd}. Then $(\mathfrak{g}\oplus\mathfrak{g}^{\ast}, \mathfrak{g}, \mathfrak{g}^{\ast})$ is a Manin supertriple of Hom-Lie superalgebras. \end{proposition} \begin{proof} Let $(\mathfrak{g}, \mathfrak{g}^{\ast})$ be a Hom-Lie superbialgebra in the sense of Proposition \ref{ahmedd}, i.e. $\mathfrak{g}$ and $\mathfrak{g}^{\ast}$ are admissible Hom-Lie superalgebras such that Eqs. (\ref{sure}), (\ref{2bia}) are satisfied. By Theorem \ref{bnbn10}, we know that $(\mathfrak{g}\oplus\mathfrak{g}^{\ast}, [\cdot ,\cdot ]_{\mathfrak{g}\oplus\mathfrak{g}^{\ast}}, \alpha_{\mathfrak{g}}\oplus\alpha_{\mathfrak{g}}^{\ast})$ is a Hom-Lie superalgebra, where $[-, -]_{\mathfrak{g}\oplus\mathfrak{g}^{\ast}}$ is given by \begin{equation}\label{omi} [x+\xi, y+\eta]_{\mathfrak{g}\oplus\mathfrak{g}^{\ast}}=[x, y]_{\mathfrak{g}}+[\xi, \eta]_{\mathfrak{g}^{\ast}}+ad^{\ast}_{x}(\eta)-(-1)^{{|x|}{|y|}}ad^{\ast}_{y}(\xi)+ \mathfrak{ad}^{\ast}_{\xi}(y)-(-1)^{{|x|}{|y|}}\mathfrak{ad}^{\ast}_{\eta}(x), \end{equation} \ \ for all homogeneous elements $x, y$ and $\xi, \eta$ in $\mathfrak{g}$ and $\mathfrak{g}^{\ast}$ respectively. From the construction above we have $|x|=|\xi|$ and $|y|=|\eta|$.\\ Furthermore, there is an obvious supersymmetric bilinear form on $\mathfrak{g}\oplus\mathfrak{g}^{\ast}$:\\ \begin{equation}\label{pose} S(x+\xi, y+\eta)=\langle x, \eta\rangle+\langle \xi, y\rangle=\langle x, \eta\rangle+(-1)^{{|\xi|}{|y|}}\langle y, \xi\rangle. \end{equation} \ \ It's straightforward, using the supersymmetry and $\langle ad_{x}(y), \xi \rangle=-(-1)^{{|x|}{|y|}}\langle y, ad^{\ast}_{x}(\xi)\rangle$, with $|x|=|\xi|$ and $|y|=|\eta|$, that Eqs. (\ref{manin 1}) and (\ref{manin 02}) are satisfied, i.e. the bilinear form defined by Eq. (\ref{pose}) is invariant. \end{proof} Conversely, if $(\mathfrak{g}\oplus{\mathfrak{g}}^{\ast}, \mathfrak{g}, {\mathfrak{g}}^{\ast})$ is a Manin supertriple of Hom-Lie superalgebras with the invariant bilinear from $S$ given by Eq. 
(\ref{pose}), then for any $x, y\in\mathfrak{g}$ and $\xi, \eta\in{\mathfrak{g}}^{\ast}$, we have the natural scalar product on $\mathfrak{g}\oplus{\mathfrak{g}}^{\ast}$ defined by $$S(x, y)=0, \ \ \ S(\xi, \eta)=0, \ \ \ S(x, \xi)=\langle x, \xi \rangle, \ \ \ \ x, y\in\mathfrak{g}, \ \xi, \eta\in{\mathfrak{g}}^{\ast}.$$ Due to the invariance of $S$, we have \begin{align*} S([x, \xi]_{\mathfrak{g}\oplus{\mathfrak{g}}^{\ast}}, y)&=(-1)^{{|y|}({|x|}+{|\xi|})}S(y,[x, \xi]_{\mathfrak{g}\oplus{\mathfrak{g}}^{\ast}})=(-1)^{{|y|}({|x|}+{|\xi|})}S([y, x]_{\mathfrak{g}\oplus{\mathfrak{g}}^{\ast}}, \xi)=(-1)^{{|y|}({|x|}+{|\xi|})}S([y, x]_{\mathfrak{g}},\xi)\\ &=-(-1)^{{|y|}{|\xi|}}\langle ad_{x}(y), \xi\rangle=(-1)^{{|y|}({|x|}+{|\xi|})}\langle y, ad^{\ast}_{x}(\xi)\rangle=\langle ad^{\ast}_{x}(\xi), y\rangle, \end{align*} $S([x, \xi]_{\mathfrak{g}\oplus{\mathfrak{g}}^{\ast}}, \eta)=S(x, [\xi, \eta]_{\mathfrak{g}\oplus{\mathfrak{g}}^{\ast}})=S(x, [\xi, \eta]_{{\mathfrak{g}}^{\ast}})=\langle x, \mathfrak{ad}_{\xi}(\eta)\rangle=-(-1)^{{|x|}{|\xi|}}\langle\mathfrak{ad}^{\ast}_{\xi}(x), \eta\rangle,$\\ which implies that : $$[x, \xi]_{\mathfrak{g}\oplus{\mathfrak{g}}^{\ast}}=ad^{\ast}_{x}(\xi)-(-1)^{{|x|}{|\xi|}}\mathfrak{ad}^{\ast}_{\xi}(x),$$ that is, the Hom-Lie bracket on $\mathfrak{g}\oplus{\mathfrak{g}}^{\ast}$ is given by Eq. (\ref{omi}). Therefore, $(\mathfrak{g}, {\mathfrak{g}}^{\ast}, ad^{\ast}, \mathfrak{ad}^{\ast})$ is a matched pair of Hom-Lie superalgebras and hence $(\mathfrak{g}, {\mathfrak{g}}^{\ast})$ is a Hom-Lie superbialgebra. Note that we deduce naturally that both $\mathfrak{g}$ and $\mathfrak{g}^{\ast}$ are admissible Hom-Lie superalgebras.\\ Summarizing the above study, Theorem \ref{bnbn10} and Proposition \ref{russe}, we have the following conclusion. \begin{thm} Let $(\mathfrak{g}, [\cdot ,\cdot ]_{\mathfrak{g}}, \alpha_{\mathfrak{g}})$ and $(\mathfrak{g}^{\ast}, [\cdot ,\cdot ]_{\mathfrak{g}^{\ast}}, \alpha_{\mathfrak{g}}^{\ast})$ be two admissible Hom-Lie superalgebras. Then the following conditions are equivalent. \begin{enumerate} \item $(\mathfrak{g}, {\mathfrak{g}}^{\ast})$ is a Hom-Lie superbialgebra in the sense of Proposition \ref{ahmedd}. \item $(\mathfrak{g}, {\mathfrak{g}}^{\ast}, ad^{\ast}, \mathfrak{ad}^{\ast})$ is a matched pair of Hom-Lie superalgebras. \item $(\mathfrak{g}\oplus{\mathfrak{g}}^{\ast}, \mathfrak{g}, {\mathfrak{g}}^{\ast})$ is a Manin supertriple of Hom-Lie superalgebras with the invariant bilinear from (\ref{pose}). \end{enumerate} \end{thm} \section{Coboundary and quasi-triangular Hom-Lie superbialgebras} In this section, we define and study coboundary Hom-Lie superbialgebras and quasi-triangular Hom-Lie superbialgebras. Then we show how a coboundary or a quasi-triangular Hom-Lie superbialgebra can be constructed from a Hom-Lie superalgebra and an $r$-matrix. \begin{definition} A \emph{(multiplicative) coboundary Hom-Lie superbialgebra} $(\mathfrak{L}, [\cdot ,\cdot ], \Delta, \alpha, r)$ consists of a (multiplicative) Hom-Lie superbialgebra $(\mathfrak{L}, [\cdot ,\cdot ], \Delta, \alpha)$ and an element $r=\sum r_{1}\otimes r_{2}\in\mathfrak{L}^{\otimes2}$ such that $\alpha^{\otimes2}(r)=r$ and \begin{equation}\label{888} \Delta(x)=ad_{x}(r)=\sum [x, r_{1}]\otimes\alpha(r_{2})+(-1)^{{|x|}{|r_{1}|}}\alpha(r_{1})\otimes[x, r_{2}] \end{equation} for all $x\in\mathfrak{L}$. 
\end{definition} \begin{remark} A priori, for a homogeneous element $r$, the cobracket should be written as \begin{equation}\label{777} \Delta(x)=(-1)^{{|x|}{|r|}}(\sum [x, r_{1}]\otimes\alpha(r_{2})+(-1)^{{|x|}{| r_{1}|}}\alpha(r_{1})\otimes[x, r_{2}]) \end{equation} for $x\in\mathfrak{L}$, where the parity $|r|$ of $r$ is defined as follows: since we assume $r$ is homogeneous, there exists $|r|\in\mathbb{Z}_{2}$ such that $r$ can be written as $r=\sum r_{1}\otimes r_{2}\in\mathfrak{L}^{\otimes2}$, where $r_{1}, r_{2}$ are homogeneous elements with $|r|=|r_{1}|+|r_{2}|$. (Note that equations (\ref{777}) and (\ref{2007}) show that $|r|=\bar{0}$, namely $|r_{1}|=|r_{2}|$.) So we get (\ref{888}). \end{remark} \begin{definition} The classical Yang-Baxter equation (CYBE) is\\ $$c(r)=[r_{12}, r_{13}]+[r_{12}, r_{23}]+[r_{13}, r_{23}]=0,$$ where the $r_{ij}$ are defined by \begin{eqnarray}\label{10} && r_{12}=\sum r_{1}\otimes r_{2}\otimes1=r\otimes1, \\ \nonumber && r_{13}=\sum r_{1}\otimes1 \otimes r_{2}=(1\otimes\tau)(r\otimes1)=(\tau\otimes1)(1\otimes r),\\ \nonumber && r_{23}=\sum 1\otimes r_{1}\otimes r_{2}=1\otimes r \end{eqnarray} and are considered as elements of ${U}(\mathfrak{L})^{\otimes3}$, where ${U}(\mathfrak{L})$ is the universal enveloping algebra of the Lie superalgebra $\mathfrak{L}$; the three brackets $[r_{ij}, r_{kl}]$ then belong to $\mathfrak{L}\otimes\mathfrak{L}\otimes\mathfrak{L}$.\\ \end{definition} \begin{definition} The classical Hom-Yang-Baxter equation (CHYBE) in a Hom-Lie superalgebra $(\mathfrak{L}, [\cdot ,\cdot ], \alpha)$ is \begin{equation}\label{1000} [[r,r]]^{\alpha}=[r_{12}, r_{13}]+[r_{12}, r_{23}]+[r_{13}, r_{23}]=0 \end{equation} for $r\in\mathfrak{L}\otimes\mathfrak{L}$. The three brackets in (\ref{1000}) are defined as \\ \begin{eqnarray}\label{cvc} && [r_{12},r'_{13}]=\sum (-1)^{{| r'_{1}|}{|r_{2}|}}[r_{1}, r'_{1}]\otimes \alpha(r_{2})\otimes \alpha(r'_{2}), \\ \nonumber && [r_{12},r'_{23}]=\sum \alpha(r_{1})\otimes [r_{2},r'_{1}]\otimes \alpha(r'_{2}), \\ \nonumber && [r_{13},r'_{23}]=\sum (-1)^{{|r'_{1}|}{| r_{2}|}} \alpha(r_{1}) \otimes \alpha(r'_{1}) \otimes [r_{2}, r'_{2}], \end{eqnarray} where $r=\sum r_{1}\otimes r_{2}$ and $r'=\sum r'_{1}\otimes r'_{2}$ $\in\mathfrak{L}\otimes\mathfrak{L}$. If $\alpha=Id$, then the CHYBE reduces to the CYBE. \end{definition} \begin{definition} A \emph{(multiplicative) quasi-triangular Hom-Lie superbialgebra} is a (multiplicative) coboundary Hom-Lie superbialgebra in which $r$ is a solution of the CHYBE (\ref{1000}). In these cases, we also write $\Delta$ as $ad(r)$. \end{definition} \begin{remark} Note that we do not require $r$ to be skew-supersymmetric in a coboundary Hom-Lie superbialgebra, whereas in \cite{Drinfel'd V.G.4} $r$ is assumed to be skew-supersymmetric in a coboundary Lie bialgebra. We follow the convention in \cite{Majid} and \cite{Yau2}. \end{remark} \begin{remark}\label{masr} Condition \eqref{888} is natural because, by Remark \ref{2000}, the compatibility condition (\ref{a}) in a Hom-Lie superbialgebra $\mathfrak{L}$ says that the cobracket $\Delta$ is a 1-cocycle in $ C^1(\mathfrak{L},\mathfrak{L}^{\otimes2})$, where $\mathfrak{L}$ acts on $\mathfrak{L}^{\otimes2}$ via the $\alpha$-twisted adjoint action (\ref{2001}). The simplest 1-cocycles are the 1-coboundaries, i.e., images of $\delta^{0}_{HL}$. We can define the Hom-Lie 0-cochains and the 0th differential as follows, extending the definition in \cite{MakhloufSilvestrov3}. Set $C^0(\mathfrak{L},\mathfrak{L}^{\otimes2})$ as the subspace of $\mathfrak{L}^{\otimes2}$ consisting of elements that are fixed by $\alpha^{\otimes2}$.
Then we define the differential $$\delta^{0}_{HL}:C^0(\mathfrak{L},\mathfrak{L}^{\otimes2})\rightarrow C^1(\mathfrak{L},\mathfrak{L}^{\otimes2})$$ by setting $\delta^{0}_{HL}(r)=ad(r),$ as in (\ref{action}). It is not hard to check that, for $r\in C^0(\mathfrak{L},\mathfrak{L}^{\otimes2})$, we have $\delta^{1}_{HL}(\delta^{0}_{HL}(r))=0,$ where $\delta^{1}_{HL}$ is defined in (\ref{fff}). In fact, what this condition says is that \begin{eqnarray}\label{rrrr} && 0=\delta^{1}_{HL}(\delta^{0}_{HL}(r))(x, y) \\ \nonumber && \ \ = \delta^{1}_{HL}(ad(r))(x, y) =ad_{[x, y]}(r)-ad_{\alpha(x)}(ad_{y}(r))+(-1)^{{|x|}{|y|}}ad_{\alpha(y)}(ad_{x}(r)) \end{eqnarray} for all $x, y\in\mathfrak{L}$. We will prove (\ref{rrrr}) in Lemma \ref{compatiblite} below. Thus, such an element $\delta^{0}_{HL}(r)=ad(r)$ is a 1-coboundary, and hence a 1-cocycle. This fact makes $ad(r)$ (with $\alpha^{\otimes2}(r)=r$) a natural candidate for the cobracket in a Hom-Lie superbialgebra and also justifies the name coboundary Hom-Lie superbialgebra. \end{remark} The following result is the analogue of Theorem \ref{aaa} for coboundary or quasi-triangular Hom-Lie superbialgebras. It says that coboundary or quasi-triangular Hom-Lie superbialgebras deform into other coboundary or quasi-triangular Hom-Lie superbialgebras via suitable endomorphisms. \begin{thm}\label{zz} Let $(\mathfrak{L}, [\cdot ,\cdot ], \Delta=ad(r), \alpha, r)$ be a coboundary Hom-Lie superbialgebra and let $\beta:\mathfrak{L}\rightarrow \mathfrak{L}$ be an even Hom-Lie superbialgebra morphism such that $\beta^{\otimes2}(r)=r$. Then $\mathfrak{L}_{\beta}=(\mathfrak{L}, [\cdot ,\cdot ]_{\beta}=\beta\circ[\cdot ,\cdot ], \Delta_{\beta}=\Delta\circ\beta,\beta\alpha, r)$ is also a coboundary Hom-Lie superbialgebra, which is multiplicative if $\mathfrak{L}$ is. Moreover, if $\mathfrak{L}$ is quasi-triangular, then so is $\mathfrak{L}_{\beta}$. \end{thm} \begin{proof} By Theorem \ref{aaa} we know that $\mathfrak{L}_{\beta}$ is a Hom-Lie superbialgebra, multiplicative if $\mathfrak{L}$ is. To check that $\mathfrak{L}_{\beta}$ is coboundary, first note that $(\beta\alpha)^{\otimes2}(r)=\beta^{\otimes2}\alpha^{\otimes2}(r) = r.$ To check the condition (\ref{888}) in $\mathfrak{L}_{\beta}$, we compute as follows: \begin{align*} \Delta_{\beta}(x)&=\beta^{\otimes2}(\Delta(x)) =\beta^{\otimes2}([x, r_{1}]\otimes\alpha(r_{2}))+\beta^{\otimes2}((-1)^{{|x|}{| r_{1}|}}\alpha(r_{1})\otimes[x, r_{2}])\\ &=[x, r_{1}]_{\beta}\otimes\beta\alpha(r_{2})+(-1)^{{|x|}{| r_{1}|}}\beta\alpha(r_{1})\otimes[x, r_{2}]_{\beta}. \end{align*} The last expression above is $ad_{x}(r)$ in $\mathfrak{L}_{\beta}$, which shows that $\mathfrak{L}_{\beta}$ is coboundary.\\ Finally, suppose in addition that $\mathfrak{L}$ is quasi-triangular, i.e., $r$ is a solution of the CHYBE in $\mathfrak{L}$. Using the notation in (\ref{1000}) we have: \begin{align*} 0=\beta^{\otimes3}([r_{12}, r_{13}]+[r_{12}, r_{23}]+[r_{13}, r_{23}]) =[r_{12}, r_{13}]_{\beta}+[r_{12}, r_{23}]_{\beta}+[r_{13}, r_{23}]_{\beta}, \end{align*} where the last expression is defined in $\mathfrak{L}_{\beta}$. This shows that $r$ is a solution of the CHYBE in $\mathfrak{L}_{\beta}$, so $\mathfrak{L}_{\beta}$ is quasi-triangular. \end{proof} The following result is the analogue of Corollary \ref{aaaa} for coboundary or quasi-triangular Hom-Lie superbialgebras. It says that these objects can be obtained by twisting coboundary or quasi-triangular Lie superbialgebras via suitable endomorphisms.
\begin{cor} Let $(\mathfrak{L}, [\cdot ,\cdot ], \Delta, r)$ be a coboundary Lie superbialgebra and let $\beta:\mathfrak{L}\rightarrow \mathfrak{L}$ be an even Lie superalgebra morphism such that $\beta^{\otimes2}(r)=r$. Then $\mathfrak{L}_{\beta}=(\mathfrak{L}, [\cdot ,\cdot ]_{\beta}=\beta\circ[\cdot ,\cdot ], \Delta_{\beta}=\Delta\circ\beta,\beta, r)$ is a multiplicative coboundary Hom-Lie superbialgebra. If, in addition, $\mathfrak{L}$ is a quasi-triangular Lie superbialgebra, then $\mathfrak{L}_{\beta}$ is a multiplicative quasi-triangular Hom-Lie superbialgebra. \end{cor} \begin{proof} This is the $\alpha=Id$ special case of Theorem \ref{zz}, provided that we can show that\\ $\Delta\circ\beta=\beta^{\otimes2}\circ\Delta$. We compute as follows: \begin{align*} \beta^{\otimes2}(\Delta(x))&=\beta^{\otimes2}(ad_{x}(r)) =\beta^{\otimes2}([x, r_{1}]\otimes r_{2})+\beta^{\otimes2}((-1)^{{|x|}{|r_{1}|}}r_{1}\otimes[x, r_{2}])\\ &=[\beta(x), \beta(r_{1})]\otimes\beta(r_{2})+(-1)^{{|x|}{|r_{1}|}}\beta(r_{1})\otimes[\beta(x), \beta(r_{2})]\\ &=[\beta(x), r_{1}]\otimes r_{2}+(-1)^{{|x|}{|r_{1}|}}r_{1}\otimes[\beta(x), r_{2}] =ad_{\beta(x)}(r) =\Delta(\beta(x)). \end{align*} \end{proof} The next result says that every multiplicative coboundary or quasi-triangular Hom-Lie superbialgebra gives rise to an infinite sequence of multiplicative coboundary or quasi-triangular Hom-Lie superbialgebras. It is similar to Corollary \ref{www}. \begin{cor} Let $(\mathfrak{L}, [\cdot ,\cdot ], \Delta, \alpha, r)$ be a multiplicative coboundary (resp. quasi-triangular) Hom-Lie superbialgebra. Then $\mathfrak{L}_{\alpha^{n}}=(\mathfrak{L}, [\cdot ,\cdot ]_{\alpha^{n}}=\alpha^{n}\circ[\cdot ,\cdot ], \Delta_{\alpha^{n}}=\Delta\circ\alpha^{n},\alpha^{n+1}, r)$ is also a multiplicative coboundary (resp. quasi-triangular) Hom-Lie superbialgebra for each integer $n\geq0$. \end{cor} \begin{proof} This is the $\beta=\alpha^{n}$ special case of Theorem \ref{zz}. \end{proof} In the following result, we describe sufficient conditions under which a Hom-Lie superalgebra becomes a coboundary Hom-Lie superbialgebra.\\ In what follows, for an element $r=\sum r_{1}\otimes r_{2}$, we write $r_{21}$ for $\tau(r)=\sum (-1)^{{|r_{1}|}{|r_{2}|}}r_{2}\otimes r_{1}.$ \begin{lem}\label{compatiblite} Let $(\mathfrak{L}, [\cdot ,\cdot ], \alpha)$ be a multiplicative Hom-Lie superalgebra and $r\in\mathfrak{L}^{\otimes2}$ be an element such that $\alpha^{\otimes2}(r)=r$, (\ref{2007}) and (\ref{777}), ((\ref{777})\ and\ (\ref{2007}),\ i.e., $|r|=\bar{0})$. Then $\Delta=ad(r):\mathfrak{L}\rightarrow\mathfrak{L}^{\otimes2}$ satisfies (\ref{a}), i.e., $ad_{[x, y]}(r)=ad_{\alpha(x)}(ad_{y}(r))-(-1)^{{|x|}{|y|}}ad_{\alpha(y)}(ad_{x}(r))$ for $x, y\in\mathfrak{L}$. \end{lem} \begin{proof} We will use $\alpha^{\otimes2}(r)=r$, the skew-supersymmetry (\ref{701}), the Hom-Jacobi identity (\ref{702}) and the multiplicativity $\alpha([x,y])=[\alpha(x),\alpha(y)]$ in the computation below.
For $x, y \in\mathfrak{L}$, we have: \begin{align*} ad_{[x, y]}(r)&=[[x, y], r_{1}]\otimes\alpha(r_{2})+(-1)^{{|[x, y]|}{|r_{1}|}}\alpha(r_{1})\otimes[[x, y], r_{2}]\\ &=[[x, y], \alpha(r_{1})]\otimes\alpha^{2}(r_{2})+(-1)^{{|x|}{|r_{1}|}}(-1)^{{|y|}{|r_{1}|}}\alpha^{2}(r_{1})\otimes[[x, y], \alpha(r_{2})]\\ &=\textbf{(}[\alpha(x), [y,r_{1}]]+(-1)^{{|x|}{|y|}}(-1)^{{|x|}{|r_{1}|}}[\alpha(y), [r_{1}, x]]\textbf{)}\otimes\alpha^{2}(r_{2})\\ &+(-1)^{{|x|}{|r_{1}|}}(-1)^{{|y|}{|r_{1}|}}\alpha^{2}(r_{1})\otimes\textbf{(}[\alpha(x), [y,r_{2}]]+(-1)^{{|x|}{|y|}}(-1)^{{|x|}{|r_{2}|}}[\alpha(y), [r_{2}, x]]\textbf{)}\\ &=[\alpha(x), [y,r_{1}]]\otimes\alpha^{2}(r_{2})+(-1)^{{|x|}{|y|}}(-1)^{{|x|}{|r_{1}|}}[\alpha(y), [r_{1}, x]]\otimes\alpha^{2}(r_{2})\\ &+(-1)^{{|x|}{|r_{1}|}}(-1)^{{|y|}{|r_{1}|}}\alpha^{2}(r_{1})\otimes[\alpha(x), [y,r_{2}]]+(-1)^{{|x|}{|y|}}(-1)^{{|y|}{|r_{1}|}}\alpha^{2}(r_{1})\otimes[\alpha(y), [r_{2}, x]]\\ &=[\alpha(x), [y,r_{1}]]\otimes\alpha^{2}(r_{2})+(-1)^{{|x|}{|y|}}(-1)^{{|x|}{|r_{1}|}}\alpha([y,r_{1}])\otimes [\alpha(x), \alpha(r_{2})]\\ &+(-1)^{{|y|}{|r_{1}|}}[\alpha(x), \alpha(r_{1})]\otimes\alpha([y,r_{2}])+(-1)^{{|x|}{|r_{1}|}}(-1)^{{|y|}{|r_{1}|}}\alpha^{2}(r_{1})\otimes[\alpha(x), [y,r_{2}]]\\ &-(-1)^{{|x|}{|y|}}[\alpha(y), [x,r_{1}]]\otimes\alpha^{2}(r_{2})-(-1)^{{|y|}{|r_{1}|}}\alpha([x, r_{1}])\otimes[\alpha(y), \alpha(r_{2})]\\ &-(-1)^{{|x|}{|y|}}(-1)^{{|x|}{|r_{1}|}}[\alpha(y), \alpha(r_{1})]\otimes\alpha([x, r_{2}])-(-1)^{{|x|}{|y|}}(-1)^{{|y|}{|r_{1}|}+{|x|}{|r_{1}|}}\alpha^{2}(r_{1})\otimes[\alpha(y), [x,r_{2}]]\\ &=ad_{\alpha(x)}\textbf{(}[y,r_{1}]\otimes\alpha(r_{2})+(-1)^{{|y|}{|r_{1}|}}\alpha(r_{1})\otimes[y, r_{2}]\textbf{)}\\ &-(-1)^{{|x|}{|y|}}ad_{\alpha(y)}\textbf{(}[x,r_{1}]\otimes\alpha(r_{2})+(-1)^{{|x|}{|r_{1}|}}\alpha(r_{1})\otimes[x, r_{2}]\textbf{)}\\ &=ad_{\alpha(x)}(ad_{y}(r))-(-1)^{{|x|}{|y|}}ad_{\alpha(y)}(ad_{x}(r)). \end{align*} \end{proof} \begin{thm}\label{Lie} Let $(\mathfrak{L}, [\cdot ,\cdot ], \alpha)$ be a multiplicative Hom-Lie superalgebra and $r\in\mathfrak{L}^{\otimes2}$ be an element such that $\alpha^{\otimes2}(r)=r, \ \ r_{21}=-r,$\ (\ref{2007}) and (\ref{777}), ((\ref{777})\ and\ (\ref{2007}),\ i.e., $|r|=\bar{0})$, and \begin{equation}\label{ppp} \alpha^{\otimes3}(ad_{x}([[r, r]]^{\alpha}))=0 \end{equation} for all $x\in\mathfrak{L}$, where $[[r, r]]^{\alpha}$ is defined in (\ref{1000}). Define $\Delta:\mathfrak{L}\rightarrow\mathfrak{L}^{\otimes2}$ as $\Delta(x)=ad_{x}(r)$ as in (\ref{888}).\\ Then $(\mathfrak{L}, [\cdot ,\cdot ], \Delta, \alpha, r)$ is a multiplicative coboundary Hom-Lie superbialgebra. \end{thm} \begin{proof} We will show the following statements:\\ 1) $\Delta=ad(r)$ commutes with $\alpha$.\\ 2) $\Delta$ is skew-supersymmetric.\\ 3) The compatibility condition (\ref{a}) holds.\\ 4) The condition (\ref{ppp}) is equivalent Hom-super-coJacobi identity of $\Delta$ (\ref{jacobi}).\\ Write $r=\sum r_{1}\otimes r_{2}$, $r'=\sum r'_{1}\otimes r'_{2}$ and $\tau(r)=\sum (-1)^{{|r_{1}|}{|r_{2}|}}r_{2}\otimes r_{1}$, $\tau(r')=\sum (-1)^{{|r'_{1}|}{|r'_{2}|}}r'_{2}\otimes r'_{1}$. To show that $\Delta=ad(r)$ commutes with $\alpha$, pick an element $x\in\mathfrak{L}$, the summation sign will often be omitted in computation to simplify the typography. 
Using the definition $\Delta=ad(r)$, $\alpha([x, r_{1}])=[\alpha(x), \alpha(r_{1})]$, $\alpha([x, r_{2}])=[\alpha(x), \alpha(r_{2})]$ and the assumption $\alpha^{\otimes2}(r)=r$ we have \begin{align*} \Delta(\alpha(x))&=[\alpha(x), r_{1}]\otimes\alpha(r_{2})+(-1)^{{|x|}{|r_{1}|}}\alpha(r_{1})\otimes[\alpha(x), r_{2}]\\ &=\alpha([x, r_{1}])\otimes\alpha^{2}(r_{2})+(-1)^{{|x|}{|r_{1}|}}\alpha^{2}(r_{1})\otimes\alpha([x, r_{2}]) =\alpha^{\otimes2}(\Delta(x)). \end{align*} This shows that $\Delta$ commutes with $\alpha$.\\ Now we show that $\Delta=ad(r)$ is skew-supersymmetric. We have \begin{align*} \Delta(x)&=[x, r_{1}]\otimes\alpha(r_{2})+(-1)^{{|x|}{|r_{1}|}}\alpha(r_{1})\otimes[x, r_{2}].\\ \tau(\Delta(x))&=(-1)^{{|r_{1}|}{|r_{2}|}}([x, r_{2}]\otimes\alpha(r_{1})+(-1)^{{|x|}{|r_{2}|}}\alpha(r_{2})\otimes[x, r_{1}]). \end{align*} Then $\Delta(x)+\tau(\Delta(x))=ad_{x}(r+r_{21})=ad_{x}(0)=0,$ since \begin{align*} r+r_{21}&=\sum(r_{1}\otimes r_{2}+(-1)^{{|r_{1}|}{|r_{2}|}}r_{2}\otimes r_{1}). \end{align*} We will prove that the compatibility condition (\ref{a}) holds in Lemma \ref{compatiblite}.\\ Finally, we show that the Hom-super-coJacobi identity (\ref{jacobi}) of $\Delta=ad(r)$ is equivalent to (\ref{ppp}). Let us unwrap the Hom-super-coJacobi identity. Fix an element $x\in\mathfrak{L}$, and let $r'=\sum r'_{1}\otimes r'_{2}$ be another copy of $r$. Then we write \begin{align*} \omega&=(\alpha\otimes\Delta)(\Delta(x))\\ &=(\alpha\otimes\Delta)([x, r_{1}]\otimes\alpha(r_{2})+(-1)^{{|x|}{|r_{1}|}}\alpha(r_{1})\otimes[x, r_{2}])\\ &=\alpha([x, r_{1}])\otimes[\alpha( r_{2}), r'_{1}]\otimes\alpha( r'_{2})+(-1)^{{|r_{2}|}{|r'_{1}|}}\alpha([x, r_{1}])\otimes\alpha(r'_{1})\otimes[\alpha( r_{2}), r'_{2}]\\ &+(-1)^{{|x|}{|r_{1}|}}\alpha^2(r_{1})\otimes[[x, r_{2}], r'_{1}]\otimes\alpha(r'_{2})+(-1)^{{|x|}{|r_{1}|}}(-1)^{{|x|}{|r'_{1}|}}(-1)^{{|r_{2}|}{|r'_{1}|}}\alpha^2(r_{1})\otimes\alpha(r'_{1})\otimes[[x, r_{2}], r'_{2}]. \end{align*} Note \begin{align*} A_{1}&=\alpha([x, r_{1}])\otimes[\alpha( r_{2}), r'_{1}]\otimes\alpha( r'_{2}),\\ B_{1}&=(-1)^{{|r_{2}|}{|r'_{1}|}}\alpha([x, r_{1}])\otimes\alpha(r'_{1})\otimes[\alpha( r_{2}), r'_{2}],\\ C_{1}&=(-1)^{{|x|}{|r_{1}|}}\alpha^2(r_{1})\otimes[[x, r_{2}], r'_{1}]\otimes\alpha(r'_{2}),\\ D_{1}&=(-1)^{{|x|}{|r_{1}|}}(-1)^{{|x|}{|r'_{1}|}}(-1)^{{|r_{2}|}{|r'_{1}|}}\alpha^2(r_{1})\otimes\alpha(r'_{1})\otimes[[x, r_{2}], r'_{2}]. \end{align*} we get $ \omega=A_{1}+B_{1}+C_{1}+D_{1}. $ With these notations, the Hom-super-coJacobi identity of $\Delta=ad(r)$ (applied to $x$) becomes \begin{equation}\label{anis3} (1\otimes1\otimes1+\xi+\xi^2)\circ(\alpha\otimes\Delta)\circ\Delta(x) = \sum_{i=1}^{3}(A_{i}+B_{i}+C_{i}+D_{i})=0. \end{equation} Therefore, to prove the equivalence between the Hom-super-coJacobi identity of $\Delta$ and (\ref{ppp}), it suffices to show \begin{equation}\label{Hom} \alpha^{\otimes3}(ad_{x}([[r, r]]^{\alpha}))=\sum_{i=1}^{3}(A_{i}+B_{i}+C_{i}+D_{i}), \end{equation} which we will prove in Lemma \ref{xyz0} below.\\ The proof of Theorem \ref{Lie} will be completed once we prove the Lemma below. \end{proof} \begin{lem}\label{xyz0} The condition (\ref{Hom}) holds. 
\end{lem} \begin{proof} It suffices to show the following three equalities:\\ \begin{eqnarray}\label{102030} && \alpha^{\otimes3}(ad_{x}([r_{12}, r_{13}]))=A_{3}+B_{2}+C_{3}+D_{2}, \end{eqnarray} \begin{eqnarray}\label{anis1} && \alpha^{\otimes3}(ad_{x}([r_{12}, r_{23}]))=A_{1}+B_{3}+C_{1}+D_{3}, \end{eqnarray} \begin{eqnarray}\label{anis2} && \alpha^{\otimes3}(ad_{x}([r_{13}, r_{23}]))=A_{2}+B_{1}+C_{2}+D_{1}, \end{eqnarray} where the three brackets, which add up to $[[r,r]]^{\alpha}$, are defined in (\ref{cvc}). The proofs of the three equalities are very similar, so we will only give the proof of (\ref{102030}).\\ Since $r=r'$ and $r_{21}=-r$, we have \begin{eqnarray} && A_{3}=(-1)^{{|x|}{|r_{2}|}}(-1)^{{|r'_{1}|}{|r_{1}|}}[\alpha(r_{2}), r'_{1}]\otimes\alpha(r'_{2})\otimes\alpha([x, r_{1}]) \\ \nonumber && \ \ \ \ =(-1)^{{|x|}{|r_{2}|}}(-1)^{{|r_{1}|}{|r'_{1}|}}[\alpha(r'_{2}), r_{1}]\otimes\alpha(r_{2})\otimes\alpha([x, r'_{1}]) \\ \nonumber && \ \ \ \ =-(-1)^{{|x|}{|r_{2}|}}[\alpha(r'_{1}), r_{1}]\otimes\alpha(r_{2})\otimes\alpha([x, r'_{2}])\\ \nonumber && \ \ \ \ =-(-1)^{{|x|}{|r_{2}|}}[\alpha^2(r'_{1}), \alpha^2(r_{1})]\otimes\alpha^3(r_{2})\otimes\alpha([x, \alpha(r'_{2})])\\ \nonumber && \ \ \ \ =\alpha^{\otimes3}\textbf{(}(-1)^{{|x|}{|r_{2}|}}(-1)^{{|r'_{1}|}{|r_{2}|}}\alpha([r_{1}, r'_{1}])\otimes\alpha^2(r_{2})\otimes[x, \alpha(r'_{2})]\textbf{)}. \end{eqnarray} In the fourth equality we used $\alpha^{\otimes2}(r)=r$, $\alpha^{\otimes2}(r')=r'$. In the last equality we used the skew-supersymmetry of $[-, -]$ and $\alpha([r_{1}, r'_{1}])=[\alpha(r_{1}), \alpha(r'_{1})]$; here $(-1)^{{|x|}{|r_{1}|}}(-1)^{{|x|}{|r'_{1}|}}=1$, which can be used to check the signs. Similar computations give \begin{eqnarray*} && B_{2}=(-1)^{{|r'_{1}|}{|r_{1}|}}[\alpha(r_{2}), r'_{2}]\otimes\alpha([x, r_{1}])\otimes\alpha(r'_{1}) \\ \nonumber &&=\alpha^{\otimes3}\textbf{(}(-1)^{{|x|}{|r_{1}|}}(-1)^{{|x|}{|r'_{1}|}}(-1)^{{|r'_{1}|}{|r_{2}|}}\alpha([r_{1}, r'_{1}])\otimes[x, \alpha(r_{2})]\otimes\alpha^2(r'_{2})\textbf{)}, \end{eqnarray*} \begin{eqnarray*} C_{3}=(-1)^{{|r'_{1}|}{|r_{2}|}}[[x, r_{2}], r'_{1}]\otimes\alpha(r'_{2})\otimes\alpha^2(r_{1}) =(-1)^{{|x|}{|r'_{2}|}}[[r'_{2}, x], \alpha(r_{2})]\otimes\alpha^2(r_{1})\otimes\alpha^2(r'_{1}), \end{eqnarray*} \begin{eqnarray*} D_{2}=(-1)^{{|r'_{1}|}{|r_{2}|}}[[x, r_{2}], r'_{2}]\otimes\alpha^2(r_{1})\otimes\alpha(r'_{1}) =(-1)^{{|r'_{1}|}{|r_{2}|}}[[x, r_{2}], \alpha(r'_{2})]\otimes\alpha^2(r_{1})\otimes\alpha^2(r'_{1}). \end{eqnarray*} Using, in addition, the skew-supersymmetry (\ref{701}) and the Hom-Jacobi identity (\ref{702}) of $[-, -]$, we add $C_{3}$ and $D_{2}$: \begin{align*} C_{3}+D_{2}&=(-1)^{{|r'_{1}|}{|r_{2}|}}\textbf{(}(-1)^{{|x|}{|r'_{2}|}}(-1)^{{|r_{2}|}{|r'_{2}|}}[[r'_{2}, x], \alpha(r_{2})]+[[x, r_{2}], \alpha(r'_{2})]\textbf{)}\otimes\alpha^2(r_{1})\otimes\alpha^2(r'_{1})\\ &=(-1)^{{|r'_{1}|}{|r_{2}|}}[\alpha(x),[r_{2}, r'_{2}]]\otimes\alpha^2(r_{1})\otimes\alpha^2(r'_{1})\\ &=(-1)^{{|r'_{1}|}{|r_{2}|}}[\alpha(x),[r_{1}, r'_{1}]]\otimes\alpha^2(r_{2})\otimes\alpha^2(r'_{2})\\ &=(-1)^{{|r'_{1}|}{|r_{2}|}}[\alpha(x),[\alpha(r_{1}), \alpha(r'_{1})]]\otimes\alpha^3(r_{2})\otimes\alpha^3(r'_{2})\\ &=\alpha^{\otimes3}\textbf{(}(-1)^{{|r'_{1}|}{|r_{2}|}}[x, [r_{1}, r'_{1}]]\otimes\alpha^2(r_{2})\otimes\alpha^2(r'_{2})\textbf{)}.
\end{align*} Using the definition (\ref{action}) of $ad_{x}$, we now conclude that: \begin{align*} A_{3}+B_{2}+C_{3}+D_{2}&=\alpha^{\otimes3}\textbf{(}(-1)^{{|x|}{|r_{2}|}}(-1)^{{|r'_{1}|}{|r_{2}|}}\alpha([r_{1}, r'_{1}])\otimes\alpha^2(r_{2})\otimes[x, \alpha(r'_{2})]\textbf{)}\\ &+\alpha^{\otimes3}\textbf{(}(-1)^{{|x|}{|r_{1}|}}(-1)^{{|x|}{|r'_{1}|}}(-1)^{{|r'_{1}|}{|r_{2}|}}\alpha([r_{1}, r'_{1}])\otimes[x, \alpha(r_{2})]\otimes\alpha^2(r'_{2})\textbf{)}\\ &+\alpha^{\otimes3}\textbf{(}(-1)^{{|r'_{1}|}{|r_{2}|}}[x, [r_{1}, r'_{1}]]\otimes\alpha^2(r_{2})\otimes\alpha^2(r'_{2})\textbf{)}\\ &=\alpha^{\otimes3}\textbf{(}(-1)^{{|r'_{1}|}{|r_{2}|}}ad_{x}([r_{1}, r'_{1}]\otimes\alpha(r_{2})\otimes\alpha(r'_{2}))\textbf{)}\\ &=\alpha^{\otimes3}\textbf{(}ad_{x}((-1)^{{|r'_{1}|}{|r_{2}|}}[r_{1}, r'_{1}]\otimes\alpha(r_{2})\otimes\alpha(r'_{2}))\textbf{)}\\ &=\alpha^{\otimes3}\textbf{(}ad_{x}([r_{12},r_{13}])\textbf{)}. \end{align*} This proves (\ref{102030}).\\ The equalities (\ref{anis1}) and (\ref{anis2}) are proved by very similar computations.\\ Therefore, the equality (\ref{Hom}) holds. Together with (\ref{anis3}), we have shown that the Hom-super-coJacobi identity of $\Delta=ad(r)$ is equivalent to $\alpha^{\otimes3}(ad_{x}([[r, r]]^{\alpha}))=0$. \end{proof} The following result is an immediate consequence of Theorem \ref{Lie}. It gives sufficient conditions under which a Hom-Lie superalgebra becomes a quasi-triangular Hom-Lie superbialgebra. \begin{cor} Let $(\mathfrak{L}, [\cdot ,\cdot ], \alpha)$ be a multiplicative Hom-Lie superalgebra and $r\in\mathfrak{L}^{\otimes2}$ be an element such that $\alpha^{\otimes2}(r)=r, \ \ r_{21}=-r,$\ (\ref{2007}) and (\ref{777}), ((\ref{777})\ and\ (\ref{2007}),\ i.e., $|r|=\bar{0})$, and $[[r, r]]^{\alpha}=0.$ Then $(\mathfrak{L}, [\cdot ,\cdot ], ad(r), \alpha, r)$ is a multiplicative quasi-triangular Hom-Lie superbialgebra. \end{cor} \begin{thm} Let $(\mathfrak{L}, [\cdot ,\cdot ], \Delta, \alpha, r)$ be a coboundary Hom-Lie superbialgebra. Then the following statements are equivalent:\\ (1) $\mathfrak{L}$ is a quasi-triangular Hom-Lie superbialgebra, i.e., $[[r,r]]^{\alpha}=0$ (\ref{1000}).\\ (2) The equality $(\alpha\otimes\Delta)(r)=-[r_{12}, r_{13}]$ holds, where the bracket is defined in (\ref{cvc}).\\ (3) The equality $(\Delta\otimes\alpha)(r)=[r_{13}, r_{23}]$ holds, where the bracket is defined in (\ref{cvc}). \end{thm} \begin{proof} The equivalence of the three statements follows from the two equalities computed below. Let $r'=\sum r'_{1}\otimes r'_{2}$ be another copy of $r$. Since $\Delta=ad(r)$ (\ref{888}), $r=r'$ and $r_{1}\otimes r_{2}=-(-1)^{{|r_{1}|}{|r_{2}|}}r_{2}\otimes r_{1}$, a direct calculation gives \begin{align*} (\alpha\otimes\Delta)(r_{1}\otimes r_{2})&=\alpha(r_{1})\otimes\Delta(r_{2}) =\alpha(r_{1})\otimes[r_{2}, r'_{1}]\otimes\alpha(r'_{2})+(-1)^{{|r'_{1}|}{|r_{2}|}}\alpha(r_{1})\otimes\alpha(r'_{1})\otimes[r_{2}, r'_{2}]\\ &=[r_{12}, r_{23}]+[r_{13}, r_{23}]. \end{align*} This shows the equivalence between statements (1) and (2). Likewise, we have \begin{align*} (\Delta\otimes\alpha)(r_{1}\otimes r_{2}) &=\Delta(r_{1})\otimes\alpha(r_{2}) =[r_{1},r'_{1}]\otimes\alpha(r'_{2})\otimes\alpha(r_{2})+(-1)^{{|r_{1}|}{|r'_{1}|}}\alpha(r'_{1})\otimes[r_{1}, r'_{2}]\otimes\alpha(r_{2})\\ &=-(-1)^{{|r'_{1}|}{|r_{2}|}}[r_{1}, r'_{1}]\otimes\alpha(r_{2})\otimes\alpha(r'_{2})-\alpha(r_{1})\otimes[r_{2}, r'_{1}]\otimes\alpha(r'_{2})\\ &=-[r_{12}, r_{13}]-[r_{12}, r_{23}].
\end{align*} This shows the equivalence between statements (1) and (3). \end{proof} \section{Cobracket perturbation in Hom-Lie superbialgebras} The purpose of this section is to study perturbation of cobrackets in Hom-Lie superbialgebras, following Drinfel'd's perturbation theory of quasi-Hopf algebras (\cite{Drinfel'd V.G.1}, \cite{Drinfel'd V.G.5}, \cite{Drinfel'd V.G.6}, \cite{Drinfel'd V.G.7}).\\ We address the following question: ``If $(\mathfrak{L}, [\cdot ,\cdot ], \Delta, \alpha)$ is a Hom-Lie superbialgebra (Definition \ref{ae}) and $t\in\mathfrak{L}^{\otimes2}$, under what conditions does the perturbed cobracket $\Delta_{t}=\Delta+ad(t)$ give another Hom-Lie superbialgebra $(\mathfrak{L}, [\cdot ,\cdot ],\Delta_{t},\alpha)$?''\\ \\ Define the perturbed cobracket $\Delta_{t}=\Delta+ad(t)$. For $x\in\mathfrak{L}$ and $t=\sum t_{1}\otimes t_{2}\in\mathfrak{L}^{\otimes2}$, recalling the adjoint map $ad_{x}:\mathfrak{L}^{\otimes n}\rightarrow \mathfrak{L}^{\otimes n}$ (\ref{action}), we have \begin{align*} \Delta_{t}(x)&=\Delta(x)+ad_{x}(t) =\Delta(x)+ad_{x}(t_{1}\otimes t_{2}) =\Delta(x)+[x, t_{1}]\otimes\alpha(t_{2})+(-1)^{{|x|}{|t_{1}|}}\alpha(t_{1})\otimes[x, t_{2}]. \end{align*} This is a natural question because $\Delta$ is a 1-cocycle (Remark \ref{2000}), $ad(t)$ (\ref{action}) is a 1-coboundary when $\alpha^{\otimes2}(t)=t$ (Remark \ref{masr}), and perturbation of cocycles by coboundaries is a natural concept in homological algebra. Of course, we have more to worry about than just the cocycle condition (\ref{a}) because $(\mathfrak{L}, \Delta_{t},\alpha)$ must be a Hom-Lie supercoalgebra (Definition \ref{00001}).\\ \\ In the following result, we give some sufficient conditions under which the perturbed cobracket $\Delta_{t}$ gives another Hom-Lie superbialgebra. This is a generalization of \cite{Majid}, which deals with cobracket perturbation in Lie superbialgebras.\\ A result about cobracket perturbation in a quasi-triangular Hom-Lie superbialgebra is given after the following result. We also briefly discuss triangular Hom-Lie superbialgebras, which are the Hom-type version of Drinfel'd's triangular Lie bialgebras \cite{Drinfel'd V.G.4}.\\ Let us recall some notations first. For $t=\sum t_{1}\otimes t_{2}\in\mathfrak{L}^{\otimes2}$, the symbol $t_{21}$ denotes $\tau(t)=\sum (-1)^{{|t_{1}|}{|t_{2}|}} t_{2}\otimes t_{1}$. If $\varphi(x, y)$ is an expression in the elements $x$ and $y$, we set $$|\varphi(x, y)|=\varphi(x, y)-(-1)^{{|x|}{|y|}}\varphi(y, x).$$ For example, the compatibility condition $\Delta([x,y])=ad_{\alpha(x)}(\Delta(y))-(-1)^{{|x|}{|y|}}ad_{\alpha(y)}(\Delta(x))$ (\ref{a})\\ is equivalent to $$\Delta([x,y])=|ad_{\alpha(x)}(\Delta(y))|.$$ Moreover, a direct calculation shows that the Hom-super-Jacobi identity (\ref{702}) is equivalent to $[[x, y], \alpha(z)]=|[\alpha(x), [y, z]]|$, where $|[\alpha(x), [y, z]]|=[\alpha(x), [y, z]]-(-1)^{{|x|}{|y|}}[\alpha(y), [x, z]]$.\\ Note that we have $$|\varphi(x, y)+\psi(x, y)|=|\varphi(x, y)|+|\psi(x, y)|.$$ To simplify the writing, we also abbreviate $(1\otimes1\otimes1+\xi+\xi^2)$ by $\circlearrowleft$.
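For the reader's convenience, here is the calculation behind the equivalence just mentioned; it is a minimal sketch which assumes that the Hom-super-Jacobi identity (\ref{702}) is taken in its usual cyclic form $(-1)^{{|x|}{|z|}}[\alpha(x),[y,z]]+(-1)^{{|y|}{|x|}}[\alpha(y),[z,x]]+(-1)^{{|z|}{|y|}}[\alpha(z),[x,y]]=0$. Multiplying by $(-1)^{{|x|}{|z|}}$ and rewriting the last two summands with the skew-supersymmetry (\ref{701}), we get \begin{align*} 0&=[\alpha(x),[y,z]]+(-1)^{{|x|}{|y|}+{|x|}{|z|}}[\alpha(y),[z,x]]+(-1)^{{|y|}{|z|}+{|x|}{|z|}}[\alpha(z),[x,y]]\\ &=[\alpha(x),[y,z]]-(-1)^{{|x|}{|y|}}[\alpha(y),[x,z]]-[[x,y],\alpha(z)], \end{align*} which is exactly $[[x, y], \alpha(z)]=|[\alpha(x), [y, z]]|$.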
\begin{thm}\label{monsieur} Let $(\mathfrak{L}, [\cdot ,\cdot ], \Delta, \alpha)$ be a multiplicative Hom-Lie superbialgebra and $t\in\mathfrak{L}^{\otimes2}$ be an element such that $\alpha^{\otimes2}(t)=t, \ \ t_{21}=-t,$\ (\ref{2007}) and (\ref{777}), ((\ref{777})\ and\ (\ref{2007}),\ i.e., $|t|=\bar{0})$ and \begin{equation}\label{faxx} \alpha^{\otimes3}\big(ad_{x}([[t, t]]^{\alpha}+\circlearrowleft(\alpha\otimes\Delta)(t))\big)=0 \end{equation} for all $x\in\mathfrak{L}$. Then $\mathfrak{L}_{t}=(\mathfrak{L}, [\cdot ,\cdot ], \Delta_{t}=\Delta+ad(t), \alpha)$ is a multiplicative Hom-Lie superbialgebra. \end{thm} \begin{proof} To show that $\mathfrak{L}_{t}$ is a multiplicative Hom-Lie superbialgebra, we need to prove the following conditions.\\ It is clear that $(\mathfrak{L}, [\cdot ,\cdot ], \alpha)$ is a multiplicative Hom-Lie superalgebra.\\ It remains to show that $(\mathfrak{L}, \Delta_{t}, \alpha)$ is a co-multiplicative Hom-Lie supercoalgebra and that the compatibility condition (\ref{a}) holds for $\Delta_{t}$ and $[\cdot ,\cdot ]$.\\ Precisely, we need to prove four things:\\ $\bullet$ $\alpha^{\otimes2}\circ\Delta_{t}=\Delta_{t}\circ\alpha$; this equality holds because: \begin{align*} \Delta_{t}(\alpha(x))&=\Delta(\alpha(x))+ad_{\alpha(x)}(t).\\ \alpha^{\otimes2}(\Delta_{t}(x))&=\alpha^{\otimes2}(\Delta(x))+\alpha^{\otimes2}(ad_{x}(t)) =\Delta(\alpha(x))+\alpha^{\otimes2}(ad_{x}(t_{1}\otimes t_{2})). \end{align*} Using $\alpha^{\otimes2}(t)=t$, we have \begin{align*} \alpha^{\otimes2}(ad_{x}(t_{1}\otimes t_{2}))&=[\alpha(x), t_{1}]\otimes\alpha(t_{2})+(-1)^{{|x|}{|t_{1}|}}\alpha(t_{1})\otimes[\alpha(x), t_{2}] =ad_{\alpha(x)}(t). \end{align*} $\bullet$ $\Delta_{t}$ is skew-supersymmetric because: \begin{align*} \Delta_{t}(x)&=\Delta(x)+ad_{x}(t),\\ \tau(\Delta_{t}(x))&=-\Delta(x)+(-1)^{{|t_{1}|}{|t_{2}|}}\textbf{(}[x,t_{2}]\otimes\alpha(t_{1})+(-1)^{{|x|}{|t_{2}|}}\alpha(t_{2})\otimes[x,t_{1}]\textbf{)}. \end{align*} Then $\Delta_{t}(x)+\tau(\Delta_{t}(x))=ad_{x}\textbf{(}t+(-1)^{{|t_{1}|}{|t_{2}|}}t_{2}\otimes t_{1}\textbf{)}=ad_{x}(t+t_{21})=ad_{x}(0)=0.$\\ $\bullet$ Now, we need to show that the compatibility condition (\ref{a}) holds for $\Delta_{t}$ and $[\cdot ,\cdot ]$: \begin{equation}\label{a2016} \Delta_{t}([x,y])=ad_{\alpha(x)}(\Delta_{t}(y))-(-1)^{{|x|}{|y|}}ad_{\alpha(y)}(\Delta_{t}(x)), \end{equation} which is equivalent to \begin{equation}\label{add1} \Delta_{t}([x,y])=|ad_{\alpha(x)}(\Delta_{t}(y))|. \end{equation} Since $\Delta_{t}=\Delta+ad(t)$, (\ref{add1}) is equivalent to \begin{align*} \Delta([x, y])+ad_{[x, y]}(t)&=|ad_{\alpha(x)}(\Delta(y))+ad_{\alpha(x)}(ad_{y}(t))| =|ad_{\alpha(x)}(\Delta(y))|+|ad_{\alpha(x)}(ad_{y}(t))|. \end{align*} Moreover, since $\Delta([x,y])=ad_{\alpha(x)}(\Delta(y))-(-1)^{{|x|}{|y|}}ad_{\alpha(y)}(\Delta(x))= |ad_{\alpha(x)}(\Delta(y))|$, because $\mathfrak{L}$ is a Hom-Lie superbialgebra, (\ref{add1}) is equivalent to $ ad_{[x, y]}(t)=|ad_{\alpha(x)}(ad_{y}(t))|, $ which holds by Lemma \ref{compatiblite}.\\ $\bullet$ Finally, we must show the Hom-super-coJacobi identity of $\Delta_{t}$, which states \begin{equation}\label{shss} \circlearrowleft(\alpha\otimes\Delta_{t})(\Delta_{t}(x))=0 \end{equation} for all $x\in\mathfrak{L}$. Using the definition $\Delta_{t}=\Delta+ad(t)$, we can rewrite (\ref{shss}) as \begin{equation}\label{147a} \circlearrowleft(\alpha\otimes\Delta)(\Delta(x))+\circlearrowleft(\alpha\otimes\Delta)(ad_{x}(t))+\\ \circlearrowleft(\alpha\otimes ad(t))(\Delta(x))+\circlearrowleft(\alpha\otimes ad(t))(ad_{x}(t))=0.
\end{equation} We already know that $\circlearrowleft(\alpha\otimes\Delta)(\Delta(x))=0$, which is the Hom-super-coJacobi identity of $\Delta$.\\ Moreover, in (\ref{anis3}) and (\ref{Hom}) (in the proof of Theorem \ref{Lie} with $t$ instead of $r$), we already showed that \begin{equation}\label{147b} \circlearrowleft(\alpha\otimes ad(t))(ad_{x}(t))=\alpha^{\otimes3}(ad_{x}([[t, t]]^{\alpha})). \end{equation} In view of (\ref{147a}) and (\ref{147b}), the Hom-super-coJacobi identity of $\Delta_{t}$ (\ref{shss}) is equivalent to \begin{equation}\label{mosq} \alpha^{\otimes3}(ad_{x}([[t, t]]^{\alpha}))+\circlearrowleft\textbf{(}(\alpha\otimes ad(t))(\Delta(x))+(\alpha\otimes\Delta)(ad_{x}(t))\textbf{)}=0. \end{equation} Using the assumption (\ref{faxx}), the condition (\ref{mosq}) is equivalent to \begin{equation}\label{m10} \circlearrowleft\textbf{(}(\alpha\otimes ad(t))(\Delta(x))+(\alpha\otimes\Delta)(ad_{x}(t))\textbf{)}=\alpha^{\otimes3}(\circlearrowleft ad_{x}((\alpha\otimes\Delta)(t))). \end{equation} We will prove (\ref{m10}) in Lemma \ref{edin} below.\\ The proof of Theorem \ref{monsieur} will be complete once we prove Lemma \ref{edin}. \end{proof} \begin{lem}\label{edin} The condition (\ref{m10}) holds. \end{lem} \begin{proof} Write $\Delta(x)=\sum x_{1}\otimes x_{2}$ and $t=\sum t_{1}\otimes t_{2}\in\mathfrak{L}^{\otimes2}$. Then the left-hand side of (\ref{m10}) is: \begin{align*} &\circlearrowleft\textbf{(}(\alpha\otimes ad(t))(\Delta(x))+(\alpha\otimes\Delta)(ad_{x}(t))\textbf{)}\\ &= \ \circlearrowleft\textbf{(}\alpha(x_{1})\otimes ad_{x_{2}}(t_{1}\otimes t_{2})+(\alpha\otimes\Delta)([x, t_{1}]\otimes\alpha(t_{2})+(-1)^{{|x|}{|t_{1}|}}\alpha(t_{1})\otimes[x, t_{2}])\textbf{)}\\ &= \ \circlearrowleft\textbf{(}\alpha(x_{1})\otimes[x_{2}, t_{1}]\otimes\alpha(t_{2})+(-1)^{{|x_{2}|}{|t_{1}|}}\alpha(x_{1})\otimes\alpha(t_{1})\otimes [x_{2}, t_{2}]\textbf{)}\\ &+\circlearrowleft\textbf{(}\alpha([x, t_{1}])\otimes\Delta(\alpha(t_{2}))+(-1)^{{|x|}{|t_{1}|}}\alpha^{2}(t_{1})\otimes\Delta([x, t_{2}])\textbf{)}. \end{align*} Write $\Delta(t_{2})=\sum t'_{2}\otimes t''_{2}$. Recall from (\ref{a}) that: $\Delta([x, t_{2}])=ad_{\alpha(x)}(\Delta(t_{2}))-(-1)^{{|x|}{|t_{2}|}}ad_{\alpha(t_{2})}(\Delta(x))$, because $\mathfrak{L}$ is a Hom-Lie superbialgebra. We can continue the above computation as follows :\\ \begin{align*} &= \ \circlearrowleft\textbf{(}\alpha(x_{1})\otimes[x_{2}, t_{1}]\otimes\alpha(t_{2})+(-1)^{{|x_{2}|}{|t_{1}|}}\alpha(x_{1})\otimes\alpha(t_{1})\otimes [x_{2}, t_{2}]+\alpha([x, t_{1}])\otimes\alpha^{\otimes2}(\Delta(t_{2}))\textbf{)}\\ &+\circlearrowleft\textbf{(}(-1)^{{|x|}{|t_{1}|}}\alpha^{2}(t_{1})\otimes[\alpha(x), t'_{2}]\otimes\alpha(t''_{2})+(-1)^{{|x|}{|t_{1}|}}(-1)^{{|x|}{|t'_{2}|}}\alpha^{2}(t_{1})\otimes\alpha(t'_{2})\otimes [\alpha(x), t''_{2}]\textbf{)}\\ &-\circlearrowleft\textbf{(}(-1)^{{|x|}{|t_{1}|}}(-1)^{{|x|}{|t_{2}|}}\alpha^{2}(t_{1})\otimes[\alpha(t_{2}),x_{1}]\otimes\alpha(x_{2})\\ &-(-1)^{{|x|}{|t_{1}|}}(-1)^{{|x|}{|t_{2}|}}(-1)^{{|x_{1}|}{|t_{2}|}}\alpha^{2}(t_{1})\otimes\alpha(x_{1})\otimes[\alpha(t_{2}), x_{2}]\textbf{)}. 
\end{align*} Using the skew-supersymmetry of $\Delta$ applied to $x$ (i.e., $\sum x_{1}\otimes x_{2}=-\sum (-1)^{{|x_{1}|}{|x_{2}|}} x_{2}\otimes x_{1}$), $|t|=\bar{0}$, $t_{21}=-t$, and $\alpha^{\otimes2}(t)=t$, we find that \\ $\bullet$ $\alpha^{2}(t_{1})\otimes[\alpha(t_{2}),x_{1}]\otimes\alpha(x_{2}) =(-1)^{{|x_{1}|}{|x_{2}|}}\alpha(x_{2})\otimes\alpha^{2}(t_{1})\otimes[\alpha(t_{2}), x_{1}].$ \\ $\bullet$ $(-1)^{{|x_{1}|}{|t_{2}|}}\alpha^{2}(t_{1})\otimes\alpha(x_{1})\otimes[\alpha(t_{2}), x_{2}]=(-1)^{{|t_{1}|}{|t_{2}|}}(-1)^{{|x_{2}|}{|t_{1}|}}\alpha(x_{1})\otimes[\alpha(t_{2}), x_{2}]\otimes\alpha^{2}(t_{1}).$ It follows that the first two terms and the last two terms above cancel out. Using the commutation of $\alpha$ with $[\cdot ,\cdot ]$ and $\Delta$ and $\alpha^{\otimes2}(t)=t$, the above computation continues as follows: \begin{align*} &= \ \circlearrowleft\textbf{(}\alpha([x, t_{1}])\otimes\alpha^{\otimes2}(\Delta(t_{2}))+(-1)^{{|x|}{|t_{1}|}}\alpha^{2}(t_{1})\otimes[\alpha(x), t'_{2}]\otimes\alpha(t''_{2})\\ &+(-1)^{{|x|}({|t_{1}|+|t'_{2}|})}\alpha^{2}(t_{1})\otimes\alpha(t'_{2})\otimes [\alpha(x), t''_{2}]\textbf{)}\\ &= \ \circlearrowleft\textbf{(}\alpha([x, \alpha(t_{1})])\otimes\alpha^{\otimes2}(\Delta(\alpha(t_{2})))+(-1)^{{|x|}{|t_{1}|}}\alpha^{3}(t_{1})\otimes[\alpha(x), \alpha(t'_{2})]\otimes\alpha^{2}(t''_{2})\\ &+(-1)^{{|x|}({|t_{1}|+|t'_{2}|})}\alpha^{3}(t_{1})\otimes\alpha^{2}(t'_{2})\otimes [\alpha(x), \alpha(t''_{2})]\textbf{)}\\ &=\alpha^{\otimes3}\textbf{(} \circlearrowleft\textbf{(}[x, \alpha(t_{1})]\otimes\Delta(\alpha(t_{2}))+(-1)^{{|x|}{|t_{1}|}}\alpha^{2}(t_{1})\otimes[x, t'_{2}]\otimes\alpha(t''_{2}) +(-1)^{{|x|}({|t_{1}|+|t'_{2}|})}\alpha^{2}(t_{1})\otimes\alpha(t'_{2})\otimes [x, t''_{2}]\textbf{)}\textbf{)}\\ &=\alpha^{\otimes3}(\circlearrowleft ad_{x}(\alpha(t_{1})\otimes t'_{2}\otimes t''_{2})) =\alpha^{\otimes3}(\circlearrowleft ad_{x}((\alpha\otimes\Delta)(t))). \end{align*} This proves (\ref{m10}). \end{proof} The following result is a special case of the previous theorem. \begin{cor}\ Let $(\mathfrak{L}, [\cdot ,\cdot ], \Delta, \alpha)$ be a multiplicative Hom-Lie superbialgebra and $t\in\mathfrak{L}^{\otimes2}$ be an element such that $\alpha^{\otimes2}(t)=t, \ \ t_{21}=-t,$\ (\ref{2007}) and (\ref{777}), ((\ref{777})\ and\ (\ref{2007}),\ i.e., $|t|=\bar{0})$, and $[[t, t]]^{\alpha}+\circlearrowleft(\alpha\otimes\Delta)(t)=0$. Then $\mathfrak{L}_{t}=(\mathfrak{L}, [\cdot ,\cdot ], \Delta_{t}=\Delta+ad(t), \alpha)$ is a multiplicative Hom-Lie superbialgebra. \end{cor}
\section{Introduction} Optimal transport theory is becoming an essential tool for modern machine learning research. The development of efficient optimal transport algorithms led to a wide range of machine learning applications \cite{peyre2017computational, cuturi2013sinkhorn, solomon2014wasserstein, kloeckner2015geometric, ho2017multilevel, arjovsky2017wasserstein, patrini2018sinkhorn, lee2018minimax, genevay18a, staib2017parallel, ambrogioni2018wasserstein, mi2018variational}. A notable example is approximate Bayesian inference, where optimal transport techniques have been used for constructing probabilistic autoencoders \cite{tolstikhin2017wasserstein, patrini2018sinkhorn} and for general purpose variational Bayesian inference \cite{ambrogioni2018wasserstein}. However, the field that has been most deeply influenced by optimal transport theory is arguably generative modeling \cite{arjovsky2017wasserstein, genevay18a, gulrajani2017improved, adler2018banach, genevay2017gan, gemici2018primal}. The introduction of the Wasserstein generative adversarial network (wGAN) \cite{arjovsky2017wasserstein} was a milestone as it provided a more stable form of adversarial training. Generative adversarial networks (GANs) greatly improved the state of the art in image generation. Nevertheless, GAN training often leads to mode collapse, where part of the data space is ignored by the generative model. A possible way to mitigate this phenomenon is to use a collection of GANs, each modeling a part of the data space \cite{wang2016ensembles}. However, existing ensembling techniques are often heuristic in nature and do not provide a principled way for ensuring that the different generators model non-overlapping parts of the data space. In this paper, we derive an ensemble of generative models algorithm from first principles using the theory of semi-discrete optimal transport. The basic idea is to jointly learn a series of elements (prototypes) of a discrete distribution and the optimal transportation functions from these prototypes to the data space. The notion of optimality is determined by a transportation cost that measures the dissimilarity between the prototypes and the data points. The resulting k-GANs algorithm has strong theoretical connections with the k-means and k-medoids algorithms. In the k-GANs algorithm we learn $k$ prototypes that implicitly define a partitioning of the data space into non-overlapping cells. The distribution of the data in each of these cells is generated by a stochastic transportation function that maps the prototype into the data space. These transportation functions are parameterized by deep networks and are trained as regular GANs within their cell. The prototypes and the transportation functions are learned jointly so that the boundary of the cells shifts during training as a consequence of the changes in the prototypes. \section{Related work} From a theoretical point of view, our algorithm has a strong connection with the traditional k-means and k-medoid clustering methods \cite{forgy1965cluster, graf2007foundations}. This connection between k-means and semi-discrete optimal transport stems from the fact that semi-discrete transport problems implicitly define a Laguerre tessellation of the space, which reduces to the more familiar Voronoi tessellation in special cases \cite{peyre2017computational, graf2007foundations}. 
Recently, this connection has been exploited in a variational clustering algorithm which uses optimal transport theory in order to derive a more powerful clustering method \cite{mi2018variational}. \section{Background on optimal transport} In machine learning and statistics, optimal transport divergences are often used for comparing probability measures. Consider two probability measures $\nu(\de{x})$ and $\mu(\de{y})$. The optimal transport divergence between them is defined by the following optimization problem: \begin{equation} \OP{\mu}{\nu}{c} = \inf_{\gamma(\de{x},\de{y}) \in \Gamma} \int_{\mathcal{X} \times \mathcal{Y}} c(x,y) \gamma(\de{x},\de{y})~, \end{equation} where $c(x,y)$ is the cost of transporting probability mass from $x$ to $y$ and $\Gamma$ is the set of probability measures that have $\nu(\de{x})$ and $\mu(\de{y})$ as marginal measures over $x$ and $y$ respectively. The transportation nature of this problem can be seen by slightly reformulating the objective function by writing the joint measure $\gamma(\de{x},\de{y})$ as the product of a conditional measure $\gamma(\de{x}|y)$ and of the marginal measure $\mu(\de{y})$: \begin{equation} \OP{\mu}{\nu}{c} = \inf_{\gamma(\de{x}|y) \in \Gamma_\nu} \int_{\mathcal{Y}} \left( \int_{\mathcal{X}} c(x,y) \gamma(\de{x}|y) \right) \mu(\de{y})~, \end{equation} where the integral of the conditional measures $\gamma(\de{x}|y)$ has to be equal to $\nu(\de{x})$: \begin{equation} \Gamma_\nu = \set{\gamma(\de{x}|y)}{\int_\mathcal{Y} \gamma(\de{x}|y) \mu(\de{y}) = \nu(\de{x})}~. \label{eq: marginalization constraint} \end{equation} Therefore, the conditional measures $\gamma(\de{x}|y)$ can be interpreted as stochastic transportation functions that map each element $y$ into a probability measure over $x$. \section{Semi-discrete optimal transport as ensemble of generative models} \label{sec: semi-discrete as ensemble} One of the main advantages of using optimal transport divergences is that they can be defined between probability measures with very different support. An important case is semi-discrete transport, where $\nu(\de{x})$ is absolutely continuous with respect to the Lebesgue measure $\de{x}$ while $\mu(\de{y})$ is a discrete measure: $$ \mu(\de{y}) = \sum_j w_j \delta_{y_j}(\de{y})~. $$ Semi-discrete optimal transport has important machine learning applications. For our purposes, the minimization of a semi-discrete optimal transport divergence can be used for approximating the probability distribution of the data with a discrete distribution over a finite number of ``prototypes''. The semi-discrete optimal transport divergence can be rewritten as follows: \begin{align} \label{eq: semi-discrete optimal transport} \OP{\nu}{\mu}{c} &= \inf_{\gamma(x|y)} \int_\mathcal{Y} \left( \int_\mathcal{X} c(x,y) \gamma(\de{x}|y) \right) \mu(\de{y})\\ &= \sum_j w_j \inf_{\gamma(x|y_j)} \int_\mathcal{X} c(x,y_j) \gamma(\de{x}|y_j)~, \label{eq: semi-discrete transport} \end{align} with the following constraint: \begin{equation}\label{eq: marginalization constraint II} \nu(\de{x}) = \sum_j w_j \gamma(\de{x}|y_j)~. \end{equation} Note that each conditional measure $\gamma(\de{x}|y_j)$ can be interpreted as a generative model that maps a prototype $y_j$ into a probability distribution over the data points $x$.
The optimization in Eq.~\ref{eq: semi-discrete transport} assures that these distributions are centered around their prototype (in a sense given by the cost function) while the marginalization constraint in Eq.~\ref{eq: marginalization constraint II} guarantees that the sum of all generative models is the real distribution of the data. In other words, the solution of this semi-discrete optimal transport problem provides an ensemble of local generative models. \section{Geometry of semi-discrete optimal transport problems} In this section we will summarize some known results of semi-discrete optimal transport that will provide the theoretical foundation for our work. Semi-discrete optimal transport has a deep connection with geometry as it can be proven that the transportation maps are piecewise constant and define a tessellation of the space $\mathcal{X}$. In order to show this, it is useful to introduce the unconstrained dual formulation of the optimal transport problem \cite{peyre2017computational}: \begin{equation} \OP{\mu}{\nu}{c} = \sup_{\boldsymbol{g}} \int_\mathcal{X} g^c(x) \nu(\de{x}) + \sum_j g_j w_j~, \label{eq: dual optimization} \end{equation} where $g^c(x)$ denotes the c-transform of the vector of dual weights $\boldsymbol{g}$: \begin{equation} g^c(x) = \inf_j \left(c(x,y_j) - g_j \right)~. \label{eq: c-transform} \end{equation} We can now reformulate the objective function in terms of a tessellation of $\mathcal{X}$: \begin{equation} \OP{\mu}{\nu}{c} = \sup_{\boldsymbol{g}} \mathcal{E}(\boldsymbol{g}) = \sup_{\boldsymbol{g}} \sum_j \int_{L_j(\boldsymbol{g})} \left( c(x,y_j) - g_j \right) \nu(\de{x}) + \sum_j g_j w_j~, \label{eq: dual optimization II} \end{equation} where the sets $L_j(\boldsymbol{g})$ are defined as $$ L_j(\boldsymbol{g}) = \set{x}{\forall{k},~~ c(x,y_j) - g_j < c(x,y_k) - g_k}~, $$ yielding a so-called Laguerre tessellation of $\mathcal{X}$. We can finally state the following important theorem, expressing the transportation maps in terms of the optimized Laguerre tessellation: \begin{theorem} The optimal transportation maps in the optimization problem in Eq.~\ref{eq: semi-discrete optimal transport} are given by the following formula: \begin{equation} \hat{\gamma}(\de{x}|y_j) = \restr{\nu}{L_j(\hat{\boldsymbol{g}})}(\de{x})~, \end{equation} where $\restr{\nu}{L_j(\hat{\boldsymbol{g}})}$ denotes the renormalized restriction of the probability measure $\nu$ to the set $L_j(\hat{\boldsymbol{g}})$ and $\hat{\boldsymbol{g}}$ is the solution of the optimization in Eq.~\ref{eq: dual optimization}. \label{th 1} \end{theorem} \begin{proof} The derivative of Eq.~\ref{eq: dual optimization II} with respect to $g_j$ is given by \cite{peyre2017computational}: \begin{equation} \frac{\partial \mathcal{E}(\boldsymbol{g})}{\partial g_j} = - \int_{L_j(\boldsymbol{g})} \nu(\de{x}) + w_j~. \end{equation} Since the problem is unconstrained, this implies that, for the optimal dual weights $\hat{\boldsymbol{g}}$, we have \begin{equation} \int_{L_j(\hat{\boldsymbol{g}})} \nu(\de{x}) = w_j~. \end{equation} By plugging this result into Eq.~\ref{eq: dual optimization II}, we obtain: \begin{align} \OP{\mu}{\nu}{c} &= \sum_j \int_{L_j(\hat{\boldsymbol{g}})} c(x,y_j) \nu(\de{x}) \\ &= \sum_j w_j \int_{\mathcal{X}} c(x,y_j) \restr{\nu}{L_j(\hat{\boldsymbol{g}})}(\de{x})~.
\end{align} By comparing this expression with the primal formulation in Eq.~\ref{eq: semi-discrete transport}, it follows immediately that the optimal transportation maps are given by the measures $\restr{\nu}{L_j(\hat{\boldsymbol{g}})}$. \end{proof} \section{Simultaneous optimization of weights and transportation maps} In this section, we prove the main theoretical result behind the method. Consider the problem of finding the set of prototypes, weights and transportation maps that minimize the semi-discrete optimal transport divergence in Eq.~\ref{eq: semi-discrete optimal transport}. This results in the following joint optimization problem: \begin{align}\label{eq: joint-optimization} \arginf_{y_j, w_j, \gamma(x|y_j)} \sum_j w_j \int_\mathcal{X} c(x,y_j) \gamma(\de{x}|y_j)~. \end{align} The solution of this optimization problem is an optimal ensemble of generative models. From the previous section, we know that the solution of the semi-discrete optimal transport problem is given by a tessellation of the target space into Laguerre cells $L_j(\boldsymbol{g})$ parameterized by a vector of dual weights $\boldsymbol{g}$. These cells are the support sets of the transportation maps. In the general case, these cells can be computed using computational geometry algorithms \cite{peyre2017computational}. Fortunately, the problem can be solved in closed form if we simultaneously optimize the weights and the transportation maps, as stated in the following theorem: \begin{theorem}[Formal solution of the joint optimization problem] \label{th: formal solution} The optimization problem given by \begin{equation} \arginf_{w_j, \gamma(\de{x}|y_j)} \sum_j w_j \int_\mathcal{X} c(x,y_j) \gamma(\de{x}|y_j)~, \end{equation} under the marginalization constraint given in Eq.~\ref{eq: marginalization constraint} is solved by the following Voronoi tessellation: \begin{equation} V_j = \set{x}{\forall{k},~~ c(x,y_j) < c(x,y_k)}~, \label{eq: voronoi set} \end{equation} where the transportation maps are obtained by restricting the data distribution $\nu(\de{x})$ to each set of the tessellation \begin{equation} \hat{\gamma}(\de{x}|y_j) = \restr{\nu}{V_j}(\de{x}) \label{eq: minimax transporation maps} \end{equation} and the optimal weights are given by \begin{equation} \hat{w}_j = \int_{V_j} \nu(\de{x})~. \label{eq: minimax weights} \end{equation} \end{theorem} \begin{proof} It is easier to work with the dual optimization problem (see Eq.~\ref{eq: dual optimization II}). Enforcing the constraint that the weight vector $\boldsymbol{w}$ should sum to one using a Lagrange multiplier, we obtain the following unconstrained minimax optimization: \begin{align} &\inf_{\lambda} \inf_{\boldsymbol{w}} \sup_{\boldsymbol{g}} \mathcal{L}(\boldsymbol{w}, \boldsymbol{g}, \lambda) \\ &= \inf_{\lambda}\inf_{\boldsymbol{w}} \sup_{\boldsymbol{g}} \sum_j \int_{L_j(\boldsymbol{g})} \left( c(x,y_j) - g_j \right) \nu(\de{x}) + \sum_j g_j w_j + \lambda (1 - \sum_j w_j) ~. \end{align} We can find the critical point by setting the gradient to zero: \begin{align} &\frac{\partial \mathcal{L}}{\partial g_j} = - \int_{L_j(\boldsymbol{g})} \nu(\de{x}) + w_j = 0\\ &\frac{\partial \mathcal{L}}{\partial w_j} = g_j - \lambda = 0~. \end{align} The second equation implies that all the dual weights are equal to a constant. This implies that the Laguerre sets $L_j(\lambda)$ are Voronoi sets (Eq.~\ref{eq: voronoi set}).
The first equation gives Eq.~\ref{eq: minimax weights} and the transportation maps in Eq.~\ref{eq: minimax transporation maps} are a consequence of Theorem~\ref{th 1}. Note that the resulting weights clearly respect the marginalization constraint. \end{proof} Using Theorem~\ref{th: formal solution}, we can write a simple expression for the optimal prototypes: \begin{align}\label{eq: optimal prototypes} \hat{y}_j = \arginf_{y_j} \int c(x,y_j) \restr{\nu}{V_j}(\de{x})~. \end{align} In other words, the prototypes are the medoids of the Voronoi sets with respect to the cost function $c(x, y)$. \section{Learning the transportation maps} The solution of Eq.~\ref{eq: joint-optimization} given above is purely formal and does not directly provide a useful algorithm since the distribution that generated the data is not available. However, a practical algorithm can be obtained by minimizing a statistical divergence between this formal solution and a parametric model, such as a deep generative network. We will denote the probability measure induced by passing a latent measure $p(\de{z})$ through a deep network $F$ as ${F}_* p$ (where the lower star denotes the pushforward of a measure through a measurable function). We approximate each optimal transportation map $\restr{\nu}{L_j}(\de{x})$ as follows: $$ q_j(\de{x}) = {F_j}_* p_j~. $$ In practice, in the most naive implementation, this means that we reject samples that land outside $L_j$. We train each network $F_j$ by minimizing a statistical divergence: \begin{equation}\label{eq: divergence minimization} \mathcal{L}_j = \divergence{\restr{\nu}{L_j}}{q_j}~. \end{equation} This is possible when the divergence $D$ solely requires the ability to sample from $\restr{\nu}{L_j}$ since we can sample from the dataset and reject all samples that do not land in $L_j$. For example, using the dual formulation of the Wasserstein distance, we can train the generators (Wasserstein GANs) by optimizing the following minimax problem using stochastic gradient descent (SGD) \cite{arjovsky2017wasserstein}: \begin{equation} \inf_F \W{\restr{\nu}{L_j}}{q_j} = \inf_F \sup_{f \in L^1} \left( \mean{f(x)}{q_j(\de{x})} - \mean{f(x)}{\restr{\nu}{L_j}} \right), \label{eq: wasserstein gan} \end{equation} where $L^1$ is the space of Lipschitz continuous functions. In practice, we approximate the samples from $\restr{\nu}{L_j}$ with samples from a finite dataset and we parameterize both $F$ and $f$ as deep neural networks. Furthermore, $f$ is regularized by the following soft Lipschitz regularization term: \begin{equation}\label{eq: regularization} \mathcal{R}[f] = \gamma \mean{\text{ReLU}(|f(x) - f(y)| - 1)} {x, y \sim_{\text{iid}} \restr{\nu}{L_j}}~. \end{equation} Using the trained generators, we can obtain a parameterized proxy for the loss in Eq.~\ref{eq: optimal prototypes} that we will use to train the prototypes: \begin{align}\label{eq: optimal prototypes, proxy loss} \mathcal{W}_j[y_j] = \int c(x,y_j) q_j(\de{x})~. \end{align} \section{The k-GANs algorithm} We are finally ready to formulate the algorithm. The basic idea is to minimize Eq.~\ref{eq: joint-optimization} using a two-step approach similar to the expectation maximization scheme used in the k-means algorithm \cite{forgy1965cluster}. In the first step, we keep the generators fixed and we train the prototypes by minimizing Eq.~\ref{eq: optimal prototypes, proxy loss} with $n$ SGD steps.
In the second step, we keep the prototypes (and consequently the tessellation) fixed and we train the generators by minimizing Eq.~\ref{eq: wasserstein gan} with $m$ SGD steps (cf. Algorithm~\ref{alg:kGANs}). We named this algorithm k-GANs since it can be interpreted as a parametric version of the well-known k-medoids method. Specifically, a stochastic version of k-medoids is obtained if we replace the trained deep generators $F_j$ with nonparametric generators $G_j$ that sample with uniform probability the elements of the training set that belong to the set $L_j$. This further reduces to a stochastic version of the k-means algorithm if we use the squared Euclidean distance as the cost function. \begin{algorithm} \caption{k-GANs. k: Number of GANs, N: Number of epochs, M: Number of iterations per epoch. }\label{alg:kGANs} \begin{algorithmic}[1] \Procedure{k-GANs}{$k, N, M$} \State \texttt{Initialize k generators} \State \texttt{Initialize k discriminators} \State \texttt{Initialize the prototypes} \For{\texttt{n from 1 to N}} \Comment{loop over epochs} \For{\texttt{j from 1 to k}} \Comment{loop over GANs} \For{\texttt{m from 1 to M}} \Comment{loop over iterations} \State $\text{batch} \sim \text{Dataset}$ \State $\text{batch}_j = [x ~ \text{for} ~x ~\text{in} ~ \text{batch} ~ \text{if} ~ x ~ \text{in} ~ V_j(\text{prototypes})]$ \Comment{reject outside the set $V_j$} \State \texttt{Train discriminator and generator using $\text{batch}_j$} \Comment{(Eq.~\ref{eq: wasserstein gan}, \ref{eq: regularization})} \State \texttt{Train prototypes using samples from the generator} \Comment{(Eq.~\ref{eq: optimal prototypes, proxy loss})} \EndFor \EndFor \EndFor \EndProcedure \end{algorithmic} \end{algorithm} \subsection{The k-generators algorithm} The theory outlined in this paper is not specific to GANs and can be directly applied to any generative model based on the minimization of a statistical divergence. For example, the approach can be used with variational autoencoders \cite{kingma2014auto} and sequential generative models such as those used in natural language processing \cite{sundermeyer2012lstm}. \section{Choosing the cost function} The clustering behavior of the k-GANs algorithm depends on the choice of the cost function $c(x,y)$. The shape of the cost determines the boundaries between the sets of the Voronoi tessellation. These boundaries are in general curved, except when $c(x,y)$ is a monotonic function of a quadratic form. \subsection{$l_p$, Euclidean and feature costs} The simplest choice for the cost function is given by the $p$-th power of an $l_p$ norm: \begin{equation} c_p(x, y) = \lpnorm{x - y}{p}^p~. \label{eq: lp norm} \end{equation} The boundaries induced by this family of norms are very well studied and lead to different clustering behaviors \cite{hathaway2000generalized}. The most common choice is of course the familiar $L_2$ norm, which leads to the familiar (Euclidean) k-means clusters. However, $l_p$ norms can lead to sub-optimal clustering on highly structured data such as natural images, as the boundaries tend to be driven by low-level features and ignore semantic information. A possible way of basing the partitioning on more semantic features is to consider an $l_p$ norm in an appropriate feature space: \begin{equation} c_p^{(f)}(x, y) = \lpnorm{f(x) - f(y)}{p}^p~, \label{eq: feature lp norm} \end{equation} where the feature map $f$ maps the raw data to a feature space. Usually, $f$ is chosen to be a deep network trained on a supervised task.
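Concretely, the partition step of Algorithm~\ref{alg:kGANs} and the cost functions above amount to a nearest-prototype assignment. The following Python fragment is a minimal sketch of this step (it is not the implementation used in our experiments; the function name and the optional \texttt{feature\_map} argument are illustrative only):
\begin{verbatim}
import numpy as np

def assign_to_prototypes(batch, prototypes, p=2, feature_map=None):
    # Assign each point in `batch` to the Voronoi cell V_j of the prototype
    # minimizing c_p(x, y_j) = ||f(x) - f(y_j)||_p^p, where f is an optional
    # feature map (the identity by default).
    f = feature_map if feature_map is not None else (lambda z: z)
    fx = np.asarray([f(x) for x in batch])        # (B, D) batch features
    fy = np.asarray([f(y) for y in prototypes])   # (k, D) prototype features
    costs = (np.abs(fx[:, None, :] - fy[None, :, :]) ** p).sum(axis=-1)
    return costs.argmin(axis=1)                   # cell index j for each point

# Example: split a batch among the k generators (the rejection step).
batch = np.random.rand(100, 2)
prototypes = np.array([[-0.5, 0.0], [0.5, 0.0]])
labels = assign_to_prototypes(batch, prototypes, p=2)
batches_per_gan = [batch[labels == j] for j in range(len(prototypes))]
\end{verbatim}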
\subsection{Semi-supervised costs} Another interesting way to insert semantic information into the cost function is to use labels on a subset of the data. For example, we can have a cost of the following form: \begin{equation} c_{ss}(x, y) = \theta(\mathfrak{l}(x), \mathfrak{l}(y))\, c_{p}(x,y) ~, \label{eq: semi-supervised} \end{equation} where the function $\mathfrak{l}(x)$ assigns a discrete value based on whether the data-point $x$ is labeled and on its label. The function $\theta$ then scales the loss based on this label information. For example, $\theta$ can be equal to $0.1$ when two data-points have the same label, equal to $10$ when the data-points have different labels and equal to one when one or both of the data-points are unlabeled. Note that, in order to use this semi-supervised cost in the k-GANs algorithm we need to be able to assign a label to the prototypes. A possibility is to train a classifier on the labeled part of the dataset. Alternatively, we can simply select the label of the closest labeled data-point. \section{Experiments} In this section we validate the k-GANs method on a clustered toy dataset and on two image datasets: MNIST and fashion MNIST \cite{xiao2017fashion, lecun1998gradient}. We compare the performance of the k-GANs method against the performance of individual GANs. In all our experiments we used the Euclidean distance as the cost function. \begin{figure}[ht] \centering \includegraphics[width=0.7\textwidth]{quantitative}    \caption{Results of the experiments on the toy datasets. A) Generated samples and Voronoi partition induced by the prototypes. The top row shows the result of the Wasserstein GAN baseline while the bottom shows the results for the k-GANs. B) Coverage and precision of the generated samples ensembled over the three toy datasets.}    \label{figure 1} \end{figure} \subsection{Toy dataset} We constructed several toy datasets (TD) composed of randomly sampled coordinates on a 2D plane, which were masked to create circular clusters. The first TD had two circular clusters of data points that fell within a radius of 0.25, centered on (-0.5, 0) and (0.5, 0); similarly the second TD had three circular clusters of data points, centered on (-0.5, -0.5), (0.5, -0.5) and (0, 0.5); and finally the third TD had four circular clusters centered on (-0.5, -0.5), (0.5, -0.5), (0.5, 0.5) and (-0.5, 0.5). We trained Wasserstein k-GANs for $k$ ranging from 1 (baseline) to 4. We repeated the experiment 10 times. We used a 10-dimensional latent space for each of our generators. The generator network architecture was constructed as follows: a fully connected input layer of 32 units with batch normalization and leaky ReLU activation function, followed by fully connected layers of 16, 8 and finally 2 units. The first two layers had batch normalization and leaky ReLUs, while the last one had a sigmoid activation. The discriminator network had two fully connected layers with 16 and 8 units. For optimization, we used Adam with $\alpha = 10^{-4}$ for the generator and the discriminator networks, and $\alpha = 10^{-3}$ for the prototypes. A burn-in of 600 iterations was used within the 60\,000 training iterations of each prototype and the corresponding generator/discriminator networks, which were trained by minimizing the Wasserstein distance. Figure~\ref{figure 1} shows the resulting tessellation and the samples produced by each generator. The case corresponding to k equal to one is the baseline Wasserstein GAN. We evaluated the performance of the methods using two metrics: coverage and precision.
Coverage is quantified by binning the plane into a 2D grid and counting the fraction of bins inside the circular masks that contain a generated data-point. The precision metric is given by the fraction of generated data-points that are inside the masks. We compared a GAN baseline with two k-GANs runs where k was set equal to the number of clusters in the dataset. In one of these two runs the prototypes were initialized using k-means on the generated data while in the other they were sampled randomly from uniform distributions ranging from -1 to 1. Figure~\ref{figure 1} shows the metrics for all methods. Both k-GANs methods reach significantly higher performance than the baseline. \begin{figure}[ht] \centering \includegraphics[width=0.7\textwidth]{partition_figure}    \caption{Results on MNIST with k = 4. A) Partition induced by the prototypes (black and white figures) in the t-SNE space. B) Real (top row) and generated (bottom row) images corresponding to each prototype. The color surrounding the images matches the color scheme of the partition.}    \label{figure 2} \end{figure} \subsection{MNIST and fashion MNIST} We applied the k-GANs algorithm on MNIST and Fashion MNIST. We trained Wasserstein k-GANs for $k$ ranging from 1 (baseline) to 4. Given our limited computational resources, we could only train a single run on each dataset. Prototypes were initialized using the k-means algorithm, and samples were assigned to the nearest prototype in batches of 100 during training. We used a 100-dimensional latent space for each of our generators. We used the following generator network architecture: a fully connected input layer of 12544 units with batch normalization and leaky ReLU activation function (the output of which was reshaped to 256 x 7 x 7), followed by three deconvolution layers of 128, 64 and 1 units. The first two had batch normalization and leaky ReLUs, while the last one had a sigmoid activation. All of them had 5 x 5 kernels with a stride of 2 x 2 except for the first, which had a stride of 1 x 1. The discriminator network had two convolutional layers with 64 and 128 units of size 5 x 5 and stride 2 x 2, and a linear layer with a single output unit. Figure~\ref{figure 2} shows the results corresponding to $k = 4$. The figure shows the partition of the image space embedded into a 2D plane using a t-SNE embedding. The images inside the sets are their prototypes. Figure \ref{figure 3} shows prototypes and samples on MNIST and fashion MNIST for the baseline and k-GANs with $k=4$. The k-GANs produced diversified samples, except for one of the generators on fashion MNIST that collapsed onto a single mode. On the other hand, both baseline models suffered from severe mode collapse. While it is difficult to draw strong conclusions from a single run, the results suggest that the k-GANs approach improves the stability of the base model. \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{samplesfig.pdf}    \caption{Samples of k-GANs and baselines for MNIST and fashion MNIST.}    \label{figure 3} \end{figure} \section{Discussion} In this paper we introduced a method for training an ensemble of generators based on semi-discrete optimal transport theory. Each generator of the ensemble is associated with a prototype. These prototypes induce a partition of the data space and each set of the partition is modeled by an individual generator.
This protects the algorithm from mode collapse as each generator only needs to cover a localized portion of the data space. \bibliographystyle{unsrtnat}
\section{Introduction} In this paper we consider lattice packings of spheres in real $n$-dimensional space $\R^n$ and their kissing numbers. Recall that the maximum kissing number is known only in a handful of dimensions, the largest being $n=24$ for which the Leech lattice $\Lambda_{24}$ gives the optimal kissing number $\tau(\Lambda_{24}) =196560.$ Recall also that the random choice procedure guarantees ([Ch],[Sh],[W]) the existence of non-lattice packings $P_n$ with $\frac{\log_2\tau(P_{n})}{n}\ge \log_2\frac{2}{\sqrt 3}\simeq 0.2075...$, the upper bound of Kabatianski-Levenstein [KL] being $0.4041...\,.$ However, for lattice packings this procedure does not work, and as far as I know, no reasonable lower bound for the maximum lattice kissing number $\tau_n^l$ is known for $n\lra \infty.$ For instance, the Barnes-Wall lattices $BW_n$ with $\,n=2^m$ give the quasipolynomial bound $\tau_n^l\ge n^{c\log n}$, i.e., $\log\tau_n^l\ge {c\log^2 n},$ which can hardly be qualified as ``reasonable''. The main purpose of the present paper is to give an exponential lower bound for $\tau_n^l$. This is achieved by applying Constructions D and E from [BS],[BCS] to codes from [ABV] having exponentially many light vectors. In order to apply Constructions D and E we need specific good curves (the curves in the Garcia-Stichtenoth towers [GS],[GS1] do not perfectly match our construction) and some Drinfeld modular curves [Ge1],[E] perfectly suit our purposes. \smallskip Our main result is \begin{theorem}\label{t3}We have \beq\label{1.30}\frac{\log(\tau^l_{N})}{N}\ge\frac{1}{20}\left(1-\frac{2\log 33}{31}\right)- \frac{2+2\log N }{N} \eeq for $N=5\cdot 2^{10n +2}$ and any $n\ge 2$; also, \beq\label{1.3}\frac{\log(\tau^l_{N})}{N}\ge \frac{1}{24}\left(1-\frac{2\log 65}{63}\right)- \frac{2+2\log N}{N} \eeq for $N=3\cdot 2^{12n +3}$ and any $n\ge 2$; \beq\label{1.41}\frac{\log(\tau^l_{N})}{N}\ge\frac{1}{28}\left(1-\frac{2\log 129}{127}\right)-\frac{2+2\log N}{N} \eeq for $N= 7\cdot 2^{14n+2} $ and any $n\ge 2$, where $$\frac{1}{20}\left(1-\frac{2\log 33}{31}\right)\simeq 0.033727...,\;\frac{1}{24}\left(1-\frac{2\log 65}{63}\right)\simeq 0.033700...,\;$$ $$ \frac{1}{28}\left(1-\frac{2\log 129}{127}\right)\simeq 0.0317709...\,.$$ \end{theorem} All our logarithms are binary. \begin{corollary}\label{c1}We have \beq\label{c}\frac{\log(\tau^l_n)}{n}\ge c_0\,\eeq for some $c_0>0$ and any $n\ge 1$. \end{corollary} The exact value of $c_0$ is not clear, but $c_0=0.02$ is probably sufficient. \smallskip It is possible to ameliorate the constants slightly if we do not insist on the effectivity of the results: \begin{theorem}\label{t1}We have \beq\label{1.10}\frac{\log(\tau^l_{N})}{N}\ge\frac{1}{20}\left(\frac{21}{31}-\log\frac{1024}{1023}\right)- o(1)\simeq 0.033800... -o(1) \eeq for $N=5\cdot 2^{10n +2}$, \beq\label{1.1}\frac{\log(\tau^l_{N})}{N}\ge \frac{1}{24}\left(\frac{17}{21}-\log\frac{4096}{4095}\right)- o(1) \simeq 0.033715...- o(1)\,\eeq for $N= 3\cdot 2^{12n+3},$ \beq\label{1.21}\frac{\log(\tau^l_{N})}{N}\ge \frac{1}{28}\left(\frac{113}{127}-\log\frac{16384}{16383}\right)- o(1)\simeq 0.031774...- o(1)\,\eeq for $N= 7\cdot 2^{14n+2}.$ \end{theorem} In fact, the implied functions in the $o(1)$ terms can be made explicit, but they decrease slowly and their precise calculation is not justified. \smallskip \begin{corollary}\label{c3}We have $$\lim\sup_{n\lra \infty}\frac{\log(\tau^l_n)}{n}\ge \frac{1}{20}\left(\frac{21}{31}-\log\frac{1024}{1023}\right)\,.
$$ \end{corollary}\smallskip For the lower limit we can prove \begin{theorem}\label{t22}Denote $A=\log\,\frac{4096}{4095}$. We have then \beq\label{1.5}\lim\inf_{n\lra \infty}\frac{\log(\tau^l_n)}{n}\ge \frac{(17-21A)\de_0}{504} \simeq 0.021937...\;, \eeq where $\de_0\simeq 0.6506627...$ is the unique root of the equation $$ {21H\left(\de\right)} ={2\de}(4+21A +(17-21A)\de) $$ in the interval $(0.5, 1).$ \end{theorem} One can think that $c_0$ in \eqref{c} can be chosen rather close to that value.\smallskip The rest of the paper is organized as follows: in Section 2 we recall some basic definitions and results on lattices and error-correcting codes. Section 3 is devoted to Constructions D and E from [BS],[BCS], while Section 4 recalls and slightly modifies the constructions from [ABV]. We describe some known good curve families in Section 5 and prove our results in Section 6. \smallskip {\em Acknowledgement.} I thank G. Kabatianski for drawing my attention to the problem of asymptotics for lattice kissing numbers. \section{Preliminaries} In this section we recall some basic definitions and results on lattices and linear error-correcting codes. \subsection{Lattice packings}\smallskip A sphere packing is a configuration of nonintersecting equal open spheres in $\R^ N$. Let $d$ be the diameter of the spheres; then the distance between any two sphere centers is at least $d.$ Thus a packing is a set of points $P$ in $\R^ N$ such that the minimum distance between any two of them is at least $d.$ If $P$ is an additive subgroup of $\R^ N$ it is called a lattice or a lattice packing; below we are concerned mainly with such packings. For any packing $P$ its density $\De(P)$ is defined as the fraction of space covered by spheres (which can be defined as the upper limit of this fraction inside a cube whose size tends to infinity). If $L$ is a lattice then a choice of basis gives an embedding $e_L:\Z^n \lra \R^n$; its matrix is called a generating matrix of the lattice. For the diameter of spheres one can take $d(L)=\min\{ |v|: v\in L, v\ne 0\}$. For any packing $P\ss \R^n$ the ratio $\nu (P)= \De(P)/V_n$ is called its center density, where $V_n= \frac{\pi^{n/2}}{\Gamma(n/2 + 1)}$ is the volume of the unit sphere. The ratio $\lambda(P) =-{\log \De(P)}/{n}$ is called the density exponent of $P$; thus,\linebreak $\De(P)=2^{-\lambda(P)n}.$ The Minkowski bound, which is a corollary of the Minkowski-Hlawka theorem, says that some lattice families $\{L_n\ss \R^n\}$ satisfy $\lambda(L_n)\le 1;$ however, no construction is known for such families. On the other hand, the Kabatiansky-Levenstein bound says that $\lambda(P_n)\ge 0.599-o(1)$ for any family of packings $\{P_n\ss \R^n\}$. Families of packings with $\lim\inf_{n\lra \infty }\lambda(P_n) <\infty$ are called {\em asymptotically good.} It is not easy to construct such families, especially for lattice packings.
The best known results in that direction use algebraic geometry codes and similar constructions (see [LT],[RT]).\smallskip Another important parameter of a packing $P\ss \R^n$ is its kissing number $$\tau(P)=\max_{x\in P}|\{y\in P:|x-y|=d\}| .$$ A random choice argument gives (see [Ch],[Sh]) the existence of (non-lattice) packings $P_n\ss \R^n$ with $$\lim\inf_{n\lra \infty }\frac{\log\tau(P_{n})}{n}\ge \log\frac{2}{\sqrt 3}\simeq 0.2075...,$$ whereas the Kabatiansky-Levenstein bound [KL] for $\tau$ says that $$\lim\sup_{n\lra \infty }\frac{\log \tau(P_{n})}{n}\le 0.4041...\,.$$ We will say that a family of packings $P_n\ss \R^n$ is {\em $\tau$-asymptotically good} whenever $$\lim\sup_{n\lra \infty }\frac{\log \tau(P_{n})}{n}>0.$$ Since the random choice argument does not work for lattices, it is not clear whether $\tau$-asymptotically good lattice families exist, and our main purpose is to confirm their existence. \subsection{Error-correcting codes}\smallskip Let us recall several facts about (linear error-correcting) codes; for additional information we refer to [MWS]; see also [TVN, Ch. 1]. We fix a finite field $\F_q.$ A $q$-ary linear code is simply a subspace $C\subseteq \F_q^n,$ where $n$ is called the length of $C,$ and the ratio $R= k/n$ for $\,k= \dim\, C$ is called the rate of $C$. The minimum distance $d=d(C)$ is the minimum Hamming weight $wt(c)$, i.e., the number of nonzero coordinates, of $c\in C\setminus\{0\}$; the ratio $\de= d/n$ is called the relative minimum distance. We say in this case that $C$ is an $[n, k, d]_q$-code. A choice of basis in $C$ defines a linear map $\phi_C: \F_q^k\lra\F_q^n$ and its matrix is called a generating matrix of $C$. A set of codes $C_1\ss \ldots \ss C_m\subseteq \F_q^n $ is called a nested family. For $C\subseteq \F_q^n$ its dual code $ C^\perp$ is the orthogonal complement of $C$: $$ C^\perp=\{v\in \F_q^n: v\cdot c =0,\,\forall c\in C \}, $$ where $ v\cdot c =v_1c_1+\ldots+v_nc_n$; $C^\perp$ is an $[n, n-k, d^\perp]_q$-code for some $d^\perp$. \smallskip A random choice argument shows that asymptotically for $n\lra\infty$ and fixed $\de$ the rate $R$ of the best linear codes satisfies the Gilbert-Varshamov bound $$ R=R_q(\de)\ge 1-H_q(\de)=1-\frac{\de\log (q-1)+H(\de)}{\log q}$$ where $H(\de)=-\de\log \de-(1-\de)\log (1-\de)$ is the binary entropy function. \subsection{Algebraic geometry codes}\smallskip All our curves here and below are smooth projective absolutely irreducible over a finite field $\F_q$; let $X$ be such a curve of genus $g$, let $D$ be an $\F_q$-rational divisor of degree $a \ge g-1$, and let (see, e.g., [TVN], Sec.2.2) $$L(D) =\{ f \in \F_q(X):(f) + D \ge 0\}$$ be the associated function space. For a set ${\mathcal P} =\{P_1,\ldots, P_n\}$ of $\F_q$-rational points on $X$ with ${\mathcal P}\bigcap {\rm Supp} \,D=\emptyset$ the evaluation map $$ev_{\mathcal P}:L(D)\lra\F^n_q,\; ev_{\mathcal P}(f)=(f(P_1),\ldots,f(P_n))$$ is well defined. Whenever $a < n$ this map is injective and its image is a linear \linebreak $q$-ary code $C(X, D, {\mathcal P})$ of length $n,$ dimension $k \ge a-g + 1$ (by the Riemann-Roch theorem), and distance $d \ge n-a$ (since the number of zeros of a function cannot exceed the number of poles).
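As a toy illustration of this degree argument, the following Python fragment checks it in the simplest, genus-zero (Reed-Solomon) case discussed next, over a small prime field for simplicity; it is a sketch only, and the names in it are ours rather than part of any construction used below.
\begin{verbatim}
import itertools

p = 7                    # prime field F_p (the codes below are over q = 2^{2s})
a = 3                    # degree of the divisor D = a * P_0
points = list(range(p))  # the F_p-rational points of the affine line

def evaluate(coeffs):    # codeword of a polynomial of degree <= a
    return [sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
            for x in points]

# A nonzero polynomial of degree <= a has at most a zeros, so every nonzero
# codeword has weight >= n - a; here n = p = 7, k = a + 1 = 4, d = n - a = 4.
weights = [sum(v != 0 for v in evaluate(c))
           for c in itertools.product(range(p), repeat=a + 1) if any(c)]
assert min(weights) == p - a
\end{verbatim}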
If $D =aP_0$ for an $\F_q$-rational point $P_0\ne P_i,$ $\,i=1,..., n$, we get a nested family of codes $C_a$ for $a =n -1, n -2,...,g-1.$ In the particular case $g=0,\,a\ge 0,\,P_0=\infty$ (i.e., $X$ is the projective line) we get nested Reed-Solomon codes with parameters $ n= q, k=a+1, d=q-a $.\smallskip Algebraic geometry codes (AG-codes below) have good parameters when the ratio of the number of $\F_q$-rational points on the curve to its genus is high enough. The Drinfeld-Vl\u adu\c t bound says that asymptotically this ratio cannot exceed $\sqrt q -1.$ For $q =p^{2h}$ there exist many families of curves over $\F_q$ attaining this bound (see, e.g., Section 5 below), which implies the lower bound $$R_q(\de)\ge 1-\de-\frac{1}{\sqrt q -1}$$ for the best asymptotic rate of $\,\F_q$-linear codes (see, e.g., [TVN] Section 4.5). If $q \ge 49$ it improves (on some interval) the Gilbert-Varshamov bound.\smallskip One can dispense with the above condition ${\mathcal P}\bigcap {\rm Supp} \,D=\emptyset$ without spoiling the parameters of the codes $C(X, D, {\mathcal P})$; for instance, if $P_i\in {\rm Supp} D$ we can replace the term $f(P_i)$ in $ev_{\mathcal P}$ by $f_i(P_i)$ with $f_i= t_i^sf$, where $t_i$ is some fixed local parameter at $P_i$ and $s $ is a suitable integer (see [TVN], Section 4.1, pp. 194-197, where the $H$- and $P$-constructions are discussed). \pagebreak \section{Constructions D and E} We now recall two constructions from [BS],[BCS] (see also Chapter 8 in [CS]), which permit one to construct good lattices from good codes.\smallskip \subsection{Construction D} Let $C_0=\F^n_2\supset C_1\supset\ldots\supset C_a$, $a\ge1$ be a finite decreasing family of linear binary codes with parameters $[n,k_i, d_i]$ for $C_i, i=0,\ldots,a$, where $d_i=4^i$ (we will need only the case $n=2^{2a+1}$ and thus $\de_a=d_a/n=1/2$). We can and will consider $C_0$ as a subset of $\R^n$. We choose a basis $c_1,\ldots, c_n$ for $\F^n_2$ such that $c_1,\ldots, c_{k_i}$ span $C_i$ for $i = 0, \ldots , a$ and define $L$ as the lattice in $\R^n$ generated by $(2\Z)^n$ and the vectors $\{c_j\cdot 2^{1-i}\}$ for $i = 1, \ldots ,a,\,k_{i+1}+1\le j\le k_i$. Then we have ([BS], Theorem 1): \begin{proposition} The lattice $L$ has minimum distance $d_L=2$ and its center density satisfies $$\de\ge 2^{K-n} $$ for $K=\sum_{i=1}^a k_i$. \end{proposition} Note that we will need only the statement $d_L=2$, which is easy in view of the minimum distances $d_i$ of $C_i$ for $ i=0,\ldots,a$.\medskip \subsection{Construction E} Here we need more elaborate techniques. First we define $T$-lattices as follows ([BCS],[BS], cf.
[LT]): a lattice $\Lambda\ss \R^m$ is a {\em $T$-lattice} if it satisfies the following four conditions.\smallskip $(i)$ The minimal vectors of $\Lambda$ span $\Lambda$.\smallskip $(ii)$ There is a linear map $T$ from $\R^m$ to $\R^m$ that sends all the minimal vectors of $\Lambda$ into elements which have norm $R^2$ and are at a distance $R$ from $\Lambda$ for some $R>0$.\smallskip $(iii)$ There is a positive integer $\nu$ dividing $m$ and an element $A \in Aut (\Lambda)$ such that \hskip 0.5 cm $(iii)_1$ $T^\nu= \frac12 A$ and\smallskip \hskip 0.5 cm $(iii)_2$ $\frac12(A^2 - A) = \sum_{i=0}^{\nu-1} a_iT^i, a_i\in \Z.$\medskip We set $b = \frac{m}{\nu}$ and $q=2^b.$\smallskip $(iv)$ $\Lambda \subseteq T \Lambda$ and\smallskip \hskip 0.5 cm $(iv)_1$ $[T\Lambda : \Lambda] = q.$\smallskip It follows from $(iii)_1$ that $T = tP$ where $t= 2^{-\frac{1}{\nu}}$ and $P$ is an orthogonal transformation satisfying $P^{\nu} = A.$ If $M$ is the minimal square norm of $\Lambda$, we have $t = R/\sqrt M,$ and from $(iv)_1$ we get\smallskip $(v)$ $t^m = |\det\, T| = 2^{-b} = {q^{-1}}.$ Note that the square lattice $\Z^2$ is a $T$-lattice with $T=\frac{1}{\sqrt 2}R_{\pi/4}$ for the rotation $R_{\pi/4}$ through the angle $\pi/4=45^\circ$. \smallskip Construction E produces from a $T$-lattice together with a nested family of linear codes $C_0=\F_{2^b}^n \supset C_1\supset\ldots\supset C_a$ over $\F_{2^b}$ another $T$-lattice $L\ss \R^{mn}$ in the following way. We suppose that the parameters of the code $C_i, 0\le i \le a$ are $[n,k_i, d_i]$ and we choose a basis $c_1,\ldots, c_n$ for $\F^n_{2^b}$ such that $c_1,\ldots, c_{k_i}$ span $C_i$ for $i = 0, \ldots , a.$ Define then the lattices $\Lambda_i$ as follows. Let $v_1, \ldots , v_m$ be minimal vectors of $\Lambda$ that span $\Lambda.$ Then $Tv_1, \ldots , Tv_m$ span $T\Lambda$ and $T\Lambda/\Lambda$ is an elementary abelian group of order $q $, so that there are $b$ vectors $u_1^{(1)} = Tv_{r_1},\ldots,u_b^{(1)} = Tv_{r_b},$ for appropriate $r_1,\ldots , r_b,$ such that $T\Lambda/\Lambda$ is isomorphic to the $\F_2$-span of $u_1^{(1)},\ldots,u_b^{(1)}.$ Let $$\Lambda_i=T^i\Lambda,\;u_j^{(i)} =T^iv_{r_j},\, j=1,\ldots,b \;\;{\hbox{\rm for all}}\;\; i \in \Z. $$ The lattice $\Lambda_i$ has minimal square norm $t^{2i}M,$ and $\mathrm{dist} (u_j^{(i)}, \Lambda_{i-1})\ge t^{i-1}R$. Define now the maps $ \si_i:\F_q\lra\Lambda_{i}$ by $$\si_i\left(\sum_{j=1}^b\al_j\om_j\right)= \sum_{j=1}^b\al_ju_j^{(i)}$$ for some generators $\om_1,\ldots,\om_b$ for $\F_q$ over $\F_2$ and any $\al_j\in\F_2,\,j=1,\ldots,b$; those maps define the maps $\si_i:\F_q^n\lra\R^{mn}.$ \smallskip {\em The construction.} The lattice $L\ss \R^{mn}$ consists of all vectors of the form $$ x=l+\sum_{i=1}^a\sum_{j=1}^{bk_i}\al_j^{(i)}\si_i(c_j)$$ for $l\in \Lambda^n, \al_j^{(i)}\in \F_2.$ Note that $L$ is a $T$-lattice, since it inherits $T$ from $\Lambda$; the parameter $t$ remains the same, while $b$ becomes $nb$, see also Proposition \ref{ce} below. The main property of this Construction E, which coincides with Construction D for $ \Lambda=2\Z$, is ([BS, Theorem 3]): \begin{proposition} \label{ce} The lattice $L$ is fixed under the transformation $\hat A$ which applies $A$ simultaneously to each component and its minimum distance equals $$\sqrt{\bar M}\;\;{\hbox{\rm for }}\;\;\bar M = \min_{i=1,\ldots,a}\{M, {d_iR^{2i}}{M^{1-i}}\}.
$$ \end{proposition} Theorem 3 of [BS] gives also the density of $L$, but we do not need it.\smallskip Applying Construction E to $\Z^2$ with $a=1,M=4, R=\sqrt 2$ and the single parity check $[2,1,2]_q$ code $C_1$, we get successively the $T$-lattices $D_4,E_8, \Lambda_{16},\bar\Lambda_{32}$ in the corresponding dimensions; one can take this description as a definition for those lattices. Moreover, applying Construction E to $D_4$ and the single parity check $[m,m-1,2]_4$ code for any $m\ge 2$ we get a $T$-lattice $\tilde \Lambda_{4m}$ in $4m$ dimensions. The Leech lattice $\Lambda_{24}$ is also a $T$-lattice (see [BCS], p.177); note, however, that $\Lambda_{24}\ne \tilde \Lambda_{24}$. \section{Codes with many light vectors} Recall the following principal result of [ABV]: \begin{theorem}\label{t2} Let $q=2^{2s}, s=3, 4, ...$ be fixed. Then for any $\de_1 < \de <\de_2$ there exists a sequence of binary linear codes $\{C_n\}$ of length $n=qN, N \lra \infty$ and distance $d_n=n\de/2$ such that \beq\label{2.1}\frac{\log A_{d_n}}{n} \ge \frac{E_s(\de)}{2^{2s}}-o(1).\eeq \end{theorem} Here $A_{d_n}$ is the number of minimum weight vectors in $C_n$, and the function \beq \label{2.2}E_s(\de) =H(\de)-\frac{2s}{2^s-1}-\log\frac{2^{2s}}{2^{2s}-1}\eeq has two zeros $0<\de_1 <\de_2<1-2^{-2s}$ and is positive for $\de_1 < \de <\de_2$. In particular, for $s=3,q=64,\,\de=1/2$ we have $$E_3(0.5)= \frac17-\log\frac{64}{63} \simeq 0.1201...\,, \;\frac{E_3(0.5)}{64}\simeq 0.001877...\,.$$ Theorem \ref{t2} is a simple consequence of the following result concerning AG codes. Consider a curve $X$ of genus $g$ over $\F_q$, where $q =2^{2s}, s\ge 3$. Suppose that $N \ge (2^s-1)g$ where $N=|X(\F_q)|$ is the number of $\F_q$-rational points of $X$ (e.g., $X$ is a curve from Subsections 5.1, 5.2 below). Let $D$ be an $\F_q$-rational positive divisor of degree $a>0,$ and let $ C=C(X, D, X(\F_q))$ be the corresponding AG code of length $N$, dimension $k(C) \ge a-g+1,$ and distance $d(C) \ge N-a$. \begin{proposition}\label{p2} Let $\de=(N-a)/N$ satisfy the inequality $\de_1 < \de< \de_2 $. Then there exists an $\F_q$-rational positive divisor $D$ with $ \deg(D)=a$ such that the corresponding AG code $C $ has the minimum distance $d=N-a=\de N$ and for the number $A_d$ of vectors of weight $d$ we have $$\log A_d \ge NE_s(\de)-o(N).$$ \end{proposition} Recall that this is proved using an averaging procedure applied to the set of linear equivalence classes of $\F_q$-rational positive divisors $D$ with $\deg(D)=a$ which form the set $J_X(\F_q)$ of $\F_q$-rational points on the Jacobian $J_X$ of $X$. This result is based on the estimate \beq\label{jac}\frac{\log|J_X(\F_q)|}{g}=\log q+(\sqrt{q} -1)\log\frac{q}{q-1}+o(1).\eeq In order to deduce Theorem \ref{t2} from Proposition \ref{p2} we take the binary simplex code, that is, the linear code dual to the $[n=q-1,n-2s,3]$ Hamming code and augment each vector in it with a zero coordinate. This gives a binary linear $[q,2s,q/2]$ code $C_0$ in which every nonzero vector has Hamming weight $q/2.$ Using then a linear bijection $\phi: \F_q\lra C_0$ and replacing every coordinate by its image, we obtain from $C(D)$ a linear binary code $C_n$ in Theorem \ref{t2}.\smallskip {\em Remark 4.1.} Proposition \ref{p2} is valid for any even power of a prime $q\ge 49$, but we do not use this below.
Note also that its proof guarantees in general only the existence of {\em one} divisor class $D$ satisfying the conclusion (and not of exponentially many such divisor classes); however, when the bound is strictly bigger than $k(C)$, we get exponentially many such divisor classes in $J_X(\F_q)$.\smallskip\smallskip {\bf Effective version.} Note that at the expense of a small decline in parameters the above estimate can be made completely explicit, namely, we have \begin{theorem}\label{p22}Let $q=p^h$ be a prime power, let $X$ be a curve of genus $g$ over $\F_q$, let $S\subseteq X(\F_q), |S|=N,$ and let $a\in \N$ with $1 \le a\le N-1$. Then there exists an $\F_q$-rational positive divisor $D\ge 0, \deg(D)=a$ such that the corresponding AG code $C=C(X,D,S)$ has the minimum distance $d=N-a=\de N$ and we have $$A_d\ge \frac{{N\choose{a}}}{(\sqrt q +1)^{2g}}\,.$$ \end{theorem} The proof simply replaces the asymptotic inequality \eqref{jac} by a simpler effective inequality $$ |J_X(\F_q)|\le (\sqrt q +1)^{2g}. $$ Applying Stirling's formula, we get \begin{corollary}\label{c2}We have $$\frac{\log A_d}{N}\ge H(\de)-\frac{2g}{N}\log (\sqrt q +1)-\frac{\log(2\pi a d)}{2N}-\frac{1}{12ad}\,.$$ In particular, if $ N=2a=2d\ge (\sqrt q -1)g$, then $$\frac{\log A_d}{N}>1 -\frac{2\log (\sqrt q +1)}{\sqrt q -1}-\frac{2+2\log N }{N}\,.$$ \end{corollary} Note that Theorem \ref{p22} and Corollary \ref{c2} are applicable, e.g., for $g=0$, where we get an estimate for the Reed-Solomon codes. \section{Some good families of curves} We recall now some constructions of curves over $\F_q$ with many rational points. Let $q$ be a prime power (we will be interested only in the case $q=p^{2h}$), and let $$N_q(g) := \max\{|C(\F_q)| : \;\;C\;\; \hbox{{\rm is a curve of genus}}\;\; g\;\; {\hbox{\rm over}} \;\;\F_q\}.$$ Define then $$A(q) := \lim\sup_{g\lra \infty}\frac{N_q(g)}{g}\le \sqrt q-1,\; A^-(q) :=\lim\inf_{g\lra \infty}\frac{N_q(g)}{g}$$ as the corresponding upper and lower asymptotic quantities. We begin with some families attaining the bound for $A(q)$ (the Drinfeld-Vl\u adu\c t bound). \subsection{Garcia-Stichtenoth tower} The tower $ X_n,n=1,2,\ldots$ from [GS1] is defined recursively by the equations \beq\label{gs} x^q_{i+1}+x_{i+1}=\frac{x^q_i}{x^{q-1}_i +1},\;\; {\rm for} \;\;i=1,\ldots, n-1.\eeq Therefore, the function field $T_n :=\F_{q^2}(X_n)$ of the curve $X_n$ is given by $T_n =\F_{q^2}(x_1, \ldots, x_n)$ where $x_i, i=1,\ldots n$ are related by \eqref{gs}. The main result of [GS1] gives the parameters of that tower. \begin{theorem}\label{gs1} We have $ (i)$ for the genus $g_n=g(X_n):$ $$ g_{n}= (q^{m}-1)^2 \;\;\hbox{\rm for } \;\; n=2m ,$$ $$ g_{n} = {(q^{m}-1)(q^{m-1}-1)} \;\;\hbox{\rm for } \;\; n=2m-1,$$ and the number $N(n)=|X_n(\F_{q^2})|$ of $\F_{q^2}$-rational points of $X_n$ satisfies $$N(n)\ge (q-1) q^n .$$ \end{theorem} Let us then describe an optimal tower of Drinfeld curves closely related to the tower $X_n$. \subsection{Drinfeld modular curves} The general reference for Drinfeld modular curves is [Ge], but we use a particular case from [E] (cf. [Ge1]).
\smallskip {\bf A tower of Drinfeld curves.} For any field $L\supseteq \F_q$, we denote by $L\{\tau\}$ the non-commutative $L$-algebra generated by $ \tau $ and satisfying the relation $ \tau a = a^q\tau $ for all $a \in L.$ Let $A = \F_q[T ]$; then a rank 2 Drinfeld module $\phi$ over $A$ is an $\F_q$-algebra homomorphism from $A$ to $L\{\tau\}$ such that \beq\label{phi}\phi(T) = l_0 + l_1\tau + l_2\tau^2 = l_0 + g\tau + \De \tau^2\in L\{\tau\}\eeq with nonzero {\em discriminant} $\De=\De(\phi)$. The map $\gamma : A\,\lra \, L$ taking any $a\in A$ to the constant term of $a$ is a ring homomorphism; thus, $\gamma(T)=l_0$ in \eqref{phi}. If $\phi,\psi$ are two Drinfeld modules, an isogeny from $\phi$ to $\psi$ is an element $u \in \bar L\{\tau\}$ such that $$u \circ \phi_a = \psi_a \circ u $$ for all $a \in A$ and its kernel is the following $A$-submodule of $\bar L:$ $${\rm ker}(u) := \{x\in \bar L : u(x) = 0\}, $$ which is of finite dimension over $\F_q$ unless $u = 0.$ In particular, if $u = \phi_{a}$ then $u$ is an isogeny from $\phi$ to itself, called multiplication by $a$, and its kernel is isomorphic with $(A/aA)^2$ as an $A$-module for $\gamma(a)\ne 0$; elements of ${\rm ker}(a )$ are called $a$-torsion points of $\phi$. If $\gamma$ is not injective then ${\rm ker}\,\gamma = Ab$ for some irreducible $b\in A$; $\phi$ is then said to be supersingular if ${\rm ker}(b) = \{0\}$ and for $ \deg(b) = 1,$ we have $\phi _{b} = g\tau + \De \tau^2$ and $\phi _{b} $ is supersingular if and only if $g = 0.$ An isomorphism between Drinfeld modules is simply an element $u \in \bar L^*$, and it multiplies each coefficient $l_i$ in \eqref{phi} by $u^{1-q^i}$. Let $$J(\phi) =\frac{g^{q+1}}{\De},$$ then $\phi$ and $\psi$ with the same $\gamma$ are isomorphic over $\bar L$ if and only if $J(\phi)=J(\psi)$. Thus, we can refer to the $J$-line as the Drinfeld modular curve $X(1)$ for a given $\gamma$. Moreover, for $N \in A$ with $\gamma(N) \ne 0,$ we have Drinfeld modular curves $X_0(N)$ parametrizing Drinfeld modules with a choice of torsion subgroup $G \simeq A/NA$ (and fixed $\gamma$). If $\gamma(T ) \in \F_q$, we may regard the curves $X(1)$ and $X_0(N)$ as the “reduction $\mod\, (T-\gamma(T ))$” of the corresponding modular curves for $\gamma(T ) = T .$ Below we suppose that $\gamma(T)=1$ and we say that a point on $X_0(N)$ is supersingular if the corresponding Drinfeld module is supersingular; such points are $ \F_{q^2}$-rational. 
Let us consider the case $N=T^{k+1}$; for the curve $\tilde X_k:=X_0(T^{k+1})$ of genus $\tilde g_{ k} =g(\tilde X_k)$ we have [Ge1, Ex.10.2]: $$ \tilde g_{ k}=\frac{(q^{m}-1)^2}{q-1} \;\;\hbox{\rm for } \;\; k=2m,$$ $$ \tilde g_{ k} =\frac{(q^{m+1}-1)(q^{m}-1)}{q-1} \;\;\hbox{\rm for } \;\; k=2m+1,$$ $$\tilde N(k)=\big|\tilde X_k( \F_{q^2})\big|\ge q^{k} +4\;\;\hbox{\rm for } \;\; k\ge 2;$$ thus, $$\tilde N(k)\ge (q -1)\tilde g_{k} \;\;\hbox{\rm for } \;\; k\ge 2$$ and the number of supersingular points on $\tilde X_k$ equals $q^{k}$.\smallskip Elkies proves in [E] that the function field $\tilde K_k=\F_q(\tilde X_k),\, k\ge 2$ is given by $$\tilde K_k=\F_q(x_1, \ldots, x_{k}) \;\;\hbox{\rm with } \;\;x_{j+1}(x_{j+1}+1)^{q-1}(x_j+1)^{q-1}=x_j^q, \;j=1,\ldots,k-1,$$ and the set of $q^{k}$ supersingular points of $\tilde X_k( \F_{q^2})$ is determined by the conditions $\Phi_{q+1}(x_j)=0$ for $j=1,\ldots,k$, where $\Phi_{q+1}(t)=(t^{q+1}-1)/(t-1).$ Note also that the Garcia-Stichtenoth curve $X_n$ is a degree $q+1$ cyclic covering of $\tilde X_n,$ but we do not need this fact. \smallskip {\bf More general Drinfeld curves.} We will also need more general Drinfeld modular curves which do not form a tower and as yet have no explicit equations. However, the family of those curves is optimal and their genera are explicitly known [Ge1]. Let $M$ be a monic element of $A$ with $M(1)\ne 0,\deg M\ge 3$ and let $M=\prod_{i=1}^s P_i^{r_i}$ be its prime factorization; thus each $P_i\in A$ is a monic irreducible polynomial of degree $l_i$ and $r_i\ge1$ for $ 1\le i\le s .$ We put $q_i :=q^{l_i}$ and define the arithmetic functions $$\epsilon =\epsilon (M)=\prod_{i=1}^s q_i^{r_i-1}(q_i+1)\,,\;\kappa=\kappa(M)=\prod_{i=1}^s \left(q_i^{\left[\frac{r_i}{2} \right]} +q_i^{\left[\frac{r_i-1}{2} \right ]} \right)\,.$$ Consider the curve $\tilde X_0(M)$ over $\F_q$ which is the Drinfeld modular curve $X_0(M)$ with $\gamma(T)=1$. We have then [Ge, Sections 8-10] \begin{proposition}\label{p3} Suppose that at least one degree $l_i$ is odd. Then\smallskip \hskip 1.6 cm $(i)$ The curve $\tilde X_0(M)$ is smooth of genus $g_0(M) $ given by $$\quad\quad\quad g_0(M) =1+\frac{\epsilon -(q+1)\kappa -2^{s-1}(q+1)(q-2)}{q^2-1}\le \frac{\epsilon }{q^2-1}; $$ $$(ii)\quad\quad\quad \left|\tilde X_0(M)\big(\F_{q^2}\big)\right|\ge \frac{\epsilon }{q+1}\ge (q-1)g_0(M).$$ \end{proposition} Therefore, for any sequence $M_i$ with $\deg(M_i)\lra \infty$ the family $\tilde X_0(M_i)$ is asymptotically optimal over $\F_{q^2}$. \subsection{Curves of every genus with many points} Note that the genera of curves in Subsections 5.1-5.2 are of a special form and thus they give no estimate for the quantity $ A^-(q)$ measuring the maximal number of points on curves of every genus. However, in [EHKPWZ] it was shown that $ A^-(q)\ge c \log q$ for any prime power $q$ and a positive constant $c$. Moreover, for an even square $q$ the result gets much better: \begin{theorem}\label{evg} For $q=2^{2h}$ we have $$ A^-(q)\ge \frac{ \sqrt q -1}{2 + \frac{1}{\log q }}=\frac{ 2^{h} -1}{2 + \frac{1 }{2h }}\,. $$ \end{theorem} Thus $ A^-(q)$ is, roughly speaking, only a factor of two smaller than $ A (q);$ a similar result holds also for the odd squares. \section{Proofs} We begin with an easy construction which gives a small positive constant lower bound for the ratio $ {\log(\tau^l_n)}/{n}$, thus assuring the existence of $\tau$-asymptotically good lattice families.
Indeed, let us take $N=2^{K+1},$ $ d=a= N/2= 2^{K}$ for some $K\ge 2$, and let us apply Theorem \ref{t2} with $s=3, q=64$ and the Drinfeld curves $ \tilde X_k$ over $\F_8$ having at least $ 8^{k}=2^{K+1}, K=3k-1,$ points rational over the field $\F_{64}$. We get then a binary $[N, k, d]$ code $C_{K}$ with $$\log A_d\ge \frac{1}{64}E_3(0.5)N-o(N)= \frac{1}{64}\left(\frac17-\log\frac{64}{63} \right)N-o(N). $$ We can then construct a decreasing family $C_0=\F^N_2\supset C_1\supset\ldots\supset C_{K}$ defining inductively $C_{K-i}$ for $i=1,\ldots, K-1$ as generated by $C_{K-i+1}$ and $c_i$ for some binary vector $c_i\in \F_2^N$ with $wt(c_i)=2^{K-i}.$ Applying then Construction D we get a lattice $L_N\ss\R^N$ with $d_L=2$, and each minimum weight vector of $ C_{K}$ produces a minimum norm vector in $L$. Therefore we have $$\frac{\log \tau(L_N)}{N}\ge\frac{\log A_d}{N}\ge\frac{1}{64}\left(\frac17-\log\frac{64}{63} \right)-o(1)> 0.00187 -o(1).$$ This formula implies Corollary \ref{c1} albeit with a very small $ c_0$.\smallskip {\em Remark 6.1.} We do not care here about the density of $L$, but the constructed family is still asymptotically good albeit very poor for its density; however, it is easy to modify the construction to get a better (yet rather poor) family while preserving the ratio $\frac{\log \tau(L_N)}{N}$.\smallskip {\em Remark 6.2.} If we replace in the above construction the Drinfeld curve $ \tilde X_k$ by the Garcia-Stichtenoth curve $ X_k$ over $\F_{64}$ which has $63\cdot 64^k+O(1)$ points rational over $\F_{64}$ we can use $\de=32/63$, since the minimum distance should be a power of 2. This leads to the bound $ \frac{1}{64}\left( H(\frac{32}{63})-\frac67-\log\frac{64}{63} \right)\simeq 0.001874...$ instead of $\frac{1}{64}\left(\frac17-\log\frac{64}{63} \right)\simeq 0.001877...,$ and in that sense the Garcia-Stichtenoth tower is not optimal for our construction. The same remark applies to the constructions below, but the deterioration of the parameters is always very small. It is then clear how to proceed: we can replace Construction D by Construction E applied to suitable $T$-lattices and codes from Theorem \ref{p22}, which we complete in an appropriate manner. The best results are obtained using the $T$-lattices $\tilde\Lambda_{20}$, $\Lambda_{24}$ (or $\tilde\Lambda_{24}$), and $\tilde\Lambda_{28}$ which give the lattice families in Theorem \ref{t2}. More precisely, in the case of $\Lambda_{24}$ we take $q=2^{12}=4096,$ the curve $ \tilde X_k$ over $\F_{64}$ having $N= 2^{12k}= 4^{6k}$ points rational over $\F_{ 2^{12}},$ put $d=a=N/2$ and apply Construction E to $\Lambda_{24}$ and the family $C_0=\F^N_2\supset C_1\supset\ldots\supset C_{6k}$ of $[N,k_i, 4^i]$-codes over $\F_ { 2^{12}}$ for $ i=0, \ldots, 6k,$ where $d_i=4^i, d_{6k}=d=N/2$ and $C_{6k-i}$ is defined inductively for $i=1,\ldots, 6k-1$ as generated by $C_{6k-i+1}$ and $c_i$ for some vector $c_i\in \F_{ 4096}^N $ with $wt(c_i)=4^{6k-i}.$ Exactly as above, each minimum weight vector of $C_{6k}$ gives rise to a minimum norm vector of the resulting lattice $L_{24N}$ and applying Corollary \ref{c2} we get \eqref{1.3}.
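Before passing to the lattices $\tilde\Lambda_{4m}$, we record a quick numerical check of the constants obtained so far, namely the bound $0.00187$ of the easy construction and its variant in Remark 6.2. The short Python snippet below is purely an illustrative verification of the arithmetic (all logarithms are taken to base 2); it is not part of the construction itself.
\begin{verbatim}
# Illustrative check of the numerical constants; all logarithms base 2.
from math import log2

def H(p):
    # binary entropy function H(p) = -p*log2(p) - (1-p)*log2(1-p)
    return -p * log2(p) - (1 - p) * log2(1 - p)

# bound obtained from the Drinfeld curves (delta = 1/2):
c_drinfeld = (1/7 - log2(64/63)) / 64
# bound of Remark 6.2, Garcia-Stichtenoth tower (delta = 32/63):
c_gs = (H(32/63) - 6/7 - log2(64/63)) / 64

print(c_drinfeld)   # ~ 0.001877
print(c_gs)         # ~ 0.001874
\end{verbatim}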
If we apply the same construction to $\tilde\Lambda_{4m}$, $q= 2^{2m}$ and the curve $ \tilde X_k$ over $\F_{q}$ having $N= 2^{2mk}= 4^{mk}$ points rational over $\F_{ q},$ we get a lattice with \beq\label{m}\frac{\log(\tau^l_{N})}{N}\ge\frac{1}{4m}\left(1-\frac{2\log (2^m+1)}{2^m-1}\right)- \frac{2+2\log N}{N} \eeq which gives \eqref{1.30},\eqref{1.3} and \eqref{1.41} for $m=5,6$ and 7, respectively (the result is $<0.03$ for any other value of $m$). Applying in the same way Proposition \ref{p2} instead of Corollary \ref{c2} we get the lattices with \beq\label{m1}\frac{\log(\tau^l_{N})}{N}\ge\frac{1}{4m}\left(1-\frac{2m}{2^m-1}-\log\frac{ 2^{2m} }{ 2^{2m} -1}\right)- o(1) \eeq and thus Theorem \ref{t1} for $m=5,6$ and 7. \smallskip We begin the proof of Theorem \ref{t22} with the following \begin{proposition}\label{p4} For any $q=p^h$ there exist monic polynomials $M_i\in\F_q[T]$ for $i=1,2,\ldots,$ with $\deg M_{i+1}\ge\deg M_{i}$, satisfying $$ \lim_{i\lra \infty}\frac{ \tilde g_{i+1}}{ \tilde g_{i}}=1,\;\tilde g_{i}<\tilde g_{i+1}$$ for $\tilde g_{i}:=g(\tilde X_0(M_i))>0$. \end{proposition} To prove that, we ``densify'' the tower $\{\tilde X_k\}$, inserting between its consecutive levels some curves from the family $\{\tilde X_0(M)\}$. Indeed, let us consider two consecutive curves $ \tilde X_{2m}$ of genus $\tilde g_{ 2m} = { (q^{m}-1)^2}/({q-1})$ and $\tilde X_{2m+1}$ of genus $\tilde g_{ 2m+1} = {(q^{m+1}-1)(q^{m}-1)}/({q-1}) =q\tilde g_{ 2m}+O(\sqrt {\tilde g_{ 2m}})$, say, for $k=2m\ge 100$. Set $s=s(k)$ for a suitable non-decreasing unbounded function $s:\N\lra\N$ (to be chosen afterwards); then the number $P(s)$ of monic irreducible polynomials in $A $ of degree $s$ satisfies $(q^s-q^{s/2})/s\le P(s)\le q^s /s$. We consider then the curves $\tilde X_{k,j}, j=1,\ldots, l_k$ for $l_k=\min\{P(s), \lfloor\frac{k}{s}\rfloor\}$ defined by $ \tilde X_{k,j}=\tilde X_0(T^{k+1-js}M_{s,j})$ for $M_{s,j}=\prod_{i=1}^j M_i^{(s)}$, where $\{ M_1^{(s)},\ldots, M_{P(s)}^{(s)}\}$ is the list of all monic degree $s$ irreducible polynomials in $A$. The genus of $ \tilde X_{k,j}$ equals $ \tilde g_{k,j}= { q^{2m-sj} (q^{s}+1)^j}/({q-1})+O(\sqrt {\tilde g_{2m}}) $ which is increasing with $j$ and $ { \tilde g_{k,j+1}}/{\tilde g_{k,j }}$ tends to 1 for growing $k$. If $ \tilde g_{k,l_k}$ is still less than $ { q^{2m+1} }/({q-1})$, we can increase the genus further by taking $s+1$ instead of $s$ and continuing to replace the factors $T^{s+1}$ consecutively by irreducible polynomials of degree $s+1$, until those polynomials are exhausted. If $k-sP(s)-(s+1)P(s+1)>0$ we can continue with the polynomials of degree $s+2$ and so on. The procedure stops when either we reach the genus $\tilde g_{ 2m+1}$ and we have densified our level, or there are no factors $T^l$ left to replace by the next polynomial of degree, say, $s+h, h\ge1.$ We want to show that choosing $s(k)$ appropriately, we can always reach $\tilde g_{ 2m+1}$ and thus densify our initial tower which will end the proof. Indeed, for a given $s$, using all $P(s)$ degree $s$ irreducible polynomials, we multiply the genus by the factor $(1+q^{-s})^{P(s)}\simeq \exp(\frac1s)$.
Therefore, using all irreducible polynomials of degrees from $s$ to, say $s+t$, we can multiply the genus by $\exp(\frac1s+\ldots+\frac{1}{s+t})\simeq 1+\frac{t}{s},$ whereas this is possible whenever $ sP(s)+\ldots+(s+t)P(s+t)\simeq q^s+\ldots+q^{s+t}\le k.$ It is then sufficient to take $ {t/}{s}>q,\; (s+t)q^{s+t}\le k;$ for example, we can choose $t=(q+1)s, s=\log k/ ({2q}\log q)$ to guarantee those inequalities for sufficiently large $k$, and the proof is finished (the case of an odd $k$ is similar).\smallskip {\em Remark 6.3.} This proof can replace the sketchy proof of Claim (3.2)-(3.3) in [STV, Sec.3], equivalent to Proposition \ref{p4}. \smallskip Let us deduce Theorem \ref{t22} from Proposition \ref{p4}. Let $q=2^{12}=4096,$ and let $k\in\N$ satisfy $\tilde g_{ k }< {n}/{24}\le\tilde g_{k+1}$ for a given large dimension $n$; moreover, let $2^a\tilde g_{ k }< {n}/{24}\le 2^{a+1}\tilde g_{ k }$ for some $0\le a\le 11$ (recall that $ {\tilde g_{k+1}}/{\tilde g_{k}}\simeq q$). Let us take the curve $X_0(M_{i})$ from Proposition \ref{p4} of genus closest to $2^a\tilde g_{ k }$ and the curve $X_0(M_{j})$ of genus closest to $2^{a+1}\tilde g_k.$ Then we construct, by Proposition \ref{p2}, an $[N_i, k_i, 2^{a+12k}=d_i]$-code $C_i$ on $X_0(M_{i})$ with exponentially many light vectors and the same with $[N_j, k_j, 2^{a+1+12k}=d_j]$ code $C_j$ on $X_0(M_{j});$ note that the relative distances of both codes are asymptotic to $\frac12$ and the ratio $N_j/N_i$ is asymptotic to 2. We can then construct the lattices $L_{24N_i}$ and $L_{24N_j}$ in dimensions $24N_i$ and $24N_j$ using Construction E for the Leech lattice $\Lambda_{24}$ (or $\tilde\Lambda_{24}$) and nested families of codes beginning, respectively, by $C_i$ and $C_j.$ The lattices $L_{24N_i}$ and $L_{24N_j}$ have then kissing numbers satisfying \eqref{1.1}. Since $ {24N_i}\le n \le {24N_j}\simeq 48 N_i$ the kissing number of the lattice $L_{24N_i}$ gives the following estimate \beq\label{6.1} \frac{\log(\tau^l_n)}{n}\ge \frac{1}{24}\left(\frac{17}{21}-\log\frac{4096}{4095}\right) \de,\eeq for $\de=\frac{24N_i}{n}\in [0.5,1],$ whereas we can shorten the code $C_j$ by deleting some $\F_q$-rational points from the corresponding curve to get a code of length $\frac{n}{24}$ and then apply Construction E with $\Lambda_{24}.$ This gives the estimate \beq\label{6.2} \frac{\log(\tau^l_n)}{n}\ge \frac{1}{24}\left(\lambda H\left(\frac{1}{2\lambda}\right)-\frac{4}{21}-\log\frac{4096}{4095}\right) ,\eeq with $\lambda\simeq \frac{1}{2\de}=\frac{n}{24N_j}\in [0.5,1], $ and taking the minimax we get \eqref{1.5}. \smallskip {\em Remark 6.4.} Using the lattices $\tilde\Lambda_{4m}$ together with the codes over $\F_{2^{2m}}$ with similar properties constructed on the curves from Theorem \ref{evg}, instead of the above ``densified'' curves, we get the lattices with somewhat worse parameters, which are optimal for $m=7$ and give the estimate $$\lim\inf_{n\lra \infty}\frac{\log(\tau^l_{N})}{N}\ge 0.020715..\,.$$ \bigskip \bigskip \centerline{REFERENCES} \bigskip \noindent [ABV] A. Ashikhmin, A. Barg, S. Vl\u adu\c t, {\em Linear codes with exponentially many light vectors. } J. Combin. Theory Ser. A 96 (2001), 396--399. \medskip\noindent [BCS] A. Bos, J. H. Conway, N. J. A. Sloane, {\em Further lattice packings in high dimensions. } Mathematika 29 (1982), 171--180. \medskip\noindent [BS] E. S. Barnes, N. J. A. Sloane, {\em New lattice packings of spheres.} Canad. J. Math. 35 (1983), 117--130. \medskip\noindent [Ch] C.
Chabauty, {\em R\'esultats sur l'empilement de calottes \'egales sur une p\'erisph\`ere de $\R^n$ et correction d'un travail ant\'erieur,} C.R.Acad. Sci. Ser. A, vol. 236(1953), 1462--1464. \medskip\noindent [CS] J. H. Conway, N. J. A. Sloane, {\em Sphere packings, lattices and groups.} With contributions by E. Bannai, J. Leech, S. P. Norton, A. M. Odlyzko, R. A. Parker, L. Queen and B. B. Venkov. Springer-Verlag, NY, 1988. xxviii+663 pp. \medskip\noindent [DV] V.G. Drinfel’d, S. Vl\u adu\c t, {\em The number of points of an algebraic curve, } Funktsional. Anal. i Prilozh. 17 (1983), 68--69. [Funct. Anal. Appl. 17 (1983), 53--54.] \medskip\noindent [E] N. Elkies, {\em Explicit towers of Drinfeld modular curves.} European Congress of Mathematics, Vol. II (Barcelona, 2000), 189--198, Birkh\"auser, Basel, 2001. \medskip\noindent [EHKPWZ] H.Elkies, E.Howe, A.Kresch, B.Poonen, J.Wetherell, M.Zieve, {\em Curves of every genus with many points. II. Asymptotically good families.} Duke Math. J. 122 (2004), 399--422. \medskip\noindent [Ge] E.-U. Gekeler, {\em Drinfeld Modular Curves.} Berlin: Springer, 1980 (Lecture Notes in Math. 1231). \medskip\noindent [Ge1] E.-U. Gekeler, {\em Invariants of some algebraic curves related to Drinfeld modular curves.} J. Number Theory 90 (2001), 166--183. \medskip\noindent [GS] A. Garcia, H. Stichtenoth, {\em A tower of Artin-Schreier extensions of function fields attaining the Drinfelʹd-Vl\u adu\c t bound.} Invent. Math. 121 (1995), 211--222. \medskip\noindent [GS1] A.Garcia, H.Stichtenoth, {\em On the asymptotic behaviour of some towers of function fields over finite fields. } J. Number Theory 61 (1996), 248--273. \medskip\noindent [KL] G.Kabatjanskiĭ, V.Leven\v ste\v in, {\em Bounds for packings on the sphere and in space.} (Russian) Problemy Pereda\v ci Informacii 14 (1978), 3--25. \medskip\noindent [LT] S.Litsyn, M.Tsfasman, {\em Constructive high-dimensional sphere packings.} Duke Math. J. 54 (1987), 147--161. \medskip\noindent [MWS] F.J.MacWilliams and N.J.A. Sloane, {\em The theory of error-correcting codes,} North-Holland, Amsterdam, 1981. \medskip\noindent [RT] M.Rosenbloom, M.Tsfasman, {\em Multiplicative lattices in global fields.} Invent. Math. 101 (1990), 687--696. \medskip\noindent [Sh] C. Shannon, {\em Probability of error for optimal codes in a Gaussian channel,} Bell. Syst. Tech. J. 38(1959), 611--656. \medskip\noindent [STV] I.Shparlinski, M.Tsfasman, S.Vl\u adu\c t, {\em Curves with many points and multiplication in finite fields.} Coding theory and algebraic geometry (Luminy, 1991), 145--169, Lecture Notes in Math. 1518, Springer, Berlin, 1992. \medskip\noindent [TVN] M.Tsfasman, S.Vl\u adu\c t, D.Nogin, {\em Algebraic geometric codes: basic\linebreak notions.} Math. Surv. Monogr., 139. AMS, Providence, RI, 2007. xx+338. \medskip\noindent [W] A.Wyner, {\em Capabilities of bounded discrepancy decoding.} Bell Systems Tech. J. 44 (1965), 1061--1122. \end{document}
\section{Introduction} The generation of on-demand photonic Fock states is at the heart of many photonic quantum technologies. Single-photon sources have been realized in a variety of quantum systems such as nitrogen-vacancy (NV) centres in diamond \cite{NaturePhoton5p738}, or using quantum dots \cite{Rivoire2011}. However, the creation of photonic Fock states with a high photon number is an open challenge to date. In this paper we propose a novel Jaynes Cummings quantum random walk (QRW) protocol that drives the cavity to accumulate a photonic Fock state deterministically. We describe in detail how to implement the theoretical protocol using a Nitrogen-Vacancy defect in a nano-diamond evanescently coupled to circulating light modes in a high-Q toroidal resonator at moderately low temperatures ($<10$ $\;{\rm K}$). We show that even in the case where one has error in the timing of the control pulses there is a very high probability of generating photonic Fock states up to $n=6$. Synthesising high-number Fock states has received much attention in the literature. A high-number Fock state has been conditionally produced with a probability $P_n$ via the state collapse from a coherent or thermal state \cite{Nature448p889,PRL87p093601,PRL65p976}. The probability $P_n$ of success is equal to the initial overlap probability of the target Fock state with the initial state of light ($P(n)={\rm Tr}[ |n\rangle\langle n| \rho_{init}]$), and this probability can be quite low: for $|n=3\rangle$, $P_3\approx 0.22$ if starting from a pure coherent state $|\alpha=\sqrt{3}\rangle$. An arbitrary quantum state of a cavity field can also be engineered if the cavity-qubit coupling can be very accurately controlled and recent experiments using low-temperature superconducting circuit-QED have synthesised microwave cavity states up to nine photons \cite{PRL76p1055,EPL67p941,Nature454p310}. When an excited two level system interacts with a cavity mode via the Jaynes Cummings (JC) interaction of strength $g$, the emission probability $P_{emit}(g,\tau,n)$ of the excited atom depends on the duration of coupling $\tau$ and the choice of Fock state $n$. For certain values, termed trapping values, of $(g, \tau,n)$, $P_{emit}$ vanishes and a Fock state can be trapped in the cavity \cite{JOSAB3p906,PRL86p3534,NJP6p97}. In this way, by sending a train of Rydberg atoms through a superconducting high-Q microwave cavity, Walther {\it et al.} trapped a microwave Fock state \cite{PRL86p3534,NJP6p97}. Indeed all the experimental demonstrations of high-Fock number state generation have been in the microwave regime \cite{Nature454p310,PRL86p3534,NJP6p97}. At optical frequencies, Brown {\it et al.} propose a system of $N$ three-level atoms in a high-finesse cavity for Fock state generation \cite{PRA67p043818}, but this requires the preparation of complicated nonclassical states of the atoms. Until now only single photon sources at optical frequencies have been realized in solid-state quantum systems. {\it Jaynes Cummings Damped Quantum Random Walk:-} A coined quantum random walk involves a coin, which we take as a qubit with Hilbert space $\mathcal{H}_{c}=span\{\left\vert e\right\rangle,\left\vert g\right\rangle\}$, together with a walk on the discretised non-negative real line $\mathcal{H}_{w}=span\{\left\vert n\right\rangle ;n=0,1,\cdots\}$.
The normal coined quantum random walk on the full real line ($-\infty\le n \le \infty$), is an iteration of a basic step involving a conditional displacement of the walker on the line depending on the internal state of the walker $\hat{U}_d\equiv |e\rangle\langle e|\otimes |n+1\rangle\langle n|+ |g\rangle\langle g|\otimes |n-1\rangle\langle n|$, followed by a ``scrambling'' of the internal state of the walker by the action of a Hadamard operation on the internal states. This coined version of the QRW where the walker moves on the discretized real line $\mathbb{Z}$ has been studied intensively over the past decade. In the following we will examine the case when the space upon which the walker walks is the Fock ladder, $n\in \mathbb{Z}^*$, i.e. the non-negative integers. It is no longer possible to have a unitary operator that implements a conditional displacement with constant displacement independent of the position of the walker. To achieve a unitary operation for the conditional displacement the walker can execute a step up/down the half-line with ``step sizes'' that depend on $n$. Our QRW step will consist of a period of Jaynes Cummings evolution between the internal states of the coin and conditional displacements up/down the Fock ladder, followed by a manipulation of the internal states of the walker. Rather than a complete scrambling of the internal states we will just consider a flip where $|g\rangle \leftrightarrow |e\rangle$, are swapped. We have found that such a unitary QRW on the half line using the JC walk step exhibits complex temporal dynamics but simplifies greatly when we allow for periodic damping of the internal state of the coin. We now consider analytically the above QRW on the half-line and derive a formulae for the resulting map on the reduced Fock space of the walker. We will see that, if starting at $|n=0\rangle$, the walker will, on average, step to greater values of $n$ and will hit a ceiling value of $n$ that depends on the chosen value for the JC interaction strength/time. Let the JC\ Hamiltonian in the RWA be given by $H_{JC}=g(\left\vert e\right\rangle \left\langle g\right\vert \otimes \hat{a}+\left\vert g\right\rangle \left\langle e\right\vert \otimes \hat{a}^{\dagger }),$ then the resulting unitary evolution operator $\hat{U}_{JC}(\tau)=e^{-iH_{JC}\tau},$ can be expressed as \begin{equation} \hat{U}_{JC}(\tau)=\left( \begin{array}{cc} \cos g\tau\sqrt{N+1} & -i\frac{\sin (g\tau\sqrt{N+1})}{\sqrt{N+1}}a \\ -ia^{\dagger} \frac{\sin (g\tau\sqrt{N+1})}{\sqrt{N+1}} & \cos g\tau\sqrt{N}% \end{array}% \right)\;\; , \end{equation} where $N=a^\dag a$ is the photon number operator. Considering the initial product state for the density matrix of the coin and the walker to be $\rho _{C}\otimes \rho _{W}$, then following evolution by the JC Hamiltonian we obtain $\rho _{C}\otimes \rho _{W}\rightarrow \hat{U}_{JC}\,(\rho _{C}\otimes \rho _{W})\,\hat{U}_{JC}^{\dagger }$. Subsequently we allow spontaneous emission (amplitude damping channel) to operate on the atomic system i.e. \ \begin{equation} \rho _{C}\otimes \rho _{W}\rightarrow \hat{U}_{JC}\,(\rho _{C}\otimes \rho _{W})\,\hat{U}_{JC}^{\dagger }\rightarrow \hat{\mathcal{E}}_{SE}\otimes \hat{id}_{W}\,\left[\hat{U}_{JC}\,(\rho _{C}\otimes \rho _{W})\,\hat{U}_{JC}^{\dagger }\right] , \end{equation}% where $\hat{id}_{W}$ stands for the identity map in walker's space. 
Here $\hat{\mathcal{E}}_{SE}=Ad\,\hat{S}_{0}+Ad\,\hat{S}_{1}$ is the spontaneous emission channel with non-unitary Kraus generators% \begin{equation} \hat{S}_{0}=\left\vert g\right\rangle \left\langle g\right\vert +\left\vert e\right\rangle \left\langle e\right\vert \sqrt{\eta },\;\;% \hat{S}_{1}=\left\vert g\right\rangle \left\langle e\right\vert \sqrt{1-\eta }, \end{equation}% where $\eta (t)=e^{-t/T}$ a positive parameter quantifying the degree by which the atomic system is reset by the channel, with $t$ the nominal time over which the channel operates and $T$ a constant characterising how rapid the reset process it. We have also used the notation of the adjoint action $Ad\,(\hat{A})$ of an operator $\hat{A}$ on some other operator $\hat{X}$ as follows: \ $\hat{X}\rightarrow Ad(\hat{A})\hat{X}=\hat{A}\hat{X}\hat{A}^{\dagger },$ noticing the property $Ad(\hat{A}\hat{B})\hat{X}=Ad(\hat{A})Ad(\hat{B})\hat{X}.$ For a general pure input state of the coin: \begin{equation} \rho _{C}=(\alpha \left\vert g\right\rangle +\beta \left\vert e\right\rangle )(\alpha ^{\ast }\left\langle g\right\vert +\beta ^{\ast }\left\langle e\right\vert )=\left( \begin{array}{cc} \left\vert \beta \right\vert ^{2} & \alpha^{\ast} \beta \\ \alpha \beta^{\ast} & \left\vert \alpha \right\vert ^{2}% \end{array}% \right) , \end{equation} the channel $\hat{\mathcal{E}}_{SE}$ outputs \begin{equation} \hat{\mathcal{E}}_{SE}[\rho _{C}](t)=\left( \begin{array}{cc} \eta \left\vert \beta \right\vert ^{2} &~~~ \alpha^{\ast}\beta \sqrt{% \eta } \\ \alpha \beta^{\ast} \sqrt{\eta } &~~~ 1-\eta \left\vert \beta \right\vert ^{2}% \end{array}% \right) . \end{equation} In view of the limit $\lim_{t\rightarrow \infty }\eta (t)=0,$ and normalization relation $\left\vert \alpha \right\vert ^{2}+\left\vert \beta \right\vert ^{2}=1,$ the last expression leads to the reset state $% \lim_{t\rightarrow \infty }\hat{\mathcal{E}}_{SE}[\rho _{at}](t)=\left\vert g\right\rangle \left\langle g\right\vert =\left( \begin{array}{cc} {\scriptsize 0} & {\scriptsize 0} \\ {\scriptsize 0} & {\scriptsize 1}% \end{array}% \right) $. Next we form the composite map beginning with the JC unitary, spontaneous emission channel and then finally a flipping of the atomic state $\hat{X}\equiv \exp(-i\pi/2 \sigma_x)$ and denote the entire process by $\hat{\mathcal{E}}$: \begin{equation} \hat{\mathcal{E}}\equiv \;\;Ad\hat{X}\circ (\hat{\mathcal{E}}_{SE}\otimes \hat{id}_{W})\circ Ad\hat{U}_{JC}, \label{map} \end{equation}% where $\hat{\mathcal{E}}$ \ acts in total coin-walker (atom-mode) density matrix. 
Choosing the action $\hat{\mathcal{E}}(\left\vert e\right\rangle \left\langle e\right\vert \otimes \rho _{W}\mathcal{)},$ gives \cite{ellinas2005}, \begin{eqnarray} &&\hat{\mathcal{E}}(\left\vert e\right\rangle \left\langle e\right\vert \otimes \rho _{W}\mathcal{)} \nonumber \\ &=&\left( \begin{array}{cc} \hat{\mathcal{E}}_{W}(\rho _{W})-\cos (g\tau\sqrt{N+1})\rho _{W}\cos (g\tau\sqrt{N+1}% )\eta & ia^{\dagger }\frac{\sin (g\tau\sqrt{N+1})}{\sqrt{N+1}}\rho _{W}\cos (g\tau% \sqrt{N+1})\sqrt{\eta } \\ -i\cos (g\tau\sqrt{N+1})\rho _{W}\frac{\sin (g\tau\sqrt{N+1})}{\sqrt{N+1}}a\sqrt{% \eta } & \cos (g\tau\sqrt{N+1})\rho _{W}\cos (g\tau\sqrt{N+1})\eta \end{array}% \right) , \nonumber \\ && \end{eqnarray}% where the positive map% \begin{equation} \hat{\mathcal{E}}_{W}(\rho _{W})=\cos (g\tau\sqrt{N+1})\rho _{W}\cos (g\tau\sqrt{N+1}% )+a^{\dagger }\frac{\sin (g\tau\sqrt{N+1})}{\sqrt{N+1}}\rho _{W}\frac{\sin (g\tau% \sqrt{N+1})}{\sqrt{N+1}}a, \label{cpmapw} \end{equation}% appearing above can be shown to be trace preserving i.e. \begin{equation} {\rm Tr}\hat{\mathcal{E}}_{W}(\rho _{W})={\rm Tr}\{[\cos (g\tau \sqrt{N+1})^{2}+\sin (g\tau \sqrt{N+1}% )^{2}]\,\rho _{W}\}={\rm Tr}\rho _{W}=1. \end{equation} The model described leads to a sequence of walker's density matrices \cite{bracken2004}, and after $m>1$ steps the reduced state of the walker is \begin{equation} \rho _{W}^{(m)}={\rm Tr}_{c}[\hat{\mathcal{E}}^{m}\mathcal{(}\left\vert e\right\rangle \left\langle e\right\vert \otimes \rho _{W}\mathcal{)}]= \hat{\mathcal{E}}_{W}^{m}(\rho _{W})+\mathcal{O}(\eta ^{\frac{3}{2}}). \end{equation} Consider the case of number state input for the walker $\rho _{W}=\left\vert n\right\rangle \left\langle n\right\vert $. Then \begin{equation} \hat{\mathcal{E}}_{W}(\left\vert n\right\rangle \left\langle n\right\vert )= \cos (g\tau\sqrt{n+1})^{2}\left\vert n\right\rangle \left\langle n\right\vert +\sin (g\tau\sqrt{n+1})^{2}\left\vert n+1\right\rangle \left\langle n+1\right\vert, \end{equation} and repeated action of $\hat{\mathcal{E}}$ in the $\eta \rightarrow 0$ limit leads to a progressive increase in Fock number $n$. From the form of $\hat{\mathcal{E}}_{W}$ in this limit we now observe that we can halt this upwards motion to accumulate at the trapping value $n=n_T$, if we choose the Jaynes Cummings coupling strength and duration $\tau$ to satisfy the trapping condition: $g\tau\sqrt{n_T+1}=k\,\pi$, where $k\in\mathbb{Z}$. {\em This is the main result of this section: by executing a sequence of operations: Jaynes Cummings for a period, followed by spontaneous decay and then a complete flip of the coin space, one can, with unit probability, arrange for the walker to reach a steady state at the position $n=n_T$}. The behaviour of the quantum random walk on the half line with and without damping can be observed in Fig. \ref{fig:QRW}. The dramatically different behaviour from the position independent step operator used in the conventional quantum walk is apparent. Next we show how to implement this map for an optical cavity-QED setup consisting of a Nitrogen-Vacancy centre in a nano diamond coupled to a high-Q optical cavity to produce multi-photon optical Fock states. 
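Before doing so, we illustrate the trapping behaviour numerically. In the $\eta\rightarrow 0$ limit the map of Eq.~(\ref{cpmapw}) acts on Fock-diagonal states as a classical Markov chain on the half line: from $|n\rangle$ the walker moves up to $|n+1\rangle$ with probability $\sin^2(g\tau\sqrt{n+1})$ and otherwise stays at $|n\rangle$. The short Python sketch below iterates this chain for an assumed target $n_T=6$ and an assumed number of steps; it is only an illustration of the accumulation claimed above, not the full simulation including losses.
\begin{verbatim}
# Damped JC quantum random walk in the eta -> 0 limit (illustrative sketch).
import numpy as np

n_T   = 6                                 # assumed target (trapping) Fock number
n_max = 20                                # truncation of the Fock ladder
g_tau = np.pi / np.sqrt(n_T + 1)          # trapping condition g*tau*sqrt(n_T+1) = pi

n  = np.arange(n_max + 1)
up = np.sin(g_tau * np.sqrt(n + 1))**2    # P(n -> n+1); note up[n_T] = 0

p = np.zeros(n_max + 1)
p[0] = 1.0                                # walker starts in the vacuum |0>

for step in range(200):                   # assumed number of QRW steps
    new_p = (1.0 - up) * p                # population that stays at n
    new_p[1:] += up[:-1] * p[:-1]         # population promoted from n-1 to n
    p = new_p

print(p[n_T])                             # approaches 1: walker accumulates at n_T
\end{verbatim}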
{ We perform more detailed numerical simulations taking into account cavity and atomic decay to determine a figure of merit to synthesis Fock states of light.} \\ \begin{figure} \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(8.5,4) \put(-2.3,0){\includegraphics[width=13cm]{combinedQRW}} \put(-1.8,-.3){(a)} \put(2.6,-.3){(b)} \put(7,-.3){(c)} \end{picture} \end{center} \caption{Jaynes Cummings quantum random walk: Plots showing how the walker evolves starting in the vacuum, i.e. ${\rm Tr}\{ |n\rangle\langle n|\, \hat{\cal E}^m[ |e\rangle\langle e|\otimes |0\rangle\langle 0|]\}$, as a function of the number of steps $m$ and Fock number $n$. (a) for a completely unitary evolution with $\hat{\cal E}=Ad\hat{H}\circ Ad\hat{U}_{JC}$, where one executes a Hadamard on the coin space (b) completely unitary evolution with $\hat{\cal E}=Ad\hat{X}\circ Ad\hat{U}_{JC}$, where one executes a $\pi$ flip instead of the Hadamard and (c) including spontaneous damping channel $\hat{\cal E}=Ad\hat{X}\circ Ad\hat{\cal E}_{SE}\circ Ad\hat{U}_{JC}$, and the Jaynes Cumming coupling strength and duration chosen so that $|n=16\rangle$ is a trapping state. We see that the latter evolution clearly leads to accumulation of the walker at the target trapping state.}\label{fig:QRW} \end{figure} Now we theoretically discuss the maximum Fock number that can be reached with high fidelity. We consider a case where the noise in the timing of the JC interactions is vanishingly small. However the decay of cavity and the effect of population in the ground state of the qubit must be taken into account. We denote the effective cavity loss of the photons with the rate $\gamma_c$. The higher the Fock state is, the larger this effective decay rate. For the target Fock state $n_T$, the effective decay rate increases to $n_T\gamma_c$. Another factor limiting whether one can achieve the target state relates to the downward transfer of population with probability $P_D$ from the target state $|n_T\rangle$ to the lower Fock state $|n_T -1\rangle$ due to the net population $P_g$ of the ground state. In the stationary state, the pumping probability $P_U$ from the state $|n_T-1\rangle$ must balance the loss from the target Fock state $|n_T\rangle$. A formula describing this balance takes the form \begin{equation} P_{n_T} \left( 1- e^{-n_T\gamma_c t}\right) + P_gP_D=P_{n_T-1}P_U \,, \end{equation} where $P_{n_T} (P_{n_T-1})$ is the population in Fock state $|n_T \rangle (|n_T -1\rangle)$. Because $\eta$ can not be practically zero after waiting for a time $t$, the net population in the excited state of qubit is $\eta=e^{-t\gamma_q}$ with the effective decay rate of qubit $\gamma_q$ which can be modified using STED beam in our setup. After the state flipping, this population is transferred to the ground state. If the time $t$ is measured as $t=M\gamma_q^{-1}$, $P_g=e^{-M}$. We are interested in the case of high fidelity $F$ of achieving the target state. We observe that the population is a good approximation of fidelity, $P_{n_T} \sim F$. The population of the lower Fock state is $P_{n_T-1} = \alpha (1-F)$ with constant $0\leq \alpha \leq 1$. For our Hermitian system, we have $P_U=P_D=\sin^2\left(\pi \frac{\sqrt{n_T}}{\sqrt{n_T+1}}\right)$. There we have \begin{equation} F(1-e^{-N_T \gamma_c M \gamma_q^{-1}}) + e^{-M} \sin^2 \left( \pi \frac{\sqrt{n_T}}{\sqrt{n_T+1}} \right) =\alpha (1-F)\sin^2\left( \pi \frac{\sqrt{n_T}}{\sqrt{n_T+1}}\right)\,. 
\end{equation} This formula shows the relation between the decay rates, target state Fock number and the achievable fidelity. Assuming that $N_T \gamma_c M \gamma_q^{-1} \ll 1$ and $\frac{\sqrt{n_T}}{\sqrt{n_T+1}} \approx 1$, the fidelity as a function of the decay rates and the photon number $n_T$ takes the form \begin{equation}\label{eq:FN} F=\frac{\pi^2 \left(\alpha - e^{-M}\right)}{\pi^2 \alpha + 4M n_T^3 \gamma_c/\gamma_q}\,. \end{equation} {\it Implementation:-} To implement the above protocol we propose to use a single nitrogen-vacancy (NV$^-$) center in a nanodiamond coupled to a high-finesse toroidal optical cavity at low temperature, while the latter is also connected to an optical interferometer and where the NV's optical transition is initialised via optical pumping, brought in/out of resonance with the cavity via Stark shift tuning resulting from an electric field, and undergoes periodic optical $\pi$ flips via resonant optical laser pulses. In more detail: when the cavity interacts on-resonance with the zero-phonon line (ZPL) of the single NV center, the de-excitation probability of the NV (treated as a two level system (TLS)), and consequently the excitation probability of the cavity, is given by $P_{emit}(g,\tau,n)=\sin^2 (\sqrt{n+1}g\tau)$, where $n$ is the number of photons in the cavity, $g$ is the JC coupling strength, and $\tau $ is the interaction time. Choosing $\tau=\tau_T$ such that the $n_T$-photon state is trapped in the cavity we have $P_{emit}(g,\tau_T,n_T)=0$. Using a fixed $\tau_T$ as the time step in the damped JCQRW above and starting the cavity in the vacuum state leads to the cavity field undergoing a deterministic ratchet-like increase in Fock number until it accumulates at $n=n_T$. The trapping-state condition means that the field in the cavity reaches an upper bound and is prevented from being excited to a higher photonic number state. Thus via a precise control of the Jaynes Cummings coupling $\tau_T$, an on-demand Fock state can be deterministically trapped in the cavity starting from the vacuum state. To do this we start the following process (Eq.~(\ref{map})) from the excited state of the NV center, which is resonantly prepared by a $\pi$ laser pulse: (i) we first switch on the JC coupling by applying the electric field. During this stage, the NV center emits a photon with the probability $P_g$ into the cavity. (ii) After a time $\tau_T$, the JC coupling is turned off by bringing the NV's optical transition out of resonance with the cavity via electrical Stark control \cite{Starkshift} and the NV center is allowed to completely decay to its ground state (GS). (iii) Then we resonantly pump the NV center to its excited state again via a $\pi$ pulse. Repeating these operations, the field in the cavity can be trapped in a selected target Fock state. {\it The System:-} Our setup for the creation of a photonic Fock state is shown in Fig.~\ref{fig:setup}. An optical toroidal cavity with high quality factor $Q$ and resonance frequency $\omega_c$ couples to the optical ZPL transition in an NV$^-$ center in a type IIa nanodiamond with $C_{3v}$ symmetry, which is positioned on or near the toroid so that it has a large evanescent overlap with the whispering gallery optical modes of the toroidal resonator. The nanodiamond is oriented so that the $[111]$ axis of the NV center lies along the $z$ direction (see Fig.~\ref{fig:setup}). This setup can be realized using current technology \cite{APL96p241113,NanoLett9p1447,NanoLett6p2075}.
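As a rough numerical reading of the fidelity estimate Eq.~(\ref{eq:FN}) above, the following Python snippet evaluates $F$ for a few target Fock numbers. The ratio $\gamma_c/\gamma_q$, the reset parameter $M$ and the constant $\alpha$ used here are assumed round values for the sake of illustration only, not measured parameters of the setup; the point is the $n_T^3$ scaling of the denominator, which limits the attainable Fock number.
\begin{verbatim}
# Illustrative evaluation of the fidelity estimate Eq. (FN); all values assumed.
import numpy as np

def fidelity(n_T, gamma_ratio, M=5.0, alpha=1.0):
    # F = pi^2 (alpha - exp(-M)) / (pi^2 alpha + 4 M n_T^3 gamma_c/gamma_q)
    num = np.pi**2 * (alpha - np.exp(-M))
    den = np.pi**2 * alpha + 4.0 * M * n_T**3 * gamma_ratio
    return num / den

# assumed ratio of cavity decay to STED-enhanced qubit decay, gamma_c/gamma_q:
for n_T in (2, 4, 6, 8):
    print(n_T, fidelity(n_T, gamma_ratio=2e-4))
\end{verbatim}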
The toroidal cavity supports two degenerate modes, clockwise (CW) mode $\hat{b}$ and counterclockwise (CCW) mode $\hat{a}$, which propagate around the cavity along two opposite directions and form a standing wave if both modes are excited \cite{Science319p1062}. These two modes can be viewed in a basis of anti/symmetric modes $\hat{A}_{a,s}=(\hat{a} \mp \hat{b})/\sqrt{2}$, which are mutually orthogonal in space. They split in frequency due to the scattering $h$ from the NV center and the rough surface. Depending on its position, the coupling of the NV$^-$ centre to one of these two normal cavity modes $\hat{A}_{s}$ or $\hat{A}_{a}$ may occur predominantly or even exclusively with respect to the other mode \cite{Science319p1062}. In our setup, we take the NV$^-$'s position to be at the antinode of mode $\hat{A}_s$ (node of the antisymmetric mode $\hat{A}_a$). Thus the NV center couples dominantly to the mode $\hat{A}_s$. We neglect the small coupling to mode $\hat{A}_{a}$. The mode $\hat{A}_s$ is set to be red detuned from the ZPL transition of the NV center in the absence of any applied Stark shift \cite{Starkshift}. The latter, we propose, can be used to control the coupling between the NV center and the cavity. \begin{figure} \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(8.5,7) \put(-1,-0.6){\includegraphics[width=9.5cm]{JT_fig}} \end{picture} \end{center} \caption{Solid state setup for deterministic generation of an on-demand Fock state of photons. A two level system (nitrogen vacancy in a nanodiamond - here shown as yellow at the top of the toroidal resonator) interacts with counter propagating optical modes $\hat{a}$ and $\hat{b}$ in a high-Q toroidal resonator with intrinsic decay rate $\gamma_c$, which is coupled to a nearby waveguide interferometer at a coupling rate $\kappa_{ext}$. Shown are the input and output modes $\hat{a}/\hat{b}_{in/out}$ and the resulting anti/symmetric modes $\hat{A}_{a/s}$ from the interferometer with each associated photon detector $D_{a/s}$. Also shown is the incident (red arrow) laser pulse resonant with the NV zero phonon line required to implement an optical $\pi-$pulse, and the Stark shift electrode used to bring the NV's optical transition in/out of resonance with the cavity. Not shown are the initialising green laser and the stimulated emission depletion laser.}\label{fig:setup} \end{figure} An optical waveguide precisely positioned close to the cavity couples light in/out to/from the cavity with external coupling rate $\kappa_{ext}$ via the input-output relations \cite{PRA30p1386,PRA31p3761} \begin{align}\label{eq:InputOutput} \hat{a}_{out} & =-\hat{a}_{in}+ \sqrt{2\kappa_{ext}}\, \hat{a} \,,\\ \hat{b}_{out} & =-\hat{b}_{in}+ \sqrt{2\kappa_{ext}}\, \hat{b} \,, \end{align} where the input and output fields of the waveguide are denoted by $\{\hat{a}_{in},\hat{b}_{in},\hat{a}_{out},\hat{b}_{out}\}$, respectively. $[\hat{a}_{in}(t),\hat{a}_{in}^\dag(t')]=[\hat{a}_{out}(t),\hat{a}_{out}^\dag(t')]=\delta(t-t')$ and similarly $[\hat{b}_{in}(t),\hat{b}_{in}^\dag(t')]=[\hat{b}_{out}(t),\hat{b}_{out}^\dag(t')]=\delta(t-t')$. The output fields $\hat{a}_{out}$ and $\hat{b}_{out}$ are mixed by a $50:50$ directional coupler \cite{PRL105p200503}. We take the inputs to the cavity to be vacuum states, i.e. $\langle \hat{a}_{in}\rangle = \langle \hat{b}_{in}\rangle =0$, and thus the outputs $\hat{a}_{out}$ and $\hat{b}_{out}$ are proportional to $\hat{a}$ and $\hat{b}$, respectively.
Thus the outputs of the directional coupler yields modes $\hat{A}_{s}$ and $\hat{A}_{a}$ \cite{splitter}, leading to the detectors. Here we aim to create Fock state of the symmetric mode $\hat{A}_s$. Assuming the intrinsic loss of the cavity is denoted by the decay rate $\gamma_c$, if we take into account the scattering $h$ between two modes $\hat{a}$ and $\hat{b}$, the critical coupling condition is given by $\kappa_{ext}=\sqrt{h^2+\gamma_c^2}$ \cite{Science319p1062}. To switch the interaction between the NV center and the cavity, an electric field perpendicular to the axis of NV center is applied to induce a Stark shift. This static electric field (SEF) can be created by two electrodes positioned $10$ ${\mu}{\rm m}$~above the setup \cite{PRL107p266403}. This distance is much larger than the wavelength of the field in the cavity and the extent of the evanescent field and thus results in negligible scattering loss of the cavity modes. During the excitation of cavity, this SEF is applied to shift the NV center in/out of resonance with the cavity, thus executing the JC step in the map Eq. (\ref{map}). A critical step in the process Eq. (\ref{map}) is the rapid decay of the two level system $\hat{{\cal E}}_{SE}(\hat{\rho})$, the spontaneous emission decay of the two level system (coin). This decay must be executed with a rate much higher than the cavity decay rate. The natural excited state lifetime of the NV ZPL is $\sim 11$ns and this is too long to permit many repetitions of our process Eq. (\ref{map}) even with high-Q cavities. To shorten this, after the JC coupling is switched off by applying the SEF we use a stimulated emission depletion (STED) laser beam ($\lambda_{STED}=775$~${\rm nm}$), to dynamically create a fast decay channel from the excited state to the ground state of the NV center \cite{NaturePhoton3p144}, and this can increase the effective decay rate of the NV by almost four orders of magnitude. After almost all of the population has decayed to the ground state $|g\rangle$, another laser beam on-resonant with the ZPL repumps the NV center from $|g\rangle$ to $|e\rangle$ \cite{PRL103p256404,Starkshift,Nature477p574,Batalov2009}, i.e. an optical $\pi-$pulse. This is the final $\hat{X}$ portion of the map Eq. (\ref{map}). \section{Detailed model of experimental protocol} We propose to implement the JC Damped QRW Fock state synthesis using a NV cavity-QED setup. The relevant energy level scheme for the NV center is shown in Fig.~\ref{fig:NVLevel}(a). \begin{figure}[htbp] \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(8.5,12) \put(-2,4){\includegraphics[width=7cm]{JT_levels}} \put(5.5,3.5){\includegraphics[width=5cm]{JT_Stark}} \put(1,-1){\includegraphics[width=6.5cm]{JT_pulse}} \put(3,9){\large (a)} \put(6,9){\large (b)} \put(7,2){\large (c)} \end{picture} \end{center} \caption{(a) Level diagram of a NV center showing spin-triplet ground and excited states, as well as the singlet system involved in intersystem crossing \cite{PRB82p201202R,PRB81p041204,NJP13p025025}. Another triplet excited state ${\bf E}_y$ is not shown here. Also shown is the JC coupling (blue) $g$, decay rate from $|e\rangle$ to $|g\rangle$, the STED illumination (red), and the laser transition for the $\pi$ flip generated by $\Omega_x$ (orange). (b) Eigenvalues of the excited state triplet as a function of applied SEF \cite{NJP10p045004}. The vertical dashed line at $V=0$ marks the splitting due to the strain. 
At this position the NV$^-$ center can be excited resonantly by a $\pi$ laser pulse $\Omega_x$. An electric field is applied to bring the NV center into resonance with the symmetric mode (resonant frequency $\omega_s$). (c) Time sequence for generation of photonic Fock state showing the initialisation, JC coupling, Decay and $X$ flip.}\label{fig:NVLevel} \end{figure} We neglect both the hyperfine electron-nuclear spin coupling and the weak electronic spin-spin interaction \cite{Nature478p221}. The center has an optically allowed transition between an orbital ground state ${^3{\bf A}_2}$ and an orbital excited state ${}^3{\bf E}$. Both the ground and excited states are $S=1$ spin triplets. The ground state has $^3A_2$ symmetry and is split into an $S_x,S_y$ doublet $2.87$${\rm GHz}$~above an $S_z$ singlet due to the zero-field splitting \cite{NJP10p045004}. The lifetime of excited state ${}^3{\bf E}$ is about $11.6$ ${\rm ns}$, corresponding to a decay rate $\gamma=14$ ${\rm MHz}$ {\cite{PRL97p083002}. Low temperature transform-limited single photon emission spectra from individual NV defects \cite{PRL97p083002} indicates that dephasing is negligible at these low temperatures. Further low temperature experiments demonstrate that the optical excited states of the NV can be isolated from the effects of the nearby phonon sidebands \cite{Batalov2009} at low temperatures. Throughout our operation below the non-radiative decay to the intersystem state from $|{\bf E}_x,S_z \rangle$ is taken to be negligible. Such intersystem crossing decay contributes to an effective decay to the singlet ground state $S_z$.} At low temperature (T$<10$ ${\rm K}$), strain in the nanodiamond causes the excited state $^3\bf{E}$ to split into an orbital upper branch ${\bf E}_x$ and an orbital lower branch ${\bf E}_y$ (see Fig.~\ref{fig:NVLevel}(b)). Each branch is a spin triplet formed by three spin states $S_x,S_y$ and $S_z$. The sublevel $|{\bf E}_x,S_z \rangle$ is well isolated from the other five sublevels by several ${\rm GHz}$. Actually the state $|{\bf E}_y,S_z \rangle$ can be isolated from $|{\bf E}_x,S_z \rangle$ because these two sublevels are associated to orthogonal transition dipoles \cite{PRL103p256404,Batalov2009}. Therefore the spin-conserving transition $|^3{\bf A}_2,S_z\rangle \leftrightarrow |{\bf E}_x,S_z \rangle$, can be excited resonantly at low strain \cite{Nature477p574,PRL103p256404,Batalov2009}. To suppress further any small spin mixing and phonon-induced transitions within these two excited states \cite{PRL103p256404,Batalov2009}, our setup works with low-strain NV$^-$ centers at cryogenic temperatures. Moreover, the JC coupling $g$ is assumed to be much larger than the decay rate $\gamma$ and the thermal orbital coupling and relaxation rates. In our protocol the duration when the NV center is excited into state $|e\rangle$ state is small and thus the spin mixing can be neglected \cite{PRL103p256404,Batalov2009}. Our protocol primarily involves the transition between $|g\rangle \leftrightarrow |e\rangle$, ($|^3{\bf A}_2,S_z\rangle \leftrightarrow |{\bf E}_x,S_z \rangle$) and excludes other sublevels \cite{Nature477p574}. Thus we are able to treat the NV as a two level system with a transition in the optical $\sim$ 637${\rm nm}$. 
The dynamics of our system is given by the Hamiltonian $\hat{H}$ in the rotating wave approximation (RWA) \begin{equation} \begin{split} \hat{H} = & \hat{H}_s +\hat{H}_a +\hat{H}_x \,,\\ \hat{H}_s =& -\hbar (\Delta_g + \Delta_s(t)) \hat{A}_s^\dag \hat{A}_s + \hbar g [\hat{A}_s^\dag \hat{\sigma}_- + H.c.] \,,\\ \hat{H}_x = & \hbar \Omega_x(t)[\hat{\sigma}_- + H.c.]\,, \end{split} \end{equation} where $\hat{\sigma}_-= |g\rangle \langle e|$ and $\hat{H}_a$ is the corresponding free Hamiltonian of the antisymmetric mode $\hat{A}_a$, which is not excited in our scheme. $\Delta_{g}=\omega_{zpl}- \omega_s$ is the detuning between the mode $\hat{A}_{s}$ and the ZPL transition between states $|g\rangle$ and $|e\rangle$ (with frequency $\omega_{zpl}$), in the absence of any Stark shift. $\omega_s=\omega_c+h$ is the resonant frequency of mode $\hat{A}_{s}$ shifted by the scattering $h$. $g$ is the JC coupling strength between a single NV center and a single photon in the cavity. In our scheme, the NV center only couples to the symmetric mode $\hat{A}_s$. This is reasonable because the coupling can be made predominantly to mode $\hat{A}_s$ by specialized positioning of the nanodiamond \cite{Science319p1062}, and the scattering can also introduce a large detuning between the unwanted mode $\hat{A}_a$ and the ZPL of the NV center. The Stark shift $\Delta_s(t)$ is used to dynamically control the creation of cavity photons by the NV center due to the JC coupling \cite{Starkshift}. For $\Delta_s(t)=0$, the cavity decouples from the NV center because of the large detuning. This situation corresponds to ``the JC coupling off''. The excitation of the cavity is turned on when the JC coupling is on, i.e. $\Delta_s(t)=- \Delta_g$. According to our numerical simulation, a fast relaxation from $|e \rangle$ to the ground state $|g\rangle$ is required following the JC coupling phase for the preparation of a high-number Fock state with a high fidelity. Here we make use of the concept of ``Stimulated Emission Depletion'' (STED) to dynamically enhance the relaxation process \cite{NaturePhoton3p144} during the ``decay phase'' of the map (see Eq. (\ref{map}) and Fig. \ref{fig:NVLevel}(c)). When the STED beam is applied, the stimulated emission rate becomes $\gamma_{STED}=I_{STED} \gamma/I_s$, where $I_{STED}/I_s$ is the ratio of the STED pulse intensity $I_{STED}$ to the saturation intensity $I_s$. For a lifetime of $11.6$~${\rm ns}$, $I_s$ is $\sim 1.85$~${\rm MW}$ ${\rm cm}$$^{-2}$ \cite{NaturePhoton3p144} if a continuous wave (cw) STED beam is applied. A cw STED beam of $20$~${\rm GW}$ ${\rm cm}$$^{-2}$ can therefore enhance the decay rate by four orders of magnitude. During the initial ``JC coupling on'' phase (see Fig. \ref{fig:NVLevel}(c)), we turn off the STED beam and the nominal decay rate of the NV center remains $\gamma \sim 14$~${\rm MHz}$. We now describe the detailed steps in synthesising the process described in Eq. (\ref{map}). We assume that the optical modes in both the cavity and the waveguide are initially in the vacuum state, i.e. the initial photon number in the cavity is zero. The time sequence for creating a photonic Fock state is shown in Fig.~\ref{fig:NVLevel}(c): Initially the NV center is in its ground state $|g\rangle$ after a short $532$ ${\rm nm}$~ laser pulse optically prepares the defect into the $m_s=0\, (|{}^3A_2,S_z\rangle)$ state. A $\pi$ laser pulse $\Omega_x$ is used to resonantly pump it to the excited state $|e\rangle$. Then the ZPL is tuned on-resonance with the mode $\hat{A}_s$ to enable the JC coupling $g$ using a SEF. 
After time $\tau_T$, we turn off the JC coupling and apply the STED laser beam to create a fast decay channel to the ground state $|g\rangle$. After waiting for a time $\tau_{\gamma}$, almost all of the population has decayed from $|e\rangle$ to the ground state $|g\rangle$. Then a further optical $\pi$ pulse generated by $\Omega_x$ is applied to resonantly excite the NV center to $|e\rangle$ again. We repeat these operations until the target state is trapped. When $\Delta_g + \Delta_s=0$, the NV center resonantly couples to the cavity mode $\hat{A}_s$. The dynamics of the system can then be described by a unitary time evolution operator, and we now further assume that the time duration of this unitary may not be precisely controlled, i.e. we assume some noise in the target JC coupling time $\tau_T$. More precisely we take $U_{JC}=e^{-i\tau_T (1+\delta \tau) H_s/\hbar}$, where $\delta \tau$ is a normally distributed additional noise in timing with a standard deviation given by a parameter $\sigma_n$. The excitation probability of the cavity to state $|n+1\rangle$ when the NV is in the excited state is thus now given by $P_g=\sin^2 [g \tau_T\,\sqrt{n+1}\,(1+\delta\tau)]$. Once the SEF is turned off, the STED laser pulse is switched on. The NV center is decoupled from the cavity and relaxes to its ground state $S_z$. The timing noise during the decay is not considered in $\tau_\gamma$ because this damping process is insensitive to the timing error. The population in the excited state $|e\rangle$ decays at an effective rate $\gamma_{STED}$ to the ground state $|g\rangle$. Such a process can be described by a superoperator ${\mathcal E}_{SE}$ as \cite{QUTool} \begin{equation}\label{eq:decay} {\mathcal{E}}_{SE}\,[ \hat{\rho}]=e^{\left(\hat{\hat L}-i \hat{\hat H}_s/\hbar \right) \tau_\gamma}\,\hat{\rho} \,, \end{equation} where the superoperators are defined as $\hat{\hat H}_s \hat{\rho}=[\hat{H}_s,\hat{\rho}]$, $\hat{\hat L}\hat{\rho}=\gamma_c/2 (2 \hat{a}\hat{\rho} \hat{a}^\dag -\hat{a}^\dag \hat{a} \hat{\rho} -\hat{\rho} \hat{a}^\dag \hat{a}) +\gamma_{STED}/2 (2\hat{\sigma}_- \hat{\rho} \hat{\sigma}_+ -\hat{\sigma}_z \hat{\rho} -\hat{\rho} \hat{\sigma}_z)$ with $\hat{\sigma}_z=|e\rangle \langle e|-|g\rangle \langle g|$ and the density matrix $\hat{\rho}$. After a time $\tau_\gamma=5\gamma_{STED}^{-1}$, the ground state $S_z$ is repolarized to more than $99\%$. The $\pi$ flip laser pulse generated by $\Omega_x$ ($\lambda_x\approx 637$ ${\rm nm}$) is then applied to flip the NV center back to the excited state $|e\rangle$. We define a flip operator $\hat{X}=e^{-i\pi(1+\delta_x) \hat{\sigma}_x /2}$ with $\hat{\sigma}_x=\hat{\sigma}_+ + \hat{\sigma}_-$ to model this flip process as $\hat{X}\hat{\rho} \hat{X}^\dag$. Here $\delta_x$ is a noise term with the same statistical properties as, but independent of, $\delta \tau$. Then the density matrix after $l+1$ steps is determined by the recurrence relation \begin{equation} \label{eq:PhotonFock1} \hat{ \rho}_{l+1} = \hat{ X} {\mathcal E}_{SE}\, [\hat{U}_{JC}\, \hat{ \rho}_l\, \hat{ U}_{JC}^\dag]\, \hat{ X}^\dag \,. \end{equation} The system is initialized in the state $\rho_0= |e\rangle\langle e|\otimes \rho^c_{vac}$, where $\rho^c_{vac}$ is the density matrix of the vacuum state of the cavity mode. \section{Results} Next we discuss the generation of the photonic Fock state. Throughout our simulation below, we neglect the excitation of $\hat{A}_a$. This requirement can be satisfied by positioning the NV center at the antinode of $\hat{A}_s$ \cite{Science319p1062} or introducing a large scattering. 
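As an illustration of how the recurrence relation Eq.~(\ref{eq:PhotonFock1}) can be iterated numerically, a minimal sketch using the open-source QuTiP toolbox is given below. It assumes the resonant condition $\Delta_g+\Delta_s=0$ during the JC step, an ideal $\pi$ flip and no timing noise ($\delta_x=\delta\tau=0$), and a JC time $\tau_T$ chosen at the usual trapping-state condition $\sin(g\tau_T\sqrt{n_T+1})=0$; it is only a sketch and not the code used for the simulations reported below.

\begin{verbatim}
# Minimal sketch of iterating Eq. (14): rho_{l+1} = X E_SE[U_JC rho_l U_JC^+] X^+
# Assumptions (not the production code): resonant JC step, ideal pi flip,
# no timing noise; tau_T set by the trapping condition sin(g*tau_T*sqrt(nT+1))=0.
import numpy as np
import qutip as qt

N = 20                      # Fock-space cutoff for the cavity mode A_s
gamma = 1.0                 # NV decay rate (sets the unit of rate)
g = 30.0 * gamma            # JC coupling, g = 30*gamma as in the text
gamma_c = 0.1 * gamma       # cavity decay rate
gamma_sted = 1.0e4 * gamma  # STED-enhanced NV decay rate
n_target = 6                # target Fock state |n_T = 6>

a = qt.tensor(qt.destroy(N), qt.qeye(2))     # cavity mode A_s
sm = qt.tensor(qt.qeye(N), qt.destroy(2))    # sigma_- = |g><e|  (|g>=|0>, |e>=|1>)
H_jc = g * (a.dag() * sm + a * sm.dag())     # resonant JC Hamiltonian (hbar = 1)

tau_T = np.pi / (g * np.sqrt(n_target + 1))  # trapping-state JC interaction time
U_jc = (-1j * H_jc * tau_T).expm()
X = qt.tensor(qt.qeye(N), qt.sigmax())       # ideal pi flip between |g> and |e>
c_ops = [np.sqrt(gamma_c) * a, np.sqrt(gamma_sted) * sm]
tau_gamma = 5.0 / gamma_sted                 # decay time, 5/gamma_STED as in the text

rho = qt.tensor(qt.fock_dm(N, 0), qt.fock_dm(2, 1))  # cavity vacuum, NV in |e>

for step in range(100):
    rho = U_jc * rho * U_jc.dag()                                        # JC step
    rho = qt.mesolve(0 * H_jc, rho, [0.0, tau_gamma], c_ops).states[-1]  # decay
    rho = X * rho * X.dag()                                              # pi flip

P_target = rho.ptrace(0).diag()[n_target].real
print("population of |n=%d> after 100 steps: %.3f" % (n_target, P_target))
\end{verbatim}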
To control the JC coupling, we consider the setup shown in Fig.~\ref{fig:setup}, in which the cavity is designed to be off resonance with the transition $|g\rangle \leftrightarrow |e\rangle$ such that $|\Delta_g| \gg |g|$. A detuning of $\Delta_g=10g$ is large enough to decouple the cavity from the NV center. To switch on the JC coupling, the transition $|g\rangle \leftrightarrow |e\rangle$ is tuned to be on resonance with the cavity, i.e. $\Delta_g +\Delta_s=0$, by the Stark shift $\Delta_s$ induced by the SEF \cite{Starkshift}. However during the JC step the NV center will decay to its ground state $|g\rangle$ at a rate given by $\gamma$, and this decay decreases the fidelity of the target Fock state. We assume a static, large JC coupling $g=30\gamma$ to suppress this detrimental process. Assuming a good optical cavity with $\gamma_c \ll \gamma$, in combination with the large JC coupling strength, further improves the ultimate fidelity of the trapped photon state. \begin{figure} \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(8.5,5) \put(-2.2,-.4){\includegraphics[width=5.8cm]{Fig4a_jt}} \put(3.7,-.2){\includegraphics[width=7.0cm]{bar_chart}} \put(2.3,-.28){(a)} \put(9.4,-.18){(b)} \end{picture} \end{center} \caption{(a) Time evolution of fidelities of the target Fock state $|n=6\rangle$. (i) $\sigma_n=0,\gamma_c=0$; (ii) $\sigma_n=0,\gamma_c=0.1\gamma$; (iii) $\sigma_n=1\%,\gamma_c=0.1\gamma$; (iv) $\sigma_n=2\%, \gamma_c=0$. (b) Probabilities of photon number states at step $73$. Red bars are for cavity decay only ($\sigma_n=0, \gamma_c=0.1\gamma$); blue bars are for $\sigma_n=1\%, \gamma_c=0.1\gamma$. Other parameters are $g=30\gamma,\Delta_g=300\gamma, \gamma_{STED}=10^4\gamma$.}\label{fig:result} \end{figure} In the ideal case of no cavity decay, complete switching on/off of the JC coupling, and no timing error, one can stably trap a Fock state $|n_T\rangle$ with unit fidelity; see the black solid line (i) in Fig.~\ref{fig:result}(a) for instance. Now we discuss the influences of the timing error and the decay of the cavity on the generated trapped photonic Fock state, taking $n_T=6$ as an example. Before the fifth step, only the Fock states with $n<6$ are excited, because in each step the prepared photon number state can only excite the next one. As the operation continues, the target state $|n_T=6\rangle$ becomes increasingly populated. It can be clearly seen from Fig.~\ref{fig:result}(a) that the fidelity $F={\rm Tr}[\hat{\rho}\hat{\rho}_{T}]$ with $|n_T=6\rangle$ \cite{PRL93p130501} increases quickly at first because the excitation probability $P(g,\tau_T,n_T=6)$ is large when the population in $|n=6\rangle$ is small. As more population transfers to $|n=6\rangle$, the probability to excite this state decreases. Overall we have found that the generation of Fock states is fairly robust with the number of repetitions of the process Eq. (\ref{map}), provided the timing error is not too large (ii and iii). The cavity mode becomes stable after $\sim 73$ steps. If only the cavity decay is included (red dashed-dotted line (ii) in Fig.~\ref{fig:result}(a)), the target state is stable once it is prepared after about $100$ steps, and then the fidelity $F$ is very high, about $0.97$. The loss of a cavity photon cancels the small probability of pumping from $|n=5\rangle$ to $|n=6\rangle$ when $F$ is large, and consequently reduces the fidelity. Thus the excitation of the state $|n=5\rangle$ is considerable, see the red bar in Fig.~\ref{fig:result}(b). 
We notice that the timing error in the JC coupling causes a leakage of the population to higher photon number states. This leakage results in a reduction of the fidelity as the operation continues (iii and iv). To provide a limit for the fidelity of a Fock state we can prepare with $n_T \leq 6$, we perform the simulation including both kinds of imperfection: the timing error ($\sigma_n=1\%$) and the cavity decay $\gamma_c=0.1\gamma$. In this case, the probability of $|n=6\rangle$ is about $0.9$ from step $64$ to $94$ (blue dashed line (iii) in Fig.~\ref{fig:result}(a)). Evidently, the prepared Fock state is stable over a wide range of operation steps. This is an advantage of the trapping state \cite{PRL86p3534}. However, about $6\%$ of the population leaks to higher photon number states (blue bar in Fig.~\ref{fig:result}(b)). The fidelity gradually decreases as the Q factor of the cavity decreases. Hailin Wang's group has demonstrated a microspherical cavity with $\gamma_c \sim 0.4\gamma$ coupling to a nanodiamond \cite{NanoLett6p2075}. Using this number, our simulation shows that the fidelity can still be $0.88$ if $\sigma_n=1\%$. A timing error of $2\%$ substantially destroys the trapping condition (iv) and causes a considerable excitation of higher states. As a result, a considerable part, about $15\%$, of the population leaks to higher photon number states. The fidelity of the target state $|n=6\rangle$ decreases rapidly from a maximum of $F=0.81$ after $60$ steps. However, if the operation stops at step $60$, one can still obtain the Fock state $|n=6\rangle$ with high probability. A large error in the interaction time is the crucial reason why a train of atoms successively entering a cavity cannot trap a Fock state with high probability \cite{NJP6p97,PRA36p744}. Thanks to the solid-state setup, this timing error can be much smaller in our operation. As a result, our setup can generate a higher Fock state with a higher fidelity. \begin{figure} \begin{center} \includegraphics[width=7cm]{FN} \end{center} \caption{Numerical verification of the relation Eq.~(\ref{eq:FN}). The blue solid line shows the available fidelity $F$ evaluated by Eq.~(\ref{eq:FN}) as a function of the target state $|n_T\rangle$ for $\alpha=0.5$ and $M=5, \gamma_q/\gamma_c=10^5$. Blue triangles mark the numerical results for $n_T=2,4,6,8,10,12,14,20,30$ and $\sigma_n=0,\gamma_c=0.1\gamma$. Here $\gamma_q$ is equal to $\gamma_{STED}$. Numerical evaluations of $\alpha$ are marked as red stars. Other parameters are $g=30\gamma, \Delta_g=300\gamma$.}\label{fig:FN} \end{figure} Even if the timing of the JC interaction can be controlled perfectly, the decay of the cavity also limits the photon number of the Fock state that can be reached for a set fidelity $F$. Equation~(\ref{eq:FN}) provides a good estimate of the maximum $n_T$ for a set $F$. The constant $\alpha$ (about $0.5$) is numerically evaluated for $\sigma_n=0,\gamma_{STED}/\gamma_c=10^5$. Using this value in Eq.~(\ref{eq:FN}), the fidelity for a certain target state $|n_T\rangle$ is shown in Fig.~\ref{fig:FN}. Clearly, the estimate given by Eq.~(\ref{eq:FN}) is consistent with the numerical results. To generate a Fock state $|n=6\rangle$, we need $\gamma_{STED} > 10^4 \gamma_c$. In the presence of noise, the decay rate $\gamma_{STED}$ needs to be larger. To perform these simulations we used a cavity with $Q\sim 3\times 10^8$ \cite{NaturePhoton6p369,Nature424p839,OL23p247,OL21p453,PRA74p063806,OE15p3390}, corresponding to a decay rate of $\gamma_c\sim 2\pi \times 1.4$~${\rm MHz}$. 
The nanodiamond embedded in the cavity contributes an extra loss channel to the cavity and consequently reduces the Q factor. However this induced loss is proportional to $r^6$, where $r$ is the radius of the particle \cite{PRL99p173603,NPhoton4p46}. This loss contribution is negligible if $r<10$~nm, and nanodiamonds of this size containing nitrogen vacancy centers have been made \cite{NPhoton7p11,Small5p1649}. Experiments have demonstrated that the Q factor of a cavity embedding a nanoparticle, such as a nanodiamond \cite{PRA74p063806} or a potassium chloride nanoparticle \cite{NPhoton4p46}, can be larger than $3\times 10^8$. The nanodiamond also causes scattering in the cavity and leads to a doubling of the linewidth of the cavity or mode splitting. This scattering rate decreases quickly ($\propto r^3$) as the size of the particle decreases \cite{PRL99p173603,NPhoton4p46}. On the other hand, we use the nanodiamond to selectively excite the symmetric mode. Therefore the effect of scattering on the generation of the target state can be neglected for $r<10$~nm. We use the typical value $\gamma /2\pi=14$~${\rm MHz}$~ for the decay rate of the excited state $|e\rangle$ of a single NV center \cite{PRL97p083002,PRL105p177403}. This decay rate can be enhanced by four orders of magnitude if a $20$ ${\rm GW}$ ${\rm cm}$$^{-2}$ cw STED beam is applied. To suppress the decay of population from the state $|e\rangle$ while the JC coupling is on, we need a JC coupling strength $g=30\gamma \sim 400$ ${\rm MHz}$, which can be reached in current experiments \cite{OE17p8081}. This large value of $g$ allows for a shorter $\tau_T$, and on these time scales the mixing between the states ${\bf E}_x$ and ${\bf E}_y$ is negligible. For this coupling strength, a Stark shift of $\Delta_s=10g \sim 2\pi \times 4$ ${\rm GHz}$~ is large enough to switch on/off the excitation of the cavity. Such a Stark shift can be created using two electrodes separated by $10$ ${\mu}{\rm m}$~ and positioned $10$ ${\mu}{\rm m}$~ above the NV center \cite{PRL107p266403}. \section{Conclusion} In conclusion, we have proposed a solid-state setup consisting of a single NV$^-$ center and a high-$Q$ toroidal cavity for the generation of a multi-photon optical Fock state through the iteration of a damped Jaynes-Cummings quantum random walk. By iterating this walk step we found a method to trap an on-demand photonic Fock state with high fidelity within the cavity. \section*{Acknowledgments} One of us (D.E.) is grateful to the Macquarie University Research Centre for Quantum Science and Technology for hospitality during a sabbatical stay during which this work was initiated. We also acknowledge support from the ARC Centre of Excellence in Engineered Quantum Systems and EU Project Quantip. \end{document}
\section{Introduction} \label{sec:intro} Nuclear Magnetic Resonance (NMR) spectroscopy is an indispensable tool for resolving molecular structures in organic chemistry and biochemistry research, especially when it is challenging to crystallize the target system and analyze it by X-ray crystallography.\cite{gil2011constitutional,becette2020solution, krivdin2019computationalH, krivdin2019computationalC} However, it is not always straightforward to map the molecular structure to the experimental spectra for complex systems. Therefore, \textit{ab initio} quantum chemistry now plays an increasingly important role in efforts to reduce ambiguities and confirm structures by predicting the spectrum as a function of stoichiometry or conformation. One of the primary observables determining NMR spectra is the magnetic shielding tensor, which is a second-order property defined at nucleus $A$, that can be defined as: \begin{equation} \boldsymbol{\sigma}_A = \frac{\partial^2 E} { \partial \mathbf{M}_A \partial \mathbf{B}^{\mathrm{ext}} } = - \frac{\partial \mathbf{B}^{\mathrm{ind}}_A } { \partial \mathbf{B}^{\mathrm{ext}} } \label{eq:shielding} \end{equation} where $E$ is the molecular energy, $\mathbf{B}^{\mathrm{ext}}$ is the applied magnetic field, and $\mathbf{M}_A$ is the nuclear spin of $A$. The shielding thus determines the locally induced field, $\mathbf{B}^{\mathrm{ind}}_A = - \boldsymbol{\sigma}_A \mathbf{B}^{\mathrm{ext}} $. Eq. \ref{eq:shielding} indicates that $\boldsymbol{\sigma}_A$ is a (somewhat) spatially localized response to the externally applied field. Typically the isotropic shielding, $\sigma_A = \frac{1}{3}\mathrm{Tr} \boldsymbol{\sigma}_A$, is observed experimentally or simulated. Any electronic structure method can be used to approximate energy ($E$) and the shielding may then be evaluated as either an analytical or numerical derivative. It is well-established that highly accurate methods such as the coupled-cluster theory with single and double excitations and perturbative triple excitations [CCSD(T)] together with large basis sets can provide reliable predictions of NMR shielding constants.\cite{teale2013benchmarking, schattenberg2021extended}. The estimated mean absolute error (MAE) of CCSD(T) at the complete basis set limit (CBS) is on the order of 0.15 ppm for hydrogen nuclei, 0.4 ppm for carbon, 3 ppm for nitrogen, and 4 ppm for oxygen.\cite{teale2013benchmarking, reid2015approximating} These approaches are, however, impractical for any molecules with more than 10 non-hydrogen atoms due to their high computational cost. To study larger systems like proteins, which are often of current interest, various fragmentation methods have been developed.\cite{de1993methods, he2009protein, zhao2017accurate, kobayashi2018application, herbert2019fantasy, chandy2020accurate} These methods employ the local property of NMR shielding, reducing the calculation time to linear scaling with molecule size without significant loss of accuracy (if the fragments are suitably chosen). However, suitable molecular fragments can sometimes contain more than 10 non-hydrogen atoms, which is still prohibitively expensive for high-accuracy calculations. 
Therefore, composite method approximations, a common tool of quantum chemistry for energy evaluations,\cite{curtiss2011gn,narayanan2019accurate,thorpe2019high} have been introduced to this area.\cite{kupka2011ccsd, kupka2013estimation, sun2013accurate, reid2015approximating, semenov2019calculation} Composite approaches employ different levels of theory and basis sets, usually combining high-level theory with a small basis set and low-level theory with a large basis set to approximate the results of high-level theory with a large basis set. Specifically, Reid et al. explored basic composites and double composites in detail and demonstrated that some composite methods can accurately reproduce the CCSD(T)/large-basis-set results.\cite{reid2015approximating} A key concern related to the accuracy-efficiency trade-off in NMR shielding calculations is the composition of the basis set. Jensen found that, in contrast to energy calculations, special basis functions such as tight p-type functions can have a significant effect on predicted NMR shielding constants.\cite{jensen2008basis} Therefore, he designed a family of specialized basis sets, the pcS-n (n $=0-4$) sequence, which have been shown to converge NMR shielding constants faster than other basis set sequences.\cite{jensen2008basis, reid2014systematic, flaig2014benchmarking, jensen2016magnetic} Since the pcS-n sets are generally contracted, Jensen later developed segmented versions of pcS-n, which he called the pcSseg-n family.\cite{jensen2015segmented} With most quantum chemistry codes optimized for segmented basis sets, pcSseg-n calculations are expected to be faster than those with the pcS-n series, with nearly equal accuracy. However, to the best of our knowledge, no paper has demonstrated this with practical calculations. Researchers can further utilize the locality of NMR shielding to reduce the computational costs associated with the size of the basis set. In the 1980s, Chesnut and Moore first introduced the idea of locally dense basis sets (LDBS), which assigns a large basis set only to the target atom (the dense part) and allocates smaller basis sets elsewhere in the molecule.\cite{chesnut1989locally} This can reduce the computation time substantially while keeping acceptable levels of accuracy. Their subsequent studies showed that one can obtain more accurate results when a multiatom segment or chemical functional group is selected as the dense part.\cite{chesnut1993use, chesnut1996use} Recently, Reid et al. performed a systematic study of partition schemes using Jensen's pcS-n basis sets and recommended defining a dense group as a single non-hydrogen atom with its connected hydrogens.\cite{reid2014systematic} Since the composite method approximations and LDBS are designed for different aspects of NMR shielding calculations, it is possible to combine them to retain accuracy while gaining further savings in cost. In this work, we will study the accuracy-efficiency trade-off systematically by employing these two promising approximation methods and provide practical guidance for researchers in choosing the most suitable NMR calculation method for their requirements. First, we will briefly introduce the notations and computational details used in the paper (Section~\ref{sec:method}). Then we will show the difference in accuracy and time requirements between the pcS-n and pcSseg-n basis set series (Section~\ref{subsec:pcS}) and revisit the accuracy of different partition schemes of LDBS using pcSseg-n basis sets (Section~\ref{subsec:bench_LDBS}). 
Finally, we will provide recommendations regarding different accuracy requirements (Section~\ref{subsec:overall}). The conclusions are summarized in Section~\ref{sec:conclusion}. \section{Methods} \label{sec:method} \subsection{Composite method approximations}\label{subsec:composite} A commonly used form of composite method approximation is to approximate a computationally expensive target model, $T_{\text{high}}/B_{\text{large}}$, using three computationally much cheaper calculations: \begin{equation} \begin{aligned} T_{\text{high}}/B_{\text{large}} &\approx T_{\text{low}}/B_{\text{large}} + \left( T_{\text{high}}/B_{\text{small}} - T_{\text{low}}/B_{\text{small}} \right) \\ &= T_{\text{high}}/B_{\text{small}} + \left( T_{\text{low}}/B_{\text{large}} - T_{\text{low}}/B_{\text{small}} \right) \end{aligned} \label{eq:composite} \end{equation} Here $T_{\text{high}}$ and $T_{\text{low}}$ are two levels of theory and $B_{\text{large}}$ and $B_{\text{small}}$ are two basis sets of different sizes. The composite energy defined by Eq. \ref{eq:composite} can then be used to evaluate chemical shifts via Eq. \ref{eq:shielding}. This model can be viewed as correcting a low level of theory in a large basis set for missing correlation effects (first line of Eq. \ref{eq:composite}) on the assumption that such effects can be captured in a small basis set. Or, it can be equivalently viewed as correcting a high level of theory in a small basis set for missing basis set effects on the assumption that such effects can be captured at a lower level of theory (second line of Eq. \ref{eq:composite}). Either view can be justified based on perturbation theory arguments, although similar rates of convergence of the energy with basis set for $T_{\text{high}}$ and $T_{\text{low}}$ are desirable. In other words, the composite approach implicitly assumes that the incomplete basis set errors of chemical shifts at the $T_{\text{high}}$ and $T_{\text{low}}$ levels are of comparable size, which is usually true for density functional theory (DFT), second-order M{\o}ller{\textendash}Plesset perturbation theory (MP2), and CCSD(T) methods. In this paper, we denote a composite method as $T_{\text{high}}\left( B_{\text{small}} \right) \cup T_{\text{low}} \left( B_{\text{large}} \right)$. We choose pcSseg-3 as $B_{\text{large}}$ (or, when combined with LDBS, a partition scheme with pcSseg-3 in the dense region), and we select pcSseg-1 as $B_{\text{small}}$. On the theory side, two levels of accuracy will be investigated: 1) MP2 or double hybrid (DH) DFT as $T_{\text{high}}$ and lower rungs\cite{perdew2005prescription} of DFT as $T_{\text{low}}$; 2) CCSD(T) as $T_{\text{high}}$ and MP2 or DHDFT as $T_{\text{low}}$. \subsection{Locally dense basis set}\label{subsec:LDBS} The local nature of NMR shielding tensors motivates the idea of LDBS as an approach to facilitate calculations. In this work, we explore two kinds of partition schemes. The first one is based on Reid et al.'s recommendations.\cite{reid2014systematic} We regard a target non-hydrogen atom and its bonded hydrogen atoms as a group and denote the resulting scheme as pcSseg-XYZ. X refers to the (largest) basis set allocated to the target group. Similarly, Y refers to the basis set chosen for nearest-neighbor groups, while Z refers to the (smallest) basis set used for more distant groups. We choose pcSseg-321 and pcSseg-331 here. For a detailed comparison, please consult Ref.~\citenum{reid2014systematic}. 
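As a concrete illustration of how the composite recipe of Eq.~\ref{eq:composite} and the pcSseg-XYZ assignment are applied, a minimal Python sketch is given below; the shielding values in the example are hypothetical placeholders rather than results from our benchmark sets.

\begin{verbatim}
# Sketch of the composite correction (Eq. 2) applied to an isotropic
# shielding, and of the pcSseg-XYZ basis assignment by group distance.
# All numerical values are illustrative placeholders.

def composite_shielding(high_small, low_large, low_small):
    # sigma[T_high/B_large] ~ sigma[T_high/B_small]
    #                         + (sigma[T_low/B_large] - sigma[T_low/B_small])
    return high_small + (low_large - low_small)

# e.g. CCSD(T)/pcSseg-1 corrected with RIMP2/pcSseg-3 and RIMP2/pcSseg-1 (ppm)
sigma_c = composite_shielding(high_small=190.2, low_large=187.5, low_small=192.8)

def assign_basis(group_distance):
    # pcSseg-321: target group (distance 0) -> pcSseg-3,
    # nearest-neighbor groups (distance 1) -> pcSseg-2, all others -> pcSseg-1
    if group_distance == 0:
        return "pcSseg-3"
    if group_distance == 1:
        return "pcSseg-2"
    return "pcSseg-1"

print(sigma_c, [assign_basis(d) for d in range(4)])
\end{verbatim}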
The other scheme selects chemical functional groups according to Chesnut’s suggestions.\cite{chesnut1993use} We denote an LDBS using this approach as pcSseg-func-XYZ, and we will choose pcSseg-func-321 as an example to explore. More details on the implementation of the partitioning process are described in the first section in the Supporting Information. \subsection{Computational details}\label{subsec:comp} Four sets of molecules are chosen as our data sets for different purposes: \begin{enumerate} \item NS372 set: Shielding constants at H, C, N, and O nuclei of the large NS372 set\cite{schattenberg2021extended} are chosen as our overall benchmark reference in Section~\ref{subsec:bench_LDBS} and Section~\ref{subsec:overall}, which provides a quite comprehensive assessment of the light main-group elements with CCSD(T)/pcSseg-3 reference data. This set comprises 290 shielding values of 106 molecules containing 123 $^{1}$H, 93 $^{13}$C, 43 $^{15}$N, and 31 $^{17}$O after discarding BH for its large static correlation. The molecular geometries are directly adopted from the Supporting Information of the NS372 paper.\cite{schattenberg2021extended} \item NS212 set: A subset of NS372 containing 89 molecules and 212 nuclei evaluated at the CCSD(T)/pcSseg-4 level is used for comparison of accuracy and efficiency of the pcS-n and pcSseg-n series in Section~\ref{subsec:pcS}. \item M20 set: a set of twenty larger molecules with various common functional groups is applied in comparing the effect of different partition schemes of LDBS in Section~\ref{subsec:bench_LDBS}. Q-Chem 5.4 software\cite{epifanovsky2021software} is used to optimize the molecule structures at the $\omega$B97X-V/aug-cc-pVTZ level\cite{kendall1992thom} after MMFF94 force field\cite{halgren1996merck} pre-optimization. \item Time evaluation set: three molecules containing two non-hydrogen atoms, three molecules containing four non-hydrogen atoms, and three molecules containing eight non-hydrogen atoms are collected to test the time cost of different methods. \end{enumerate} We performed CCSD(T) shielding calculations with the CFOUR program package, version 2.1.\cite{matthews2020coupled,stanton2010cfour,harding2008parallel} All other calculations, if not specified, were carried out using ORCA 5.0.3.\cite{neese2020orca} For all calculations carried out with ORCA, self-consistent field (SCF) convergence was set to $10^{-9}$ while the coupled perturbed self-consistent field convergence was set to a threshold of $10^{-7}$. For DFT calculations, local xc integrals were calculated over ORCA default grid DefGrid3 for all atoms, which is accurate enough for our purposes. Gauge-including atomic orbitals (GIAOs) were employed in all calculations. The resolution of identity approximation (RI) was used for double hybrid DFT (DHDFT) and most MP2 calculations, with the cw5C\cite{hattig2005optimization} auxiliary basis set. For further acceleration,\cite{stoychev2018efficient} def2-JK\cite{weigend2008hartree} auxiliary basis set was employed for the Coulomb and exchange part of MP2 in Section~\ref{subsec:pcS} and \ref{subsec:bench_LDBS}. The pcSseg-n basis sets are used in Section~\ref{subsec:bench_LDBS} and Section~\ref{subsec:overall} following the conclusions of Section~\ref{subsec:pcS}. 
Basis sets not built into the computational packages were downloaded from the Basis Set Exchange (http://www.basissetexchange.org/).\cite{pritchard2019new} The performance of DFT functionals for predicting magnetic shieldings has been extensively benchmarked in previous work.\cite{flaig2014benchmarking, schattenberg2021extended, de2021double} We selected some of the best performers from each rung for our present assessment: B97-D (Rung 2),\cite{grimme2006semiempirical} KT3 (Rung 2),\cite{keal2004semiempirical} B97M-V (Rung 3),\cite{mardirossian2015mapping} SCAN (Rung 3),\cite{sun2015strongly} M06-L (Rung 3),\cite{zhao2006new} PBE0 (Rung 4),\cite{adamo1998toward,adamo1999toward} $\omega$B97X-V (Rung 4),\cite{mardirossian2014omegab97x} $\omega$B97X-D3 (Rung 4),\cite{lin2013long} B2GP-PLYP (Rung 5),\cite{karton2008highly} and DSD-PBEP86 (Rung 5).\cite{kozuch2011dsd,kozuch2013spin} For the $\tau$-dependent meta-GGAs, Dobson's $\tau_{\text{D}}$ model\cite{dobson1993alternative} is used for B97M-V while the ORCA default $\tau_{\text{GI}}$ model\cite{schattenberg2021effect} is used for the others, in light of their reported performance.\cite{schattenberg2021extended} All timing jobs were run on a single Haswell node of the NERSC supercomputer. Each Haswell node (Intel Xeon Processor E5-2698 v3) has two sockets, each populated with a 2.3 GHz 16-core Haswell processor. The computational cost is evaluated by averaging the wall times of single computation tasks over molecules with the same number of non-hydrogen atoms in the time evaluation set. \section{Results and Discussion} \label{sec:bench_method} \subsection{Comparison of accuracy and efficiency of pcS-n and pcSseg-n series}\label{subsec:pcS} We first explore the basis set convergence of magnetic shieldings using the pcS-n and pcSseg-n series for DFT (taking B97-D as the representative functional) and wavefunction theory [i.e., HF, resolution of identity MP2 (RIMP2), and CCSD(T)]. Figure~\ref{fig:basis} displays the root-mean-square errors (RMSEs) for H, C, O, and N nuclei as a function of n, compared to the same method at the CBS limit (approximated with the pcSseg-4 basis set here), on the NS212 set. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{figures/basis_plot.pdf} \caption{Comparison of the RMSEs (in ppm) of pcS-n and pcSseg-n series (n $\leq 3$) for B97-D, HF, RIMP2, and CCSD(T) shielding calculations with respect to pcSseg-4 results with the same method on H, C, O, and N nuclei. The red horizontal lines indicate the intrinsic method error, represented by the RMSEs of the method with pcSseg-4 against CCSD(T)/pcSseg-4. The blue horizontal lines indicate the target error: 0.1 ppm for H nuclei, 1 ppm for C, 3 ppm for N, and 4 ppm for O.} \label{fig:basis} \end{figure} Comparing the panels of Figure~\ref{fig:basis} horizontally reveals that DFT and HF converge faster than the post-HF methods [RIMP2 and CCSD(T)] for both the pcS-n and pcSseg-n series, consistent with earlier results.\cite{reid2014systematic} The slower convergence of the post-SCF methods reflects the polynomial convergence of the wavefunction-based correlation energy with the cardinal number of the AO basis. If we consider the intrinsic method errors, double-zeta basis sets (pcS-1 or pcSseg-1) are sufficient for DFT and HF while triple- or even quadruple-zeta basis sets (n $ \geq 2$) are required to achieve the best performance of the post-HF methods. 
Fortunately, the basis set errors of these different methods for shieldings are still of the same magnitude, suggesting that composite correction methods could be successful. When examining Figure~\ref{fig:basis} more closely, we observe that the pcS-n and pcSseg-n series behave similarly on H nuclei, but their RMSEs on non-hydrogen nuclei cross---pcSseg-0 outperforms pcS-0 for nearly all methods and pcSseg-2 outperforms pcS-2 for DFT and HF, while pcS-2 outperforms pcSseg-2 for post-HF methods and pcS-3 outperforms pcSseg-3 for DFT and HF. However, the differences are quite small compared to the method error (also illustrated in Figure~S4). The elapsed (wall) times for calculations with the pcSseg-n and pcS-n series are similarly quite close. Table~\ref{tab:basis_time} shows the difference is only around 5-20\% except for the CFOUR program. This indicates that either series can be applied in practice. We will utilize pcSseg-n basis sets in the following subsections because the basis sets employed in this study are primarily double-zeta and quadruple-zeta, where the pcSseg-n series takes a bit less time. \begin{table}[ht!] \caption{Comparison of average wall time (in hours) of pcS-n and pcSseg-n basis sets using different methods and different programs for molecules with 8 non-hydrogen atoms. Only one physical core is used here.} \begin{tabular}{lcccc} \hline Basis set & \multicolumn{1}{l}{B97-D (ORCA)} & \multicolumn{1}{l}{B97-D (Q-Chem)} & \multicolumn{1}{l}{RIMP2 (ORCA)} & \multicolumn{1}{l}{MP2 (CFOUR)} \\ \hline pcS-0 & 0.015 & 0.020 & 0.158 & 0.023 \\ pcSseg-0 & 0.016 & 0.022 & 0.156 & 0.025 \\ \hline pcS-1 & 0.059 & 0.050 & 0.286 & 0.358 \\ pcSseg-1 & 0.054 & 0.048 & 0.276 & 0.366 \\ \hline pcS-2 & 0.55 & 0.45 & 1.49 & 7.32 \\ pcSseg-2 & 0.50 & 0.40 & 1.57 & 13.93 \\ \hline pcS-3 & 6.45 & 10.51 & 18.65 & \\ pcSseg-3 & 5.69 & 10.01 & 16.59 & \\ \hline \end{tabular} \label{tab:basis_time} \end{table} \subsection{Relative accuracy and computational cost of 3 different LDBS partition schemes.}\label{subsec:bench_LDBS} We chose to assess three different partition schemes, labeled as pcSseg-321, pcSseg-331, and pcSseg-func-321, in the notation defined in Section~\ref{subsec:LDBS}. These LDBS partition schemes are compared for DFT (B97-D as the representative) and wavefunction theory (RIMP2 as the representative). Figure~\ref{fig:ldbs_vsself} shows the RMSEs with the 3 different LDBS partition schemes for RIMP2 and B97-D taking their CBS value (approximated with the pcSseg-3 basis set) as the reference on the M20 and NS372 sets. First, consistent with the global basis set convergence trends seen in Section~\ref{subsec:pcS}, we find that the error induced by the LDBS approximation is much lower for B97-D than RIMP2 on all four elements. Shielding constants calculated at the RIMP2 level are more sensitive to the choice of basis set, implying that LDBS may work better for DFT and DFT-based composite methods. Second, pcSseg-321 consistently performs the worst. The RMSE of pcSseg-331, and pcSseg-func-321 are comparable for H and N nuclei, whereas pcSseg-func-321 prevails for the O nucleus and pcSseg-331 prevails for the C nucleus. It is worth to note that pcSseg-func-321 does not recognize any functional groups with more than 4 non-hydrogen atoms and thus uses fewer basis functions than pcSseg-331 when describing some important chemical structures like aromatic rings. 
Therefore, the trends in Figure~\ref{fig:ldbs_vsself} tend to reflect the element-specific numbers of basis functions included in pcSseg-331 versus pcSseg-func-321 versus pcSseg-321. \begin{figure}[ht!] \centering \includegraphics[width=0.95\textwidth]{figures/LDBS_vsself.pdf} \caption{Comparison of the RMSEs (in ppm) of pcSseg-321, pcSseg-331, and pcSseg-func-321 for RIMP2 and B97-D with respect to each method itself with pcSseg-3 for NMR shieldings on H, C, O, and N nuclei contained in two different benchmark sets, represented by solid bars and dashed line bars respectively.} \label{fig:ldbs_vsself} \end{figure} Additionally, RMSEs for RIMP2, B97-D, and certain composite methods using various LDBS partition schemes with regard to CCSD(T)/pcSseg-3 on the NS372 set are displayed in Figure~\ref{fig:ldbs_vsCCSDt}. It is evident that the use of LDBS has little to no impact on the RMSE for B97-D and related composite approaches. However, the error associated with the LDBS is more noticeable for the highly accurate composite method CCSD(T)(1)$\ \cup \ $RIMP2($B_{\text{large}}$), where the RMSE increases above the target error. \begin{figure}[ht!] \centering \includegraphics[width=0.95\textwidth]{figures/LDBS_vsCCSDt.pdf} \caption{Comparison of the RMSEs (in ppm) of pcSseg-321, pcSseg-331, pcSseg-func-321, and pcSseg-3 for RIMP2, B97-D, and related composite methods with respect to CCSD(T)/pcSseg-3 on H, C, O, and N nuclei of the NS372 set. Different basis sets to be compared work as larger basis sets ($B_{\text{large}}$) in composite methods. The blue horizontal lines indicate the target error: 0.1 ppm for H nuclei, 1 ppm for C, 3 ppm for N, and 4 ppm for O.} \label{fig:ldbs_vsCCSDt} \end{figure} Table~\ref{tab:ldbs_time} compares the computational cost for RIMP2 and B97-D with various LDBS partition schemes and the global pcSseg-3 basis set on molecules with 8 non-hydrogen atoms. The LDBS technique (pcSseg-func-321 and pcSseg-321) can save more than half of the computational time and the reduction is expected to grow for larger molecules. For a single molecule, we need to calculate different numbers of jobs under different partition schemes. Usually, pcSseg-func-321 will have fewer jobs than pcSseg-331 and pcSseg-321. Therefore, we believe that pcSseg-func-321 is the best partition scheme assessed in terms of accuracy and computing efficiency and we employ it in Section~\ref{subsec:overall}. \begin{table}[ht!] \caption{Comparison of average wall time (in hours) for molecules with 8 non-hydrogen atoms using the pcSseg-321, pcSseg-331, pcSseg-func-321, and pcSseg-3 basis sets for RIMP2 and B97-D. The LDBS wall times are calculated by summing all component jobs needed. We use MPI parallelization with four physical cores here, and shieldings are evaluated at all nuclei.} \centering \begin{tabular}{l c c} \hline & RIMP2 & B97-D \\ \hline pcSseg-321 & 1.977 & 0.635 \\ pcSseg-331 & 3.979 & 1.367 \\ pcSseg-func-321 & 1.994 & 0.645 \\ pcSseg-3 & 4.390 & 1.554 \\ \hline \end{tabular} \label{tab:ldbs_time} \end{table} As shown in Table \ref{tab:ldbs_time}, the compute advantage of the LDBS approach is already useful even when evaluating NMR shieldings at all nuclei in a medium-sized molecule. Larger speedups can be obtained in some special cases. An interesting example is when shieldings are only needed at a single nucleus (or within a single functional group in the LDBS pcSseg-func-321 approach). The speedup then approaches the ratio of pcSseg-3 time to the pcSseg-1 time ($\sim 4^3 - 4^4$). 
Another scenario in which that same speedup is approached is when using double numerical differentiation of energies with finite applied fields and nuclear spins to obtain the shielding. \subsection{Overall benchmark}\label{subsec:overall} Figures~\ref{fig:overall_H}, \ref{fig:overall_C}, \ref{fig:overall_N}, and \ref{fig:overall_O} show the RMSEs of all tested methods across the hydrogen, carbon, oxygen, and nitrogen nuclei respectively against their average wall time for molecules with 8 non-hydrogen atoms using MPI parallelization on four physical cores. These methods can be divided into three levels according to their computational costs, and the best methods for each of the three levels are also in order of overall accuracy. We only labeled the recommended methods for each level here and full numerical data is contained in Tables~S1.3 and S1.4. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{figures/NS372_H.pdf} \caption{Comparison of the RMSEs for proton shieldings (in ppm) of different methods with different basis sets relative to CCSD(T)/pcSseg-3 on the NS372 set against the average wall time (in hours) for molecules with eight non-hydrogen atoms calculated with four physical cores. ``(X)" in the method labels represents the basis set pcSseg-X. Only the recommended methods are labeled here. The blue horizontal lines indicate the target error: 0.1 ppm for H nuclei.} \label{fig:overall_H} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{figures/NS372_C.pdf} \caption{Comparison of the RMSEs for carbon shieldings (in ppm) of different methods with different basis sets relative to CCSD(T)/pcSseg-3 on the NS372 set against the average wall time (in hours) for molecules with eight non-hydrogen atoms calculated with four physical cores. ``(X)" in the method labels represents the basis set pcSseg-X. Only the recommended methods are labeled here. The blue horizontal lines indicate the target error: 1 ppm for C nuclei.} \label{fig:overall_C} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{figures/NS372_N.pdf} \caption{Comparison of the RMSEs for nitrogen shieldings (in ppm) of different methods with different basis sets relative to CCSD(T)/pcSseg-3 on the NS372 set against the average wall time (in hours) for molecules with eight non-hydrogen atoms calculated with four physical cores. ``(X)" in the method labels represents the basis set pcSseg-X. Only the recommended methods are labeled here. The blue horizontal lines indicate the target error: 3 ppm for N nuclei.} \label{fig:overall_N} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{figures/NS372_O.pdf} \caption{Comparison of the RMSEs for oxygen shieldings (in ppm) of different methods with different basis sets relative to CCSD(T)/pcSseg-3 on the NS372 set against the average wall time (in hours) for molecules with eight non-hydrogen atoms calculated with four physical cores. ``(X)" in the method labels represents the basis set pcSseg-X. Only the recommended methods are labeled here. The blue horizontal lines indicate the target error: 4 ppm for O nuclei.} \label{fig:overall_O} \end{figure} Starting with the low-level methods, for protons, PBE0(1) is preferred, while DSD-PBEP86(1) is the best or near-best functional on all other nuclei. However, the accuracy of the low-level methods is not acceptable for practical use and the result of these methods can only be used for rough calculations or possibly as input for machine learning networks. 
If DHDFT (DSD-PBEP86) is not cheap enough for these purposes, then semi-local functionals (like KT3, B97M-V, SCAN, and M06-L) are recommended. For the middle-level methods, it is clear that LDBS (green points) can significantly reduce the calculation time compared with the pcSseg-3 basis set (orange/yellow points) while maintaining nearly the same accuracy. Except for N nuclei, the composite methods corrected by RIMP2 or DHDFT (star points) can decrease the RMSEs substantially compared with the original methods (square points) while increasing the compute costs only slightly. We have labeled the recommended methods for each nucleus in Figures~\ref{fig:overall_H}, \ref{fig:overall_C}, \ref{fig:overall_N}, and \ref{fig:overall_O}. If all four kinds of nuclei are needed at the lowest computational cost, we suggest RIMP2(1) $\cup$ B97-D(func-321) for the C nucleus and DSD-PBEP86(1) $\cup$ B97-D(func-321) for the other nuclei. It is reasonable to use both RIMP2 and DSD-PBEP86 as $T_{\text{high}}$ because their cost is small compared with that of B97-D(func-321) for reasonable molecule sizes. Only proton shieldings achieve their target accuracy with the recommended middle-level methods. Regarding the high-level methods, we note that CCSD(T)(1) $\cup$ RIMP2(3) performs the best on H and C nuclei and CCSD(T)(1) $\cup$ DSD-PBEP86(3) performs the best on N and O nuclei. They are also the only methods that reach the target accuracy for C, N, and O nuclei. When the molecule of interest has more than 4 non-hydrogen atoms, the CCSD(T)(1) part of the composite method will be more expensive than the RIMP2(3) or DSD-PBEP86(3) part (Table~S1.4). Therefore, the two composite methods can be used simultaneously at little extra cost. If a single method is needed for all types of nuclei, researchers can use CCSD(T)(1) $\cup$ RIMP2(3), since it is also close to the target error for N and O nuclei. Additionally, the LDBS technique only marginally cuts down on time but doubles or even triples the errors. Therefore, with the codes used here, the LDBS models that we have tested cannot be recommended for high-level methods. \section{Conclusions} \label{sec:conclusion} Building on the work of prior researchers on locally dense basis sets (LDBS) and composite methods for NMR shielding calculations, we have investigated the most effective strategies to use under various time and accuracy requirements. Regarding basis sets and LDBS approaches, our main conclusions are as follows: \begin{enumerate} \item We demonstrated that there is relatively little difference in either accuracy or compute cost between the pcSseg-n and pcS-n basis sets for n $\geq 1$. The pcSseg-n series was selected for this work simply because it is about 5-10\% cheaper than pcS-n. \item We assessed three different LDBS partition schemes and concluded that (pcSseg-)func-321 is preferable; it allocates the pcSseg-3, pcSseg-2, and pcSseg-1 basis sets to the target group, the nearest-neighbor groups, and more remote groups, respectively, after splitting the molecule by functional groups. \end{enumerate} Our main results were calculations on a large set of NMR shieldings at H, C, N, and O nuclei to evaluate the compute costs and accuracy of many methods employing the LDBS and composite approximations. We divided the methods into three levels: \begin{enumerate} \item To reach the desired high accuracy (0.1 ppm for H, 1 ppm for C, 3 ppm for N, and 4 ppm for O), we recommend CCSD(T)(1) $\cup$ RIMP2(3) for H and C, and CCSD(T)(1) $\cup$ DSD-PBEP86(3) for N and O nuclei. 
\item At a middle level of cost and accuracy, we recommend RIMP2(1) $\cup$ B97-D(func-321) for C and DSD-PBEP86(1) $\cup$ B97-D(func-321) for the other nuclei. This reaches a high level of accuracy for H, while the errors are 2-3 times larger than the target for C, N, and O. \item When the lowest compute cost is essential, such as to generate a large data set of chemical shielding constants, the best option appears to be the use of a semi-local functional (like KT3, B97M-V, SCAN, and M06-L) with the pcSseg-1 basis set. \end{enumerate} We note that all conclusions drawn in this work are dictated by the performance characteristics of the codes used to evaluate the shieldings. Advances in those codes, or the development of new algorithms, could significantly change some of our recommendations. It also seems clear that the development of new electronic structure methods which offer improved trade-offs between cost and accuracy would be highly desirable to further advance calculations whose cost is at the low or middle levels. \section*{Supporting Information} Additional information and figures (SI.pdf) S1-data\_analysis.xlsx S2-raw\_data.xlsx \begin{acknowledgement} This work was primarily supported by funding from the National Institute of General Medical Sciences (National Institutes of Health) under grant number 5U01GM121667. Additional support to complete the project came from the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy through the Gas Phase Chemical Physics Program, under Contract No. DE-AC02-05CH11231. This research used computational resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. \end{acknowledgement} \clearpage \section{The method to partition molecules by functional groups} As shown in Figure~\ref{fig:segmentation}, we can divide molecules into groups based on the non-hydrogen atoms or on functional groups. While the first scheme is easy to implement, the second is nontrivial. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{SI_figures/segmentation_schemes_for_LDBS.pdf} \caption{Different partitioning methods for LDBS selected for this work, including (a) by non-hydrogen atoms and the bonded hydrogen atoms, and (b) by functional groups.} \label{fig:segmentation} \end{figure} Firstly, the xyz2mol code\cite{kim2015universal} is employed to convert the molecule geometry in an xyz file to an sdf file, which contains the bonding information of each structure. Then the RDKit package\cite{landrum2016rdkit} and the SMARTS strings\cite{smarts} in Table~\ref{tab:SMARTS} are used to identify typical chemical functional groups from the sdf file. Once all functional groups have been identified, we designate each group as the target group individually and then define the nearest-neighbor groups and more distant groups according to the bonding information. In the case of 3-butyn-2-one (Figure~\ref{fig:segmentation}), if the target group is the alkynyl group (blue circle), then the carbonyl group (red circle) will be the nearest-neighbor group while the methyl group (black circle) will be the more distant group. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{SI_figures/M20_structures.pdf} \caption{Structures of the M20 data set.} \label{fig:M20} \end{figure} \begin{figure}[ht!] 
\centering \includegraphics[width=\textwidth]{SI_figures/Time_evaluation_data_set_structures.pdf} \caption{Structures of the time evaluation data set.} \label{fig:time_evaluation_data_set} \end{figure} \begin{table}[ht!] \caption{SMARTS used in functional groups identification in this work. Only functional groups containing no more than three non-hydrogen atoms except nitrate are identified to take full advantage of LDBS.} \centering \begin{tabular}{c} \hline SMARTS \\ \hline [O-][N+](=O)[O-] \\\relax [C]=[C]=[O] \\\relax [O]=[C]=[O] \\\relax [C]=[N+]=[N-] \\\relax [N-][N+]\#[N] \\\relax [C]=[C]=[C] \\\relax [N]=[C]=[O] \\\relax [O]=[C]=[S] \\\relax [N+](=O)[O-] \\\relax [C](=O)[O] \\\relax [C](=O)[N] \\\relax [O][N]=[O] \\\relax [C]=[O] \\\relax [N]\#[N] \\\relax [N]=[N] \\\relax [N]=[O] \\\relax [C]\#[N] \\\relax [C]\#[P] \\\relax [P]=[O] \\\relax [P]\#[N] \\\relax [P]=[N] \\\relax [C]\#[C] \\\relax [C;v4]=[N] \\\relax [C]=[C] \\ \hline \end{tabular} \label{tab:SMARTS} \end{table} \newpage \section{Supplemental figures for main paper} \begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{SI_figures/basis_plot_TBE.pdf} \caption{Comparison of the RMSEs (in ppm) of pcS-n and pcSseg-n series for B97-D, HF, and RIMP2 with respect to CCSD(T)/pcSseg-4 on H, C, O, and N nuclei. The red horizontal line indicates an error of 0.1 ppm for H nuclei, 1 ppm for C, 3 ppm for N, and 4 ppm for O.} \end{figure} \newpage
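For reference, the SMARTS matching step described in the first section of this Supporting Information can be sketched in a few lines of Python using RDKit. The file name below is a placeholder, the pattern list is only a small subset of Table~\ref{tab:SMARTS}, and the snippet illustrates the matching step only; it is not our full partitioning script.

\begin{verbatim}
# Sketch of functional group identification with RDKit and SMARTS patterns.
# "molecule.sdf" is a placeholder, assumed to be generated by xyz2mol.
from rdkit import Chem

smarts_patterns = ["[C](=O)[O]", "[C]=[O]", "[C]#[C]", "[C]#[N]"]

mol = Chem.MolFromMolFile("molecule.sdf", removeHs=False)

assigned = set()
groups = []
for s in smarts_patterns:  # larger patterns are listed first so they win
    patt = Chem.MolFromSmarts(s)
    for match in mol.GetSubstructMatches(patt):
        if not assigned.intersection(match):  # skip overlapping matches
            groups.append({"smarts": s, "atoms": list(match)})
            assigned.update(match)

# Any remaining heavy atom starts its own single-atom group; in the actual
# partition its bonded hydrogen atoms follow it (not shown here).
for atom in mol.GetAtoms():
    if atom.GetAtomicNum() > 1 and atom.GetIdx() not in assigned:
        groups.append({"smarts": None, "atoms": [atom.GetIdx()]})

print(groups)
\end{verbatim}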
\section{Introduction} \label{sec:intro} \begin{deluxetable*}{ccccccc}[t!] \tablecaption{Stellar model atmosphere grids available with the first release of ExoTETHyS \label{tab:stellar_grids}} \tablecolumns{6} \tablenum{1} \tablewidth{0pt} \tablehead{ \colhead{Name} & \colhead{Geometry\tablenotemark{a}} & \colhead{Range $T_{\mbox{eff}}$(K)} & \colhead{Range $\log{g}$} & \colhead{Range $[M/H]$} & \colhead{Range $\lambda$ ($\mu$m)} & \colhead{Reference} } \startdata \texttt{ATLAS} & P-P & 3500-50000 & 0.0-5.0 & --5.0-1.0 & 0.009-160.0 & \citet{claret00} \\ \texttt{PHOENIX}\_2012\_13 & S1 & 3000-10000 & 0.0-6.0 & 0.0 & 0.25-10.0 & \citet{claret12,claret13} \\ \texttt{PHOENIX}\_2018 & S1 & 2300-12000 & 0.0-6.0 & 0.0 & 0.05-2.6 & \citet{claret18} \\ \enddata \tablenotetext{a}{Geometry types: P-P=plane-parallel; S1=spherical 1D} \end{deluxetable*} More than 3000 transiting exoplanets have been discovered in the last 20 years. The number of transiting exoplanets accounts for about three-quarters of the current exoplanet census\footnote[7]{source: \url{https://exoplanetarchive.ipac.caltech.edu}}, although this large fraction is due to targeted research programs rather than being a random sample from the exoplanet population. The success of the transit method is due to several contributing factors, including its ability to characterize them in great detail. A transit is revealed by a decrement in flux while the planet occults part of the stellar disk. The main observables are the transit depth and durations, leading to measurements of the exoplanet size, orbital semimajor axis and inclination, and stellar mean density \citep{seager03}. Transit spectroscopy is now routinely used to investigate the chemistry and physics of exoplanet atmospheres, through differences in transit depth of $\sim$10-100 parts per million (ppm) relative to the stellar flux at multiple wavelengths (e.g., \citealp{iyer16, sing16, tsiaras18}). Accurate modeling of the host star effects is mandatory to achieve the spectrophotometric precision required for characterizing the atmosphere of transiting exoplanets. The most prominent effect is stellar limb-darkening \citep{mandel02}, followed by magnetic activity \citep{ballerini12, zellem17}, granulation \citep{chiavassa17}, and, in some cases, rotational oblateness and gravity darkening \citep{howarth17}, and tidal deformations \citep{akinsanmi19, hellard19}. Among the nonstellar effects, the exoplanet nightside emission can also play a significant role \citep{kipping10, morello19}. The \texttt{ExoTETHyS} package is conceived as a toolbox for those who analyze the exoplanetary transits. The first release focuses on the tools for modeling the stellar limb-darkening effect, the importance of which is ubiquitous in transit observations, as well as in optical interferometry, microlensing, and eclipsing binary observations. Future versions of \texttt{ExoTETHyS} will include useful tools for modeling other effects, as well as for estimating their impact on specific observations, based on the astrophysical system parameters, the instrument passband, and the noise level. Accurate modeling of all of the aforementioned effects proved to be crucial in the analysis of several \emph{CoRoT} and \emph{Kepler} objects (e.g., \citealp{mazeh10, barnes11, mazeh12, masuda15, howarth17, reinhold17, shporer17, nielsen19}), because of the high-precision photometry down to the $\lesssim$10~ppm level \citep{christiansen12}. 
A similar photometric precision is expected for some of the ongoing \emph{Transiting Exoplanet Survey Satellite} (\emph{TESS}) observations \citep{ricker14}, future observations with the \emph{CHaracterising ExOPlanet Satellite} (\emph{CHEOPS}; \citealp{isaak19}) and \emph{PLAnetary Transits and Oscillations} (\emph{PLATO}; \citealp{rauer14}), and in spectroscopic observations with the upcoming \emph{James Webb Space Telescope} (\emph{JWST}; \citealp{beichman14}) and \emph{Atmospheric Remote-sensing Infrared Exoplanet Large-survey} (\emph{ARIEL}; \citealp{pascale18}) space missions. Stellar limb-darkening is the wavelength-dependent radial decrease in specific intensity. Consequently, the transit light curve deviates from the flat-bottomed shape that would be observed in the case of a uniform stellar disk; the difference signal can be as large as $\sim$10$^4$ ppm for the transit of a hot Jupiter observed at UV or visible wavelengths. Typically, the radial intensity distribution computed from specific stellar atmosphere models is parameterized by a set of limb-darkening coefficients, which are fixed in the analyses of transit light curves. Many researchers have produced multiple grids of stellar atmosphere models with different codes, which were then used to compile precalculated tables of limb-darkening coefficients (e.g., \citealp{claret00, claret03, claret04, claret08, claret17, claret18, sing10, howarth11b, claret11, claret12, claret13, claret14, neilson13, neilson13b, magic15, reeve16}). The lack of empirical validation for stellar limb-darkening prevents a final choice of the most reliable model(s). The presence of unocculted stellar spots during an exoplanetary transit may alter the effective limb-darkening coefficients, which will be slightly different from those calculated for the case of an unspotted stellar surface \citep{csizmadia13}. In some cases, significantly different parametric intensity profiles have been obtained from the same model atmosphere, depending on the sampling of the model intensity profile, the functional form (so-called limb-darkening law), and/or the fitting algorithm adopted \citep{claret00, heyrovsky07, howarth11, espinoza15}. The system parameters obtained from light curve fits with the alternative sets of limb-darkening coefficients can vary by more than the respective 1$\sigma$ error bars, typically when the relative photometric precision of the observations is of the order of (or better than) 100~ppm per one-minute interval. In this paper, we propose an optimized fitting algorithm for the limb-darkening coefficients that minimizes the difference between (numerically integrated) reference light curves and the corresponding approximated transit models with limb-darkening coefficients. Therefore we eliminate the degeneracy arising from the choice between several fitting algorithms that otherwise lead to significantly different parametric profiles for the same stellar atmosphere model (e.g., \citealp{espinoza15}). The high-fidelity match between the stellar intensity profiles and the transit light curve models facilitates comparative studies of the model atmospheres, especially with the increasing number of observations with a spectrophotometric precision down to $\sim$10~ppm (e.g., \emph{CoRoT}, \emph{Kepler}, \emph{TESS}, and \emph{Hubble Space Telescope} (\emph{HST})/WFC3 data). \subsection{Structure of the paper} Section~\ref{sec:exotethys} provides a technical description of the \texttt{ExoTETHyS} package and the algorithms adopted. 
Section~\ref{sec:performances} discusses the precision of the limb-darkening calculator for the analysis of exoplanetary transits. In particular, Section~\ref{ssec:fitting_methods} compares various algorithms that are adopted in the other publicly available codes and their variants, Section~\ref{ssec:fitting_ld_laws} compares the performances of the alternative limb-darkening laws, and Section~\ref{ssec:GOF_ppm} provides a formula to estimate the potential error in the transit model based on the goodness of fit for the limb-darkening coefficients that should be compared with the noise level in the observations. Section~\ref{sec:usage} discusses the main functionality of the \texttt{ExoTETHyS} package, its current and future usage. Finally, Section~\ref{sec:conclusions} summarizes the key points discussed in this paper. \section{Description of the \texttt{ExoTETHyS} package} \label{sec:exotethys} The first release of \texttt{ExoTETHyS} includes the following subpackages: \begin{enumerate} \item Stellar Atmosphere Intensity Limb (SAIL), which can calculate the limb-darkening coefficients for specific stellar targets or over predetermined parameter grids; \item Transit Ring-Integrated Profile (TRIP), which can compute an exact transit light curve by direct integration of the occulted stellar flux, without using an analytical function (limb-darkening law) to approximate the stellar intensity profile. \end{enumerate} The TRIP subpackage was conceived to model exoplanetary transits. Following requests by users, we are adding a function to model eclipsing binaries. \subsection{The SAIL subpackage} The SAIL subpackage is a generic stellar limb-darkening calculator that is not specific to a predetermined list of instruments or standard passbands. It is conceptually similar to the calculator provided by \cite{espinoza15}, but with different features. A technical difference is the use of a novel fitting algorithm for obtaining the limb-darkening coefficients, specifically optimized for modeling the exoplanetary transits, instead of multiple algorithm options with unclear performances (see Sections~\ref{ssec:fit_ints} and \ref{ssec:fitting_methods}). \subsubsection{Input and output} \label{ssec:sail_IO} The SAIL subpackage requires a configuration file to specify the desired calculation. The user can choose either ``individual'' or ``grid'' calculation type. The first option enables calculation of the limb-darkening coefficients for a star or for a list of stars with the parameters specified by the user, while the latter will provide the limb-darkening coefficients for a grid of precalculated stellar model atmospheres. In both cases, the user must select one of the available stellar model grids, which were computed with different codes and settings (see Table~\ref{tab:stellar_grids} and references therein). For each grid, the stellar models are identified by a set of three parameters, i.e., the effective temperature ($T_{\mbox{\footnotesize{eff}}}$), the surface gravity ($\log{g}$), and the metallicity ($[M/H]$). As the limb-darkening coefficients are mostly dependent on the effective temperature, the user must provide the effective temperatures of all the individual stars. The other parameters have default values of $\log{g}=$4.5 and $[M/H]=$0.0, corresponding to a main-sequence star with solar abundances, if they are not given by the user. For the grid calculation type, the default option is to calculate the limb-darkening coefficients for all the stellar models in the selected database. 
Alternatively, the user can select a subgrid by specifying the minimum and/or maximum values for each stellar parameter. Another key input is the passband, i.e., the total spectral response of the observing instrument. For most instruments, the spectral response is available as a table of photon-to-electron conversion factors at given wavelengths. The limb-darkening coefficients do not depend on the absolute values of the spectral response, so that a scaled/normalized version of the spectral response will give identical results. The spectral responses of the most common instruments for transiting exoplanets are built into the package. The code can accept any user-defined passband with the same file format. It is also possible to calculate the limb-darkening coefficients for multiple wavelength bins within a given passband by specifying the two wavelengths delimiting each bin. This option is particularly useful for exoplanet spectroscopic observations, such as those currently performed with \textit{HST}/WFC3. The last mandatory input in the configuration file is the list of limb-darkening laws to adopt (at least one). The code includes several built-in limb-darkening laws, including all of the most commonly used (see Section~\ref{ssec:ld_laws}), but it can also accept user-defined laws. The ``basic'' outputs are Python dictionaries containing the best-fit limb-darkening coefficients obtained for the required passbands, wavelength bins, and limb-darkening laws. The output dictionaries also provide the corresponding weighted rms of the fitting residuals to allow for a quick quality check (see Section~\ref{ssec:GOF_ppm}). For the individual calculation type, the results obtained for each target are stored in separate pickle files. Optionally, the user can request a ``complete'' output, which includes intermediate products such as the numeric intensity profiles at various stages of the calculation (see Sections~\ref{ssec:integ_ints}-\ref{ssec:ldc_interp}). The additional information in the complete output is offered mainly as a way to identify bugs in the code and/or issues with certain stellar model atmospheres and wavelengths. Usually, exoplanet scientists will be interested in the basic output only. \subsubsection{From the stellar model atmospheres to the passband-integrated intensities} \label{ssec:integ_ints} The stellar model atmosphere grids consist of one file for each triple of stellar parameters ($T_{\mbox{\footnotesize{eff}}}$, $\log{g}$, $[M/H]$), providing the specific intensities ($I_{\lambda}(\mu)$) in units of erg cm$^{-2}$ s$^{-1}$ \AA$^{-1}$ sr$^{-1}$ at several positions on the sky-projected stellar disk over a given spectral range. For historical reasons, the independent variable is $\mu=\cos{\theta}$, where $\theta$ is the angle between the line of sight and the corresponding surface normal. The radial coordinate in the sky-projected disk is $r=\sqrt{1-\mu^2}$, where $r=1$ ($\mu=0$) corresponds to the spherical surface radius. Table~\ref{tab:stellar_grids} reports the information about the databases available with the first release of \texttt{ExoTETHyS}. We refer to the relevant papers and references therein for comparisons between the models.
The passband-integrated intensities are calculated as \begin{equation} \label{eqn:integrated_intensities} I_{\mbox{\footnotesize pass}} (\mu) \propto \int_{\lambda_1}^{\lambda_2} I_{\lambda}(\mu) R_{\mbox{\footnotesize pass}}(\lambda) \lambda d \lambda , \end{equation} where $R_{\mbox{\footnotesize pass}}(\lambda)$ is the spectral response of the instrument in electrons photon$^{-1}$, and $\lambda_1$ and $\lambda_2$ are the passband or wavelength bin limits. The passband-integrated intensities are obtained in units proportional to electrons cm$^{-2}$ s$^{-1}$ sr$^{-1}$. As the limb-darkening coefficients are not affected by the (omitted) proportionality factor in Equation~\ref{eqn:integrated_intensities}, the final intensities are normalized such that $I_{\mbox{\footnotesize pass}} (\mu=1) = 1$. The intensity profiles, $I_{\lambda}(\mu)$, have distinctive behaviors depending on the plane-parallel or spherical geometry adopted by the selected grid of model atmospheres. In particular, the spherical intensity profiles show a steep drop-off close to the stellar limb, which is not observed in the plane-parallel models. The different behaviors have been explained exhaustively in the literature \citep{wittkowski04, espinoza15, morello17}. The almost null intensities at small $\mu$ are obtained by integrating along lines of sight that intersect only the outermost atmospheric shells, which have the smallest emissivity. Here $\mu=$0 ($r=$1) corresponds to the outermost shell of the model atmosphere, which is typically outside the stellar radius that would be observed in transit. Our algorithm calculates the photometric radius at the inflection point of the spherical intensity profile, i.e., where the gradient $|dI(r)/dr|$ is maximal \citep{wittkowski04, espinoza15}. The radial coordinates are then rescaled such that $r=1$ ($\mu=$0) at the photometric radius, and those intensities with rescaled $r>$1 are rejected. No rescaling is performed for the plane-parallel models. \subsubsection{Limb-darkening laws} \label{ssec:ld_laws} A long list of analytical forms, so-called limb-darkening laws, has been proposed in the literature to approximate the stellar intensity profiles. The following options are built into the package: \begin{enumerate} \item the linear law \citep{schwarzschild06}, \begin{equation} \label{eqn:ld_law_linear} I_{\lambda}(\mu) = 1 - a(1-\mu) ; \end{equation} \item the quadratic law \citep{kopal50}, \begin{equation} \label{eqn:ld_law_quadratic} I_{\lambda}(\mu) = 1 - u_1(1-\mu) - u_2(1-\mu)^2 ; \end{equation} \item the square-root law \citep{diaz-cordoves92}, \begin{equation} \label{eqn:ld_law_sqrt} I_{\lambda}(\mu) = 1 - v_1(1-\sqrt{\mu}) - v_2(1-\mu) ; \end{equation} \item the power-2 law \citep{hestroffer97}, \begin{equation} \label{eqn:ld_law_power2} I_{\lambda}(\mu) = 1 - c(1-\mu^{\alpha}) ; \end{equation} \item the four-coefficient law \citep{claret00}, hereinafter referred to as claret-4, \begin{equation} \label{eqn:ld_law_claret4} I_{\lambda}(\mu) = 1 - \sum_{k=1}^{4} a_k(1-\mu^{k/2}) ; \end{equation} \item a generalized $n^{\mbox{\footnotesize{th}}}$-degree polynomial law, \begin{equation} \label{eqn:ld_law_gen_poly} I_{\lambda}(\mu) = 1 - \sum_{k=1}^{n} b_k(1-\mu^{k}) ; \end{equation} \item a generalized claret-$n$ law, \begin{equation} \label{eqn:ld_law_gen_claret} I_{\lambda}(\mu) = 1 - \sum_{k=1}^{n} c_k(1-\mu^{k/2}) ; \end{equation} \end{enumerate} Additionally, user-defined limb-darkening laws can be easily implemented.
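To make the above parameterizations concrete, the snippet below expresses the two-coefficient laws, the power-2 law, and the claret-4 law (Equations~\ref{eqn:ld_law_linear}--\ref{eqn:ld_law_claret4}) as plain Python functions of $\mu$, each normalized such that $I(\mu=1)=1$. This is a minimal illustration written for this description; the function names are our own and the code is not taken from the \texttt{ExoTETHyS} source.

\begin{verbatim}
import numpy as np

# Normalized intensity profiles, with I(mu = 1) = 1 at the disk center.
def linear_law(mu, a):
    return 1.0 - a * (1.0 - mu)

def quadratic_law(mu, u1, u2):
    return 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2

def square_root_law(mu, v1, v2):
    return 1.0 - v1 * (1.0 - np.sqrt(mu)) - v2 * (1.0 - mu)

def power2_law(mu, c, alpha):
    return 1.0 - c * (1.0 - mu ** alpha)

def claret4_law(mu, coeffs):
    # coeffs = (a_1, a_2, a_3, a_4)
    return 1.0 - sum(a_k * (1.0 - mu ** (k / 2.0))
                     for k, a_k in enumerate(coeffs, start=1))
\end{verbatim}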
We recommend using the claret-4 law to achieve a model precision of $\lesssim$10~ppm in the analysis of exoplanetary transits (see Section~\ref{ssec:fitting_ld_laws}). The next release of \texttt{ExoTETHyS} will include a grid of white dwarf models, for which we have also found the claret-4 law to be significantly more accurate than the two-coefficient laws \citep{claret20}. \subsubsection{From the passband-integrated intensities to the limb-darkening coefficients} \label{ssec:fit_ints} The limb-darkening coefficients are obtained through a weighted least-squares fit of the passband-integrated intensity profile with weights proportional to the sampling interval in $r$, hereinafter referred to as \emph{weighted}-$r$ fit. The corresponding cost function is the weighted rms of residuals, \begin{equation} \label{eqn:w-rRMS} \mbox{\emph{weighted}-}r \, \mbox{rms} = \left ( \frac{\sum_{i=1}^{n} w_i ( I_{\mbox{\footnotesize pass}} (\mu_i) - I_{\mbox{\footnotesize pass}}^{\mbox{\footnotesize law}} (\mu_i) )^2}{ \sum_{i=1}^{n} w_i } \right )^{\frac{1}{2}}, \end{equation} with weights \begin{equation} \label{eqn:w-r_weights} w_i = \begin{cases} (1-r_1) + 0.5 \, (r_1-r_2), & \mbox{if} \ i=1 \\ 0.5 \, (r_{i-1}-r_{i+1}), & \mbox{if} \ 1<i<n\\ 0.5 \, r_{n-1}, & \mbox{if} \ i=n \end{cases}, \end{equation} where the $r_i$ are arranged in descending order, and $r_n =0$. The choice of cost function is optimized for the study of exoplanet transits, as detailed in Section~\ref{ssec:fitting_methods}. The performances of the spherical model fits are further enhanced by discarding those points with $r>0.99623$ (after rescaling as explained in Section~\ref{ssec:integ_ints}). This cut is a generalization of that implemented in the quasi-spherical (QS) fits by \cite{claret12}. For this reason, we rename the total fitting procedure explained here for the spherical intensity profiles as the \emph{weighted}-$r$ QS fit. Further details about the alternative fitting procedures are discussed in Section~\ref{ssec:fitting_methods}. \subsubsection{Interpolation from the grid of stellar models} \label{ssec:ldc_interp} The process described in Sections~\ref{ssec:integ_ints}-\ref{ssec:fit_ints} enables the calculation of limb-darkening coefficients for the stellar-atmosphere models contained in the grid, starting from their precalculated specific intensities. The limb-darkening coefficients for an individual target with a generic set of stellar parameters are obtained by sequential linear interpolation through the following steps: \begin{enumerate} \item identification of the neighbors in the model-grid, i.e., the vertices of the cube in parameter space that contains the requested model (maximum 8 models); \item calculation of the limb-darkening coefficients for each of the neighbors; \item interpolation in $[M/H]$ between models with the same $T_{\mbox{\footnotesize{eff}}}$ and $\log{g}$, leading to a maximum of 4 sets of limb-darkening coefficients with the requested $[M/H]$; \item interpolation in $\log{g}$ between the above calculated sets of coefficients with the same $T_{\mbox{\footnotesize{eff}}}$, leading to a maximum of 2 sets of limb-darkening coefficients with the requested $\log{g}$ and $[M/H]$; \item interpolation in $T_{\mbox{\footnotesize{eff}}}$ between the above calculated sets of coefficients. \end{enumerate} We note that this sequential interpolation is possible because of the regularity of the model grids. 
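As an illustration of the \emph{weighted}-$r$ fit of Section~\ref{ssec:fit_ints}, the sketch below computes the weights of Equation~\ref{eqn:w-r_weights} and performs the corresponding weighted least-squares fit of the claret-4 law with \texttt{scipy}. It is a simplified example written for this description, assuming that the input profile is sorted by ascending $\mu$, includes the disk center ($\mu=1$, $r=0$), and, for spherical models, has already been cut and rescaled as explained in Section~\ref{ssec:integ_ints}; it is not the SAIL source code.

\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def weighted_r_weights(r):
    # Sampling-interval weights w_i; r is sorted in descending order
    # and the last element is r = 0 (disk center).
    n = len(r)
    w = np.empty(n)
    w[0] = (1.0 - r[0]) + 0.5 * (r[0] - r[1])
    w[1:-1] = 0.5 * (r[:-2] - r[2:])
    w[-1] = 0.5 * r[-2]
    return w

def fit_claret4(mu, intensity):
    # mu sorted in ascending order, ending at mu = 1 (r = 0).
    r = np.sqrt(1.0 - mu ** 2)
    w = weighted_r_weights(r)
    def residuals(coeffs):
        model = 1.0 - sum(c * (1.0 - mu ** (k / 2.0))
                          for k, c in enumerate(coeffs, start=1))
        return np.sqrt(w) * (intensity - model)
    return least_squares(residuals, x0=np.zeros(4)).x
\end{verbatim}

Minimizing the sum of the $w_i$-weighted squared residuals is equivalent to minimizing the \emph{weighted}-$r$ rms of Equation~\ref{eqn:w-rRMS}, since the normalization $\sum_i w_i$ does not depend on the coefficients.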
\begin{figure*}[t] \plotone{figures/f1.eps} \caption{Example with a model intensity distribution for a star similar to HD209458 ($T_{\mbox{\footnotesize eff}} = 6100 \, \mbox{K}$, $\log{g} = 4.5$), integrated over the 7.59--7.61~$\mu$m wavelength range, by using the \texttt{PHOENIX}\_2012\_13 database (see Table~\ref{tab:stellar_grids}). Top, left panel: normalized specific intensities vs. $\mu$ from the stellar atmosphere model (black circles), \emph{unweighted} (gray), \emph{weighted}-$r$ (orange), and \emph{weighted}-$r$ QS (red) model fits with claret-4 coefficients. The vertical dashed line denotes the cutoff value for the quasi-spherical fit (see Section~\ref{ssec:fitting_methods}). Top, right panel: analogous plot vs. $r$. Bottom panels: residuals between the fitted and model intensity values. The corresponding unweighted and weighted rms amplitudes of residuals are also reported. Note that, in this case, the unweighted least-squares fit leads to a non-monotonic radial intensity profile, which is physically unexpected. \label{fig:unphysical_ldfit}} \end{figure*} \begin{figure*} \floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,top},capbesidewidth=0.5\textwidth}}]{figure}[\FBwidth] {\caption{Top panel: simulated transit light curve (black) of HD209458~b as it would be observed by \emph{TESS}, and best-fit model with claret-4 limb-darkening coefficients obtained with the \emph{weighted}-$r$ QS method (red). Bottom panels: residuals between the reference light curve and the best-fit models with claret-4 limb-darkening coefficients obtained with different limb-darkening laws and fitting methods (see Section~\ref{ssec:fitting_methods}). The peak-to-peak and rms amplitudes of the residuals are reported.}\label{fig:TESS_transits}} {\hspace{-1.7cm}\includegraphics[width=0.37\textwidth]{figures/f2a.eps}} \includegraphics[width=\textwidth]{figures/f2b.eps} \end{figure*} \begin{figure*}[t] \plotone{figures/f3.eps} \caption{Peak-to-peak of residuals between the reference spectral light curves for the transit of HD209458~b and the best-fit models with claret-4 limb-darkening coefficients obtained with different fitting methods (see Section~\ref{ssec:fitting_methods}). Left panel: results obtained with the spherical methods, i.e., taking into account the whole spherical intensity profiles. Right panel: results obtained with the quasi-spherical methods, i.e., with a cutoff of $r \le 0.99623$, and the \emph{weighted}-$r$ method (dotted, orange line). The \emph{unweighted} QS (gray line) and the \emph{weighted}-$\mu$ QS (green line) overlap in the plot. Note the scale difference between the two panels. \label{fig:spectrum_residuals}} \end{figure*} \subsection{The TRIP subpackage} The TRIP subpackage is used to generate exact transit light curves by direct integration of the occulted stellar flux at given instants. It assumes a dark spherical planet transiting in front of a spherically-symmetric (unspotted) star. In this simple case, the normalized flux (i.e., relative to the stellar flux) is a function of two geometric variables, as reported by \cite{mandel02}, and of the stellar intensity profile: \begin{equation} \label{eqn:F_p_z} F(p,z, I(\mu)) = 1 - \Lambda (p,z,I(\mu)) , \end{equation} where $p$ is the planet-to-star radii ratio ($p=R_p/R_*$), $z$ is the sky-projected distance between the centers of the two spheres in units of the stellar radius, and $I(\mu)$ is the stellar intensity profile. 
TRIP does not use an analytical approximation of the limb-darkening profile, unlike most transit light curve calculators such as those provided by \cite{mandel02}, \cite{gimenez06}, \cite{agol19}, \texttt{JKTEBOP} \citep{southworth04}, \texttt{TAP} \citep{gazak12}, \texttt{EXOFAST} \citep{eastman13}, \texttt{PyTransit} \citep{parviainen15}, \texttt{BATMAN} \citep{kreidberg15}, and \texttt{PYLIGHTCURVE}\footnote[8]{\url{https://github.com/ucl-exoplanets/pylightcurve}} \citep{tsiaras16}. \subsubsection{Input and output} The TRIP subpackage requires a configuration file, where the user has to specify the name of the text files containing the limb-darkening profile, the phase, time, or $z$-series for which to calculate the normalized flux, and a list of parameter values that includes $p$ and those parameters eventually needed to compute the $z$-series (see Section~\ref{ssec:z_dist}). The limb-darkening file consists of two columns with the $\mu$ or $r$ values (first column) and the corresponding specific intensities (second column). A list of optional parameters can be used to set the calculation details, i.e., the number of annuli, the interpolation variable, and the polynomial order for the spline interpolation (see Section~\ref{ssec:norm_flux}). It is also possible to define simple operations on the original limb-darkening profile, i.e., a possible cutoff in $\mu$ or $r$ with or without rescaling the $\mu$ or $r$ values to the cutoff radius. The output is a text or pickle file containing the normalized flux series for the requested phase, time, or $z$-series. \subsubsection{Computing the $z$-series} \label{ssec:z_dist} In general, $z$ is a function of the orbital phase ($\Phi$), i.e., the fraction of orbital period ($P$) from the closest transit event: \begin{equation} \label{eqn:phi_definition} \Phi = \frac{t - \mbox{E.T.}}{P} - n, \end{equation} where $t$ denotes time, E.T. is the Epoch of Transit (i.e., a reference mid-transit time), and $n$ is the number of orbits from the E.T. rounded to the nearest integer. Conventionally, $\Phi$ values are in the range of $[-0.5, \ 0.5]$ or $[0, 1]$ and $\Phi=0$ at mid-transit time. The projected star--planet separation is given by \begin{equation} \label{eqn:z_dist} z = \begin{cases} a_R \sqrt{1 - \cos^2{( 2 \pi \Phi )} \sin^2{i} } & \mbox{circular orbit} \\ a_R \frac{1-e^2}{1+e \cos{f}} \sqrt{1-\sin^2{(f+\omega)} \sin^2{i}} & \mbox{eccentric orbit} \end{cases} , \end{equation} where $a_R$ is the orbital semimajor axis in units of the stellar radius, $i$ is the inclination, $e$ is the eccentricity, $\omega$ is the argument of periastron, and $f$ is the true anomaly. In the eccentric case, the true anomaly is calculated from the orbital phase by solving Kepler's equation, \begin{equation} \label{eqn:kepler_ecc} \frac{\pi}{2} - \omega + 2 \pi \Phi = E - e \sin{E} , \end{equation} then \begin{equation} \label{eqn:true_anomaly} f = 2 \arctan{ \left ( \sqrt{\frac{1+e}{1-e}} \tan{\frac{E}{2}} \right )} . \end{equation} \subsubsection{Calculating the normalized flux} \label{ssec:norm_flux} The total and occulted stellar flux are given, respectively, by the integrals \begin{equation} \label{eqn:Fstar_integral} F_{*} = \int_{0}^{1} I(r) \, 2 \pi r \, dr , \end{equation} and \begin{equation} \label{eqn:Focc_integral} F_{*,\mbox{\footnotesize occ}} = \int_{0}^{1} I(r) \, 2 \pi r \, f_{p,z}(r) \, dr , \end{equation} with \begin{multline} \label{eqn:fpzr_fraction} f_{p,z}(r) =\\ \left. 
\begin{cases} \frac{1}{ \pi} \arccos{ \frac{r^2 + z^2 - p^2}{2zr}} & |z-p| < r < z+p \\ 0 & r \le z-p \ \mbox{or} \ r \ge z+p \\ 1 & r \le p-z \end{cases} \right |_{0 \le r \le 1} . \end{multline} $I(r)$ is the specific intensity at the normalized radial coordinate $r=\sqrt{1-\mu^2}$, and $f_{p,z}(r)$ is the fraction of circumference with radius $r$ covered by the planet. Equations~\ref{eqn:Fstar_integral} and \ref{eqn:Focc_integral} rely on the assumed spherical symmetry for the star; Equation~\ref{eqn:fpzr_fraction} also makes use of the planet sphericity. Finally, the normalized flux is given by Equation~\ref{eqn:F_p_z} with \begin{equation} \label{eqn:lambda_p_z} \Lambda (p,z,I(\mu)) = \frac{F_{*,\mbox{\footnotesize occ}}}{F_{*}} . \end{equation} The integrals in Equations~\ref{eqn:Fstar_integral} and \ref{eqn:Focc_integral} are calculated numerically by using the mid-point rule with a uniform partition in $r$. The specific intensities are evaluated at the partition radii by interpolating in $\mu$ or $r$ from the input limb-darkening profiles. The TRIP algorithm with default settings is identical to the ``tlc'' described by \cite{morello17}. \section{Performance of \texttt{ExoTETHyS}} \label{sec:performances} \subsection{Comparison between fitting algorithms for the stellar intensity profiles} \label{ssec:fitting_methods} \begin{figure*}[t] \plotone{figures/f4.eps} \caption{Best-fit transit parameters to the reference spectral light curves for the transit of HD209458~b assuming claret-4 limb-darkening coefficients obtained with different fitting methods (see Section~\ref{ssec:fitting_methods}). The true parameter values are reported in black. Left panels: results obtained with the spherical methods, i.e., taking into account the whole spherical intensity profiles. Right panels: Results obtained with the quasi-spherical methods, i.e., with a cutoff of $r \le 0.99623$, and the \emph{weighted}-$r$ method (dotted, orange line). Note the scale difference between the two panels. \label{fig:spectrum_params}} \end{figure*} A long list of methods has been adopted in the literature for fitting the limb-darkening laws to the model intensity profiles leading to significantly different limb-darkening coefficients. The coefficients obtained with a simple least-squares fit depend on the spatial distribution of the precalculated intensities. The effect of sampling is particularly evident for the \texttt{PHOENIX} profiles because of a much finer sampling near the drop-off region. For example, Figure~\ref{fig:unphysical_ldfit} shows the case of a star similar to HD209458 in the mid-infrared, for which the simple least-squares solution presents a non-monotonic (unphysical) profile with unexpected undulations. 
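Before comparing the fitting procedures, we note for reference that the $z$-series computation of Section~\ref{ssec:z_dist} (Equations~\ref{eqn:z_dist}--\ref{eqn:true_anomaly}) reduces to a few lines of code. The Python sketch below is an illustrative implementation under the conventions quoted above, not the TRIP source code; the function names are our own.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def true_anomaly(phi, e, omega):
    # Solve Kepler's equation for the eccentric anomaly E,
    # then convert E to the true anomaly f.
    M = np.pi / 2.0 - omega + 2.0 * np.pi * phi
    E = brentq(lambda E: E - e * np.sin(E) - M, M - 1.0, M + 1.0)
    return 2.0 * np.arctan(np.sqrt((1.0 + e) / (1.0 - e)) * np.tan(E / 2.0))

def z_separation(phi, a_R, inc, e=0.0, omega=0.0):
    # Projected star--planet separation in units of the stellar radius.
    if e == 0.0:
        return a_R * np.sqrt(1.0 - np.cos(2.0 * np.pi * phi) ** 2
                             * np.sin(inc) ** 2)
    f = true_anomaly(phi, e, omega)
    return (a_R * (1.0 - e ** 2) / (1.0 + e * np.cos(f))
            * np.sqrt(1.0 - np.sin(f + omega) ** 2 * np.sin(inc) ** 2))
\end{verbatim}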
In this paper, we compare the following fitting procedures: \begin{enumerate} \item \textit{unweighted}, i.e., simple least-squares fit; \item \textit{weighted}-$r$, i.e., weighted least-squares fit with weights proportional to the sampling interval in $r$, as detailed in Equations~\ref{eqn:w-rRMS} and \ref{eqn:w-r_weights}; \item \textit{weighted-$\mu$}, i.e., weighted least-squares fit with weights proportional to the sampling interval in $\mu$; \item \textit{interp-$\mu$~100}, i.e., least-squares fit on the intensities interpolated over 100 $\mu$ values with a uniform separation in $\mu$, as suggested by \cite{claret11}; \item \textit{interp-$\mu$~1000}, i.e., least-squares fit on the intensities interpolated over 1000 $\mu$ values with a uniform separation in $\mu$; \item \textit{interp-$r$~100}, i.e., least-squares fit on the intensities interpolated over 100 $r$ values with a uniform separation in $r$, as suggested by \cite{parviainen15b} (with an unspecified number of interpolated values); \item \textit{interp-$r$~1000}, i.e., least-squares fit on the intensities interpolated over 1000 $r$ values with a uniform separation in $r$; \item \textit{unweighted} QS, i.e., least-squares fit with a cutoff $r \le 0.99623$; \item \textit{weighted}-$r$ QS, i.e., analogous to \textit{weighted}-$r$ with a cutoff $r \le 0.99623$; \item \textit{weighted}-$\mu$ QS, i.e., analogous to \textit{weighted}-$\mu$ with a cutoff $r \le 0.99623$. \end{enumerate} The cutoff is used to remove the steep drop-off characteristic of the spherical models, hence the term QS. The QS approach was first proposed by \cite{claret12}, who applied a cutoff $\mu \ge 0.1$ to their library of \texttt{PHOENIX} models with the original $\mu$ values. In this work, we redefine the cutoff using the rescaled $r$, such that it corresponds to the same fraction of the photometric stellar radius for all the models (see Section~\ref{ssec:integ_ints}). Our new definition with $r \le 0.99623$ is equivalent to the previous one for the majority of models, particularly for those models that may correspond to main-sequence stars. However, the libraries of \texttt{PHOENIX} models incorporated in the \texttt{ExoTETHyS} package also include models of stellar atmospheres with lower gravities than those analyzed by \cite{claret12}, corresponding to subgiant, giant, and supergiant stars. For some of these models, the intensity drop-off occurs at $\mu>0.1$, so that the cutoff of $\mu \ge 0.1$ (not rescaled) would be ineffective. In order to evaluate the merits of the alternative fitting procedures to the stellar intensity profile, we generated exact synthetic transit light curves using the TRIP subpackage and compared these light curves with their best-fit solutions obtained with the various sets of claret-4 limb-darkening coefficients. Figure~\ref{fig:TESS_transits} shows the residuals obtained for a noiseless simulation of the transit of HD209458~b in the \emph{TESS} passband when adopting the different sets of limb-darkening coefficients. The \emph{weighted}-$r$ QS method implemented in \texttt{ExoTETHyS}.SAIL gives the smallest residuals, with a peak-to-peak of 2~ppm and rms amplitude below 1~ppm. The other QS methods, \textit{weighted}-$\mu$ QS and \textit{unweighted} QS, lead to almost identical residuals, with a peak-to-peak of 3~ppm. 
Among the spherical methods, the \textit{weighted}-$r$ gives the smallest residuals with a peak-to-peak of 9~ppm and rms amplitude of 2~ppm, followed by the \textit{interp-$r$~100} and \textit{interp-$r$~1000} with about 1.5 and 2 times larger residual amplitudes, respectively. All the other methods lead to significantly larger residuals of tens to a few hundred ppm, which are comparable with the predicted noise floor of 60~ppm for the \emph{TESS} observations \citep{ricker14}. Figure~\ref{fig:spectrum_residuals} shows the peak-to-peak of the residuals for the same transit as a function of wavelength, based on simulated light curves with 20~nm passband widths. This spectral analysis confirms the relative ranking of the fitting methods derived from the \emph{TESS} simulation. In particular, the \textit{weighted}-$r$ QS method leads to a peak-to-peak of residuals below 2~ppm at wavelengths longer than 1~$\mu$m, and overall below 8~ppm. The other quasi-spherical methods are marginally worse than \textit{weighted}-$r$ QS at wavelengths shorter than 2~$\mu$m, but the worst case peak-to-peak of residuals is less than 13~ppm. The \textit{weighted}-$r$ method leads to peak-to-peak of residuals in the range of 5-15~ppm, with a sawtooth-like modulation in wavelength. We noted that the small, but abrupt, jumps that occur at certain wavelengths correspond to changes of the inflection point in the stellar intensity profile as defined in Section~\ref{ssec:integ_ints}. The same phenomenon occurs for all the other spherical models with larger sawtooth-like modulations. It may appear surprising that the peak-to-peak of residuals obtained with the spherical methods tends to be larger at the longer wavelengths, for which the limb-darkening effect is expected to be smaller. The cause of the poor performances of most spherical methods in the infrared is the intensity drop-off, which is typically steeper than the drop-off in the UV and visible. Such drop-off has a negligible effect in the numerically integrated transit light curves, hence the better performances of the QS fits. Figure~\ref{fig:spectrum_params} shows the best-fit transit parameters corresponding to the same spectral light curves, and compared with the respective input parameters corrected for the rescaled $r$ (see Section~\ref{ssec:integ_ints}). We retrieved the correct transit depth within 5~ppm, the impact parameter within 6$\times$10$^{-4}$, and the transit duration within 1~s at all wavelengths, when using the \textit{weighted}-$r$ or QS limb-darkening coefficients. However, slightly larger spectral trends appear in these parameters because of the wavelength-dependent stellar radius. The peak-to-peak variation in transit depth over the spectral range of 0.25--10~$\mu$m is 10~ppm. The other sets of limb-darkening coefficients introduce orders-of-magnitude larger biases in the retrieved transit parameters, also larger spectral sawtooth-like modulations in the infrared (few tens of ppm in transit depth across 1-10~$\mu$m), and severe discrepancies between the parameter values obtained in the UV/visible and those obtained in the infrared. \begin{figure*}[t] \plotone{figures/f5.eps} \caption{Top, left panel: peak-to-peak of residuals between the reference spectral light curves for the transit of HD209458~b and the best-fit models using the limb-darkening coefficients calculated for the different laws (see Section~\ref{ssec:ld_laws}). Top, right panel: \emph{weighted}-$r$ QS rms of residuals to the model intensity profiles. 
Bottom panels: zoom-in of the panels above. \label{fig:spectrum_residuals_ldlaws}} \end{figure*} \begin{figure*}[t] \plotone{figures/f6.eps} \caption{Best-fit transit parameters to the reference spectral light curves for the transit of HD209458~b using the limb-darkening coefficients calculated for the different laws (see Section~\ref{ssec:ld_laws}). The true parameter values are reported in black. \label{fig:spectrum_params_ldlaws}} \end{figure*} \begin{table*}[t] \centering \caption{Spectral analysis of the error in transit depth when adopting different limb-darkening laws.} \label{tab:p2_bias} \begin{tabular}{cccccc} \tablewidth{0pt} \hline \hline & Wavelength range & Claret-4 & Power-2 & Quadratic & Square-root \\ \hline Maximum bias & 0.25--10.0~$\mu$m & 5 & 165 & 235 & 174 \\ (ppm) & $<$1~$\mu$m & 4 & 165 & 235 & 174 \\ & $>$1~$\mu$m & 5 & 19 & 27 & 18 \\ & $>$5~$\mu$m & 3 & 4 & 10 & 5 \\ \hline Rms bias & 0.25--10.0~$\mu$m & 1 & 20 & 20 & 23 \\ (ppm) & $<$1~$\mu$m & 2 & 71 & 62 & 81 \\ & $>$1~$\mu$m & 1 & 5 & 11 & 4 \\ & $>$5~$\mu$m & 1 & 2 & 6 & 2 \\ \hline Spectrum & 0.25--10.0~$\mu$m & 10 & 177 & 258 & 341 \\ peak-to-peak & $<$1~$\mu$m & 7 & 177 & 254 & 341 \\ (ppm) & $>$1~$\mu$m & 10 & 27 & 17 & 25 \\ & $>$5~$\mu$m & 2 & 3 & 4 & 3 \\ \hline Spectrum std & 0.25--10.0~$\mu$m & 2 & 20 & 18 & 23 \\ (ppm) & $<$1~$\mu$m & 1 & 45 & 58 & 64 \\ & $>$1~$\mu$m & 2 & 7 & 4 & 5 \\ & $>$5~$\mu$m & $<$1 & $<$1 & $<$1 & $<$1 \\ \hline \end{tabular} \end{table*} \subsection{Performance of the limb-darkening laws} \label{ssec:fitting_ld_laws} Figure~\ref{fig:spectrum_residuals_ldlaws} compares the peak-to-peak of the spectral light curve residuals when adopting the limb-darkening coefficients calculated by \texttt{ExoTETHyS}.SAIL for different limb-darkening laws, as well as the corresponding \emph{weighted}-$r$ QS rms of the residuals to the stellar intensity profiles. The correlation between the two goodness-of-fit measures is explored in Section~\ref{ssec:GOF_ppm}. At wavelengths $\gtrsim$3~$\mu$m, the precision of the power-2 and square-root limb-darkening coefficients is comparable to that of the claret-4 coefficients, resulting in light curve residuals below 5~ppm. While the claret-4 law performs similarly well even at shorter wavelengths, the two-coefficient laws lead to larger light curve residuals up to $\sim$100~ppm in the UV and visible. The quadratic law is less precise, leading to light curve residuals above 25~ppm even at 10~$\mu$m. Figure~\ref{fig:spectrum_params_ldlaws} shows the fitted transit parameters and their expected values. Typically, the bias in transit depth is of the same order of magnitude of the light curve residuals, but it can be both larger or smaller than their peak-to-peak amplitudes owing to parameter degeneracies. Table~\ref{tab:p2_bias} reports the statistics of the errors in transit depth obtained with the different limb-darkening laws across given spectral ranges. The maximum bias in transit depth at 5--10~$\mu$m is within 10~ppm for any limb-darkening parameterization, which is just below the minimum photon noise floor for \emph{JWST}/Mid-InfraRed Instrument (MIRI) observations \citep{beichman14}. At $\sim$1~$\mu$m, the two-coefficient laws may introduce a spectral slope of a few tens of ppm, which may have an impact in the analysis of the \emph{HST}/WFC3 spectra \citep{tsiaras18}. At wavelengths shorter than 1~$\mu$m the two-coefficient laws are unreliable for exoplanet spectroscopy, so that the claret-4 law must be preferred. 
These conclusions are in agreement with previous studies based on both simulated and real data \citep{espinoza16, morello17, morello18, maxted18}. \begin{figure}[t] \plotone{figures/f7.eps} \caption{\emph{Weighted}-$r$ QS rms of residuals to the model intensity profiles vs. peak-to-peak of the transit light curve residuals for the spectral templates of HD209458~b adopting different limb-darkening laws. The black line is the global linear fit. \label{fig:resints_vs_reslc}} \end{figure} \subsection{Predicted precision in light curves} \label{ssec:GOF_ppm} Figure~\ref{fig:resints_vs_reslc} shows that, for a fixed transit geometry, the peak-to-peak of light curve residuals is roughly proportional to the \emph{weighted}-$r$ QS rms of stellar intensity residuals. We found an approximately linear correlation between the two goodness-of-fit measures for the simulated spectral light curves and stellar intensity profiles, thus obtaining a wavelength-independent proportionality factor. We repeated this test for analogous sets of spectral light curves with different transit parameters, thereby obtaining different proportionality factors. Our preliminary study suggests that \begin{multline} \label{eqn:GOF_prop} (\mbox{peak-to-peak})_{\mbox{\footnotesize ppm}} = (k \times 10^6) \times p^2 \\ \times (\mbox{\emph{weighted}-}r \, \mbox{QS} \, \mbox{rms} ) , \end{multline} where $k$ is a factor of order unity ($k \gtrsim 1$). Equation~\ref{eqn:GOF_prop} provides a useful tool for estimating the systematic noise in the light curve models solely due to the limb-darkening parameterization. The systematic noise in the light curve models should be smaller than the photon noise limit of the observation in order to avoid significant parameter biases. Note that Equation~\ref{eqn:GOF_prop} does not account for uncertainties in the stellar parameters, discrepancies between real and model intensity profiles, and other contaminating signals that may increase the total systematic noise. \section{Usage of \texttt{ExoTETHyS}} \label{sec:usage} Currently, the main use of the \texttt{ExoTETHyS} package is to compute stellar limb-darkening coefficients through the SAIL subpackage. These coefficients can be adopted to simulate the exoplanetary transit light curves, which are widely used by the scientific consortia of the future exoplanet missions for multiple studies. In particular, \texttt{ExoTETHyS} will be linked with \emph{ARIEL}-Sim \citep{sarkar16} and \texttt{ExoNoodle} (a generator of time-series spectra of exoplanetary systems originally designed for \emph{JWST} observations; M. Martin-Lagarde et al., in prep.), and it has already been adopted by several members of the two mission consortia. It is also common practice to fix the limb-darkening coefficients obtained from stellar models, such as those calculated with \texttt{ExoTETHyS}.SAIL, in the analysis of exoplanetary transit light curves. This approach relies on the perfect match between the model and the real stellar intensity distributions, otherwise introducing a potential bias in the derived exoplanet and orbital parameters. Some authors recommended leaving the limb-darkening coefficients free in the light curve fits to minimize the potential bias, but the strong parameter degeneracies may lead to larger error bars or prevent the convergence of the fit \citep{southworth08, csizmadia13}. The parameter degeneracies can be mitigated by using multiwavelength transit observations to better constrain the orbital parameters \citep{morello17, morello18}.
Here we suggest an approach to take advantage of the knowledge of the stellar parameters in the form of Bayesian priors. The stellar parameters will then be optimized in the light curve fits instead of using fixed or fully unconstrained limb-darkening coefficients. The limb-darkening coefficients for a given set of stellar parameters, and a given passband or spectroscopic bin, can be interpolated from a precalculated grid. The grid calculation type (see Section~\ref{ssec:sail_IO}) was specifically designed for this purpose. \section{Conclusions} \label{sec:conclusions} We introduced \texttt{ExoTETHyS}, an open-source Python package that offers accessory tools for modeling transiting exoplanets and eclipsing binaries. It includes a versatile stellar limb-darkening calculator with multiple choices of model atmosphere grids, parameterizations, passbands (also accepting user input), and specific user-friendly calculation settings. We demonstrated an optimal fitting algorithm for the limb-darkening coefficients, thus eliminating the degree of freedom associated with the choice of fitting algorithm. The claret-4 coefficients obtained through this algorithm ensure a precision level $\lesssim$10~ppm in the relevant transit light curves at all wavelengths. The precision achieved exceeds by one order of magnitude that obtained with most of the algorithms proposed in the previous literature for stellar models with spherical geometry. We also proposed a simple formula for estimating the light curve model precision, based on the goodness-of-fit for the limb-darkening coefficients. Finally, we discussed the current and future usage of \texttt{ExoTETHyS} with emphasis on exoplanet atmospheric spectroscopy in the era of \emph{JWST} and \emph{ARIEL}. \acknowledgments The authors would like to thank Ren{\'e} Gastaud and Daniel Dicken for useful discussions. G. M., M. M.-L. and P.-O. L. were supported by the LabEx P2IO, the French ANR contract 05-BLANNT09-573739 and the European Union's Horizon 2020 Research and Innovation Programme, under Grant Agreement N$^\circ$776403. G.M. also acknowledges the contribution from INAF through the ``Progetto Premiale: A Way to Other Worlds'' of the Italian Ministry of Education, University, and Research. A.C. acknowledges financial support from the Spanish MEC (AYA2015-71718-R and ESP2017-87676-C5-2-R) and the State Agency for Research of the Spanish MCIU through the ``Center of Excellence Severo Ochoa'' award for the Instituto de Astrof\'isica de Andaluc\'ia (SEV-2017-0709). \bibliographystyle{aasjournal}
\section{Experiments} \label{sec:experiments} To demonstrate the benefits of our approach, we summarize selected experimental results from work on dynamic word embeddings, which used our proposed BBVI algorithm. \paragraph{Dynamic Skip-Gram Model.} \citet{bamler2017dynamic} proposed the dynamic skip-gram model as a generalization of the skip-gram model to time-stamped text corpora~\citep{mikolov_distributed_2013}. The model thus represents words as latent trajectories in an embedding space, and is able to track how words change their meaning over time. To train the model, one first summarizes the text corpus in terms of certain sufficient statistics, using a finite vocabulary. For each time stamp, one builds co-occurrence matrices of words and their surrounding words. In addition, the model requires so-called \emph{negative} examples, expressing the likelihood of co-occurrence by mere chance. The model then describes these statistics in terms of the geometrical arrangement of certain latent word and context embedding vectors. The prior over these embedding vectors is a latent Ornstein-Uhlenbeck process (a Wiener process in the presence of a restoring force), enforcing a continuous drift over time. For more details, we refer to~\citep{bamler2017dynamic}. \paragraph{Algorithms and Baselines.} {\bf \begin{sc}SGI\end{sc}{}} denotes the non-Bayesian skip-gram model with independent random initializations of word vectors~\citep{mikolov_distributed_2013,hamilton2016diachronic}. {\bf \begin{sc}SGP\end{sc}{}} denotes the same approach as above, but with word and context vectors being pre-initialized with the values from the previous year, as in~\citep{kim2014temporal}. Both approaches have no dynamical priors, hence no overhead, and scale linearly with $T$. {\bf \begin{sc}\mbox{DSG-F}\end{sc}{}} denotes the dynamic skip-gram filtering algorithm, proposed in~\citep{bamler2017dynamic}, which also runs in $O(T)$, but uses the mean field approximation and only uses information from the past. Finally, {\bf \begin{sc}\mbox{DSG-S}\end{sc}{}} denotes the dynamic skip-gram smoothing algorithm. This is the $O(T)$ algorithm proposed in this paper, applied to the dynamic skip-gram model. \paragraph{Data.} We used data from the Google books corpus~\citep{michel_quantitative_2011} from the last two centuries ($T=209$). We also used the ``State~of~the~Union'' ({SoU}) addresses of U.S.~presidents, which span more than two centuries. Finally, we used a {Twitter} corpus of news tweets for $21$ randomly drawn dates from $2010$ to $2016$. Details on hyperparameters and preprocessing are given in~\citep{bamler2017dynamic}. \paragraph{Results.} We focus on the quantitative results of~\citep{bamler2017dynamic}, showing that the dynamic skip-gram smoothing algorithm (described and generalized in this paper) generalizes better to unseen data than all baselines at similar run time. We thereby analyze held-out predictive likelihoods on word-context pairs at a given time $t$, where $t$ is excluded from the training set. The predictive likelihoods as a function of time are shown in Figure~\ref{fig:pred-log-likelihoods}. On all three corpora, differences between the two implementations of the static model (\begin{sc}SGI\end{sc}{} and \begin{sc}SGP\end{sc}{}) are small, which suggests that pre-initializing the embeddings with the previous result has little impact on the predictive power.
Log-likelihoods for the skip-gram filter (DSG-F) grow over the first few time steps as the filter sees more data, and then saturate. Skip-gram smoothing (this paper) further outperforms skip-gram filtering. To conclude, we stress that a structured BBVI algorithm with quadratic instead of linear run time in $T$ would be impractical. We therefore hope that our structured reparameterization trick will spur further research on complex latent time series models. \section{Introduction} \label{sec:motivation} Continuous latent time series models are popular tools in Bayesian machine learning. One thereby combines multiple copies of a likelihood model through a time series prior, thus enabling the latent variables to drift over time while still sharing statistical strength across all times~\citep{blei2006dynamic,wang2012continuous,sahoo2012hidden,gultekin_collaborative_2015,charlin2015dynamic,ranganath2015survival,jerfel2016dynamic, bamler2017dynamic}. Variational Inference (VI) enables approximate inference for complex models by solving an optimization problem~\citep{jordan_introduction_1999}. One chooses a variational distribution and fits it to the posterior, where the fully-factorized mean field class is the most widely used. However, the standard mean field class is not a good choice when it comes to time series models. Instead, one often uses \emph{structured} variational distributions in time~\citep{wainwright2008graphical,blei2006dynamic}. The perhaps simplest such choice is a multivariate Gaussian with a tridiagonal precision matrix. This can be seen as using the probabilistic model of the Kalman filter as a variational distribution (variational Kalman filter)~\citep{blei2006dynamic}, and reflects the fact that the prior is a first-order Markov process. In this paper, we introduce an efficient VI algorithm for fitting such tridiagonal Gaussian variational models to a large class of complex latent time series models. We draw on Black Box Variational Inference (BBVI) with reparameterization gradients~\citep{salimans2013fixed,kingma2013auto,rezende2014stochastic,ruiz2016generalized}, where one forms a Monte-Carlo estimate of the variational lower bound's gradient. The problem with structured variational distributions is that computing this reparameterization gradient is expensive; a naive implementation involves a dense matrix multiplication and scales as $O(T^2)$, where $T$ is the number of time steps. In this paper, we lay out a general algorithm which gives the reparameterization gradient in $O(T)$. This algorithm can be thought of as a variant of the forward-backward algorithm~\citep{murphy2012machine} in the context of BBVI, and relies on a reparameterization procedure that never involves the inversion or multiplication of a dense matrix. We illustrate our approach on the dynamic word embedding model~\citep{bamler2017dynamic}. \section{Background and Problem Setting} We start by specifying our generative and variational models and give an overview of the proposed algorithm. \label{sec:problem} \paragraph{Generative model.} We consider models with time-dependent observations $\mathbf x\equiv(x_1,\ldots, x_T)$, where $T$ is the number of time steps. At each time step $t\in\{1,\ldots,T\}$, the generative process is described by an arbitrary likelihood function $p(x_t|z_t)$, and $z_t$ is a latent variable. We furthermore assume that the prior is Markovian.
The joint probability thus factorizes as follows, \begin{align} p(\mathbf x, \mathbf z) = \prod_{t=1}^T p(z_t|z_{t-1})\, p(x_t|z_t), \label{eq:model_class} \end{align} where $p(z_1|z_0) \equiv p(z_1)$ denotes an arbitrary prior on the first latent variable. Many models fall into this class. Our goal is an efficient posterior approximation for this model class, using a structured variational approximation and black box inference techniques. \paragraph{Kalman smoothing revisited.} Before we describe our approach, we revisit the Kalman smoother \citep{rauch1965maximum} as an efficient algorithm for a particularly simple realization of Eq.~\ref{eq:model_class} where all conditional distributions are Gaussian. This is often called a Wiener process (more precisely, it is a Gauss-Markov process). The prior is Gaussian with tridiagonal precision matrix $\Lambda_0$, and the likelihood is a Gaussian centered around $z_t$ with precision $\tau$. Thus, the posterior is also Gaussian, $p(\mathbf z|\mathbf x)=\mathcal N(\mathbf z; \boldsymbol\mu, \Lambda^{-1})$, and can be obtained analytically. One finds $\Lambda = \Lambda_0 + \tau I$ and $\boldsymbol\mu = \tau \Lambda^{-1} \mathbf x$. Obtaining the posterior modes $\boldsymbol\mu$ involves solving the linear system of equations $\Lambda\boldsymbol\mu = \tau\mathbf x$. For an arbitrary matrix $\Lambda$, this involves $O(T^2)$ operations. However, for the Wiener process, $\Lambda$ is tridiagonal and one can solve for $\boldsymbol\mu$ in linear time using a specialized algorithm. The forward-backward algorithm~\citep{murphy2012machine} implicitly decomposes $\Lambda=AB$ into a product of two bidiagonal matrices $A$ and $B$, where $A$ is lower triangular and $B$ is upper triangular. One starts with a forward pass through the data, in which one solves $A\boldsymbol{\tilde\mu} = \tau \mathbf x$ for the auxiliary vector $\boldsymbol{\tilde\mu}$ using forward substitution, followed by a backward pass in which one solves $B\boldsymbol\mu = \boldsymbol{\tilde\mu}$ for $\boldsymbol\mu$ using back substitution. As detailed in section~\ref{sec:backprop}, we use a similar philosophy. \paragraph{Variational model.} For a general likelihood model $p(x_t|z_t)$, the exact posterior of Eq.~\ref{eq:model_class} is intractable. To circumvent this problem, we use BBVI to minimize the KL divergence between a variational distribution $q_{\boldsymbol\lambda}$ and the true posterior. Here, $\boldsymbol\lambda$ summarizes all variational parameters. We use a structured variational distribution as our variational model that is motivated by the exact posterior of the analytically solvable model discussed above. We thus consider a Gaussian with a tridiagonal precision matrix $\Lambda$, \begin{align}\label{eq:def-q} q_{\boldsymbol\lambda}(\mathbf z) \equiv \mathcal N(\mathbf z; \boldsymbol\mu, \Lambda^{-1}). \end{align} This can be understood as varying the parameters of a (fictitious) Wiener process so that its posterior approximates the posterior of the true model~\citep{blei2006dynamic}. As we discuss below, the tridiagonal structure of $\Lambda$ allows us to fit the variational distribution efficiently. Note that the covariance matrix $\Lambda^{-1}$ is dense, thus encoding long-range correlations between any two time steps. Modelling correlations is important for many time series applications, in particular when the prior is strong e.g. due to little evidence per time step. 
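As a concrete illustration of the analytically solvable case revisited above, the following sketch computes the posterior means $\boldsymbol\mu$ by solving $\Lambda\boldsymbol\mu = \tau\mathbf x$ with a banded solver in $O(T)$; the banded Cholesky factorization performed internally plays the role of the bidiagonal decomposition $\Lambda = AB$. The specific tridiagonal prior precision used here (a random-walk prior with innovation precision $\tau_0$) is our own choice for illustration.

\begin{verbatim}
import numpy as np
from scipy.linalg import solveh_banded

def wiener_posterior_mean(x, tau, tau0):
    # Posterior means for a Gauss-Markov (random-walk) prior with
    # innovation precision tau0 and Gaussian likelihood precision tau:
    # solves (Lambda_0 + tau * I) mu = tau * x in O(T), exploiting the
    # tridiagonal structure of the precision matrix.
    x = np.asarray(x, dtype=float)
    T = len(x)
    ab = np.zeros((2, T))            # upper banded storage of Lambda
    ab[1, :] = tau + 2.0 * tau0      # diagonal
    ab[1, 0] -= tau0                 # boundary terms of the prior
    ab[1, -1] -= tau0
    ab[0, 1:] = -tau0                # superdiagonal
    return solveh_banded(ab, tau * x)
\end{verbatim}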
\paragraph{Black box variational inference.} In BBVI, one fits $q_{\boldsymbol\lambda}(\mathbf z)$ to the posterior $p(\mathbf z|\mathbf x)$, maximizing the evidence lower bound (ELBO) using stochastic gradients. The reparameterization trick amounts to representing the ELBO as \begin{align} \label{eq:elbo-reparameterized} \mathcal L(\boldsymbol\lambda) = \mathbb E_{\boldsymbol\epsilon \sim \mathcal N(0, I)}[\log p(\mathbf x, \mathbf \mathbf f(\boldsymbol\lambda; \boldsymbol\epsilon))] + H[q_{\boldsymbol\lambda}]. \end{align} The entropy $H[q_{\boldsymbol\lambda}] \equiv -\mathbb E_{q_{\boldsymbol\lambda}}[\log q_{\boldsymbol\lambda}(\mathbf z)]$ is often an analytic function or can be estimated using other tricks. Here, $\boldsymbol\epsilon$ is a vector of $T$ independent standard-normal distributed random variables, and $\mathbf f\equiv (f_1, \ldots, f_T)$ denotes $T$ functions that are defined such that the random variable $\mathbf f(\boldsymbol\lambda; \boldsymbol\epsilon)$ has probability density $q_{\boldsymbol\lambda}$ (see Section~\ref{sec:forward-prop} below). In order to implement an efficient BBVI algorithm, one needs to be able to estimate the gradient of $\mathcal L$ efficiently, i.e., in $O(T)$ time. This involves the following three tasks: \begin{enumerate} \item Efficiently evaluate the entropy $H[q_{\boldsymbol\lambda}]$ (Section~\ref{sec:entropy}); \item Efficiently evaluate $\mathbf f(\boldsymbol\lambda; \boldsymbol\epsilon)$ (Section~\ref{sec:forward-prop}); \item Efficiently estimate the reparameterization gradient (Section~\ref{sec:backprop}). \end{enumerate} All three of the above tasks can easily be solved efficiently if one chooses a mean field variational distribution, i.e., if $q_{\boldsymbol\lambda}(\mathbf z)$ factorizes over all time steps. However, the mean field approximation ignores correlations between time steps, which are important in many time series models as discussed above. In Section~\ref{sec:inference} we address each one of the above tasks individually. \section{Inference} \label{sec:inference} In this section, we give the details of our new black box variational inference algorithm. \subsection{Evaluating the Entropy} \label{sec:entropy} The entropy of a multivariate Gaussian with precision matrix $\Lambda$ is $H[q_{\boldsymbol\lambda}]=-\frac12 \log(\det \Lambda)$, up to an additive constant. Evaluating the determinant of a general $T\times T$ matrix takes $O(T^3)$ operations in practice. To avoid this expensive operation, we parameterize the precision matrix via its Cholesky decomposition, \begin{align} \label{eq:cholesky-lambda} \Lambda = B^\top B. \end{align} Here, $B$ is an upper triangular $T\times T$ matrix. Since we restrict $\Lambda$ to have a tridiagonal structure, $B$ is bidiagonal \citep{kilic_inverse_2013}, i.e., it has the structure \begin{align}\label{eq:bmatrix} B(\boldsymbol\nu, \boldsymbol\omega) = \begin{pmatrix} \nu_{1} & \omega_{1} & & & \\ & \nu_{2} & \omega_{2} & & \\ & & \ddots & \ddots & \\ & & & \nu_{T-1} & \omega_{T-1} \\ & & & & \nu_T \end{pmatrix}, \end{align} with $\nu_t>0\; \forall t\in\{1,\ldots,T\}$. As the mapping from $B$ to $\Lambda$ is unique, it suffices to optimize the $(2T-1)$ non-zero components of $B$ in order to find the optimal entries of $\Lambda$. It turns out that we never have to construct the matrix $\Lambda=B^\top B$ explicitly. 
The variational parameters ${\boldsymbol\lambda}\equiv(\boldsymbol\mu, \boldsymbol\nu, \boldsymbol\omega)$ are thus the marginal means $\boldsymbol\mu$ (see Eq.~\ref{eq:def-q}) and the non-zero components of $B$. Using the relation $\det \Lambda = (\det B)^2$, we can evaluate the entropy in linear time, \begin{align} H[q_{\boldsymbol\lambda}] = - \sum_{t=1}^T \log\nu_t + const. \label{eq:entropy} \end{align} \subsection{Evaluating the reparameterization functions} \label{sec:forward-prop} In contrast to the entropy, the expected log-joint in Eq.~\ref{eq:elbo-reparameterized} cannot be evaluated analytically for a general model. We obtain an unbiased gradient estimator $\mathbf{\hat g}$ of the expected log-joint by drawing $S$ independent samples $\boldsymbol\epsilon^{[s]} \sim \mathcal N(0,I)$ for $s\in\{1,\ldots, S\}$. For what follows, let $\lambda_i$ denote any of the $(3T-1)$ variational parameters. Using the chain rule, the estimate of the gradient of the expected log-joint with respect to $\lambda_i$ is \begin{align} \hat g_i &\equiv \frac{\partial}{\partial \lambda_i}\left[ \frac{1}{S}\sum_{s=1}^S \log p(\mathbf x, \mathbf f(\boldsymbol\lambda, \boldsymbol\epsilon^{[s]}))\right] \nonumber\\ &= \frac{1}{S}\sum_{s=1}^S \sum_{t=1}^T \gamma_t^{[s]} \frac{\partial f_{t}({\boldsymbol\lambda}; \boldsymbol\epsilon^{[s]})}{\partial \lambda_i} \label{eq:reparam-gradient} \end{align} where we defined \begin{align} \gamma_t^{[s]} &\equiv \left. \frac{\partial\log p(\mathbf x,\mathbf z^{[s]})}{\partial z_t^{[s]}}\right|_{\mathbf z^{[s]} = \mathbf f(\boldsymbol\lambda, \boldsymbol\epsilon^{[s]})} \label{eq:def-gamma}\\ &= \frac{\partial [ \log p(z_t^{[s]}|z_{t-1}^{[s]}) \!+\! \log p(z_{t+1}^{[s]}|z_t^{[s]}) \!+\! \log p(x_t|z_t^{[s]})]}{\partial z_t^{[s]}}.\nonumber \end{align} To further simplify Eq.~\ref{eq:reparam-gradient} we specialize the reparameterization function to our variational model. Using Eqs.~\ref{eq:def-q} and \ref{eq:cholesky-lambda}, we find \begin{align}\label{eq:def-f} \mathbf f(\boldsymbol\mu,\boldsymbol\nu, \boldsymbol\omega; \boldsymbol\epsilon^{[s]}) = \boldsymbol\mu + B(\boldsymbol\nu,\boldsymbol\omega)^{-1} \boldsymbol\epsilon^{[s]}. \end{align} Here, $B^{-1}$ is a dense (upper triangular) $T\times T$ matrix. Instead of computing the inverse, we evaluate the term $\mathbf y^{[s]}\equiv B^{-1}\boldsymbol\epsilon^{[s]}$ on the right-hand side of Eq.~\ref{eq:def-f} by solving the linear system $B\mathbf y^{[s]}=\boldsymbol\epsilon^{[s]}$ for $\mathbf y^{[s]}$ using back substitution. This takes only $O(T)$ operations due to the bidiagonal structure of $B$, see Eq.~\ref{eq:bmatrix}. We can therefore evaluate Eq.~\ref{eq:def-gamma} in $O(T)$ time for each sample $s\in\{1,\cdots,S\}$. \subsection{Estimating the Reparameterization Gradient} \label{sec:backprop} Last, we show how to efficiently estimate the reparameterization gradient in $O(T)$ time. In this subsection we describe an efficient method to evaluate the Jacobian $\partial\mathbf f/\partial \boldsymbol\lambda$ that appears on the right-hand side of Eq.~\ref{eq:reparam-gradient}. A naive evaluation of all gradient estimates $\hat g_i$ would require $O(S\times T^2)$ operations: the sums on the right-hand side of Eq.~\ref{eq:reparam-gradient} run over $S\times T$ terms, and the number of variational parameters $\lambda_i$ for which the gradient estimate has to be evaluated grows at least linearly in $T$ for a reasonably flexible variational distribution. 
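Before turning to the gradient computation, the ingredients of Sections~\ref{sec:entropy} and~\ref{sec:forward-prop} can be summarized in a short sketch. It is written for this description only (using SciPy's banded solver, with $\boldsymbol\nu$ and $\boldsymbol\omega$ stored as vectors of length $T$ and $T{-}1$) and is not meant as a reference implementation.

\begin{verbatim}
import numpy as np
from scipy.linalg import solve_banded

def entropy(nu):
    # Entropy of q up to an additive constant.
    return -np.sum(np.log(nu))

def sample_z(mu, nu, omega, rng):
    # Draw z = f(lambda; eps) = mu + B^{-1} eps without forming B^{-1}:
    # solve the bidiagonal system B y = eps by back substitution.
    T = len(mu)
    eps = rng.standard_normal(T)
    ab = np.zeros((2, T))     # banded storage of B
    ab[1, :] = nu             # diagonal
    ab[0, 1:] = omega         # superdiagonal
    y = solve_banded((0, 1), ab, eps)
    return mu + y, eps
\end{verbatim}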
We use the following trick to evaluate all gradient estimates in linear time in $T$. For any invertible matrix $B$ that depends on some parameter $\lambda_i$, we have \begin{align} 0 = \frac{\partial I}{\partial \lambda_i} = \frac{\partial (B B^{-1})}{\partial \lambda_i} = \frac{\partial B}{\partial \lambda_i} B^{-1} + B \frac{\partial B^{-1}}{\partial \lambda_i}. \end{align} Solving for $\partial B^{-1}/\partial \lambda_i$, we obtain \begin{align}\label{eq:derivative-inverse} \frac{\partial B^{-1}}{\partial \lambda_i} = - B^{-1} \frac{\partial B}{\partial \lambda_i} B^{-1}. \end{align} Eq.~\ref{eq:derivative-inverse} expresses the derivative of the dense (upper triangular) matrix $B^{-1}$ in terms of the derivative of the bidiagonal matrix $B$. In fact, both $\partial B/\partial \nu_t$ and $\partial B/\partial \omega_t$ are sparse matrices with only a single non-zero entry, see Eq.~\ref{eq:bmatrix}. The dense matrix $B^{-1}$ still appears on the right-hand side of Eq.~\ref{eq:derivative-inverse} but we avoid evaluating it explicitly. Instead, we again solve a bidiagonal linear system of equations. Combining Eqs.~\ref{eq:reparam-gradient}, \ref{eq:def-f} and \ref{eq:derivative-inverse}, we obtain the formal expressions \begin{align} \hat g_{\mu_t} &= \frac{1}{S} \sum_{s=1}^S \gamma_t^{[s]}; \label{eq:grad-elbo-mu} \\ \hat g_{\nu_t} &= - \frac{1}{S} \sum_{s=1}^S \underbrace{((\boldsymbol\gamma^{[s]})^\top B^{-1})_t}_{y^{\prime[s]}_t} \, \underbrace{(B^{-1} \boldsymbol\epsilon^{[s]})_t}_{y^{[s]}_t}; \label{eq:grad-elbo-nu} \\ \hat g_{\omega_t} &= - \frac{1}{S} \sum_{s=1}^S \underbrace{((\boldsymbol\gamma^{[s]})^\top B^{-1})_t}_{y^{\prime[s]}_t}\, \underbrace{(B^{-1} \boldsymbol\epsilon^{[s]})_{t+1}}_{y^{[s]}_{t+1}} \label{eq:grad-elbo-omega} \end{align} where bold face $\boldsymbol\gamma^{[s]} \equiv (\gamma^{[s]}_1,\ldots, \gamma^{[s]}_T)^\top$ denotes a column vector of $T$ derivatives of the log-joint as defined in Eq.~\ref{eq:def-gamma}. Instead of computing the inverse $B^{-1}$ in Eqs.~\ref{eq:grad-elbo-nu}--\ref{eq:grad-elbo-omega}, we obtain the vectors $(\mathbf y^{\prime[s]})^\top \equiv (\boldsymbol\gamma^{[s]})^\top B^{-1}$ and $\mathbf y^{[s]} \equiv B^{-1} \boldsymbol\epsilon^{[s]}$ by solving the linear systems $B^\top \mathbf y^{\prime[s]} = \boldsymbol\gamma^{[s]}$ and $B \mathbf y^{[s]} = \boldsymbol\epsilon^{[s]}$, respectively, in $O(T)$ time using the bidiagonal structure of $B$. Since $B^\top$ is lower-diagonal and $B$ is upper-diagonal these operations carry-out a forward and a backward pass through the time steps, respectively. This is similar to the forward-backward algorithm for the Wiener process with Gaussian likelihood discussed in Section~\ref{sec:problem}. Eqs.~\ref{eq:grad-elbo-mu}--\ref{eq:grad-elbo-omega} conclude the derivation of the gradient estimator for the expected log-joint. The gradient of the ELBO with respect to $\nu_t$ contains an additional term $-1/\nu_t$ due to the entropy, see Eq.~\ref{eq:entropy}. \input{experiments}
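To make the full procedure concrete, the sketch below (our own illustrative code, not a reference implementation; the callback \texttt{grad\_log\_joint} and all names are assumptions) assembles the $O(S\,T)$ estimator of Eqs.~\ref{eq:grad-elbo-mu}--\ref{eq:grad-elbo-omega}: it draws the samples, evaluates the reparameterization function of Eq.~\ref{eq:def-f} by back substitution, solves the second bidiagonal system of the forward pass, accumulates $\hat g_{\mu}$, $\hat g_{\nu}$, and $\hat g_{\omega}$, and finally adds the entropy term $-1/\nu_t$ from Eq.~\ref{eq:entropy}.
\begin{verbatim}
import numpy as np

def solve_upper_bidiag(nu, om, rhs):
    """Back substitution for B y = rhs with B upper bidiagonal; O(T)."""
    T = len(nu)
    y = np.empty(T)
    y[-1] = rhs[-1] / nu[-1]
    for t in range(T - 2, -1, -1):
        y[t] = (rhs[t] - om[t] * y[t + 1]) / nu[t]
    return y

def solve_lower_bidiag(nu, om, rhs):
    """Forward substitution for B^T y = rhs (B^T is lower bidiagonal); O(T)."""
    T = len(nu)
    y = np.empty(T)
    y[0] = rhs[0] / nu[0]
    for t in range(1, T):
        y[t] = (rhs[t] - om[t - 1] * y[t - 1]) / nu[t]
    return y

def elbo_gradient(mu, nu, om, grad_log_joint, S=10, rng=None):
    """Stochastic gradient of the ELBO w.r.t. (mu, nu, om).
    grad_log_joint(z) must return the length-T vector gamma of derivatives
    of log p(x, z) with respect to z (a model-specific user function)."""
    if rng is None:
        rng = np.random.default_rng()
    T = len(mu)
    g_mu, g_nu, g_om = np.zeros(T), np.zeros(T), np.zeros(T - 1)
    for _ in range(S):
        eps = rng.standard_normal(T)
        y = solve_upper_bidiag(nu, om, eps)      # y = B^{-1} eps (backward pass)
        z = mu + y                               # reparameterized sample
        gamma = grad_log_joint(z)
        yp = solve_lower_bidiag(nu, om, gamma)   # yp = B^{-T} gamma (forward pass)
        g_mu += gamma / S
        g_nu -= (yp * y) / S
        g_om -= (yp[:-1] * y[1:]) / S
    g_nu -= 1.0 / nu                             # entropy contribution
    return g_mu, g_nu, g_om
\end{verbatim}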
\section{Introduction} Electronic mechanisms of electron pairing with high binding energy is a challenging problem that opens up broad prospects for the discovery of novel many-particle effects in various low-dimensional structures and modern materials, not to mention the high-temperature superconductivity~\cite{combescot2015excitons,kagan2013modern}. Recently we have proposed a purely electronic mechanism that potentially could provide a high enough binding energy~\cite{PhysRevB.98.115137,2018arXiv180410826G}. It is caused by a spin-dependent component of the electron-electron (e-e) interaction that appears because of the Rashba-like spin-orbit interaction (SOI) induced by the Coulomb field between electrons~\cite{PhysRevB.95.045138}. The origin of the spin-orbit component of the pair interaction of electrons is similar to that of the spin-dependent component of the impurity potential that causes skew scattering and side-jumping in the theory of the extrinsic spin Hall effect~\cite{Vignale2009}. This mechanism can be effective in materials with a strong Rashba SOI\@. The conditions under which electron pairs are formed, the bound-state spectrum and electronic structure were studied for the quantum wires and two-dimensional (2D) electron systems. For realistic conditions the binding energy was estimated to be in the meV range. In the present paper we show that the binding energy can be strongly increased by a suitable choice of the dielectric environment. The pairing mechanism has unusual properties due to the key role that the SOI plays in the formation of the pairs. The SOI component of the e-e interaction depends on both the spin and momentum of electrons. Therefore the e-e interaction becomes attractive for a certain electron spin orientation tied to the momentum. This leads to the formation of the pairs of two distinct kinds with different spin structure depending on what type of motion creates the SOI: the relative motion of electrons or the motion of their center of mass. The binding energy of the electron pair is set by the SOI constant of the material, the magnitude of the electric field and its coordinate dependence. In experiments, the 2D electron system is implemented in a thin film, the surrounding environment of which is known to strongly affect the electric field in the film. In a recent paper, we considered a 2D electron system embedded in a dielectric medium with the same dielectric constant as that of the material of the 2D layer with SOI\@. In this case the problem is solved analytically~\cite{PhysRevB.98.115137}, which allows us to prove the existence of the two-electron bound states, find their general properties and estimate the binding energy to be on the level of meVs. However, from the point of view of the experimental implementation, of greater interest is the situation where the dielectric constant $\epsilon$ of the surroundings is much lower than that of the material with strong SOI\@. This situation is also interesting theoretically, since the presence of the low-$\epsilon$ surroundings leads not only to an increase in the interaction potential, but also to the significant change in its spatial dependence, especially at a small distance between the particles~\cite{rytova,keldysh1979coulomb}. The latter is especially important in our case, since the attractive component of the interaction caused by the SOI is determined by both the magnitude of the electric field and its coordinate dependence. 
In this paper, we study the bound states in a thin film with strong SOI in a low-$\epsilon$ dielectric environment taking fully into account the dielectric screening. Such electronic systems are realized on the basis of graphene, 2D transition metal dichalcogenides, and thin layers of Bi\textsubscript{2}Se\textsubscript{3}. Although in such materials the band spectrum can be quite complex, in the present work we confine ourselves to a single-band model, which is nevertheless sufficient to capture the new effect of SOI\@. We find that the dielectric screening in the layer strongly facilitates the pairing to increase the binding energy by an order of magnitude. \section{The model} We start with a Hamiltonian of two interacting electrons in a layer situated in the $x$-$y$ plane. The kinetic energy is $H_{\mathrm{kin}} = (\mathbf{p}_1^2 + \mathbf{p}_2^2) /2m$, where $\mathbf{p}_i= -i \hbar\nabla_{\mathbf{r}_i}$ is the momentum operator, $\mathbf{r}_i = (x_i,y_i)$ is the position of the $i$th electron, with $m$ being the effective electron mass. The layer width $d$ is assumed small, so that only one transverse subband is populated. The \textit{e-e} interaction potential for a thin layer in vacuum is given by~\cite{rytova,keldysh1979coulomb} \begin{equation} \label{RK} U(\mathbf{r}) = \frac{\pi e^2}{2 r_0}\left[ H_0\left(\frac{r}{r_0}\right) - Y_0\left(\frac{r}{r_0}\right) \right]\,, \end{equation} with $H_0$ being the Struve function, and $Y_0$ being the Bessel function of the second kind~\cite{olver}. The screening length $r_0$ sets the crossover scale between the long-range $\sim 1/r$ Coulomb tail of the potential and its short-range logarithmic $\sim \log r$ divergence. The screening length can be estimated as $r_0 = \epsilon d/2$, with $\epsilon$ being the in-plane component of the dielectric tensor of the bulk material~\cite{PhysRevB.88.045318}. The two-body SOI is given by~\cite{PhysRevB.98.115137} \begin{equation} \label{SOI} H_{\mathrm{SOI}} = \frac{\alpha}{\hbar} \sum_{i \ne j} \left[E_y(\mathbf{r}_i - \mathbf{r}_j) p_{i x} - E_x(\mathbf{r}_i - \mathbf{r}_j) p_{i y} \right] \sigma_{z_i}\,, \end{equation} with $\sigma_{z_i}$ being the Pauli matrix, and $\alpha$ being the material-dependent SOI constant, which we assume positive for definiteness. The electric field, acting on the $i$th electron from the $j$th electron, is related to the Rytova-Keldysh potential of Eq.~\eqref{RK} via $\mathbf{E}(\mathbf{r}) = \frac{1}{e}\nabla U(\mathbf{r})$. Equation~\eqref{SOI} describes a two-particle interaction, which is attractive for a certain spin orientation locked to momentum. The Schr\"odinger equation for the two-electron wave function $\Psi(\mathbf{r}_1,\mathbf{r}_2) = {\left(\Psi_{\uparrow \uparrow},\Psi_{\uparrow \downarrow},\Psi_{\downarrow \uparrow},\Psi_{\downarrow \downarrow}\right)}^{\intercal}$ splits into four uncoupled equations for the spinor components. Switch from the positions of the individual electrons to the relative position $\mathbf{r} = \mathbf{r}_1 - \mathbf{r}_2$ and the center-of-mass position $\mathbf{R} = (\mathbf{r}_1 + \mathbf{r}_2)/2$. Also introduce the corresponding momentum operators, $\mathbf{p} = -i \hbar\nabla_{\mathbf{r}}$ and $\mathbf{P} = -i \hbar\nabla_{\mathbf{R}}$. 
The equations for $\Psi_{\uparrow \uparrow}$ and $\Psi_{\uparrow \downarrow}$ read as \begin{equation} \label{rel} \left [- \frac{\hbar^2}{m} \nabla^2_{\mathbf{r}} - \frac{\hbar^2}{4m} \nabla^2_{\mathbf{R}} + U(\mathbf{r}) + \frac{2\alpha}{\hbar} \frac{E(\mathbf{r})}{r} {(\mathbf{r} \times \mathbf {p})}_z \right ] \Psi_{\uparrow \uparrow} = \varepsilon_{\uparrow \uparrow} \Psi_{\uparrow \uparrow} \end{equation} and \begin{equation} \label{conv} \left[- \frac{\hbar^2}{m} \nabla^2_{\mathbf{r}} - \frac{\hbar^2}{4m} \nabla^2_{\mathbf{R}} + U(\mathbf{r}) + \frac{\alpha}{\hbar} \frac{E(\mathbf{r})}{r} {(\mathbf{r} \times \mathbf {P})}_z \right] \Psi_{\uparrow \downarrow} = \varepsilon_{\uparrow \downarrow} \Psi_{\uparrow \downarrow}\,. \end{equation} The equations for $\Psi_{\downarrow \downarrow}$ and $\Psi_{\downarrow \uparrow}$ are obtained by changing the sign before $\alpha$ in the above equations, respectively. Analysis shows that Eqs.~\eqref{rel} and~\eqref{conv} have solutions describing the bound states of electrons of different nature quite similarly to Ref.~\cite{PhysRevB.98.115137}. We call the solutions of Eq.~\eqref{rel} that belong to the discrete part of the spectrum the relative bound states, since the effective electron attraction caused by the SOI is determined only by the relative motion of electrons. The solutions of Eq.~\eqref{conv} are called the convective bound states, because it is the motion of the electron pair as a whole that creates the SOI\@. Taking into account that the full solution of the system should be antisymmetric with respect to the particle permutation, we conclude that in the 2D system the relative bound states are triplet pairs, whereas the electrons with opposite spins are coupled in the convective bound state, which does not possess a definite spin~\cite{PhysRevB.98.115137}. \section{Results} Because of the translational invariance the wave functions can be written in the form $\Psi_{\uparrow \uparrow}(\mathbf{r},\mathbf{R}) = \exp (i \mathbf{K} \cdot \mathbf{R}) \psi_{\uparrow \uparrow}(\mathbf{r})$ and $\Psi_{\uparrow \downarrow}(\mathbf{r},\mathbf{R}) = \exp (i \mathbf{K} \cdot \mathbf{R}) \psi_{\uparrow \downarrow}(\mathbf{r},\mathbf{K})$. First we consider the convective states, where the center-of-mass wave vector $\mathbf{K}$ affects the wave-function of the relative motion $\psi_{\uparrow \downarrow}(\mathbf{r},\mathbf{K})$ via the binding potential that equals \begin{equation} \label{cnbnd} V(r,\phi) = \alpha E(r) K \sin \phi \,, \end{equation} with $\phi$ being the polar angle measured from the $\mathbf{K}$-direction. The short-range asymptotics of the potential is \begin{equation} \label{as} V(r,\phi) \sim - \frac{Z e^2}{r} \sin \phi \,, \end{equation} with the dimensionless SOI magnitude $Z = \alpha K/(e r_0)$. For sufficiently large $Z$, the binding potential of Eq.~\eqref{as} prevails over the weakly diverging repulsive potential $U(r) \sim \log (r/r_0)$ to allow for the bound states in the spectrum. It is interesting that owing to the dielectric screening in the layer, the attractive potential has a Coulomb-like form at small distance in contrast to the case of the bulk screening where the attractive potential diverges as $r^{-2}$. Therefore no regularization is needed to solve Eq.~\eqref{conv}. 
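Before turning to the results, we note that the Rytova-Keldysh potential of Eq.~\eqref{RK} and the field $\mathbf{E}(\mathbf{r}) = \frac{1}{e}\nabla U(\mathbf{r})$ entering Eq.~\eqref{SOI} are straightforward to evaluate numerically. The short sketch below is purely illustrative (units with $e=1$, an arbitrary screening length, and a finite-difference derivative); it reproduces the crossover from the weak logarithmic short-range repulsion to the $1/r$ Coulomb tail discussed above.
\begin{verbatim}
import numpy as np
from scipy.special import struve, y0

def rytova_keldysh(r, r0):
    """U(r) in units with e = 1: (pi / 2 r0) [H0(r/r0) - Y0(r/r0)]."""
    x = np.asarray(r, dtype=float) / r0
    return (np.pi / (2.0 * r0)) * (struve(0, x) - y0(x))

def radial_field(r, r0, dr=1e-6):
    """E(r) = (1/e) dU/dr with e = 1, via a central finite difference."""
    return (rytova_keldysh(r + dr, r0) - rytova_keldysh(r - dr, r0)) / (2.0 * dr)

r0 = 10.0                                   # screening length (illustrative)
r = np.array([0.01, 0.1, 1.0, 10.0, 100.0]) * r0
U = rytova_keldysh(r, r0)
E = radial_field(r, r0)                     # in-plane field entering the SOI
# short range: U grows like -(1/r0) log(r/r0); long range: U -> 1/r
print(U)
print(U[-1] * r[-1])                        # approaches 1 for r >> r0
\end{verbatim}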
Let us exploit a similarity to the Coulomb potential to make a crude estimate of the binding energy as $|\varepsilon| \propto Z^2 \cdot Ry$, the Rydberg constant in the material being $Ry = \hbar^2/2 m a_B^2$, with the Bohr radius $a_B = \epsilon \hbar^2/me^2$. Thus, the binding energy varies with the center-of-mass momentum as $|\varepsilon|\propto K^2$. We expect the size of the electron pair to be $\propto a_B/Z$. Of course, the angular dependence of the binding potential makes a correction to this estimate. To account for this, we resort to numerical calculations with full potential of Eqs.~\eqref{RK} and~\eqref{cnbnd}. To be specific, assume $a_B = 100$~\AA, the layer thickness $d=0.2 a_B$, $\epsilon = 20$, and the dimensionless SOI constant $\tilde{\alpha} = \alpha/e a_B^2 = 1$, which is close to the parameters of such materials as Bi\textsubscript{2}Se\textsubscript{3}~\cite{manchon2015new}. \begin{figure}[htb] \includegraphics[width=0.9\linewidth]{fig1} \caption{The system energy levels (solid lines) and the kinetic energy of the center of mass (dashed line) vs $K a_B$.} \label{fig1} \end{figure} Figure~\ref{fig1} shows the energies of the three lowest-lying convective states, with the kinetic energy of the center of mass included, as a function of the center-of-mass momentum. In other words, this is the energy dispersion of the convective electron pair. At the respective critical value of $K$, each bound state appears in the spectrum, with the binding energy growing approximately like $K^2$, in accordance with the above estimate. Taking into account dielectric screening in the layer, the binding energy increases by a factor of about $\epsilon$ compared to that found in Ref.~\cite{PhysRevB.98.115137}, i.e.\ by an order of magnitude. Also note the SOI-induced renormalization of the effective mass of the electron pair, which even becomes negative. Figure~\ref{fig2} shows the wave function of two lowest-lying convective states. Two surfaces, shown in different color in each figure, are the two spinor components $\psi_{\uparrow \downarrow}(\mathbf{r},\mathbf{K})$ and $\psi_{\downarrow \uparrow}(\mathbf{r},\mathbf{K})$. Note the strong angular dependence of the solutions, which is due to the highly anisotropic binding potential of Eq.~\eqref{cnbnd}. \begin{figure}[htb] \includegraphics[width=0.9\linewidth]{fig2} \caption{The spinor components of the convective state wave function for the ground state (left) and first excited state as functions of relative coordinates. The arrows show the direction of vector $\mathbf{K}$.} \label{fig2} \end{figure} Turning to the relative bound states, we note that since the orbital angular momentum along the $z$ direction $l_z = - i \partial_{\phi}$ commutes with the Hamiltonian, the wave function of the relative motion can be chosen as the eigenfunction of $l_z$, $\psi_{\uparrow \uparrow}(\mathbf{r}) = u(r) e^{i l \phi}$. The antisymmetric properties of $\Psi_{\uparrow \uparrow}$ require that the orbital angular quantum number $l$ be an odd integer. The binding potential for the relative states is thus \begin{equation} \label{relsing} V(r) = 2 \alpha l \frac{E(r)}{r}\,. \end{equation} Depending on the sign of $l$, this term can be repulsive or attractive. The relative bound state $\Psi_{\uparrow \uparrow}$ is supported by $l < 0$, and $\Psi_{\downarrow \downarrow}$ by $l > 0$. The spin projection of the relative state is seen to be locked to the orbital angular momentum. 
In what follows, we consider the case of $|l| = 1$ to minimize the centrifugal barrier $\propto l^2$. The binding potential behaves as \begin{equation} \label{pat} V(r) \sim - \frac{2 \alpha}{r_0}\frac{e}{r^2} \end{equation} at small $r$. The \textit{e-e} attraction overcomes not only the much weaker $\propto \log (r/r_0)$ potential of repulsion, but also prevails over the centrifugal potential as long as $\tilde{\alpha} > 3 d/16 a_B$. This condition holds in our case. The attractive $-1/r^{2}$ potential in Eq.~\eqref{pat} is a transitional singular potential that has been exciting interest for decades~\cite{RevModPhys.43.36}, not least because of its ubiquity in quantum physics. The inverse square potential appears in the three-body problem in nuclear physics~\cite{Efimov:1971zz}, it describes the point-dipole interactions in molecular physics~\cite{PhysRev.153.1} and the attraction of atoms to a charged wire~\cite{PhysRevLett.81.737}. Meanwhile, it has produced a lot of controversy when used with the Schr\"odinger equation. The requirement that its solutions are square integrable does not define a discrete orthogonal set of eigenfunctions with its eigenvalues; bound states with arbitrary energy $\varepsilon <0$ are possible. Imposing the orthogonality of the eigenfunctions does lead to a discrete spectrum of bound states that is nonetheless unbounded below, so there is no ground state~\cite{PhysRev.80.797}. This is interpreted as a fall to the center~\cite{landau1958course}. The problem is that the Hamiltonian is symmetric but not self-adjoint~\cite{Meetz1964}. To fix the problem, a number of regularization techniques was developed~\cite{PhysRevLett.85.1590,PhysRevA.64.042103,PhysRevA.76.032112}, which are essentially based on introducing a short-distance cut-off~\cite{PhysRevD.48.5940}. The cut-off should be considered as a phenomenological parameter, the value of which can not be determined within the model considered, unless some outer mechanisms are taken into consideration or e.g.\ scaling-invariance requirements are imposed. A possible mechanism of cutting off the binding potential at small $r$ is related to the Zitterbewegung of electrons in crystalline solids~\cite{0953-8984-23-14-143201}, which leads to the cut-off $a$ that may actually be of the order of the film thickness $d$ or even larger. By cutting the potential of Eq.~\eqref{pat} at $r=a$ and imposing the zero boundary condition for the solution, we obtain the following estimate for the binding energy of the lowest-lying relative state, \begin{equation} |\varepsilon| = \frac{2 Ry}{(a/a_B)^2} x_1^2\left(\sqrt{\frac{4 \tilde{\alpha}}{d/a_B}}\right)\,, \end{equation} where $x_1 (\mu)$ is the first (largest) zero of the Macdonald function $\mathcal{K}_{i \mu}(x)$~\cite{olver}. This gives the $|\varepsilon|$ magnitude of tens of Rydberg for the parameters considered. \section{Conclusion} We studied the Coulomb mechanism of electron pairing in low-dimensional structures with a strong Rashba SOI in the case where the e-e interaction is not screened by the environment. This situation is realized in recent experiments on freely suspended 2D structures~\cite{ROSSLER2010861,doi:10.1063/1.5019906}. It attracts growing interest because in this case the e-e interaction effects should be more pronounced. We have found that dielectric screening in the film crucially affects the pairing conditions and binding energy, which is increased by an order of magnitude as compared to the previously considered case of the bulk screening. 
This work was partially supported by the Russian Foundation for Basic Research (Grant No 17--02--00309) and the Russian Academy of Sciences.
\section{\label{Introduction}Introduction} Materials on the edge of their stability can have enormous lattice response to a perturbing field. A responsive lattice is necessary for such phenomena as, for example, a giant caloric effect or a high-$T_c$ superconductivity. {\par} A first-order phase transformation happens between two phases of unequal density. Phases can coexist at equal pressure $P$, temperature $T$, and chemical potential $\mu$. Volume $V$ [$\mbox{\AA}^3$ per formula unit (f.u.)], inter\-atomic distances $d_a$ [\AA], density $\rho$ [g/cm$^3$], and local electronic [$\mbox{\AA}^{-3}$] and charge [$e^-/\mbox{\AA}^3$] densities are discontinuous at a 1st-order phase transition. Intermediate structures between the two phases are unstable; their segregation into the stable phases lowers the total Gibbs free energy $G$. {\par} In our model, we consider a high-$T_c$ superconductor as a phase-segregated material. In this neutral material both phases are charged, but the volume of one phase is much smaller than that of the other. The charged phase with small volume has a high charge density, and the Coulomb repulsion fractures it into tiny precipitates, which behave as quasi\-particles, with a quantized electric charge. In conventional superconductors, such quasi\-particles are known as the Cooper pairs, composed by a collective motion of electrons and a lattice deformation, which binds an even number of electrons (fermions) into one charged quasi\-particle (boson). {\par} Conventional superconductivity \cite{Onnes1913,Abrikosov2003,JETP35p1558y1959} happens due to electron-phonon coupling \cite{GinsburgLandau1950,PR106p162p1175y1957,JETP34p58p73y1958,JETP34p66y1958,PRB93p054517y2016}. More generally, superconductivity (both conventional and unconventional) happens due to coupling between a collective electronic excitation and a lattice response to it, which is a collective athermal displacement of atoms and ions. Lattice deformations are responsible for coupling fermions (electrons) into bosons, and a Bose-Einstein condensate \cite{Bose1924} of charged quasi\-particles is responsible for the superconductivity. A larger lattice response can result in a larger critical temperature $T_c$. Hence, a guided search for high-$T_c$ superconductors starts with a study of electronic and lattice instabilities. {\par} A high-$T_c$ superconductivity occurs around an instability. Electronic and structural instabilities result in phase transformations. One of them is the Mott transition \cite{Mott1949}. {\par} The Mott transition is electronic by nature. It happens due to a change of electronic structure, accompanied by a change in interatomic interactions, which drive atoms to their new equilibrium positions, thus relaxing interatomic distances $d_a$, volume $V$, and density $\rho$. At the same external stress, temperature, and composition, two different electronic states have equilibrium at different lattice constants. Any intermediate crystal structures between those two terminal equilibria are not stable: they are destined to transform. What is the speed of electronic and structural transformations? {\par} Electromagnetic interactions, including those between electrons, propagate with the speed of light $c_{light} \approx 3 \! \times \! 10^8\,$m/s. Fermi velocity of conductive electrons in a metal is $v_F \sim 10^6\,$m/s (e.g., 1570 km/s in copper). 
Lattice vibrations (phonons) propagate with the speed of sound $v_{sound} \sim 10^3\,$m/s (e.g., 4760 m/s for longitudinal waves in an annealed copper \cite{CRChandbook2008} at room $T$). In contrast, drift velocity $u_D = J / q n_q $ of carriers with charge $q$ and concentration $n_q$ in a conductor with current density $J = I/S$ is very modest. For example, in a copper wire ($n_q=8.5 \times 10^{28}\,\mbox{m}^{-3}$) with a cross section $S=1\, \mbox{mm}^2$ at constant direct current $I=1$A, drift velocity of electrons constitutes only $7 \times 10^{-5}\,$m/s. {\par} In a superconductor, each charged bosonic quasi-particle is accompanied (and held together) by a local lattice deformation, which follows its drift. From the other hand, thermal atomic motion disturbs such a local lattice deformation, which acts as a ``glue'' for the quasi-particle. Without such a ``glue'', those quasi-particles can no longer exist, and superconductivity is destroyed above the critical temperature $T_c$. A larger lattice response results in a higher $T_c$. In general, the lattice response is the largest near phase transitions. The quantum critical point (QCP) is an example of a phase boundary at 0K. {\par} Below we propose a model of superconductivity in the vicinity of a phase boundary. This model can help in a guided search for novel high-$T_c$ superconductors. \begin{figure}[t] \includegraphics[width=75mm]{Fig1Gn.pdf} \caption{\label{FigGn} Gibbs free energy $G$ of two phases (1 and 2) versus average electronic density $n$. Its change $\Delta n$ can be comprehended as a level of doping. The tangent (black line) is horizontal, if two segregated phases are at a thermal equilibrium. } \end{figure} \section{\label{Model}Theoretical Model} {\par} Let us consider a bulk solid with an instability, resulting in phase transformations. Such transformations can be driven by changing temperature $T$, pressure $P$, composition $c$ (and hence chemical potential $\mu$), average electronic density $n$, or the level of doping $\Delta n$. \subsection{\label{Charge}Instability in electronic density} {\par} Let us expand the Gibbs free energy $G(T,P,c,n)$ at fixed $\{T,P,c\}$ in a Taylor series around its minimum in each phase $i=\{1,2\}$: \begin{equation} \label{eqG1} G_i (n) = G_i^{(0)} + (\partial^2 G_i / \partial n^2) (n-n_i)^2 + O(n-n_i)^3. \end{equation} {\par} The mixture of phases with the same $n$ has \begin{equation} \label{eqGmix} G(n,x_i) = \sum_i x_i G_i (n) , \end{equation} where $x_i$ is the $i$-phase fraction, and \begin{equation} \label{eqxsum} \sum_i x_i =1 . \end{equation} {\par} At thermodynamic equilibrium, $G_2^{(0)} = G_1^{(0)} \equiv G_0$. Neglecting the higher-order terms $O(n-n_i)^3$, we get: \begin{equation} \label{eqGn} G(n,x_1) \approx G_0 + x_1 (n-n_1)^2 G_1^{(2)} + (1-x_1) (n-n_2)^2 G_2^{(2)} \end{equation} If both phases are stable, then $G_i^{(2)} \equiv (\partial^2 G_i / \partial n^2)_{n_i} >0$. We can generalize consideration to any continuous curves $G_i (n)$ with minima at $G_i (n_i) = G_i^{(0)}$; each curve is convex at the minimum $n_i$ and monotonic on both sides (decreases at $n<n_i$ and increases at $n>n_i$), see Fig.~\ref{FigGn}. {\par} Without a loss of generality, let us assume $n_1 < n_2$ (i.e., we label the phase with lower $n$ by index 1). 
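A minimal numerical illustration of Eqs.~(\ref{eqG1})--(\ref{eqGn}) may be helpful here (the numbers below are arbitrary illustrative units and the variable names are ours): for an average density between the two minima, a lever-rule mixture of the two phases, each taken near its own minimum, has a lower $G$ than any homogeneous state with the same average density, i.e., segregation is favorable, as sketched in Fig.~\ref{FigGn}.
\begin{verbatim}
# quadratic free energies of the two stable phases (illustrative units)
G0 = 0.0
n1, n2 = 1.0, 2.0            # positions of the two minima
c1, c2 = 1.0, 1.5            # curvatures G_i^(2) > 0

def G_1(n):
    return G0 + c1 * (n - n1) ** 2

def G_2(n):
    return G0 + c2 * (n - n2) ** 2

n = 1.4                                   # intermediate average density
G_homogeneous = min(G_1(n), G_2(n))       # best single-phase value at n

# segregate into densities n1p and n2p near the respective minima,
# keeping the average density fixed (lever rule = charge conservation)
n1p, n2p = 1.1, 1.9
x1 = (n2p - n) / (n2p - n1p)              # x1*n1p + (1 - x1)*n2p = n
G_segregated = x1 * G_1(n1p) + (1.0 - x1) * G_2(n2p)

assert G_segregated < G_homogeneous       # segregation lowers G
print(G_homogeneous, G_segregated)
\end{verbatim}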
Then, for any intermediate electronic density $ n_1 <n< n_2 $, segregation into two phases with electronic densities $n_1'$ and $n_2'$, where $n_1 < n_1' < n_b$ and $n_b < n_2' < n_2$, results in the lowering of $G$, and is favorable, see Fig.~\ref{FigGn}. \subsection{\label{Instability}Instability is repulsive} {\par} Obviously, the electronic and lattice structure at the instability is unstable, while it becomes more stable farther from the instability. Increased ``distance'' from the instability in terms of the phase space coordinates results in lower $G$. Thus, the instability is ``repulsive'' in the phase space. This is illustrated by Fig.~\ref{FigGn}, where $G_b(n_b)$ is the instability, and $n$ is a phase space coordinate. The ``repulsive'' region is at $n_1 < n < n_2$. \subsection{\label{SS}Superconductivity due to charged segregation} {\par} Again, one way to move away from the instability in terms of electron charge density $n$ or doping $\Delta n$ is a charge density segregation, which leads to the creation of charged precipitates. {\par} Let us consider a charged phase with a fixed total volume $V_Q$ and fixed total charge $Q$. If this phase is allowed to fracture into $N_q = Q/q$ small precipitates of charge $q$, then its total potential energy $ U \sim q^{2/3} Q^{4/3} V_Q^{-1/3}$ will be minimal for the smallest $q$, which nevertheless cannot be smaller than a certain quantum limit. This leads to quantized charges $q$ of the precipitates, which can behave as quasi\-particles. If these quasi\-particles are bosons, then they can form a Bose-Einstein condensate \cite{Bose1924} at low $T$. A condensate of charged bosonic quasi\-particles is responsible for superconductivity. {\par} Charged quasi\-particles repel each other. This repulsion distributes them uniformly (in the absence of external fields), and can order them into a ``quasi\-lattice'' (a geometric lattice-like arrangement of quasi\-particles). {\par} Without doubt, a variation of the electronic structure and charge density causes a lattice deformation. On the other hand, a local lattice deformation causes a local variation of the electronic structure: this is reminiscent of the ``chicken and egg'' problem. Charged precipitates are so small that they must be coherent with the lattice, but this coherency does not prevent them from creating a local strain. The symmetry of the $d_a$ distribution around a quasi\-particle in a superconductor differs from a thermal distribution of inter\-atomic distances, especially at low $T$. This lattice deformation could be detected experimentally using diffraction of x-rays or neutrons. \subsection{\label{Magnetism}Magnetism} {\par} The role of magnetism in high-$T_c$ superconductors is still debatable. Typically there are several competing magnetic orderings around the instability, and the border between those spin states is also an instability of the electronic structure. Our model is applicable to any electronic instability. To remain generic, we do not restrict our consideration to a particular kind of instability, which might \cite{PhysicsToday51n10p40y1998,Nature468p283y2010} or might not \cite{PRB95p174301y2017} be magnetic. \begin{figure}[b] \includegraphics[width=75mm]{Fig2Gap.pdf} \caption{\label{FigGap} Transformation between an insulator (with a band gap $E_{gap} \! > \! 0$) and a metal (with a band overlap, $E_{gap} \! < \! 0$). } \end{figure} \begin{figure}[t] \includegraphics[width=75mm]{Fig3TE.pdf} \caption{\label{FigTE} Temperature $T$ vs. the band gap $E_{gap}$ near the Mott transition.
Mott transition is 1st-order below the critical point at $T_M$. Superconducting state (SC) appears at low $T \le T_S = \mbox{max}(T_S^- , T_S^+)$. In a metal, $E_{gap} < 0$ is an overlap of bands, which changes monotonically with a finite electronic density at the Fermi level for small overlaps. Material with a band gap $E_{gap} >0$ is either an insulator or a semiconductor, which conducts electricity if $E_{gap}<k_B T$. } \end{figure} \begin{figure}[t] \includegraphics[width=75mm]{Fig4Td.pdf} \caption{\label{FigTd} Temperature $T$ vs. characteristic interatomic distance $d_a$ near the Mott transition. For two phases with different electronic structure at thermodynamic equilibrium, there is a gap in $d_a$, as well as lattice constants, density, and unit cell volume below $T_M$. The QCP at ($d_0$,$T$=0) is in this gap. On the two sides of the gap, $T_S^-$ and $T_S^+$ can differ. } \end{figure} \section{\label{Mott}Mott transition} {\par} An example of a phase transformation, which changes topology of the electronic structure (Fig.~\ref{FigGap}), is the Mott transition. Figs.~\ref{FigTE} and \ref{FigTd} show 5 distinctive phases: metal ($E_{gap} \! < \! 0$), a small-gap semiconductor ($0 \! < \! E_{gap} \! < \! k_B T$), insulator ($E_{gap} \! > \! k_B T$), and two superconducting phases at $T \! < \! T_c(E_{gap}) \! \le \! T_S$ on both sides of the instability at the phase transition at $E_{gap} \! \equiv \! 0$, which is of the first-order at $0 \! \le \! T \! \le \! T_M$. Again, Fig.~\ref{FigTE} shows one instability and 5 different solid phases around it. Some of those phases can be uniform, while others (including both CS) are segregated states. \subsection{\label{Segregation}Neutral and Charged Segregation} {\par} Mott transition is accompanied by both electronic and lattice instability. Fig.~\ref{FigTd} shows a shaded region, where crystal structures are unstable. Such unstable structures segregate into charge-neutral stable phases of higher and lower density ($\rho$, as well as $V$ and $d_a$). {\par} In addition, the blue line in Fig.~\ref{FigTd} is the border of a region of electronic instability, where the electronic structure wants to segregate into charged regions of higher and lower electronic density $n$, and above we provided a model explaining why this segregation is energetically favorable. Superconductivity is a result of this charged segregation. Conservation of electric charge (and consequently charge neutrality of the whole system) is an additional constraint, imposed on charged segregation. \subsection{\label{MottDiagrams}Comparison to other diagrams} {\par} {\par} Figs.~\ref{FigTE} and \ref{FigTd} are generic phase diagram for the Mott transition. A compatible $T$--$P$ diagram for a compressible lattice is shown in Fig.~1 in \cite{PRL109p176401y2012}, while the gap in strain and $d_a$ is shown in their Fig.~4. Generic $T$--$c$ diagrams in Fig.~1 in \cite{nmat8p630y2009} and Fig.~2 in \cite{PhilTransRSocA369p1574y2011} show the small-gap semiconductor (at $0 \! < \! E_{gap} \! < \! k_B T$) as a ``strange metal''. {\par} Early attempts to draw a generic diagram may contain errors. In particular, Fig.~150 in \cite{RevModPhys70p1039y1998} provides a schematic phase diagram of pseudogap structure in high-$T_c$ cuprates, but it does not show a QCP at $T=0\,$K. 
\section{\label{Experiment}Comparison to Experiment} {\par} In theory, phase diagrams showing $T$ versus a phase-space variable $x$ such as the band gap $E_{gap}$ (Fig.~\ref{FigTE}), overlap of electronic orbitals, characteristic inter\-atomic distance $d_a$ (Fig~\ref{FigTd}), average electronic density $n$, or its change $\Delta n$, should be comparable regardless of the cause of variation of $x$. Examples of such causes are variations of composition $c$ or applied pressure $P$ \cite{PRL101p057006y2008}. Indeed, experiment \cite{NatMater8n6p471y2009} shows similarities between structural distortions under pressure and chemical doping in superconducting BaFe$_2$As$_2$. Similar effects are found in SrFe$_2$As$_2$ doped by Co \cite{PSSB254n1p1600154y2017} and CaFe$_2$As$_2$ doped by Sr \cite{PRB94p144513y2016}. {\par} Examples of experimental $T$--$c$ phase diagrams are Fig.~6 in \cite{PRB79p014506y2009} for the electron-doped Ba(Fe$_{1-x}$Co$_{x}$)$_{2}$As$_{2}$, Fig.~1 in \cite{PRB93p094513y2016} for Ba$_{1-x}$Rb$_x$Fe$_2$As$_2$, Fig.~4 in \cite{Science329p824y2010} for Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$, and Fig.~1d in \cite{NMat7p953y2008} for CeFeAsO$_{1- x}$F$_x$. Examples of experimental $T$--$P$ diagrams are Fig.~3 in \cite{SRep4p3685y2014} for SrFe$_2$As$_2$, or Fig.~3 in \cite{nmat8p630y2009} for Fe$_{1.01}$Se. Structural instability and superconductivity in SrNi$_2$(P$_{1−x}$Ge$_x$)$_2$ solid solutions was studied in \cite{PSSB254n1p1600351y2017}. Magnetic and structural transitions of SrFe$_2$As$_2$ at high $P$ were investigated in \cite{SRep4p3685y2014}. Experiment finds that superconductivity happens around an instability, and theory claims that it happens due to an instability. \section{\label{Discussion}Discussion} \subsection{\label{SC2}Two types of superconductivity} {\par} Our model of a charged segregation predicts that charges $q$ of tiny precipitates (quasi\-particles) should be quantized (e.g., $q=2e$ for the Cooper pair), but it allows both signs of $q$. Hence, we anticipate existence of two distinctive types of superconductivity with positive and negative $q$. Here one can find similarity to two types semiconductors: p- and n-type. However, charge carriers in semiconductors can be fermions, while in a SC they are bosons. Two distinctive types of superconductivity are labeled $SC^-$ and $SC^+$ in Figs.~\ref{FigTE} and \ref{FigTd}. \subsection{\label{Collective}Collective excitations} {\par} We mentioned that electrons move much faster than their collective excitations. In particular, in a metal the Fermi velocity of electrons $v_F \sim 10^6\,$m/s is huge compared to the drift velocity of charge carriers $u_D \ll 10^{-3}\,$m/s. Thus, all quasi\-particles in a superconductor (including the Cooper pair) are collective electronic excitations, which should not be confused with propagation of a pair of particular electrons. Electrons in this collective excitation change, but the charge of the excitation and its total spin (responsible for bosonic behavior) remain constant. {\par} Each quasi\-particle is a collective electronic excitation (which locally changes the charge density), accompanied by a lattice deformation. Its motion with a small drift velocity $u_D$ is accompanied by equally slow motion of that lattice deformation. Particular atoms or ions vibrate around their lattice positions; they do not follow the quasi\-particle. However, a local change in density can be positive or negative; it is responsible for the mass of a quasi\-particle. 
{\par} There is a coupling between a collective electronic excitation and a collective atomic displacement. An electron-phonon coupling is one type of such coupling, but not the only one. \subsection{\label{Phonons}Lattice deformations and Phonons} {\par} Conventional superconductivity \cite{Onnes1913,Abrikosov2003,JETP35p1558y1959} happens due to electron-phonon coupling \cite{GinsburgLandau1950,PR106p162p1175y1957,JETP34p58p73y1958,JETP34p66y1958,PRB93p054517y2016}. However, not every lattice deformation can be explained in terms of phonons. In particular, atomic positions in one phase are not always related to phonons in another phase in a phase-segregated material. Next, phonons are (quasi)harmonic vibrations of atoms, but not every collective atomic motion in a solid is harmonic. Hence, there are several reasons why the conventional theory of superconductivity might fail in several classes of ``unconventional'' superconductors. {\par} On the other hand, our description of a superconductor as a phase-segregated material with charged segregation is applicable to both conventional and high-$T_c$ superconductors. \section{\label{Summary}Summary} {\par} We proposed a qualitative model of superconductivity, based on the thermodynamics of a charged phase segregation. We described a superconductor as a segregated material with tiny precipitates of quantized charge, which behave as charged bosonic quasi\-particles. A Cooper pair was mentioned as an example of such a quasi\-particle. With caution, our model can be viewed as a generalization of the conventional theory of superconductivity \cite{JETP35p1558y1959,GinsburgLandau1950,PR106p162p1175y1957,JETP34p58p73y1958,JETP34p66y1958}. {\par} We pointed to the instability of the electronic structure and the lattice as a cause of phase segregation. As an example, we considered the instability at the Mott transition, around which we labeled 5 distinctive solid phases (shown in Figs.~\ref{FigTE} and \ref{FigTd}), two of which are superconducting. {\par} We linked superconductivity to both the instability of the electronic structure and the lattice response to variations of charge density. We claimed that a superconductor with a higher $T_c$ has a larger lattice response, which can stabilize the charged bosonic quasi\-particles at higher $T$. Thus, our model can be used in a guided search for novel high-$T_c$ superconductors. \begin{acknowledgments} We acknowledge Paul C. Canfield and Duane D. Johnson for discussions. This work was supported in part by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences, Materials Science and Engineering Division. The research was performed at the Ames Laboratory, which is operated for the U.S. DOE by Iowa State University under contract \# DE-AC02-07CH11358. \end{acknowledgments}
\section{Introduction} High power semiconductor lasers have been of great interest to the industrial market for their wide applications, prompting enormous efforts in improving their performance. A straightforward approach for increasing laser output power is to widen the emitting area, as adopted in tapered\cite{tapered1996}, broad-area\cite{Crump2013} and array lasers\cite{Fan2005}. However, wider emitting areas in general result in multi-mode lasing and thereby in the degradation of laser beam quality. To overcome this issue, various techniques have been examined to minimize the effects of unwanted lateral guided modes over the last few decades\cite{Ackley1986,Glova2003,Weyers2014,Erbert2012,Medina2021}. Singlemodeness can often be improved by a delicate cavity design that cleverly takes advantages of the difference between the spatial mode profiles of the target and other undesired modes\cite{Sarangan1996,Sumpf2009,Zheng2019,Forbes2018,Marsh2005,Harder2006,Erbert2008,Christodoulides2012,Feng2014,Miri2019,SUSY2019}. A remarkable example reported recently is based on a two-dimensional photonic crystal band-edge resonator with 10W-class output from a single optical mode\cite{Noda2019}. However, the designs of these structures tend to be highly delicate and sometimes significantly complicate the fabrication process of the device. Such complexity in design may motivate to find a simpler scheme that enables high power single mode semiconductor lasers. A potential approach in this direction is that of topological lasers, which leverage topological photonics for designing lasing optical modes \cite{Bahari2017,Amo2017,Khajavikhan2018,Segev2018,Parto2018,Zhao2018,polariton2018,Ota2018,PhC2019,bulklaser2020,Wang2020,corner2020-1,corner2020-2,RenMinMa2020,Kante2020}. Topological photonics offers a novel route for designing optical modes with distinctive properties compared to conventional approaches\cite{Soljacic2014,ShvetsReview2017,Ozawa2019,Ota2020,Nonlinear2020,IwamotoReview2021}. A typical topological laser consists of a single topological edge mode that deterministically appears in a topological bandgap as a result of a topological mechanism called the bulk-edge correspondence\cite{Jackiw1976,Hatsugai1993a,Hatsugai1993b}. The topological edge modes are known to behave robustly against certain perturbations due to topological protection, which is suitable for developing robust single mode lasers. Topological ring lasers have been demonstrated using one-dimensional topological edge states propagating at the exterior of the bulk emulating quantum Hall\cite{Bahari2017}, quantum spin Hall \cite{Khajavikhan2018,RenMinMa2020} and quantum valley Hall systems \cite{Kante2020,Wang2020}. Single mode lasing devices have been demonstrated in these systems and the possibility of realizing robust single mode lasers with high slope efficiencies has been discussed. More recently, an electrically pumped topological laser has also been reported at mid-infrared wavelengths\cite{Wang2020}. Surface-emitting lasers utilizing Dirac cones or those with mass vortices also have been discussed as another candidate for a large-area laser\cite{DiracLaser2014,DiracVortex2020}. Topological lasers based on zero dimensional edge states are another topic that has gathered interest recently. Localized topological modes in arrays of resonators, such as micropillars\cite{Amo2017} and microring cavities\cite{Parto2018,Zhao2018}, have been combined with semiconductor gain to demonstrate lasing. 
Topological nanolasers have also been studied using topological photonic crystals supporting zero dimensional interface states \cite{Ota2018,PhC2019} and corner states as higher-order topological states as well\cite{OtaCorner2019,Cornerlasing2020}. So far, most of the works employing zero dimensional topological states aimed to investigate the lasing properties of tightly localized topological edge modes or to explore the physics of non-Hermitian topology therein\cite{Weimann2017,Parto2018,Zhao2018,Takata2017,Takata2018,Henning2015,Henning2018}. As such, there have been limited discussions for the application of topological edge modes for high power lasers by significantly expanding the mode profile in space. In this paper, we theoretically investigate a large-scale single-mode topological laser. We consider a sizable array laser that supports a single zero-dimensional topological edge state distributed over a few hundreds of site resonators. We formulate this model based on the tight binding approximation. Akin to a conventional analysis of semiconductor lasers\cite{DelAlpha1990,Henmi1988,Noda2019}, we analyze the stability of single mode operation by evaluating the threshold gain difference between the first topological mode lasing and the second bulk mode lasing. We find that the stability of the single mode lasing increases with a stronger coupling of the site resonators and reducing optical loss in them. Furthermore, we study the robustness of the single mode lasing under the presence of imperfections. From the discussion, we deduce a possible direction of the device design for robust single mode lasing with high output power. We believe our results pave a new path towards single mode high power lasers based on topological photonics. \section{Characteristics of an ideal topological edge-mode laser} \label{sec:ideal} \subsection{Theoretical model} \begin{figure*}[h!] \centering\includegraphics[width=\linewidth]{Scheme.pdf} \caption{(a)Schematic of the investigated topological laser structure. (b)Tight-binding model of the laser structure. An edge mode deterministically emerges at the interface of the two topologically distinct photonic lattices. $N$ and $L$ correspond to the total number of dimers and the number specifying the topological phase boundary, respectively. Note that there exists a single auxiliary site at the end of the topological chain. $\kappa_1$ (single line) and $\kappa_2$ (double lines) indicate weak and strong coupling strengths between neighboring sites, respectively. $\gamma_{\rm{gain}}$ expresses gain supplied on A-site. The application of this tight binding model to the system described in (a) would be valid as long as the longitudinal cavity modes behave independently or when assuming arrays of cavities supporting only single longitudinal mode such as $\lambda/4$-shifted distributed feedback resonators.} \label{fig:scheme} \end{figure*} We discuss a large-scale topological array laser composed of a number of site resonators. Figure \ref{fig:scheme}(a) shows a schematic implementation of such a laser based on Fabry-Pérot cavities, which can in principle be substituted by any other laser resonators. Each cavity supports a well defined single lateral mode and optically couples to neighboring cavities. Well-designed couplings between the cavities allow for the appearance of a topological lateral mode distributed over the nearly all of the cavity array, as we will describe shortly later. 
Electrodes are patterned on specific site cavities to selectively supply gain, which promotes lasing from the designed topological mode. We target a system including a few hundred resonators. If each resonator delivers $\sim$100 mW output, the topological laser could be operated as a 10W-class laser. For theoretical analysis, we map this array laser to a simple tight binding model. We consider an array of single-mode resonators that resembles the Su-Schriffer-Heeger (SSH) model\cite{SSHcourse}. In the SSH model, the resonator chain is dimerized and its unit cell contains two resonators called A- and B-sites. When the coupling strengths for both the inter- and intra-unit cell hopping are the same, the model exhibits gapless energy bands in momentum space. Meanwhile, when the two coupling strengths are unequal, a gap appears between the two bands. For a SSH chain with a larger inter-cell coupling than the intra-cell coupling, its band topology becomes topologically non-trivial and topological localized modes emerge at the edges of the bulk chain according to the bulk-edge correspondence. More quantitatively, the topological properties of the energy bands can be characterized using Zak phases, which are defined by the integral of the Berry connection over the first Brillouin zone\cite{Zak1989}. For a topological band, its Zak phase takes a nonzero value and becomes $\pi$ when inversion symmetry is preserved in the system. To obtain the desired laser cavity mode, we interface two SSH chains at the center of the system, as schematically illustrated in Figure\ref{fig:scheme}(b). The two chains are topologically trivial and non-trivial, respectively. In this case, a single topological interface mode appears deterministically around the interface\cite{Henning2013,Ota2018}, with which we design a single mode laser. Since the other end of the topological SSH chain could support another edge state, we terminate the chain with an auxiliary site resonator strongly coupled to the bulk chain, by which we can suppress the emergence of the unwanted extra edge state. Note that this configuration is similar to the design reported in Ref.\cite{Zhao2018}. However, they studied a tightly localized edge mode at the interface in a small lattice, in stark contrast with the current work investigating a broadly distributed interface mode in a large-scale lattice. The system under consideration is described by the following Hamiltonian, \begin{eqnarray} \begin{split} \mathcal{H} &= \sum_{m=1}^{N+1} \left( i\gamma_{A,m} + \omega_{A,m} \right) \ket{m,A} \bra{m,A} \\ &+ \sum_{m=1}^{N} \left( i\gamma_{B,m} + \omega_{B,m} \right) \ket{m,B} \bra{m,B} \\ &+ \sum_{m=1}^{L} \left[\kappa_{2,m} \left( \ket{m,B} \bra{m,A}+ h.c. \right) \right. \\ &\left.+ \kappa_{1,m} \left( \ket{m+1,A} \bra{m,B}+ h.c. \right) \right] \\ &+ \sum_{m=L+1}^{N} \left[\kappa_{1,m} \left( \ket{m,B} \bra{m,A}+ h.c. \right) \right. \\ &\left. + \kappa_{2,m} \left( \ket{m+1,A} \bra{m,B}+ h.c. \right) \right], \end{split} \label{eq:Hamiltonian} \end{eqnarray} where $\omega_{A,m}$ and $\omega_{B,m}$ are the resonant frequencies of site A and B in a dimer $m$, respectively, while $\gamma_{A,m}$ and $\gamma_{B,m}$ denote gain and loss. Site-to-site coupling strengths are described by $ \kappa_{1,m}$ and $\kappa_{2,m}$. We suppose $\kappa_{1,m} < \kappa_{2,m}$, such that the topological SSH chain always remains topological. The total number of the dimers and the number specifying the topological phase boundary are set as $N= 100$ and $L= 50$, respectively. 
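We note that the Hamiltonian of Eq.~(\ref{eq:Hamiltonian}) is straightforward to set up numerically. The minimal NumPy sketch below is our own illustrative implementation (all names are ours), taking all couplings and on-site terms site-independent; it builds the $(2N+1)\times(2N+1)$ matrix, lets gain and loss enter as imaginary on-site terms, and, in the Hermitian limit, returns the zero-energy interface mode analyzed in the following subsections.
\begin{verbatim}
import numpy as np

def build_hamiltonian(N=100, L=50, k1=1.0, k2=1.04,
                      g_A=0.0, g_B=0.0, omega=0.0):
    """Tight-binding matrix with site ordering
    A_1, B_1, A_2, B_2, ..., A_N, B_N, A_{N+1}."""
    n_sites = 2 * N + 1
    H = np.zeros((n_sites, n_sites), dtype=complex)
    for m in range(1, N + 1):
        a, b, a_next = 2 * m - 2, 2 * m - 1, 2 * m
        intra = k2 if m <= L else k1          # A_m -- B_m bond
        inter = k1 if m <= L else k2          # B_m -- A_{m+1} bond
        H[a, b] = H[b, a] = intra
        H[b, a_next] = H[a_next, b] = inter
    for i in range(n_sites):                  # even indices are A sites
        H[i, i] = omega + 1j * (g_A if i % 2 == 0 else g_B)
    return H

# Hermitian limit: a single zero-energy mode living on the A sublattice
vals, vecs = np.linalg.eigh(build_hamiltonian().real)
i0 = np.argmin(np.abs(vals))
edge = vecs[:, i0]
print(vals[i0])                               # close to zero
print(np.sum(np.abs(edge[1::2]) ** 2))        # weight on B sites, close to zero
\end{verbatim}
The same helper, with nonzero imaginary on-site terms, is reused in the threshold-gain sweep sketched at the end of this section.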
Thus, the number of sites in the trivial array becomes $n_{\rm tri}=2L$ = $100$, while that in the topological array does $n_{\rm topo}=2(N-L)+1$= $101$. The latter number includes the single auxiliary site at the end of the topological chain. We neglect the presence of unwanted longitudinal modes in each resonator to simplify the analysis. This model is valid as long as the longitudinal modes behave independently or when assuming arrays of single longitudinal mode cavities such as $\lambda/4$-shifted distributed feedback resonators. Note that, in this section, we consider an ideal case where we henceforth set ($\kappa_{1,m}$, $\kappa_{2,m}$, $\gamma_{A,m}$, $\gamma_{B,m}$)=($\kappa_{1}$, $\kappa_{2}$, $\gamma_{A}$, $\gamma_{B}$) and $\omega_{A,m} = \omega_{B,m} = \omega$ for any dimer $m$ , unless otherwise indicated. \subsection{Eigenmodes in the absence of gain and loss} To understand the basic properties of the investigated system formulated in Eq.(\ref{eq:Hamiltonian}), we first analyze it in the absence of gain and loss. We set the coupling parameters to ($\kappa_{1,m}$, $\kappa_{2,m}$)= (1.0, 1.04), which serves as a basic parameter set for the subsequent discussion. We diagonalize the Hamiltonian and analyze the energy spectrum and the spatial profiles of the eigenmodes of the system. Figure \ref{fig:ideal}(a) shows computed eigenenergies $\varepsilon$ plotted in the complex energy plane. In the real energy spectrum, ${\rm Re}(\varepsilon)$, one can see an energy gap of approximately $2|\kappa_{1} - \kappa_{2}|$, in which an in-gap mode exists as expected from the topological design discussed above. The topological mode is fixed to the zero energy and the entire energy spectrum is symmetric with respect to the zero energy according to chiral symmetry existing in the system. We inspect the spatial profile of the zero energy topological mode and plot this in Figure \ref{fig:ideal}(b). The mode profile distributes over the entire lattice with amplitudes only on A-sites\cite{Zhao2018,Parto2018}. The spatial profile is well described by an approximated analytical expression given as $a_m =(-\kappa_{1}/ \kappa_{2})^{|m-L|}\times a_L, b_m=0$ for any $m$, where $a_m$ ($b_m$) is the field amplitude at $m$th A-site (B-site). The extent of the spatial profile depends on the ratio of coupling constants. The current ratio of $\kappa_2/\kappa_1$ = 1.04 is sufficiently small so that the edge mode profile is distributed over the entire 201 sites. Figure \ref{fig:ideal}(c) shows a spatial profile of one of the two band-edge bulk modes. In contrast to the topological edge mode, the amplitudes are essentially equally distributed over both A and B-site. The difference of the mode profile suggests that lasing from the topological edge mode can be selectively promoted by supplying gain only to A-sites. \begin{figure*}[h!] \centering\includegraphics[width=\linewidth]{ideal.pdf} \caption{(a) The eigenenergies $\varepsilon$ in the complex energy plane for the Hermitian case, zoomed around origin. (b) Spatial profile of the topological edge mode. The inset shows that the edge mode has non-zero amplitudes only on A-sites. (c) Spatial profile of the band-edge bulk mode for comparison. For (a)-(c), the parameters used are $\kappa_2/ \kappa_1= 1.04$, $\gamma_{\rm{loss}}=\gamma_{\rm{gain}}= 0$, $n_{\rm{tri}}= 100$ and $n_{\rm{topo}}= 101$. (d) The eigenenergies $\varepsilon$ in the complex energy plane for the topological laser system with gain and loss, zoomed around origin. 
(e) Spatial profile of the first lasing mode, i.e. topological edge mode. (f) Spatial profile of the second lasing mode steming from an amplified bulk mode, exhibiting non-zero amplitudes only on A-sites. For (d)-(f), the parameters used are $\kappa_1$= 1.0, $\kappa_2$= 1.04, $\gamma_{\rm{loss}}$= 0.2, and $\gamma_{\rm{gain}}$= 0.219. The system size is $n_{\rm{tri}}= 100$ and $n_{\rm{topo}}= 101$. In (b,c,e,f), blue and red bars indicate the amplitudes on A-site and B-site, respectively. } \label{fig:ideal} \end{figure*} \subsection{Eigenmodes under the presence of gain and loss} Next, we investigate the properties of the system when introducing gain and loss to assess the capability of single mode lasing. To account for modal loss normally existing in photonic devices, we assume that all site resonators experience an uniform loss at a rate $\gamma_{\rm loss}$. Then, we supply gain on the A-sites at a rate of $\gamma_{\rm gain}$. Thus, we introduce $\gamma_A=\gamma_{\rm gain}-\gamma_{\rm loss}$ and $\gamma_B=-\gamma_{\rm loss}$ as imaginary onsite potentials across all the sites. Figure \ref{fig:ideal}(d) shows representative eigenenergies in the complex energy plane with $\gamma_{\rm loss}= 0.2$ and $\gamma_{\rm gain}$= 0.218. Most eigenstates show negative ${\rm Im}(\varepsilon)$ and are expected to behave as lossy states. In contrast, the topological edge state solely acquires an explicit positive ${\rm Im}(\varepsilon)$, indicating that the state becomes the first lasing mode in the system. This result confirms that our design can promote single mode lasing from the designed topological edge state broadly distributed in the lattice. Meanwhile, in the plot, there is a bulk state with nearly zero real and imaginary energies, which, with additional gain, could be positive in the imaginary part and hence a second lasing state. The presence of such a bulk mode capable of lasing leads to the unwanted competition of lasing modes in the system. To stabilize the single mode lasing from the topological edge state, it is vital to design a system that suppresses lasing from the bulk modes. The reason why the topological edge and bulk states under concern preferentially acquire non-negative ${\rm Im}(\varepsilon)$ can be understood from their mode profiles presented in Figure\ref{fig:ideal}(e) and (f). Both of their mode profiles have dominant amplitudes on A-sites, where gain is selectively supplied. We identified that the bulk mode with spatial profile only on the A-sites arises from a phase transition similar to that occurring in parity-time (PT) symmetric systems\cite{Nori2019,El-Ganainy2018,Bo2014}. Note that, while the gain and loss are totally balanced in PT symmetric systems, we consider a system with varied gain and fixed loss. Subjected to a large supplied gain, some bulk modes experience a phase transition and choose to split in its imaginary energies (while in turn degenerate in real energies), which accompanies a drastic change of the field profiles. To gain more insight into the phase transition, we consider an infinite bulk SSH chain without any interface. In this case, the Hamiltonian represented in momentum space takes the form \begin{eqnarray} H(k)= \begin{pmatrix} i \gamma_A & \kappa_1+\kappa_2 {\rm e}^{-ika} \\ \kappa_1+\kappa_2 {\rm e}^{ika} & i \gamma_B \\ \end{pmatrix}, \end{eqnarray} where $a$ is the lattice constant and $k$ is a wave number. 
For band edge modes supported at the Brillouin zone edge, the eigenvalues are given by $\varepsilon (\omega) =\left[i(\gamma_A+\gamma_B) \pm \sqrt{-(\gamma_A-\gamma_B)^2 + 4(\kappa_1-\kappa_2) ^2}\right] /2$. The eigenvalues split in either real or imaginary part depending on the sign in the square root. One of the modes split in imaginary energy corresponding to the second lasing bulk mode in our case, as we will discuss later. Since we define $\gamma_A= \gamma_{\rm gain} -\gamma_{\rm loss}$ and $\gamma_B= -\gamma_{\rm loss}$, the critical gain that induces the imaginary energy splitting in the bulk mode is given by \begin{eqnarray} \gamma_{\rm gain}^{\rm{critical}} = 2 \left| \kappa_1- \kappa_2\right|. \label{eq:EP} \end{eqnarray} Across $\gamma_{\rm gain}^{\rm{critical}}$, one observes a phase transition in the bulk eigenstates. As has been anticipated from the Hamiltonian and the expression of eigenvalues above, the phase transition in the bulk system resembles that in PT symmetric systems. When $\gamma_{\rm gain} < \gamma_{\rm gain}^{\rm critical} $, the band-edge modes are in a phase analogous to the PT symmetric phase and exhibit mode profiles homogeneously-distributed for both sites. In contrast, when $\gamma_{\rm gain} > \gamma_{\rm gain}^{\rm critical}$, the band-edge modes are in a phase analogous to the broken-PT phase and therefore exhibit mode profiles that dominantly populate in either A- or B-site. The spatial profile of the bulk mode in Figure 2(f) shows the one in the broken phase. We note that $\gamma_{\rm gain}^{\rm critical}$ becomes larger when considering a bulk system with a finite size. For our system with 201 arrays, $\gamma_{\rm gain}^{\rm critical}$ is computed to be $ \sim 0.12$, instead of the analytical value $\gamma_{\rm gain}^{\rm critical}$ = 0.08 for the infinite system with $|\kappa_1 - \kappa_2|$ = 0.04. \subsection{Threshold gain difference} \label{subsec:EP} A way to assess the capability of single mode lasing is to measure the threshold gain difference among the lasing modes. In this work, the threshold gain for a mode is defined as the supplied gain at which the mode reaches Im$(\varepsilon)$=0. We will consider the threshold gain difference $\Delta\alpha$ between the first lasing topological mode and the second lasing bulk trivial mode. The former is defined to lase at $\gamma_{\rm{gain}}$=$\gamma_{\rm{th}}^{\rm{1st}}$ and the latter at $\gamma_{\rm{th}}^{\rm{2nd}}$, thus $\Delta\alpha=\gamma_{\rm{th}}^{\rm{2nd}}-\gamma_{\rm{th}}^{\rm{1st}}$. It is known that single mode lasing becomes more stable as $\Delta\alpha$ increases. The analysis based on $\Delta\alpha$ employs only the eigenmode analysis and thus is very simple, but nevertheless can effectively evaluate the single mode lasing stability. Figure \ref{fig:net gain}(a) shows the calculated Im($\varepsilon$) as a function of $\gamma_{\rm{gain}}$ for a system with $\gamma_{\rm{loss}} = 0.2$. A loss value of $\gamma_{\rm{loss}}= 0.2$ is consistent with conventional Fabry Perot semiconductor lasers as we will discuss in section 4. In the plot, it is clearly seen that the topological mode (colored in red) acquires gain much faster than the bulk modes (blue) and exhibits positive Im($\varepsilon$) at the lowest $\gamma_{\rm{gain}}$ among all the modes. The threshold gain for the edge mode $\gamma_{\rm{th}}^{\rm{1st}}$ equals to $\gamma_{\rm{loss}}$, since the edge mode distributes only on A-sites where gain is selectively supplied. 
With increasing $\gamma_{\rm{gain}}$, a bulk mode also reaches Im($\varepsilon$)=0 at $\gamma_{\rm{th}}^{\rm{2nd}} = 0.219$. Thus, $\Delta\alpha$ is equal to 0.019 in this particular example. If all bulk modes maintained an equal mode distribution on the A- and B-sites, $\Delta\alpha$ is expected to be $|\gamma_{\rm{loss}}|$ = 0.2, since they simply need additional gain to compensate the loss also in B-site. However, as already discussed above, some bulk modes undergo a phase transition that largely modifies their mode profiles. As such, the branched bulk modes acquire gain much faster than the rest of the bulk modes. This is the reason why $\Delta\alpha$ reduces in the case in Figure 3(a). Meanwhile, the largest $\Delta\alpha$ can be obtained when the bulk mode reaches its lasing threshold $\gamma_{\rm th}^{\rm 2nd}$ at $\gamma_{\rm gain}^{\rm critical}$, that is $\gamma_{\rm th}^{\rm 2nd}$ =$\gamma_{\rm gain}^{\rm critical}$, which is more preferable for stable single mode lasing from the topological mode. This situation is realized in Figure 3(b), where $\gamma_{\rm loss}$ is set to 0.06. The overall behaviors of the Im$(\varepsilon)$ curves are exactly the same as those in Figure 3(a), except for the difference in the imaginary energy offset. This indicates that $\gamma_{\rm loss}$ is a critical factor for controlling $\Delta\alpha$. We note that $\gamma_{\rm loss}= 0.06$ may be too small to properly account for conventional loss in semiconductor lasers, which we will discuss in section 4. \begin{figure*}[h!] \centering\includegraphics[width=\linewidth]{DeltaAlpha.pdf} \caption{(a)(b) Imaginary parts of eigenenergies of the system plotted as a function of supplied gain on A-site $\gamma_{\rm{gain}}$ for $\gamma_{\rm{loss}}$= 0.2 and 0.06, respectively. The red and blue lines indicate the energy of the edge mode and bulk modes. (c) Loss dependence of the threshold gain difference $\Delta \alpha$. The parameters used are $\kappa_1$= 1.0 and $\kappa_2$= 1.04, with a finite system consisting of $n_{\rm tri}= 100$ trivial and $n_{\rm topo}= 101$ topological cavities.} \label{fig:net gain} \end{figure*} In Figure 3(c), we evaluate $\Delta\alpha$ as a function of $\gamma_{\rm{loss}}$ for the system defined in Figure 1(b) with $\kappa_1$= 1.0 and $\kappa_2$= 1.04. The plot of $\Delta\alpha$ shows a peak at $\gamma_{\rm loss}$ = 0.06 where $\gamma_{\rm th}^{\rm 2nd}=\gamma_{\rm gain}^{\rm critical}$ holds, as discussed in Figure 3(b). For the region of $\gamma_{\rm{loss}} < 0.06$, there is a linear increase of $\Delta\alpha$ with increasing $\gamma_{\rm{loss}}$. In this situation, $\gamma_{\rm th}^{\rm 2nd}$ is lower than $\gamma_{\rm gain}^{\rm critical}$ and the second lasing starts before the bulk modes get branched. For the region of $\gamma_{\rm loss} > 0.06$, there is a monotonic decrease of $\Delta\alpha$ with increasing $\gamma_{\rm{loss}}$. In this situation, the second bulk-mode lasing occurs from a branched mode and thus $\gamma_{\rm{th}}^{\rm{2nd}}$ becomes closer to $\gamma_{\rm{th}}^{\rm{1st}}$. We here summarize the points to be considered for increasing $\Delta\alpha$ in our system. (i) The maximum possible $\Delta\alpha$ is obtained when $\gamma_{\rm th}^{\rm 2nd}$ = $\gamma_{\rm gain}^{\rm critical}$. (ii) A large $\gamma_{\rm gain}^{\rm critical}$ is preferable for enhancing $\Delta\alpha$. 
(iii) $\gamma_{\rm gain}^{\rm critical}$ can be increased by increasing $|\kappa_1 - \kappa_2|$, while $|\kappa_1/\kappa_2|$ should be close to one for maintaining a large field extent of the topological mode. Thus, one should take large $\kappa_1$ and $\kappa_2$ with $|\kappa_1/\kappa_2| \sim 1$. (iv) There is an optimal $\gamma_{\rm loss}$ in the system with respect to $\gamma_{\rm gain}^{\rm critical}$ for maximizing $\Delta\alpha$. For semiconductor array lasers based on conventional lossy resonators, the above discussion suggests that it is important to employ low-loss resonators with high resonator-resonator couplings. We will revisit more practical considerations for achieving a single-mode large-area topological laser in section 4. \section{Effects of disorders and long-range interactions on the single-mode lasing operation} In this section, we evaluate the stability of the single mode lasing in the presence of imperfections by primarily considering $\Delta\alpha$. We examine the effects of inhomogeneous coupling strengths and resonator frequencies that are the most likely types of disorder induced by fabrication imperfections. Previous works have studied the effect of such disorders in 1D SSH models; however, most of them focus on the properties of tightly localized topological edge modes\cite{Amo2017,Prodan2014,Platero2019,Bauer2019,Kennett2020}. In contrast, our interest lies in the broadly distributed topological edge mode and the stability of its single mode lasing in competition with a bulk mode. We also discuss the effect of interactions between next-nearest neighbor resonators, which are likely to occur in optical resonator arrays in the course of increasing nearest neighbor couplings. \subsection{Inhomogeneous coupling strengths} First, we investigate the effects of inhomogeneity in the coupling strengths on the laser array systems discussed so far, i.e. those constructed with $\kappa_{1}$= 1.0 and $\kappa_{2}$= 1.04 for 201 sites. We prepare coupling strengths randomly distributed among all sites by generating different sets of Gaussian random variables with means $\kappa_{1}$= 1.0 (for intra-dimer coupling) and $\kappa_{2}$= 1.04 (inter-dimer) and a common standard deviation of randomness $r_{\kappa}$. For each set of parameters with the randomness, we solve the Hamiltonian in Eq.(\ref{eq:Hamiltonian}) by diagonalization. In order to study the $r_{\kappa}$ dependence of the laser system, we generate 100 different sets of parameters for each $r_{\kappa}$, and average the outcomes. Throughout this section, the error bars represent half of the standard deviation, $\sigma/2$. We note that the disorder discussed here can be interpreted as random distances between the site resonators, hence it only breaks parity symmetry, while preserving chiral symmetry. \begin{figure*}[h!] \centering\includegraphics[width=\linewidth]{inhomo coupling2.pdf} \caption{(a) Threshold gain difference between the first and second lasing modes as a function of coupling disorder $r_{\kappa}$ in a finite system consisting of $n_{\rm{tri}}= 100$ trivial and $n_{\rm{topo}}=101$ topological cavities. (b) Representative sample of the edge mode profile. (c) Representative sample of the bulk mode profile. Blue bars indicate the amplitudes on A-site. (d) Imaginary parts of eigenenergies of the system plotted as a function of supplied gain on A-site $\gamma_{\rm gain}$. The red and blue lines indicate the energy of the edge mode and bulk modes, respectively.
The parameters used are $\kappa_{1}$= 1.0, $\kappa_{2}$= 1.04 and $\gamma_{\rm loss}$= 0.06. In (b)-(d), the randomness is $r_{\kappa}$= 0.1.} \label{fig:inhomo coupling} \end{figure*} Figure \ref{fig:inhomo coupling}(a) shows the computed threshold gain difference $\Delta\alpha$ for a system subject to $\gamma_{\rm{loss}} = 0.06$. This is the case realizing the largest $\Delta\alpha$ in the disorder-free limit. As the randomness $r_{\kappa}$ increases, a decrease of $\Delta\alpha$ is observed. However, $\Delta\alpha$ remains $\sim$70$\%$ of the maximum even when $r_{\kappa}$ = 0.1, where the standard deviation of the randomness exceeds the bandgap of the infinite Hermitian system, $2|\kappa_1-\kappa_2|$ = 0.08. This result indicates the robustness of the single mode lasing from the resonator array device. In the current case, the threshold gain for the first lasing mode, $\gamma_{\rm th}^{\rm 1st}$, remains unchanged even when introducing the disorders. This is a consequence of the preserved chiral symmetry, which leads to a zero energy mode with its mode amplitude only on A-sites, thus always reaching the threshold gain exactly when the loss in the A-sites is compensated. Therefore, the observed decrease of $\Delta\alpha$ arises solely from the decrease of the threshold gain for the second lasing mode $\gamma_{\rm th}^{\rm 2nd}$. As discussed in the previous sections, $\gamma_{\rm th}^{\rm 2nd}$ diminishes for a lower $\gamma_{\rm gain}^{\rm critical}$, which scales with $2|\kappa_1-\kappa_2|$ for the unperturbed case. We consider that the introduction of randomness masks the difference between the couplings $\kappa_1$ and $\kappa_2$ and hence effectively reduces $|\kappa_1-\kappa_2|$. Accordingly, we found a gradual reduction of the width of the average bandgap in the system with increasing $r_{\kappa}$. To further verify the above discussion, we computed the spatial profiles of the first and second lasing modes for the case with $r_{\kappa}$ = 0.1, as plotted in Fig. 4(b) and (c), respectively. We plot typical mode profiles taken from a realization that yields the average $\Delta\alpha$ among the 100 trials. The mode profiles resemble those computed for $r_{\kappa}$ = 0. This observation confirms that the first lasing mode originates from the topological interface mode and the second one originates from the band-edge bulk mode as observed in the unperturbed case. Figure 4(d) shows the computed ${\rm Im}(\varepsilon)$ as a function of $\gamma_{\rm gain}$ for the parameter set used in Fig. 4 (b) and (c). As anticipated above, one can see the reduction of $\gamma_{\rm gain}^{\rm critical}$ to 0.10 and hence of $\Delta\alpha$ to $\sim$70$\%$ of the disorder-free case in Fig. 3(b). Overall, it was found that the topological mode behaves robustly even in the presence of coupling-strength disorder with $r_{\kappa} > 2|\kappa_1 - \kappa_2|$. In section 4, we will quantitatively discuss $r_{\kappa}$ by referring to the required accuracy in the actual device fabrication for an example case. It is interesting to note that the interface of the two topologically distinct chains may effectively remain even with such a large $r_{\kappa}$, as indicated in the spatial profile of the zero energy mode plotted in Fig. 4(b). The mode has a peak near the center of the system, where the interface is originally located. Another important note is that a very similar tendency was observed for the case that only replaces $\gamma_{\rm{loss}}$ from 0.06 to 0.2.
Even in this case, we observed a reduction of the average $\Delta\alpha$ to $\sim$70$\%$ of its disorder-free value when $r_{\kappa}$ = 0.1. This result implies that loss does not essentially alter the behavior of the system subject to inhomogeneous coupling strengths. \subsection{Inhomogeneous site resonator frequencies} Next, we perform calculations for the cases with fluctuations in the resonance frequencies of the site resonators. We treat inhomogeneity in the resonator detunings $\Delta$ after subtracting a common frequency offset $\omega$ from the Hamiltonian in Eq. (1). For the perfectly regular case, $\Delta$ equals zero for any $m$th resonator. We prepare 100 sets of random $\Delta$'s distributed as Gaussian random variables with mean $\Delta = 0$ and standard deviation $r_{\Delta}$. We introduce each set of generated random detunings in Eq.(\ref{eq:Hamiltonian}) and solve it by diagonalization for the system with $\kappa_1$ = 1.0 and $\kappa_2$ = 1.04. The data for each $r_{\Delta}$ are averaged and plotted in the same way as in the previous section. \begin{figure*}[h!] \centering\includegraphics[width=\linewidth]{inhomo detuning.pdf} \caption{ (a) Threshold gain of the first lasing mode as a function of the strength of inhomogeneity in detuning $r_{\Delta}$ in a finite system consisting of $n_{\rm{tri}}= 100$ trivial and $n_{\rm{topo}}= 101$ topological cavities. The inset shows a representative sample of the edge mode profile for $r_{\Delta}= 0.1$ where blue and red bars indicate the amplitudes on A-site and B-site, respectively. (b) Threshold gain difference $\Delta\alpha$. (c) Imaginary parts of eigenenergies versus supplied gain on A-site $\gamma_{\rm gain}$. The red and blue lines indicate the energy of the edge mode and bulk modes, respectively. (d) Threshold gain difference $\Delta\alpha$ for the system with a higher resonator loss of $\gamma_{\rm loss}$= 0.20. The coupling constant is $\kappa_2/\kappa_1$= 1.04 and the loss is $\gamma_{\rm loss}$= 0.06 in (a)-(c) and $\gamma_{\rm loss}$= 0.20 in (d). } \label{fig:inhomo detuning} \end{figure*} Figure \ref{fig:inhomo detuning}(a) shows the average $\gamma_{\rm{th}}^{\rm{1st}}$ with varying $r_{\Delta}$ for a system with $\gamma_{\rm{loss}}$ = 0.06. Unlike the case with the coupling disorders, the average $\gamma_{\rm{th}}^{\rm{1st}}$ slightly increases with $r_{\Delta}$. In the system with non-zero $r_{\Delta}$, chiral symmetry is broken and thus the topological mode acquires a field amplitude also on the lossy B-sites, resulting in the increase of $\gamma_{\rm th}^{\rm 1st}$. This behavior can be confirmed in the mode profile in the inset in Figure \ref{fig:inhomo detuning}(a) calculated for a representative example with $r_{\Delta}$ = 0.1. The mode profile consists mainly of the original topological interface mode, but contains slight B-site amplitudes, which is consistent with the modest increase of $\gamma_{\rm th}^{\rm 1st}$. Figure \ref{fig:inhomo detuning}(b) shows the average $\Delta\alpha$ calculated for the system with $\gamma_{\rm loss}$ = 0.06. A monotonic decrease of $\Delta\alpha$ is found, which is much larger in magnitude than the increase in $\gamma_{\rm{th}}^{\rm 1st}$. Thus, the drop of $\Delta\alpha$ is expected to stem from a decrease of $\gamma_{\rm th}^{\rm 2nd}$. Figure \ref{fig:inhomo detuning}(c) shows the computed ${\rm Im}(\varepsilon)$ for the system discussed in the inset in Figure \ref{fig:inhomo detuning}(a).
As anticipated, an earlier growth of ${\rm Im}(\varepsilon)$ for a bulk mode is seen when increasing $\gamma_{\rm gain}$, making $\Delta\alpha$ smaller. In the plot, it is seen that the phase transition in the bulk modes is blurred and a diagonal bundle of the bulk modes is formed. These are the consequences of the symmetry breaking by the fluctuating $\Delta$. While sharp branches of the bulk modes are not observed in Figure \ref{fig:inhomo detuning}(c), the overall behaviors of the branched curves in ${\rm Im}(\varepsilon)$ are similar to those in Figure 3(b), in particular for large $\gamma_{\rm gain}$ roughly over 0.15. This comparison suggests that the fluctuation in $\Delta$ mainly influences how the ${\rm Im}(\varepsilon)$ curves branch out from the bulk mode bundle. Figure \ref{fig:inhomo detuning}(d) shows $\Delta\alpha$ computed for the system with $\gamma_{\rm loss}$ = 0.2. In contrast to the case with lower loss, the computed $\Delta\alpha$s are less sensitive to $r_{\Delta}$ for this larger $\gamma_{\rm loss}$. This is because introducing the fluctuating $\Delta$ does not alter the overall behaviors of the ${\rm Im}(\varepsilon)$ curves, in particular for large $\gamma_{\rm gain}$, at which $\Delta\alpha$ is measured for the case of $\gamma_{\rm loss}$ = 0.2. In other words, for large $\gamma_{\rm gain}$, the relationship between the ${\rm Im}(\varepsilon)$ curves of the topological mode and the competing bulk mode does not change largely, and neither does $\Delta\alpha$. \subsection{Next-nearest-neighbor cavity coupling} The discussion in section \ref{sec:ideal} reveals that larger coupling strengths between the site resonators are advantageous for achieving a large $\Delta\alpha$ and thus for stable single mode lasing from a broadly-distributed topological edge mode. Cavity array designs for increasing the coupling strengths between the nearest neighbor (NN) cavities may inevitably induce non-negligible next-nearest-neighbor (NNN) couplings, which will break chiral symmetry and thus could modify the performance of the laser device. In this section, we analyze the influence of NNN couplings on the investigated array laser. Figure \ref{fig:NNN}(a) explains the model we consider in this section. We define the ratio of the NNN couplings to the NN couplings by a factor $g$: $g=\kappa^{\rm{NNN}}/\kappa_1^{\rm{NN}}$, where $\kappa^{\rm{NN}}$ and $\kappa^{\rm{NNN}}$ denote the coupling strengths between NN and NNN cavities, respectively. We add a term of the NNN couplings to the Hamiltonian in Eq. (1) with $\kappa_1$ = 1.0 and $\kappa_2$ = 1.04 and solve it by diagonalization. Figure \ref{fig:NNN}(b) shows the computed $\Delta\alpha$ as a function of $g$. The plot contains two curves calculated for the system with $\gamma_{\rm{loss}}$ = 0.06 and 0.2, respectively. Interestingly, neither curve shows significant changes in $\Delta\alpha$ when increasing the strength of the NNN coupling, as long as $g < 0.5$. For both cases, the change in $\Delta\alpha$ is only 20$\%$ at maximum. These behaviors can be understood from the combination of the computed mode profile and ${\rm Im}(\varepsilon)$, as plotted in Figure 6(b) and (c) for the case with $\gamma_{\rm{loss}}$ = 0.06. We find that the introduction of the NNN coupling does not largely modify the mode profile and the ${\rm Im}(\varepsilon)$ curves compared to those computed with only NN coupling.
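To make the role of the NNN term concrete, the following minimal sketch (our own illustration, not the authors' code; it assumes Python with NumPy and simply appends uniform same-sublattice hoppings $\kappa^{\rm NNN}=g\kappa_1$ to the Hermitian interface chain) tracks the interface mode by its overlap with the $g=0$ profile and reports how much weight it acquires on the B-sites once chiral symmetry is broken.
\begin{verbatim}
import numpy as np

N, L = 100, 50                 # dimers and interface position, as in the text
kappa1, kappa2 = 1.0, 1.04     # weak / strong NN couplings
n_sites = 2 * N + 1            # 100 trivial + 101 topological sites

def build_H(g_nnn):
    H = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):              # nearest-neighbour bonds
        dimer, intra = i // 2, (i % 2 == 0)
        t = (kappa2 if intra else kappa1) if dimer < L \
            else (kappa1 if intra else kappa2)
        H[i, i + 1] = H[i + 1, i] = t
    for i in range(n_sites - 2):              # same-sublattice NNN bonds
        H[i, i + 2] = H[i + 2, i] = g_nnn * kappa1
    return H

# Reference interface mode of the chirally symmetric lattice (g = 0)
vals0, vecs0 = np.linalg.eigh(build_H(0.0))
ref = vecs0[:, np.argmin(np.abs(vals0))]

# Follow the mode through its overlap with the reference profile when g > 0
vals, vecs = np.linalg.eigh(build_H(0.3))
mode = vecs[:, np.argmax(np.abs(ref @ vecs))]
print("B-site weight for g = 0.3:", round(float(np.sum(mode[1::2] ** 2)), 3))
# The B-site weight vanishes for g = 0 and grows with g as chiral symmetry
# is broken, consistent with the mode profile discussed in the text.
\end{verbatim}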
We note that, in the presence of the NNN couplings, the topological edge mode includes B-site amplitudes in its mode profile as shown in the inset in Figure 6(b), and the bulk modes resolve their degeneracy and form a bundle in the ${\rm Im}(\varepsilon)$ curves as in Figure 6(c). These are the results of the absence of chiral symmetry in the system. We also note that $g > 0.5$ may be unlikely to occur for laser arrays based on evanescent mode coupling. Since evanescent fields decay exponentially in space, the NN coupling tends to be much larger than the NNN coupling for most laser cavities. These insights obtained in this section are encouraging for increasing $\Delta\alpha$ by strengthening the NN coupling while virtually ignoring the increase of the NNN coupling. \begin{figure*}[h!] \centering\includegraphics[width=\linewidth]{NNN.pdf} \caption{ (a) Extended tight-binding model for the topological laser, including the next-nearest-neighbor (NNN) couplings. Nearest-neighbor (NN) couplings and NNN couplings are given as \{$\kappa_1^{\rm{NN}},\kappa_2^{\rm{NN}}$\} and $\kappa^{\rm{NNN}}$, respectively. All sites are subject to loss at a rate of $\gamma_{\rm{loss}}$, while gain $\gamma_{\rm{gain}}$ is additionally supplied only to the A-sites. (b) Threshold gain difference $\Delta\alpha$ as a function of the ratio $g$ of NNN couplings to NN couplings in a finite system consisting of $n_{\rm{tri}}= 100$ trivial and $n_{\rm{topo}}= 101$ topological cavities. Blue and red dots are for the loss $\gamma_{\rm loss}$= 0.06 and 0.2, respectively. The inset in (b) provides a representative sample of the edge mode profile for $g= 0.3$ where blue and red bars indicate the amplitudes on A-site and B-site, respectively. (c) Imaginary parts of eigenenergies versus supplied gain on A-site $\gamma_{\rm gain}$. The red and blue lines indicate the energy of the edge mode and bulk modes, respectively. The parameters used are $\kappa_1$= 1.0, $\kappa_2$= 1.04 and the loss is set to $\gamma_{\rm{loss}}$= 0.06 in (c).} \label{fig:NNN} \end{figure*} \section{Discussion} In this section, we discuss the practically-achievable $\Delta\alpha$ for the topological array laser system that we discussed in the previous sections. The device under consideration consists of 201 site resonators with $\kappa_1$ = 1.0 and $\kappa_2$ = 1.04 so that it supports a broadly-distributed single topological edge mode. First, we estimate achievable strengths of $\kappa_1$ and $\kappa_2$ for conventional ridge-waveguide Fabry-Perot cavities based on GaAs/AlGaAs materials as an example. By choosing a ridge width of 1.4 {\textmu}m, a height of 1.6 {\textmu}m and a gap between the ridges of 0.5 {\textmu}m, coupling strengths of $\sim$100 cm$^{-1}$ are found to be achievable by simulations using a finite element method. Thus, in the following discussion, we mainly consider the cases with $\kappa_1$ = 100 cm$^{-1}$ and $\kappa_2$ = 104 cm$^{-1}$. Note that a fluctuation of $\kappa_1$ by 10$\%$ (corresponding to the case with $r_{\kappa} \sim$ 0.1) can only happen when the ridge-to-ridge distance varies by more than 150 nm. This level of fabrication imperfection is unlikely to occur using standard semiconductor processing technologies. Once the coupling strengths are fixed, the most critical factor determining $\Delta\alpha$ is the resonator loss. From Figure 3(c), it is possible to deduce a $\Delta\alpha$ of 0.019 for a loss of $\gamma_{\rm loss}$ = 0.2.
This case corresponds to a $\Delta\alpha$ of 1.9 cm$^{-1}$ when $\kappa_1$ = 100 cm$^{-1}$ and thus $\gamma_{\rm loss}$ = 20 cm$^{-1}$ (Table 1), which is a moderate loss for typical semiconductor lasers with careful design and fabrication. Given the previously reported values for semiconductor lasers\cite{Noda2019}, a $\Delta\alpha$ of 1.9 cm$^{-1}$ could lead to stable single mode lasing in the device. As indicated in Figure 3(c), the maximum possible $\Delta\alpha$ can be obtained at the optimal point of the loss setting with $\gamma_{\rm loss}$ = 0.06. For a system with $\kappa_1$ = 100 cm$^{-1}$, these values are converted into $\Delta\alpha$ = 6 cm$^{-1}$ and $\gamma_{\rm loss}$ = 6 cm$^{-1}$. While a $\Delta\alpha$ of 6 cm$^{-1}$ may be regarded as sufficiently high for stable single mode lasing, the loss of $\gamma_{\rm{loss}}$ = 6 cm$^{-1}$ is too low when assuming the use of standard semiconductor lasers. In general, the optical loss in a semiconductor Fabry-Perot laser with zero carrier injection is composed of optical propagation loss, mirror loss and absorption in the active material. For a GaAs/AlGaAs ridge waveguide, the propagation loss can be reduced to about a few cm$^{-1}$, while the mirror loss becomes 6 cm$^{-1}$ even for a 2 mm long cavity with a high reflection coating at a facet. Therefore, once photon absorption in the unpumped active material is included, it is rather hard to realize the resonator optical loss of $\gamma_{\rm{loss}}$ = 6 cm$^{-1}$ required to achieve the maximum possible $\Delta\alpha$ = 6 cm$^{-1}$. \begin{table*}[htb] \begin{center} \caption{Values of $\Delta\alpha$ and their corresponding $\gamma_{\rm loss}$ for two representative coupling strengths $\kappa_1$.} \begin{tabular}{l||c|c|c} \hline & Maximum $\Delta\alpha$ & $\gamma_{\rm loss}$ at maximum $\Delta\alpha$ & $\Delta\alpha$ at $\gamma_{\rm loss}$ = 20 cm$^{-1}$ \\ \hline $\kappa_1$ = 100 cm$^{-1}$ & 6 cm$^{-1}$ & 6 cm$^{-1}$ & 1.9 cm$^{-1}$ \\ \hline $\kappa_1$ = 150 cm$^{-1}$ & 9 cm$^{-1}$ & 9 cm$^{-1}$ & 4.2 cm$^{-1}$ \\ \hline \end{tabular} \end{center} \end{table*} There are several possible ways to significantly reduce the material absorption loss in semiconductor laser resonators for achieving a large $\Delta\alpha$. One straightforward way is to electrically pump the lossy resonators. By introducing an additional gain of $\gamma^{\rm B}_{\rm gain}$ to the B-sites, the loss is effectively reduced and thus $\gamma_{\rm gain}^{\rm critical}$ increases by $\gamma^{\rm B}_{\rm gain}$: i.e. Eq.(\ref{eq:EP}) is modified to $\gamma_{\rm gain}^{\rm critical} = 2 \left| \kappa_1- \kappa_2 \right| +\gamma^{\rm B}_{\rm gain}$. Recalling that the largest $\Delta\alpha$ can be realized when $\gamma_{\rm th}^{\rm 2nd}$ =$\gamma_{\rm gain}^{\rm critical}$ as shown in Figure 3(b), this configuration may provide a powerful route to stable single-mode lasing for a system with large $\gamma_{\rm{loss}}$. When $\gamma_{\rm loss}$ = 20 cm$^{-1}$, $\Delta\alpha$ can take the maximum possible value of 6 cm$^{-1}$ by injecting a $\gamma_{\rm gain}^{\rm B}$ of 14 cm$^{-1}$. Another possibility for reducing $\gamma_{\rm{loss}}$ is to use tailored gain materials and structures. It has been predicted that sufficiently p-doped semiconductor quantum dots can quench inter-band light absorption while maintaining high differential gain under electrical current injection\cite{Arakawa1982}. Thus, $\gamma_{\rm{loss}}$ will be reduced for both A- and B-sites.
However, suppressing the free-carrier absorption induced by the p-doping could be another experimental issue for achieving a low $\gamma_{\rm{loss}}$. Buried heterostructures \cite{NTTburied2016} could also be used to selectively reduce $\gamma_{\rm{loss}}$ of the B-sites by eliminating active materials only from the B-sites. Using the above-mentioned means, the absorption in the active materials may be suppressed so that the optical loss of $\gamma_{\rm{loss}}$ = 6 cm$^{-1}$ can be achieved, which is optimal for ensuring a large $\Delta\alpha$. It is also interesting to discuss other ways to improve $\Delta\alpha$ for large $\gamma_{\rm{loss}}$. As we have already observed, the introduction of NNN couplings is not largely detrimental to the single mode operation. Therefore, $\Delta\alpha$ can be enlarged by increasing the NN couplings $\kappa_1$ as shown in Table 1. When $\kappa_1$=150 cm$^{-1}$, $\Delta\alpha$ will be increased to 4.2 cm$^{-1}$ even in the case of $\gamma_{\rm loss}$ = 20 cm$^{-1}$. Note that the increased NN couplings also relax the condition for achieving the maximum $\Delta\alpha$. In such cases with large NN couplings, NNN and very long range couplings will become significant in determining the band structures, making the system more similar to photonic crystals where the long-range interactions are dominant. Designs of topological edge mode lasers using such structures toward high power output will be an interesting topic of further research. Another interesting approach for increasing $\Delta\alpha$ is to use additional auxiliary lossy resonators. According to Figure 2(e) and (f), the mode profiles of the topological edge mode and the competing bulk mode differ largely in terms of their envelope: the bulk mode extends further toward the exterior of the system. Therefore, it could be possible to selectively load more loss onto the bulk mode by terminating the system with auxiliary loss sites. We examined this idea for the system with $\kappa_1$= 100 cm$^{-1}$ by adding 10 lossy resonators, each with the same loss of $\gamma_{\rm loss}$ = 20 cm$^{-1}$, at each termination. We observed an increase of $\Delta\alpha$ from 1.9 cm$^{-1}$ to 2.4 cm$^{-1}$ in this case. Note that this approach does not work well for the cases with low $\gamma_{\rm loss}$, below 6 cm$^{-1}$. In such cases, the competing bulk mode lases before $\gamma_{\rm gain}$ reaches $\gamma_{\rm gain}^{\rm critical}$ and its mode profile differs from that in Figure 2(f), leading to a small overlap with the additional lossy sites. \section{Summary} We investigated a fundamental model of a broadly distributed single-mode topological edge-mode laser in the tight-binding approximation. We considered a sizable system consisting of 201 site resonators that could potentially lead to a 10W-class laser by assuming that each resonator delivers $\sim$100 mW output power. We clarified the conditions for single-mode operation by calculating the threshold gain difference $\Delta\alpha$ between the first lasing edge mode and the second lasing bulk mode, an important factor for evaluating the stability of the single-mode operation. Below is a summary of what we found through the discussion: (a) Under ideal conditions, $\Delta\alpha$ depends on the coupling strengths $\kappa_{1}$, $\kappa_{2}$ and the loss $\gamma_{\rm loss}$. There exists an optimal loss for each combination of the coupling strengths.
For a system based on semiconductor lasers, large $\kappa_{1}$ and $\kappa_{2}$ with $|\kappa_1/\kappa_2|\sim 1$ and a small $\gamma_{\rm loss}$ are most preferable for stable single mode lasing. (b) The single-mode operation of the edge mode is robust against disorders in coupling strengths and resonator detunings. (c) The topological laser is insensitive to the addition of resonator couplings among NNN sites. This suggests that one can design laser systems with large $\kappa_{1}$ and $\kappa_{2}$ while virtually ignoring the influence of the NNN couplings. (d) When assuming a set of realistic parameters for semiconductor lasers, $\Delta\alpha$ reaches a few cm$^{-1}$, which could be large enough for stable single-mode lasing. To conclude, we provided significant insights into topological lasers in the context of realizing high power lasers. This work may open up a new pathway for practical applications of topological photonics. \section*{Acknowledgements} The authors acknowledge support from JSPS KAKENHI Grant Number JP21J40088, MEXT KAKENHI Grant Numbers JP15H05700, JP15H05868 and 17H06138, JST CREST (JPMJCR19T1) and NEDO. T.B. is supported by the National Natural Science Foundation of China (62071301); State Council of the People’s Republic of China (D1210036A); NSFC Research Fund for International Young Scientists (11850410426); NYU-ECNU Institute of Physics at NYU Shanghai; the Science and Technology Commission of Shanghai Municipality (19XD1423000); the China Science and Technology Exchange Center (NGA-16-001). \bibliographystyle{unsrt} \section{Introduction} High power semiconductor lasers have been of great interest to the industrial market for their wide applications, prompting enormous efforts in improving their performance. A straightforward approach for increasing laser output power is to widen the emitting area, as adopted in tapered\cite{tapered1996}, broad-area\cite{Crump2013} and array lasers\cite{Fan2005}. However, wider emitting areas in general result in multi-mode lasing and thereby in the degradation of laser beam quality. To overcome this issue, various techniques have been examined to minimize the effects of unwanted lateral guided modes over the last few decades\cite{Ackley1986,Glova2003,Weyers2014,Erbert2012,Medina2021}. Single-mode operation can often be improved by a delicate cavity design that cleverly takes advantage of the difference between the spatial mode profiles of the target and other undesired modes\cite{Sarangan1996,Sumpf2009,Zheng2019,Forbes2018,Marsh2005,Harder2006,Erbert2008,Christodoulides2012,Feng2014,Miri2019,SUSY2019}. A remarkable example reported recently is based on a two-dimensional photonic crystal band-edge resonator with 10W-class output from a single optical mode\cite{Noda2019}. However, the designs of these structures tend to be highly delicate and sometimes significantly complicate the fabrication process of the device. Such complexity in design motivates the search for a simpler scheme that enables high power single mode semiconductor lasers. A potential approach in this direction is that of topological lasers, which leverage topological photonics for designing lasing optical modes \cite{Bahari2017,Amo2017,Khajavikhan2018,Segev2018,Parto2018,Zhao2018,polariton2018,Ota2018,PhC2019,bulklaser2020,Wang2020,corner2020-1,corner2020-2,RenMinMa2020,Kante2020}. Topological photonics offers a novel route for designing optical modes with distinctive properties compared to conventional approaches\cite{Soljacic2014,ShvetsReview2017,Ozawa2019,Ota2020,Nonlinear2020,IwamotoReview2021}.
A typical topological laser consists of a single topological edge mode that deterministically appears in a topological bandgap as a result of a topological mechanism called the bulk-edge correspondence\cite{Jackiw1976,Hatsugai1993a,Hatsugai1993b}. The topological edge modes are known to behave robustly against certain perturbations due to topological protection, which is suitable for developing robust single mode lasers. Topological ring lasers have been demonstrated using one-dimensional topological edge states propagating at the exterior of the bulk emulating quantum Hall\cite{Bahari2017}, quantum spin Hall \cite{Khajavikhan2018,RenMinMa2020} and quantum valley Hall systems \cite{Kante2020,Wang2020}. Single mode lasing devices have been demonstrated in these systems and the possibility of realizing robust single mode lasers with high slope efficiencies has been discussed. More recently, an electrically pumped topological laser has also been reported at mid-infrared wavelengths\cite{Wang2020}. Surface-emitting lasers utilizing Dirac cones or those with mass vortices also have been discussed as another candidate for a large-area laser\cite{DiracLaser2014,DiracVortex2020}. Topological lasers based on zero dimensional edge states are another topic that has gathered interest recently. Localized topological modes in arrays of resonators, such as micropillars\cite{Amo2017} and microring cavities\cite{Parto2018,Zhao2018}, have been combined with semiconductor gain to demonstrate lasing. Topological nanolasers have also been studied using topological photonic crystals supporting zero dimensional interface states \cite{Ota2018,PhC2019} and corner states as higher-order topological states as well\cite{OtaCorner2019,Cornerlasing2020}. So far, most of the works employing zero dimensional topological states aimed to investigate the lasing properties of tightly localized topological edge modes or to explore the physics of non-Hermitian topology therein\cite{Weimann2017,Parto2018,Zhao2018,Takata2017,Takata2018,Henning2015,Henning2018}. As such, there have been limited discussions for the application of topological edge modes for high power lasers by significantly expanding the mode profile in space. In this paper, we theoretically investigate a large-scale single-mode topological laser. We consider a sizable array laser that supports a single zero-dimensional topological edge state distributed over a few hundreds of site resonators. We formulate this model based on the tight binding approximation. Akin to a conventional analysis of semiconductor lasers\cite{DelAlpha1990,Henmi1988,Noda2019}, we analyze the stability of single mode operation by evaluating the threshold gain difference between the first topological mode lasing and the second bulk mode lasing. We find that the stability of the single mode lasing increases with a stronger coupling of the site resonators and reducing optical loss in them. Furthermore, we study the robustness of the single mode lasing under the presence of imperfections. From the discussion, we deduce a possible direction of the device design for robust single mode lasing with high output power. We believe our results pave a new path towards single mode high power lasers based on topological photonics. \section{Characteristics of an ideal topological edge-mode laser} \label{sec:ideal} \subsection{Theoretical model} \begin{figure}[h!] \centering\includegraphics[width=14cm]{Scheme.pdf} \caption{(a)Schematic of the investigated topological laser structure. 
(b)Tight-binding model of the laser structure. An edge mode deterministically emerges at the interface of the two topologically distinct photonic lattices. $N$ and $L$ correspond to the total number of dimers and the number specifying the topological phase boundary, respectively. Note that there exists a single auxiliary site at the end of the topological chain. $\kappa_1$ (single line) and $\kappa_2$ (double lines) indicate weak and strong coupling strengths between neighboring sites, respectively. $\gamma_{\rm{gain}}$ expresses gain supplied on A-site. The application of this tight binding model to the system described in (a) would be valid as long as the longitudinal cavity modes behave independently or when assuming arrays of cavities supporting only single longitudinal mode such as $\lambda/4$-shifted distributed feedback resonators.} \label{fig:scheme} \end{figure} We discuss a large-scale topological array laser composed of a number of site resonators. Figure \ref{fig:scheme}(a) shows a schematic implementation of such a laser based on Fabry-Pérot cavities, which can in principle be substituted by any other laser resonators. Each cavity supports a well defined single lateral mode and optically couples to neighboring cavities. Well-designed couplings between the cavities allow for the appearance of a topological lateral mode distributed over the nearly all of the cavity array, as we will describe shortly later. Electrodes are patterned on specific site cavities to selectively supply gain, which promotes lasing from the designed topological mode. We target a system including a few hundred resonators. If each resonator delivers $\sim$100 mW output, the topological laser could be operated as a 10W-class laser. For theoretical analysis, we map this array laser to a simple tight binding model. We consider an array of single-mode resonators that resembles the Su-Schriffer-Heeger (SSH) model\cite{SSHcourse}. In the SSH model, the resonator chain is dimerized and its unit cell contains two resonators called A- and B-sites. When the coupling strengths for both the inter- and intra-unit cell hopping are the same, the model exhibits gapless energy bands in momentum space. Meanwhile, when the two coupling strengths are unequal, a gap appears between the two bands. For a SSH chain with a larger inter-cell coupling than the intra-cell coupling, its band topology becomes topologically non-trivial and topological localized modes emerge at the edges of the bulk chain according to the bulk-edge correspondence. More quantitatively, the topological properties of the energy bands can be characterized using Zak phases, which are defined by the integral of the Berry connection over the first Brillouin zone\cite{Zak1989}. For a topological band, its Zak phase takes a nonzero value and becomes $\pi$ when inversion symmetry is preserved in the system. To obtain the desired laser cavity mode, we interface two SSH chains at the center of the system, as schematically illustrated in Figure\ref{fig:scheme}(b). The two chains are topologically trivial and non-trivial, respectively. In this case, a single topological interface mode appears deterministically around the interface\cite{Henning2013,Ota2018}, with which we design a single mode laser. Since the other end of the topological SSH chain could support another edge state, we terminate the chain with an auxiliary site resonator strongly coupled to the bulk chain, by which we can suppress the emergence of the unwanted extra edge state. 
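As a side note on the Zak-phase characterization mentioned above, the phase can be evaluated numerically with a discrete Wilson loop over the Brillouin zone. The short sketch below is our own illustration (assuming Python with NumPy; the 0-versus-$\pi$ assignment depends on the choice of unit cell and Bloch convention, here the periodic convention with the intra-cell bond listed first): it returns approximately $0$ for the trivial dimerization and approximately $\pi$ for the topological one.
\begin{verbatim}
import numpy as np

def zak_phase(k_intra, k_inter, n_k=400):
    # Discrete Wilson loop over the Brillouin zone for the lower SSH band.
    ks = np.linspace(0.0, 2.0 * np.pi, n_k, endpoint=False)
    lower = []
    for k in ks:
        q = k_intra + k_inter * np.exp(-1j * k)
        h = np.array([[0.0, q], [np.conj(q), 0.0]])
        _, vecs = np.linalg.eigh(h)
        lower.append(vecs[:, 0])          # lower-band Bloch eigenvector
    w = 1.0 + 0.0j
    for j in range(n_k):
        w *= np.vdot(lower[j], lower[(j + 1) % n_k])
    return abs(np.angle(w))               # ~0 (trivial) or ~pi (topological)

print(zak_phase(1.04, 1.00))  # strong intra-cell bond: trivial chain
print(zak_phase(1.00, 1.04))  # strong inter-cell bond: topological chain
\end{verbatim}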
Note that this configuration is similar to the design reported in Ref.\cite{Zhao2018}. However, they studied a tightly localized edge mode at the interface in a small lattice, in stark contrast with the current work investigating a broadly distributed interface mode in a large-scale lattice. It is also interesting to note that the topological cavity structure illustrated in Figure 1(a) is reminiscent to that of distributed feedback lasers. Indeed, the laser mode in a lambda/4-shifted distributed feedback laser can be interpreted as a special case of a topological interface mode. Nevertheless, the discrete topological lattice model preserving chiral symmetry discussed in this paper will lead to a unique field distribution capable of robust single mode lasing, making a stark contrast with conventional distributed feedback lasers, as we will demonstrate later. The system under consideration is described by the following Hamiltonian, \begin{eqnarray} \begin{split} \mathcal{H} &= \sum_{m=1}^{N+1} \left( i\gamma_{A,m} + \omega_{A,m} \right) \ket{m,A} \bra{m,A} + \sum_{m=1}^{N} \left( i\gamma_{B,m} + \omega_{B,m} \right) \ket{m,B} \bra{m,B} \\ &+ \sum_{m=1}^{L} \left[\kappa_{2,m} \left( \ket{m,B} \bra{m,A}+ h.c. \right) + \kappa_{1,m} \left( \ket{m+1,A} \bra{m,B}+ h.c. \right) \right] \\ &+ \sum_{m=L+1}^{N} \left[\kappa_{1,m} \left( \ket{m,B} \bra{m,A}+ h.c. \right) + \kappa_{2,m} \left( \ket{m+1,A} \bra{m,B}+ h.c. \right) \right], \end{split} \label{eq:Hamiltonian} \end{eqnarray} where $\omega_{A,m}$ and $\omega_{B,m}$ are the resonant frequencies of site A and B in a dimer $m$, respectively, while $\gamma_{A,m}$ and $\gamma_{B,m}$ denote gain and loss. Site-to-site coupling strengths are described by $ \kappa_{1,m}$ and $\kappa_{2,m}$. We suppose $\kappa_{1,m} < \kappa_{2,m}$, such that the topological SSH chain always remains topological. The total number of the dimers and the number specifying the topological phase boundary are set as $N= 100$ and $L= 50$, respectively. Thus, the number of sites in the trivial array becomes $n_{\rm tri}=2L$ = $100$, while that in the topological array does $n_{\rm topo}=2(N-L)+1$= $101$. The latter number includes the single auxiliary site at the end of the topological chain. We neglect the presence of unwanted longitudinal modes in each resonator to simplify the analysis. This model is valid as long as the longitudinal modes behave independently or when assuming arrays of single longitudinal mode cavities such as $\lambda/4$-shifted distributed feedback resonators. Note that, in this section, we consider an ideal case where we henceforth set ($\kappa_{1,m}$, $\kappa_{2,m}$, $\gamma_{A,m}$, $\gamma_{B,m}$)=($\kappa_{1}$, $\kappa_{2}$, $\gamma_{A}$, $\gamma_{B}$) and $\omega_{A,m} = \omega_{B,m} = \omega$ for any dimer $m$ , unless otherwise indicated. \subsection{Eigenmodes in the absence of gain and loss} To understand the basic properties of the investigated system formulated in Eq.(\ref{eq:Hamiltonian}), we first analyze it in the absence of gain and loss. We set the coupling parameters to ($\kappa_{1,m}$, $\kappa_{2,m}$)= (1.0, 1.04), which serves as a basic parameter set for the subsequent discussion. We diagonalize the Hamiltonian and analyze the energy spectrum and the spatial profiles of the eigenmodes of the system. Figure \ref{fig:ideal}(a) shows computed eigenenergies $\varepsilon$ plotted in the complex energy plane. 
In the real energy spectrum, ${\rm Re}(\varepsilon)$, one can see an energy gap of approximately $2|\kappa_{1} - \kappa_{2}|$, in which an in-gap mode exists as expected from the topological design discussed above. The topological mode is fixed to the zero energy and the entire energy spectrum is symmetric with respect to the zero energy owing to the chiral symmetry existing in the system. Note that chiral symmetry is preserved when the Hamiltonian $H$ satisfies $\Gamma^{\dagger} H \Gamma = -H$ for an operator $\Gamma$ with $\Gamma^2=1$, where $\Gamma$ is Hermitian and unitary. In general, a lattice with chiral symmetry is bipartite and has two sublattices such that no direct transition occurs between sites of the same sublattice. We inspect the spatial profile of the zero energy topological mode and plot this in Figure \ref{fig:ideal}(b). The mode profile is distributed over the entire lattice with amplitudes only on A-sites\cite{Zhao2018,Parto2018}. The spatial profile is well described by an approximated analytical expression given as $a_m =(-\kappa_{1}/ \kappa_{2})^{|m-L|}\times a_L, b_m=0$ for any $m$, where $a_m$ ($b_m$) is the field amplitude at the $m$th A-site (B-site). The extent of the spatial profile depends on the ratio of the coupling constants. The current ratio of $\kappa_2/\kappa_1$ = 1.04 is sufficiently small so that the edge mode profile is distributed over the entire 201 sites. Figure \ref{fig:ideal}(c) shows a spatial profile of one of the two band-edge bulk modes. In contrast to the topological edge mode, the amplitudes are essentially equally distributed over both A- and B-sites. The difference of the mode profiles suggests that lasing from the topological edge mode can be selectively promoted by supplying gain only to the A-sites. \begin{figure}[h!] \centering\includegraphics[width=14cm]{ideal.pdf} \caption{(a) The eigenenergies $\varepsilon$ in the complex energy plane for the Hermitian case, zoomed around origin. (b) Spatial profile of the topological edge mode. The inset shows that the edge mode has non-zero amplitudes only on A-sites. (c) Spatial profile of the band-edge bulk mode for comparison. For (a)-(c), the parameters used are $\kappa_2/ \kappa_1= 1.04$, $\gamma_{\rm{loss}}=\gamma_{\rm{gain}}= 0$, $n_{\rm{tri}}= 100$ and $n_{\rm{topo}}= 101$. (d) The eigenenergies $\varepsilon$ in the complex energy plane for the topological laser system with gain and loss, zoomed around origin. (e) Spatial profile of the first lasing mode, i.e. the topological edge mode. (f) Spatial profile of the second lasing mode stemming from an amplified bulk mode, exhibiting non-zero amplitudes only on A-sites. For (d)-(f), the parameters used are $\kappa_1$= 1.0, $\kappa_2$= 1.04, $\gamma_{\rm{loss}}$= 0.2, and $\gamma_{\rm{gain}}$= 0.219. The system size is $n_{\rm{tri}}= 100$ and $n_{\rm{topo}}= 101$. In (b,c,e,f), blue and red bars indicate the amplitudes on A-site and B-site, respectively. } \label{fig:ideal} \end{figure} \subsection{Eigenmodes under the presence of gain and loss} Next, we investigate the properties of the system when introducing gain and loss to assess the capability of single mode lasing. To account for modal loss normally existing in photonic devices, we assume that all site resonators experience a uniform loss at a rate $\gamma_{\rm loss}$. Then, we supply gain on the A-sites at a rate of $\gamma_{\rm gain}$. Thus, we introduce $\gamma_A=\gamma_{\rm gain}-\gamma_{\rm loss}$ and $\gamma_B=-\gamma_{\rm loss}$ as imaginary onsite potentials across all the sites.
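The construction just described is straightforward to reproduce numerically. The sketch below (our own illustration, not the authors' code; Python with NumPy is assumed, with the parameters of this section) builds Eq.(\ref{eq:Hamiltonian}) as a dense matrix with the imaginary onsite potentials $\gamma_A$ and $\gamma_B$ and identifies the eigenmode with the largest ${\rm Im}(\varepsilon)$, which turns out to be the A-site-only topological mode with ${\rm Im}(\varepsilon)=\gamma_{\rm gain}-\gamma_{\rm loss}$.
\begin{verbatim}
import numpy as np

N, L = 100, 50
kappa1, kappa2 = 1.0, 1.04
gamma_loss, gamma_gain = 0.2, 0.218
n_sites = 2 * N + 1                      # 201 site resonators

H = np.zeros((n_sites, n_sites), complex)
for i in range(n_sites - 1):             # nearest-neighbour couplings
    dimer, intra = i // 2, (i % 2 == 0)
    t = (kappa2 if intra else kappa1) if dimer < L \
        else (kappa1 if intra else kappa2)
    H[i, i + 1] = H[i + 1, i] = t
gamma_A = gamma_gain - gamma_loss        # net gain on A-sites (even indices)
gamma_B = -gamma_loss                    # loss on B-sites (odd indices)
H += 1j * np.diag([gamma_A if i % 2 == 0 else gamma_B
                   for i in range(n_sites)])

vals, vecs = np.linalg.eig(H)
first = np.argmax(vals.imag)             # mode that acquires gain first
mode = vecs[:, first]
print("Im(eps) of the first lasing mode:", round(vals[first].imag, 4))
print("its weight on B-sites          :",
      round(float(np.sum(np.abs(mode[1::2]) ** 2)), 4))
# The largest Im(eps) equals gamma_gain - gamma_loss, and the mode carries
# (numerically) zero weight on the B-sites, i.e. it is the topological mode.
\end{verbatim}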
Figure \ref{fig:ideal}(d) shows representative eigenenergies in the complex energy plane with $\gamma_{\rm loss}= 0.2$ and $\gamma_{\rm gain}$= 0.218. Most eigenstates show negative ${\rm Im}(\varepsilon)$ and are expected to behave as lossy states. In contrast, the topological edge state solely acquires an explicit positive ${\rm Im}(\varepsilon)$, indicating that the state becomes the first lasing mode in the system. This result confirms that our design can promote single mode lasing from the designed topological edge state broadly distributed in the lattice. Meanwhile, in the plot, there is a bulk state with nearly zero real and imaginary energies, which, with additional gain, could be positive in the imaginary part and hence a second lasing state. The presence of such a bulk mode capable of lasing leads to the unwanted competition of lasing modes in the system. To stabilize the single mode lasing from the topological edge state, it is vital to design a system that suppresses lasing from the bulk modes. The reason why the topological edge and bulk states under concern preferentially acquire non-negative ${\rm Im}(\varepsilon)$ can be understood from their mode profiles presented in Figure\ref{fig:ideal}(e) and (f). Both of their mode profiles have dominant amplitudes on A-sites, where gain is selectively supplied. We identified that the bulk mode with spatial profile only on the A-sites arises from a phase transition similar to that occurring in parity-time (PT) symmetric systems\cite{Nori2019,El-Ganainy2018,Bo2014}. Note that, while the gain and loss are totally balanced in PT symmetric systems, we consider a system with varied gain and fixed loss. Subjected to a large supplied gain, some bulk modes experience a phase transition and choose to split in its imaginary energies (while in turn degenerate in real energies), which accompanies a drastic change of the field profiles. To gain more insight into the phase transition, we consider an infinite bulk SSH chain without any interface. In this case, the Hamiltonian represented in momentum space takes the form \begin{eqnarray} H(k)= \begin{pmatrix} i \gamma_A & \kappa_1+\kappa_2 {\rm e}^{-ika} \\ \kappa_1+\kappa_2 {\rm e}^{ika} & i \gamma_B \\ \end{pmatrix}, \end{eqnarray} where $a$ is the lattice constant and $k$ is a wave number. For band edge modes supported at the Brillouin zone edge, the eigenvalues are given by $\varepsilon (\omega) =\left[i(\gamma_A+\gamma_B) \pm \sqrt{-(\gamma_A-\gamma_B)^2 + 4(\kappa_1-\kappa_2) ^2}\right] /2$. The eigenvalues split in either real or imaginary part depending on the sign in the square root. One of the modes split in imaginary energy corresponding to the second lasing bulk mode in our case, as we will discuss later. Since we define $\gamma_A= \gamma_{\rm gain} -\gamma_{\rm loss}$ and $\gamma_B= -\gamma_{\rm loss}$, the critical gain that induces the imaginary energy splitting in the bulk mode is given by \begin{eqnarray} \gamma_{\rm gain}^{\rm{critical}} = 2 \left| \kappa_1- \kappa_2\right|. \label{eq:EP} \end{eqnarray} Across $\gamma_{\rm gain}^{\rm{critical}}$, one observes a phase transition in the bulk eigenstates. As has been anticipated from the Hamiltonian and the expression of eigenvalues above, the phase transition in the bulk system resembles that in PT symmetric systems. When $\gamma_{\rm gain} < \gamma_{\rm gain}^{\rm critical} $, the band-edge modes are in a phase analogous to the PT symmetric phase and exhibit mode profiles homogeneously-distributed for both sites. 
In contrast, when $\gamma_{\rm gain} > \gamma_{\rm gain}^{\rm critical}$, the band-edge modes are in a phase analogous to the broken-PT phase and therefore exhibit mode profiles that dominantly populate in either A- or B-site. The spatial profile of the bulk mode in Figure 2(f) shows the one in the broken phase. We note that $\gamma_{\rm gain}^{\rm critical}$ becomes larger when considering a bulk system with a finite size. For our system with 201 arrays, $\gamma_{\rm gain}^{\rm critical}$ is computed to be $ \sim 0.12$, instead of the analytical value $\gamma_{\rm gain}^{\rm critical}$ = 0.08 for the infinite system with $|\kappa_1 - \kappa_2|$ = 0.04. \subsection{Threshold gain difference} \label{subsec:EP} A way to assess the capability of single mode lasing is to measure the threshold gain difference among the lasing modes. In this work, the threshold gain for a mode is defined as the supplied gain at which the mode reaches Im$(\varepsilon)$=0. We will consider the threshold gain difference $\Delta\alpha$ between the first lasing topological mode and the second lasing bulk trivial mode. The former is defined to lase at $\gamma_{\rm{gain}}$=$\gamma_{\rm{th}}^{\rm{1st}}$ and the latter at $\gamma_{\rm{th}}^{\rm{2nd}}$, thus $\Delta\alpha=\gamma_{\rm{th}}^{\rm{2nd}}-\gamma_{\rm{th}}^{\rm{1st}}$. It is known that single mode lasing becomes more stable as $\Delta\alpha$ increases. The analysis based on $\Delta\alpha$ employs only the eigenmode analysis and thus is very simple, but nevertheless can effectively evaluate the single mode lasing stability. Figure \ref{fig:net gain}(a) shows the calculated Im($\varepsilon$) as a function of $\gamma_{\rm{gain}}$ for a system with $\gamma_{\rm{loss}} = 0.2$. A loss value of $\gamma_{\rm{loss}}= 0.2$ is consistent with conventional Fabry Perot semiconductor lasers as we will discuss in section 4. In the plot, it is clearly seen that the topological mode (colored in red) acquires gain much faster than the bulk modes (blue) and exhibits positive Im($\varepsilon$) at the lowest $\gamma_{\rm{gain}}$ among all the modes. The threshold gain for the edge mode $\gamma_{\rm{th}}^{\rm{1st}}$ equals to $\gamma_{\rm{loss}}$, since the edge mode distributes only on A-sites where gain is selectively supplied. With increasing $\gamma_{\rm{gain}}$, a bulk mode also reaches Im($\varepsilon$)=0 at $\gamma_{\rm{th}}^{\rm{2nd}} = 0.219$. Thus, $\Delta\alpha$ is equal to 0.019 in this particular example. If all bulk modes maintained an equal mode distribution on the A- and B-sites, $\Delta\alpha$ is expected to be $|\gamma_{\rm{loss}}|$ = 0.2, since they simply need additional gain to compensate the loss also in B-site. However, as already discussed above, some bulk modes undergo a phase transition that largely modifies their mode profiles. As such, the branched bulk modes acquire gain much faster than the rest of the bulk modes. This is the reason why $\Delta\alpha$ reduces in the case in Figure 3(a). Meanwhile, the largest $\Delta\alpha$ can be obtained when the bulk mode reaches its lasing threshold $\gamma_{\rm th}^{\rm 2nd}$ at $\gamma_{\rm gain}^{\rm critical}$, that is $\gamma_{\rm th}^{\rm 2nd}$ =$\gamma_{\rm gain}^{\rm critical}$, which is more preferable for stable single mode lasing from the topological mode. This situation is realized in Figure 3(b), where $\gamma_{\rm loss}$ is set to 0.06. The overall behaviors of the Im$(\varepsilon)$ curves are exactly the same as those in Figure 3(a), except for the difference in the imaginary energy offset. 
This indicates that $\gamma_{\rm loss}$ is a critical factor for controlling $\Delta\alpha$. We note that $\gamma_{\rm loss}= 0.06$ may be too small to properly account for conventional loss in semiconductor lasers, which we will discuss in section 4. \begin{figure}[h!] \centering\includegraphics[width=14cm]{DeltaAlpha.pdf} \caption{(a)(b) Imaginary parts of eigenenergies of the system plotted as a function of supplied gain on A-site $\gamma_{\rm{gain}}$ for $\gamma_{\rm{loss}}$= 0.2 and 0.06, respectively. The red and blue lines indicate the energy of the edge mode and bulk modes. (c) Loss dependence of the threshold gain difference $\Delta \alpha$. The parameters used are $\kappa_1$= 1.0 and $\kappa_2$= 1.04, with a finite system consisting of $n_{\rm tri}= 100$ trivial and $n_{\rm topo}= 101$ topological cavities.} \label{fig:net gain} \end{figure} In Figure 3(c), we evaluate $\Delta\alpha$ as a function of $\gamma_{\rm{loss}}$ for the system defined in Figure 1(b) with $\kappa_1$= 1.0 and $\kappa_2$= 1.04. The plot of $\Delta\alpha$ shows a peak at $\gamma_{\rm loss}$ = 0.06 where $\gamma_{\rm th}^{\rm 2nd}=\gamma_{\rm gain}^{\rm critical}$ holds, as discussed in Figure 3(b). For the region of $\gamma_{\rm{loss}} < 0.06$, there is a linear increase of $\Delta\alpha$ with increasing $\gamma_{\rm{loss}}$. In this situation, $\gamma_{\rm th}^{\rm 2nd}$ is lower than $\gamma_{\rm gain}^{\rm critical}$ and the second lasing starts before the bulk modes get branched. For the region of $\gamma_{\rm loss} > 0.06$, there is a monotonic decrease of $\Delta\alpha$ with increasing $\gamma_{\rm{loss}}$. In this situation, the second bulk-mode lasing occurs from a branched mode and thus $\gamma_{\rm{th}}^{\rm{2nd}}$ becomes closer to $\gamma_{\rm{th}}^{\rm{1st}}$. We here summarize the points to be considered for increasing $\Delta\alpha$ in our system. (i) The maximum possible $\Delta\alpha$ is obtained when $\gamma_{\rm th}^{\rm 2nd}$ = $\gamma_{\rm gain}^{\rm critical}$. (ii) A large $\gamma_{\rm gain}^{\rm critical}$ is preferable for enhancing $\Delta\alpha$. (iii) $\gamma_{\rm gain}^{\rm critical}$ can be increased by increasing $|\kappa_1 - \kappa_2|$, while $|\kappa_1/\kappa_2|$ should be close to one for maintaining a large field extent of the topological mode. Thus, one should take large $\kappa_1$ and $\kappa_2$ with $|\kappa_1/\kappa_2| \sim 1$. (iv) There is an optimal $\gamma_{\rm loss}$ in the system with respect to $\gamma_{\rm gain}^{\rm critical}$ for maximizing $\Delta\alpha$. For semiconductor array lasers based on conventional lossy resonators, the above discussion suggests that it is important to employ low-loss resonators with high resonator-resonator couplings. We will revisit more practical considerations for achieving a single-mode large-area topological laser in section 4. \section{Effects of disorders and long-range interactions on the single-mode lasing operation} In this section, we evaluate the stability of the single mode lasing in the presence of imperfections by primarily considering $\Delta\alpha$. We examine the effects of inhomogeneous coupling strengths and resonator frequencies that are the most likely types of disorder induced by fabrication imperfections. Previous works have studied the effect of such disorders in 1D SSH models; however, most of them focus on the properties of tightly localized topological edge modes\cite{Amo2017,Prodan2014,Platero2019,Bauer2019,Kennett2020}.
In contrast, our interest lies in the broadly distributed topological edge mode and the stability of its single-mode lasing in competition with a bulk mode. We also discuss the effect of interactions between next-nearest-neighbor resonators, which are likely to occur in optical resonator arrays in the course of increasing the nearest-neighbor couplings. \subsection{Inhomogeneous coupling strengths} First, we investigate the effects of inhomogeneity in the coupling strengths on the laser array systems discussed so far, i.e. those constructed with $\kappa_{1}$= 1.0 and $\kappa_{2}$= 1.04 for 201 sites. We prepare coupling strengths randomly distributed among all sites by generating different sets of Gaussian random variables with means $\kappa_{1}$= 1.0 (for the intra-dimer couplings) and $\kappa_{2}$= 1.04 (inter-dimer) and a common standard deviation of randomness $r_{\kappa}$. For each set of parameters with the randomness, we solve the Hamiltonian in Eq.(\ref{eq:Hamiltonian}) by diagonalization. In order to study the $r_{\kappa}$ dependence of the laser system, we generate 100 different sets of parameters for each $r_{\kappa}$ and average the outcomes. Throughout this section, the error bars represent half of the standard deviation, $\sigma/2$. We note that the disorder discussed here can be interpreted as random distances between the site resonators; hence it only breaks parity symmetry, while preserving chiral symmetry. \begin{figure}[h!] \centering\includegraphics[width=13cm]{inhomocoupling.pdf} \caption{(a) Threshold gain difference between the first and second lasing modes as a function of coupling disorder $r_{\kappa}$ in a finite system consisting of $n_{\rm{tri}}= 100$ trivial and $n_{\rm{topo}}=101$ topological cavities. The insets show representative mode profiles of the edge mode and bulk mode. Blue bars indicate the amplitudes on the A-sites. (b) Imaginary parts of eigenenergies of the system plotted as a function of supplied gain on the A-sites $\gamma_{\rm gain}$. The red and blue lines indicate the energy of the edge mode and bulk modes, respectively. The parameters used are $\kappa_{1}$= 1.0, $\kappa_{2}$= 1.04 and $\gamma_{\rm loss}$= 0.06. In (b), the randomness is $r_{\kappa}$= 0.1.} \label{fig:inhomo coupling} \end{figure} Figure \ref{fig:inhomo coupling}(a) shows the computed threshold gain differences $\Delta\alpha$ for a system subject to $\gamma_{\rm{loss}} = 0.06$. This is the condition that realizes the largest $\Delta\alpha$ in the disorder-free case. As the randomness $r_{\kappa}$ increases, $\Delta\alpha$ decreases. However, $\Delta\alpha$ remains $\sim$70$\%$ of the maximum even when $r_{\kappa}$ = 0.1, where the standard deviation of the randomness exceeds the bandgap of the infinite Hermitian system, $2|\kappa_1-\kappa_2|$ = 0.08. This result indicates the robustness of the single-mode lasing from the resonator array device. In the current case, the threshold gain for the first lasing mode, $\gamma_{\rm th}^{\rm 1st}$, remains unchanged even when the disorder is introduced. This is a consequence of the preserved chiral symmetry, which leads to a zero-energy mode with its amplitude only on the A-sites, so that it always reaches threshold exactly when the gain compensates the loss on the A-sites. Therefore, the observed decrease of $\Delta\alpha$ arises solely from the decrease of the threshold gain for the second lasing mode $\gamma_{\rm th}^{\rm 2nd}$.
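As a minimal illustration of the disorder ensemble just described (a sketch with illustrative variable names, using a uniform dimerization without the interface for brevity), the coupling draw, the 100-realization average and the associated narrowing of the Hermitian bandgap can be written as:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def disordered_bonds(n_sites=201, k1=1.0, k2=1.04, r_kappa=0.1):
    # clean pattern: k1 on intra-dimer bonds, k2 on inter-dimer bonds
    clean = np.where(np.arange(n_sites - 1) % 2 == 0, k1, k2)
    return rng.normal(loc=clean, scale=r_kappa)   # one disorder realization

def bulk_gap(bonds):
    # Hermitian part of the chain: estimate of the gap around zero energy
    H = np.diag(bonds, 1) + np.diag(bonds, -1)
    e = np.sort(np.abs(np.linalg.eigvalsh(H)))
    return 2 * e[1]    # e[0] ~ 0 is the zero mode; e[1] bounds the gap

for r in (0.0, 0.05, 0.1):
    gaps = [bulk_gap(disordered_bonds(r_kappa=r)) for _ in range(100)]
    print(r, np.mean(gaps), np.std(gaps) / 2)  # average gap, sigma/2 error bar
\end{verbatim}
The gradual narrowing of the average gap with increasing $r_{\kappa}$ visible in such a sketch is directly tied to the decrease of $\Delta\alpha$ discussed next.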
As discussed in the previous sections, $\gamma_{\rm th}^{\rm 2nd}$ diminishes for a lower $\gamma_{\rm gain}^{\rm critical}$, which scales with $2|\kappa_1-\kappa_2|$ in the unperturbed case. We consider that the introduction of randomness masks the difference between the couplings $\kappa_1$ and $\kappa_2$ and hence effectively reduces $|\kappa_1-\kappa_2|$. Accordingly, we found a gradual reduction of the average bandgap width in the system with increasing $r_{\kappa}$. To further verify the above discussion, we computed the spatial profiles of the first and second lasing modes for the case with $r_{\kappa}$ = 0.1, as plotted in the insets in Fig. 4(a). We plot typical mode profiles from a realization that yields the average $\Delta\alpha$ among the 100 trials. The mode profiles resemble those computed for $r_{\kappa}$ = 0. This observation confirms that the first lasing mode originates from the topological interface mode and the second one originates from the bulk edge mode, as observed in the unperturbed case. Figure 4(b) shows the computed ${\rm Im}(\varepsilon)$ as a function of $\gamma_{\rm gain}$ for the parameter set used in the insets in Fig. 4(a). As anticipated above, one can see the reduction of $\gamma_{\rm gain}^{\rm critical}$ to 0.10 and hence of $\Delta\alpha$ to $\sim$70$\%$ in comparison to the disorder-free case in Fig. 3(b). Overall, it was found that the topological mode behaves robustly even in the presence of coupling-strength disorder with $r_{\kappa} > 2|\kappa_1 - \kappa_2|$. In section 4, we will quantitatively discuss $r_{\kappa}$ by referring to the required accuracy in actual device fabrication for an example case. It is interesting to note that the interface of the two topologically distinct chains may effectively remain even with such a large $r_{\kappa}$, as indicated in the spatial profile of the zero-energy mode plotted in the insets in Fig. 4(a). The mode has a peak near the center of the system, where the interface is originally located. Another important note is that a very similar tendency was observed for the case in which only $\gamma_{\rm{loss}}$ is changed from 0.06 to 0.2. Even in this case, we observed a reduction of the average $\Delta\alpha$ to $\sim$70$\%$ of its disorder-free value when $r_{\kappa}$ = 0.1. This result implies that loss does not essentially alter the behavior of the system subject to inhomogeneous coupling strengths. \subsection{Inhomogeneous site resonator frequencies} Next, we perform calculations for the cases with fluctuations in the resonance frequencies of the site resonators. We treat inhomogeneity in the resonator detunings $\Delta$ after subtracting a common frequency offset $\omega$ from the Hamiltonian in Eq. (1). In the perfectly regular case, $\Delta$ equals zero for every resonator $m$. We prepare 100 sets of random $\Delta$'s drawn from Gaussian random variables with mean $\Delta = 0$ and standard deviation $r_{\Delta}$. We introduce each set of generated random detunings in Eq.(\ref{eq:Hamiltonian}) and solve it by diagonalization for the system with $\kappa_1$ = 1.0 and $\kappa_2$ = 1.04. The averaging and plotting of the data for each $r_{\Delta}$ follow the previous section. \begin{figure}[h!] \centering\includegraphics[width=13cm]{inhomodetuning.pdf} \caption{ (a) Threshold gain of the first lasing mode as a function of the strength of inhomogeneity in detuning $r_{\Delta}$ in a finite system consisting of $n_{\rm{tri}}= 100$ trivial and $n_{\rm{topo}}= 101$ topological cavities.
The inset shows a representative sample of the edge mode profile for $r_{\Delta}= 0.1$, where blue and red bars indicate the amplitudes on the A- and B-sites, respectively. (b) Threshold gain difference $\Delta\alpha$. (c) Imaginary parts of eigenenergies versus supplied gain on the A-sites $\gamma_{\rm gain}$. The red and blue lines indicate the energy of the edge mode and bulk modes, respectively. (d) Threshold gain difference $\Delta\alpha$ for the system with a higher resonator loss of $\gamma_{\rm loss}$= 0.20. The coupling constant is $\kappa_2/\kappa_1$= 1.04 and the loss is $\gamma_{\rm loss}$= 0.06 in (a)-(c) and $\gamma_{\rm loss}$= 0.20 in (d). } \label{fig:inhomo detuning} \end{figure} Figure \ref{fig:inhomo detuning}(a) shows the average $\gamma_{\rm{th}}^{\rm{1st}}$ with varying $r_{\Delta}$ for a system with $\gamma_{\rm{loss}}$ = 0.06. Unlike the case with coupling disorder, the average $\gamma_{\rm{th}}^{\rm{1st}}$ slightly increases with $r_{\Delta}$. In the system with non-zero $r_{\Delta}$, chiral symmetry is broken and thus the topological mode acquires a field amplitude also on the lossy B-sites, resulting in the increase of $\gamma_{\rm th}^{\rm 1st}$. This behavior can be confirmed in the mode profile in the inset in Figure \ref{fig:inhomo detuning}(a), calculated for a representative example with $r_{\Delta}$ = 0.1. The mode profile consists mainly of the original topological interface mode, but contains small B-site amplitudes, which is consistent with the modest increase of $\gamma_{\rm th}^{\rm 1st}$. Figure \ref{fig:inhomo detuning}(b) shows the average $\Delta\alpha$ calculated for the system with $\gamma_{\rm loss}$ = 0.06. A monotonic decrease of $\Delta\alpha$ is found, which is much larger in magnitude than the increase in $\gamma_{\rm{th}}^{\rm 1st}$. Thus, the drop of $\Delta\alpha$ is expected to stem from a decrease of $\gamma_{\rm th}^{\rm 2nd}$. Figure \ref{fig:inhomo detuning}(c) shows the computed ${\rm Im}(\varepsilon)$ for the system discussed in the inset in Figure \ref{fig:inhomo detuning}(a). As anticipated, an earlier growth of ${\rm Im}(\varepsilon)$ for a bulk mode is seen when increasing $\gamma_{\rm gain}$, making $\Delta\alpha$ smaller. In the plot, it is seen that the phase transition in the bulk modes is blurred and a diagonal bundle of the bulk modes is formed. These are consequences of the symmetry breaking by the fluctuating $\Delta$. While sharp branches of the bulk modes are not observed in Figure \ref{fig:inhomo detuning}(c), the overall behavior of the branched curves in ${\rm Im}(\varepsilon)$ is similar to that in Figure 3(b), in particular for large $\gamma_{\rm gain}$, roughly above 0.15. This comparison suggests that the fluctuation in $\Delta$ mainly influences how the ${\rm Im}(\varepsilon)$ curves branch out from the bulk mode bundle. Figure \ref{fig:inhomo detuning}(d) shows $\Delta\alpha$ computed for the system with $\gamma_{\rm loss}$ = 0.2. In contrast to the case with lower loss, the computed $\Delta\alpha$ values are less sensitive to $r_{\Delta}$ for this larger $\gamma_{\rm loss}$. This is because introducing the fluctuating $\Delta$ does not alter the overall behavior of the ${\rm Im}(\varepsilon)$ curves, in particular for large $\gamma_{\rm gain}$, at which $\Delta\alpha$ is measured in the case of $\gamma_{\rm loss}$ = 0.2. In other words, for large $\gamma_{\rm gain}$, the relationship between the ${\rm Im}(\varepsilon)$ curves of the topological mode and the competing bulk mode does not change much, and neither does $\Delta\alpha$.
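The role of chiral symmetry invoked above can be made explicit in a stripped-down numerical check (a sketch using a uniform dimerization without the interface and without gain or loss): bond disorder leaves the spectrum of the Hermitian chain symmetric under $\varepsilon\to-\varepsilon$, whereas on-site detuning disorder destroys this symmetry.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def chain(n=201, k1=1.0, k2=1.04, bond_disorder=0.0, detuning=None):
    # Hermitian chain with alternating bonds and optional disorder
    bonds = np.where(np.arange(n - 1) % 2 == 0, k1, k2)
    bonds = bonds + rng.normal(0.0, bond_disorder, n - 1)
    H = np.diag(bonds, 1) + np.diag(bonds, -1)
    if detuning is not None:
        H += np.diag(detuning)   # on-site detunings break chiral symmetry
    return H

def chiral_asymmetry(H):
    # vanishes if the spectrum is symmetric under eps -> -eps
    e = np.sort(np.linalg.eigvalsh(H))
    return np.max(np.abs(e + e[::-1]))

print(chiral_asymmetry(chain(bond_disorder=0.1)))                  # ~ 0
print(chiral_asymmetry(chain(detuning=rng.normal(0, 0.1, 201))))   # finite
\end{verbatim}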
\subsection{Next-nearest-neighbor cavity coupling} The discussion in section \ref{sec:ideal} reveals that larger coupling strengths between the site resonators are advantageous for achieving a large $\Delta\alpha$ and thus for stable single-mode lasing from a broadly-distributed topological edge mode. Cavity array designs for increasing the coupling strengths between the nearest-neighbor (NN) cavities may inevitably induce non-negligible next-nearest-neighbor (NNN) couplings, which break chiral symmetry and thus could modify the performance of the laser device. In this section, we analyze the influence of NNN couplings on the investigated array laser. Figure \ref{fig:NNN}(a) explains the model we consider in this section. We define the ratio of the NNN couplings to the NN couplings by a factor $g$: $g=\kappa^{\rm{NNN}}/\kappa_1^{\rm{NN}}$, where $\kappa^{\rm{NN}}$ and $\kappa^{\rm{NNN}}$ denote the coupling strengths between NN and NNN cavities, respectively. We add the NNN coupling term to the Hamiltonian in Eq. (1) with $\kappa_1$ = 1.0 and $\kappa_2$ = 1.04 and solve it by diagonalization. Figure \ref{fig:NNN}(b) shows the computed $\Delta\alpha$ as a function of $g$. The plot contains two curves calculated for the system with $\gamma_{\rm{loss}}$ = 0.06 and 0.2, respectively. Interestingly, neither curve shows a significant change in $\Delta\alpha$ when increasing the strength of the NNN coupling, as long as $g < 0.5$. For both cases, the change in $\Delta\alpha$ is at most 20$\%$. These behaviors can be understood from the combination of the computed mode profile and ${\rm Im}(\varepsilon)$, as plotted in Figure 6(b) and (c) for the case with $\gamma_{\rm{loss}}$ = 0.06. We find that the introduction of the NNN coupling does not largely modify the mode profile and the ${\rm Im}(\varepsilon)$ curves compared to those computed with only NN coupling. We note that, in the presence of the NNN couplings, the topological edge mode includes B-site amplitudes in its mode profile, as shown in the inset in Figure 6(b), and the bulk modes lift their degeneracy and form a bundle in the ${\rm Im}(\varepsilon)$ curves, as in Figure 6(c). These are results of the absence of chiral symmetry in the system. We also note that $g > 0.5$ is unlikely to occur for laser arrays based on evanescent mode coupling. Since evanescent fields decay exponentially in space, the NN coupling tends to be much larger than the NNN coupling for most laser cavities. The insights obtained in this section are encouraging: $\Delta\alpha$ can be increased by strengthening the NN coupling while virtually ignoring the accompanying increase of the NNN coupling. \begin{figure}[h!] \centering\includegraphics[width=13cm]{NNN.pdf} \caption{ (a) Extended tight-binding model for the topological laser, including the next-nearest-neighbor (NNN) couplings. Nearest-neighbor (NN) couplings and NNN couplings are given as {$\kappa_1^{\rm{NN}},\kappa_2^{\rm{NN}}$} and $\kappa^{\rm{NNN}}$, respectively. All sites are subject to loss at a rate of $\gamma_{\rm{loss}}$, while gain $\gamma_{\rm{gain}}$ is additionally supplied only to the A-sites. (b) Threshold gain difference $\Delta\alpha$ as a function of the ratio $g$ of NNN couplings to NN couplings in a finite system consisting of $n_{\rm{tri}}= 100$ trivial and $n_{\rm{topo}}= 101$ topological cavities. Blue and red dots are for the loss $\gamma_{\rm loss}$= 0.06 and 0.2, respectively.
The inset in (b) provides a representative sample of the edge mode profile for $g= 0.3$, where blue and red bars indicate the amplitudes on the A- and B-sites, respectively. (c) Imaginary parts of eigenenergies versus supplied gain on the A-sites $\gamma_{\rm gain}$. The red and blue lines indicate the energy of the edge mode and bulk modes, respectively. The parameters used are $\kappa_1$= 1.0, $\kappa_2$= 1.04 and the loss is set to $\gamma_{\rm{loss}}$= 0.06 in (c).} \label{fig:NNN} \end{figure} \section{Discussion} In this section, we discuss the practically achievable $\Delta\alpha$ for the topological array laser system that we discussed in the previous sections. The device under consideration consists of 201 site resonators with $\kappa_1$ = 1.0 and $\kappa_2$ = 1.04, so that it supports a broadly-distributed single topological edge mode. First, we estimate achievable strengths of $\kappa_1$ and $\kappa_2$ for conventional ridge-waveguide Fabry-Perot cavities based on GaAs/AlGaAs materials as an example. By choosing a ridge width of 1.4 {\textmu}m, a height of 1.6 {\textmu}m and a gap between the ridges of 0.5 {\textmu}m, a coupling strength of $\sim$100 cm$^{-1}$ is found to be achievable in finite-element simulations. Thus, in the following discussion, we mainly consider the cases with $\kappa_1$ = 100 cm$^{-1}$ and $\kappa_2$ = 104 cm$^{-1}$. Note that a fluctuation of $\kappa_1$ by 10$\%$ (corresponding to the case with $r_{\kappa} \sim$ 0.1) can only occur when the ridge-to-ridge distance varies by more than 150 nm. This level of fabrication imperfection is unlikely to occur with standard semiconductor processing technologies. Once the coupling strengths are fixed, the most critical factor determining $\Delta\alpha$ is the resonator loss. From Figure 3(c), it is possible to deduce a $\Delta\alpha$ of 0.019 for a loss of $\gamma_{\rm loss}$ = 0.2. This case corresponds to a $\Delta\alpha$ of 1.9 cm$^{-1}$ when $\kappa_1$ = 100 cm$^{-1}$ and thus $\gamma_{\rm loss}$ = 20 cm$^{-1}$ (Table 1), which is a moderate loss for typical semiconductor lasers with careful design and fabrication. Given the previously reported values for semiconductor lasers\cite{Noda2019}, a $\Delta\alpha$ of 1.9 cm$^{-1}$ could lead to stable single-mode lasing in the device. As indicated in Figure 3(c), the maximum possible $\Delta\alpha$ is obtained at the optimal loss setting of $\gamma_{\rm loss}$ = 0.06. For a system with $\kappa_1$ = 100 cm$^{-1}$, these values convert into $\Delta\alpha$ = 6 cm$^{-1}$ and $\gamma_{\rm loss}$ = 6 cm$^{-1}$. While a $\Delta\alpha$ of 6 cm$^{-1}$ may be regarded as sufficiently high for stable single-mode lasing, a loss of $\gamma_{\rm{loss}}$ = 6 cm$^{-1}$ is too low to assume for standard semiconductor lasers. In general, the optical loss in a semiconductor Fabry-Perot laser with zero carrier injection is composed of optical propagation loss, mirror loss and absorption in the active material. For a GaAs/AlGaAs ridge waveguide, the propagation loss can be reduced to a few cm$^{-1}$, while the mirror loss is already 6 cm$^{-1}$ even for a 2-mm-long cavity with a high-reflection coating on one facet. Therefore, once photon absorption in the unpumped active material is included, it is rather hard to realize a resonator optical loss of $\gamma_{\rm{loss}}$ = 6 cm$^{-1}$ and thus to achieve the maximum possible $\Delta\alpha$ = 6 cm$^{-1}$.
\begin{table}[htb] \begin{center} \caption{Values of $\Delta\alpha$ and their corresponding $\gamma_{\rm loss}$ for two representative coupling strengths $\kappa_1$.} \begin{tabular}{l||c|c|c} \hline & Maximum $\Delta\alpha$ & $\gamma_{\rm loss}$ at maximum $\Delta\alpha$ & $\Delta\alpha$ at $\gamma_{\rm loss}$ = 20 cm$^{-1}$ \\ \hline $\kappa_1$ = 100 cm$^{-1}$ & 6 cm$^{-1}$ & 6 cm$^{-1}$ & 1.9 cm$^{-1}$ \\ \hline $\kappa_1$ = 150 cm$^{-1}$ & 9 cm$^{-1}$ & 9 cm$^{-1}$ & 4.2 cm$^{-1}$ \\ \hline \end{tabular} \end{center} \end{table} There are several possible ways to significantly reduce material absorption loss in semiconductor laser resonators for achieving a large $\Delta\alpha$. One straightforward way is to electrically pump the lossy resonators. By introducing an additional gain of $\gamma^{\rm B}_{\rm gain}$ to the B-sites, the loss is effectively reduced and thus $\gamma_{\rm gain}^{\rm critical}$ increases by $\gamma^{\rm B}_{\rm gain}$: i.e., Eq.(\ref{eq:EP}) is modified to $\gamma_{\rm gain}^{\rm critical} = \pm 2 \left| \kappa_1- \kappa_2 \right| +\gamma^{\rm B}_{\rm gain}$. Recalling that the largest $\Delta\alpha$ is realized when $\gamma_{\rm th}^{\rm 2nd}$ =$\gamma_{\rm gain}^{\rm critical}$, as shown in Figure 3(b), this configuration may provide a powerful route to stable single-mode lasing for a system with large $\gamma_{\rm{loss}}$. When $\gamma_{\rm loss}$ = 20 cm$^{-1}$, $\Delta\alpha$ can take the maximum possible value of 6 cm$^{-1}$ by injecting $\gamma_{\rm gain}^{\rm B}$ of 14 cm$^{-1}$. Another possibility for reducing $\gamma_{\rm{loss}}$ is to use tailored gain materials and structures. It has been predicted that sufficiently p-doped semiconductor quantum dots can quench inter-band light absorption while maintaining high differential gain under electrical current injection\cite{Arakawa1982}. Thus, $\gamma_{\rm{loss}}$ will be reduced for both A- and B-sites. However, suppressing the free-carrier absorption induced by the p-doping could be another experimental issue for achieving a low $\gamma_{\rm{loss}}$. Buried heterostructures \cite{NTTburied2016} could also be used to selectively reduce $\gamma_{\rm{loss}}$ of the B-sites by eliminating the active material only from the B-sites. Using the above-mentioned means, the absorption in the active materials may be suppressed so that an optical loss of $\gamma_{\rm{loss}}$ = 6 cm$^{-1}$ can be achieved, which is optimal for ensuring a large $\Delta\alpha$. It is also interesting to discuss other ways to improve $\Delta\alpha$ for large $\gamma_{\rm{loss}}$. As we have already observed, the introduction of NNN couplings is not largely detrimental to the single-mode operation. Therefore, $\Delta\alpha$ can be enlarged by increasing the NN coupling $\kappa_1$, as shown in Table 1. When $\kappa_1$=150 cm$^{-1}$, $\Delta\alpha$ will be increased to 4.2 cm$^{-1}$ even in the case of $\gamma_{\rm loss}$ = 20 cm$^{-1}$. Note that the increased NN couplings also relax the condition for achieving the maximum $\Delta\alpha$. In such cases with large NN couplings, NNN and longer-range couplings will become significant in determining the band structure, making the system more similar to photonic crystals, where long-range interactions are dominant. Designs of topological edge-mode lasers using such structures for high output power will be an interesting topic for further research. Another interesting approach for increasing $\Delta\alpha$ is to use additional auxiliary lossy resonators.
According to Figure 2(e) and (f), the mode profiles of the topological edge mode and the competing bulk mode differ largely in terms of their envelopes: the bulk mode extends further toward the exterior of the system. Therefore, it could be possible to selectively load more loss onto the bulk mode by terminating the system with auxiliary loss sites. We examined this idea for the system with $\kappa_1$= 100 cm$^{-1}$ by adding 10 lossy resonators with the same loss, $\gamma_{\rm loss}$ = 20 cm$^{-1}$, at each termination. We observed an increase of $\Delta\alpha$ from 1.9 cm$^{-1}$ to 2.4 cm$^{-1}$ in this case. Note that this approach does not work well for cases with low $\gamma_{\rm loss}$, less than 6 cm$^{-1}$. In such cases, the competing bulk mode lases before $\gamma_{\rm gain}$ reaches $\gamma_{\rm gain}^{\rm critical}$ and its mode profile differs from that in Figure 2(f), leading to a small overlap with the additional lossy sites. Before closing this section, we briefly address another important factor that dictates the capability of the topological single-mode laser, namely the presence of a damage threshold. The laser mode profile presented in Figure 2(e) shows a peak at the center, at which the photon density will be larger than at the rest of the resonator sites. The laser will be operated so as not to exceed the damage threshold at the center resonator, suggesting that the rest of the resonator sites cannot deliver their maximum output power. This will clearly reduce the maximum possible output power from the system. Topological resonator designs that support flat-top mode shapes are one solution to this issue. Such designs are available by tailoring the distribution of the coupling strengths among the resonators. We will report the impact of such designs on the laser performance elsewhere. \section{Summary} We investigated a fundamental model of a broadly distributed single-mode topological edge-mode laser in the tight-binding approximation. We considered a sizable system consisting of 201 site resonators that could potentially lead to a 10-W-class laser, assuming that each resonator delivers $\sim$100 mW of output power. We clarified the conditions for single-mode operation by calculating the threshold gain difference $\Delta\alpha$ between the first lasing edge mode and the second lasing bulk mode, an important factor for evaluating the stability of single-mode operation. Below is a summary of what we found through the discussion: (a) Under ideal conditions, $\Delta\alpha$ depends on the coupling strengths $\kappa_{1}$, $\kappa_{2}$ and the loss $\gamma_{\rm loss}$. There exists an optimal loss for each combination of the coupling strengths. For a system based on semiconductor lasers, large $\kappa_{1}$ and $\kappa_{2}$ with $|\kappa_1/\kappa_2|\sim 1$ and small $\gamma_{\rm loss}$ are most preferable for stable single-mode lasing. (b) The single-mode operation of the edge mode is robust against disorder in the coupling strengths and resonator detunings. (c) The topological laser is insensitive to the addition of resonator couplings among NNN sites. This suggests that one can design laser systems with large $\kappa_{1}$ and $\kappa_{2}$ while virtually ignoring the influence of the NNN couplings. (d) When assuming a set of realistic parameters for semiconductor lasers, $\Delta\alpha$ reaches a few cm$^{-1}$, which could be large enough for stable single-mode lasing. To conclude, we provided significant insights for topological lasers in the context of realizing high-power lasers.
This work may open up a new pathway for practical applications of topological photonics. \section*{Acknowledgements} The authors acknowledge support from JSPS KAKENHI Grant Number JP21J40088, MEXT KAKENHI Grant Numbers JP15H05700, JP15H05868 and 17H06138, JST CREST (JPMJCR19T1) and NEDO. T.B. is supported by the National Natural Science Foundation of China (62071301); State Council of the People’s Republic of China (D1210036A); NSFC Research Fund for International Young Scientists (11850410426); NYU-ECNU Institute of Physics at NYU Shanghai; the Science and Technology Commission of Shanghai Municipality (19XD1423000); the China Science and Technology Exchange Center (NGA-16-004). \bibliographystyle{unsrt}
\section{Introduction} The Sachdev-Ye-Kitaev (SYK) model \cite{Sachdev:1992fk,KitaevTalks,Polchinski:2016xgd,Maldacena:2016hyu}, \begin{equation} S_{\text{SYK}} = \int dt \left(\sum_j \frac{i}{2}\psi_j\partial_t \psi_j - i^{q/2}\sum_{i_1 \dots i_q} J_{i_1\dots i_q} \psi_{i_1}\dots \psi_{i_q} \right)\,, \end{equation} is a quantum mechanical model describing $N$ Majorana fermions $\psi_i$ interacting via a non-linear potential involving gaussian random couplings $J_{i_1\dots i_q}$. It is solvable in the large $N$ limit, exhibits approximate conformal symmetry in the IR, and saturates the quantum chaos bound \cite{Shenker:2013pqa, Shenker:2013yza,Shenker:2014cwa,Maldacena:2015waa,Reynolds:2016pmi}. It has been proposed as the holographic dual of 2d dilaton gravity based on the observation that both sides of the correspondence share the same Goldstone mode effective action, written in terms of the Schwarzian derivative \cite{Almheiri:2014cka,Sachdev:2015efa,Maldacena:2016upp,Jensen:2016pah,Sekino:2008he,Engelsoy:2016xyb,Cvetic:2016eiv}. In this paper, we consider a 1+1 dimensional generalisation of the SYK model, by considering the Thirring model with random couplings. i.e. \begin{equation} \label{IntroMod} S ={1 \over 2 {\bf \pi} } \int d^2z \ \left[ \sum_i \left({\bar\nu}^i \partial_z {\bar\nu}^i + \nu^i \partial_{\bar z} \nu^i\right) + \sum_{i<j,k<l}J_{ij;kl}\nu^i\nu^j{\bar\nu}^k{\bar\nu}^l\right]\,. \end{equation} with random couplings $J_{ij;kl}$ drawn from a gaussian ensemble with standard deviation $J$ (we also discuss generalisations of this model to higher powers of $\nu$ and $\bar\nu$). One can use much of the technology developed in the SYK context, though unlike previous generalisations \cite{Gu:2016oyy,Gross:2016kjj,Berkooz:2016cvq}, the couplings are not taken to be spatially disordered. Some further work on SYK can be seen in \cite{You:2016ldz,Anninos:2016szt,Jevicki:2016bwu,Bagrets:2016cdf,Jevicki:2016ito,Banerjee:2016ncu,Garcia-Garcia:2016mno,Fu:2016vas,Witten:2016iux,Cotler:2016fpe,Klebanov:2016xxf,Blake:2016jnn,Davison:2016ngz,Peng:2016mxj,Liu:2016rdi,Krishnan:2016bvg,Magan:2016ehs,Ferrari:2017ryl,Garcia-Garcia:2017pzl,Li:2017hdt,Gurau:2017xhf,Mandal:2017thl}. The model is also an example of a conformal field theory -- in this case free fermions -- which is perturbed by a large set of marginal operators with random coefficients, some of them being relevant and some irrelevant. In the usual lore, couplings along marginally relevant directions increase, and couplings along marginally irrelevant directions decrease, as one flows towards the IR. In this case, however, RG mixes the two and the anomalous dimension of each operator depends on all of the other random couplings. If we fix the scale at which we define the theory, one expects it will eventually flow in the direction of some relevant operators, but the details and rates at which this happens are unclear. It is also interesting to ask whether the theory is renormalizable, since we are necessarily turning on marginally irrelevant perturbations -- but on the other hand they are in the same statistical ensemble as the marginally relevant ones. 
For the model in \eqref{IntroMod}, the two point function cannot be solved completely, but we compute it at large distances (in a sense that we make precise below) for all values of $J$, without exhibiting a singularity, or the generation of a mass gap, in the physical domain\footnote{These statements are true in the large N limit, with the significant caveat that we have so far solved only for the two point function.}. From it we can extract an effective $\beta$ function \begin{equation} \beta(J)=4\pi^2 J^3 \label{intro-beta} \end{equation} and study the above questions. The $\beta$ function indicates that the interaction is marginally irrelevant, so the theory is not conformal. It will also turn out not to be conformal in the limit of $J\to\infty$. Rather, the theory requires regularization for any value of $J$, and this regularization breaks conformal invariance already at the level of the 2-pt function. Useful reference points are the Thirring or Gross-Neveu models \cite{Moshe:2003xn,ZinnJustin:2002ru,Gross:1974jv} (which are equivalent in our case), where the interaction term is non-random and diagonal between the left and right moving fermions. The $\beta$ function of these models is well understood -- in 2D, the leading term is quadratic in the 4-fermion coupling, and the interaction is marginally relevant for one sign of it. The theory then develops a mass gap (which is seen in \cite{Gross:1974jv} in perturbation theory via the appearance of a tachyonic pole at very low energies). At finite $N$ we can expect a similar behavior in the extreme IR for each realisation, since each contains both relevant and irrelevant operators, and the former will eventually grow to be large, most likely triggering a mass gap. This, however, is the equivalent of the statement that in the 0+1 SYK the very late time behavior, or the behavior of states close enough to the ground state, differs from realisation to realisation \cite{Cotler:2016fpe}\footnote{See \cite{Balasubramanian:2014gla} for earlier work on black holes and random matrix theory.}. The large $N$ ensemble average does not capture this expectation, but it captures the flow prior to chiral $\mathrm{SO}(N)_L\times \mathrm{SO}(N)_R$ symmetry breaking, where it is modified relative to the Thirring and Gross-Neveu models -- in our models we see no pole in the physical regime. Furthermore, the leading quadratic term in the $\beta$ function, which drove the RG in the Thirring and Gross-Neveu models, vanishes for the ensemble (in the large N limit). The $\beta$ function \eqref{intro-beta} is positive for any value of $J$, and it is driven by the terms in the $\beta$ function that are cubic in the random couplings. The latter is non-universal in the Thirring or Gross-Neveu models, but it is universal in our case for schemes preserving the statistical chiral symmetry. In fact, we reproduce the result \eqref{intro-beta} from a suitable averaging of the perturbative conformal field theory results reported in \cite{Gaberdiel:2008fn,Behr:2013vta}. The 2-pt function therefore does not show signs of the chiral phase transition, at leading order in $1/N$, for any value of the coupling. It is known that in low dimensions quenched disorder can smooth out thermal phase transitions \cite{Imry:1975zz}. Our results generalize that statement to translationally invariant theories, with the chiral phase transition being smoothed out at this order.
To our knowledge this is the first demonstration of the smoothing out of a phase transition by a translationally invariant ensemble average in a field theory. A final note concerns the application of this model to black hole physics. The obvious analogy is to consider this model as related to a black hole in $AdS_3$. However, we do not want to consider this as the full dual to the entire $AdS_3$ space because the $\beta$ function is positive and the theory will not asymptote to empty $AdS_3$. Perhaps a better way to think about this model is as describing the effective interactions of degrees of freedom inside a BTZ black hole \cite{Berkooz:2016cvq}. One then needs to 1) "prevent" these degrees of freedom from appearing at high momenta, and 2) couple additional "probe" fields to the model, which will mimic the degrees of freedom outside the black hole\footnote{A related statistical mechanical model appears in \cite{Banerjee:2016ncu}.}. This paper is organised as follows. In section \ref{defmodel}, we define the Random Thirring model. In section \ref{SDsec}, we compute the fermion 2-pt function by solving the Schwinger-Dyson (SD) equations in the IR, for all values of $J$, and show how the 2-pt function flows. We also discuss the reparameterization invariance of the model. In section \ref{pCFT}, we compare our model to the flow of the usual Thirring model (or the Gross-Neveu model in our case). We compute the $\beta$ function and show which terms are washed out due to the randomness of the coupling, and which remain active. Section \ref{disc} contains some concluding remarks. During the completion of this work, the preprint \cite{Turiaci:2017zwd} appeared which contains a 1+1 generalisation of the SYK model with the same interaction term. Although some of the intermediate steps are similar, the UV starting point of our model is different from the one described there, as are our solution and its implications for the RG flow of the model. \vspace{20pt} \section{The Random Thirring, or SYK, model in 1+1 dimensions} \label{defmodel} Consider a 2d non-chiral theory with N left-moving $\nu^i$ and N right-moving ${\bar\nu}^i$ ($i=1,\dots ,N$). Its free theory action and partition function in Euclidean space are given by \begin{equation} \label{FreeMaj2} S_{\text{free}}(\nu,\,\bar{\nu}) = \frac{1}{2\pi} \int d^2z \ \sum_i\left[ {\bar\nu}^i \partial_z {\bar\nu}^i + \nu^i \partial_{\bar z} \nu^i \right],\quad Z=\int D\nu D{\bar \nu} \,e^{-S(\nu,\,\bar{\nu})}\,. \end{equation} The free propagators are \begin{equation} \begin{aligned} \langle \nu_i(z_1,\bar z_1) \nu_j(z_2,\bar z_2) \rangle = \frac{\delta_{ij}}{z_1 - z_2} \equiv G_{\nu}^0(\vec x_1,\vec x_2) \delta_{ij}, \\ \langle \bar \nu_i(z_1,\bar z_1) \bar \nu_j(z_2,\bar z_2) \rangle = {\delta_{ij} \over \bar z_1 - \bar z_2}\equiv G_{\bar \nu}^0(\vec x_1,\vec x_2) \delta_{ij} \end{aligned} \end{equation} where $\vec x$ denotes the pair $z,{\bar z}$. Their Fourier transforms equal \begin{equation} G^{0}_\nu(p)=\frac{i\pi}{\bar p}, \quad G^{0}_{\bar\nu}(p)=\frac{i\pi}{p} \end{equation} For more details on our conventions, see appendix \ref{2dcft}. The free theory is $\mathrm{SO}(N)_L \times \mathrm{SO}(N)_R$ invariant, with conserved currents \begin{equation} J^{[ij]}(z) = i \ \frac{\nu^{i}\nu^{j}(z) - \nu^{j}\nu^{i}(z)}{2} ,\quad {\bar J}^{[ij]}(\bar z) =i \ \frac{{\bar\nu}^{i}{\bar\nu}^{j}({\bar z}) - {\bar\nu}^{j}{\bar\nu}^{i}({\bar z}) }{2} . \end{equation} defined for a pair $[ij]$ with $i<j$.
Using these currents, we can add an interaction of the form \begin{equation} {\cal L}_{int}=- \sum_{i<j,k<l}J_{ij;kl} J^{[ij]}{\bar J}^{[kl]}={1\over 4}\sum_{i,j,k,l}J_{ij;kl}\nu^{i}\nu^{j}{\bar\nu}^{k}{\bar\nu}^{l},\ \ \ J_{ij;kl}=-J_{ji;kl}=-J_{ij;lk} \label{model} \end{equation} The model we discuss in this work consists of treating the coupling constants $J_{ij;kl}$ as random variables drawn independently from a gaussian distribution defined by \begin{equation}\label{NewJAv} \langle \langle J_{ij;kl}^2 \rangle \rangle = \frac{2J^2}{ N^3} \end{equation} where $\langle \langle\ \ \rangle \rangle$ denotes the ensemble average over $J_{ij;kl}$. The resulting model can either be viewed as a translationally invariant continuum generalisation of the SYK model in 1+1 dimensions, or as a random cousin of the massless $\mathrm{SO}(N)$ Thirring model, where by the latter we mean the theory whose interaction Lagrangian involves a fixed diagonal coupling $J_{ij;kl} \sim g\,\delta_{ki}\delta_{lj}$ \begin{equation} \label{ThirSec2} {\cal L}_{int} \propto \frac{g}{2} \sum_{i,j} J^{[ij]}\,{\bar J}^{[ij]}\,. \end{equation} We will refer to the model in equations \eqref{FreeMaj2} and \eqref{model} as the random Thirring model. Though much of our discussion will be about \eqref{model}, there are two natural generalisations to consider. The first involves a higher order interaction term, \begin{equation} {\cal L}_{q,int}=\sum_{i_1,..i_q,k_1,..k_q}J_{i_1..i_q;k_1..k_q} \nu^{i_1}...\nu^{i_q}{\bar\nu}^{k_1}...{\bar\nu}^{k_q},\ \ q>2 \label{2qmodel} \end{equation} as already discussed in \cite{Turiaci:2017zwd}. The second involves the inclusion of a low-pass filter multiplying each fermion in Fourier space \begin{multline} {\cal L}_{q,int}=\int dp_1..dp_q du_1..du_q \delta(\sum p_i+\sum u_i) \sum_{i_1,..i_q,k_1,..k_q}J_{i_1..i_q;k_1..k_q} \\ (F(p_1)\nu^{i_1}(p_1))...(F(p_q)\nu^{i_q}(p_q)) (F(u_1){\bar\nu}^{k_1}(u_1))...(F(u_q){\bar\nu}^{k_q}(u_q)),\ \ q\ge 2, \end{multline} where the filter $F(k)$ decays at large momenta. The use of such filters, introduced in this context in \cite{Berkooz:2016cvq}, might be useful for holography in order to force a distinction between UV and IR degrees of freedom, but clearly it will be very restricted from a local field theory point of view. Our model \eqref{model} has two types of symmetries: exact and statistical. The former hold for any realisation of the couplings, whereas the latter only arise after carrying out the coupling average. Poincar\'e symmetry and a $\mathbb{Z}_2$ symmetry $\nu\rightarrow -\nu$ and ${\bar\nu}\rightarrow{\bar \nu}$ for even q, or its composition for odd q, are exact symmetries of the Lagrangian. Once the ensemble average is carried out, and assuming self-averaging, i.e. that any realisation behaves similarly to the ensemble average, there is an emergent $\mathrm{SO}(N)_L \times \mathrm{SO}(N)_R$ symmetry, rotating left and right Majorana fermions independently, together with a parity symmetry exchanging $\nu^i\leftrightarrow {\bar\nu}^i$. As in the case of the SYK model, we will sum over different realisations of the couplings $J_{ij;kl}$. If the model is self-averaging then this should reproduce, up to small corrections, the results of any specific realisation. However, for a specific realisation the random Thirring model and its $q>2$ generalisations are ordinary field theories. For $q>2$ the interaction term is irrelevant and we don't expect it to generate new interesting dynamics in the IR (although it can generate interesting dynamics at some intermediate scale).
For $q=2$ the situation is more interesting, as we perturb the theory by a large number of operators, some of them relevant and some others irrelevant. In fact all the couplings mix together under RG flow but the qualitative picture that some are (marginally) relevant and some are (marginally) irrelevant is expected to hold. Further discussion on these points will appear in section 4. \vspace{20pt} \section{Two Point Function : Schwinger-Dyson Equations} \label{SDsec} To derive the Schwinger-Dyson (SD) equations for our models, we use the standard replica method \cite{Sachdev:2015efa} to perform the disorder average. If we further assume the replica symmetry is unbroken, the single replica action acquires the following bi-local interaction term \begin{equation}\label{BilocalIntineta} S_{int} \propto J^2 N \int d^2\! \vec x_1 d^2\! \vec x_2 \left( {\sum_i \bar{\nu}^i(\vec x_1) \, \bar\nu^i(\vec x_2 )\over N} \right)^2 \left( {\sum_j\nu^j (\vec x_1 ) \,\nu^j(\vec x_2 ) \over N} \right)^2\,. \end{equation} Next, we introduce two Lagrange multipliers $\Sigma_\nu(\vec x_1,\vec x_2), \Sigma_{\bar \nu}(\vec x_1,\vec x_2)$, whose equations of motion impose the constraints \begin{equation} G_\nu(\vec x_1,\vec x_2) =\frac{1}{N} \sum_i \nu^i(\vec x_1) \nu^i(\vec x_2)\,, \quad G_{\bar \nu}(\vec x_1,\vec x_2) =\frac{1}{N}\sum_i \bar \nu^i(\vec x_1) \bar \nu^i(\vec x_2)\,. \end{equation} This is achieved by adding to the action the extra term \begin{equation} \delta S \propto N\int d^2\! \vec x_1 \int d^2 \! \vec x_2 \left[ \Sigma_\nu( \vec x_1,\vec x_2) \left( G_\nu(\vec x_1,\vec x_2) - {\sum_i \nu^i(\vec x_1) \nu^i(\vec x_2) \over N} \right) + (\nu^i \to \bar \nu^i) \right] \end{equation} with $\Sigma(\vec x_1,\vec x_2)=-\Sigma(\vec{x}_2,\vec{x}_1)$. The resulting action is quadratic in the fermions. Performing their gaussian integral, we are left with the action: \begin{equation} \label{ColAction} \begin{split} - \frac{S}{N} & \propto \log \text{Pf} (\partial_{\bar z} -\Sigma_\nu(\vec x_1, \vec x_2)) + \log \text{Pf} (\partial_{ z} -\Sigma_{\bar \nu}(\vec x_1, \vec x_2)) - \int d^2\! \vec x_1 d^2\! \vec x_2 \\ & \ \ \ \ \times \left[ \Sigma_\nu( \vec x_1,\vec x_2) G_\nu(\vec x_1,\vec x_2) + \Sigma_{\bar \nu}( \vec x_1,\vec x_2) G_{\bar \nu}(\vec x_1,\vec x_2) - \frac{J^2}{2} G_\nu(\vec{x}_1,\vec{x}_2)^2 G_{\bar \nu}(\vec{x}_1,\vec{x}_2)^2 \right] \end{split} \end{equation} The equations of motion for $G_\nu(\vec{x}_1,\vec{x}_2)$ and $G_{\bar \nu}(\vec{x}_1,\vec{x}_2)$ are \begin{equation}\label{DefG} \begin{split} \Sigma_\nu(\vec x_1,\vec x_2 ) &= J^2 G_\nu(\vec x_1,\vec x_2 )\, \bar{G}_{\bar \nu}(\vec x_1,\vec x_2 )^2\,, \\ \Sigma_{\bar \nu}(\vec x_1,\vec x_2 ) &= J^2 \bar{G}_{\bar \nu}(\vec x_1,\vec x_2 ) \, G_\nu(\vec x_1,\vec x_2 )^2\,, \end{split} \end{equation} whereas the equations of motion for $\Sigma_\nu(\vec{x}_1,\vec{x}_2)$ and $\Sigma_{\bar \nu}(\vec{x}_1,\vec{x}_2)$ are then: \begin{equation}\label{DefSigma} \begin{split} G_\nu(\vec p_1,\vec p_2 )^{-1} &= G^{0}_{\nu}(\vec p_1,\vec p_2)^{-1} - \Sigma_\nu(\vec p_1,\vec p_2)\,, \\ G_{\bar \nu}(\vec p_1,\vec p_2 )^{-1} &= \bar{G}^{0}_{\bar \nu}(\vec p_1,\vec p_2)^{-1} - \bar{\Sigma}_{\bar \nu}(\vec p_1,\vec p_2)\,. \end{split} \end{equation} The same procedure can be carried out for the 2q fermion interaction model \eqref{2qmodel}. 
The corresponding SD equations are \begin{equation}\label{FullSDEqnq1} \begin{aligned} G_\nu(\vec p_1,\vec p_2 )^{-1} &= {G^{0}_\nu}(\vec p_1,\vec p_2 )^{-1} - \Sigma_\nu(\vec p_1,\vec p_2 )\,, \\ G_{\bar \nu}(\vec p_1,\vec p_2 )^{-1} &= G^{0}_{\bar \nu}(\vec p_1,\vec p_2 )^{-1} - \Sigma_{\bar \nu}(\vec p_1,\vec p_2 )\,,\\ \Sigma_\nu(\vec x_1,\vec x_2 ) &= J^2\,G_\nu(\vec x_1,\vec x_2 )^{q-1}\, G_{\bar \nu}(\vec x_1,\vec x_2 )^{q}\,, \\ \Sigma_{\bar \nu}(\vec x_1,\vec x_2 ) & = J^2\, G_{\bar \nu}(\vec x_1,\vec x_2 )^{q-1}\, G_\nu(\vec x_1,\vec x_2 )^{q}\,. \end{aligned} \end{equation} \vspace{10pt} \subsection{A comment on reparametrization invariance}\label{sec:reparam} Consider the $J\to \infty$ limit of the collective field action \eqref{ColAction} for the 2q fermion model \eqref{2qmodel}. This results in dropping the dependence on the free propagator in the equations of motion \eqref{FullSDEqnq1}. It is convenient to rewrite the first two equations in real space \begin{equation} \label{SDrspace} \begin{aligned} \int d^2z\, G_\nu(z',z;\bar z',\bar z) \Sigma_\nu(z, z'';\bar z , \bar z'') &= -\delta(z'-z'') \delta(\bar z'-\bar z'')\,, \\ \int d^2z\, G_{\bar \nu}(z',z;\bar z',\bar z) \Sigma_{\bar \nu}(z, z'';\bar z , \bar z'') &= -\delta(z'-z'') \delta(\bar z'-\bar z'')\,, \end{aligned} \end{equation} to discuss their symmetries. As pointed out in \cite{Turiaci:2017zwd}, these equations of motion appear to be invariant under reparametrization. For the case $q=2$, which we will argue below is the only physically interesting one for us, we can also keep the free kinetic term and reintroduce it into equations \eqref{SDrspace}. In this case the equations are invariant under conformal transformations $z\to f(z)$ and $\bar z \to \bar f(\bar z)$. To see this, consider the 2-pt function transformations \begin{equation}\label{eq:TwoPointTransf} \begin{aligned} G_\nu(z,z';\bar z,\bar z') &\to [f'(z)\,f'(z')]^{\Delta_L} [\bar f'(\bar z)\bar f'(\bar z')]^{\bar\Delta_L}\, G_\nu(f(z),f(z');\bar f(\bar z),\bar f(\bar z'))\,, \\ G_{\bar \nu}(z,z';\bar z,\bar z') &\to [f'(z) f'(z')]^{\Delta_R} [\bar f' (\bar z)\bar f'(\bar z')]^{\bar\Delta_R} G_{\bar \nu}(f(z),f(z');\bar f(\bar z),\bar f(\bar z'))\,. \end{aligned} \end{equation} The last two equations in \eqref{FullSDEqnq1} determine the transformation for the self-energies $\Sigma_\nu$ and $\Sigma_{\bar{\nu}}$, and the invariance of equations \eqref{SDrspace} (with the free kinetic term reinstated) requires that $(\Delta_L,\Delta_R)=({1 \over 2},0)$ and $({\bar\Delta}_L,{\bar\Delta}_R)=(0,{1 \over 2})$ (for a general $q$, dropping the kinetic term, invariance under conformal transformations implies only that $\Delta_L + \Delta_R = \bar{\Delta}_L + \bar{\Delta}_R = \frac{1}{q}$). The conclusion from this analysis would suggest solving the SD equations with a scale-invariant ansatz for any $J$ for $q=2$ (and for large $J$ for $q>2$). Below, we will argue this is not necessarily the case. This quantum field theory requires regularization, which breaks conformal invariance. A priori this symmetry may or may not be restored in the IR (or may be restored upon fine tuning some operators away). Actually, here it will not be restored. We will see this explicitly for the $q=2$ theory - the solution will not be scale invariant (although in a mild sense). It is important to emphasize that the regulator needs to be included already in the action, and the breaking of scale invariance is explicit (all the way to the IR) and not spontaneous.
As an aside we would also like to comment that the conformal symmetry of the action is also broken in the IR, beyond the effects of the regulator. The reason is that the action \eqref{ColAction} should be supplemented by boundary conditions, or, in the language of the equations of motion, the solutions are legitimate only when the functions decay fast enough at infinity. This is nothing but the usual argument why ordinary 2D CFTs can have only an $\mathrm{SL}(2)\times \mathrm{SL}(2)$ symmetry on the plane - all other generators of the Virasoro algebra introduce singularities at infinity, which change an n-point function into an n-point function in the presence of an operator at infinity. This situation is different from reparametrization invariance in the SYK model in 0+1 dimensions. In that case, at the level of the 2-pt function, the action was consistently reparametrization invariant and a scale invariant solution could be found. Reparametrization was then broken spontaneously. This pattern of symmetry breaking was used heavily in computing the 4-pt function, although in order to make sense of it one had to eventually re-introduce the explicit breaking of reparametrization. Here the cut-off breaks reparametrization explicitly and the solution is not scale invariant. The net result of an explicit breaking of scale invariance (let alone reparametrization) is similar here and in the 0+1 SYK model, but the differences in the intermediate stages suggest that computing the 4-pt function would be considerably more involved. \vspace{10pt} \subsection{Solutions of the SD equations -- general q} In this subsection we will look for solutions to the SD equations \eqref{FullSDEqnq1} for various values of $q$. We will show that for $q>2$ the interaction term is subleading in the IR, and for $q=2$ it introduces an unusual form of a logarithmic running into the 2-pt function. We will assume translation invariance and, to evaluate the importance of various terms, we will carry out a scaling argument in the IR. We will also assume that Lorentz symmetry (Euclidean rotation) is unbroken throughout the flow and hence the UV Lorentz numbers of the fermions remain fixed. We will also assume that parity is unbroken. Under these assumptions \begin{equation} G(z_1-z_2,\bar z_1-\bar z_2) \equiv G_{\nu}(z_1,\bar z_1, z_2, \bar z_2) = G_{\bar \nu}(\bar z_1, z_1, \bar z_2, z_2) \end{equation} and one can similarly define $\Sigma(z,\bar z)$. Then the SD equations \eqref{FullSDEqnq1} collapse to \begin{eqnarray}\label{SimpSD} G(p, \bar p)^{-1} = G^{(0)}(p, \bar p)^{-1} - \Sigma(p,\bar p)\,, \quad \Sigma(z,{\bar z}) = J^2 G(z,{\bar z})^{q-1} G({\bar z},z)^{q} \end{eqnarray} Consider a scaling solution \begin{equation} G(\lambda z,{\bar \lambda} {\bar z})=\lambda^{-2\Delta_L}{\bar\lambda}^{-2\Delta_R}G(z, {\bar z}),\quad G(\lambda^{-1}\,p, \bar{\lambda}^{-1}\bar p)=\lambda^{-2\Delta_L+1}{\bar\lambda}^{-2\Delta_R+1}G(p,\bar p) \label{scaling} \end{equation} with $2(\Delta_L-\Delta_R)=1$, as dictated by invariance under Euclidean rotations. These solve the SD equations if \begin{equation} \Delta_L + \Delta_R = \frac{1}{q} \implies \quad 2\Delta_{L} = {1 \over q} + {1 \over 2} \quad , \quad 2\Delta_R = {1 \over q} -{1 \over 2} \end{equation} We treat the $q>2$ and $q=2$ cases separately. For $q>2$ we obtain $\Delta_R < 0$, which means the solutions are not physical. If the model is self-averaging (which we assumed before) then correlators should be close to being correlators in a unitary field theory.
This necessitates that both $\Delta_L$ and $\Delta_R$ are non-negative. Relatedly, the free propagator $G^{(0)}(p_z,p_{\bar z}) \sim {1 \over \bar p}$ already scales as $\bar \lambda$ and is leading at low momenta. Basically this means the theory is free in the IR\footnote{One might still be able to work in intermediate energy regimes, where the above approximation might be correct, but it will break down at sufficiently low energies.}. This is what one expects from field theory considerations as well. When a free fermion theory is deformed by operators of the form $\nu^q{\bar \nu}^q$, which are operators of dimension $(q/2,q/2)$, the perturbation is irrelevant\footnote{This is also potentially where we differ from \cite{Turiaci:2017zwd}, which adds the interaction term to an altogether different theory.}. Thus, we will not consider the $q>2$ case further. \vspace{10pt} \subsection{Solutions of the SD equations for q=2} The scaling argument \eqref{scaling} suggests that the solutions to the SD equation \eqref{SimpSD} for the $q=2$ model are \begin{equation} G(z,\bar z) = \frac{C}{z}, \ \ \ \Sigma(z,\bar z) = \frac{C^3 J^2}{z \bar z^2}\,, \end{equation} for some constant $C$. The difficulty with this ansatz originates in the Fourier transform of $\Sigma(z,\bar z)$. The latter is not well defined and requires the introduction of a scale. Following the conventions in appendix \ref{2dcft}, it is given by \begin{equation} \label{FT1byzbz2} \frac{1}{z \bar z^2} \propto \int {d^2\!p_z \over (2\pi)^2} e^{-i (pz + {\bar p} \bar z)} {\bar p} \log(\Lambda^2/|p|^2) \end{equation} We would therefore like to see what the solution to the SD equations is. Our approach will be to modify our ansatz to include some dependence on logarithms, which will turn out to be essential to encode the RG flow of the model. We will present the large J solution in the IR before proceeding to the all-J solution (still at large distances). \vspace{10pt} \subsubsection{The Large J IR solution} We first consider the large $J$ limit, where the free propagator terms in the SD equations can be neglected. If we interpret the presence of the logarithms above as inducing some soft RG flow on top of some leading power law term, as dictated by the scaling argument \eqref{scaling}, it may appear natural to consider the ansatz \begin{equation}\label{Jansatz} G(z,\bar z) = \frac{A}{z} \left(\log(z{\bar z}\Lambda^2)\right)^\alpha\,, \quad \Sigma(z,\bar z)=\frac{A^3J^2}{z{\bar z}^2} \left(\log(z{\bar z}\Lambda^2)\right)^{3\alpha}\,. \end{equation} This depends on two parameters $A$ and $\alpha$, to be determined by solving the SD equations, and a cut-off scale $\Lambda$, whose role as a UV cut-off will become more apparent below. In this ansatz, we solve the "interaction" SD equation exactly, and then we analyse whether there is any regime of low momentum where our ansatz is self-consistent, i.e., the "propagator" SD equation is solved (at least in some approximation). Working in momentum space, we observe that the Fourier transforms $G(p,{\bar p})$ and $\Sigma(p,{\bar p})$ of our ansatz \eqref{Jansatz} satisfy \begin{equation} \begin{aligned} G(p,{\bar p}) &=\partial_{\bar p} {\hat G}(p,{\bar p})\,, \quad \text{with} \quad {\hat G}(p,{\bar p}) \equiv {A \over i} \int {d^2\!z \over 2 } {1 \over z{\bar z}} \log^\alpha(\Lambda^2z{\bar z})\,e^{ipz+i{\bar p}{\bar z}} \\ \partial_{\bar p}\Sigma(p,{\bar p}) &= A^3J^2i \int {d^2\!z \over 2} { \log^{3\alpha}(\Lambda^2z{\bar z}) \over z{\bar z}}\, e^{ipz+i{\bar p}{\bar z}}\,.
\end{aligned} \end{equation} Hence, the integral we need to evaluate is \begin{equation} \label{original} \begin{aligned} F_\alpha(|p|/\Lambda)&\equiv \int {d^2\!z \over 2} {1\over z{\bar z}} \log(\Lambda^2z{\bar z})^\alpha e^{ipz+i{\bar p}{\bar z}}= \int { d^2\!\vec x \over |\vec x|^2 } \log^\alpha(\Lambda^2 |\vec x|^2)\, e^{i \vec p . \vec x} \\ &= 2^\alpha \int { dr \over r } \log^\alpha (\Lambda r) \int d\theta e^{i p r \cos \theta} = (2 \pi) \ 2^\alpha \int_0^\infty {dr \over r} \log^\alpha(\Lambda r) J_0(|p| r)\,, \end{aligned} \end{equation} where we used the integral representation of the Bessel function $J_0(x)$ to perform the integration over the angular variable and $|p|^2=4p\,\bar p={\vec p}^2$ according to our conventions in appendix \ref{2dcft}. To evaluate this integral, we start from the convergent formula\footnote{See formula 6.771 in \cite{gradshteyn}, for example.} \begin{equation} \begin{aligned} G_\epsilon(|p|/\Lambda) &\equiv \int_0^\infty dr \ (\Lambda r)^\epsilon \ {\log(\Lambda r) \over r} \ J_0(|p| r) \\ &= {\Gamma({\epsilon \over 2}) |\Lambda/p|^\epsilon \over \Gamma(1-{\epsilon \over 2}) 2^{2 -\epsilon} } \left[ \psi({\epsilon \over 2}) + \psi(1-{\epsilon \over 2}) +2 \log(2 |\Lambda/p| )\right]\,, \end{aligned} \end{equation} valid for $0<\epsilon< \frac{3}{2}$ and where $\psi(x)$ is the digamma function. Notice we can compute $F_\alpha(|p|/\Lambda)$ for $\alpha = n \in \mathbb Z$ from \begin{equation} F_{n+1}(|p|/\Lambda)= 2^{n+2} \pi \lim_{\epsilon \to 0} \partial_\epsilon^n G_\epsilon(|p|/\Lambda)\,. \label{Fdev} \end{equation} To proceed, we can write $G_\epsilon(|p|/\Lambda)$ for small $\epsilon$ as \begin{equation} \begin{split} G_\epsilon(|p|/\Lambda) =& \left( {1\over\epsilon } + (\log 2- \gamma)+{\cal O}(\epsilon) \right) e^{\epsilon \log(\Lambda/|p|)} \biggl(-{1\over \epsilon} - \gamma +{\cal O}(\epsilon) + \log(2\Lambda/|p|) \biggr) \\ =& - {1 \over \epsilon^2} + \sum_{n=0}^\infty {\epsilon^n \log^{n+2}(\Lambda/|p|) \over n+2} \left[ 1 + {\cal O}({1 \over \log(\Lambda/|p|)}) \right] \\ \end{split} \end{equation} where $\gamma$ is Euler's constant. Plugging this leading logarithm into \eqref{Fdev}, and dropping the divergent momentum-independent $\epsilon^{-2}$ term and the subleading momentum-dependent logarithms, we get \begin{equation} \label{GenFormFn} F_n(|p|/\Lambda) = \frac{\pi}{n+1} \log^{n+1}( \frac{\Lambda^2}{|p|^2}) \left( 1 +{\cal O}({1 \over \log(\Lambda/|p|)})\right) \,. \end{equation} Removing the $1/\epsilon^2$ divergences in the procedure above resembles dimensional regularization, but since we do not understand the renormalization of the model well enough we provide an explicit argument. In appendix \ref{alter}, we provide a specific regularisation of our original integral \eqref{original} leading to the same conclusion in the same IR regime. The gist of the argument is that since $\Lambda$ is our cut-off, we should not do the integral below a distance $\Lambda^{-1}$. Since the integral is convergent for $\epsilon > 0$, we can write \begin{equation} \int_{\Lambda^{-1}}^\infty dr\dots = \int_0^\infty dr \dots - \int_0^{\Lambda^{-1}} dr \dots \end{equation} Changing variables to $y=\Lambda r$, working in the limit of small $|p|/\Lambda$ and expanding $J_0$ (since the range of integration is finite in $y$), we obtain in the second integral a divergent $1/\epsilon^2$ piece which cancels the divergence from the first integral, justifying the derivation of equation \eqref{GenFormFn}.
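As an independent cross-check of \eqref{GenFormFn} (not needed for the analytic argument), the regulated integral can be evaluated numerically for integer powers; the short Python sketch below compares it with the leading-log expression, cutting the radial integral off well beyond $\Lambda/|p|$, where the oscillating Bessel tail is subleading. Agreement is expected only up to the $1/\log$ corrections displayed above.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def F_numeric(n, p_over_L, cutoff=50.0):
    # F_n = pi * 2^(n+1) * int_1^infty dy/y (log y)^n J0(|p| y / Lambda),
    # integrated here up to y = cutoff * Lambda/|p| (the oscillating,
    # decaying tail beyond that scale is subleading)
    integrand = lambda y: np.log(y) ** n / y * j0(p_over_L * y)
    val, _ = quad(integrand, 1.0, cutoff / p_over_L, limit=1000)
    return np.pi * 2 ** (n + 1) * val

def F_leading(n, p_over_L):
    # leading-log prediction pi/(n+1) * log^(n+1)(Lambda^2/|p|^2)
    return np.pi / (n + 1) * np.log(1.0 / p_over_L ** 2) ** (n + 1)

for n in (1, 2):
    for p in (1e-3, 1e-5):
        print(n, p, F_numeric(n, p), F_leading(n, p))
\end{verbatim}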
We would like to extend our conclusion for integer $\alpha$ to a general one \begin{equation} F_\alpha(|p|/\Lambda)=\frac{\pi}{\alpha+1} \left(\log (\Lambda^2/|p|^2)\right)^{\alpha+1} \left(1+{\cal O}(1/\log(|p|/\Lambda)) \right)\,, \label{Fresult} \end{equation} by the following qualitative argument (which becomes more precise for values of $\alpha$ that are sufficiently negative). Working with the dimensionless variable $y=\Lambda\,r$, \begin{equation} F_\alpha(|p|/\Lambda) = \pi\, 2^{\alpha+1}\,\int_1^\infty \frac{dy}{y} \left(\log y\right)^\alpha J_0(|p|\,y/\Lambda)\,. \label{Fintegral} \end{equation} When $|p|/\Lambda$ is small, we can split the integral \eqref{Fintegral} into $[1,\, C\, \Lambda/|p|]$ and $[C\,\Lambda/|p|,\,\infty)$, where $C$ is finite and smaller than 1. In the first interval, the argument of the Bessel function is generically small and the function can be approximated by one. The resulting integral immediately reproduces \eqref{Fresult}. We are left to argue that the contribution from the second interval is subleading. This is so for two reasons. First, when the argument $z$ of the Bessel function is large, the latter can be approximated by $z^{-1/2}\,\cos (z - \pi/4)$ (up to coefficients of order one). The integral involves a non-oscillating function multiplied by this oscillating decaying function. Second, for $\alpha$ negative and large enough in absolute value, the logarithm is small in this range. In any case, the integral can also be evaluated numerically for the specific values of $\alpha$ that interest us in the same regime $|p|/\Lambda \ll 1$. These evaluations confirm the expression \eqref{Fresult}. In the next subsection we present additional arguments for the validity of this large J solution. Using \eqref{Fresult} determines the two Fourier transforms \begin{equation} \begin{aligned} G(p,{\bar p}) &= i\frac{A\pi}{\bar p} \left(\log \frac{\Lambda^2}{|p|^2}\right)^{\alpha}\,, \\ \partial_{\bar p}\Sigma(p,{\bar p}) &= i\frac{A^3J^2\pi}{3\alpha+1}\left(\log \frac{\Lambda^2}{|p|^2}\right)^{3\alpha+1} \quad \Rightarrow \quad \Sigma(p,{\bar p}) = i{\bar p} \frac{A^3J^2\pi}{3\alpha+1}\left(\log \frac{\Lambda^2}{|p|^2}\right)^{3\alpha+1}\,. \end{aligned} \end{equation} The SD equation $G(p,{\bar p})\,\Sigma(p,{\bar p}) = -1$ requires \begin{equation} 4\alpha+1=0,\ \ \ -\frac{A^4J^2 \pi^2}{3\alpha+1}=-1 \quad \Rightarrow \quad \alpha = -\frac{1}{4}\,, \,\,\, A^4=\frac{1}{4\pi^2\,J^2}\,. \end{equation} As a quick consistency check that this is a sensible result, note that $A$ is positive. This is a necessary condition because it has to correspond to a 2-pt function \begin{equation} \sum_i \langle \phi | \nu |i\rangle e^{-\tau E_i} \langle i | \nu | \phi \rangle \end{equation} (where $\nu$ is a Hermitian operator) which is positive. \vspace{10pt} \subsubsection{Solution for any J} Another regime to consider in coupling space is small $J$, which can be studied by carrying out perturbation theory in $J$. We can actually do better and find an all-$J$ solution, by remaining in the IR, i.e. large $\Lambda^2 z{\bar z}$ or $\Lambda^2/|p|^2$.
We will assume an ansatz of the form \begin{eqnarray}\label{perturbative anzatz} G(z,\bar z) &=& \frac{1}{z} \sum_{n=0}^\infty a_n J^{2n} \log^n(\Lambda^2 z \bar z) \xrightarrow{|z| \gg \Lambda^{-1}, |p| \ll \Lambda} G(p , \bar p ) = \frac{i \pi}{\bar p} \sum_{n=0}^\infty a_n J^{2n} \log^n\frac{\Lambda^2}{|p|^2} \\ \nonumber \Sigma(z,\bar z) &=& \frac{J^2}{z \bar z^2} \sum_{n=0}^\infty b_n J^{2n} \log^n(\Lambda^2 z \bar z) \xrightarrow{|z| \gg \Lambda^{-1}, |p| \ll \Lambda} \Sigma(p , \bar p ) = i \pi {\bar p} J^2\sum_{n=0}^\infty \frac{b_n}{n+1}\, J^{2n} \log^{n+1}\frac{\Lambda^2}{|p|^2} \end{eqnarray} with $a_0=1$ to match the free theory propagator. Note that this is not the most general expansion in $J$ and in logarithms that we could consider. In fact, if we solve \eqref{DefG}-\eqref{DefSigma} perturbatively in $J$, the coefficient of $J^n$ in \eqref{perturbative anzatz} will have corrections which are lower powers of $\log(\Lambda^2 z \bar z)$ than indicated. However, at lengths much larger than the cutoff scale, these terms are subleading. We will show that the above ansatz is self-consistent, and also solves the SD equations with the right limits at $J\to 0$ and at large $J$, i.e. it re-sums the perturbative series, allowing us to interpolate between both expansions, as long as the IR criterion above is satisfied. Instead of plugging our expansions into the SD equations to determine the coefficients $a_n,\,b_n$ recursively, we notice that the relevant Fourier transforms, summarized in eqs. (\ref{Fourier2}-\ref{Fourier3}) (whose operation is denoted by ${\cal F}$ below) can be recast as \begin{equation} \begin{aligned} {\cal F}\left[\frac{1}{z}\,F_G\bigl(\log(\Lambda^2z{\bar z}) \bigr)\right] &= \frac{i\pi}{\bar{p}}F_G\bigl(\log(\Lambda^2/|p|^2)\bigr)\,, \\ {\cal F}\left[\frac{1}{z\,\bar{z}^2}\, F_\Sigma \bigl(\log(\Lambda^2z{\bar z}) \bigr)\right] &= i\pi {\bar p}\,F_{\Sigma,I}\bigl(\log(\Lambda^2/|p|^2)\bigr)\,, \\ F_{\Sigma,I}(x)&\equiv\int^x dy F_\Sigma(y) \end{aligned} \end{equation} Here $F_G,F_\Sigma$ should be thought of as Taylor expansions in their arguments. This allows us to write our SD equations \eqref{DefG} and \eqref{DefSigma} as \begin{equation} F_\Sigma(x)=J^2F_G^3(x)\,, \quad \frac{1}{i\pi F_G(x)}=-i\pi \int_0^x dy F_\Sigma(y)+{1 \over i\pi} \,, \end{equation} where the integration constant is determined by the boundary condition at $J=0$, i.e. the absence of self energy ($\Sigma=0$) when the interaction is turned off ($J=0$). Taking derivatives, we derive the ODE \begin{equation} F^\prime_G(x)=-\pi^2J^2 F^5_G(x)\,,\quad F_G(0)=1 \end{equation} The ODE has a unique solution, giving rise to the 2-pt function \begin{equation}\label{FullTwoPointFunc} G(z,{\bar z})=\frac{1}{z}\,\frac{1}{\left(1+4\pi^2J^2\log(\Lambda^2\,z{\bar z})\right)^{1/4}}\,. \end{equation} The latter interpolates from small to large $J$, and has the correct limits at $J=0$ and at large $J$. \vspace{10pt} \subsection{Beta Function} Our analysis confirms the existence of an RG flow. We can extract the beta function $\beta (J)$ for the coupling $J$ that controls our random Thirring model from the Callan-Symanzik equation satisfied by the fermion propagator $G(z,{\bar z})$ \begin{equation} \bigl( \Lambda \frac{\partial}{\partial\Lambda} +\beta(J) \partial_J +2\gamma(J)\bigr) {1\over z (1+ 4\pi^2 J^2\log(\Lambda^2 z{\bar z}))^{1/4}}=0\,, \end{equation} whose solution is given by \begin{equation}\label{betagamma} \beta(J)=4\pi^2 J^3\,, \quad \gamma(J)=\pi^2J^2\,.
\end{equation} A positive $\beta$ function tells us the effective coupling becomes smaller in the IR. We are perturbing the action by a large number of marginal operators, which have non-trivial 3-pt functions between them. This means that some of the coefficients of these operators will grow and some will decrease as we flow to the IR. In fact, which ones grow and which ones decrease changes as the couplings themselves evolve. If we think about $J$ as measuring the root mean square of the couplings, then some of them clearly decrease, generating a positive $\beta$ function. Still, the result is a bit peculiar if we want to think about $J\to\infty$, at least at finite $N$, which is not the case here. If we have a theory at finite cut-off and we turn on some relevant and irrelevant operators, then eventually some relevant directions will grow (and the irrelevant ones will decrease). This means that we expect that eventually the $\beta$ function will turn negative - we see no traces of this, presumably because of the large N scaling taken. In the next section, we will gain a better understanding of the positivity of the beta function \eqref{betagamma} by using conformal perturbation theory around the free field fixed point. \vspace{20pt} \section{The $\beta$ function in conformal perturbation theory} \label{pCFT} To obtain a better understanding of the RG flow, we compare our theory to the Gross-Neveu \cite{Gross:1974jv} and Thirring models (for reviews, see \cite{ZinnJustin:2002ru, Moshe:2003xn}). These have been studied extensively, also in the context of the large N limit. Furthermore, since our model also deforms a free theory by 4-fermion marginal operators, conformal perturbation theory remains a very useful tool for any specific realisation of the couplings $J_{ij,kl}$, when these are small. The main difference is that our couplings are as random as possible (although they have a large statistical symmetry) and the key issue is the interplay between randomness and conformal perturbation theory. In this section, we show that even though the $\beta$ function is quadratic in the couplings for a given realisation of the couplings $J_{ij,kl}$, this flow is suppressed at large N for the ensemble average in our models. We will argue that, contrary to what happens in most known models, the next order contribution, i.e. the cubic coupling term in the $\beta$ function, is universal\footnote{This statement holds for any scheme that preserves $\mathrm{SO}(N)_L\times \mathrm{SO}(N)_R$ invariance.}, drives the RG flow discovered in the previous section and in fact agrees with it for all values of $J$, and not just at weak coupling, in the $|p|/\Lambda \ll 1$ regime. \vspace{10pt} \subsection{RG in the Thirring and Gross-Neveu models} As discussed in section \ref{defmodel}, our model is the ``disordered'' version of the Thirring model, given in equation \eqref{ThirSec2}. The latter is usually treated by introducing a gauge field $A_\mu^a$ and rewriting the interaction as \begin{equation} \frac{1}{2} g J_\mu J_\mu\rightarrow \frac{A_\mu^2}{2g} + i\,A_\mu J_\mu\,, \end{equation} where we switched to a vector notation for the original currents and suppressed gauge indices \cite{ZinnJustin:2002ru, Moshe:2003xn}. The $A_\mu^2$ coupling is driven to $g\to \infty$ in the IR. For the case of a single fermion in the fundamental representation, the theory is driven to a massive theory.
If the starting UV field theory is a higher level WZW model, then it is expected\footnote{We are indebted to David Kutasov for a discussion of this point.} to flow in the IR to a coset theory where we quotient by $\mathrm{SO}(N)$. In our case, it is actually more useful to compare to the Gross-Neveu model \cite{Gross:1974jv}. For Majorana fermions, the Thirring model can be written as Gross-Neveu model by rewriting \begin{equation} \sum_{ab} \delta_{ab}J^a{\bar J}^b\sim \sum_{ij}\nu^i\nu^j{\bar\nu}^i{\bar\nu}^j\sim \sum_{ij} \bigl(\nu^i{\bar\nu}^i\bigr) \bigl(\nu^j{\bar\nu}^j\bigr) \end{equation} Following the notation of Dirac fermions in Minkowski space used in \cite{Gross:1974jv}, this model is \begin{equation} {\cal L} = i {\bar \psi} {\slashed\partial} \psi + {g^2 \over 2 N } ({\bar\psi}\psi)^2\,. \end{equation} It is convenient to introduce an auxiliary bosonic field $\sigma$ \begin{equation} {\cal L} = i {\bar \psi} {\slashed\partial} \psi - \frac{\sigma^2}{2} - \frac{g}{\sqrt N} {\bar \psi} \psi\, \sigma\,, \end{equation} because at large $N$ the fermion propagator is not renormalized and the boson propagator becomes \begin{equation} D_R(p,\mu^2)= \frac{-i}{1+\frac{g^2}{2\pi} \log(-\frac{p^2}{\mu^2})}\,. \end{equation} One can use the Callan-Symanzik equation to extract the $\beta$ function at large $N$ to be \begin{equation} \beta(g) \equiv \mu \frac{\partial g}{\partial \mu} = - \frac{g^2}{2 \pi} \label{betaGN} \end{equation} Hence, if $g>0$, the coupling is marginally relevant and the theory is asymptotically free in the UV. If $g<0$, the coupling is marginally irrelevant and the theory becomes free in the IR. We expect a similar structure to exist in the random Thirring model - some of the couplings will be relevant and some irrelevant, their nature changing due to the non-linearity of the $\beta$ functions (as we will see below). In this situation it is more convenient to think about the theory as defined at some cut-off scale which is held fixed, in which case, at least for finite $N$, we expect the theory to emerge along some relevant directions and eventually develop a mass gap. However, this may not be accessible in the large $N$, in the phase where the statistical $\mathrm{SO}(N)_L\times \mathrm{SO}(N)_R$ is preserved. For completeness, in the Gross-Neveu model the dynamical scale is given by \begin{equation} \Lambda(g) \propto \Lambda\, e^{-{2\pi \over g}}(1+{\cal O}(g)). \end{equation} where $\Lambda$ is a UV cutoff used to regularize the theory. In the IR, the theory breaks the chiral symmetry and flows to a massive theory. In fact, the theory is integrable and the complete spectrum of particles, labelled by $n$ (appearing in full multiplets of the symmetry) can be computed exactly and it is given by \begin{equation} m_n=\Lambda(g){N-2\over\pi}\sin \biggl({n\pi\over (N-2)} \biggr)\,. \end{equation} \paragraph{$\beta$ function from conformal perturbation theory.} The RG flows for the Thirring and Gross-Neveu models can also be understood using conformal perturbation theory. Consider a 2d CFT (the free fermion theory in our case) deformed by some marginal operators $\mathcal{O}_\alpha$ with conformal dimensions $(h,\bar{h})=(1,1)$ \begin{equation} S_{\text{CFT}} - \sum_\alpha \lambda_\alpha \int \mathcal{O}_\alpha(z,\bar{z})\,d^2z\,. 
\end{equation} Conformal perturbation theory determines the first contribution to the $\beta$ function of these couplings to be\footnote{We are neglecting numerical factors in this subsection, but we will give precise formulas in subsection \ref{pform}.} \cite{cardy1988conformal} \begin{equation}\label{BetaConf} \beta(\lambda_\alpha) \sim \sum_{\gamma,\sigma} C_{\alpha\gamma\sigma}\lambda_\gamma\lambda_\sigma + \dots \end{equation} where $C_{\alpha\gamma\sigma}$ is the 3-pt function $\langle \mathcal{O}_\alpha(z_\alpha,\bar{z}_\alpha) \mathcal{O}_\gamma(z_\gamma,\bar{z}_\gamma) \mathcal{O}_\sigma(z_\sigma,\bar{z}_\sigma)\rangle$ when we normalise the 2-pt functions canonically. In our discussion, the deformation is of the form \begin{equation}\label{SCFTDeform} S_{\text{CFT}} - J_{a;{\bar a}} \int J^{a}(z)\,\bar{J}^{{\bar a}}(\bar{z})\,d^2z\,. \end{equation} $J^a$ and $\bar J_{\bar a}$ are currents labelled by some index in the adjoint representation of $\mathrm{SO}(N)$ (see appendix \ref{currents} for more details on these conventions). (the same letter $J$ is used to denote both the currents and the coupling constants, but they will be distinguished by their index structure). The operators that we are perturbing by are $\mathcal{O}_{a{\bar a}}(z,\bar z)=J^a(z){\bar J}^{\bar a}(\bar z)$. Their three point functions are of the form \begin{equation}\label{CintermsofF} C_{a\bar a,b \bar b, c \bar c} \propto f_{abc}f_{{\bar a}{\bar b}{\bar c}}\,. \end{equation} Hence the $\beta$ function \eqref{BetaConf} is \begin{equation}\label{BetaFunQuad} \beta_{aa'} \equiv \beta(J_{a;a'}) \propto \sum_{b,b',c,c'} f_{abc}f_{a'b'c'}\,J_{b;b'}\,J_{c;c'}\,. \end{equation} For the Gross-Neveu model, the couplings equal $J_{a;\bar a} = \delta_{a\bar a} {g \over \sqrt N} $ and $f_{abc}$ are the structure constants of $\mathrm{SO}(N)$ given in appendix \ref{currents}. Using the identity \eqref{soNid}, the beta function \eqref{BetaFunQuad} reduces to \begin{equation} \beta(g) \sim -g^2\,, \end{equation} reproducing the behaviour in \eqref{betaGN}. \vspace{10pt} \subsection{$\beta$ function in the random Thirring model} The standard use of conformal perturbation theory is for specific realisations of the set of couplings $J_{a;{\bar a}}$. In the following, we explore its extension to 2d CFT with disordered marginal deformations. \paragraph{Flow of ensemble :} The 0+1 SYK model, the random Thirring model, or any of their cousins, have an infinite number of couplings in a given realisation. In principle we should track all of them. Once we declare that couplings are drawn from some class of distributions $f(\{J_{a;{\bar a}}\})$, e.g. Gaussian in our case, the number of parameters is greatly reduced. However, the distribution may flow, and its functional form may change with scale, i.e the ensemble may become a function of scale $f(\{J_{a;{\bar a}}\},\mu)$, in effect introducing more parameters. There are two equivalent ways of thinking of the RG flow in disordered theories. The first is to think of the flow of ensemble of couplings. In this, the ensemble average is done with a scale dependent distribution denoted by $\langle \langle \rangle \rangle_\mu$. For example \begin{equation}\label{eq:flow ensemble} \langle \langle \prod_{\alpha=1}^n \lambda_\alpha \rangle \rangle_\mu = \int d\lambda_\alpha f(\{\lambda_\alpha\},\mu) \prod_{\alpha=1}^n\lambda_\alpha \end{equation} The second is to consider the flow of couplings for a specific realisation drawn from a fixed ensemble. 
This we denote by the usual $\langle \langle \rangle \rangle$. For example \begin{equation}\label{eq:flow couplings} \langle \langle \prod_{\alpha=1}^n \lambda_\alpha(\mu) \rangle \rangle = \int d\lambda_\alpha f(\{\lambda_\alpha\}) \prod_{\alpha=1}^n\lambda_\alpha(\mu) \end{equation} Given the beta function of couplings in this way, one can deduce the corresponding flow of ensemble by demanding that \eqref{eq:flow ensemble} be the same as \eqref{eq:flow couplings} which is equivalent to \begin{equation} \mu \partial_\mu f(\{\lambda_\alpha\},\mu) = - \partial_\alpha \bigl (\beta^\alpha(\lambda) f(\{\lambda_\alpha\}) \bigr) \end{equation} where $\mu$ is the RG scale. We can characterize these distributions by their moments, and follow their RG flows by the infinite set of $\beta$ functions \begin{equation}\label{n-moment} \beta_n(\{J^{a;\bar a}\})=\mu\partial_\mu\ \langle\langle \Pi_{i=1}^n J^{a_i;{\bar a}_i}\rangle\rangle_\mu =\langle\langle \mu \partial_\mu ( \Pi_{i=1}^n J^{a_i;{\bar a}_i}) \rangle\rangle\,. \end{equation} We will refer to them as the distribution $\beta$ functions. Working with the distribution $\beta$ functions might be a bit counter-intuitive at times. For example, the simplest case is when $n=1$. Using the leading $\beta$ function, quadratic in $J^{a ; \bar a}$, in \eqref{BetaFunQuad} and performing the ensemble average with $\langle\langle J_{b;b'}\,J_{c;c'} \rangle\rangle \sim \frac{J^2}{N^3} \delta_{bc}\delta_{b'c'}$, the average $\beta$ function equals \begin{equation}\label{neq1} \beta_1= \beta( \langle\langle J^{a;{\bar a}} \rangle\rangle_\mu ) =\mu \partial_\mu \langle\langle J^{a ; \bar a} \rangle\rangle_\mu \propto \sum_{b,\bar b,c,\bar c} f_{abc}f_{\bar a \bar b \bar c} \langle \langle J^{b;\bar b}\,J^{c;\bar c} \rangle \rangle = 0. \end{equation} However, this does not mean the theory does not flow. In general, if we choose some functional form for the distribution, the RG flow can take us out of this subspace of functions. For example, working at finite $N$ in our model, the distribution does not remain Gaussian, or even symmetric under $J_{a;\bar a} \to -J_{a;\bar a}$, along the RG flow. To see this, consider \eqref{n-moment} for $n=3$ \begin{equation} \begin{aligned} \beta_3=\mu\partial_\mu \langle\langle J^{a_1;{\bar a}_1} J^{a_2;{\bar a}_2} J^{a_3;{\bar a}_3}\rangle\rangle_\mu &\propto \langle\langle J^{a_1;{\bar a}_1} J^{a_2;{\bar a}_2}f_{a_3bc}f_{{\bar a}_3{\bar b}{\bar c}} J^{b;{\bar b}} J^{c;{\bar c}} \rangle\rangle\ + \text{permutations} \\ & \propto f_{a_1a_2a_3}{\bar f}_{\bar a_1 \bar a_2 \bar a_3} \frac{J^4}{N^6}\,. \end{aligned} \end{equation} Hence, $\beta_3\neq 0$ at finite $N$, which is not compatible with a gaussian distribution. We observe, though, that this departure from gaussianity is subleading in $N$, compared to the nominal scaling in which $J^{a,{\bar a}}\sim J/N^{3/2}$. \paragraph{Flow of the couplings:} The various $\beta_n$ are statistical averages of various couplings in the theory. In principle, all of them appear in different Callan-Symanzik equations and one needs to track the RG flow for all of them. At large $N$, and as long as the statistical symmetry is unbroken, some simplifications occur. Here, we will focus on some additional aspects of the flow generated by the leading term in the $\beta$ function given in \eqref{BetaFunQuad}. We will see this term gives vanishing contributions (in the large N limit) to some of the interesting $\beta_n$'s. 
\begin{itemize} \item First, the ensemble average of the $\beta$ function was already computed in \eqref{neq1} \begin{equation} \beta_1=\langle\langle \beta_{aa'} \rangle\rangle = \langle\langle \mu \partial_\mu( J^{a;\bar a} ) \rangle\rangle = 0. \end{equation} Note this result also follows from symmetry -- the $\beta$ function transforms in the $\text{ad}(\mathrm{SO}(N)_L) \times \text{ad}(\mathrm{SO}(N)_R)$ representation and $\mathrm{SO}(N)_L\times \mathrm{SO}(N)_R$ is restored after the ensemble average. Hence the averaged $\beta$ function must vanish. \item Although we find that the averaged $\beta$ function vanishes, it could be that there are large fluctuations around this mean value. To gain some intuition on what the physics may be for a given realisation, we compute the standard deviation of the $\beta$ function, whose square is $\langle\langle \beta_{a,a'}^2 \rangle\rangle$ (not summed over $a,a'$) \begin{equation} \begin{split} \langle \langle \beta_{a,a'}^2 \rangle \rangle = & \sum_{bb'cc'{\hat b}{\hat b}'{\hat c}{\hat c}'} f_{abc}f_{a'b'c'}f_{a{\hat b}{\hat c}}f_{a'{\hat b}'{\hat c}'} \langle \langle J_{b;b'}J_{c;c'}J_{{\hat b};{\hat b}'}J_{{\hat c};{\hat c}'}\rangle \rangle \\ = & {J^4\over N^6} \sum_{bb'cc'{\hat b}{\hat b}'{\hat c}{\hat c}'} f_{abc}f_{a'b'c'}f_{a{\hat b}{\hat c}}f_{a'{\hat b}'{\hat c}'} \bigl( \delta_{b{\hat b}} \delta_{b'{\hat b}'} \delta_{c{\hat c}} \delta_{c'{\hat c}'} + \delta_{b{\hat c}} \delta_{b'{\hat c}'} \delta_{c{\hat b}} \delta_{c'{\hat b}'} \bigr) \\ \sim & {J^4\over N^6} \sum_{bcb'c'} f_{abc}f_{abc}f_{a'b'c'}f_{a'b'c'}\sim {J^4\over N^4} \end{split} \end{equation} The associated nominal scaling is \begin{equation} {\partial {J\over N^{3/2}} \over \partial \log \mu } \propto {J^2\over N^2},\ \implies {\partial J\over \partial \log \mu}\sim {J^2\over \sqrt{N}} \end{equation} and hence to leading order in the large N expansion (with J held fixed), the leading term in the $\beta$ function should be taken to be zero. \item If the ensemble remains gaussian to leading order in $N$, the only parameter characterising the ensemble is $J$ (and in any case, the standard deviation is the leading statistical moment). Hence it is interesting to follow its flow by studying \begin{equation} J^2(\mu) ={1\over N} \langle \langle \sum_{a{\bar a}} {J_{a;{\bar a}}}^2 \rangle\rangle_\mu\ . \end{equation} At this order of the $\beta$ function, i.e. quadratic order in the couplings, $\beta_2$ also vanishes \begin{equation} \beta_2=\mu \partial_\mu \langle\langle J^{a_1 ; \bar a_1} J^{a_2 ; \bar a_2} \rangle\rangle_\mu \propto f_{a_1bc}f_{\bar a_1 \bar b \bar c} \langle\langle J^{b ; \bar b} J^{c ; \bar c} J^{a_2 ; \bar a_2} \rangle\rangle + 1 \leftrightarrow 2 = 0\ , \end{equation} i.e., \begin{equation} \partial_{\log \mu} J^2 = \partial_{\log \mu } \frac{1}{N} \langle\langle J_{a{\bar a}}J^{a{\bar a}} \rangle\rangle =0\ , \end{equation} \end{itemize} These are actually different ways of seeing why the leading quadratic term in the $\beta$ function \eqref{BetaFunQuad} does not contribute to the flow, due to cancellation between the different couplings. These arguments are correct in perturbation theory, or as long as the $\mathrm{SO}(N)_L\times \mathrm{SO}(N)_R$ symmetry is unbroken. They may be limited in teaching us about global issues of the RG flow because the latter is non-linear in the parameters. For example, if we fix the couplings at some high scale $\Lambda$, then, since relevant couplings are turned on, an IR dynamical scale would eventually be generated (at least at finite N).
Computing this dynamical scale for a specific realisation, as a function of $J_{a;\bar a}$, and then averaging over the $J_{a;\bar a}$ is different from first averaging the $\beta$ function and then trying to deduce the flow's end. \vspace{10pt} \subsection{$\beta$ function from conformal perturbation theory } \label{pform} Conformal perturbation theory provides an expansion of the $\beta$ function in higher powers of the couplings $J_{a;\bar a}$. We saw the quadratic terms are effectively zero at large $N$ in our models. Yet the distribution parameter $J^2$ still flows as we see explicitly in our computations \eqref{betagamma}. In this section, we argue this RG flow originates from the cubic terms in the $\beta$ function. Before starting this discussion, let us comment on the status of these cubic terms, which are usually scheme dependent and hence non-universal. For example, if we redefine the couplings appearing in \eqref{BetaConf} by \begin{equation}\label{schemedep} {\tilde \lambda}_\alpha=\lambda_\alpha+ \sum_{\beta \gamma} A_{\alpha\beta\gamma}\lambda_\beta\lambda_\gamma + \dots \end{equation} the higher order terms in the $\beta$ function expansion will change. However, we can argue that such terms are universal in schemes preserving the $\mathrm{SO}(N)_L \times \mathrm{SO}(N)_R$ statistical symmetry. To see this, notice that the redefinition \eqref{schemedep} would change the cubic term in the beta function as follows\footnote{Here $\dot \lambda \equiv \mu \partial_\mu \lambda$.} \begin{equation} \begin{split} {\dot{\tilde\lambda}_\alpha} &= {\dot\lambda}_\alpha + 2A_{\alpha\beta\gamma}{\dot\lambda}_\beta\lambda_\gamma =C_{\alpha\beta\gamma}\lambda_\beta\lambda_\gamma+{\cal O}(\lambda^3)+2A_{\alpha\beta\gamma}C_{\beta\sigma\delta}{\lambda}_\sigma\lambda_\delta \lambda_\gamma\\ &= C_{\alpha\beta\gamma}{\tilde\lambda}_\beta{\tilde\lambda}_\gamma +{\cal O}(\lambda^3) - 2C_{\alpha\beta\gamma}A_{\beta\sigma\delta}{\lambda}_\sigma\lambda_\delta \lambda_\gamma + 2A_{\alpha\beta\gamma}C_{\beta\sigma\delta}{\lambda}_\sigma\lambda_\delta \lambda_\gamma\,, \end{split} \end{equation} where $C_{\alpha \beta \gamma}$ are given in \eqref{CintermsofF}. If the redefinition preserves the $\mathrm{SO}(N)_L\times \mathrm{SO}(N)_R$, the matrix $A$ has to intertwine $\mathrm{SO}(N)_L \times \mathrm{SO}(N)_R$ to $\mathrm{SO}(N)_L \times \mathrm{SO}(N)_R \times \mathrm{SO}(N)_L \times \mathrm{SO}(N)_R$. The only option is to take $A_{\alpha \beta \gamma} \propto C_{\alpha \beta \gamma}$ but then the cubic term in the $\beta$ function does not change. In the following, we work with such a set of schemes, dictated by symmetry and having a universal cubic term contribution to the $\beta$ function. We use the results in \cite{Gaberdiel:2008fn,Behr:2013vta}. The formula for the $\beta$ function evaluated to this order is given in equations (5.12) and (5.13) of \cite{Behr:2013vta} which when adapted to our notation gives \footnote{Our convention for the $\beta$ function has an extra sign compared to that of \cite{Behr:2013vta}. 
In what follows, we will raise and lower adjoint indices $a,b$ etc by $\delta^{ab}, \delta_{ab}$ respectively.} \begin{equation} \beta_{a{\bar a}}=-\pi\,f_{bca}\,{ f}_{\bar b\bar c\bar a}\,J^{b;{\bar b}} J^{c;{\bar c}} -\beta_{a{\bar a} b{\bar b}c{\bar c}d{\bar d}} J^{b;{\bar b}} J^{c;{\bar c}} J^{d;{\bar d}} \label{betak} \end{equation} where \begin{equation} \begin{aligned} \beta_{a{\bar a} b{\bar b}c{\bar c}d{\bar d}} &= \frac{\pi^2}{3!} \biggl(E_{abcd,{\bar a}{\bar b}{\bar c}{\bar d}}+{\bar E}_{abcd,{\bar a}{\bar b}{\bar c}{\bar d}}\biggr) \\ E_{abcd,{\bar a}{\bar b}{\bar c}{\bar d}}&= (\delta_{ad}\delta_{bc}-\delta_{ac}\delta_{bd}){f}_{{\bar a}{\bar b}}^{\ \ \bar r}{f}_{{\bar r}{\bar c}{\bar d}}+ (\delta_{ab}\delta_{cd}-\delta_{ad}\delta_{bc}){f}_{{\bar a}{\bar c}}^{\ \ \bar r}{f}_{{\bar r}{\bar d}{\bar b}} \\ & \quad + (\delta_{ac}\delta_{bd}-\delta_{ab}\delta_{cd}){f}_{{\bar a}{\bar d}}^{\ \ \bar r}{f}_{{\bar r}{\bar b}{\bar c}}\,, \\ {\bar E}_{abcd,{\bar a}{\bar b}{\bar c}{\bar d}}&= (\delta_{\bar a\bar d}\delta_{\bar b\bar c}-\delta_{\bar a\bar c}\delta_{\bar b\bar d}) { f}_{{a}{b}}^{\ \ r}f_{{ r}{c}{d}} + (\delta_{\bar a\bar b}\delta_{\bar c\bar d}-\delta_{\bar a\bar d}\delta_{\bar b\bar c}) { f}_{{a}{c}}^{\ \ r}f_{{ r}{d}{b}} \\ & \quad + (\delta_{\bar a\bar c}\delta_{\bar b\bar d}-\delta_{\bar a\bar b}\delta_{\bar c\bar d}) { f}_{{a}{d}}^{\ \ r}f_{{ r}{b}{c}}\,. \\ &= E_{{\bar a}{\bar b}{\bar c}{\bar d}, a bc d} \end{aligned} \label{betak2} \end{equation} We are interested in evaluating the ensemble average of the $\beta$ function of the mean squared couplings, i.e $\tilde \beta \equiv \langle \langle \beta (J^{a;\bar a} J_{a;\bar a}) \rangle \rangle$. Assuming the ensemble to be gaussian at all scales gives \begin{equation}\label{betafun} \tilde \beta = \sum_{a,\bar a}\mu\partial_\mu \langle \langle J_{a;{\bar a}} J_{a;{\bar a}} \rangle \rangle_\mu = \sum_{a,\bar a} \mu\partial_\mu {2J^2\over N^3} \delta^a_a \delta^{\bar a}_{\bar a} = J N (1 - N^{-1})^2 \mu\partial_\mu J\,. \end{equation} Whereas the usual definition of $\beta$ function of ensemble gives \begin{eqnarray}\label{RHS} \tilde \beta &=& 2 \sum_{a,\bar a} \langle \langle J_{a; \bar a} \mu\partial_\mu J_{a;\bar a} \rangle \rangle = 2 \sum_{a,\bar a} \langle \langle J_{a; \bar a} \beta^{a \bar a}\rangle \rangle \\ &=& - 2 \beta_{a{\bar a} b_1{\bar b}_1 b_2{\bar b}_2 b_3{\bar b}_3} \langle \langle J^{a;{\bar a}} J^{b_1;{\bar b}_1} J^{b_2;{\bar b_2}} J^{b_3;{\bar b_3}}\rangle \rangle\\ \nonumber &=& -{2\pi^2 \over 3!} \left( E_{ab_1b_2b_3,\bar a \bar b_1 \bar b_2 \bar b_3} + E_{\bar a \bar b_1 \bar b_2 \bar b_3,ab_1b_2b_3} \right) \times {4 J^4 \over N^6}[ \delta^{ab_1}\delta^{b_2b_3}\delta^{{\bar a}{\bar b}_1}\delta^{{\bar b}_2{\bar b}_3}+ 1,2,3 \ \text{cyclic}] \\ &=& -{16 \pi^2 J^4 \over 3! N^6} \quad E_{ab_1b_2b_3,\bar a \bar b_1 \bar b_2 \bar b_3} \quad [ \delta^{ab_1}\delta^{b_2b_3}\delta^{{\bar a}{\bar b}_1}\delta^{{\bar b}_2{\bar b}_3}+ 1,2,3 \ \text{cyclic}] \\ \nonumber &=& 4 \pi^2 J^4 N (1 + {\cal O}(N^{-1}))\,, \end{eqnarray} where we used relevant identities satisfied by the structure constants described in appendix \ref{currents}. Notice that the above has the correct nominal scaling ${1 \over N^3}\times N^4$. Comparing with \eqref{betafun}, we get \begin{equation} \beta(J) \equiv \mu \partial_\mu J = 4 \pi^2 J^3 \end{equation} This matches the $\beta$ function in \eqref{betagamma} obtained from direct computation using the Callan-Symanzik equation. 
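As an independent sanity check of the group-theoretic input (not part of the original computation), the $\mathrm{SO}(N)$ identity $\sum_{a,b} f_{abc}f_{abd} = 2(N-2)\delta_{cd}$ of \eqref{soNid}, which enters the contraction above, can be verified numerically for small $N$ in the conventions of appendix \ref{currents}; the following illustrative script does so.
\begin{verbatim}
# Illustrative numerical check of sum_{a,b} f_abc f_abd = 2(N-2) delta_cd,
# with t^(kl)_{ij} = i(delta_ki delta_lj - delta_kj delta_li),
# Tr(t^a t^b) = 2 delta^ab and [t^a, t^b] = i f_abc t^c.
import itertools
import numpy as np

def so_n_generators(N):
    gens = []
    for k, l in itertools.combinations(range(N), 2):
        t = np.zeros((N, N), dtype=complex)
        t[k, l], t[l, k] = 1j, -1j
        gens.append(t)
    return gens

def structure_constants(gens):
    dim = len(gens)
    f = np.zeros((dim, dim, dim))
    for a, b, c in itertools.product(range(dim), repeat=3):
        comm = gens[a] @ gens[b] - gens[b] @ gens[a]
        # f_abc = Tr([t^a, t^b] t^c) / (2i), using Tr(t^a t^b) = 2 delta^ab
        f[a, b, c] = np.real(np.trace(comm @ gens[c]) / 2j)
    return f

for N in (3, 4, 5, 6):
    f = structure_constants(so_n_generators(N))
    lhs = np.einsum('abc,abd->cd', f, f)
    print(N, np.allclose(lhs, 2 * (N - 2) * np.eye(f.shape[0])))  # True
\end{verbatim}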
\paragraph{Flow of ensemble :} Finally, it is worth noting that this cubic term preserves the Gaussianity of the distribution of the couplings at large $N$. To prove this, we can show that the RG flow of any statistical average of $2n$ $J$'s is given, at leading order in $N$, by the RG flow of each pair, i.e., Gaussianity implies that \begin{equation}\label{Gaussian Claim} \begin{split} \mu \partial_\mu \langle\langle J^{a_1;{\bar a}_1}....J^{a_{2n};{\bar a}_{2n}} \rangle\rangle_\mu = \sum_{\sigma} \langle\langle \mu \partial_\mu ( J^{a_{\sigma(1)};{\bar a}_{\sigma(1)}} J^{a_{\sigma(2)};{\bar a}_{\sigma(2)}}) \rangle\rangle \prod_{i=2}^{n} \langle\langle J^{a_{\sigma(2i-1)};{\bar a}_{\sigma(2i-1)}} J^{a_{\sigma(2i)};{\bar a}_{\sigma(2i)}} \rangle\rangle \end{split} \end{equation} where the permutation $\sigma \in S_{2n}$ has $n$ 2-cycles - basically all possible Wick contractions. To see this, we evaluate the left hand side first \begin{equation} \mu \partial_\mu \langle\langle J^{a_1;{\bar a}_1}....J^{a_{2n};{\bar a}_{2n}} \rangle\rangle_\mu = - \sum_{k=1}^{2n}\beta_{a_k \bar a_k b_1 \bar b_1 b_2 \bar b_2 b_3 \bar b_3} \langle \langle J^{b_1;\bar b_1} J^{b_2;\bar b_2} J^{b_3;\bar b_3} \prod_{i=1,i\ne k}^{2n} J^{a_i; \bar a_i} \rangle\rangle\,. \end{equation} The evaluation of the right hand side has two kinds of terms. In the first, two of the $J^{b_i;\bar b_i}$ contract among themselves. Notice these are exactly the terms captured by \eqref{Gaussian Claim}. Their nominal scaling is $N^{-3n}$. The second kind involves no contractions among the $J^{b_i;\bar b_i}$. Consider one such term below (where, say, $J^{b_i;\bar b_i}$ contracts with $J^{a_i;\bar a_i}$ for $i=1,2,3$ and $k>3$). We show it has subleading nominal scaling. Hence, it is negligible. \begin{equation} \begin{aligned} & \beta_{a_k \bar a_k b_1 \bar b_1 b_2 \bar b_2 b_3 \bar b_3} \prod_{i=1}^3 \langle \langle J^{b_i;\bar b_i} J^{a_i;\bar a_i} \rangle\rangle \langle \langle \prod_{i=4,i\ne k}^{2n} J^{a_i; \bar a_i} \rangle\rangle \\ & \sim N^{-{9 }} E_{a_k b_1b_2b_3,\bar a_k \bar b_1 \bar b_2\bar b_3} \prod_{i=1}^3 \delta^{a_i b_i} \delta^{\bar a_i \bar b_i} \langle \langle \prod_{i=4,i\ne k}^{2n} J^{a_i; \bar a_i} \rangle\rangle \\ & \sim N^{-9}\left[ \left( \delta_{a_k a_3} \delta_{a_1a_2} - \delta_{a_k a_2} \delta_{a_1 a_3} \right) f_{\bar a_k \bar a_1}^{\ \ \ \ \bar r} f_{\bar r \bar a_2 \bar a_3 } + 1,2,3 \ \text{cyclic} \right] \langle \langle \prod_{i=4,i\ne k}^{2n} J^{a_i; \bar a_i} \rangle\rangle \end{aligned} \end{equation} Since $\sum_{\bar r}f_{\bar a_k \bar a_1}^{\ \ \ \ \bar r} f_{\bar r \bar a_2 \bar a_3 }$ does not scale with $N$, we conclude the nominal scaling of this term is $N^{-3n -3}$, which is subleading compared to the $N^{-3n}$ captured by the Gaussianity-preserving term in \eqref{Gaussian Claim}. \section{Discussion and Future Directions} \label{disc} We computed the 2-pt function for a set of $N$ 1+1 dimensional Majorana fermions with a random Thirring interaction. We found that the $\beta$ function is positive, which means that the theory is not renormalizable. The running is, however, weaker than the usual logarithmic running of marginal operators. As an effective theory below some scale, it is perfectly valid, and its RG exhibits some new features.
This is the context in which the theory might be useful for black holes in AdS -- the model above might serve as a heuristic model for the degrees of freedom inside a black hole at some finite energy, and if we want to understand its surrounding weakly curved AdS outside the horizon, we need to couple it to additional ``probe'' degrees of freedom \cite{Berkooz:2016cvq}. The energy scale is therefore set by the temperature and the degrees of freedom of the black hole ``never make it'' to high energies. There are some natural future directions: \begin{enumerate} \item[1.] A better understanding of the solution to the Schwinger-Dyson equation: We treated the Schwinger-Dyson equations in an approximation in which we solve the spatial ``interaction'' equation exactly, and the momentum ``propagator'' equation only approximately. This was convenient since the ``interaction'' equation is more non-linear. The fact that we were able to approximately solve the equations consistently for all values of $J$ for sufficiently low energies lends support to this approximation scheme. However, it would be worthwhile to see if the approximation can be justified, or tested on other examples. \item[2.] Thermal partition function: One natural extension is to compute the thermal partition function. We expect that we will be able to do it at low temperature, where we can use long distance approximations of the type that we used above. \item[3.] 4-pt function: In the 0+1 dimensional SYK there was a truncation of the action, in terms of 2-pt functions, to a reparametrization invariant action, and there was no need to regulate the action in order to make sense of the 2-pt function. This enabled partial computation of the 4-pt function, although the final expression required a scale (or the re-introduction of the RG flow). Here we need to include the RG flow already at the level of the 2-pt function, which means the computation of the 4-pt function might be considerably more difficult. It is, however, the main indication of whether the theory can be related to a bulk theory of any sort. \item[4.] Renormalizability and stability: We would like to see whether the theory is renormalizable, and at what scales it develops a mass gap in the IR. The former might be easier in the bi-local description of the model. To check for a mass gap one can look for poles in further correlation functions. In the Gross-Neveu model, for example, one indication for the mass scale is a tachyonic pole in the 2-pt function for $\sigma$, which feeds into a pole in the 4-pt function of the fermions (there is no pole in the 2-pt function of the fermion, which is not renormalized when writing the theory with the $\sigma$ field). To see this effect in our model, we again need to compute the 4-pt function of the fermions in the model. \item[5.] The Thirring model is dual to Sine-Gordon models via bosonization dualities. This suggests that one can obtain SYK-like models with bosonic fields too, at least in 2 dimensions. It would then be interesting to explore whether there are other interesting bosonic SYK models. \item[6.] Finally, it would be interesting to couple this model, at finite temperature, to additional ``probe'' degrees of freedom and examine whether it resembles a black hole in $AdS_3$. \end{enumerate} We hope to return to these issues in future work. \vspace{20pt} \subsection*{Acknowledgements} We would like to thank Ofer Aharony and David Kutasov for illuminating discussions. The work of MB is supported by an ISF center of excellence grant (1989/14).
PN gratefully acknowledges the support from the International Centre for Theoretical Sciences (ICTS), India. The work of MR is supported by a Discovery grant from NSERC. The work of JS is supported by the Science and Technology Facilities Council (STFC) [grant number ST/L000458/1]. MB holds the Charles and David Wolfson Professorial chair of Theoretical Physics. JS's research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Research, Innovation and Science. \vspace{20pt} \begin{appendix} \section{2d Euclidean CFT conventions} \label{2dcft} Given the Euclidean action $S[\nu,\bar \nu]$ for the Majorana fermions $\nu,\bar \nu$, the Euclidean partition function is given by \begin{equation} Z = \int [D\nu] [D \bar \nu] e^{-S[\nu,\bar \nu]} \end{equation} Euclidean coordinates are labelled by $\vec x= (x_1,x_2)$ and momenta by $\vec p = (p_1,p_2)$. The definitions of the Fourier transform and its inverse are \begin{equation} {\cal F}(\vec p) = \int {d^2\vec x}\, e^{i \vec p . \vec x} f(\vec x)\,, \quad f(\vec x) = \frac{1}{(2\pi)^2}\int {d^2\vec p}\, e^{-i \vec p . \vec x} {\cal F}(\vec p)\,. \end{equation} Complex coordinates are defined as \begin{eqnarray} z = x_1+i x_2\,, \quad \bar z = x_1 - i x_2 \quad \implies \quad \underbrace{dz d\bar z}_{d^2\!z} = 2 d^2\! \vec x \end{eqnarray} and similarly for momenta \begin{equation} p = {p_1 - i p_2 \over 2}\,, \quad \bar p = {p_1 + i p_2 \over 2} \quad \implies \quad \underbrace{dp d\bar{p}}_{d^2\!p} = {d^2\vec p \over 2} \end{equation} One can check that $z \bar z = |\vec x|^2$ and $|p|^2 \equiv 4 p \bar p = |\vec p|^2$, as well as \begin{equation} \vec p .\vec x = p z + \bar p \bar z\,, \quad \delta^2(\vec x) = 2 \delta(z) \delta(\bar z) \,, \quad \delta^2(\vec p) = { \delta(p) \delta(\bar p)\over 2} \end{equation} The definition of the Fourier transform in complex coordinates translates to \begin{equation} \begin{split} {\cal F}(p, {\bar p})[ f(z,\bar z)] &\equiv \int {d^2\!z \over 2} e^{i (p z + {\bar p} \bar z)} f(z,\bar z) \\ f(z, \bar z) &= {\cal F}^{-1}[{\cal F}(p, {\bar p})] \equiv \int {2d^2\!p \over (2\pi)^2} e^{-i (p z + {\bar p} \bar z)} {\cal F}(p, {\bar p})\,. \\ \end{split} \end{equation} \paragraph{Summary of Fourier transforms.} The main Fourier transforms used in the main text are given below.
\begin{eqnarray}\label{Fourier1} \int {d^2\!z \over 2} e^{i (p z + \bar p \bar z)} { \log^\alpha(\Lambda^2 z \bar z) \over z \bar z } &=& { \pi \over \alpha +1 } \log^{\alpha+1} (\Lambda^2/|p|^2) \left( 1 +{\cal O}({ 1 \over \log(\Lambda^2 / |p|^2) })\right) \\ \label{Fourier2} \int {d^2\!z \over 2} e^{i (p z + \bar p \bar z)} { \log^\alpha(\Lambda^2 z \bar z) \over z } &=& { i \pi \over \bar p } \log^{\alpha} (\Lambda^2/|p|^2)\left( 1 +{\cal O}({ 1 \over \log(\Lambda^2 /|p|^2) })\right) \\ \label{Fourier3} \int {d^2\!z \over 2} e^{i (p z + \bar p \bar z)} { \log^\alpha(\Lambda^2 z \bar z) \over z \bar z^2 } &=& { i \pi \bar p \over \alpha +1 } \log^{\alpha+1} (\Lambda^2/|p|^2)\left( 1 +{\cal O}({ 1 \over \log(\Lambda^2 / |p|^2) })\right) \end{eqnarray} \vspace{20pt} \section{A different regularisation of the main Fourier transform} \label{alter} In this appendix we provide an alternative regularisation for the integral \eqref{original} \begin{equation} F_\alpha(|p|/\Lambda) = (2 \pi) \ 2^\alpha \int_0^\infty {dr \over r} \log^\alpha(\Lambda r) J_0(|p| r)\,. \end{equation} Since the integral diverges at small $r$, we need to cut it off. It is natural to do so at $r=1/\Lambda$ (or larger values so that the logarithm does not become negative). This is another indication that the scale $\Lambda$ behaves like a UV cut-off. The resulting integral reduces to \begin{equation} F_\alpha(|p|/\Lambda) = \pi\, 2^{\alpha+1}\,\int_1^\infty \frac{dy}{y} \left(\log y\right)^\alpha J_0(|p|\,y/\Lambda)\,, \label{Fintegral1} \end{equation} where we introduced the dimensionless variable $y=\Lambda\,r$. Let us consider the case $\alpha= n \in \mathbb{Z}$ to get some intuition. We are interested in studying the momentum dependence of this integral. For this purpose, define $\tau \equiv |p|/\Lambda$ and $t=\tau\,y$, so that \eqref{Fintegral1} becomes \begin{equation*} I_\alpha(\tau) \equiv \frac{F_\alpha(|p|/\Lambda)}{\pi\, 2^{\alpha+1}}= \int^\infty_\tau \frac{dt}{t}(\log (t/\tau))^\alpha\,J_0(t)\,. \end{equation*} Its derivative with respect to $\tau$ satisfies \begin{equation} \frac{dI_\alpha(\tau)}{d\tau} = -\frac{\alpha}{\tau} I_{\alpha-1}(\tau)\,. \label{recurrence} \end{equation} Notice the lower limit of integration does not contribute, since the integrand vanishes at $t=\tau$ due to the logarithm. To proceed, we use an integral appearing in \cite{abramowitz+stegun} \begin{equation} I_0(\tau) = \int^\infty_\tau \frac{J_0(t)}{t}\,dt = -\gamma - \log\frac{\tau}{2} - \sum_{k=1}^\infty (-1)^k \frac{(\tau/2)^{2k}}{2k\,(k!)^2}\,. \end{equation} This result is valid for any $\tau$. In the following, we will explore the physical regime corresponding to the IR, $\tau\equiv |p|/\Lambda \ll 1$, where the dominant contribution to $I_0(\tau)$ is captured by the logarithm and all the analytic terms are dropped. It is easy to solve the recurrence relation \eqref{recurrence} when focusing on this dominant logarithmic contribution. Indeed, by induction, we can prove that the ansatz \begin{equation} I_\alpha(\tau) \sim \frac{(-1)^{\alpha+1}}{\alpha+1} \left(\log \frac{\tau}{2}\right)^{\alpha + 1}\,, \end{equation} solves \eqref{recurrence}. Notice that the integer nature of $\alpha$ was used in the induction step, so that $I_0(\tau)$ belongs to our series.
The above derivation for $\alpha\in \mathbb{Z}$ also determines the logarithmic nature of the subleading contributions \begin{equation} F_\alpha(|p|/\Lambda)=\frac{\pi}{\alpha+1} \left(\log (\Lambda^2/|p|^2)\right)^{\alpha+1} \left(1+{\cal O}(1/\log(|p|/\Lambda)) \right)\,, \end{equation} in agreement with the result \eqref{GenFormFn} in the main text. \vspace{20pt} \section{Normalisation of currents} \label{currents} When matching the free fermionic notation with the $\mathrm{SO}(N)$ at level one, the currents are written in terms of adjoint indices $a$ \begin{equation} J^a(z) = \frac{1}{2}\sum_{i,j} \left(\nu^i\,t^a_{ij}\,\nu^j\right)(z)\,, \end{equation} with a similar expression for $\bar{J}^b(\bar{z})$ in terms of $\bar{\nu}^k$, where $a$ stands for a pair of integers $(kl)$ satisfying $1\leq k < l \leq N$ and the matrices $t^a_{ij}$ satisfy\footnote{Following the conventions in \cite{DiFrancesco:1997nk}.} \begin{equation} \begin{aligned} t^a_{ij} &\equiv i\left(\delta^k_i\delta^l_j - \delta^k_j\delta^l_i\right) \\ \text{Tr}\left(t^at^b\right) &= 2\delta^{ab} \equiv 2\left(\delta^{ki}\delta^{lj}-\delta^{kj}\delta^{li}\right) \\ \sum_a t^a_{ij}t^a_{kl} &= -\delta_{ik}\delta_{jl}+ \delta_{jk}\delta_{il}\,, \end{aligned} \end{equation} where in the second line $a=(ij)$ and $b=(kl)$. Notice $\sum_a \delta^{aa} = \frac{N^2}{2}(1-N^{-1})$ due to the range of the indices $(kl)$. These matrices have a commutator $\left[t^a,\,t^b\right]= \sum_c if_{abc}\,t^c$ whose structure constants are explicitly given by \begin{equation} f_{abc} \equiv f_{(ij)(kl)(mn)}=\delta_{mi}\left(\delta_{nl}\delta_{jk}-\delta_{nk}\delta_{jl}\right) + \delta_{mj}\left(\delta_{il}\delta_{nk}-\delta_{nl}\delta_{ik}\right) \end{equation} It is easy to show these structure constants satisfy the identity \begin{equation} \sum_{a,b} f_{abc}f_{abd} = 2(N-2)\delta_{cd}\,. \label{soNid} \end{equation} Finally, the OPE between currents is \begin{equation*} J^a(z)\,J^b(\omega) \sim \sum_c \frac{i\,f_{abc}\,J^c(\omega)}{(z-\omega)} + \frac{1}{2}\frac{\text{Tr}(t^at^b)}{(z-\omega)^2}\,, \end{equation*} with an analogous expression for the opposite chirality. These determine the 3-pt functions \eqref{CintermsofF}. \end{appendix} \vspace{10pt} \bibliographystyle{JHEP}
\section{Introduction} \begin{figure} \centerline{\includegraphics[width=0.48\textwidth]{figures/framework_mosaic.png}} \caption{An illustration of the proposed Semantic Connectivity-aware Learning (SCL) approach for semantic segmentation, which improves segmentation performance from the perspective of connectivity.} \label{framework} \end{figure} Portrait segmentation~\cite{shen2016automatic} has brought great success to various entertainment applications, such as virtual backgrounds, beautifying filters, and character special effects. Among these applications, video conferencing has become a major scenario for portrait segmentation, where participants can automatically replace their private backgrounds (e.g., ones from private rooms) with virtual scenes. The outbreak of coronavirus has further accelerated the prevalence of video conferencing, which has dramatically replaced traditional face-to-face meetings as working from home has become desirable~\cite{sander2020coronavirus}. Moreover, compared to traditional video conferencing that links participants from different offices/conference rooms, the current meeting scenes have become much more diverse in surroundings and lighting conditions, as live videos are recorded from each participant's home. Participants may show various postures and actions, and even wear face masks. In addition, participants sometimes access the teleconference using a thin client, such as a JavaScript-based chat webpage running in a browser, or a chat app running on mobile devices. Thus, portrait segmentation frequently needs to be served on resource-limited computing platforms (e.g., webpages and smartphones without powerful GPUs) while ensuring real-time performance for live teleconferencing. All these practical issues in post-COVID-19 video conferencing have brought great challenges and opportunities to the portrait segmentation field. \begin{figure*}[t!] \centerline{\includegraphics[width=0.95\textwidth]{figures/dataset_mosaic.png}} \caption{Examples of our dataset and existing datasets. (a) FVS contains only 4 green-screen videos. Due to the composition effect, the labels are not smooth enough. (b) Maadaa contains a lot of similar images and irrelevant information from the software interface, e.g. virtual buttons and small windows. (c) The proposed dataset contains various teleconferencing scenes, various actions of the participants, interference from passers-by, and illumination changes. Note that the human subjects in all videos of the proposed dataset have granted the rights to use and disseminate them for scientific research purposes.} \label{dataset} \end{figure*} In fact, much work has been done on both datasets and methodologies for portrait segmentation. For datasets, there are EG1800~\cite{shen2016automatic}, AISeg~\cite{aiseg}, FVS~\cite{DBLP:journals/corr/abs-2104-09752}, and Maadaa~\cite{maadaa}, as shown in figure \ref{dataset}. However, they are rarely applicable to video conferencing tasks: the datasets for video conferencing either have low picture quality and high redundancy or even contain synthesized images. \emph{Thus, a new dataset with real-world teleconferencing videos of high picture quality and fine-grained labels is required}.
In terms of segmentation methods, a great number of works have been proposed to address context information~\cite{yuan2020object,zhao2017pyramid,chen2018encoder}, multi-scale adaptation~\cite{chen2016attention,tao2020hierarchical}, fine edge processing~\cite{kirillov2020pointrend,yuan2020segfix,cheng2020boundary,cheng2021boundary,hao2021edgeflow}, and category imbalance~\cite{berman2018lovasz,milletari2016v}. However, these approaches are designed for generic semantic segmentation and are not optimized for portrait segmentation. Although portrait segmentation is a sub-type of semantic segmentation, it has distinct characteristics compared with other object segmentation. A person can be regarded as a non-rigid object whose postures and appearances vary, which is challenging for semantic segmentation. In addition, generic semantic segmentation is pixel-level classification, which ignores the completeness of person instances. \emph{Thus, a new learning approach that takes care of the completeness of person instances, subject to varying human actions/postures, is required for portrait segmentation in teleconferencing.} In addition, to achieve portrait segmentation on mobile devices, several lightweight models for semantic segmentation have been proposed~\cite{BMVC2019,wang2020deep}. However, the results of these models~\cite{BMVC2019,wang2020deep} evaluated on portrait datasets are unsatisfactory. \emph{Thus, a lightweight model that could deliver real-time portrait segmentation on resource-limited platforms (e.g., mobile devices and browsers) is required.} Therefore, we introduce an open-source solution for practical portrait segmentation named PP-HumanSeg. In this work, we construct a large-scale video portrait dataset including 291 meeting videos in 23 different scenes. To facilitate researchers in the field, we provide 14,117 fine-annotated images. To improve the completeness of person instances, we propose a new Semantic Connectivity-aware Learning (SCL) approach, where the concept of connected components is used to represent the completeness of a person. The proposed approach improves the consistency of connectivity between the segmentation results and the ground truth. Finally, we propose an ultra-lightweight segmentation network trained with SCL, which achieves the best trade-off between mIoU and inference speed. The contributions of this paper are as follows: \begin{itemize} \item We release a large-scale video portrait dataset that contains 291 videos from 23 conference scenes with 14K fine-labeled frames, to facilitate progress in portrait segmentation for video conferencing. Please refer to figure~\ref{dataset} for comparisons with existing datasets, such as FVS~\cite{DBLP:journals/corr/abs-2104-09752} and Maadaa~\cite{maadaa}. \item We propose a novel Semantic Connectivity-aware Learning (SCL) framework for portrait segmentation, which improves segmentation performance from the perspective of connectivity. \item We propose an ultra-lightweight model with SCL for practical portrait segmentation, which achieves the best trade-off between performance and inference speed. Extensive evaluations on our dataset demonstrate the superiority of SCL and our model. \end{itemize} To the best of our knowledge, it is the first video portrait dataset for video conferencing with such a variety of scenes, character appearances, and actions, offered together with non-trivial baseline models and algorithms.
\section{Related Works} The main contributions of this paper include a new dataset, a new learning framework, and a new lightweight model, all for portrait segmentation in the teleconferencing setting; we thus introduce and discuss the related works from these three perspectives. \paragraph{Datasets.} There are several popular portrait datasets, such as EG1800~\cite{shen2016automatic}, FVS~\cite{DBLP:journals/corr/abs-2104-09752}, Maadaa~\cite{maadaa} and AISeg~\cite{aiseg}. Compared to EG1800~\cite{shen2016automatic}, AISeg~\cite{aiseg}, and FVS~\cite{DBLP:journals/corr/abs-2104-09752}, which provided (self-)portrait images and segmentation labels of persons under various indoor/outdoor or even virtual backgrounds, our work offers massive fine-labeled frames of real-world videos for teleconferencing. Maadaa~\cite{maadaa} also provided images collected from video conferencing scenarios, but they were all screenshots from the video conferencing applications that incorporate irrelevant and noisy pixels, such as software interfaces. In addition, none of the existing datasets includes persons wearing face masks, which are unavoidable in post-COVID-19 teleconferencing. \paragraph{Learning Methods and Lightweight Models.} The existing learning algorithms for semantic segmentation mainly incorporate cross entropy loss, Lovasz loss~\cite{berman2018lovasz}, dice loss~\cite{milletari2016v}, and RMI loss~\cite{zhao2019region} for training. In addition, beyond these training methods, multi-branch networks have been proposed to improve lightweight models~\cite{yu2018bisenet,BMVC2019,poudel2018contextnet,mazzini2018guided} for the generic segmentation problem. Compared to these works, we propose a new SCL framework that incorporates a new loss, namely \emph{semantic connectivity-aware loss}, to improve the completeness of segmentation results for person instances, and introduce a new model design, namely \emph{ConnectNet}, to facilitate ultra-lightweight connectivity-aware portrait segmentation. Note that some face-related libraries, such as~\cite{wang2021face,zhao2018towards}, also include face detection modules that can improve the performance of portrait segmentation. Due to page limits, we do not discuss them further here. \section{The Proposed Dataset} In this section, we introduce the ways we collect and label images and videos for portrait segmentation in real-world teleconferencing settings. \subsection{Data Collection} In order to get closer to the real video conference data distribution, we collect the videos in 23 common conference scenes including meeting rooms, offices, public office areas, living rooms, classrooms, etc. In addition, the participants perform various actions, e.g. waving hands, getting up and sitting down, drinking water, using mobile phones, shaking, etc. We also collected a large number of pictures of people wearing masks. Finally, we obtain a large-scale dataset of 291 videos with 1280$\times$720 resolution. In order to reduce redundancy, we extract frames from the videos at a low frame rate of 2.5 FPS to obtain 14,117 HD images. The diversity of collected images is shown in figure~\ref{dataset}(c). \subsection{Data Labeling} We recruited several professional annotators to label the collected data. They provide high-quality labels for our dataset at both the pixel level and the video level. \subsubsection{Pixel-Level Labeling} In fact, the annotation of portrait segmentation usually involves two ambiguous cases: 1) hand-held items, and 2) distant passers-by or people with their backs turned.
How they are annotated depends on the practical applications, as well as the definition of foreground and background. In video conferencing, the purpose of portrait segmentation is to highlight participant-related parts rather than the surroundings. Hand-held items are highly related to the activities of participants, such as mobile phones, glasses, and cups. However, distant passers-by or people with their backs turned are not participants of the video conference and should be ignored. Therefore, all hand-held items are labelled together with the human body. Distant passers-by or people with their backs turned are not labelled, even though they are usually labelled in other applications of portrait segmentation. \subsubsection{Video-Level Labeling} Following the practice of VOC~\cite{everingham2010pascal} and PSS~\cite{zhang2021personalized}, we annotate our videos based on the objects appearing in the video. Each video clip has multi-class attributes, e.g. the scene id, the number of participants, the activities of participants, whether face masks are worn, and whether passers-by appear. The video-level annotation can be used for video description and multi-task learning, and also provides a good starting point for human activity analysis in video conferencing. \subsection{Video Synthesis for Teleconferencing} Besides the 14K fine-labeled images, we also collected pure-background images in 90 different video conferencing scenes. Then we use a simple video composition strategy to augment the dataset further. The high-quality portrait masks allow us to extract the portrait parts precisely, and many more labeled images are composed from the extracted portrait parts and pure-background images. Through data composition, we generate around one million images eventually. Due to the high-quality annotation, the edges in the composited data are smooth and look natural, as shown in figure~\ref{composition}. \begin{figure}[t!] \centerline{\includegraphics[width=0.48\textwidth]{figures/composite_mosaic.png}} \caption{Examples of composited videos for teleconferencing.} \label{composition} \end{figure} \section{SCL: Semantic Connectivity-aware Learning for Portrait Segmentation} In this section, we present the design of the Semantic Connectivity-aware Learning (SCL) framework (shown in figure \ref{framework}) for semantic segmentation. To improve the completeness of segmentation results, we define a new concept, namely \emph{semantic connectivity}, to represent the portrait segmentation results and ground truth. Specifically, in addition to using traditional semantic labels as supervision, SCL extracts the connected components from semantic labels and uses them as the supervision signal via a Semantic Connectivity (SC) loss. Note that the SCL framework can be combined with other deep neural architectures (e.g. CNNs and Transformer-based networks~\cite{zheng2021rethinking,liu2021swin,xie2021segformer,vit}) to boost the performance of portrait segmentation. \subsection{Semantic Connectivity between Components in Segmentation} \begin{figure} \centerline{\includegraphics[width=0.48\textwidth]{figures/concept.png}} \caption{Connected components calculation and matching. (a) The prediction and the ground truth, i.e. $P$ and $G$. (b) Connected components are generated for each through the CCL algorithm~\cite{grana2010optimized}. (c) Connected components are matched using the IoU value.} \label{concept} \end{figure} In this work, we use the connected components to represent the completeness of the portrait segmentation.
In topology, a connected component is a maximal connected subset of a topological space. In portrait segmentation, we take the region of a person instance as a connected component. Figure~\ref{concept} shows an example of connected component calculation and matching. We find the connected components of the prediction ($P$) and the ground truth ($G$), respectively. Connected component calculation is a fundamental operation in image processing, for which there are many methods, e.g. connected component labelling (CCL) and edge thinning. In our approach, we use a CCL algorithm to calculate the connected components, because of its robustness~\cite{grana2010optimized}. We then traverse all connected components of $G$ and $P$ to find all pairs that intersect with each other. In figure~\ref{concept}, there are three pairs, i.e. $[g_2, p_2]$, $[g_3, p_5]$, $[g_4, p_4]$, and three isolated components, i.e. $p_1, p_3, g_1$. Note that a connected component in $G$ could have intersections with multiple connected components in $P$, which is not indicated in the figure. Assuming $g_i$ is paired with $\{p_1, p_2, ..., p_k\}$, the connectivity of $g_i$ is denoted as $C_i$, which is calculated as follows. \begin{equation} \label{eq:connectivity} C_i(P) = \frac{1}{k} \sum_{j=1}^{k}\mathrm{IoU}(g_i, p_j)\in (0, 1] \end{equation} \begin{equation} \label{eq:iou} \mathrm{IoU}(g_i, p_j) = \frac{|g_i\cap p_j|}{|g_i\cup p_j|}\ \end{equation} In particular, when $g_i$ is only paired with one connected component in $P$, e.g. $p_j$, $C_i$ equals the IoU between $g_i$ and $p_j$. If $g_i$ is an isolated component, $C_i$ equals 0. Finally, we define the semantic connectivity (SC) of the entire image, given the graph of components in the ground truth $G$ and the graph in the prediction $P$, as follows. \begin{equation} \label{eq:sc} \mathrm{SC}(P,G) = \frac{1}{N} \sum_{i=1}^N C_i(P) \end{equation} where $N$ is the total number of both pairs and isolated components. Note that for all $P,G$ we have $\mathrm{SC}(P,G)\in [0, 1]$. \subsection{Learning with SC Loss} To enable the semantic connectivity-aware learning, the SCL framework uses a novel loss function based on the proposed semantic connectivity, which minimizes the inconsistency of connectivity between the prediction and the ground truth. In addition, when there is no intersection between the prediction and the ground truth, we use an area-based loss function to better optimize the model. The mathematical notation is the same as in the previous section; we denote the Semantic Connectivity-aware (SC) loss as $L_\mathrm{SC}$. If there is at least one pair between $P$ and $G$, $L_\mathrm{SC}$ is defined as follows. \begin{equation} L_\mathrm{SC}(P,G) = 1-\mathrm{SC}(P,G)\ \end{equation} where for all $P,G$ we have $L_\mathrm{SC}(P,G)\in [0, 1]$. Note that there is a special case in which no pair exists between $P$ and $G$, so the connectivity becomes 0. This could happen at the beginning of training, due to the random initialization of parameters. However, zero connectivity in SCL would lead to zero gradients, and the weights could not be updated according to the connectivity. For this special case, we design a non-trivial loss function to cold-start the process. Specifically, to ensure the continuity and differentiability of the loss function in the cold-start phase, we write the SC loss $L_\mathrm{SC}$ as follows.
\begin{equation} L_{SC}(P, G) = \frac{|P\cup G|}{|I|},\label{lsc2} \end{equation} where $I$ represents the image and $|\cdot|$ represents the area of a region (the total number of pixels in the region), and for $\forall P, G$ we have $L_{SC}(P, G) \in (0, 1]$. Finally, SCL incorporates the SC loss as a regularizer that complements the segmentation loss (denoted as $L_S$, e.g. the cross-entropy loss) in the form of $L=L_S + \lambda \cdot L_\mathrm{SC}$ to optimize the model. The hyper-parameter $\lambda$ is a weight that trades off the SC loss against the segmentation loss. \section{ConnectNet: an Ultra-lightweight Neural Network for Portrait Segmentation} \begin{figure} \centerline{\includegraphics[width=0.48\textwidth]{figures/connectnet.png}} \caption{ConnectNet: an Ultra-lightweight Model for Portrait Segmentation.} \label{connectnet} \end{figure} We propose an ultra-lightweight segmentation network to work with SCL, namely \emph{ConnectNet}, as shown in figure~\ref{connectnet}. ConnectNet adopts an encoder-decoder structure. The encoder follows an inverted bottleneck block~\cite{ma2018shufflenet} design with a channel-shuffle operation to extract features efficiently. To reduce the computational load while maintaining high resolution, ConnectNet compresses the number of stages and channels, where each stage is a stack of multiple inverted bottleneck blocks. Moreover, ConnectNet incorporates depth-wise separable convolution to improve the decoding efficiency in the decoder, where the depth-wise separable convolution decomposes an ordinary convolution into a depth-wise convolution and a point-wise convolution so as to further reduce the computational load. In an encoder-decoder network with bottleneck layers, the encoder lowers the resolution of the feature map and loses spatial details, yet spatial information is critical in segmentation tasks. Therefore, the proposed network connects the encoder and decoder across layers through a skip connection to integrate the low-level texture features, which is more conducive to generating fine masks. At the same time, the skip connection directly reuses the features extracted by the encoder without additional computation cost. \section{Experiments} \subsection{Experiment settings} All of our experiments are conducted on two 32GB Tesla V100 GPUs using PaddlePaddle\footnote{https://github.com/PaddlePaddle/Paddle}~\cite{paddlepaddle}. Code and pretrained models are available at PaddleSeg\footnote{https://github.com/PaddlePaddle/PaddleSeg}~\cite{liu2021paddleseg}. During training, we use polynomial learning rate decay with a power of 0.9, and the learning rate is 0.05 for HRNet-W18-small and 0.025 for the other networks. We use SGD as our optimizer with a weight decay of 0.0005. We apply data augmentation including scaling, cropping, flipping, and color distortion during training. We use the BBDT algorithm~\cite{grana2010optimized} for connected component labeling. To avoid similar images appearing in both the validation set and the test set, we divide the dataset at the scene level. The proposed dataset is randomly divided into a training set with 11 scenes and 9006 images, a validation set with 6 scenes and 2549 images, and a test set with 6 scenes and 2562 images. We train our models with a batch size of 128. For all experiments, we take mIoU and pixel accuracy as evaluation metrics.
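To make the connectivity computation behind the SC loss concrete, the following is a minimal NumPy/SciPy sketch that evaluates $\mathrm{SC}(P,G)$ of Eqs.~\eqref{eq:connectivity}--\eqref{eq:sc} from a pair of binary masks. It is only an illustration of the component matching and averaging steps, not the implementation released with PaddleSeg: the function and variable names are ours, and \texttt{scipy.ndimage.label} stands in here for the BBDT labeling algorithm used in our experiments.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def semantic_connectivity(pred, gt):
    # pred, gt: binary masks (H x W), 1 = portrait, 0 = background
    pr_lab, n_p = ndimage.label(pred)     # connected components of P
    gt_lab, n_g = ndimage.label(gt)       # connected components of G
    matched_p, c_values = set(), []
    for i in range(1, n_g + 1):
        g_i = gt_lab == i
        ids = np.unique(pr_lab[g_i])
        ids = ids[ids > 0]                # components of P intersecting g_i
        if ids.size == 0:                 # isolated ground-truth component
            c_values.append(0.0)
            continue
        ious = []
        for j in ids:
            p_j = pr_lab == j
            inter = np.logical_and(g_i, p_j).sum()
            union = np.logical_or(g_i, p_j).sum()
            ious.append(inter / union)    # IoU(g_i, p_j)
            matched_p.add(int(j))
        c_values.append(float(np.mean(ious)))   # connectivity C_i
    n_isolated_p = n_p - len(matched_p)   # isolated components of P
    n_total = len(c_values) + n_isolated_p
    if n_total == 0:                      # both masks are empty
        return 1.0
    return sum(c_values) / n_total
\end{verbatim}
During training, $1-\mathrm{SC}(P,G)$ (or the area-based term of Eq.~\eqref{lsc2} when no pair exists) is added to the segmentation loss with the weight $\lambda$, as described above.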
\subsection{Experiment Results} \subsubsection{Hyper-parameter} SCL optimizes the network using a weighted combination of the cross-entropy loss and the SC loss. Different combination coefficients may bring different effects. To show that the SC loss is insensitive to this parameter and robust, we conduct five experiments with different weight coefficients, i.e. 0.01, 0.05, 0.1, 0.5, and 1.0. As shown in Table~\ref{different_ratios}, the mIoU is improved under almost all coefficients when the SC loss is combined with the segmentation loss. We set $\lambda$ to 1.0 in the following experiments. \begin{table}[ht] \centering \resizebox{0.47\textwidth}{!}{ \begin{tabular}{l|cccccc} \toprule Weight coefficient & baseline & 0.01 & 0.05 & 0.1 & 0.5 & 1.0\\ \midrule mIoU & 93.0 & 94.2 & 93.9 & 94.5 & 92.6 & \textbf{94.6}\\ \bottomrule \end{tabular}} \caption{Robustness of the SC loss under different weight coefficients} \label{different_ratios} \end{table} \subsubsection{Ablation study on various models} We evaluate the effectiveness of our SC loss on light-weight networks including HRNet-W18-small~\cite{wang2020deep}, BiseNetV2~\cite{yu2021bisenet} and ConnectNet. As shown in Table~\ref{ablation study}, the SC loss is effective across these networks, improving the mIoU of HRNet-W18-small, BiseNetV2, and ConnectNet. By enhancing the connectivity of the predicted components, the models obtain better segmentation performance. \begin{table}[ht] \centering { \begin{tabular}{l|cc} \toprule Model & mIoU & Pixel Acc \\ \midrule HRNet-W18-small & 93.0 & 97.2 \\ HRNet-W18-small + SCL & \textbf{94.5} & 97.8\\ \midrule BiseNetV2 & 85.8 & 94.2 \\ BiseNetV2 + SCL &\textbf{ 87.5} & 94.8 \\ \midrule ConnectNet & 94.1 & 97.6 \\ ConnectNet + SCL & \textbf{94.6} & 97.6\\ \bottomrule \end{tabular}} \caption{Ablation study on light-weight networks} \label{ablation study} \end{table} \subsubsection{Comparison with other SOTA losses} In this section, we demonstrate the superiority of the SC loss over other state-of-the-art losses, including the Lovasz loss~\cite{berman2018lovasz} and the RMI loss~\cite{zhao2019region}. These losses focus on different aspects of semantic segmentation, such as class imbalance and structural information. We conduct these experiments on HRNet-W18-small with a learning rate of 0.5. For a fair comparison, we set the coefficient of the compound losses to 0.01 in all of these experiments. As shown in Table \ref{losses}, the proposed SC loss outperforms the other losses, achieving the best mIoU and pixel accuracy. The evaluation results show that the SC loss offers state-of-the-art performance for portrait segmentation. \begin{table}[ht] \centering { \begin{tabular}{l|ccc} \toprule Loss & mIoU & Pixel Acc \\ \midrule CE Loss (baseline) & 93.0 & 97.2\\ CE Loss + Lovasz Loss & 93.0 & 97.2 \\ CE Loss + RMI Loss & 94.3 & 97.7\\ CE Loss + SC Loss & \textbf{94.5} & \textbf{97.8}\\ \bottomrule \end{tabular}} \caption{Comparison with SOTA losses} \label{losses} \end{table} \begin{figure*}[t] \centerline{\includegraphics[width=0.9\textwidth]{figures/res_mosaic.png}} \caption{Semantic segmentation results of different light-weight networks} \label{result} \end{figure*} \subsection{Effectiveness of ConnectNet} To validate the performance of our proposed model, we compare it with other light-weight state-of-the-art models, including BiseNetV2~\cite{yu2021bisenet}, Fast SCNN~\cite{BMVC2019}, and HRNet~\cite{wang2020deep}.
As shown in Table \ref{benchmark}, our model achieves the best accuracy-efficiency trade-off among the SOTA light-weight models. Compared with HRNet-W18-small, our model achieves better performance and is 41\% faster. Compared with Fast SCNN and BiseNetV2, our model is 1.5-3 ms slower, but its mIoU is 1.2\% and 8.5\% higher than that of BiseNetV2 and Fast SCNN, respectively. The experimental results show that our model outperforms BiseNetV2 and Fast SCNN to a great extent, while having less than 10\% of their parameters. This is crucial for mobile and web applications, where the storage requirements are rather strict. \begin{table}[h!] \centering \resizebox{0.47\textwidth}{!}{ \begin{tabular}{l|ccccc} \toprule Model & mIoU & Pixel Acc & Infer Time & Params\\ \midrule BiseNetV2 & 85.8 & 94.2 & 10.0 & 2.32 \\ Fast SCNN & 85.7 & 93.9 & \textbf{8.6} & 1.44 \\ HRNet-W18-small & 93.0 & 97.2 & 19.76 & 3.95 \\ ConnectNet & \textbf{94.2} & \textbf{97.6} & 11.5 & \textbf{0.13}\\ \bottomrule \end{tabular}} \caption{Benchmark on the state-of-the-art lightweight models. The unit of inference time is ms and the unit of Params is M.} \label{benchmark} \end{table} \subsubsection{Qualitative Comparison} To qualitatively show the performance of our network, we visualize the predictions of the different networks on test images. As shown in figure \ref{result}, our model has better completeness than the other models and is less prone to making scattered predictions. \section{Conclusion} To facilitate progress in portrait segmentation in the video conferencing context, we introduce an open-source solution named PP-HumanSeg. In this work, we first construct a large-scale video portrait dataset that contains 291 videos from 23 conference scenes with 14K fine-labeled frames. To improve the completeness of segmentation results, we propose a Semantic Connectivity-aware Learning (SCL) framework incorporating a novel Semantic Connectivity (SC) loss. The SC loss models the topology of portrait segmentation as a graph of connected components and measures, as the loss, the inconsistency between the graphs (i.e., connectivities) extracted from the ground truth labels and the prediction results. Furthermore, we propose an ultra-lightweight model, namely ConnectNet, trained with SCL for practical portrait segmentation. The proposed solution achieves the best trade-off between IoU and inference time on the dataset. Extensive evaluations on our dataset demonstrate the superiority of SCL and ConnectNet. The comparisons with other algorithms also show the advantage of the proposed dataset from the perspectives of coverage and comprehensiveness. \section*{Acknowledgement} This work was supported by the National Key Research and Development Project of China (2020AAA0103500). {\small \bibliographystyle{ieee_fullname}
\section{Introduction} \label{sec:introduction} \IEEEPARstart{M}{edical} images are a prerequisite for many clinical diagnostic and therapeutic tasks. Common medical imaging modalities include X-ray radiography, computed tomography (CT), magnetic resonance imaging (MRI), nuclear imaging, and ultrasound imaging. Among them, CT has the advantages of high resolution, geometric accuracy, high speed, and relatively low cost, and it is often the first imaging study performed before an intervention. However, it cannot be ignored that X-rays produce ionizing radiation during a CT scan. When the X-ray radiation dose is absorbed by the human body, it may potentially induce abnormal metabolism or even genetic damage and cancer \cite{Ref1,Ref11}. The use of low-dose CT (LDCT) in practice can effectively reduce the radiation risk for patients, but the resultant image noise and artifacts could compromise diagnosis \cite{Ref2}. Since the concept of LDCT was proposed \cite{Ref3}, a variety of methods have been developed to suppress image noise and artifacts. These methods can be divided into three categories: projection domain filtering \cite{Ref4,Ref6,Ref7,Ref8}, iterative reconstruction \cite{Ref9,Ref10,12,Ref13,13} and image post-processing. The projection domain filtering methods process projections and then use a reconstruction method to produce a low-noise image. These methods include nonlinear filtering \cite{Ref4,Ref6} and statistical iterative methods \cite{Ref7,Ref8}. The main advantage of projection domain filtering is that it is easy to integrate the filter into existing CT systems. The iterative reconstruction methods use a likelihood function to associate the projections with a reconstructed image. The key is to obtain suitable prior information, such as total variation (TV) \cite{Ref9}, nonlocal means \cite{Ref10}, dictionary learning \cite{12}, partial differentiation \cite{Ref13}, and low-rank matrix decomposition \cite{13}. Such prior information can be incorporated into the objective function, and then the image is reconstructed in an iterative manner. These methods rely on projections \cite{Ref17}, which are generally inaccessible to most CT researchers unless they closely collaborate with a CT vendor. The image post-processing methods work on CT images only, without using projection data. Compared with the first two types of methods, the image post-processing methods can be applied after images are reconstructed, and reconstructed images are much more accessible than projection data. Image post-processing has been a hot topic in the field of LDCT image denoising. Along this direction, many excellent methods have been developed, such as block-matching 3D filtering (BM3D) \cite{34} and dictionary learning-based filtering \cite{35}\cite{36}. Over recent years, deep learning has emerged as a new approach for image post-processing \cite{Ref25}. Many scholars have developed deep learning-based methods for estimating normal-dose CT (NDCT) images from LDCT images. These methods use different network structures, such as 2D convolutional neural networks (CNNs) \cite{Ref18}, 3D CNNs \cite{Ref19}, cascaded CNNs \cite{Ref20}, residual encoder-decoder CNNs \cite{Ref21}, and generative adversarial networks (GANs) \cite{Ref22}. They also use different loss functions to measure the similarity between outputs and labels, such as the mean square error, perceptual loss \cite{Ref23}, generative adversarial loss \cite{Ref24}, and Wasserstein distance.
Most of these deep learning methods use the supervised learning mode \cite{Ref18,Ref21,Ref41,38}, which requires paired LDCT and NDCT images with corresponding pixels representing the same position in the same patient. However, paired real datasets require multiple scans, which results not only in an increased amount of manpower, material and financial resources but also in an additional radiation dose. Under the principle of ALARA (as low as reasonably achievable) \cite{Ref11}, human studies with an excessive radiation dose are strongly discouraged. Furthermore, multiple scans of the same patient may be subject to motion artifacts and substantial registration errors. Hence, it is impractical to obtain real paired datasets for supervised learning. In the AAPM Low-dose CT Challenge, low-dose CT images were simulated to compare different denoising algorithms. Although the detector-specific noise level can be realistically synthesized, scattering and other factors cannot be perfectly incorporated. In summary, how to construct a high-quality training dataset for low-dose CT image denoising remains an open issue. Many hospitals have accumulated large amounts of unpaired patient CT scans at different dose levels, but in the supervised learning mode these images cannot be fully utilized due to their unpaired nature. As a result, unsupervised learning methods have attracted major attention. GANs are among the most important unsupervised learning methods \cite{Ref22}. A GAN takes one image set as the learning target of another image set, greatly relaxing the requirement for image-level pairing. Park \emph{et al.} \cite{Ref28} used a GAN and unpaired data to learn a generator that maps LDCT images to NDCT-like images. CycleGAN \cite{Ref32}, a variant of the generic GAN, can realize image-to-image conversion between the input image domain and the target image domain with a cycle consistency constraint. Although it performs well for image conversion, some details may be distorted. As a further extension of CycleGAN, the artifact disentanglement network (ADN) is another well-known unsupervised learning method that maps unpaired low- and high-quality images into two latent spaces and then disentangles the contents and artifacts in the latent spaces, which supports image generation in different forms (artifact reduction, artifact transfer, self-reconstruction, etc.) \cite{Ref42}. For LDCT denoising, we need to not only make the image outputs look overall similar to NDCT images but also keep all the details as faithful as possible for diagnosis. However, the existing GAN-based methods are not satisfactory at preserving details. Although the current GAN-based learning methods distinguish synthetic images from real images, some important information is still lost. In fact, even if two images come from two different patients, they still have some local similarity that can be leveraged to suppress image noise, based on the same idea behind non-local means denoising methods. Fig. \ref{fig1} shows that the structural similarity between the two sub-images boxed in red, and likewise between the two boxed in blue, is clearly higher than that between a red region and a blue region. Structural similarity is important prior information; for example, we applied it to achieve superresolution of PET images \cite{Ref36}, in which we proposed a new weakly supervised learning mode, referred to as quasi-supervised learning, for recovering high-resolution PET images from low-resolution counterparts by leveraging the similarity between unpaired data.
\begin{figure} \centerline{\includegraphics[width=0.48\textwidth]{1.pdf}} \caption{Local similarity examples in two CT images from two patients under different imaging conditions respectively, where the boxes of the same color are anatomically consistent and structurally similar.} \label{fig1} \end{figure} This paper introduces the strategy of quasi-supervised learning into an ADN architecture for LDCT image denoising. The resulting network is called QS-ADN, which takes LDCT images as the input and unpaired NDCT images as the learning target. QS-ADN fully utilizes local similarity information of unpaired data to train the network, which is different from either supervised or unsupervised learning, and can be viewed as a new weakly supervised learning mode. It is not only applicable to unpaired data but also compatible with partially or fully paired data. In the next section, we present some background information. In the third section, we describe the proposed QS-ADN for LDCT denoising. In the fourth section, we report experimental results, which show the feasibility and effectiveness of the proposed method. In the last section, we discuss relevant issues and conclude the paper. \section{Background: Quasi-supervised Learning and ADN} This work introduces quasi-supervised learning into an ADN architecture for LDCT image denoising. In the following, let us review each of them. \subsection{Quasi-supervised learning} Our developed quasi-supervised learning method utilizes the hidden similarity between unpaired data as prior information for constructing a deep learning-based denoising network. The method consists of the following main steps: \begin{enumerate} \item A large number of unpaired LDCT and NDCT images are collected. \item They are divided into patches of meaningful structures. \item The best-matched pairs are identified from the LDCT and NDCT patches by their similarity. \item A network is trained on the matched patch pairs and their corresponding matching degrees. \end{enumerate} We take a general network as an example to explain the proposed quasi-supervised learning mode. Let $x$ and $y$ represent an LDCT and NDCT patch, respectively, where $x$ and $y$ can be paired or unpaired. Let $f(x,\theta)$ be a denoising operator implemented by a network with a vector of trainable parameters $\theta$, and $w(x, y)\in [0,~1]$ be the weight closely associated with the matching degree. Then, the quasi-supervised denoising objective can be expressed as \begin{equation} \label{eq:qsl} \min \limits_\theta{\mathbb{E}_{(x,y)}}\left[ {w(x,y)\left\| {f(x,\theta ) -} \right.\left. y \right\|_2^2} \right] \end{equation} Note that quasi-supervised learning uses $w(x, y)$ to adjust the matching degrees of the patches. The larger the value of $w(x,y)$ is, the more similar the paired patches are. It can be viewed as a new weakly supervised learning mode. When $(x, y)$ are truly paired data, namely, $w(x, y)=1$, quasi-supervised learning becomes supervised learning. Therefore, it is compatible with supervised, unsupervised and semi-supervised learning strategies in the conventional sense. \subsection{ADN} As proposed by \cite{Ref42}, let $I^a$ be the domain of all artifact-affected CT images and $I$ be the domain of all artifact-free CT images. It is assumed that no paired dataset is available. We define a content (artifact-free) latent space $C$ and an artifact latent space $A$. Notably, the latent space $C$ is different from the observed space $I$. 
The ADN architecture contains an artifact-free image encoder $E_I: I \rightarrow C$ and a decoder $G_I: C \rightarrow I$, as well as an artifact-affected image encoder $E_{I^a} = \{E_{I^a}^c : I^a \rightarrow C;\; E_{I^a}^a : I^a \rightarrow A\}$ and a decoder $G_{I^a} : C \times A \rightarrow I^a$. The artifact-affected image encoder includes two sub-encoders, which are introduced below. The encoders map an image from the image domain to the latent space, and the decoders map a latent code back to the image domain. Specifically, given two unpaired images $x^a \in I^a$ and $y \in I$, $E_I$ and $E_{I^a}^c$ map the content components of $y$ and $x^a$ respectively to the content latent space $C$, and $E_{I^a}^a$ maps the artifact component of $x^a$ to the artifact latent space $A$. They can be formulated as follows: \begin{eqnarray} E_{I}(y)=c_y\in C,~ E_{I^a}^c(x^a)=c_x \in C,~ E_{I^a}^a(x^a)=a \in A \end{eqnarray} Then, the decoder $G_{I^a}$ takes a content code and an artifact code as its input and outputs an artifact-affected image. It is expected that decoding $c_x$ and $a$ should reconstruct $x^a$, and decoding $c_y$ and $a$ should add artifacts to $y$: \begin{equation} G_{I^a}(c_x,a)=\hat{x}^{a}\in I^a,~G_{I^a}(c_y,a)=\hat{y}^{a}\in I^a \end{equation} Similarly, the decoder $G_I$ takes a content code as its input and outputs an artifact-free image. Decoding $c_x$ should remove the artifacts from $x^a$, and decoding $c_y$ should reconstruct $y$: \begin{equation} G_I(c_x)=\hat{x}\in I,~ G_{I}(c_y)=\hat{y}\in I \end{equation} Note that $\hat{y}^{a}$ can be regarded as a synthesized artifact-affected image whose artifacts come from $x^a$ and whose content comes from $y$. Thus, by reapplying $E_{I^a}^c$ and $G_I$, $y$ should be recovered: \begin{equation} G_{I}(E_{I^a}^c(\hat{y}^{a}))=\widetilde{y}\in I \end{equation} Based on these encoders and decoders, ADN uses four loss functions to optimize the output through artifact disentanglement, namely, an adversarial loss $L_{adv}$, an artifact consistency loss $L_{art}$, a reconstruction loss $L_{rec}$ and a self-reduction loss $L_{self}$. Then, the overall objective function is formulated as the weighted sum: \begin{equation} L = L_{adv}+\lambda_{rec}L_{rec}+\lambda_{art}L_{art}+\lambda_{self}L_{self} \end{equation} where the $\lambda$s are the hyperparameters controlling the relative importance of each loss. The loss functions are explained as follows. \subsubsection{Adversarial Loss} By manipulating the artifact code in the latent space, ADN outputs $\hat{x}$ and $\hat{y}^a$, where the former removes the artifacts from $x^a$ and the latter adds artifacts to $y$. ADN adopts the strategy of adversarial learning by introducing two discriminators $D_{I^a}$ and $D_{I}$ to regularize the plausibility of $\hat{x}$ and $\hat{y}^a$ so that ADN can be trained without paired images. The adversarial loss can be written as \begin{eqnarray} &&L_{adv} = L_{adv}^{I}+L_{adv}^{I^a}\\ &&\mbox{where}\nonumber\\ &&L_{adv}^{I} = {\mathbb{E}_{I}}[\log {D_I}(y)]+{\mathbb{E}_{I^a}}[\log (1-{D_{I}}(\hat{x}))]\nonumber\\ &&L_{adv}^{I^a}= {\mathbb{E}_{I^a}}[\log {D_{I^a}}(x^a)]+{\mathbb{E}_{I,{I^a}}}[\log (1-{D_{I^{a}}}(\hat{y}^{a}))]\nonumber \end{eqnarray} \subsubsection{Reconstruction Loss} In a perfect artifact disentanglement process, encoding and decoding should neither lose information nor introduce artifacts. For artifact reduction, after encoding and decoding by $E_{I^a}^c$ and $G_I$, the content information should be obtained.
For artifact synthesis, using $E_{I^a}^a$, $E_I$ and $G_{I^a}$, an artifact-affected image should be generated. ADN utilizes two forms of reconstruction to encourage the encoders and decoders to preserve information. Specifically, it uses \{$E_{I^a}$, $G_{I^a}$\} and \{$E_I$, $G_I$\} as autoencoders as follows: \begin{equation} L_{rec} = \mathbb{E}_{I,I^a}\big[ \| \hat{x}^{a}-x^{a} \|_{1}+\| \hat{y}-y \|_{1} \big] \end{equation} where the $L_1$ loss is used, instead of the $L_2$ loss, to encourage sharper outputs. \subsubsection{Artifact Consistency Loss} Since $\hat{y}^a$ is obtained by contaminating $y$ with the artifact $a$ from $x^a$, and $\hat{x}$ is a clean image disentangled from $x^a$, the images $x^a$ and $\hat{y}^a$ contain the same artifact. In other words, we can obtain the same artifact from either $x^a-\hat{x}$ or $\hat{y}^a-y$. Thus, the artifact consistency loss can be introduced as follows: \begin{equation} L_{art} = \mathbb{E}_{I,I^a}\big[ \| (x^{a}-\hat{x})-(\hat{y}^{a}-y) \|_{1} \big] \end{equation} \subsubsection{Self-reduction Loss} If we add artifacts to $y$, creating $\hat{y}^a$, and then remove the artifacts from $\hat{y}^a$, obtaining $\widetilde{y}$, then $y$ and $\widetilde{y}$ should be close to each other. Based on this consideration, the following self-reduction loss is defined: \begin{equation} L_{self} = \mathbb{E}_{I,I^a}\big[ \| \widetilde{y}-y \|_{1} \big] \end{equation} \section{Proposed QS-ADN} Here we use ADN as a framework and modify it for quasi-supervised learning, which is referred to as QS-ADN. The proposed quasi-supervised learning model for LDCT image denoising includes two main parts: patch matching and network construction. We describe them in the following subsections. \subsection{Patch Matching} We use unpaired LDCT and NDCT images. Since image patches carry local information, we utilize them to determine the local similarity between patients. The goal is to find the pairs of best-matched patches and the corresponding matching degrees as prior information to train our LDCT image denoising network. Because every patch from an LDCT image must be compared with NDCT image patches to find the best matching pairs, the computational cost is high. To address this problem, we design an efficient method that still achieves satisfactory performance. We first match slices from the LDCT and NDCT scans and then match patches from the matched slices. The workflow is described as follows: \begin{enumerate} \item For a given slice in an LDCT image set, the best-matched slice in an NDCT image set is identified using a similarity measure. \item For a given patch in an LDCT slice, the best-matched patch is identified in the corresponding matched NDCT slice using the similarity measure. \item The corresponding LDCT and NDCT patch pairs are obtained, and the similarity degrees are computed. \end{enumerate} In the matching process, we can use the normalized mutual information (NMI), the Pearson correlation coefficient, the radial basis function (RBF) and other functions to measure the similarity between two image patches. The standard mutual information is defined as follows: \begin{equation} I(X,Y) = \sum_{x} \sum_{y} p(x,y)\log\frac{p(x,y)}{p(x)p(y)} \end{equation} where $X$ and $Y$ form an image pair, $p(x)$ and $p(y)$ denote the distributions of $X$ and $Y$ respectively, and $p(x,y)$ denotes the joint distribution of $X$ and $Y$.
Then, NMI is formulated as \begin{equation} \mathrm{NMI}(X,Y) = \frac{2I(X,Y)}{H(X) + H(Y)} \end{equation} where $H(X) = - \sum_i p(x_i)\log p(x_i)$ is the entropy of $X$. The Pearson correlation coefficient is expressed as \begin{equation} \rho (x,y) = \frac{\mathrm{cov}(x,y)}{\sigma_x \sigma_y} \end{equation} where $\mathrm{cov}(x,y)$ is the covariance of $x$ and $y$, and $\sigma_x$ and $\sigma_y$ are the standard deviations of $x$ and $y$ respectively. The RBF is defined as follows: \begin{equation} \mathrm{RBF}(x,y) = e^{ - \frac{\| x - y \|_2^2}{2\sigma^2}} \end{equation} where $\sigma>0$ is a hyperparameter. The ultimate goal of this process is to find the pairs of most similar patches. The slice matching step sacrifices some precision to improve efficiency; when the computational resources are sufficient, this step can be removed. Additionally, a patch under a given imaging condition could be matched to one or more patches under different imaging conditions, which can help improve the denoising ability of the neural network. \subsection{Network Construction} Image synthesis is an important process for ADN, which uses any unpaired artifact-free and artifact-affected images ($y$ and $x^a$) to generate a new image. Certainly, if $x^a$ and $y$ are paired, the performance of ADN will become better than that of the generic ADN with unpaired images; however, much extra work is required to acquire paired data, as discussed above. Recall that in the patch matching step we obtain the pairs of best-matched patches from unpaired data and record their matching degrees. These matching degrees are used here to improve ADN, which leads to our proposed QS-ADN. Specifically, we use the matching degree information $w(x^a,y)$ to weight the corresponding losses. By observation, we identify that three losses ($L_{rec}$, $L_{art}$ and $L_{self}$) are simultaneously related to both $x^a$ and $y$ and should therefore be weighted. $L_{adv}$ indicates the difference between two sets sampled from $I$ and $I^a$, and should not be weighted. Therefore, the loss function of QS-ADN can be expressed as \begin{equation} L = L_{adv}+w(I,I^a)\otimes(\lambda_{rec}L_{rec}+\lambda_{art}L_{art}+\lambda_{self}L_{self})\label{eq:total_loss} \end{equation} where $w(I,I^a)$ denotes the weights corresponding to the similarity of two image patches from $I$ and $I^a$ respectively, and the operator $\otimes$ assigns these weights to the corresponding components of the loss function, as interpreted in Eq. \eqref{eq:qsl}. All the involved encoders, decoders and discriminators have the same structures as those used in the classic ADN. Traditionally, the patch pairs used in deep learning have only two statuses: paired and unpaired. In contrast to this binary classification, quasi-supervised learning considers the degree of patch matching, which is more in line with a fuzzy reasoning process. As a special case, such pairs of patches form a supervised training dataset when only the pairs with a matching probability of one are included. Therefore, our method is compatible with supervised and semi-supervised learning modes.
The dataset contains paired LDCT and NDCT images, where quarter-dose and full-dose filtered backprojection images are provided with the corresponding projection datasets. We selected ten patients as the training dataset and two other patients as the testing dataset. In the training set, the LDCT images of five patients are taken as the input, and the NDCT images of the remaining five patients are taken as the learning target. The size of each image is 512$\times$512. Since the dataset contains paired LDCT and NDCT images, it enables both qualitative and quantitative evaluation. We use the peak signal-to-noise ratio (PSNR) \cite{51} to measure the denoising performance by calculating the overall difference between the denoised LDCT image and the original NDCT image. A higher value indicates better image quality. The PSNR is expressed as \begin{equation} \mathrm{PSNR} = 10\log_{10}\left( \frac{\mathrm{max}^2}{\frac{1}{n}\sum_{i = 1}^n \left( x_i - y_i \right)^2} \right) \end{equation} where $x_i$ and $y_i$ are the pixel values of the denoised LDCT image and the NDCT image respectively, $n$ is the number of pixels in an image, and $\mathrm{max}$ represents the maximum image pixel value. We also use the structural similarity (SSIM) \cite{52} to evaluate the denoised image quality. The value is in the range of $[-1,1]$, and the higher the value is, the closer the image features are to those of the NDCT image. \begin{equation} \mathrm{SSIM}(x,y)=\frac{\left(2\mu_x\mu_y+c_1\right)\left(2\sigma_{xy}+c_2\right)} {\left(\mu_x^2+\mu_y^2+c_1\right)\left(\sigma_x^2+\sigma_y^2+c_2\right)} \end{equation} where $\mu_x$ and $\mu_y$ are the means of $x$ and $y$, $\sigma_x^2$ and $\sigma_y^2$ are the variances of $x$ and $y$ respectively, $\sigma_{xy}$ is the covariance of $x$ and $y$, and $c_1$ and $c_2$ are two offset constants to stabilize the division operation. We use NMI to measure the similarity of two patches when selecting the matched pairs. Since the range of NMI is $[0,1]$, we directly used the NMI values as the weights ($w(x,y)$) for network construction. The network was implemented in Python 3.7 with PyTorch 1.2.0 on a computer equipped with an NVIDIA Quadro P2200 GPU. We used the Adam optimizer to optimize the loss function, and the learning rate was set to 0.0001 with $\beta_1=0.5$ and $\beta_2=0.99$. \subsection{NMI Distribution} We first compared the NMI distributions of the truly and manually paired data, which are shown in Fig. \ref{fig13}. Most of the truly paired data have NMI values of approximately 0.4, and the manually paired data have values of approximately 0.31. The former clearly have higher values than the latter. However, the two distributions overlap in the wide range of $[0.1,~0.35]$. Since low matching degrees are not significant for QS-ADN, we only kept the pairs with matching degrees above 0.1 for training. In fact, the NMI values of the truly paired data are not as high as expected, and some values are very low, approaching 0.01. Such low NMI values for the paired data provide an opportunity for our method to catch up under the guidance of the matching degree information. The matching degrees of the manually paired data are actually related to the scale of the data; with a larger dataset, the matching degrees usually become higher, approaching the case of truly paired data. In fact, according to the following experiments, such similarity degrees are sufficient for satisfactory denoising performance.
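To illustrate how these NMI-based matching degrees can be obtained in practice, the following is a minimal NumPy sketch that estimates the NMI between two patches from a joint gray-level histogram, selects the best-matched NDCT patch for a given LDCT patch, and returns the corresponding weight $w(x,y)$ entering Eq.~\eqref{eq:total_loss}. It is only an illustrative sketch under our own naming conventions and assumptions (e.g., 64 histogram bins, patches of equal size); the slice-matching step and the other engineering details of our implementation are omitted.
\begin{verbatim}
import numpy as np

def nmi(x, y, bins=64):
    # x, y: two patches of equal size; NMI estimated from a joint histogram
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                                   # avoid log(0)
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))  # entropy H(X)
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))  # entropy H(Y)
    return 2.0 * mi / (hx + hy + 1e-12)

def best_match(ld_patch, nd_patches, threshold=0.1):
    # nd_patches: candidate patches from the matched NDCT slice
    scores = np.array([nmi(ld_patch, p) for p in nd_patches])
    j = int(scores.argmax())
    if scores[j] < threshold:          # discard weakly matched pairs
        return None, 0.0
    return nd_patches[j], float(scores[j])   # matched patch and weight w(x, y)
\end{verbatim}
The returned weight is then applied to the reconstruction, artifact consistency and self-reduction terms of the objective, as in Eq.~\eqref{eq:total_loss}.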
\begin{figure*} \centering \subfigure[] {\includegraphics[width=0.45\textwidth]{13-1.pdf}} \subfigure[] {\includegraphics[width=0.45\textwidth]{13-2.pdf}} \caption{Distributions of NMI values for (a) truly paired data and (b) computationally paired data.} \label{fig13} \end{figure*} \subsection{Hyperparameter Selection} In consideration of the computer hardware used in this study, we set the training batch size to 2. As suggested by \cite{Ref42}, we set the hyperparameters as follows: $\lambda_{rec}=\lambda_{art}=\lambda_{self}=\lambda$. Then, several key factors of the proposed QS-ADN were studied for LDCT image denoising, including the patch size, number of training epochs, and weighting hyperparameters ($\lambda$). \subsubsection{Patch Size} Quasi-supervised learning is based on patch pairs. A patch is an important local information carrier, and its size may affect the capability of extracting contextual information. We performed experiments to study the effect of the patch size on the denoising results as shown in Fig. \ref{fig3}, where $\lambda=20$ was used. We gradually increased the patch size from 64 to 160 with different numbers of epochs. It was found that the size of the patch had no major effect on the denoising results for the proposed network. Based on this observation, given the computer hardware for reasonable training time we fixed the patch size to 64$\times$64 for the subsequent experiments. \begin{figure*} \centering \subfigure[] {\includegraphics[width=0.45\textwidth]{3-1.pdf}} \subfigure[] {\includegraphics[width=0.45\textwidth]{3-2.pdf}} \caption{Evaluative metrics versus patch size with different numbers of epochs: (a) PSNR and (b) SSIM. } \label{fig3} \end{figure*} \subsubsection{Number of Epochs} Choosing a suitable number of training epochs is crucial for ensuring network convergence and saving training time. To obtain a satisfactory denoising result, we usually iterated the network training process sufficiently many times. However, when the denoising effect is similar, additional iterations is not needed to cut computational cost. The loss curve on the training set is shown in Fig. \ref{fig4}. We still set $\lambda=20$. The loss curve rapidly decreased in the first 10 epochs, especially in the first two, and became quite flat after 60 epochs. Considering the denoising effect and computational cost, we set the number of epochs to 70 in the subsequent experiments. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{4.pdf} \caption{Loss curve of QS-ADN on the training set.} \label{fig4} \end{figure} \subsubsection{Value of $\lambda$} It is important to select a proper value for the hyperparameter $\lambda$ in the total loss function, which indicates the importance of the quasi-supervision dependent components. We selected the optimal value from the range of $ \{10,20,30,40,50,60\}$. As shown in Fig. \ref{fig4.3}, by the evaluation with both PSNR and SSIM, $\lambda = 20$ was identified as the optimal value. In general, the PSNR and SSIM curves showed similar trends. \begin{figure*} \centering \subfigure[] {\includegraphics[width=0.45\textwidth]{5-1.pdf}} \subfigure[] {\includegraphics[width=0.45\textwidth]{5-2.pdf}} \caption{Evaluative metrics versus $\lambda$: (a) PSNR and (b) SSIM.} \label{fig4.3} \end{figure*} \subsection{Noise Synthesis} In addition to noise reduction, ADN also supports noise synthesis. A good synthetic noisy image is very helpful for noise reduction. Fig. \ref{fig5} shows synthetic noisy images generated by ADN and QS-ADN. 
The synthetic noise introduced by ADN caused severe damage to the anatomical structures of the NDCT images. In contrast, QS-ADN produced more realistic noise and anatomical structures more similar to those of the actual noisy images. In particular, as marked in the figure, the anatomical structures were more seriously damaged by ADN than by QS-ADN. The main reason is that ADN used randomly paired images to train the network, while QS-ADN used optimally matched images together with their matching degrees to train the network more purposefully. \begin{figure*} \centering \subfigure[] {\includegraphics[width=0.49\textwidth]{5.pdf}} \subfigure[] {\includegraphics[width=0.49\textwidth]{6.pdf}} \caption{Noisy image synthesis by (a) ADN and (b) QS-ADN. Left: LDCT images; middle: NDCT images; right: synthetic LDCT images obtained from NDCT images with the noise originating from unpaired LDCT images.} \label{fig5} \end{figure*} \subsection{Denoising Results} We compared six state-of-the-art methods with our proposed method: BM3D, median filtering with a window size of $3\times3$, Gaussian low-pass filtering with $\sigma=55$ in the frequency domain, DualGAN \cite{DualGAN}, CycleGAN \cite{Ref32} and Noise2Sim \cite{n2sim}. BM3D is a popular image-based denoising method and has been applied to LDCT image denoising. Gaussian low-pass and median filtering are both basic image denoising methods. DualGAN and CycleGAN are unsupervised deep learning methods based on the GAN framework in the field of image-to-image translation. Noise2Sim is a self-learning method for image denoising that leverages the self-similarities of image patches and learns to map between the center pixels of similar patches for self-consistent image denoising; here we searched for similar patches in the two-dimensional mode. To evaluate the denoising performance, we selected a representative slice from the chest area, as shown in Fig. \ref{fig7}. Evidently, the denoising results of BM3D and Gaussian low-pass filtering are too smooth, and many details are lost. Median filtering did not produce a satisfactory result and retained considerable noise. DualGAN, CycleGAN and Noise2Sim removed considerable noise but also lost details. The DualGAN result is rather rough and substantially different from the real NDCT image. CycleGAN gave a better result than DualGAN, but its result is still seriously distorted compared with the NDCT image. Noise2Sim produced over-smoothed results. In contrast to these results, QS-ADN was able to retain most details while suppressing noise effectively. Its result is clearer than the other denoising results and appears very similar to the true NDCT image. An enlarged image of the marked region of interest (ROI) is displayed in Fig. \ref{fig8}, from which the details of the images can be observed more clearly. This comparison further verifies the conclusion drawn above that the proposed QS-ADN indeed provides the best reconstructed image. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{7.pdf} \caption{Denoising results: (a) LDCT, (b) NDCT, (c) BM3D, (d) median filtering, (e) Gaussian low-pass filtering, (f) DualGAN, (g) CycleGAN, (h) Noise2Sim, and (i) QS-ADN.} \label{fig7} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{8.pdf} \caption{Enlarged ROI marked by the red box in Fig.
\ref{fig7}: (a) LDCT, (b) NDCT, (c) BM3D, (d) median filtering, (e) Gaussian low-pass filtering, (f) DualGAN, (g) CycleGAN, (h) Noise2Sim, and (i) QS-ADN.} \label{fig8} \end{figure} For a systematic analysis, we quantitatively compared the denoising performance of these methods on the whole testing set in Tab. \ref{tab:1}. Among them, CycleGAN surprisingly obtained the lowest PSNR and SSIM values, and its quantitative performance is even worse than the qualitative impression given by Figs. \ref{fig7} and \ref{fig8}. DualGAN and Noise2Sim obtained similar quality metric values. The traditional filtering methods outperformed DualGAN and Noise2Sim. The proposed QS-ADN offered the best quality metric values in terms of both PSNR and SSIM, showing its ability to learn the intrinsic mapping from LDCT to NDCT images. Nevertheless, we would like to note that Noise2Sim can denoise any noisy image individually without using paired or unpaired datasets at all; it only requires the LDCT images themselves. \begin{table} \centering \caption{Quantitative evaluation of the competing denoising methods on the whole testing set, where the best results are in bold.} \label{tab:1} \begin{tabular}{lll} \hline\noalign{\smallskip} Method & PSNR & SSIM \\ \noalign{\smallskip}\hline\noalign{\smallskip} BM3D & 24.693 & 0.800\\ Median filtering & 24.861 & 0.784 \\ Gaussian low-pass filtering & 23.059 & 0.735 \\ DualGAN & 22.101 & 0.453 \\ CycleGAN & 18.022 & 0.309 \\ Noise2Sim & 22.887 & 0.748\\ QS-ADN & \textbf{27.680} & \textbf{0.804} \\ \noalign{\smallskip}\hline \end{tabular} \end{table} \subsection{Ablation Studies} The proposed QS-ADN consists of the following three key parts: \begin{equation} \mbox{QS-ADN}=\mbox{ADN} + \mbox{patch matching} + \mbox{weighting}\nonumber \end{equation} We conducted an ablation study to assess the effectiveness of each part. Fig. \ref{fig11} shows the denoised images obtained by ADN, ADN$+$patch matching and QS-ADN, and Fig. \ref{fig12} shows the enlarged ROI. By comparing the LDCT and denoised images, it can be seen that substantial noise contaminates the LDCT image; for example, the anatomical details in the marked ROI are almost invisible. ADN indeed provided a decent denoising result; however, the oversmoothing was substantial such that small tissue structures were lost. The combination of ADN and patch matching reduced the over-smoothing effect compared with ADN, providing a more detailed appearance thanks to patch matching. Remarkably, QS-ADN further improved the performance such that small tissue features can be seen clearly, while suppressing the image noise much better than the other two options. This demonstrates the importance of our proposed weighting scheme. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{11.pdf} \caption{Qualitative comparison: (a) LDCT, (b) NDCT, (c) ADN, (d) ADN+patch matching, and (e) QS-ADN.} \label{fig11} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{12.pdf} \caption{Enlarged ROI images marked by the red box in Fig. \ref{fig11}: (a) LDCT, (b) NDCT, (c) ADN, (d) ADN+patch matching, and (e) QS-ADN.} \label{fig12} \end{figure} As before, we further performed a statistical analysis on the whole testing set for a systematic comparison. Tab. \ref{tab:2} indicates that QS-ADN outperformed ADN$+$patch matching, and the latter outperformed the generic ADN, which is consistent with our previous observations.
\begin{table} \centering \caption{Quantitative analysis results of quasi-supervised learning with different combinations of the key modules, where the best results are in bold.} \label{tab:2} \begin{tabular}{lll} \hline\noalign{\smallskip} Method & PSNR & SSIM \\ \noalign{\smallskip}\hline\noalign{\smallskip} ADN & 26.639& 0.738 \\%a ADN+patch matching& 27.538 & 0.755\\%a QS-ADN& \bf{27.680} & \bf{0.804} \\%a \noalign{\smallskip}\hline \end{tabular} \end{table} \section{Discussions and Conclusion} Clearly, a larger dataset better reflects the diversity of tissue structure and covers more similar structures. Limited by our available dataset and computational resources, this paper only used a moderately-sized dataset. In the future, we will use a larger number of unpaired CT images to improve the denoising performance further in the quasi-supervised learning mode. In conclusion, we have generalized ADN into a quasi-supervised version, which takes advantage of massive unpaired datasets that contain similar local structures. Our proposed method avoids the difficulty of obtaining supervised/labeled CT data, increasing neither radiation dose nor imaging cost. This is a new and cost-effective approach for LDCT image denoising. \balance \bibliographystyle{IEEEtran}
\section{Statements} Notations: if $B$ is a Banach space, we denote the norm of an element $f$ of $B$ by $\norm{f}{B}$. In this paper, a function defined on a closed subset of a manifold is said to be $C^k$ or $C^\infty$ if it admits an extension to a neighborhood of this closed subset, which is $C^k$ or $C^\infty$ in the usual sense. \subsection{The setting} Let $X$ be a riemannian manifold of dimension $d$, and let $X_0$ be a compact subset of $X$. Let also $0\le d_s\leq d$ and $\alpha> 0$. We call $C^1$ hypersurface with boundary a codimension-one $C^1$ submanifold of $X$ with boundary. For a closed subset $K$ of $X_0$ we shall consider {\it integrable $C^{1+\alpha}$ distributions} of $d_s$-dimensional subspaces $E^s$ on $K$. By definition, this means that for each $x$ in a neighborhood of $K$, $E^s(x)$ is a $d_s$-dimensional vector subspace of the tangent space $\mathcal{T}_x X$, the map $x\mapsto E^s(x)$ is $C^{1+\alpha}$ and, for any $x\in K$, there exists a unique submanifold of dimension $d_s$ containing $x$, defined on a neighborhood of $x$, and everywhere tangent to $E^s$. We will denote this local submanifold by $W^s_{loc}(x)$, and by $W^s_\epsilon(x)$ we will mean the ball of size $\epsilon$ around $x$ in this submanifold. \begin{definition}[Piecewise hyperbolic maps with stable distribution] \label{def:PiecHyp} For $\alpha>0$, we say that a map $T: X_0 \to X_0$ is a piecewise $C^{1+\alpha}$ hyperbolic map with smooth stable distribution if \begin{itemize} \item There exists an integrable $C^{1+\alpha}$ distribution of $d_s$-dimensional subspaces $E^s$ on a neighborhood of $X_0$. \item There exists a finite number of disjoint open subsets $O_1,\dots,O_I$ of $X_0$, covering Lebesgue-almost all $X_0$, whose boundaries are unions of finitely many compact $C^1$ hypersurfaces with boundary. \item For $1\leq i\leq I$, there exists a $C^{1+\alpha}$ map $T_i$ defined on a neighborhood of $\overline{O_i}$, which is a diffeomorphism onto its image, such that $T$ coincides with $T_i$ on $O_i$. \item For any $x\in \overline{O_i}$, there exists $\lambda_s(x)<1$ such that, for any $v\in E^s(x)$, $DT_i(x)v\in E^s(T_i x) $ and $|DT_i(x) v| \leq \lambda_s(x) |v|$. \item There exists a family of cones $C^u(x)$, depending continuously on $x\in X_0$, with $C^u(x)+E^s(x)=\mathcal{T}_x X$, such that, for any $x\in \overline{O_i}$, $DT_i(x) C^u(x) \subset C^u(T_i x)$, and there exists $\lambda_u(x)>1$ such that $|DT_i (x) v|\geq \lambda_u(x) |v|$ for any $v\in C^u(x)$. \end{itemize} \end{definition} See Remark~\ref{remark7} and Subsection~\ref{exchangeus} regarding the replacement of $E^s$ by $E^u$ and $C^u$ by $C^s$ in the above definition. Note that we do not assume that $T$ is continuous or injective on $X_0$. When $d_s=0$, the map $T$ is piecewise expanding. When $d_u=0$, it is piecewise contracting (we shall see that our results are not very useful in this case). In the intermediate case, there are at the same time contracted and expanded directions. We will denote by $\lambda_{s,n}(x)<1$ and $\lambda_{u,n}(x)>1$ the weakest contraction and expansion constants of $T^n$ at $x$. \begin{rmk} \label{rmk:Eslisse} The requirement that $E^s$ is defined everywhere and $C^{1+\alpha}$ is extremely strong. It is possible to weaken it slightly, by requiring only that $E^s$ is $C^{1+\alpha}$ on each set $O_i$. Indeed, our proofs still work under this weaker assumption (one should just slightly modify the definition of the Banach space we use). 
It is also possible to apply directly our results to this more general setting, by working on a different manifold, as follows. Assume that $T$ is a piecewise hyperbolic map for which $E^s$ is $C^{1+\alpha}$ on each set $O_i$, but not globally. Start from the disjoint union of the sets $\overline{O_i}$, and glue them together at all the points $x\in \overline{O_i} \cap \overline{O_j}$ such that $E^s$ is $C^{1+\alpha}$ on a neighborhood of $x$. Then $T$ induces a piecewise hyperbolic map on this new manifold, for which the stable distribution is globally $C^{1+\alpha}$. Indeed, since $T$ is $C^{1+\alpha}$ on each set $O_i$, the set $T(O_i)$ intersects the boundaries of the sets $O_j$ only at places where $E^s$ is $C^{1+\alpha}$. Hence, the places in the original manifold where $\overline{O_i}$ and $\overline{O_j}$ are cut apart are not an obstruction to extending $T$ to the new manifold. The assumption on the $C^u$ can be similarly weakened. \end{rmk} In order to define our weak transversality condition on the boundaries of the sets $O_i$, we shall use the following notion. \begin{definition}[$L$-generic vector in $E^s$] Let $K\subset X_0$ be a compact hypersurface with boundary and let $L \in \mathbb{Z}_+$. For $x\in K\backslash \partial K$, we say that a vector $a\in E^s(x)$ is \emph{$L$-generic} with respect to $K$ if, for any $C^1$ vector field $v$ defined on a neighborhood of $x$, with $v(x)=a$ and $v(y)\in E^s(y)$ for any $y$, there exists a smaller neighborhood of $x$ in which the intersection of Lebesgue almost every integral line of $v$ with $K$ has at most $L$ points. \end{definition} \begin{definition}[Weak transversality condition for $E^s$] \label{def:WTC} Let $T:X_0 \to X_0$ be a piecewise hyperbolic map with smooth stable distribution. We say that $T$ satisfies the \emph{weak transversality condition} if there exists $L>0$ such that, for any $K \subset \bigcup_{i=1}^I \partial O_i$ which is hypersurface with boundary, there exists a larger hypersurface with boundary $K'$ (containing $K$ in its interior) such that, for any $x\in K'\backslash \partial K'$, the set of tangent vectors at $x$ that are $L$-generic with respect to $K'$ has full Lebesgue measure in $E^s(x)$.\footnote{We could replace ``full Lebesgue measure'' in this definition by ``generic in the sense of Baire'' (i.e., contains a countable intersection of dense open sets), all the following results would hold true as well, with the same proofs.} \end{definition} The small enlargement $K'$ of $K$ is simply a technical point in the definition, to avoid problems at the boundary of $K$. If the boundary of each $O_i$ is a finite union of smooth hypersurfaces $K_{i1},\dots, K_{ik_i}$, each of which is transversal to the stable direction (in the sense that $E^s(x)$ is never contained in $\mathcal{T}_x K_{ij}$), then $T$ satisfies the weak transversality condition. However, the converse does not hold. For instance, we have the following result: \begin{prop} Assume that $d_s=1$ (so that the stable manifolds are curves), and that $T$ is a piecewise hyperbolic map with smooth stable distribution. Then $T$ satisfies the weak transversality condition if there exists $\epsilon>0$ such that \begin{equation} \sup_{1\leq i \leq I} \norm{ \Card( W^s_\epsilon(x)\cap \partial O_i)}{L_\infty(\Leb)} < \infty. \end{equation} \end{prop} Hence, tangencies to the boundaries of the $O_i$'s are allowed, and even flat tangencies or pieces of the boundary coinciding with $W^s$. 
The only problematic situation is when a boundary oscillates around the stable manifold, cutting it into infinitely many small pieces. \smallskip To get a result on the physical measures of finitely differentiable maps $T$, it is necessary to add {\it some} assumption on the asymptotic dynamical complexity, already for piecewise expanding maps in dimension two or higher (see \cite{Sau}, \cite{BuMa}, \cite{Co}, \cite{Ts00} and \cite{Bu01}). We shall use the following way to quantify the complexity. Let $\mathbf{i}=(i_0,\dots,i_{n-1})\in \{1,\dots,I\}^n$. We define inductively sets $O_\mathbf{i}$ by $O_{(i_0)}=O_i$, and \begin{equation} O_{(i_0,\dots,i_{n-1})}=\{x\in O_{i_0} \;|\; T_{i_0}x\in O_{(i_1,\dots,i_{n-1})}\}. \end{equation} Let also $T_\mathbf{i}=T_{i_{n-1}}\circ \dots \circ T_{i_0}$, it is defined on a neighborhood of $O_\mathbf{i}$. We define the complexity at the beginning \begin{equation}\label{cpb} D^b_n=\max_{x\in X_0} \Card \{ \mathbf{i}=(i_0,\dots,i_{n-1}) \;|\; x \in \overline{O_{\mathbf{i}}} \}, \end{equation} and the complexity at the end \begin{equation}\label{cpe} D^e_n=\max_{x\in X_0} \Card \{ \mathbf{i}=(i_0,\dots,i_{n-1}) \;|\; x \in \overline{T^n(O_{\mathbf{i}})} \}. \end{equation} \subsection{The main spectral result} We shall use spaces $\mathcal{H}_p^{t,t_-}$ which were first introduced in a dynamical setting in \cite{baladi:Cinfty} (the local version of these spaces belongs to the Triebel-Lizorkin class, see \cite{VoPa}, \cite{Bagh}, \cite{Tr} for earlier mentions of these spaces in functional analysis). Section \ref{sec:local} is devoted to a precise study of these spaces, and the statements in the following definition are justified there. Let $\mathcal{F}$ denote the Fourier transform in $\mathbb{R}^d$. We will write a point $z\in \mathbb{R}^d$ as $z=(x,y)$ where $x=(z_1,\dots,z_{d_u})$ and $y=(z_{d_u+1},\dots, z_d)$. In the same way, an element $\zeta$ of the dual space of $\mathbb{R}^d$ will be written as $\zeta=(\xi,\eta)$. The subspaces $\{x\}\times \mathbb{R}^{d_s}$ of $\mathbb{R}^d$ will sometimes be referred to as the ``stable leaves'' in $\mathbb{R}^d$. We say that a diffeomorphisms sends stable leaves to stable leaves if its derivative has this property. \begin{definition}[Local spaces $H^{t,t_-}_p$] \label{def:space} For $1<p<\infty$, $t, t_- \in \mathbb{R}$, we define a space $H_p^{t,t_-}$ of distributions in $\mathbb{R}^d$ as the (tempered) distributions $u$ such that $$\mathcal{F}^{-1}( (1+|\xi|^2+|\eta|^2)^{t/2} (1+|\eta|^2)^{t_-/2} \mathcal{F} u)\in L_p, $$ with its canonical norm. \end{definition} We will simply write $H_p^t$ instead of $H_p^{t,0}$. If $t \ge 0$, $t+t_- \le 0$ and $t+|t_-|<\alpha<1$, we shall see that $H^{t,t_-}_p$ is invariant under $C^{1+\alpha}$ diffeomorphisms sending stable leaves to stable leaves (Remark~\ref{rmk:Invariance}). Hence, we can glue such spaces locally together in appropriate coordinate patches, to define a space $\mathcal{H}_p^{t,t_-}$ of distributions on the manifold: \begin{definition}[Spaces $\mathcal{H}_p^{t,t_-}$ of distributions on $X$] \label{def:spaceX} Let $t \ge 0$, $t+t_- \le 0$ and $t+|t_-|<\alpha<1$. Fix a finite number of $C^{1+\alpha}$ charts $\kappa_1,\dots,\kappa_J$ whose derivatives send $E^s$ to $\{0\}\times \mathbb{R}^{d_s}$, and whose domains of definition cover a compact neighborhood of $X_0$, and a partition of unity $\rho_1,\dots,\rho_J$, such that the support of $\rho_j$ is compactly contained in the domain of definition of $\kappa_j$, and $\sum \rho_j=1$ on $X_0$. 
The space $\mathcal{H}_p^{t,t_-}$ is then the space of distributions\footnote{\label{page:foot}On a manifold, the space of \emph{generalized functions} supported in $X_0$, i.e., elements in the dual of the space of smooth densities, and the space of \emph{generalized densities} supported in $X_0$, i.e., elements in the dual of the space of smooth functions, are isomorphic if $X_0$ is compact: taking $\Leb$ any smooth riemannian measure then $f\mapsto f\dLeb$ gives an isomorphism. ``Distributions supported in $X_0$" (not to be confused with the integrable distributions of subspaces in Definition~\ref{def:PiecHyp}) refers in this paper to generalized functions (this avoids jacobians in the change of variables).} $u$ supported on $X_0$ such that $(\rho_j u)\circ \kappa_j^{-1}$ belongs to $H_p^{t,t_-}$ for all $j$, endowed with the norm \begin{equation} \label{defnorm} \norm{u}{\mathcal{H}_p^{t,t_-}}=\sum \norm{ (\rho_j u) \circ \kappa_j^{-1}}{H_p^{t,t_-}}. \end{equation} \end{definition} Changing the charts and the partition of unity gives an equivalent norm on the same space of distributions by Lemma~\ref{Leib} and Remark~\ref{rmk:Invariance}. To fix ideas, we shall view the charts and partition of unity as fixed. \begin{rmk}\label{remark7} Note that \cite{baladi:Cinfty} considers a slightly different space, where the stable and unstable direction and the signs of $t$ and $t+t_-$ are exchanged. This choice is completely innocent, we also get the same results for the space of \cite{baladi:Cinfty} (for maps with smooth unstable distribution) in Theorem \ref{thm:smooth_unstable}. \end{rmk} Our main result follows (recall the notation \eqref{cpb}--\eqref{cpe}): \begin{thm}[Spectral theorem for smooth stable distributions] \label{thm:MainSpectralThm} Let $\alpha\in (0,1]$. Let $T$ be a piecewise $C^{1+\alpha} $hyperbolic map with smooth stable distribution, satisfying the weak transversality condition. Let $1<p<\infty$ and let $t,t_-$ be so that $1/p-1<t_-<0<t<1/p$, $t+t_-<0$ and $t+|t_-|<\alpha$. Let $g:X_0\to \mathbb{C}$ be a function such that the restriction of $g$ to any $O_i$ admits a $C^{\alpha}$ extension to $\overline{O_i}$. Define an operator $\mathcal{L}_g$ acting on bounded functions by $(\mathcal{L}_g u)(x)=\sum_{Ty=x} g(y)u(y)$. Then $\mathcal{L}_g$ acts continuously on $\mathcal{H}_p^{t,t_-}$. Moreover, its essential spectral radius is at most \begin{equation}\label{bdess} \lim_{n\to\infty} (D_n^b)^{1/(pn)} \cdot (D_n^e)^{(1/n)(1-1/p)} \cdot \norm{g^{(n)}|\det DT^n|^{1/p}\max(\lambda_{u,n}^{-t}, \lambda_{s,n}^{-(t+t_-)})}{L_\infty}^{1/n}, \end{equation} where $g^{(n)}=\prod_{j=0}^{n-1}g\circ T^j$. \end{thm} When we say that $\mathcal{L}_g$ acts continuously on $\mathcal{H}_p^{t,t_-}$, we should be more precise. We mean that, for any $u\in \mathcal{H}_p^{t,t_-}\cap L_\infty(\Leb)$, then $\mathcal{L}_g u$, which is defined as a bounded function, still belongs to $\mathcal{H}_p^{t,t_-}$ and satisfies $\norm{\mathcal{L}_g u}{\mathcal{H}_p^{t,t_-}}\leq C \norm{u}{\mathcal{H}_p^{t,t_-}}$. Since the set of bounded functions is dense in $\mathcal{H}_p^{t,t_-}$ (by Lemma~\ref{14}), the operator $\mathcal{L}_g$ can therefore be extended to a continuous operator on $\mathcal{H}_p^{t,t_-}$. Note that the limit in (\ref{bdess}) exists by submultiplicativity. Of course, we can bound $\lambda_{s,n}$ and $\lambda_{u,n}^{-1}$ by $\lambda^n$, where $\lambda<1$ is the weakest rate of contraction/expansion of $T$. 
In some cases, it will be important to use the more precise expression given above (see e.g.~Example \ref{ex5} below). The restriction $1/p-1<t_-<0<t<1/p$ is exactly designed so that the space $\mathcal{H}_p^{t,t_-}$ is stable under multiplication by characteristic functions of nice sets, see Lemma \ref{lem:multiplier}. While this feature will be used in an essential way in the proof, it also implies (see Remark~\ref{nodirac} in Appendix~\ref{sec:SRB}) that Dirac measures (or more generally measures supported on nice hypersurfaces) do not belong to the space $\mathcal{H}_p^{t,t_-}$. \subsection{Physical measures} The physical measures of $T$ are by definition the probability measures $\mu$ such that there exists a set $A$ of positive Lebesgue measure such that, for all $x\in A$, $1/n \sum_{k=0}^{n-1}\delta_{T^k x}$ converges weakly to $\mu$. The physical measures of $T$ are often studied through the transfer operator $\Lp_{1/|\det DT|}$. (Note that the dual of $\Lp_{1/|\det DT|}$ preserves Lebesgue measure.) Theorem \ref{thm:MainSpectralThm} becomes in this setting: \begin{cor} \label{cor:BoundSRB} Under the assumptions of Theorem \ref{thm:MainSpectralThm}, assume that \begin{equation} \label{eq:qlksdjfml} \lim_{n\to\infty} (D_n^b)^{1/(np)}\cdot (D_n^e)^{(1/n)(1-1/p)} \cdot \norm{\max(\lambda_{u,n}^{-t}, \lambda_{s,n}^{-(t+t_-)})|\det DT^n|^{1/p-1}}{L_\infty}^{1/n} <1. \end{equation} Then the essential spectral radius of $\Lp_{1/|\det DT|}$ acting on $\mathcal{H}_p^{t,t_-}$ is $<1$. \end{cor} Together with classical arguments, this implies the following: \begin{thm} \label{thm:ExistSRB} Under the assumptions of Theorem \ref{thm:MainSpectralThm}, if \eqref{eq:qlksdjfml} holds, then $T$ has a finite number of physical measures, which are invariant and ergodic, whose basins cover Lebesgue almost all $X_0$. Moreover, if $\mu$ is one of these measures, there exist an integer $k$ and a decomposition $\mu=\mu_1+\dots+\mu_k$ such that $T$ sends $\mu_j$ to $\mu_{j+1}$ for $j\in \mathbb{Z}/k\mathbb{Z}$, and the probability measures $k\mu_j$ are exponentially mixing for $T^k$ and H\"{o}lder test functions. \end{thm} The deduction of this theorem from Corollary \ref{cor:BoundSRB} is essentially folklore, but the proofs of similar results in the literature (e.g.~in \cite{BKL,DL}) rely on some properties of stable manifolds that are not established in our setting. We prove in Appendix ~ \ref{sec:SRB} a general theorem (Theorem~\ref{thm:SRBabstrait}) that guarantees the existence of finitely many physical measures whenever the transfer operator has a spectral gap on a space of distributions, and show (Lemma~\ref{deduce}) that this general theorem holds in our setting. The interest of this argument is that it also applies to non hyperbolic situations, such as (perturbations of the operators in) \cite{Tsujii}. The results in this subsection answer the question in \cite[Remark 1.1]{baladi:Cinfty}, in a much more general framework. \subsection{Hyperbolic maps with smooth unstable distribution} \label{exchangeus} Just like in Definition \ref{def:PiecHyp}, we can define piecewise $C^{1+\alpha}$ hyperbolic maps with smooth unstable distribution. Our results also apply to such maps (by the same techniques used to prove Theorem \ref{thm:MainSpectralThm}), but on the space of distributions $\tilde \mathcal{H}^{t_+,t}$ whose norm is given in charts by $\norm{\mathcal{F}^{-1} ((1+|\xi|^2)^{t_+/2} (1+|\xi|^2+|\eta|^2)^{t/2} \mathcal{F} u)}{L_p}$. 
More precisely: \begin{thm}[Spectral theorem for smooth unstable distributions] \label{thm:smooth_unstable} Let $\alpha\in (0,1]$. Let $T$ be a piecewise $C^{1+\alpha} $hyperbolic map with smooth unstable distribution, satisfying the weak transversality condition with $E^s$ replaced by $E^u$. Let $1<p<\infty$ and let $t_+$, $t$ be so that $1/p-1<t<0<t_+<1/p$, $t+t_+>0$ and $|t|+t_+<\alpha$. Let $g:X_0\to \mathbb{C}$ be a function such that the restriction of $g$ to any $O_i$ admits a $C^{\alpha}$ extension to $\overline{O_i}$. Define an operator $\mathcal{L}_g$ acting on bounded functions by $(\mathcal{L}_g u)(x)=\sum_{Ty=x} g(y)u(y)$. Then $\mathcal{L}_g$ acts continuously on $\tilde\mathcal{H}_p^{t_+,t}$. Moreover, its essential spectral radius is at most \begin{equation*} \lim_{n\to\infty} (D_n^b)^{1/(pn)} \cdot (D_n^e)^{(1/n)(1-1/p)} \cdot \norm{g^{(n)}|\det DT^n|^{1/p}\max(\lambda_{u,n}^{-(t+t_+)}, \lambda_{s,n}^{-t})}{L_\infty}^{1/n}. \end{equation*} In particular, if \begin{equation*} \lim_{n\to\infty} (D_n^b)^{1/(np)}\cdot (D_n^e)^{(1/n)(1-1/p)} \cdot \norm{\max(\lambda_{u,n}^{-(t+t_+)}, \lambda_{s,n}^{-t})|\det DT^n|^{1/p-1}}{L_\infty}^{1/n} <1, \end{equation*} then the spectral radius of $\Lp_{1/|\det DT|}$ acting on $\tilde\mathcal{H}_p^{t_+,t}$ is $<1$. This implies that $T$ has a finite number of ergodic physical measures whose basins cover Lebesgue almost all $X_0$. Moreover, if $\mu$ is one of these measures, there exist an integer $k$ and a decomposition $\mu=\mu_1+\dots+\mu_k$ such that $T$ sends $\mu_j$ to $\mu_{j+1}$ for $j\in \mathbb{Z}/k\mathbb{Z}$, and the probability measures $k\mu_j$ are exponentially mixing for $T^k$ and H\"{o}lder test functions. \end{thm} We will not give further details on the proof of this theorem, since it follows from the techniques used in the proof of Theorem \ref{thm:MainSpectralThm}. Finally, similar results hold for maps that have at the same time smooth stable and unstable distributions (and satisfy the weak transversality condition in both directions), as follows. Let $\tilde{\tilde{\mathcal{H}}}_p^{t_+,t_-}$ be the space of distributions whose norm is given in charts by $\norm{\mathcal{F}^{-1} ((1+|\xi|^2)^{t_+/2} (1+|\eta|^2)^{t_-/2} \mathcal{F} u)}{L_p}$. \begin{thm}[Spectral theorem when both distributions are smooth]\label{both} Let $T$ be a piecewise $C^{1+\alpha}$ hyperbolic map with smooth stable and unstable distribution, satisfying the weak transversality conditions for $E^s$ and $E^u$ for $\alpha\in (0,1]$. Let $1<p<\infty$ and let $t_+$, $t_-$ be so that $1/p-1<t_-<0<t_+<1/p$, and $|t_-|+t_+<\alpha$. Let $g:X_0\to \mathbb{C}$ be a function such that the restriction of $g$ to any $O_i$ admits a $C^{\alpha}$ extension to $\overline{O_i}$. Define an operator $\mathcal{L}_g$ acting on bounded functions by $(\mathcal{L}_g u)(x)=\sum_{Ty=x} g(y)u(y)$. Then $\mathcal{L}_g$ acts continuously on $\tilde{\tilde{\mathcal{H}}}_p^{t_+,t_-}$. Moreover, its essential spectral radius is at most \begin{equation}\label{thebest} \lim_{n\to\infty} (D_n^b)^{1/(pn)} \cdot (D_n^e)^{(1/n)(1-1/p)} \cdot \norm{g^{(n)}|\det DT^n|^{1/p}\max(\lambda_{u,n}^{-t_+}, \lambda_{s,n}^{-t_-})}{L_\infty}^{1/n}. \end{equation} \end{thm} The results on physical measures follow analogously. It should be noted that the results of Theorem~ \ref{both} are stronger than Theorems~ \ref{thm:MainSpectralThm} and~ \ref{thm:smooth_unstable}, since the exponents $t_+$ and $t_-$ appear independently in the estimate \eqref{thebest}. 
Once again, this theorem follows from the techniques we will use to prove Theorem \ref{thm:MainSpectralThm}. \section{Examples} \label{sec:examples} Let us look at some applications of our results to $\Lp_{1/|\det DT|}$. \subsection{General examples} \begin{ex} \label{ex2} On $[-1,1]\times \{0,1\}$, let $T(x,j)=(x/2,j)$ if $x\not=0$, and $T(0,j)=(0,1-j)$. This fits in our framework. Since the complexities $D^b_n$ and $D^e_n$ are always equal to $2$, Theorem~ \ref{thm:MainSpectralThm} gives the following bound for the essential spectral radius of $\Lp_{1/|\det DT|}$ on the classical Sobolev space $\mathcal{H}_p^{t_-}$: \begin{equation} \lim_{n \to \infty} \norm{\lambda_{s,n}^{-t_-}|\det DT^n|^{1/p-1}}{L_\infty}^{1/n} =2^{t_-+1-1/p}. \end{equation} Since $t_-<0$ is restricted by $t_->1/p-1$, this bound is $>1$, hence useless. This is not surprising since the physical measure, the Dirac mass at $0$, does not belong to $\mathcal{H}_p^{t_-}$ if $1/p-1<t_-<0$ (see Remark \ref{nodirac}). This was to be expected since the conclusion of Theorem~\ref{thm:ExistSRB} is false: the map $T$ has two physical measures, the Dirac masses at $(0,0)$ and $(0,1)$, but these measures are not invariant! It is nevertheless interesting to see where precisely our arguments fail. Let $\tilde T(x,j)=(x/2,j)$, then the transfer operators associated to $T$ and $\tilde T$ acting on distributions coincide on $C^\infty$ functions (since the difference at $0$ is not seen by the integration against smooth functions). Since $\tilde T$ is continuous, there is no truncation term in its transfer operator, hence the results of Theorem \ref{thm:MainSpectralThm} hold for the full range $t_-<0$, without the restriction $t_->1/p-1$ (with the same proof). In particular, for $t_-=-1$ and $p=2$, we get a bound $1/\sqrt{2}$ for the essential spectral radius of $\mathcal{L}_{1/\det DT}(T)=\mathcal{L}_{1/\det D\tilde T}(\tilde T)$ acting on $\mathcal{H}_2^{-1}$, and Corollary \ref{cor:BoundSRB} holds. The problem comes up in the deduction of the properties of physical measures from this bound on the essential spectral radius of $\Lp_{1/|\det DT|}$: we need to check that the physical measures do not give weight to the discontinuities of the map, to apply Theorem \ref{thm:SRBabstrait}. This is ensured by Lemma \ref{deduce} when $t_->1/p-1$, but does not hold for $t_-=-1$ and $p=2$. \end{ex} \begin{ex} Assume that $d_s=0$, i.e., $T$ is piecewise expanding. In this case, we can take $\lambda_s=0$, and the value of $t_-$ is irrelevant (in fact, the space $\mathcal{H}_p^{t,t_-}$ does not depend on $t_-$, and is the classical Sobolev space $\mathcal{H}_p^t$). \begin{prop*} If $T$ is piecewise $C^{2}$, if $d_s=0$ and $\lim \norm{\lambda_{u,n}^{-1}}{L_\infty}^{1/n} \cdot \lim (D_n^b)^{1/n} <1$, then there exist $0<t<1/p<1$ such that the spectral radius of $\Lp_{1/|\det DT|}$ acting on $\mathcal{H}_p^t$ is $<1$. In particular, Theorem \ref{thm:ExistSRB} applies. \end{prop*} \begin{proof} When $\epsilon$ tends to $0$, the bound on the essential spectral radius of $\Lp_{1/|\det DT|}$ acting on $\mathcal{H}_{(1-\epsilon)^{-1}}^{1-2\epsilon}$, given by Corollary \ref{cor:BoundSRB}, converges at most to $\lim_{n \to \infty} \norm{\lambda_{u,n}^{-1}}{L_\infty}^{1/n} \cdot \lim_{n \to \infty} (D_n^b)^{1/n}$. Hence, it is $<1$ for small enough $\epsilon$. \end{proof} \end{ex} In the proof of the above proposition, we use parameters $t$ and $p$ very close to $1$, but we are ``morally" working with $\mathcal{H}_1^1$. 
This is not surprising since this space is essentially a space of functions with one derivative in $L_1$, i.e., a space of functions of bounded variation. It is well known that functions of bounded variation are useful to study piecewise expanding maps, see \cite{Co}. This proposition is analogous to results proved in \cite{Sau,Co} for different Banach spaces. \begin{ex} \label{ex5} When $\det DT=1$ and $D^e_n$, $D^b_n$ grow subexponentially fast, then it is clear from Corollary \ref{cor:BoundSRB} that the essential spectral radius of $\Lp_{1/|\det DT|}$ is $<1$ on any space $\mathcal{H}_p^{t,t_-}$ (as soon as $t>0$ and $t+t_-<0$). In some situations, it is possible to weaken (or even remove) the assumption that $\det DT=1$. We get more precise results using Theorem \ref{thm:smooth_unstable}, i.e., assuming that the unstable direction is smooth. \begin{prop*} Let $T$ be a piecewise $C^{2}$ hyperbolic map with smooth unstable distribution satisfying the weak transversality condition, and such that $D^e_n$ and $D^b_n$ grow subexponentially. Assume that there exist $N>0$ and $\gamma<1$ such that $\lambda_{s,N}\leq \gamma |\det DT^N|$. Then there exist $p\in (1,\infty)$ and $1/p-1<t<0<t_+<1/p$ such that the essential spectral radius of $\Lp_{1/|\det DT|}$ acting on $\tilde\mathcal{H}_p^{t_+,t}$ is $<1$. In particular, $T$ has finitely many physical measures whose basins contain Lebesgue almost every point. \end{prop*} The assumption $\lambda_{s,N}\leq \gamma |\det DT^N|$ is a kind of pinching condition. It is satisfied whenever $d_s=1$ and $d_u>0$. \begin{proof} We will take $p$ very close to $1$, $t=1/p-1+\epsilon$ and $t_+=1/p-\epsilon$ for $\epsilon>0$ very small. We have \begin{equation} |\det DT^N|^{1/p-1} \lambda_{s,N}^{-t} \leq (\gamma^{-1}\lambda_{s,N})^{1/p-1} \lambda_{s,N}^{-(1/p-1)-\epsilon} = \gamma^{1-1/p} \lambda_{s,N}^{-\epsilon}. \end{equation} Since $\gamma<1$, this quantity is $<1$ if $\epsilon$ is small enough (in terms of $p$). Moreover, \begin{equation} |\det DT^N|^{1/p-1} \lambda_{u,N}^{-(t_++t)} = |\det DT^N|^{1/p-1} \lambda_{u,N}^{1-2/p}. \end{equation} When $p\to 1$, this quantity converges to $\lambda_{u,N}^{-1}<1$. Hence, it is possible to choose $p$ and $\epsilon$ such that \begin{equation} \norm{ |\det DT^N|^{1/p-1} \max( \lambda_{s,N}^{-t}, \lambda_{u,N}^{-(t+t_+)})}{L_\infty}<1. \end{equation} This concludes the proof. \end{proof} \end{ex} \subsection{Piecewise linear maps} \label{pwaff} In this paragraph, we describe an explicit class of maps for which the assumptions of the previous theorems are satisfied. Let $A$ be a $d\times d$ matrix with no eigenvalue of modulus $1$. It acts on $\mathbb{R}^d$ in a hyperbolic way, with best expansion/contraction constants $\lambda_u>1$ and $\lambda_s<1$. Let $X_0$ be a polyhedral region of $\mathbb{R}^d$, and define a map $T$ on $X_0$ by cutting it into finitely many polyhedral subregions $O_1,\dots,O_N$, applying $A$ to each of them, and then mapping $AO_1,\dots, AO_N$ back into $X_0$ by translations. Let $J(n)$ be the covering multiplicity of $T^n$, i.e., the maximal number of preimages of a point under $T^n$. It is submultiplicative, hence the limit $J=\lim_{n \to \infty} J(n)^{1/n}$ exists. \begin{prop} \label{prop:Affine} The map $T$ is a piecewise hyperbolic map with smooth stable and unstable distributions (given by the eigenspaces of $A$ corresponding to eigenvalues of modulus $<1$, resp.~$>1$). It satisfies the weak transversality conditions for both stable and unstable distributions. 
Moreover, if $J\lambda_s <|\det A|$, there exist $1<p<\infty$, and $t_+$, $t_-$ so that $1/p-1 <t_- <0 < t_+ < 1/p$ and such that the essential spectral radius of $\mathcal{L}_{1/\det|DT|}$ acting on $\tilde{\tilde{\mathcal{H}}}_p^{t_+,t_-}$ is $<1$. Therefore, $T$ satisfies the conclusions of Theorem \ref{thm:ExistSRB}. \end{prop} As an example of such a map, one can take $A=\left(\begin{matrix} 2&1\\1&1 \end{matrix}\right)$. Cutting the torus $\mathbb{T}^2$ into finitely many squares, applying $A$ to each of these squares, and then permuting the images of the squares, one obtains a bijection of the torus (for which $J=1$). Hence, Proposition~ \ref{prop:Affine} applies. The novelty with respect to previous works such as \cite{Yo, Ch, DL} is that the sides of the squares can be taken parallel to the stable or unstable directions. \begin{proof}[Proof of Proposition \ref{prop:Affine}] The weak transversality conditions are direct consequences of the definitions. Let $K$ be the total number of the sides of the polyhedra $O_i$. Around any point $x$, the boundaries of the sets $O_{(i_0,\dots,i_{n-1})}$ are preimages of theses sides by one of the maps $A,\dots,A^{n-1}$, which gives at most $nK$ possible directions. Hence, the claim p.~ 105 in \cite{Bu} gives $D_n^b \leq 2(nK)^d$. This quantity grows subexponentially. In the same way, $D_n^e \leq 2 J(n) (nK)^d$. By Theorem \ref{both}, the essential spectral radius of $\mathcal{L}_{1/\det A}$ acting on $\tilde{\tilde{\mathcal{H}}}_p^{t_+,t_-}$ (for suitable values of $p,t_+,t_-$) is bounded by $J^{1-1/p}|\det A|^{1/p-1} \max(\lambda_u^{-t_+}, \lambda_s^{-t_-})$. Let us take $t_+=1/p-\epsilon$, $t_-=1/p-1+\epsilon$ and $p$ close to $1$. Then $1/p-1<t_-<0<t_+<1/p$, hence Theorem~ \ref{both} applies and yields the following bound for the essential spectral radius: \begin{equation} |\det A|^{1/p-1} J^{1-1/p} \max(\lambda_u^{-1/p+\epsilon}, \lambda_s^{1-1/p-\epsilon}). \end{equation} If $p$ is close to $1$ and $\epsilon$ is small enough, this quantity is $<1$ under the assumptions of the proposition. (Note that if $\det A=J=1$, choosing $p=1/2$ and $t_+=1/2-\epsilon$, $t_-=-1/2+\epsilon$ gives better bounds.) \end{proof} The standard conservative (piecewise affine) baker's map on the unit square is given by $T(x,y)= (2x, y/2)$ for $0\le x < 1/2$ and $T(x,y)=(2x-1, (y+1)/2)$ for $1/2\le x \le 1$. It fits in the model of this subsection, for a diagonal matrix $A$ with eigenvalues $2$ and $1/2$. The baker has an obvious Markov partition with two pieces, and can thus be analyzed by a (Lipschitz) symbolic model, which gives an essential decorrelation rate of $2^{-1/2}$ for Lipschitz observables. (The physical measure is just Lebesgue measure.) The proof of the previous proposition gives a bound $2^{-1/2+\epsilon}$ for the essential spectral radius of $\mathcal{L}_{1/\det A}$ on $\tilde {\tilde \mathcal{H}}_2^{1/2- \epsilon,-1/2 +\epsilon}$ for arbitrarily small $\epsilon >0$ (here $J=1$, $\det A=1$, $\lambda_u=2$ and $\lambda_s=1/2$). For a dissipative baker $T(x,y)= (2x, y/3)$ for $0\le x < 1/2$ and $T(x,y)=(2x-1, (y+2)/3)$ for $1/2\le x \le 1$ ($\lambda_u=2$ and $\lambda_s=1/3$, $\det A=2/3$ and $J=1$), the proof of the above proposition gives a bound $ 2^{-1+\epsilon+ (\log 3/\log 6)}$ for the essential spectral radius on $\tilde {\tilde \mathcal{H}}_p^{1/p- \epsilon,1/p-1 +\epsilon}$ for $p=\log 6/\log 3$. (Note that the dimension of the attractor is strictly between $1$ and $2$ in this case.) 
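The value $p=\log 6/\log 3$ makes the two branches of the maximum balance in the dissipative case; spelling out the arithmetic behind the previous sentence (a simple check, with the constants just listed, and up to factors tending to $1$ with $\epsilon$): since $1/p-1=-\log 2/\log 6$, \begin{equation*} |\det A|^{1/p-1}\,\lambda_u^{-t_+} = (3/2)^{\log 2/\log 6}\, 2^{-\log 3/\log 6} = 2^{-\log 2/\log 6}, \qquad |\det A|^{1/p-1}\,\lambda_s^{-t_-} = (3/2)^{\log 2/\log 6}\, 3^{-\log 2/\log 6} = 2^{-\log 2/\log 6}, \end{equation*} so both branches are equal, and both coincide with the announced bound $2^{-1+\log 3/\log 6}$.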
The above two examples are piecewise affine hyperbolic maps with a finite Markov partition. But the following variant, that we shall call a ``sloppy baker," does not have a finite Markov partition: let $(a,b)$ be a point in the interior of the unit square and put $T(x,y)= (2x+a, y/2+b) \mod 1$ for $0\le x < 1/2$ and $T(x,y)=(2x-1+a, (y+1)/2+b) \mod 1$ for $0\le x <1$. For almost all $(a,b)$, the sloppy baker does not have a finite Markov partition. However, our estimate gives the same bound $2^{-1/2+\epsilon}$ for the essential spectral radius on $\tilde {\tilde \mathcal{H}}_2^{1/2- \epsilon,-1/2 +\epsilon}$. Similarly, one may consider a dissipative sloppy baker, and we recover the same estimates. \section{Tools of functional analysis} In this section, we recall some classical notions of functional analysis (interpolation theory and properties of Triebel spaces), that will be useful in the next sections to study the space $H_p^{t,t_-}$ and to prove our main result. \subsection{Complex interpolation} We first recall some notations and definitions from the classical complex interpolation theory of Lions, Calder\'{o}n and Krejn (see e.g. \cite{TrB}). A pair $(\mathcal{B}_0, \mathcal{B}_1)$ of Banach spaces is called an interpolation couple if they are both continuously embedded in a linear Hausdorff space $\mathcal{B}$. For any interpolation couple $(\mathcal{B}_0, \mathcal{B}_1)$, we let $L(\mathcal{B}_0, \mathcal{B}_1)$ be the space of all linear operators $\mathcal{L}$ mapping $\mathcal{B}_0+\mathcal{B}_1$ to itself so that $\mathcal{L}|_{\mathcal{B}_j}$ is continuous from $\mathcal{B}_j$ to itself for $j=0,1$. For an interpolation couple $(\mathcal{B}_0, \mathcal{B}_1)$ and $0 < \theta < 1$, we denote by $[\mathcal{B}_0, \mathcal{B}_1]_\theta$ the complex interpolation space of parameter $\theta$. We recall the definition: set $S= \{ z\in \mathbb{C} \;|\; 0 < \Re z < 1\}$, and introduce the normed vector space \begin{align*} F(\mathcal{B}_0, \mathcal{B}_1)= \{& f : S \to \mathcal{B}_0 + \mathcal{B}_1, \mbox{ analytic, extending continuously to } \overline S ,\\ \nonumber & \mbox{ with }\sup_{z \in \overline S} \norm{f(z)}{\mathcal{B}_0+\mathcal{B}_1} < \infty, \mbox{ and }\\ \nonumber &t \mapsto f(j+it)\mbox{ is continuous from } (-\infty, \infty)\mbox { to }\mathcal{B}_j , j=0, 1 ,\\ \nonumber &\mbox{and } \norm{f}{F(\mathcal{B}_0,\mathcal{B}_1)}:=\max_{j=0, 1} ( \sup_t \norm {f(j+it)}{\mathcal{B}_j}) < \infty \}. \end{align*} Then the complex interpolation space is defined for $\theta \in (0,1)$ by \begin{equation} [\mathcal{B}_0, \mathcal{B}_1]_\theta := \{ u \in \mathcal{B}_0+\mathcal{B}_1 \;|\; \exists f \in F(\mathcal{B}_0, \mathcal{B}_1) \mbox{ with } f(\theta)=u\}, \end{equation} normed by \begin{equation} \label{InterpolationNorm} \norm{u}{[\mathcal{B}_0, \mathcal{B}_1]_\theta}= \inf_{f(\theta)=u} \norm{f} {F(\mathcal{B}_0, \mathcal{B}_1)}. \end{equation} It is well-known (see e.g. \cite[\S 1.9]{TrB}) that $(\mathcal{B}_0, \mathcal{B}_1)\mapsto [\mathcal{B}_0, \mathcal{B}_1]_\theta$ is an exact interpolation functor of type $\theta$, in the following sense: for any interpolation couple $(\mathcal{B}_0, \mathcal{B}_1)$ and every $\mathcal{L} \in L(\mathcal{B}_0, \mathcal{B}_1)$ we have \begin{equation}\label{interpp} \norm{\mathcal{L}}{[\mathcal{B}_0, \mathcal{B}_1]_\theta \to [\mathcal{B}_0,\mathcal{B}_1]_\theta} \le \norm{\mathcal{L}}{\mathcal{B}_0 \to \mathcal{B}_0}^{1-\theta} \norm{\mathcal{L}}{\mathcal{B}_1\to \mathcal{B}_1}^{\theta} \, \quad \forall \, \theta \in (0,1). 
\end{equation} The above bound will be used several times throughout this work. \subsection{A class of Sobolev-like spaces containing the local spaces $H^{t,t_-}_p$} Let $S$ be the Schwartz space of $C^\infty$ rapidly decaying functions. Its dual $S'$ is the space of tempered distributions. Let $M$ be the set of functions $a$ from $\mathbb{R}^d$ to $\mathbb{R}_+$ such that there exists $C>0$ such that, for all multi-indices $\gamma=(\gamma_1,\dots,\gamma_d)$ with $\gamma_j \in \{0,1\}$, and all $\zeta\in \mathbb{R}^d$, \begin{equation} \left| \prod_{j=1}^d (1+\zeta_j^2)^{\gamma_j/2} D^\gamma a(\zeta) \right| \leq C a(\zeta). \end{equation} For $a\in M$ and $p\in (1,\infty)$, let us define a space $H_p^a$ as the space of all tempered distributions $u$ such that $\mathcal{F}^{-1}( a\mathcal{F} u)$ belongs to $L_p$, with its canonical norm \begin{equation} \label{def_norm_Triebel} \norm{u}{H_p^a}=\norm{\mathcal{F}^{-1}( a\mathcal{F} u)}{L_p(\mathbb{R}^d)}. \end{equation} These spaces were introduced and studied by Triebel in \cite{Tr}, in a slightly more general setting involving another parameter $q$ (under a different form \cite[Def. 2.3/4]{Tr}, but Theorem 5.1/2 and Remark 5.1 there shows that it is equivalent to the previous description for $q=2$). Among other things, Triebel proved the following results concerning these spaces: \begin{lem}\label{14} For any $a\in M$ and $1<p<\infty$, the space $S$ is contained in $H_p^a$, and dense. \end{lem} \begin{proof} This is proved in Theorem 3.2/2 and Remark 3.2/2 in \cite{Tr}. \end{proof} For $t$, $t_-\in \mathbb{R}$, the function $a_{t,t_-}(\xi,\eta)=(1+|\xi|^2+|\eta|^2)^{t/2} (1+|\eta|^2)^{t_-/2}$ belongs to $M$. Then $H_p^{t,t_-}$ from Definition~ \ref{def:space} is just $H_p^{a_{t,t_-}}$, and the previous lemma says that $S$ is dense in $H_p^{t,t_-}$. \begin{prop}[Interpolation] \label{Triebinterpol} For any $a_0$, $a_1 \in M$, $p_0$, $p_1\in (1,\infty)$ and $\theta \in (0,1)$, the interpolation space $[H_{p_0}^{a_0}, H_{p_1}^{a_1}]_\theta$ is equal to $H_p^a$ for $a=a_0^{1-\theta} a_1^\theta$ and $1/p=(1-\theta)/p_0+\theta/p_1$. \end{prop} \begin{proof} This is \cite[Theorem 4.2/2]{Tr}. \end{proof} We will also use the following straightforward lemma. (Note that if $a \in M$ then $1/a \in M$, see e.g. \cite[Lemma 2.1/1]{Tr}). \begin{lem}[Duality] \label{lem:duality} For any $a\in M$ and $1<p<\infty$, the dual of the space $H_p^a$ is $H_{p'}^{1/a}$ for $1/p+1/{p'}=1$. \end{lem} \subsection{Multiplier theorems} In order to understand the spaces $H_p^a$, an essential tool is provided by Fourier multiplier theorems. The following Marcinkiewicz multiplier theorem (see e.g.~\cite[Theorem 2.4/2]{Tr}) will be sufficient for our purposes. \begin{thm} \label{thm:Marc} Let $b\in C^d(\mathbb{R}^d)$ satisfy $|\zeta^\gamma D^\gamma b(\zeta)| \leq B$ for all multi-indices $\gamma=(\gamma_1,\dots,\gamma_d)$ with $\gamma_j\in \{0,1\}$, and all $\zeta\in \mathbb{R}^d$. Then, for all $p\in (1,\infty)$, there exists a constant $C(p,d)$ such that, for any $u\in L_p$, \begin{equation} \norm{ \mathcal{F}^{-1}( b\mathcal{F} u)}{L_p} \leq CB \norm{u}{L_p}. 
\end{equation} \end{thm} \section{Towards Lasota-Yorke bounds on the local space $H_p^{t,t_-}$} \label{sec:local} Aiming at the proof of Theorem \ref{thm:MainSpectralThm} on transfer operators, we describe in Subsections \ref{subsec:mult} and \ref{subsec:comp} how the local spaces $H_p ^{t,t_-}$, which are the building blocks of our spaces of distributions, behave under multiplication by a smooth function or by the characteristic function of a nice set, as well as under composition with a smooth map preserving the stable leaves. Then, in Subsection~ \ref{locall}, we state and prove a localization principle on $H_p^{t,t_-}$ that we were not able to find in the literature and which plays a key part in the ``zooming" procedure in the proof of Theorem \ref{thm:MainSpectralThm}. Note for further use that since $X_0$ is compact, \cite[Lemma 2.2]{baladi:Cinfty} (e.g.) gives that the inclusion $\mathcal{H}_p^{t,t_-}\subset \mathcal{H}_p^{t',t'_-}$ for $t' \leq t$ and $t'_- \leq t_-$ is compact if $t' < t$. To study $H_p^{t,t_-}$, we will mainly study $H_p^{t,0}$ and $H_p^{0,t_-}$ and use interpolation (via Proposition \ref{Triebinterpol}). It is therefore useful to recall some classical properties of these spaces. When $t\geq 0$, the space $H_p^{t}$ is the classical Sobolev space. By \cite[Theorem I.4.1]{Str}, it satisfies a Fubini property: if $u$ is a function on $\mathbb{R}^d$, define a function $u_j$ on $\mathbb{R}^{d-1}$ as follows: $u_j(x_1,\dots,x_{j-1},x_{j+1},\dots,x_d)$ is the $H_p^t(\mathbb{R})$-norm of the restriction of $u$ to the line $\{(x_1,\dots,x_{j-1},x,x_{j+1},\dots,x_d) \;|\; x\in \mathbb{R}\}$. Then $u$ belongs to $H_p^{t}(\mathbb{R}^d)$ if and only if each $u_j$ belongs to $L_p(\mathbb{R}^{d-1})$, and the norms $\norm{u}{H_p^{t}}$ and $\sum_{j=1}^d \norm{u_j}{L_p}$ are equivalent. (This is true for any set of coordinates, but for simplicity we shall use a fixed system of coordinates.) This makes it often possible to study only the one-dimensional situation, and extend it readily to $d$ dimensions. For $t_->0$, the space $H_p^{0,t_-}$ also has a Fubini-type property: the norm $\norm{u}{H_p^{0,t_-}}$ is equivalent to $\sum_{j=d_u+1}^d \norm{u_j}{L_p}$ where $u_j$ is the $H_p^{t_-}(\mathbb{R})$-norm of a restriction of $u$ as above (the proof of \cite[Theorem I.4.1]{Str} directly applies, we may take any coordinates on $\mathbb{R}^d$ which preserve the stable leaves of the original coordinate system used to define $H_p^{0,t_-}$, for simplicity we shall fix this original coordinate system). In particular, the study of $H_p^{0,t_-}$ reduces to the study of the usual Sobolev space in one dimension. Finally, for $t_-\in \mathbb{R}$, the space $H_p^{0,t_-}$ also has a slightly different Fubini-type property. Let $u$ be a function on $\mathbb{R}^d$, and define a function $v$ on $\mathbb{R}^{d_u}$ as follows: $v(x)$ is the $H_p^{t_-}(\mathbb{R}^{d_s})$-norm of the restriction of $u$ to $\{x\}\times \mathbb{R}^{d_s}$. Then $\norm{u}{H_p^{0,t_-}(\mathbb{R}^d)}=\norm{v}{L_p(\mathbb{R}^{d_u})}$: this follows from the fact that the function $(1+|\eta|^2)^{t_-/2}$ does not depend on the variable $\xi$, which makes it possible to integrate away the variable $x$ using the Fourier inversion formula (see \cite[p.~1045]{Str} for details). We will refer to these properties respectively as the one-dimensional and the $d_s$-dimensional Fubini properties of $H_p^{0,t_-}$. 
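To fix ideas, here is a trivial illustration of the $d_s$-dimensional Fubini property (it is not used later): for a product of Schwartz functions $u(x,y)=f(x)g(y)$ with $f$ on $\mathbb{R}^{d_u}$ and $g$ on $\mathbb{R}^{d_s}$, the restriction of $u$ to $\{x\}\times \mathbb{R}^{d_s}$ has $H_p^{t_-}$-norm $|f(x)|\,\norm{g}{H_p^{t_-}(\mathbb{R}^{d_s})}$, so that \begin{equation*} \norm{u}{H_p^{0,t_-}(\mathbb{R}^d)} = \norm{f}{L_p(\mathbb{R}^{d_u})}\, \norm{g}{H_p^{t_-}(\mathbb{R}^{d_s})}. \end{equation*}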
\subsection{Multiplication by functions} \label{subsec:mult} \begin{lem} \label{Leib} Let $t>0$, $t_-<0$ and $\alpha>0$ be real numbers with $t+|t_-|<\alpha$. For any $p \in (1,\infty)$, there exists a constant $C_{\#}$ such that for any $C^{\alpha}$ function $g : \mathbb{R}^d \to \mathbb{C}$, for any distribution $u\in H_p^{t,t_-}$, the distribution $gu$ also belongs to $H_p^{t,t_-}$ and satisfies \begin{equation*} \norm{ g \cdot u}{H_p^{t,t_-}}\le C_{\#} \|g\|_{C^{\alpha}} \norm{u}{H_p^{t,t_-}}. \end{equation*} \end{lem} The assertion $g u\in H_p^{t,t_-}$ should be interpreted as explained after Theorem~\ref{thm:MainSpectralThm}. \begin{proof} Let $t^0=t+|t_-|$, $t^0_-=-t^0$ and $\theta=t/t^0$, so that $(t,t_-)=(\theta t^0, (1-\theta) t^0_-)$ and $\max(t^0, |t^0_-|)<\alpha$. We will write $H_p^{t,t_-}$ as an interpolation space with parameter $\theta$ between $H_p^{t^0}$ and $H_p^{0,t^0_-}$, thereby reducing the proof to the study of $H_p^{t^0}$ and $H_p^{0,t^0_-}$. First, since $H_p^{t^0}$ is the classical Sobolev space, \cite[Corollary 4.2.2]{Trie} shows that \begin{equation} \label{eq:BorneHpt} \norm{gu}{H_p^{t^0}} \leq C_{\#} \norm{g}{C^\alpha} \norm{u}{H_p^{t^0}}, \end{equation} where $C_{\#}$ depends only on $t^0$ and $\alpha$, whenever $|t^0|<\alpha$. Together with the $d_s$-dimensional Fubini-type property of $H_p^{0,t^0_-}$, this readily implies \begin{equation} \label{eq:Mult0t} \norm{gu}{H_p^{0,t^0_-}} \leq C_{\#}\norm{g}{C^\alpha} \norm{u}{H_p^{0,t^0_-}} \end{equation} whenever $|t^0_-|<\alpha$. Interpolating between \eqref{eq:BorneHpt} and \eqref{eq:Mult0t} via Proposition~\ref{Triebinterpol}, we get the conclusion of the lemma. \end{proof} The following extension of a classical result of Strichartz is the key to our results: \begin{lem} \label{lem:multiplier} Let $1<p<\infty$ and $1/p-1<t_-\leq 0 \leq t <1/p$. There exists a constant $C_{\#}$ satisfying the following property. Let $O$ be a set in $\mathbb{R}^d$ whose intersection with almost every line parallel to a coordinate axis has at most $N$ connected components. Then, for any $u\in H_p^{t,t_-}$, the distribution $1_O u$ also belongs to $H_p^{t,t_-}$, and satisfies \begin{equation} \norm{1_O u}{H_p^{t,t_-}} \leq C_{\#} N \norm{u}{H_p^{t,t_-}}. \end{equation} \end{lem} \begin{proof} If $t_-=0$ and $t \in [0, 1/p)$ then our claim is just Strichartz' result \cite[Cor II.4.2]{Str} on generalized Sobolev spaces (noting that \cite[Cor II.3.7]{Str} gives the estimate $C_{\#} N$). (See also \cite[\S 4.6.3]{RS} for alternative sufficient conditions on $O$ and $p$, $t$ ensuring that $1_O$ is a multiplier of $H_p^{t,0}$.) Assume now that $t=0$ and $t_- \in (0, 1/p) $. Then the one-dimensional Fubini-type argument of Strichartz \cite[Thm I.4.1]{Str} applies, and allows us to generalize \cite[Cor II.4.2]{Str} to give the claim. If $t=0$ and $t_- \in (1/p-1, 0)$, the result follows by duality. Interpolating via Proposition \ref{Triebinterpol}, the set of parameters $(1/p,t,t_-)$ for which the conclusion of the lemma holds is convex. It therefore contains the convex hull of $\{(1/p,t,0) \;|\; 0\leq t<1/p\}$ and $\{ (1/p,0,t_-)\;|\; 1/p-1 < t_- \leq 0\}$, which coincides with the set $\{ (1/p,t,t_-) \;|\; 1/p-1<t_-\leq 0 \leq t <1/p\}$. \end{proof} \subsection{Composition with smooth maps preserving the stable leaves} \label{subsec:comp} In this paragraph, we study the behavior of $H_p^{t,t_-}$ under the composition with smooth maps preserving the stable leaves. Let us start with a very rough and easy to prove lemma. 
\begin{lem} \label{lem:CompositionFacile} Let $1<p<\infty$, and $t$, $t_-$ be real numbers with $|t|+|t_-|\le 1$. There exists a constant $C_{\#}$ such that, for any invertible matrix $A$ on $\mathbb{R}^d$, sending $\{0\}\times \mathbb{R}^{d_s}$ to itself, and for any $u\in H_p^{t,t_-}$, \begin{equation} \norm{ u\circ A}{H_p^{t,t_-}} \leq C_{\#} |\det A|^{-1/p} \max(\nor{A},\nor{A^{-1}}) \norm{u}{H_p^{t,t_-}}. \end{equation} \end{lem} \begin{proof} By \cite[Proposition 2.1.2 (iv)+(vii)]{RS}, the $H_p^1$-norm is equivalent to the norm $\norm{u}{L_p}+\norm{Du}{L_p}$. Hence, $\norm{ u\circ A}{H_p^{1,0}} \leq C_{\#} |\det A|^{-1/p} \max(\nor{A},\nor{A^{-1}}) \norm{u}{\mathcal{H}_p^{1,0}}$. Similarly, $\norm{ |\det A |^{-1} u\circ A^{-1}}{H_{p'}^{0,1}} \leq C_{\#} |\det A|^{-1+1/{p'}} \max(\nor{A},\nor{A^{-1}}) \norm{u}{H_{p'}^{0,1}}$, by a $d_s$-dimensional Fubini-type argument. Since the adjoint of $u \mapsto \det A ^{-1} u\circ A^{-1}$ is $u \mapsto u \circ A$, the general case follows by duality (Lemma~ \ref{lem:duality}) and interpolation (Proposition~\ref{Triebinterpol}). \end{proof} \begin{lem} \label{lem:CompositionDure} Let $\alpha\in (0,1)$, let $F:\mathbb{R}^d \to \mathbb{R}^d$ be a $C^{1+\alpha}$ diffeomorphism sending stables leaves to stable leaves, and let $A$ be a matrix such that, for all $z\in \mathbb{R}^d$, $\nor{A^{-1}\circ DF(z)}\leq 2$ and $\nor{DF(z)^{-1}\circ A}\leq 2$. Assume moreover that $A$ can be written as $M_0^{-1}\left(\begin{array}{cc} A^u & 0 \\ 0 & A^s \end{array}\right)M_1$, where $M_0$ and $M_1$ are matrices sending stable leaves to stable leaves, and $\mu_u:=\nor{A^u}\leq 1$, $\mu_s:=\nor{(A^s)^{-1}}^{-1}\geq 1$.\footnote{The matrix norms are the operator norms with respect to the usual euclidean metric on $\mathbb{R}^d$, so that the norm of a matrix equals the norm of its transpose.} Then, for all $t>0$ and $t_-<0$ with $t+|t_-|<\alpha$ and $t+t_-<0$, for all $p\in (1,\infty)$, there exists a constant $C_{\#}$ depending only on $\max(\nor{M_0},\nor{M_0^{-1}},\nor{M_1},\nor{M_1^{-1}})$ and $t$, $t_-$, $p$, and a constant $C(A,F)$ such that, for all $u\in H_p^{t,t_-}$, \begin{multline*} \norm{u\circ F}{H_p^{t,t_-}}\leq C_{\#} \norm{\det A / \det DF}{C^\alpha} |\det A|^{-1/p} \max(\mu_u^t, \mu_s^{t+t_-})\norm{u}{H_p^{t,t_-}} \\ + C \norm{u}{H_p^{0,t_-}}. \end{multline*} \end{lem} In the applications to transfer operators, $F$ will be the local {\it inverse} of some iterate $T^n$ of a piecewise hyperbolic map. Since $T^n$ is contracting along $E^s$ and expanding along $E^u$, the map $F$ will therefore satisfy the assumptions of the lemma regarding $\mu_s$ and $\mu_u$. \begin{proof}[Proof of Lemma \ref{lem:CompositionDure}] We will write $u\circ F=u\circ A\circ A^{-1}\circ F$. Hence, we need to study the composition with $A$ and $A^{-1}\circ F$. We claim that \begin{equation} \label{ComposeA} \norm{u\circ A}{H_p^{t,t_-}} \leq |\det A|^{-1/p} C_{\#} \max(\mu_u^t, \mu_s^{t+t_-})\norm{u}{H_p^{t,t_-}}+ C \norm{u}{H_p^{0,t_-}} \end{equation} and \begin{equation} \label{ComposeTA} \norm{u\circ A^{-1}\circ F}{H_p^{t,t_-}} \leq C_{\#} \norm{\det A / \det DF}{C^\alpha} \norm{u}{H_p^{t,t_-}}. \end{equation} Together, these equations prove the lemma. \emph{First step.} Let us prove \eqref{ComposeA}. This is a special case of \cite[Lemma 2.10]{baladi:Cinfty} (replacing $(0,t_-)$ by $(t-1/2, t_-)$). We will give the proof for the convenience of the reader, since it is at the same time very simple and at the heart of our argument. 
Lemma \ref{lem:CompositionFacile} deals with the composition with $M_0^{-1}$ and $M_1$, hence we can assume that $M_0=M_1=\Id$. We want to estimate $\norm{u\circ A}{H_p^{t,t_-}}=\norm{\mathcal{F}^{-1}(a_{t,t_-} \mathcal{F} (u\circ A))}{L_p}$. A change of variables readily gives $\mathcal{F}^{-1}(a_{t,t_-} \mathcal{F} (u\circ A))=\mathcal{F}^{-1}(a_{t,t_-}\circ \transposee{A}\cdot \mathcal{F} u) \circ A$. Hence, we have to show that \begin{equation} \label{eq:klmjwxvop} \norm{\mathcal{F}^{-1}(a_{t,t_-}\circ \transposee{A} \cdot \mathcal{F} u)}{L_p} \leq C_{\#} \max(\mu_u^t, \mu_s^{t+t_-})\norm{u}{H_p^{t,t_-}}+ C \norm{u}{H_p^{0,t_-}}. \end{equation} Write $\transposee{A}=\left(\begin{array}{cc} U&0\\0 & S \end{array}\right)$ with $|U \xi|\leq \mu_u|\xi|$ and $|S \eta|\geq \mu_s |\eta|$ by definition of $\mu_u,\mu_s$. Let \begin{equation} b(\xi,\eta)=a_{t,t_-}\circ \transposee{A}(\xi,\eta)=(1+|U \xi|^2+|S\eta|^2)^{t/2} (1+|S\eta|^2)^{t_-/2}. \end{equation} Let us prove that, if $C$ is large enough, we have \begin{equation} \label{eq:opuispoif} b \leq C_{\#} \max(\mu_u^t, \mu_s^{t+t_-}) a_{t,t_-}+C a_{0,t_-}. \end{equation} If we can prove this equation together with the corresponding estimates for the successive derivatives of $b$, then Theorem \ref{thm:Marc} applied to $$b/(C_{\#} \max(\mu_u^t, \mu_s^{t+t_-}) a_{t,t_-}+C a_{0,t_-})$$ gives \begin{equation} \norm{\mathcal{F}^{-1}(b \mathcal{F} u)}{L_p} \leq C_{\#} \norm{ \mathcal{F}^{-1}((C_{\#} \max(\mu_u^{t}, \mu_s^{t+t_-}) a_{t,t_-}+C a_{0,t_-}) \mathcal{F} u)}{L_p}, \end{equation} which yields \eqref{eq:klmjwxvop}. Let us now prove \eqref{eq:opuispoif} (the proof for the derivatives of $b$ is similar). We will freely use the following trivial inequalities: for $x\geq 1$ and $\lambda\geq 1$, \begin{equation} \frac{1}{\lambda}(1+\lambda x)\leq 1+x \leq \frac{2}{\lambda}(1+\lambda x). \end{equation} Assume first $|U\xi|^2 \leq |S\eta|^2$ and $|S\eta|^2 \geq 1$. Then, since $t>0$ and $t+t_-<0$, \begin{align*} b(\xi,\eta) &\leq (1+2 |S\eta|^2)^{t/2} (1+|S\eta|^2)^{t_-/2} \leq 2^{t/2} (1+|S\eta|^2)^{t/2}(1+|S\eta|^2)^{t_-/2} \\&\leq 2^{t/2} (1+\mu_s^2 |\eta|^2)^{(t+t_-)/2} \leq 2^{t/2} (\mu_s^2/2)^{(t+t_-)/2} (1+|\eta|^2)^{(t+t_-)/2} \\& \leq 2^{-t_-/2} \mu_s^{(t+t_-)} a_{t,t_-}(\xi,\eta). \end{align*} If $|U\xi|^2 \geq |S\eta|^2$ and $|U\xi|^2 \geq 1$, then \begin{align*} b(\xi,\eta)& \leq (1+2 |U\xi|^2)^{t/2} (1+|S\eta|^2)^{t_-/2} \leq 2^{t/2} (1+|U\xi|^2)^{t/2} (1+\mu_s^2 |\eta|^2)^{t_-/2} \\&\leq 2^{t/2} (1+ \mu_u^2 |\xi|^2)^{t/2} (1+|\eta|^2)^{t_-/2} \leq 2^{t/2} (2\mu_u^2)^{t/2} (1+|\xi|^2)^{t/2} (1+|\eta|^2)^{t_-/2} \\&\leq 2^{t} \mu_u^t a_{t,t_-}(\xi,\eta). \end{align*} In the remaining case, $\xi$ and $\eta$ are uniformly bounded, and \eqref{eq:opuispoif} follows by choosing $C$ large enough. This concludes the proof of \eqref{ComposeA}. \emph{Second step.} Let us now prove \eqref{ComposeTA}. We will write $\tilde F=A^{-1}\circ F$. As in the proof of Lemmas~ \ref{Leib}, \ref{lem:multiplier}, and ~ \ref{lem:CompositionFacile}, we will study simpler spaces before concluding by interpolation. We thus write $(t,t_-)=(\theta t^0, (1-\theta) t^0_-)$ for some $0<\theta<1$ and $t^0, -t^0_- \in (0,\alpha)$. By \cite[Proposition 2.1.2 (iv)+(vii)]{RS}, the $H_p^1$-norm is equivalent to the norm $\norm{u}{L_p}+\norm{D u}{L_p}$. Since the derivative of $\tilde F$ has norm everywhere bounded by $2$ and $|\det D\tilde F|\le 2^d$ by assumption, we get after a change of variables $\norm{u\circ \tilde F}{H_p^1} \leq C_{\#} \norm{u}{H_p^1}$. 
Since $\norm{u\circ \tilde F}{L_p} \leq C_{\#} \norm{u}{L_p}$, the interpolation inequality \eqref{interpp} gives \begin{equation} \label{eq:Interp0} \norm{u\circ \tilde F}{H_p^{t^0}} \leq C_{\#} \norm{u}{H_p^{t^0}}. \end{equation} Applying the same argument via Fubini to $\tilde F^{-1}$ on each leaf of the vertical direction, we also have $\norm{ u \circ \tilde F^{-1}}{H_{p'}^{0,1}} \leq C_{\#} \norm{u}{H_{p'}^{0,1}}$. The adjoint of the composition by $\tilde F^{-1}$ is given by $\mathcal{P}(u)= \det D\tilde F \cdot u\circ \tilde F$. Hence, duality yields $\norm{\mathcal{P} u}{H_p^{0,-1}} \leq C_{\#} \norm{u}{H_p^{0,-1}}$. Since $\mathcal{P}$ is bounded by $C_{\#}$ on $L_p$, we get by interpolation \begin{equation} \label{qoisufmkljqsdf} \norm{\det D\tilde F \cdot u\circ \tilde F}{H_p^{0,t^0_-}}\leq C_{\#} \norm{u}{H_p^{0,t^0_-}}. \end{equation} Together with \eqref{eq:Mult0t}, we obtain \begin{equation} \label{eq:Interp1} \begin{split} \norm{u\circ \tilde F}{H_p^{0,t^0_-}}&\leq C_{\#}\norm{1/\det D\tilde F}{C^\alpha} \norm{\det D\tilde F \cdot u\circ \tilde F}{H_p^{0,t^0_-}} \\& \leq C_{\#}\norm{1/\det D\tilde F}{C^\alpha} \norm{u}{H_p^{0,t^0_-}}. \end{split} \end{equation} Interpolating between \eqref{eq:Interp0} and \eqref{eq:Interp1}, we get \begin{equation} \norm{u\circ \tilde F}{H_p^{t,t_-}}\leq C_{\#} \norm{1/\det D\tilde F}{C^\alpha}^{1-\theta} \norm{u}{H_p^{t,t_-}}. \end{equation} Finally, $1/\det D\tilde F=\det A / \det DF$ is bounded from below, and \eqref{ComposeTA} follows. \end{proof} \begin{rmk}[Invariance] \label{rmk:Invariance} The arguments in the second step of the proof of Lemma \ref{lem:CompositionDure} (with $A=\Id$) also imply that, whenever $t>0$ and $t_-<0$ satisfy $t+|t_-|<\alpha$, then the space $H_p^{t,t_-}$ is invariant under the composition with $C^{1+\alpha}$ diffeomorphisms of $\mathbb{R}^d$ sending stable leaves to stable leaves. \end{rmk} \begin{rmk}[Extending \cite{baladi:Cinfty} to $C^{1+\alpha}$ Anosov diffeomorphisms] \label{extend} If $0<\alpha <1$ we can apply Lemma~\ref{lem:CompositionDure}. If $\alpha \ge 1$ and $t >0$, $t+t_- <0$ satisfy $t+|t_-|<\alpha$, letting $m$ be the smallest integer $\ge t+|t_-|$, \cite[Proposition 2.1.2 (iv)+(vii)]{RS} implies that the $H_p^m$-norm is equivalent to the norm $\sum_{|\gamma|\le m}\norm{\partial^\gamma u}{L_p}$. Thus, replacing the matrix $A$ in Lemma~\ref{lem:CompositionDure} by a $C^\infty$ diffeomorphism $A$ preserving stable leaves, with least expansion $\mu_s\ge 1$ on the verticals, and whose inverse preserves horizontal cones with least expansion $\mu_u^{-1}\ge 1$, and such that $\norm{DA^{-1}\circ DF}{C^{m-1}}\le 2$ and $\norm{DF^{-1} \circ DA}{C^{m-1}}\le 2$, we get, by applying \cite[Lemma 2.10]{baladi:Cinfty} to prove the analogue of (\ref{ComposeA}), that \begin{multline*} \norm{u\circ F}{H_p^{t,t_-}}\leq C_{\#} \norm{\det DA / \det DF}{C^\alpha} |\det DA|^{-1/p} \max(\mu_u^t, \mu_s^{t+t_-})\norm{u}{H_p^{t,t_-}} \\ + C \norm{u}{H_p^{t-1/2,t_-}}. \end{multline*} The proof of Theorem~ \ref{thm:MainSpectralThm} then applies to any $C^{1+\alpha}$ Anosov diffeomorphism $T$ with $C^{1+\alpha}$ stable distribution, and to any $C^\alpha$ weight $g$, with $\alpha >0$. \end{rmk} \subsection{Localization}\label{locall} \begin{lem}[Localization principle] \label{lem:localization} Let $\eta:\mathbb{R}^d\to [0,1]$ be a $C^\infty$ function with compact support and write $\eta_m(x)=\eta(x+m)$.
For any $p\in (1,\infty)$ and $t$, $t_-\in \mathbb{R}$, there exists $C_{\#} >0$ so that for each $u \in H_p^{t,t_-}$ \begin{equation} \left(\sum_{m\in \mathbb{Z}^d} \norm{\eta_m u }{H_p^{t,t_-}}^p\right)^{1/p} \le C_{\#} \norm{u}{H_p^{t,t_-}}. \end{equation} \end{lem} \begin{rmk} If, in addition to the assumptions of Lemma~\ref{lem:localization}, one supposes that $\sum_{m \in \mathbb{Z}^d} \eta_m(x) =1$ for all $x$, then one can show that there is $C_{\#}$ so that for each $u$ such that ${ \eta_m u }\in {H_p^{t,t_-}}$ for all $m$ we have $$ \norm{u}{H_p^{t,t_-}}\le C_{\#} \left(\sum_{m\in \mathbb{Z}^d} \norm{\eta_m u }{H_p^{t,t_-}}^p\right)^{1/p}. $$ (We shall not need the above bound.) \end{rmk} \begin{proof}[Proof of Lemma \ref{lem:localization}] For $t_-=0$ and arbitrary $t$, Lemma~\ref{lem:localization} is a result of Triebel \cite[Theorem 2.4.7]{Trie} based on a Paley-Littlewood-type decomposition. Moreover, the constant $C_{\#}$ depends only on the size of the support of $\eta$, and its $C^k$-norm for some large enough $k$. To handle $t_-\in \mathbb{R}$, we will (again) start from the result for the classical Sobolev space and use Fubini and interpolation, as follows. Let us prove the lemma for $t=0$ and $t_-\in \mathbb{R}$, using a $d_s$-dimensional Fubini argument. We have \begin{equation} \sum_{m\in \mathbb{Z}^d} \norm{\eta_m u}{H_p^{0,t_-}(\mathbb{R}^d)}^p = \sum_{m\in \mathbb{Z}^d} \int_{x\in \mathbb{R}^{d_u}} \norm{ \eta_m u}{H_p^{t_-}(\{x\}\times \mathbb{R}^{d_s})}^p \;{\rm d} x. \end{equation} For each $x\in \mathbb{R}^{d_u}$, the values of $m\in \mathbb{Z}^d$ for which the restriction of $\eta_m u$ to $\{x\}\times \mathbb{R}^{d_s}$ is nonzero are contained in a set $M(x) \times \mathbb{Z}^{d_s}$, where $\Card M(x)$ is bounded independently of $x$. Using the result of Triebel for the Sobolev space $H_p^{t_-}(\mathbb{R}^{d_s})$, we get \begin{equation} \sum_{m\in \mathbb{Z}^d}\norm{ \eta_m u}{H_p^{t_-}(\{x\}\times \mathbb{R}^{d_s})}^p \leq C_{\#} \norm{u}{H_p^{t_-}(\{x\}\times \mathbb{R}^{d_s})}^p. \end{equation} Integrating over $x\in \mathbb{R}^{d_u}$ and using the Fubini equality \begin{equation} \int_{x\in \mathbb{R}^{d_u}} \norm{u}{H_p^{t_-}(\{x\}\times \mathbb{R}^{d_s})}^p \;{\rm d} x=\norm{u}{H_p^{0,t_-}}^p, \end{equation} we obtain the lemma for $t=0$ and $t_-\in \mathbb{R}$. Consider the map $u\mapsto (\eta_m u)_{m\in \mathbb{Z}^d}$. We have shown that it sends continuously $H_p^t$ to $\ell_p(H_p^t)$ and $H_p^{0,t_-}$ to $\ell_p(H_p^{0,t_-})$. By interpolation, for any $\theta\in (0,1)$, it sends $[H_p^t, H_p^{0,t_-}]_\theta$ to $[\ell_p(H_p^t), \ell_p(H_p^{0,t_-})]_\theta$. By Proposition \ref{Triebinterpol}, the first space is $H_p^{(1-\theta)t, \theta t_-}$ while, by \cite[Theorem 1.18.1]{TrB} and again Proposition \ref{Triebinterpol}, the second space is $\ell_p(H_p^{(1-\theta)t, \theta t_-})$. This proves the lemma. \end{proof} \section{Proof of the main theorem} \label{mainsec} In this section, we prove Theorem \ref{thm:MainSpectralThm}. Let us fix once and for all a piecewise $C^{1+\alpha}$ hyperbolic map $T$ and a $C^\alpha$ function $g$, satisfying the assumptions of this theorem. We will denote by $C_{\#}$ constants that depend only on $p$, $t$, $t_-$ and $T$. We recall that the norm on $\mathcal{H}_p^{t,t_-}$ has been defined in \eqref{defnorm} using a partition of unity $\rho_1,\dots,\rho_J$ and charts $\kappa_1,\dots,\kappa_J$ subordinated to this partition of unity. 
In the following arguments, when working on a set $\overline{O_\mathbf{i}}$ or in a neighborhood of this set (with $\mathbf{i}$ of length $n$), then $T^n$ will implicitly mean $T_\mathbf{i}$. In the same way, $g^{(n)}$ will rather be a smooth extension of $g^{(n)}\big|_{O_\mathbf{i}}$ to a neighborhood of $\overline{O_\mathbf{i}}$. This should not cause any confusion. To study $\mathcal{L}_g^n$, we will need, in addition to the estimates from Section~ \ref{sec:local}, to iterate the inverse branches $T_\mathbf{i}^{-1}$, to truncate the functions and to use partitions of unity. To do this, we will use the three following lemmas. \begin{lem} \label{lem:iterate} There exists a constant $C_{\#}$ such that, for any $n$ and $\mathbf{i}=(i_0,\dots,i_{n-1})$, for any $x\in \overline{O_\mathbf{i}}$, for any $j,k\in [1,J]$ such that $x\in \supp \rho_j$ and $y=T_\mathbf{i} x\in\supp \rho_k$, there exists a neighborhood $O$ of $y$ and a $C^{1+\alpha}$ diffeomorphism $F$ of $\mathbb{R}^d$, coinciding with $\kappa_j \circ T_\mathbf{i}^{-1}\circ \kappa_k^{-1}$ on $\kappa_k(O)$, and satisfying the assumptions of Lemma \ref{lem:CompositionDure} with $\mu_u \leq C_{\#} \lambda_{u,n}^{-1}(x)$ and $\mu_s \geq C_{\#}^{-1}\lambda_{s,n}^{-1}(x)$, and $$\max(\nor{M_0},\nor{M_0^{-1}},\nor{M_1},\nor{M_1^{-1}})\leq C_{\#}. $$ \end{lem} \begin{proof} Let $F_0=\kappa_j \circ T_\mathbf{i}^{-1}\circ \kappa_k^{-1}$, it is defined on a neighborhood of $\kappa_k(y)$. Moreover, let $P$ be a $d_u$-dimensional subspace of the unstable cone at $x$, and let $M_0$, $M_1$ be invertible matrices (with bounded norms) sending respectively $D\kappa_j(x)P$ and $D\kappa_k(y) DT_\mathbf{i}(x)P$ to $\mathbb{R}^{d_u}\times \{0\}$, and stable leaves to stable leaves. Such matrices exist since the unstable cone is uniformly bounded away from the stable direction. Let $A=DF_0(\kappa_k (y))$, then $M_0 AM_1^{-1}$ sends $\mathbb{R}^{d_u}\times \{0\}$ to itself, and $\{0\}\times \mathbb{R}^{d_s}$ to itself, i.e., it is block-diagonal. Hence, the matrix $A$ satisfies the assumptions of Lemma \ref{lem:CompositionDure}. Let $F$ be a $C^{1+\alpha}$ diffeomorphism of $\mathbb{R}^d$ coinciding with $F_0$ on a neighborhood of $\kappa_k(y)$ and such that $DF(z)$ is everywhere close to $A$. Up to taking a smaller neighborhood $O$ of $y$ (depending on $n$), the claims of Lemma~ \ref{lem:iterate} hold for $F$. \end{proof} \begin{lem} \label{lem:truncate} There exists $C_{\#}$ such that, for any $n$, for any $\mathbf{i}=(i_0,\dots,i_{n-1})$, for any $x\in \overline{O_\mathbf{i}}$, for any $j$ such that $x\in \supp \rho_j$, there exists a neighborhood $O'$ of $x$ and a matrix $M$ sending stable leaves to stable leaves, with $$\max(\nor{M},\nor{M^{-1}})\leq C_{\#},$$ such that the intersection of $M \kappa_j(O'\cap O_\mathbf{i})$ intersects almost any line parallel to a coordinate axis along at most $C_{\#} n$ connected components. \end{lem} \begin{proof} Let $L$ be as in Definition \ref{def:WTC}. Fix $\mathbf{i}=(i_0,\dots,i_{n-1})$ and $x\in \overline{O_\mathbf{i}}$. Let $a_1,\dots,a_d$ be a basis of $\mathcal{T}_x X$, which is close to an orthonormal basis, such that its last $d_s$ vectors form a basis of $E^s(x)$. We can ensure that, for any $\ell<n$, $DT^\ell(x)a_k$ is $L$-generic with respect to $\partial O_{i_j}$, for $d_u<k\leq d$. This is indeed a consequence of the definition of weak transversality. 
Moving slightly the vectors $a_k$ for $1\leq k\leq d_u$, we can also ensure that $DT^\ell(x)a_k$ is transversal to the hypersurfaces defining $\partial O_{i_j}$ at $T^\ell x$ for any $\ell < n$. Let $b_k =D\kappa_j(x) \cdot a_k$, so that $b_1,\dots,b_d$ is a basis of $\mathbb{R}^d$. Multiplying $a_k$ by a scalar, we can ensure that $b_k$ has norm $1$. If $O'$ is a small enough neighborhood of $x$, then $\kappa_\ell(O'\cap O_\mathbf{i})$ intersects almost any line oriented by one of the vectors $b_k$, $d_u<k\leq d$, along at most $nL$ connected components, by definition of $L$-genericity. Moreover, it intersects any line oriented by one of the vectors $b_k$, $1\leq k\leq d_u$, along at most one connected component by construction. Let $M$ be the matrix sending $b_1,\dots,b_d$ to the canonical basis of $\mathbb{R}^d$, it satisfies the requirements of the lemma. \end{proof} The following lemma on partitions of unity is similar to \cite[Lemma 7.1]{BT1}. \begin{lem} \label{lem:sum} Let $t$ and $t_-$ be arbitrary real numbers. There exists a constant $C_{\#}$ such that, for any distributions $v_1,\dots, v_l$ with compact support in $\mathbb{R}^d$, belonging to $H_p^{t,t_-}$, there exists a constant $C$ with \begin{equation} \norm{ \sum_{i=1}^l v_i}{H_p^{t,t_-}}^p \leq C_{\#} m^{p-1} \sum_{i=1}^l \norm{v_i}{H_p^{t,t_-}}^p + C \sum_{i=1}^l \norm{v_i}{H_p^{t-1,t_-}}^p, \end{equation} where $m$ is the intersection multiplicity of the supports of the $v_i$'s, i.e., $m=\sup_{x\in \mathbb{R}^d} \Card\{i \;|\; x\in \supp(v_i)\}$. \end{lem} \begin{proof} Let $A$ be the operator acting on distributions by $A v=\mathcal{F}^{-1}((1+|\xi|^2+|\eta|^2)^{t/2}(1+|\eta|^2)^{t_-/2} \mathcal{F} v)$, so that $\norm{v}{\mathcal{H}_p^{t,t_-}}=\norm{Av}{L_p}$. \cite[Lemma 2.7]{baladi:Cinfty} shows that, for any distribution $v$ with compact support $K$ and any neighborhood $K'$ of this support, there exist $C>0$ and a function $\Psi:\mathbb{R}^d\to [0,1]$ equal to $1$ on $K$ and vanishing on the complement of $K'$, with \begin{equation} \norm{\Psi Av - Av}{L_p} \leq C \norm{v}{H_p^{t-1,t_-}}. \end{equation} Let $v_1,\dots,v_l$ be distributions with compact supports whose intersection multiplicity is $m$. Choose neighborhoods $K'_1,\dots,K'_l$ of the supports of the $v_i$s whose intersection multiplicity is also $m$, and functions $\Psi_1,\dots,\Psi_l$ as above. Then \begin{equation}\label{previous} \norm{\sum_i v_i}{H_p^{t,t_-}}^p= \norm{\sum_i Av_i}{L_p}^p \leq \norm{\sum_i \Psi_i Av_i}{L_p}^p + C\sum_i\norm{v_i}{H_p^{t-1,t_-}}^p. \end{equation} By convexity, the inequality $(x_1+\dots+x_m)^p \leq m^{p-1}\sum x_i^p$ holds for any nonnegative numbers $x_1,\dots,x_m$. Since the multiplicity of the $K'_i$s is at most $m$, this yields \begin{equation} \left|\sum_i \Psi_i Av_i\right|^p \leq m^{p-1} \sum_i |Av_i|^p. \end{equation} Integrating this inequality and using (\ref{previous}), we get the lemma. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:MainSpectralThm}] Let $p$, $t$ and $t_-$ be as in the assumptions of the theorem. Let $n>0$, and let $r_n>1$ (the precise value of $r_n$ will be chosen later). We define a dilation $R_n$ on $\mathbb{R}^d$ by $R_n(z)=r_n z$. Let $\norm{u}{n}$ be another norm on $\mathcal{H}_p^{t,t_-}$, given by \begin{equation}\label{zoom} \norm{u}{n}=\sum \norm{(\rho_j u)\circ \kappa_j^{-1} \circ R_n^{-1}}{H_p^{t,t_-}}. \end{equation} The norm $\norm{u}{n}$ is of course equivalent to the usual norm on $\mathcal{H}_p^{t,t_-}$, but we look at the space $X_0$ at a smaller scale. 
Functions are much flatter at this new scale, so that estimates involving their $C^\alpha$ norm, such as Lemma \ref{Leib} or Lemma~\ref{lem:CompositionDure}, will not cause problems. This will also enable us to use partitions of unity with very small supports without spoiling the estimates. The use of this ``zooming'' norm is similar to the good choice of $\epsilon_0$ in \cite{Sau}, or the use of weighted norms in \cite{DL}. We will prove that, if $n$ is fixed and $r_n$ is large enough, then \begin{multline} \label{eq:main} \norm{\mathcal{L}_g^n u}{n}^p\leq C \norm{u}{\mathcal{H}_p^{0,t_-}}^p\\+ C_{\#} n^p D_n^b (D_n^e)^{p-1} \norm{ |\det DT^n| \max(\lambda_{u,n}^{-t}, \lambda_{s,n}^{-(t+t_-)})^p |g^{(n)}|^p}{L_\infty}\norm{u}{n}^p. \end{multline} The injection of $\mathcal{H}_p^{t,t_-}$ into $\mathcal{H}_p^{0,t_-}$ is compact. Hence, by Hennion's theorem \cite{He}, the essential spectral radius of $\mathcal{L}_g^n$ acting on $\mathcal{H}_p^{t,t_-}$ (for either $\norm{u}{\mathcal{H}_p^{t,t_-}}$ or $\norm{u}{n}$, since these norms are equivalent) is at most \begin{equation} \Bigl[C_{\#} n^p D_n^b (D_n^e)^{p-1} \norm{ |\det DT^n| \max(\lambda_{u,n}^{-t}, \lambda_{s,n}^{-(t+t_-)})^p |g^{(n)}|^p}{L_\infty}\Bigr]^{1/p}. \end{equation} Taking the power $1/n$ and letting $n$ tend to $\infty$, we obtain Theorem \ref{thm:MainSpectralThm} since the quantity $(C_{\#} n^p)^{1/(pn)}$ converges to $1$ (here, it is essential that $C_{\#}$ does not depend on $n$). \smallskip It remains to prove \eqref{eq:main}, for large enough $r_n$. The estimate will be subdivided into three steps: \begin{enumerate} \item Decomposing $u$ into a sum of distributions $v_{j,m}$ with small supports and well controlled $\norm{\cdot}{n}$ norms. \item Estimating each term $(1_{O_{\mathbf{i}}} g^{(n)} v_{j,m})\circ T_{\mathbf{i}}^{-1}$, for $\mathbf{i}$ of length $n$. \item Adding all terms to obtain $\mathcal{L}_g^n u$. \end{enumerate} \emph{First step.} For $1\leq j\leq J$ and $m\in \mathbb{Z}^d$, let $\tilde v_{j,m}=\eta_m \cdot(\rho_j u)\circ \kappa_j^{-1}\circ R_n^{-1}$, where $\eta_m(x)=\eta(x+m)$, with $\eta: \mathbb{R}^d \to [0,1]$ a compactly supported $C^\infty$ function so that $\sum_{m \in \mathbb{Z}^d} \eta_m=1$. Since the intersection multiplicity of the supports of the functions $\eta_m$ is bounded, this is also the case for the $\tilde v_{j,m}$. Moreover, if $j$ is fixed, we get using Lemma~\ref{lem:localization} \begin{equation} \label{eq:decompose} \begin{split} \sum_{m\in \mathbb{Z}^d} \norm{\tilde v_{j,m}}{H_p^{t,t_-}}^p& =\sum_{m\in \mathbb{Z}^d} \norm{\eta_m \cdot (\rho_j u)\circ \kappa_j^{-1}\circ R_n^{-1}}{H_p^{t,t_-}}^p \\& \leq C_{\#} \norm{(\rho_j u)\circ \kappa_j^{-1}\circ R_n^{-1}}{H_p^{t,t_-}}^p \leq C_{\#} \norm{u}{n}^p. \end{split} \end{equation} Since $R_n$ expands the distances by a factor $r_n$ while the size of the supports of the functions $\eta_m$ is uniformly bounded, the supports of the distributions $$v_{j,m}= \tilde v_{j,m}\circ R_n \circ \kappa_j=\eta_m \circ R_n \circ \kappa_j\cdot(\rho_j u) $$ are arbitrarily small if $r_n$ is large enough. Finally \begin{equation} u=\sum_j \rho_j u = \sum_{j,m} v_{j,m}. \end{equation} \emph{Second step.} Fix $j,k\in \{1,\dots,J\}$, $m\in \mathbb{Z}^d$ and $\mathbf{i}=(i_0,\dots,i_{n-1})$.
We will prove that \begin{multline} \label{eq:secondstep} \norm{ (\rho_k (g^{(n)}1_{O_\mathbf{i}} v_{j,m})\circ T_\mathbf{i}^{-1})\circ \kappa_k^{-1}\circ R_n^{-1}}{H_p^{t,t_-}} \leq C \norm{u}{\mathcal{H}_p^{0,t_-}}\\ + C_{\#} n \norm{ |\det DT^n|^{1/p} g^{(n)}\max(\lambda_{u,n}^{-t}, \lambda_{s,n}^{-(t+t_-)})}{L_\infty}\norm{\tilde v_{j,m}}{H_p^{t,t_-}}. \end{multline} First, if the support of $v_{j,m}$ is small enough (which can be ensured by taking $r_n$ large enough), there exists a neighborhood $O$ of this support and a matrix $M$ satisfying the conclusion of Lemma~ \ref{lem:truncate}: this follows from Lemma~ \ref{lem:truncate} and the compactness of $X_0$. Therefore, the intersection of $R_n (M (\kappa_j(O\cap O_\mathbf{i})))$ with almost any line parallel to a coordinate axis contains at most $C_{\#} n$ connected components. Hence, Lemma~\ref{lem:multiplier} implies that the multiplication by $1_{O\cap O_\mathbf{i}}\circ \kappa_j^{-1} \circ M^{-1}\circ R_n^{-1}$ sends $H_p^{t,t_-}$ into itself, with a norm bounded by $C_{\#} n$. Using the fact that $M$ and $R_n$ commute, the properties of $M$, and Lemma~ \ref{lem:CompositionFacile}, we get \begin{equation}\label{v'} \norm{ 1_{O_\mathbf{i}}\circ \kappa_j^{-1}\circ R^{-1}_n \cdot \tilde v_{j,m}}{H_p^{t,t_-}} \leq C_{\#} n \norm{\tilde v_{j,m}}{H_p^{t,t_-}}. \end{equation} (Recall that $v_{j,m}$ is supported inside $O$.) Next, let $$\tilde v_{j,k,m}=((\rho_k \circ T_{\mathbf{i}}) 1_{O_\mathbf{i}})\circ \kappa_j^{-1}\circ R^{-1}_n \cdot \tilde v_{j,m}$$ (we suppress $\mathbf{i}$ from the notation for simplicity). Let also $\chi$ be a $C^\infty$ function supported in the neighborhood $O$ of the support of $v_{j,m}$ with $\chi\equiv 1$ on this support. Up to taking larger $r_n$ we may ensure that $\norm{(\chi (\rho_k \circ T_\mathbf{i} ))\circ \kappa_j^{-1}\circ R_n^{-1}}{C^\alpha}\le C_{\#}$. Then Lemma~\ref{Leib} and \eqref{v'} imply \begin{equation}\label{v''} \norm{\tilde v_{j,k,m}}{H_p^{t,t_-}} \leq C_{\#} n \norm{\tilde v_{j,m}}{H_p^{t,t_-}} \end{equation} In addition, we have \begin{align}\label{glu} ((\rho_k \circ T_{\mathbf{i}})1_{O_\mathbf{i}} v_{j,m})\circ T_\mathbf{i}^{-1}\circ \kappa_k^{-1}\circ R_n^{-1} &=\tilde v_{j,k,m} \circ R_n\circ \kappa_j \circ T_\mathbf{i}^{-1}\circ \kappa_k^{-1}\circ R_n^{-1}\\ \nonumber &= \tilde v_{j,k,m} \circ R_n \circ F \circ R_n^{-1}, \end{align} where $F$ is given by Lemma~ \ref{lem:iterate} (we use the fact that the support of $v_{j,m}\circ T_\mathbf{i}^{-1}$ is contained in a very small neighborhood $O'$ if $r_n$ is large enough, and again the compactness of $X_0$). The diffeomorphism $F$ satisfies the assumptions of Lemma~ \ref{lem:CompositionDure}. Since the dilations $R_n$ commute with any matrix, this is also the case of the diffeomorphism $G=R_n\circ F\circ R_n^{-1}$. Applying Lemma \ref{lem:CompositionDure} to $G$, we get (for some point $x$ in the support of $v_{j,m}$, and some matrix $A$ of the form $DF(R_n^{-1}(z))$ for some $z$) \begin{multline}\label{gla} \norm{\tilde v_{j,k,m} \circ R_n \circ F \circ R_n^{-1}}{H_p^{t,t_-}} \leq C \norm{u}{\mathcal{H}_p^{0,t_-}} \\+C_{\#} \norm{\frac{\det A}{\det D G}}{C^\alpha} |\det A|^{-1/p} \max(\lambda_{u,n}(x)^{-t}, \lambda_{s,n}(x)^{-(t+t_-)})\norm{\tilde v_{j,k,m}}{H_p^{t,t_-}}. \end{multline} The factor $\det A$ is close to $\det DT_\mathbf{i}(x)^{-1}$. Moreover, $\det DG=(\det DF)\circ R_n^{-1}$. 
By choosing $r_n$ large enough, we can make sure that the $C^\alpha$ norm of $\det DG$ is controlled by its sup norm, to ensure that $\norm{\det A/\det DG}{C^\alpha}$ is uniformly bounded. Let $\chi'$ be a $C^\infty$ function supported in $O'$ with $\chi'\equiv 1$ on the support of $v_{j,m}\circ T_\mathbf{i}^{-1}$. For $\delta>0$, we can ensure by increasing $r_n$ that the $C^\alpha$ norm of $(\chi' g^{(n)} )\circ T_\mathbf{i}^{-1} \circ\kappa_k^{-1}\circ R_n^{-1}$ is bounded by $|g^{(n)}(x)|+\delta$ for some $x$ in the support of $v_{j,m}$. Choosing $\delta>0$ small enough, we deduce from \eqref{gla}, Lemma~ \ref{Leib} and \eqref{v''} \begin{multline*} \norm{ (\rho_k(g^{(n)}1_{O_\mathbf{i}} v_{j,m})\circ T_\mathbf{i}^{-1})\circ \kappa_k^{-1}\circ R_n^{-1}}{H_p^{t,t_-}} \leq C \norm{u}{\mathcal{H}_p^{0,t_-}} \\ + C_{\#} n \norm{ |\det DT^n|^{1/p} g^{(n)}\max(\lambda_{u,n}^{-t}, \lambda_{s,n}^{-(t+t_-)})}{L_\infty}\norm{\tilde v_{j,m}}{H_p^{t,t_-}}. \end{multline*} This proves \eqref{eq:secondstep}. \emph{Third step.} We have $\mathcal{L}_g^n u =\sum_{j,m} \sum_{\mathbf{i}} (1_{O_\mathbf{i}} g^{(n)} v_{j,m})\circ T_\mathbf{i}^{-1}$. (Note that only finitely many terms in this sum are nonzero by compactness of the support of each $\rho_j$.) We claim that the intersection multiplicity of the supports of the functions $(1_{O_\mathbf{i}} g^{(n)} v_{j,m})\circ T_\mathbf{i}^{-1}$ is bounded by $C_{\#} D_n^e$. Indeed, this follows from the fact that any point $x\in X_0$ belongs to at most $D_n^e$ sets $\overline{T_\mathbf{i}(O_\mathbf{i})}$, and that the intersection multiplicity of the supports of the functions $v_{j,m}$ is bounded. To estimate $\norm{\mathcal{L}_g^n u}{n}$, we have to bound each term $\norm{ (\rho_k \mathcal{L}_g^n u)\circ \kappa_k^{-1}\circ R_n^{-1}}{H_p^{t,t_-}}$, for $1\leq k\leq J$. Let us fix such a $k$. By Lemma \ref{lem:sum}, we have \begin{multline*} \norm{ (\rho_k \mathcal{L}_g^n u)\circ \kappa_k^{-1}\circ R_n^{-1}}{H_p^{t,t_-}}^p \leq C \norm{u}{\mathcal{H}_p^{0,t_-}}^p \\ +C_{\#} (C_{\#} D_n^e)^{p-1}\sum_{j,m,\mathbf{i}} \norm{(\rho_k(1_{O_\mathbf{i}} g^{(n)} v_{j,m})\circ T_\mathbf{i}^{-1})\circ \kappa_k^{-1}\circ R_n^{-1}}{H_p^{t,t_-}}^p. \end{multline*} We can bound each term in the sum using \eqref{eq:secondstep} and the convexity inequality $(a+b)^p \leq 2^{p-1}(a^p+b^p)$. Moreover, for any $(j,m)$, the number of parameters $\mathbf{i}$ for which the corresponding term is nonzero is bounded by the number of sets $\overline{O_\mathbf{i}}$ intersecting the support of $v_{j,m}$. Choosing $r_n$ large enough, we can ensure that the supports of the $v_{j,m}$ are small enough so that this number is bounded by $D_n^b$. Together with \eqref{eq:decompose}, this concludes the proof of \eqref{eq:main}, and of Theorem \ref{thm:MainSpectralThm}. \end{proof}
\section{Introduction} The first definition of entropy that appeared in the literature is in the context of thermodynamics. Being commonly understood as a measure of disorder, entropy is defined from the thermodynamic viewpoint as a measure of the number of specific ways in which a thermodynamic system may be arranged. However, the microscopic details of a system are not considered in this context. The definition of entropy from the statistical mechanics point of view appeared later along with other thermodynamic properties. In this context, entropy is considered as an extensive property of a thermodynamic system wherein thermodynamic properties are defined in terms of the statistics of the motions of the microscopic constituents of a system. \smallskip It is known that the entropy of an isolated system never decreases, which is the essence of the second law of thermodynamics. Such a system will spontaneously proceed towards thermodynamic equilibrium, the configuration with maximum entropy \cite{Gibbs}. There are three macroscopic variables that describe a system in thermodynamic equilibrium, corresponding to thermal, mechanical and chemical equilibrium. To each value of these macroscopic variables, there exist several possible microscopic configurations. These will then entail different systems, and the collection of these systems is called an ensemble. One of the popular ensembles is the canonical ensemble, which is statistical in nature and represents the possible states of a mechanical system in thermal equilibrium with a heat bath at a fixed temperature. \smallskip Some physical systems cannot be described by Boltzmann-Gibbs (BG) statistical mechanics \cite{Asgarani, 151569, Shlesinger-Zaslavsky-Klafter, Bediaga-Curado-deMiranda, Walton-Rafelski, Binney-Tremaine, Clayton}. However, Tsallis \cite{Tsallis1988} overcame some of these difficulties by introducing the following $q$-entropy \begin{equation} S_q=k\sum_{i=1}^{\omega}p_i\ln_q\frac{1}{p_i}, \end{equation} where $k$ is a positive constant and $\omega$ is the total number of microscopic states. For any real number $x$ and $q>0$, $\ln_q x$, called the $q$-logarithm, is defined by \begin{equation} \ln_q x=\frac{x^{1-q}-1}{1-q},\;\;\;\ln_1x=\ln x. \end{equation} The inverse function of the $q$-logarithm is called the $q$-exponential and is given by \begin{equation} \exp_q x=[1+(1-q)x]^{\frac{1}{1-q}},\;\;\;\exp_1x=\exp x. \end{equation} In the case of equiprobability, the BG entropy is recovered in the limit $q\to1$. \smallskip A two-parameter entropy $S_{q,q'}$ that recovers the $q$-entropy $S_q$ in the limit $q'\to1$ was defined in \cite{Schwammle-Tsallis} as \begin{equation}\label{twoparaentropy} S_{q,q'}\equiv\sum_{i=1}^{\omega}p_i\ln_{q,q'}\frac{1}{p_i}=\frac{1}{1-q'}\sum_{i=1}^{\omega}p_i\left[\exp\left(\frac{1-q'}{1-q}(p_i^{q-1}-1)\right)-1\right]. \end{equation} Applications of $S_q$ to a class of energy-based ensembles were done in \cite{Chandrashekar-Mohammed}, while applications of $S_{q,q'}$ to adiabatic ensembles were done in \cite{Chandrashekar-Segar}. Results in the applications of $S_{q,q'}$ involved the well-known Lambert W function. 
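\smallskip As a quick numerical illustration (ours, not part of the cited references), the $q$-logarithm, $q$-exponential and $q$-entropy above can be checked with a few lines of Python; the parameter values, tolerances and function names are arbitrary choices of ours.
\begin{verbatim}
import numpy as np

def ln_q(x, q):
    # q-logarithm: (x^(1-q) - 1)/(1-q), with ln_1 x = ln x
    return np.log(x) if np.isclose(q, 1.0) else (x**(1.0 - q) - 1.0) / (1.0 - q)

def exp_q(x, q):
    # q-exponential: [1 + (1-q) x]^(1/(1-q)), with exp_1 x = exp x
    return np.exp(x) if np.isclose(q, 1.0) else (1.0 + (1.0 - q) * x)**(1.0 / (1.0 - q))

x, q = 2.5, 0.7
assert np.isclose(exp_q(ln_q(x, q), q), x)                # inverse pair
assert np.isclose(ln_q(x, 0.999), np.log(x), atol=1e-3)   # q -> 1 limit

p = np.array([0.5, 0.3, 0.2])            # toy probability distribution
S_q = np.sum(p * ln_q(1.0 / p, q))       # q-entropy with k = 1
\end{verbatim}
The same pattern extends directly to the two- and three-parameter logarithms discussed below.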
\smallskip A three-parameter entropy $S_{q,q',r}$ that recovers $S_{q,q'}$ in the limit $r\to1$ was defined in \cite{Corcino-Corcino} as \begin{equation}\label{three-parameter entropy} S_{q,q^{\prime },r} \equiv k\sum_{i=1}^w{p_i\ln_{ q,q^{\prime },r} \frac{1}{p_i}}, \end{equation} where $k$ is a positive constant and \begin{equation} \label{lnq} \ln_{q,q',r}x \equiv \frac{1}{1-r}\left(\exp\left(\frac{1-r}{1-q'}\left(e^{(1-q')\ln_q x}-1\right)\right)-1\right). \end{equation} The three-parameter entropic function \eqref{three-parameter entropy} was shown to be analytic (hence, Lesche-stable), concave and convex in specified ranges of the parameters (see \cite{Corcino-Corcino}). \smallskip In this paper, another variation of the Lambert W function, called the translated logarithmic Lambert function, will be introduced. Moreover, the probability distribution of the three-parameter entropy is derived and expressed in terms of the translated logarithmic Lambert function. \section{Translated Logarithmic Lambert Function} The generalization of the Lambert $W$ function introduced here is the translated logarithmic Lambert function, denoted by $W_\mathcal{LT}(x)$, which is defined as follows: \begin{defn}\label{def1}\rm For any real number $x$ and constant $B$, the translated logarithmic Lambert function $W_\mathcal{LT}(x)$ is defined to be the solution to the equation \begin{equation} (Ay\ln(By)+y+C)e^y=x. \label{defn of log lambert} \end{equation} \end{defn} \smallskip Observe that $y$ cannot be zero. Moreover, $By$ must be positive. By Definition \ref{def1}, $y=W_\mathcal{LT}(x)$. The derivative of $W_\mathcal{LT}(x)$ with respect to $x$ can be readily determined as the following theorem shows. \begin{thm} The derivative of the translated logarithmic Lambert function is given by \begin{equation} \frac{dW_\mathcal{LT}(x)}{dx}=\frac{e^{-W_\mathcal{LT}(x)}}{[W_\mathcal{LT}(x)+1]A\ln\left(BW_\mathcal{LT}(x)\right) +W_\mathcal{LT}(x)+A+C+1}. \label{derivatives of log lambert} \end{equation} \end{thm} \begin{proof} Taking the derivative of both sides of \eqref{defn of log lambert} gives \begin{align*} (Ay\ln(By) +y+C)e^y \frac{dy}{dx}+\left(A+A\ln(By)+1\right)e^y \ \frac{dy}{dx}=1, \end{align*} from which \begin{equation} \frac{dy}{dx}=\frac{1}{\left[Ay\ln(By)+y+C+A+A\ln(By)+1\right]e^y}. \label{dy/dx} \end{equation} With $y=W_\mathcal{LT}(x)$, \eqref{dy/dx} reduces to \eqref{derivatives of log lambert}. \end{proof} \smallskip The integral of the translated logarithmic Lambert function is given in the next theorem. \begin{thm} The integral of $W_\mathcal{LT}(x)$ is \begin{align} \int W_\mathcal{LT}(x)\ dx &=e^{W_\mathcal{LT}(x)}\left[\left(W^2_\mathcal{LT}(x)-W_\mathcal{LT}(x)+1\right)A\ln\left(BW_\mathcal{LT}(x)\right)+W^2_\mathcal{LT}(x)\right.\nonumber\\ &\left. +(C-1)W_\mathcal{LT}(x)+1+A-C\right]-2Ei\left(W_\mathcal{LT}(x)\right)+C', \label{integral of log lambert} \end{align} where $Ei(x)$ is the exponential integral given by $$Ei(x)=\int \frac{e^x}{x}dx.$$ \end{thm} \begin{proof} From \eqref{defn of log lambert}, \begin{equation*} dx=\left[Ay\ln(By)+y+C+A+A\ln(By)+1\right]e^y\ dy. \end{equation*} Thus, \begin{align} \int y\ dx&=\int y\left[Ay\ln(By)+y+C+A+A\ln(By)+1\right]e^y\ dy \nonumber \\ &=A\int y^2e^y\ln(By)\ dy+A\int ye^y\ln(By)\ dy+\int y^2e^y\ dy \nonumber \\ &\;\;\;\;\;+(C+A+1)\int ye^ydy. 
\label{integral of y dx} \end{align} These integrals can be computed using integration by parts to obtain \begin{equation} (A+C+1)\int ye^y\ dy=(A+C+1)(y-1)e^y+C_1, \label{fourth integral} \end{equation} \begin{equation} \int y^2e^y\ dy=(y^2-2y+2)e^y+C_2, \label{third integral} \end{equation} \begin{equation} A\int ye^y\ln(By)\ dy=Ae^y\left((y-1)\ln(By)-1\right)+Ei(y)+C_3, \label{second integral} \end{equation} \begin{equation} A\int y^2e^y\ln(By)\ dy=Ae^y\left[(y^2-2y+2)\ln(By)-y+3\right]-2Ei(y)+C_4, \label{first integral} \end{equation} where $C_1, C_2, C_3$ are constants. Substitution of \eqref{fourth integral}, \eqref{third integral}, \eqref{first integral} and \eqref{second integral} to \eqref{integral of y dx} with $C'=C_1+C_2+C_3+C_4$, and writing $W_\mathcal{LT}(x)$ for $y$ will give \eqref{integral of log lambert}. \end{proof} \smallskip The next theorem contains the Taylor series expansion of $W_\mathcal{LT}(x)$. \begin{thm} Few terms of the Taylor series of $W_\mathcal{LT}(x)$ about 0 are given below: \begin{equation} W_\mathcal{LT}(x)=\frac{1}{B}e^{W\left(\frac{-BCe^{1/A}}{A}\right)-\frac{1}{A}}+\frac{e^{\frac{-1}{B}e^{W\left(\frac{-BCe^{1/A}}{A}\right)-\frac{1}{A}}}}{A\left[W\left(\frac{-BCe^{1/A}}{A}\right)+1\right]} \ x+\cdots \label{Taylor series} \end{equation} where $W(x)$ is the classical Lambert W function. \end{thm} \begin{proof} Being the inverse of the function defined by $x=y\ln(By)e^y$, the Lagrange inversion theorem is the key to obtain the Taylor series of the function $W_\mathcal{LT}(x)$. \\ \indent Let $f(y)=(Ay\ln(By)+y+C)e^y$. The function $f$ is analytic for $By>0$. Moreover, $f'(y)=\left[A(y+1)\ln(By)+y+A+C+1\right]e^y$, $$f'\left(\frac{1}{B}e^{W\left(\frac{-BCe^{1/A}}{A}\right)-\frac{1}{A}}\right)=Ae^{\frac{1}{B}e^{W\left(\frac{-BCe^{1/A}}{A}\right)-\frac{1}{A}}}\left[W\left(\frac{-BCe^{1/A}}{A}\right)+1\right]\neq 0, $$ where $W\left(\frac{-BCe^{1/A}}{A}\right)\neq -1 \ (A\neq0)$, and for finite $B$, $$f\left(\frac{1}{B}e^{W\left(\frac{-BCe^{1/A}}{A}\right)-\frac{1}{A}}\right)=0.$$ By the Lagrange Inversion Theorem, taking $a=\frac{1}{B}e^{W\left(\frac{-BCe^{1/A}}{A}\right)-\frac{1}{A}}$, we have \begin{equation} W_\mathcal{LT}(x)=\frac{1}{B}e^{W\left(\frac{-BCe^{1/A}}{A}\right)-\frac{1}{A}}+\sum_{n = 1}^\infty g_n \frac{x^n}{n!}, \label{from Lagrange theorem} \end{equation} where \begin{equation} g_n=\lim_{y\to a} \frac{d^{n-1}}{dy^{n-1}} \left(\frac{y-\frac{1}{B}e^{W\left(\frac{-BCe^{1/A}}{A}\right)-\frac{1}{A}}}{f(y)}\right)^n. \label{gn} \end{equation} That is, when $n=1$, \begin{align*} g_1=\frac{e^{\frac{-1}{B}e^{W\left(\frac{-BCe^{1/A}}{A}\right)-\frac{1}{A}}}}{A\left[W\left(\frac{-BCe^{1/A}}{A}\right)+1\right]}. \end{align*} Substituting to \eqref{from Lagrange theorem} will yield \eqref{Taylor series}. \end{proof} \smallskip An approximation formula for $W_\mathcal{LT}(x)$ expressed in terms of the classical Lambert $W$ function is proved in the next theorem. \begin{thm} For large $x$, \begin{align} W_\mathcal{LT}(x) &\sim W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)-\ln\left\{\left(\frac{e^{\frac{C}{A+1}}}{A+1}\right)\left[A\ln\left(BW\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)\right)+1\right]\right.\nonumber\\ &\;\;\;\;\;\;\left.+\frac{C}{x}e^{W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)}\right\}-\frac{C}{A+1}.\label{approximation} \end{align} where $W(x)$ denotes the Lambert $W$ function. \end{thm} \begin{proof} From \eqref{defn of log lambert}, $y=W_\mathcal{LT}(x)$ satisfies \begin{equation*} x=[Ay(\ln By)+y+C]e^y\sim [(A+1)y+C]e^y. 
\end{equation*} Then \begin{align} y&=W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)-\frac{C}{A+1}+u(x)\label{eqn15}\\ &=W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)\left[1-\frac{\frac{C}{A+1}}{W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)}+\frac{u(x)}{W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)}\right],\label{expression with u(x)} \end{align} where $u(x)$ is a function to be determined. Substituting \eqref{expression with u(x)} to \eqref{defn of log lambert} yields \begin{align} &\left\{AW\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)\left[1-\frac{\frac{C}{A+1}}{W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)}+\frac{u(x)}{W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)}\right]\times\right.\nonumber\\ &\;\;\;\left.\ln\left(BW\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)\left[1-\frac{\frac{C}{A+1}}{W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)}+\frac{u(x)}{W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)}\right]\right)+\right.\nonumber\\ &\;\;\;\left.W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)\left[1-\frac{\frac{C}{A+1}}{W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)}+\frac{u(x)}{W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)}\right]+C\right\}\times\nonumber\\ &\;\;\;e^{W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)}\cdot e^{u(x)}=x. \label{expression1 for x} \end{align} With $\frac{C}{A+1}, u(x)<<W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)$, \eqref{expression1 for x} becomes \begin{align*} &\left\{AW\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)e^{W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)}\ln\left(BW\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)\right)+\right.\nonumber\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\left.\left[W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)+C\right]e^{W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)}\right\}e^{u(x)}=x. \end{align*} \begin{align*} \left\{\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)\left[A\ln\left(BW\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)+1\right]+Ce^{W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)}\right]\right\}e^{u(x)}=x. \end{align*} \begin{align*} \left\{\left(\frac{e^{\frac{C}{A+1}}}{A+1}\right)\left[A\ln\left(BW\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)+1\right]+\frac{C}{x}e^{W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)}\right]\right\}e^{u(x)}=1. \end{align*} Thus, \begin{align*} u(x)=-\ln\left\{\left(\frac{e^{\frac{C}{A+1}}}{A+1}\right)\left[A\ln\left(BW\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)\right)+1\right]+\frac{C}{x}e^{W\left(\frac{xe^{\frac{C}{A+1}}}{A+1}\right)}\right\}. \end{align*} Substituting this to \eqref{eqn15} yields \eqref{approximation}. \end{proof} \smallskip The table below illustrates the accuracy of the approximation formula in \eqref{approximation} with $A=B=C=1$. \begin{table*}[htbp] \centering \begin{tabular}{|c|c|c|c|} \hline $x$ & $W_\mathcal{LT}(x)$ & Approximate & Relative Error\\ & & Value &\\ \hline $3575.7472$ & $4$ & $3.3121$ & $1.71987\times 10^{-1}$\\ \hline $2084.7878$ & $5$ & $4.3301$ & $1.33982\times 10^{-1}$\\ \hline $7161.0857$ & $6$ & $5.3453$ & $1.09116\times 10^{-1}$\\ \hline $23710.7124$ & $7$ & $6.3581$ & $9.16961\times 10^{-2}$\\ \hline $76418.4449$ & $8$ & $7.3690$ & $7.88738\times 10^{-2}$\\ \hline $241269.4957$ & $9$ & $8.3783$ & $6.90741\times 10^{-2}$ \\ \hline $749469.2416$ & $10$ & $9.3864$ & $6.13602\times 10^{-2}$\\ \hline \end{tabular} \end{table*} \bigskip The next theorem describes the branches of the translated logarithmic Lambert function. \bigskip \begin{thm} Let $x=f(y)=[Ay\ln(By)+y+C]e^y$. 
Then the branches of the translated logarithmic Lambert function $y=W_\mathcal{LT}(x)$ can be described as follows: \begin{enumerate} \item When $B>0, A>0$, the branches are \begin{itemize} \item $W^0_\mathcal{LT}(x) : [f(\delta),f(0))\to (0, \delta]$ is strictly decreasing; \item[] \item $W^1_\mathcal{LT}(x) : [f(\delta), +\infty)\to [\delta, +\infty)$ is strictly increasing, \end{itemize} \item[] \item When $B>0, A<0$, the branches are \begin{itemize} \item $W^0_\mathcal{LT}(x) : [f(0),f(\delta))\to (0, \delta]$ is strictly increasing; \item[] \item $W^1_\mathcal{LT}(x) : (-\infty, f(\delta)]\to [\delta, +\infty)$ is strictly decreasing, \end{itemize} where $\delta$ is the unique solution to \begin{equation}\label{singu} Ay\ln(By)+y+C+A+A\ln(By)=-1. \end{equation} \item When $B<0, A>0, |C|\leq A$, the branches are \begin{itemize} \item $W^0_{\mathcal{LT},<}(x) : (f(0), f(\delta_1)]\to [\delta_1,0)$ is strictly decreasing; \item[] \item $W^1_{\mathcal{LT},<}(x) : [f(\delta_2),f(\delta_1)]\to [\delta_2,\delta_1]$ is strictly increasing, \item[] \item $W^2_{\mathcal{LT},<}(x) : [f(\delta_2),0)\to (-\infty,\delta_2]$ is strictly decreasing, \end{itemize} \item[] \item When $B<0, A<0, C\leq |A|$, the branches are \begin{itemize} \item $W^0_{\mathcal{LT},<}(x) : [f(\delta_1), f(0)]\to [\delta_1,0)$ is strictly increasing; \item[] \item $W^1_{\mathcal{LT},<}(x) : [f(\delta_1),f(\delta_2)]\to [\delta_2,\delta_1]$ is strictly decreasing, \item[] \item $W^2_{\mathcal{LT},<}(x) : (0, f(\delta_2)]\to (-\infty,\delta_2]$ is strictly increasing, \end{itemize} where $\delta_1$ and $\delta_2$ are the two solutions to \eqref{singu} with $$\delta_2<\frac{1}{B}e^{W\left(\frac{-BCe^{1/A}}{A}\right)-\frac{1}{A}}<\delta_1<0.$$ \end{enumerate} \end{thm} \begin{proof} Consider the case when $B>0, A>0$. Let $x=f(y)=[Ay\ln(By)+y+C]e^y$. From equation \eqref{derivatives of log lambert}, the derivative of $y=W_\mathcal{LT}(x)$ is not defined when $y$ satisfies \eqref{singu}. The solution $y=\delta$ to \eqref{singu} can be viewed as the intersection of the functions $$g(y)=\frac{-y-C-A-1}{y+1}\;\;\;\;\mbox{and}\;\;\;\;h(y)=A\ln (By).$$ Clearly, the solution is unique. Thus, the derivative $\frac{dW_\mathcal{LT}(x)}{dx}$ is not defined for $x=f(\delta)=[A\delta\ln(B\delta)+\delta+C]e^{\delta}$. The value of $f(\delta)$ can then be used to determine the branches of $W_\mathcal{LT}(x)$. To explicitly identify the said branches, the following information are important: \begin{enumerate} \item the value of $y$ must always be positive, otherwise, $\ln (By)$ is undefined; \item the function $y=W_\mathcal{LT}(x)$ has only one $y$-intercept, i.e., $y=\frac{1}{B}$; \item if $y<\delta$, $A(y+1)\ln (By)+y+A+C+1>0$ which gives $\frac{dy}{dx}>0$; \item if $y>\delta$, $A(y+1)\ln (By)+y+A+C+1<0$ which gives $\frac{dy}{dx}<0$; \item if $y=\delta$, $A(y+1)\ln (By)+y+A+C+1=0$ and $\frac{dy}{dx}$ does not exist \end{enumerate} These imply that \begin{enumerate} \item when $y<\delta$, the function $y=W_\mathcal{LT}(x)$ is increasing in the domain $[f(\delta),0)$ with range $(0, \delta]$ and the function crosses the $y$-axis only at $$y=\frac{1}{B}e^{W\left(\frac{-BCe^{1/A}}{A}\right)-\frac{1}{A}};$$ \item when $y>\delta$, the function $y=W_\mathcal{LT}(x)$ is decreasing, the domain is $[f(\delta),+\infty)$ and the range is $[\delta,+\infty)$ because this part of the graph does not cross the $x$-axis and $y$-axis; \item when $y=\delta$, the line tangent to the curve at the point $(f(\delta),\delta)$ is a vertical line. 
\end{enumerate} This proves the case when $B>0, A>0$. The case where $B>0, A<0$ can be proved similarly. For the case $B<0, A>0, |C|\leq A$, the solution to \eqref{singu} can be viewed as the intersection of the functions $$g(y)=\frac{-y-C-A-1}{y+1}\;\;\;\;\mbox{and}\;\;\;\;h(y)=A\ln (By).$$ These graphs intersect at two points $\delta_1$ and $\delta_2$. Thus, the derivative $\frac{dW_\mathcal{LT}(x)}{dx}$ is not defined for \begin{align*} x_1&=f(\delta_1)=[A\delta_1\ln(B\delta_1)+\delta_1+C]e^{\delta_1},\\ x_2&=f(\delta_2)=[A\delta_2\ln(B\delta_2)+\delta_2+C]e^{\delta_2}. \end{align*} Note that \begin{enumerate} \item the value of $y$ must always be negative, otherwise, $\ln (By)$ is undefined; \item the function $y=W_\mathcal{LT}(x)$ has only one $y$-intercept, i.e., $$y=\frac{1}{B}e^{W\left(\frac{-BCe^{1/A}}{A}\right)-\frac{1}{A}};$$ \item $g(y)$ is not defined at $y=-1$. \end{enumerate} The desired branches are completely determined as follows: \begin{enumerate} \item If $\delta_1<y<0$, then $A(y+1)\ln (By)+y+A+C+1<0$. This gives $\frac{dy}{dx}<0$. Thus, the function $y=W_\mathcal{LT}(x)$ is a decreasing function with domain $[f(0),f(\delta_1)]$ and range $[\delta_1,0]$; \item If $\delta_2\le y\le \delta_1$, then $A(y+1)\ln (By)+y+A+C+1>0$. This gives $\frac{dy}{dx}>0$. Thus, the function $y=W_\mathcal{LT}(x)$ is an increasing function with domain $[f(\delta_2),f(\delta_1)]$ and range $[\delta_2,\delta_1]$; \item If $-\infty<y<\delta_2$, then $A(y+1)\ln (By)+y+A+C+1<0$. This gives $\frac{dy}{dx}<0$. Thus $y=W_\mathcal{LT}(x)$ is a decreasing function with domain $[f(\delta_2),0)$ and range $(-\infty,\delta_2]$. \end{enumerate} The case where $B<0, A<0, C\leq |A|$ can be proved similarly. \end{proof} Figures 1 and 2 depict the graphs of the translated logarithmic Lambert function (red color graphs) when $B=1$ and $B=-1$. The $y$-coordinates of the points of intersection of the blue and black colored graphs correspond to the values of $\delta$, $\delta_1$ and $\delta_2$. \begin{figure}[t!] \centerline{\includegraphics[width=7cm]{graph_Bpos_Aneg}} \centerline{\footnotesize{\textbf{Figure 1}}. {\footnotesize{Graph of translated logarithmic Lambert Function with $B = 1, A = 2, C=1$. }}} \centerline{{\footnotesize{The graphs with red, blue and black colors are the graphs of }}} \centerline{{\footnotesize{$x=f(y)$, $x=g(y)$ and $x=h(y)$, respectively.}}} \end{figure} \begin{figure}[hbt!] \centerline{\includegraphics[width=7cm]{graph_Bneg_Apos_Cpos}} \centerline{\footnotesize{\textbf{Figure 2}}. {\footnotesize{Graph of translated logarithmic Lambert Function with $B = -1, A = -2, C=1$. }}} \centerline{{\footnotesize{The graphs with red, blue and black colors are the graphs of }}} \centerline{{\footnotesize{$x=f(y)$, $x=g(y)$ and $x=h(y)$, respectively.}}} \end{figure} \vspace{5pt} \bigskip \section{Applications to Entropy} In this section, an application of the translated logarithmic Lambert function to entropy in the canonical ensemble is derived. Parallel to the two-parameter entropy in \eqref{twoparaentropy}, the three-parameter entropy, denoted by $S_{q,q',r}$, can also be constructed based on the three-parameter logarithm as follows: \begin{align} S_{q,q',r}&=k\sum_{i=1}^{\omega}p_i\ln_{q,q',r} \frac{1}{p_i}\\ &=k\sum_{i=1}^{\omega}p_i\frac{1}{1-r}\left(\exp\left(\frac{1-r}{1-q'}\left(e^{(1-q')\ln_q x}-1\right)\right)-1\right), \end{align} where $x=\frac{1}{p_i}$. 
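\smallskip Before carrying out the maximization below, it may help to evaluate the three-parameter logarithm numerically. The following minimal Python sketch (our own illustration, with $k=1$; the function names are ours) uses the equivalent composition form $\ln_{q,q',r}x=\frac{1}{1-r}\left(e^{(1-r)\ln_{q,q'}x}-1\right)$, which follows from \eqref{lnq} and \eqref{twoparaentropy}, and checks that the limit $r\to1$ recovers the two-parameter logarithm.
\begin{verbatim}
import numpy as np

def ln_q(x, q):
    return np.log(x) if np.isclose(q, 1.0) else (x**(1.0 - q) - 1.0) / (1.0 - q)

def ln_qq(x, q, qp):
    # two-parameter logarithm: (exp((1-q') ln_q x) - 1)/(1-q')
    return (np.exp((1.0 - qp) * ln_q(x, q)) - 1.0) / (1.0 - qp)

def ln_qqr(x, q, qp, r):
    # three-parameter logarithm, written through ln_{q,q'}
    return (np.exp((1.0 - r) * ln_qq(x, q, qp)) - 1.0) / (1.0 - r)

q, qp, r = 1.3, 1.2, 1.1
p = np.array([0.5, 0.3, 0.2])                   # toy probabilities
S_qqr = np.sum(p * ln_qqr(1.0 / p, q, qp, r))   # three-parameter entropy, k = 1

x = 4.0
assert np.isclose(ln_qqr(x, q, qp, 1.0001), ln_qq(x, q, qp), atol=1e-3)  # r -> 1
\end{verbatim}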
In maximizing $S_{q,q',r}$, the following constraints are to be considered: \begin{align} &\sum_{i=1}^{\omega}p_i - 1 = 0\label{cons1}\\ &\sum_{i=1}^{\omega}(p_i\epsilon_i - E) = 0.\label{cons2} \end{align} Now, we construct the three-parameter entropic functional, denoted by $\Phi_{p,q',r}$, by adding the above constraints \eqref{cons1} and \eqref{cons2} to the entropy $S_{p,q',r}$ with Lagrange multipliers. That is, \begin{equation} \Phi_{p,q',r}(p_i, \alpha,\beta)=\frac{1}{k}S_{p,q',r}+\alpha\left(\sum_{i=1}^{\omega}p_i - 1\right)+\beta\sum_{i=1}^{\omega}(p_i\epsilon_i - E). \end{equation} The entropic functional $\Phi_{p,q',r}$ should be maximized in order to reach the equilibrium state. Hence, \begin{equation}\label{entro_func1} \frac{\partial\Phi_{p,q',r}(p_i, \alpha,\beta)}{\partial p_i}=\frac{1}{k}\frac{\partial S_{p,q',r}}{\partial p_i}+\alpha+\beta\epsilon_i=0. \end{equation} Note that \begin{equation}\label{entro_func2} \frac{1}{k}\frac{\partial S_{p,q',r}}{\partial p_i}=p_i\frac{\partial \ln_{p,q',r}\frac{1}{p_i}}{\partial p_i}+\ln_{p,q',r}\frac{1}{p_i} \end{equation} with \begin{align*} \frac{\partial \ln_{p,q',r}\frac{1}{p_i}}{\partial p_i}&=\frac{1}{1-r}\exp\left(\frac{1-r}{1-q'}\left(\exp\left(\frac{1-q'}{1-q}\left(p_i^{q-1}-1\right)\right)-1\right)\right)\times\\ &\;\;\;\;\;\;\;\;\;\;\frac{1-r}{1-q'}\exp\left(\frac{1-q'}{1-q}\left(p_i^{q-1}-1\right)\right)\frac{1-q'}{1-q}(q-1)p_i^{q-2}\\ &=e^{-\frac{1}{1-r}}\exp\left(\frac{1-r}{1-q'}\exp\left(\frac{1-q'}{1-q}\left(p_i^{q-1}-1\right)\right)\right)\times\\ &\;\;\;\;\;\;\;\;\;\;\exp\left(\frac{1-q'}{1-q}\left(p_i^{q-1}-1\right)\right)(-p_i^{q-2}). \end{align*} Letting \begin{equation}\label{eqn_u} u=\exp\left(\frac{1-q'}{1-q}\left(p_i^{q-1}-1\right)\right) \end{equation} yields \begin{align*} \frac{\partial \ln_{p,q',r}\frac{1}{p_i}}{\partial p_i}=e^{-\frac{1-r}{1-q'}}e^{\frac{1-r}{1-q'}u}u(-p_i^{q-2}). \end{align*} Then \begin{align*} \frac{\partial\Phi_{p,q',r}(p_i, \alpha,\beta)}{\partial p_i}&=-p_i^{q-1}e^{-\frac{1-r}{1-q'}}e^{\frac{1-r}{1-q'}u}u+\frac{1}{1-r}\left[e^{-\frac{1-r}{1-q'}}e^{\frac{1-r}{1-q'}u}-1\right]\\ &\;\;\;\;\;\;\;\;\;\;+\alpha+\beta\epsilon_i=0. \end{align*} \begin{align*} -p_i^{q-1}e^{-\frac{1-r}{1-q'}}e^{\frac{1-r}{1-q'}u}u+\frac{1}{1-r}e^{-\frac{1-r}{1-q'}}e^{\frac{1-r}{1-q'}u}-\frac{1}{1-r}+\alpha+\beta\epsilon_i=0. \end{align*} But equation \eqref{eqn_u} can be written as \begin{align*} \ln u&=\frac{1-q'}{1-q}\left(p_i^{q-1}-1\right)\\ p_i^{q-1}&= 1+\frac{1-q}{1-q'}\ln u. \end{align*} Hence, \begin{align*} &-\left(1+\frac{1-q}{1-q'}\ln u\right)e^{\frac{1-r}{1-q'}u}u+\frac{1}{1-r}e^{\frac{1-r}{1-q'}u}+\left(-\frac{1}{1-r}+\alpha+\beta\epsilon_i\right)e^{-\frac{1-r}{1-q'}}=0\\ &e^{\frac{1-r}{1-q'}u}u+\frac{1-q}{1-q'}u(\ln u) e^{\frac{1-r}{1-q'}u}-\frac{1}{1-r}e^{\frac{1-r}{1-q'}u}=\left(-\frac{1}{1-r}+\alpha+\beta\epsilon_i\right)e^{-\frac{1-r}{1-q'}}\\ &\;\;\;\;\;\;\;\;\;\;\frac{1-r}{1-q'}ue^{\frac{1-r}{1-q'}u}+\frac{1-q}{1-q'}\frac{1-r}{1-q'}u(\ln u) e^{\frac{1-r}{1-q'}u}-\frac{1-r}{1-q'}\frac{1}{1-r}e^{\frac{1-r}{1-q'}u}\\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;=\left(-\frac{1}{1-r}+\alpha+\beta\epsilon_i\right)\frac{1-r}{1-q'}e^{-\frac{1-r}{1-q'}}. \end{align*} By taking $y=\frac{1-r}{1-q'}u$, we obtain \begin{equation*} ye^y+\frac{1-q}{1-q'}y\ln\left(\frac{1-q'}{1-r}y\right) e^y-\frac{1}{1-q'}e^y=x \end{equation*} where \begin{equation}\label{eqn_x} x=\left(-\frac{1}{1-r}+\alpha+\beta\epsilon_i\right)\frac{1-r}{1-q'}e^{-\frac{1-r}{1-q'}}. 
\end{equation} Thus, \begin{equation*} \left(\frac{1-q}{1-q'}y\ln\left(\frac{1-q'}{1-r}y\right)+y-\frac{1}{1-q'}\right)e^y=x. \end{equation*} With \begin{equation}\label{constants} A=\frac{1-q}{1-q'}, B=\frac{1-q'}{1-r}, C=-\frac{1}{1-q'}, \end{equation} it follows that \begin{equation*} \left(Ay\ln\left(By\right)+y+C\right)e^y=x. \end{equation*} This implies that \begin{align*} y&=W_{\mathcal{LT}}(x)\\ \frac{1-r}{1-q'}u&=W_{\mathcal{LT}}(x)\\ u&=\frac{1-q'}{1-r}W_{\mathcal{LT}}(x)\\ \end{align*} Using equation \eqref{eqn_u} \begin{align*} &\exp\left(\frac{1-q'}{1-q}\left(p_i^{q-1}-1\right)\right) =\frac{1-q'}{1-r}W_{\mathcal{LT}}(x)\\ &\frac{1-q'}{1-q}\left(p_i^{q-1}-1\right) =\ln\left(\frac{1-q'}{1-r}W_{\mathcal{LT}}(x)\right). \end{align*} Therefore, the probability distribution is given by \begin{equation}\label{proba_distn} p_i =\frac{1}{Z_{q,q',r}}\left\{\frac{1-q}{1-q'}\ln\left(\frac{1-q'}{1-r}W_{\mathcal{LT}}(x)\right)+1\right\}^{\frac{1}{q-1}} \end{equation} where $$Z_{q,q',r}=\sum_{i=1}^{\omega}\left\{\frac{1-q}{1-q'}\ln\left(\frac{1-q'}{1-r}W_{\mathcal{LT}}(x)\right)+1\right\}^{\frac{1}{q-1}}.$$ \begin{align*} x&=\left(1-\alpha(1-r)-\beta(1-r)\epsilon_i\right)\frac{1}{q'-1}e^{-\frac{1-r}{1-q'}}\\ &=\frac{1}{q'-1}e^{-\frac{1-r}{1-q'}}(1-\alpha(1-r))\left(1-\frac{\beta(1-r)}{1-\alpha(1-r)}\epsilon_i\right)\\ &=\frac{1}{q'-1}e^{-\frac{1-r}{1-q'}}(1-\alpha(1-r))\left(1-\beta_r(1-r)\epsilon_i\right)\\ &=\frac{1}{q'-1}e^{-\frac{1-r}{1-q'}}(1-\alpha(1-r))\left[\exp_r(-\beta_r\epsilon_i)\right], \end{align*} where $\beta_r$ may be defined as the inverse of the pseudo-temperature $$\beta_r\equiv\frac{1}{k_rT_r}=\frac{\beta}{1-\alpha(1-r)},$$ \begin{align*} p_i &=\frac{1}{Z_{q,q',r}}\left\{1+(1-q)\ln\left(\frac{1-q'}{1-r}W_{\mathcal{LT}}\left(\frac{e^{\frac{1-r}{q'-1}}(1-\alpha(1-r))e_r^{-\beta_r\epsilon_i}}{q'-1}\right)\right)^{\frac{1}{1-q'}}\right\}^{\frac{1}{q-1}}\\ &=\frac{1}{Z_{q,q',r}}\left\{\exp_q\left(\ln\left(\frac{1-q'}{1-r}W_{\mathcal{LT}}\left(\frac{e^{\frac{1-r}{q'-1}}(1-\alpha(1-r))e_r^{-\beta_r\epsilon_i}}{q'-1}\right)\right)^{\frac{1}{1-q'}}\right)\right\}^{-1}. \end{align*} We can assume the energy level, $\epsilon_i$, as a quadratic function of the variable $x_i$. The continuous normalized probability distribution of $x$ can then be rewritten as \begin{align*} p(x)=\frac{\left\{1+(1-q)\ln\left(\frac{1-q'}{1-r}W_{\mathcal{LT}}\left(\frac{e^{\frac{1-r}{q'-1}}(1-\alpha(1-r))e_r^{-\beta_rx^2}}{q'-1}\right)\right)^{\frac{1}{1-q'}}\right\}^{\frac{1}{q-1}}}{\int_{-\infty}^{\infty}\left\{1+(1-q)\ln\left(\frac{1-q'}{1-r}W_{\mathcal{LT}}\left(\frac{e^{\frac{1-r}{q'-1}}(1-\alpha(1-r))e_r^{-\beta_rx^2}}{q'-1}\right)\right)^{\frac{1}{1-q'}}\right\}^{\frac{1}{q-1}}dx}. \end{align*} \section{Conclusion} In this paper, a special set of three-parameter entropies \cite{Corcino-Corcino} were maximized in the canonical ensemble by the energy constraint $$\sum_{i=1}^{\omega}p_i\epsilon_i=E.$$ It is expected that the probability distribution, $p_i(\epsilon_i)$, can be expressed in terms of the generalized three-parameter exponential defined in \cite{Corcino-Corcino}. However, an interesting form of the solution of the related equation is obtained expressing the solution in terms of the translated logarithmic Lambert function which is a generalization of the classical Lambert W function. \section*{Acknowledgment} This research is funded by Cebu Normal University (CNU) and the Commission on Higher Education - Grants-in-Aid (CHED-GIA) for Research. 
\section*{Data Availability Statement} The computer programs and articles used to generate the graphs and support the findings of this study are available from the corresponding author upon request.
\section{Proof} \label{app:proof} In the following, we assume that $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\ensuremath{\mathbf{x}}\xspace)=\mathcal{N}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\ensuremath{\mathbf{x}}\xspace,\sigma^2 I)$. Tweedie's formula can also be derived using the proof for Theorem~\ref{thm:second_order_dsm}. \textbf{Theorem~\ref{thm:second_order_dsm}.} \textit{ Given D-dimensional densities $p(\ensuremath{\mathbf{x}}\xspace)$ and $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\triangleq \int p(\ensuremath{\mathbf{x}}\xspace)q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\ensuremath{\mathbf{x}}\xspace)d\ensuremath{\mathbf{x}}\xspace$, we have \begin{align} &~\ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace\x^{\mkern-1.5mu\mathsf{T}} \mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace] = \tens{f}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, \tilde{\tens{s}}_2)\\ &~\ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace\x^{\mkern-1.5mu\mathsf{T}} - \ensuremath{\mathbf{x}}\xspace\ensuremath{{\tilde{\mathbf{x}}}}\xspace^{\mkern-1.5mu\mathsf{T}} - \ensuremath{{\tilde{\mathbf{x}}}}\xspace\ensuremath{\mathbf{x}}\xspace^{\mkern-1.5mu\mathsf{T}} \mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace] = \tens{h}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, \tilde{\tens{s}}_2), \end{align} where $\tens{f}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, \tilde{\tens{s}}_2)$ and $\tens{h}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, \tilde{\tens{s}}_2)$ are polynomials of $\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ defined as \begin{align} &~\tens{f}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, \tilde{\tens{s}}_2)=\ensuremath{{\tilde{\mathbf{x}}}}\xspace\xt^{\mkern-1.5mu\mathsf{T}} + \sigma^2\ensuremath{{\tilde{\mathbf{x}}}}\xspace \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)^{\mkern-1.5mu\mathsf{T}} + \sigma^2\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\ensuremath{{\tilde{\mathbf{x}}}}\xspace^{\mkern-1.5mu\mathsf{T}} + \sigma^4 \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)+\sigma^4 \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)^{\mkern-1.5mu\mathsf{T}} + \sigma^2I,\\ &~\tens{h}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, \tilde{\tens{s}}_2) = -\ensuremath{{\tilde{\mathbf{x}}}}\xspace \ensuremath{{\tilde{\mathbf{x}}}}\xspace^{\mkern-1.5mu\mathsf{T}} + \sigma^4 \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace) + \sigma^4 \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)^{\mkern-1.5mu\mathsf{T}} + \sigma^2 I. \end{align} Here $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ and $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ denote the first and second order scores of $q_\sigma(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$. 
} \begin{proof} We can rewrite $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\ensuremath{\mathbf{x}}\xspace)$ in the form of exponential family \begin{equation*} q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\var\eta)=e^{\var\eta^{\mkern-1.5mu\mathsf{T}}\ensuremath{{\tilde{\mathbf{x}}}}\xspace-\psi(\var\eta)}q_0(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), \end{equation*} where $\var\eta=\frac{\ensuremath{\mathbf{x}}\xspace}{\sigma^2}$ is the natural or canonical parameter of the family, $\psi(\var\eta)$ is the cumulant generating function which makes $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\var\eta)$ normalized and $q_0(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)=((2\pi)^d\sigma^{2d})^{-\frac{1}{2}}e^{-\frac{\ensuremath{{\tilde{\mathbf{x}}}}\xspace^{\mkern-1.5mu\mathsf{T}}\ensuremath{{\tilde{\mathbf{x}}}}\xspace}{2\sigma^2}}$. Bayes rule provides the corresponding posterior \begin{align*} q(\var\eta|\ensuremath{{\tilde{\mathbf{x}}}}\xspace)&=\frac{q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\var\eta)p(\var\eta)}{q_\sigma(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)}. \end{align*} Let $\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)=\log\frac{q_\sigma(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)}{q_0(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)}$, then we can write posterior as \begin{equation*} q(\var\eta|\ensuremath{{\tilde{\mathbf{x}}}}\xspace)=e^{\var\eta^{\mkern-1.5mu\mathsf{T}}\ensuremath{{\tilde{\mathbf{x}}}}\xspace-\psi(\var\eta)-\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)}p(\var\eta). \end{equation*} Since the posterior is normalized, we have \begin{equation*} \int e^{\var\eta^{\mkern-1.5mu\mathsf{T}}\ensuremath{{\tilde{\mathbf{x}}}}\xspace-\psi(\var\eta)-\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)}p(\var\eta)d\var\eta=1. \end{equation*} As a widely used technique in exponential families, we differentiate both sides w.r.t. $\ensuremath{{\tilde{\mathbf{x}}}}\xspace$ \begin{equation*} \int(\var\eta^{\mkern-1.5mu\mathsf{T}}-{\bm{J}}_\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)^{\mkern-1.5mu\mathsf{T}})q(\var\eta|\ensuremath{{\tilde{\mathbf{x}}}}\xspace)d\var\eta=0, \end{equation*} and the first order posterior moment can be written as \begin{align} \label{eq: first order posterior moment a} \ensuremath{\mathbb{E}}\xspace[\var\eta\mid\ensuremath{{\tilde{\mathbf{x}}}}\xspace]&={\bm{J}}_\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace) \\ \label{eq: first order posterior moment b} \ensuremath{\mathbb{E}}\xspace[\var\eta^{\mkern-1.5mu\mathsf{T}}\mid\ensuremath{{\tilde{\mathbf{x}}}}\xspace]&={\bm{J}}_\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)^{\mkern-1.5mu\mathsf{T}}, \end{align} where ${\bm{J}}_\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ is the Jacobian of $\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ w.r.t. $\ensuremath{{\tilde{\mathbf{x}}}}\xspace$. Differentiating both sides w.r.t. 
$\ensuremath{{\tilde{\mathbf{x}}}}\xspace$ again \begin{equation*} \int\var\eta(\var\eta^{\mkern-1.5mu\mathsf{T}}-{\bm{J}}_\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)^{\mkern-1.5mu\mathsf{T}})q(\var\eta|\ensuremath{{\tilde{\mathbf{x}}}}\xspace)d\var\eta={\bm{H}}_\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), \end{equation*} and the second order posterior moment can be written as \begin{align} \label{eq: second order posterior moment} \ensuremath{\mathbb{E}}\xspace[\var\eta\var\eta^{\mkern-1.5mu\mathsf{T}}\mid\ensuremath{{\tilde{\mathbf{x}}}}\xspace]={\bm{H}}_\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)+{\bm{J}}_\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace){\bm{J}}_\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)^{\mkern-1.5mu\mathsf{T}}, \end{align} where ${\bm{H}}_\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ is the Hessian of $\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ w.r.t. $\ensuremath{{\tilde{\mathbf{x}}}}\xspace$. Specifically, for $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\ensuremath{\mathbf{x}}\xspace)=\mathcal{N}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\ensuremath{\mathbf{x}}\xspace,\sigma^2 I)$, we have $\var\eta=\frac{\ensuremath{\mathbf{x}}\xspace}{\sigma^2}$ and $q_0(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)=((2\pi)^d\sigma^{2d})^{-\frac{1}{2}}e^{-\frac{\ensuremath{{\tilde{\mathbf{x}}}}\xspace^{\mkern-1.5mu\mathsf{T}}\ensuremath{{\tilde{\mathbf{x}}}}\xspace}{2\sigma^2}}$. Hence we have \begin{align*} \lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)&=\log q_\sigma(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)+\frac{\ensuremath{{\tilde{\mathbf{x}}}}\xspace^{\mkern-1.5mu\mathsf{T}}\ensuremath{{\tilde{\mathbf{x}}}}\xspace}{2\sigma^2}+\text{constant} \\ {\bm{J}}_\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)&=\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)+\frac{\ensuremath{{\tilde{\mathbf{x}}}}\xspace}{\sigma^2} \\ {\bm{H}}_\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)&=\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)+\frac{1}{\sigma^2}I. 
\end{align*} From \cref{eq: second order posterior moment}, we have \begin{align*} \resizebox{\hsize}{!}{ $\displaystyle \ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace\x^{\mkern-1.5mu\mathsf{T}} \mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace] = \ensuremath{{\tilde{\mathbf{x}}}}\xspace\xt^{\mkern-1.5mu\mathsf{T}} + \sigma^2\ensuremath{{\tilde{\mathbf{x}}}}\xspace \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)^{\mkern-1.5mu\mathsf{T}} + \sigma^2\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\ensuremath{{\tilde{\mathbf{x}}}}\xspace^{\mkern-1.5mu\mathsf{T}} + \sigma^4 \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)+\sigma^4 \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)^{\mkern-1.5mu\mathsf{T}} + \sigma^2I=\tens{f}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, \tilde{\tens{s}}_2).$ } \end{align*} Combined with \cref{eq: first order posterior moment a}, \cref{eq: first order posterior moment b}, and \cref{eq: second order posterior moment}, we have \begin{align*} \resizebox{0.92\hsize}{!}{ $\displaystyle \ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace\x^{\mkern-1.5mu\mathsf{T}} - \ensuremath{\mathbf{x}}\xspace\ensuremath{{\tilde{\mathbf{x}}}}\xspace^{\mkern-1.5mu\mathsf{T}} - \ensuremath{{\tilde{\mathbf{x}}}}\xspace\ensuremath{\mathbf{x}}\xspace^{\mkern-1.5mu\mathsf{T}} \mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace] = -\ensuremath{{\tilde{\mathbf{x}}}}\xspace \ensuremath{{\tilde{\mathbf{x}}}}\xspace^{\mkern-1.5mu\mathsf{T}} + \sigma^4 \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace) + \sigma^4 \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)^{\mkern-1.5mu\mathsf{T}} + \sigma^2 I=\tens{h}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, \tilde{\tens{s}}_2).$ } \end{align*} \end{proof} \textbf{Tweedie's formula.} \textit{ Given D-dimensional densities $p(\ensuremath{\mathbf{x}}\xspace)$ and $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\triangleq \int p(\ensuremath{\mathbf{x}}\xspace)q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\ensuremath{\mathbf{x}}\xspace)d\ensuremath{\mathbf{x}}\xspace$, we have \begin{equation} \ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace\mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace] = \ensuremath{{\tilde{\mathbf{x}}}}\xspace + \sigma^2 \tilde{\tens{s}}_{1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), \end{equation} where $\tilde{\tens{s}}_{1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\triangleq \nabla_{\ensuremath{{\tilde{\mathbf{x}}}}\xspace} \log q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$.} \begin{proof} Plug in $\var\eta=\frac{\ensuremath{\mathbf{x}}\xspace}{\sigma^2}$ and ${\bm{J}}_\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)=\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)+\frac{\ensuremath{{\tilde{\mathbf{x}}}}\xspace}{\sigma^2}$ in \cref{eq: first order posterior moment a}, we have \begin{equation} \ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace\mid\ensuremath{{\tilde{\mathbf{x}}}}\xspace]=\ensuremath{{\tilde{\mathbf{x}}}}\xspace+\sigma^2\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), \end{equation} which proves Tweedie's formula. 
\end{proof} \textbf{Theorem~\ref{eq:second_dsm_loss_theorem}.} \textit{ Suppose the first order score $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ is given, we can learn a second order score model $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace; {\bm{\theta}})$ by optimizing the following objectives \begin{align*} &~{\bm{\theta}}^\ast =\argmin_{\bm{\theta}} \ensuremath{\mathbb{E}}\xspace_{p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{ q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\ensuremath{\mathbf{x}}\xspace)}\bigg[\Big\Vert{ \ensuremath{\mathbf{x}}\xspace\x^{\mkern-1.5mu\mathsf{T}}-\tens{f}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace,\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}))}\Big\Vert_2^2\bigg],\\ &~{\bm{\theta}}^\ast =\argmin_{\bm{\theta}} \ensuremath{\mathbb{E}}\xspace_{p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{ q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\ensuremath{\mathbf{x}}\xspace)}\bigg[\Big\Vert{ \ensuremath{\mathbf{x}}\xspace\x^{\mkern-1.5mu\mathsf{T}} - \ensuremath{\mathbf{x}}\xspace\ensuremath{{\tilde{\mathbf{x}}}}\xspace^{\mkern-1.5mu\mathsf{T}} - \ensuremath{{\tilde{\mathbf{x}}}}\xspace\ensuremath{\mathbf{x}}\xspace^{\mkern-1.5mu\mathsf{T}} - \tens{h}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace,\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}))}\Big\Vert_2^2\bigg] \end{align*} where $\tens{f}(\cdot)$ and $\tens{h}(\cdot)$ are polynomials defined in \cref{eq: second order multivariate expectation} and \cref{corollary:s2_least_square}. Assuming the model has an infinite capacity, then the optimal parameter ${\bm{\theta}}^\ast$ satisfies $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}^\ast)=\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ for almost any $\ensuremath{{\tilde{\mathbf{x}}}}\xspace$.} \begin{proof} It is well-known that the optimal solution to the least squares regression problems of \cref{eq:s2_least_square_naive} and \cref{eq:s2_least_square} are the conditional expectations $\tens{f}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace,\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace),\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}^{*}))=\ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace\x^{\mkern-1.5mu\mathsf{T}} \mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace]$ and $\tens{h}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace,\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace),\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}^\ast))=\ensuremath{\mathbb{E}}\xspace[ \ensuremath{\mathbf{x}}\xspace\x^{\mkern-1.5mu\mathsf{T}} - \ensuremath{\mathbf{x}}\xspace\ensuremath{{\tilde{\mathbf{x}}}}\xspace^{\mkern-1.5mu\mathsf{T}} - \ensuremath{{\tilde{\mathbf{x}}}}\xspace\ensuremath{\mathbf{x}}\xspace^{\mkern-1.5mu\mathsf{T}} \mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace]$ respectively. According to Theorem~\ref{thm:second_order_dsm}, this implies that the optimal solution satisfies $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}^\ast)=\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ for almost any $\ensuremath{{\tilde{\mathbf{x}}}}\xspace$ given the first order score $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$. 
\textbf{Note:} \cref{eq:s2_least_square_naive} and \cref{eq:s2_least_square} have the same set of solutions assuming sufficient model capacity. However, \cref{eq:s2_least_square} has a simpler form (e.g., involving fewer terms) than \cref{eq:s2_least_square_naive} since multiple terms in \cref{eq:s2_least_square} can be cancelled after expanding the equation by using \cref{eq:twiddie_formula} (Tweedie's formula), resulting in the simplified objective \cref{eq:second_dsm_loss}. Compared to the expansion of \cref{eq:s2_least_square_naive}, the expansion of \cref{eq:s2_least_square} (i.e., \cref{eq:second_dsm_loss}) is much simpler (i.e., involving fewer terms), which is why we use \cref{eq:s2_least_square} other than \cref{eq:s2_least_square_naive} in our experiments. \end{proof} Before proving \cref{thm:high_order_dsm}, we first prove the following lemma. \begin{lemma} \label{app:lemma:high_order_moment} Given a $D$ dimensional distribution $p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$, and $q_{\sigma}(\tilde \ensuremath{\mathbf{x}}\xspace|\ensuremath{\mathbf{x}}\xspace) \triangleq \mathcal{N}(\tilde \ensuremath{\mathbf{x}}\xspace|\ensuremath{\mathbf{x}}\xspace,\sigma^2 I)$, we have the following for any integer $n\ge1$: \begin{equation*} \ensuremath{\mathbb{E}}\xspace[\otimes^{n+1} \ensuremath{\mathbf{x}}\xspace|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]=\sigma^2\frac{\partial}{\partial \ensuremath{{\tilde{\mathbf{x}}}}\xspace}\ensuremath{\mathbb{E}}\xspace[\otimes^{n}\ensuremath{\mathbf{x}}\xspace|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]+\sigma^2\ensuremath{\mathbb{E}}\xspace[\otimes^{n}\ensuremath{\mathbf{x}}\xspace|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]\otimes\bigg(\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)+\frac{\ensuremath{{\tilde{\mathbf{x}}}}\xspace}{\sigma^2}\bigg), \end{equation*} where $\otimes^{n}\ensuremath{\mathbf{x}}\xspace\in \mathbb{R}^{D^n}$ denotes $n$-fold tensor multiplications. \end{lemma} \begin{proof} We follow the notation used in the previous proof. Since \begin{equation*} \ensuremath{\mathbb{E}}\xspace[\otimes^{n}\var\eta|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]=\int e^{\var\eta^{\mkern-1.5mu\mathsf{T}}\ensuremath{{\tilde{\mathbf{x}}}}\xspace-\psi(\var\eta)-\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)}p(\var\eta)\otimes^{n}\var\eta d\var\eta, \end{equation*} differentiating both sides w.r.t. 
$\ensuremath{{\tilde{\mathbf{x}}}}\xspace$ \begin{align*} \frac{\partial}{\partial \ensuremath{{\tilde{\mathbf{x}}}}\xspace}\ensuremath{\mathbb{E}}\xspace[\otimes^{n}\var\eta|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]&=\int e^{\var\eta^{\mkern-1.5mu\mathsf{T}}\ensuremath{{\tilde{\mathbf{x}}}}\xspace-\psi(\var\eta)-\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)}p(\var\eta)\otimes^{n+1}\var\eta d\var\eta-\int e^{\var\eta^{\mkern-1.5mu\mathsf{T}}\ensuremath{{\tilde{\mathbf{x}}}}\xspace-\psi(\var\eta)-\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)}p(\var\eta)\otimes^{n}\var\eta d\var\eta\otimes\frac{\partial}{\partial \ensuremath{{\tilde{\mathbf{x}}}}\xspace}\lambda(\ensuremath{{\tilde{\mathbf{x}}}}\xspace) \\ \frac{\partial}{\partial \ensuremath{{\tilde{\mathbf{x}}}}\xspace}\ensuremath{\mathbb{E}}\xspace[\otimes^{n}\var\eta|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]&=\ensuremath{\mathbb{E}}\xspace[\otimes^{n+1}\var\eta|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]-\ensuremath{\mathbb{E}}\xspace[\otimes^{n}\var\eta|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]\otimes\bigg(\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)+\frac{\ensuremath{{\tilde{\mathbf{x}}}}\xspace}{\sigma^2}\bigg). \end{align*} Thus \begin{equation*} \ensuremath{\mathbb{E}}\xspace[\otimes^{n+1} \ensuremath{\mathbf{x}}\xspace|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]=\sigma^2\frac{\partial}{\partial \ensuremath{{\tilde{\mathbf{x}}}}\xspace}\ensuremath{\mathbb{E}}\xspace[\otimes^{n}\ensuremath{\mathbf{x}}\xspace|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]+\sigma^2\ensuremath{\mathbb{E}}\xspace[\otimes^{n}\ensuremath{\mathbf{x}}\xspace|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]\otimes\bigg(\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)+\frac{\ensuremath{{\tilde{\mathbf{x}}}}\xspace}{\sigma^2}\bigg). \end{equation*} \end{proof} \paragraph{Example} When $n=2$, plug in \cref{eq:twiddie_formula}, we have $$\ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace\x^{\mkern-1.5mu\mathsf{T}}|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]=\sigma^2(I+\sigma^2 \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace))+\sigma^2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace+\sigma^2 \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace))(\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)+\frac{\ensuremath{{\tilde{\mathbf{x}}}}\xspace}{\sigma^2})^{\mkern-1.5mu\mathsf{T}},$$ which can be simplified as \cref{eq: second order multivariate expectation}. \cref{app:lemma:high_order_moment} provides a recurrence for obtaining $\tens{f}_n$ in closed form. It is further used and discussed in Theorem~\ref{thm:high_order_dsm}. 
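As an illustrative sanity check on the $n=2$ example above (not part of the proof), the following Python sketch, assuming NumPy and with all parameter values chosen by us, specializes to a one-dimensional Gaussian $p_{\text{data}}=\mathcal{N}(\mu,\tau^2)$, for which $q_\sigma=\mathcal{N}(\mu,\tau^2+\sigma^2)$ and the posterior moments are available in closed form, and verifies Tweedie's formula together with the second order identity \cref{eq: second order multivariate expectation}.
\begin{verbatim}
import numpy as np

mu, tau, sigma = 0.3, 1.5, 0.8   # data mean/std and noise level
xt = 1.1                         # a fixed noisy observation

v = tau**2 + sigma**2            # variance of q_sigma = N(mu, tau^2 + sigma^2)
s1 = -(xt - mu) / v              # first order score of q_sigma at xt
s2 = -1.0 / v                    # second order score of q_sigma at xt

# exact posterior p(x | xt) is Gaussian by conjugacy
post_mean = (tau**2 * xt + sigma**2 * mu) / v
post_var = tau**2 * sigma**2 / v

# Tweedie's formula: E[x | xt] = xt + sigma^2 s1
assert np.isclose(xt + sigma**2 * s1, post_mean)

# second order identity: E[x^2 | xt] = xt^2 + 2 sigma^2 xt s1
#                                      + sigma^4 (s2 + s1^2) + sigma^2
lhs = post_mean**2 + post_var
rhs = xt**2 + 2 * sigma**2 * xt * s1 + sigma**4 * (s2 + s1**2) + sigma**2
assert np.isclose(lhs, rhs)
\end{verbatim}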
\textbf{Theorem~\ref{thm:high_order_dsm}.} \textit{ $\ensuremath{\mathbb{E}}\xspace[\otimes^{n}\ensuremath{\mathbf{x}}\xspace|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]=\tens{f}_n(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, ..., \tilde{\tens{s}}_{n})$, where $\otimes^{n}\ensuremath{\mathbf{x}}\xspace\in \mathbb{R}^{D^n}$ denotes $n$-fold tensor multiplications, $\tens{f}_n(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, ..., \tilde{\tens{s}}_{n})$ is a polynomial of $\{\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), ..., \tilde{\tens{s}}_{n}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\}$ and $\tilde{\tens{s}}_k(\ensuremath{\mathbf{x}}\xspace)$ represents the $k$-th order score of $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)=\int p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)q_{\sigma}(\tilde \ensuremath{\mathbf{x}}\xspace|\ensuremath{\mathbf{x}}\xspace)d\ensuremath{\mathbf{x}}\xspace$. } \begin{proof} We prove this using induction. When $n=1$, we have \begin{equation*} \ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]=\sigma^2\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)+\ensuremath{{\tilde{\mathbf{x}}}}\xspace. \end{equation*} Thus, $\ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]$ can be written as a polynomial of $\{\ensuremath{{\tilde{\mathbf{x}}}}\xspace,\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\}$. The hypothesis holds. Assume the hypothesis holds when $n=t$, then \begin{equation*} \ensuremath{\mathbb{E}}\xspace[\otimes^{t}\ensuremath{\mathbf{x}}\xspace|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]=\tens{f}_t(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, ..., \tilde{\tens{s}}_{t}). \end{equation*} When $n=t+1$, \begin{align*} \ensuremath{\mathbb{E}}\xspace[\otimes^{t+1} \ensuremath{\mathbf{x}}\xspace|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]&=\sigma^2\frac{\partial}{\partial \ensuremath{{\tilde{\mathbf{x}}}}\xspace}\ensuremath{\mathbb{E}}\xspace[\otimes^{t}\ensuremath{\mathbf{x}}\xspace|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]+\sigma^2\ensuremath{\mathbb{E}}\xspace[\otimes^{t}\ensuremath{\mathbf{x}}\xspace|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]\otimes\bigg(\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)+\frac{\ensuremath{{\tilde{\mathbf{x}}}}\xspace}{\sigma^2}\bigg) \\ &=\sigma^2\frac{\partial}{\partial \ensuremath{{\tilde{\mathbf{x}}}}\xspace}\tens{f}_t(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, ..., \tilde{\tens{s}}_{t})+\sigma^2\tens{f}_t(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, ..., \tilde{\tens{s}}_{t})\otimes\bigg(\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)+\frac{\ensuremath{{\tilde{\mathbf{x}}}}\xspace}{\sigma^2}\bigg). 
\end{align*} Clearly, $\sigma^2\tens{f}_t(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, ..., \tilde{\tens{s}}_{t})\otimes\bigg(\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)+\frac{\ensuremath{{\tilde{\mathbf{x}}}}\xspace}{\sigma^2}\bigg)$ is a polynomial of $\{\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), ..., \tilde{\tens{s}}_{t}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\}$, and $\sigma^2\frac{\partial}{\partial \ensuremath{{\tilde{\mathbf{x}}}}\xspace}\tens{f}_t(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, ..., \tilde{\tens{s}}_{t})$ is a polynomial of $\{\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), ..., \tilde{\tens{s}}_{t+1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\}$. This implies $\ensuremath{\mathbb{E}}\xspace[\otimes^{t+1} \ensuremath{\mathbf{x}}\xspace|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]$ can be written as $\tens{f}_{t+1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, ..., \tilde{\tens{s}}_{t+1})$, which is a polynomial of $\{\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), ..., \tilde{\tens{s}}_{t+1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\}$. Thus, the hypothesis holds when $k=t+1$, which implies that the hypothesis holds for all integer $n\geq1$. \end{proof} \textbf{Theorem~\ref{thm:high_order_dsm_general_objective}.} \textit{ Given the true score functions $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace),...,\tilde{\tens{s}}_{k-1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$, a $k$-th order score model $\tilde{\tens{s}}_k(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$, and \begin{equation} \label{app:eq:high_order_dsm_least_square} {\bm{\theta}}^{*}=\argmin_{\bm{\theta}} \ensuremath{\mathbb{E}}\xspace_{p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\ensuremath{\mathbf{x}}\xspace)}[\Vert \otimes^{k}\ensuremath{\mathbf{x}}\xspace -\tens{f}_k(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), ..., \tilde{\tens{s}}_{k-1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), \tilde{\tens{s}}_k(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}))\Vert^2]. \end{equation} Assuming the model has an infinite capacity, we have $\tilde{\tens{s}}_{k}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}^{*})=\tilde{\tens{s}}_k(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ for almost all $\ensuremath{{\tilde{\mathbf{x}}}}\xspace$. } \begin{proof} Similar to the previous case, we can show that the solution to the least squares regression problems of \cref{app:eq:high_order_dsm_least_square} is $\tens{f}_k(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), ..., \tilde{\tens{s}}_{k-1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), \tilde{\tens{s}}_k(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})^{*})=\ensuremath{\mathbb{E}}\xspace[\otimes^{t}\ensuremath{\mathbf{x}}\xspace|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]$. 
According to \cref{thm:high_order_dsm}, this implies $\tilde{\tens{s}}_k(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}^{*})=\tilde{\tens{s}}_k(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ given the score functions $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace),...,\tilde{\tens{s}}_{k-1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$. \end{proof} \section{Analysis on Second Order Score Models} \label{app:analysis} \subsection{Variance reduction} \label{app:variance_reduction} If we want to match the score of true distribution $p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$, $\sigma$ should be approximately zero for both DSM and $D_2$SM{} so that $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ is close to $p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$. However, when $\sigma\to0$, both DSM and $D_2$SM{} can be unstable to train and might not converge, which calls for variance reduction techniques. In this section, we show that we can introduce a control variate to improve the empirical performance of DSM and $D_2$SM{} when $\sigma$ tends to zero. Our variance control method can be derived from expanding the original training objective function using Taylor expansion. \textbf{DSM with varaince reduction}\: Expand the objective using Taylor expansion \begin{align*} \mathcal{L}_{DSM}({\bm{\theta}}) &=\frac{1}{2}\ensuremath{\mathbb{E}}\xspace_{p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{q_\sigma(\ensuremath{{\tilde{\mathbf{x}}}}\xspace \mid \ensuremath{\mathbf{x}}\xspace)}\bigg[\Big\Vert \tilde{\tens{s}}_{1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}) + \frac{1}{\sigma^2}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace-\ensuremath{\mathbf{x}}\xspace) \Big\Vert_2^2\bigg]\\ &= \frac{1}{2}\ensuremath{\mathbb{E}}\xspace_{p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{\ensuremath{\mathbf{z}}\xspace\sim \mathcal{N}(0,I)}\bigg[\Big\Vert \tilde{\tens{s}}_1(\ensuremath{\mathbf{x}}\xspace+\sigma\ensuremath{\mathbf{z}}\xspace;{\bm{\theta}})+\frac{\ensuremath{\mathbf{z}}\xspace}{\sigma} \Big\Vert_2^2\bigg] \\ &= \frac{1}{2}\ensuremath{\mathbb{E}}\xspace_{p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{\ensuremath{\mathbf{z}}\xspace\sim \mathcal{N}(0,I)}\bigg[\Vert \tilde{\tens{s}}_1(\ensuremath{\mathbf{x}}\xspace+\sigma\ensuremath{\mathbf{z}}\xspace; {\bm{\theta}}) \Vert_2^2 + \frac{2}{\sigma} \tilde{\tens{s}}_1(\ensuremath{\mathbf{x}}\xspace+\sigma\ensuremath{\mathbf{z}}\xspace;{\bm{\theta}})^T\ensuremath{\mathbf{z}}\xspace + \frac{\Vert \ensuremath{\mathbf{z}}\xspace \Vert_2^2}{\sigma^2}\bigg] \\ &= \frac{1}{2}\ensuremath{\mathbb{E}}\xspace_{p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{\ensuremath{\mathbf{z}}\xspace\sim \mathcal{N}(0,I)}\bigg[\Vert \tilde{\tens{s}}_1(\ensuremath{\mathbf{x}}\xspace; {\bm{\theta}}) \Vert_2^2 + \frac{2}{\sigma}\tilde{\tens{s}}_1(\ensuremath{\mathbf{x}}\xspace; {\bm{\theta}})^T\ensuremath{\mathbf{z}}\xspace + \frac{\Vert \ensuremath{\mathbf{z}}\xspace \Vert_2^2}{\sigma^2}\bigg] + \mathcal{O}(1), \end{align*} where $\mathcal{O}(1)$ is bounded when $\sigma\to0$. 
Since \begin{equation} \label{app:eq:control_variate} \ensuremath{\mathbb{E}}\xspace_{\ensuremath{\mathbf{z}}\xspace\sim\mathcal{N}(0,I)} [\frac{2}{\sigma}\tilde{\tens{s}}_1(\ensuremath{\mathbf{x}}\xspace;{\bm{\theta}})^T\ensuremath{\mathbf{z}}\xspace + \frac{\Vert \ensuremath{\mathbf{z}}\xspace \Vert_2^2}{\sigma^2}] = \frac{D}{\sigma^2}, \end{equation} where $D$ is the dimension of $p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$, we can use \cref{app:eq:control_variate} as a control variate and define DSM with variance reduction as \begin{align} \label{app:eq:dsm_vr} \displaystyle\mathcal{L}_{DSM-VR} &= \mathcal{L}_{DSM} - \ensuremath{\mathbb{E}}\xspace_{p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{\ensuremath{\mathbf{z}}\xspace\sim\mathcal{N}(0,I)}[\frac{2}{\sigma}\tilde{\tens{s}}_1(\ensuremath{\mathbf{x}}\xspace;{\bm{\theta}})^T\ensuremath{\mathbf{z}}\xspace + \frac{\Vert \ensuremath{\mathbf{z}}\xspace \Vert_2^2}{\sigma^2}] + \frac{D}{\sigma^2}. \end{align} An equivalent version of \cref{app:eq:dsm_vr} was first proposed in \cite{wang2020wasserstein}. \textbf{$D_2$SM{} with variance reduction}\: We now derive the variance reduction objective for $D_2$SM. Let us first consider the $ij$-th term of ${\mathcal{L}}_{\text{$D_2$SM}}({\bm{\theta}})$. We denote $\tens{\psi}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})=\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})+\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})^{\mkern-1.5mu\mathsf{T}}$ and write $\tens{\psi}_{ij}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ for its $ij$-th entry.
Similar to the variance reduction method for DSM~\cite{wang2020wasserstein}, we expand the objective of $D_2$SM{} (\cref{eq:second_dsm_loss}) using a Taylor expansion: \begin{align*} &{\mathcal{L}}_{\text{$D_2$SM}}({\bm{\theta}})_{ij}= \frac{1}{2}\ensuremath{\mathbb{E}}\xspace_{p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{\ensuremath{\mathbf{z}}\xspace\sim \mathcal{N}(0,I)}[\tens{\psi}_{ij}(\ensuremath{\mathbf{x}}\xspace+\sigma\ensuremath{\mathbf{z}}\xspace;{\bm{\theta}}) + \frac{\ensuremath{\mathbf{I}}\xspace_{ij}-z_iz_j}{\sigma^2}]^2 \\ &= \frac{1}{2}\ensuremath{\mathbb{E}}\xspace_{p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{\ensuremath{\mathbf{z}}\xspace\sim \mathcal{N}(0,I)}[ \tens{\psi}_{ij}(\ensuremath{\mathbf{x}}\xspace+\sigma\ensuremath{\mathbf{z}}\xspace;{\bm{\theta}})^2 + 2\frac{\ensuremath{\mathbf{I}}\xspace_{ij}-z_iz_j}{\sigma^2}\tens{\psi}_{ij}(\ensuremath{\mathbf{x}}\xspace+\sigma\ensuremath{\mathbf{z}}\xspace;{\bm{\theta}}) + \frac{(\ensuremath{\mathbf{I}}\xspace_{ij}-z_iz_j)^2}{\sigma^4}] \\ &\resizebox{0.95\hsize}{!}{= $\displaystyle \frac{1}{2}\ensuremath{\mathbb{E}}\xspace_{p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{\ensuremath{\mathbf{z}}\xspace\sim \mathcal{N}(0,I)}[\tens{\psi}_{ij}(\ensuremath{\mathbf{x}}\xspace;{\bm{\theta}})^2 + 2\frac{\ensuremath{\mathbf{I}}\xspace_{ij}-z_iz_j}{\sigma^2}\tens{\psi}_{ij}(\ensuremath{\mathbf{x}}\xspace;{\bm{\theta}}) + 2\frac{\ensuremath{\mathbf{I}}\xspace_{ij}-z_iz_j}{\sigma}\ensuremath{\mathbf{J}}\xspace_{\tens{\psi}_{ij}}\ensuremath{\mathbf{z}}\xspace + \frac{(\ensuremath{\mathbf{I}}\xspace_{ij}-z_iz_j)^2}{\sigma^4}] + \mathcal{O}(1)$}, \end{align*} where $\mathcal{O}(1)$ is bounded when $\sigma\to0$. The term $\frac{(\ensuremath{\mathbf{I}}\xspace_{ij}-z_iz_j)^2}{\sigma^4}$ is constant w.r.t. the optimization. When $\sigma$ approaches zero, both $\frac{\ensuremath{\mathbf{I}}\xspace_{ij}-z_iz_j}{\sigma^2}$ and $\frac{\ensuremath{\mathbf{I}}\xspace_{ij}-z_iz_j}{\sigma}$ become very large, making the training process unstable and hard to converge. Thus, we design a control variate to cancel out these two terms. To do this, we use antithetic sampling. Instead of using independent noise samples, we use two samples with correlated (opposite) noise centered at $\ensuremath{\mathbf{x}}\xspace$, defined as $\ensuremath{{\tilde{\mathbf{x}}}}\xspace_{+}=\ensuremath{\mathbf{x}}\xspace+\sigma\ensuremath{\mathbf{z}}\xspace$ and $\ensuremath{{\tilde{\mathbf{x}}}}\xspace_{-}=\ensuremath{\mathbf{x}}\xspace-\sigma\ensuremath{\mathbf{z}}\xspace$.
We propose the following objective to reduce the variance: \begin{equation} \label{app:eq:second_order_dsm_vr} \resizebox{0.92\hsize}{!}{ $\displaystyle\mathcal{L}_{\text{$D_2$SM-VR}} = \ensuremath{\mathbb{E}}\xspace_{\ensuremath{\mathbf{x}}\xspace\sim p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{\ensuremath{\mathbf{z}}\xspace\sim \mathcal{N}(0,I)}\bigg[\tens{\psi}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace_{+};{\bm{\theta}})^2+\tens{\psi}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace_{-};{\bm{\theta}})^2 +2\frac{\ensuremath{\mathbf{I}}\xspace-\ensuremath{\mathbf{z}}\xspace\z^{\mkern-1.5mu\mathsf{T}}}{\sigma}\odot(\tens{\psi}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace_{+};{\bm{\theta}}) + \tens{\psi}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace_{-};{\bm{\theta}}) -2\tens{\psi}(\ensuremath{\mathbf{x}}\xspace;{\bm{\theta}}))\bigg]$.} \end{equation} Similarly, a Taylor expansion shows that optimizing \cref{app:eq:second_order_dsm_vr} is equivalent to optimizing \cref{eq:second_dsm_loss} up to a control variate. On the other hand, \cref{app:eq:second_order_dsm_vr} is more stable to optimize than \cref{eq:second_dsm_loss} when $\sigma$ approaches zero, since the unstable terms $\frac{\ensuremath{\mathbf{I}}\xspace_{ij}-z_iz_j}{\sigma^2}$ and $\frac{\ensuremath{\mathbf{I}}\xspace_{ij}-z_iz_j}{\sigma}$ are both cancelled by the introduced control variate. \subsection{Learning accuracy} \label{app:l2} This section provides more experimental details on Section~\ref{sec:l2}. We use a 3-layer MLP model with latent size 128 and Tanh activation function for $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$. As discussed in \cref{sec:low_rank_parameterization}, our $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ model consists of two parts $\bm{\alpha}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ and $\bm{\beta}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$. We also use a 3-layer MLP model with latent size 32 and Tanh activation function to parameterize $\bm{\alpha}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ and $\bm{\beta}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$. For the mean squared error diagonal comparison experiments, we only parameterize the diagonal component $\bm{\alpha}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$. We use a 3-layer MLP model with latent size 32 and Tanh activation function to parameterize $\bm{\alpha}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$. We use a learning rate of $0.001$ and train the models with the Adam optimizer until convergence. We use noise scales $\sigma=0.01, 0.05, 0.1$ during training so that the noise perturbed distribution $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ is close to $p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$. All the mean squared error results in \cref{tab:mse_synthetic} are computed w.r.t. the ground truth second order score of the clean data $p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$. The experiments are performed on a single GPU. \subsection{Computational efficiency} \label{app:efficiency} This section provides more experimental details on the computational efficiency experiments in \cref{sec:efficiency}. In the experiment, we consider two types of models. \paragraph{MLP model} We use a 3-layer MLP model to parameterize $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ for a 100-dimensional data distribution.
As discussed in \cref{sec:low_rank_parameterization}, our $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ model consists of two parts $\bm{\alpha}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ and $\bm{\beta}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$. We use a 3-layer MLP model with a comparable number of parameters to $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ to parameterize $\bm{\alpha}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ and $\bm{\beta}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$. We consider ranks $r=20, 50, 200$ and $1000$ for $\bm{\beta}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ in the experiment, as reported in \cref{tab:efficiency}. \paragraph{U-Net model} We use a U-Net model to parameterize $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ for the $784$-dimensional data distribution. We use a U-Net architecture similar to that of $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ for parameterizing $\bm{\alpha}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ and $\bm{\beta}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$, except that we modify the output channel size to match the rank $r$ of $\bm{\beta}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$. We consider ranks $r=20, 50, 200$ and $1000$ for $\bm{\beta}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ in the experiment, as reported in \cref{tab:efficiency}. All the experiments are performed on the same TITAN Xp GPU using exactly the same computational setting. We use the implementation of U-Net from this repository \url{https://github.com/ermongroup/ncsn}. \section{Uncertainty Quantification} \label{app:uncertainty} This section provides more experimental details on \cref{sec:uncertainty}. \subsection{Synthetic experiments} \label{app:denoising} This section provides more details on the synthetic data experiments. We use a 3-layer MLP model for both $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ and $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$. We train $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ and $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ jointly with \cref{eq:l_joint_sample}. We use $\sigma=0.15$ for $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\ensuremath{\mathbf{x}}\xspace)$, and train the models with the Adam optimizer until convergence. We observe that training $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ directly with DSM and training $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ jointly with \cref{eq:l_joint_sample} have the same empirical performance in terms of estimating $\tilde{\tens{s}}_1$. Thus, we train $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ jointly with $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ in our experiments. \subsection{Covariance diagonal visualizations} \label{app:covariance} For both the MNIST and CIFAR-10 models, we use U-Net architectures to parameterize $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$.
We also use a similar U-Net architecture to parameterize $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$, except that we modify the output channel size to match the rank $r$ of $\bm{\beta}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$. We use $r=50$ for $\bm{\beta}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ for both the MNIST and CIFAR-10 models. We use the U-Net model implementation from this repository \url{https://github.com/ermongroup/ncsn}. We consider noise scales $\sigma=0.3, 0.5, 0.8, 1.0$ for MNIST and $\sigma=0.3, 0.5, 0.8$ for CIFAR-10. We train the models jointly until convergence with \cref{eq:l_joint_sample}, using a learning rate of $0.0002$ with the Adam optimizer. The models are trained on the corresponding training sets on 2 GPUs. \subsection{Full covariance visualizations} \label{app:pca} We use U-Net architectures to parameterize $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$. We also use a similar U-Net architecture to parameterize $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$, except that we modify the output channel size to match the rank $r$ of $\bm{\beta}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$. We use $r=50$ for $\bm{\beta}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ for this experiment. We use the U-Net model implementation from this repository \url{https://github.com/ermongroup/ncsn}. We train the models until convergence, using a learning rate of $0.0002$ with the Adam optimizer. The models are trained on the corresponding training set on 2 GPUs. We provide extra eigenvector visualizations for \cref{fig:pca} in \cref{fig:app_pca_80,fig:app_pca_24}. \newpage \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{main_figures/pca_80.pdf} \caption{Eigenvectors (sorted by eigenvalues) of $\operatorname{Cov}[\ensuremath{\mathbf{x}}\xspace\mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace]$ estimated by $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ on MNIST (more details in \cref{sec:uncertainty}). } \label{fig:app_pca_80} \end{figure} \newpage \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{main_figures/pca_24_0.pdf} \caption{Eigenvectors (sorted by eigenvalues) of $\operatorname{Cov}[\ensuremath{\mathbf{x}}\xspace\mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace]$ estimated by $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ on MNIST (more details in \cref{sec:uncertainty}).} \label{fig:app_pca_24} \end{figure} \section{Ozaki sampling} \label{app:ozaki_sampling} This section provides more details on \cref{sec:sampling_with_s2}. \subsection{Synthetic datasets} \label{app:ozaki_synthetic} This section provides more details on \cref{sec:ozaki_synthetic}. We use a 3-layer MLP model for both $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ and $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$. Since we only need the diagonal of the second order score, we parameterize $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ with a diagonal model (\emph{i.e}\onedot with only $\bm{\alpha}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$) and optimize the models jointly using \cref{eq:joint_objective_diagonal}.
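For reference, the score networks used in these synthetic experiments are small; the following is a minimal PyTorch-style sketch of the backbone, mirroring the 3-layer MLP with Tanh activations described in \cref{app:l2} (the hidden width shown here is illustrative rather than prescribed by the settings above): \begin{verbatim}
import torch.nn as nn

def mlp(d_in, d_out, hidden=128):
    # 3-layer MLP backbone for the 2-d synthetic experiments.
    return nn.Sequential(
        nn.Linear(d_in, hidden), nn.Tanh(),
        nn.Linear(hidden, hidden), nn.Tanh(),
        nn.Linear(hidden, d_out),
    )

D = 2                    # dimensionality of the synthetic data
s1_model = mlp(D, D)     # first order score model
alpha_model = mlp(D, D)  # diagonal second order score model (alpha only)
\end{verbatim}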
We use $\sigma=0.1$ during training so that the noise perturbed distribution $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ is close to $p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$. The models are trained with the Adam optimizer with a learning rate of $0.001$. Given the trained models, we perform a parameter search to find the optimal step size for both Langevin dynamics and Ozaki sampling. We also observe that Ozaki sampling can use a larger step size than Langevin dynamics, which is also discussed in~\cite{dalalyan2019user}. We observe that the optimal step size for Ozaki sampling is ${\epsilon}=5$ on Dataset 1 and ${\epsilon}=6$ on Dataset 2, while the optimal step size for Langevin dynamics is ${\epsilon}=0.5$ on Dataset 1 and ${\epsilon}=2$ on Dataset 2. We also explore using the same setting of Ozaki sampling for Langevin dynamics (\emph{i.e}\onedot we use the optimal step size of Ozaki sampling and the same number of iterations). We present the results in \cref{fig:app:2d_sampling_dataset1}. We observe that the optimal step size for Ozaki sampling is too large for Langevin dynamics, and does not allow Langevin dynamics to generate reasonable samples. We also find that Ozaki sampling can converge using fewer iterations than Langevin dynamics even when using the same step size (see \cref{fig:2d_sampling_dataset2}). All the experiments in this section are performed on a single GPU. \begin{figure}[H] \centering \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/data.png} \caption{Dataset 1} \label{fig:app:2d_sampling_dataset1_1} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/ozaki_5_5000.png} \caption{Ozaki ($5\times10^{3}$)} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/langevin_5_5000.png} \caption{Langevin ($5\times10^{3}$)} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/langevin_0.5_30000.png} \caption{Langevin ($3\times10^{4}$)} \end{subfigure} \vfill \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/tilted_data.png} \caption{Dataset 2} \label{fig:app:2d_sampling_dataset1_2_1} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/tilted_ozaki_6_3000.png} \caption{Ozaki ($3\times10^{3}$)} \label{fig:app:2d_sampling_dataset1_2} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/tilted_langevin_6_3000.png} \caption{Langevin ($3\times10^{3}$)} \label{fig:app:2d_sampling_dataset1_3} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/tilted_langevin_2_9000.png} \caption{Langevin ($9\times10^{3}$)} \label{fig:app:2d_sampling_dataset1_4} \end{subfigure} \caption{Sampling 2-D synthetic data with score functions. The number in parentheses denotes the number of iterations used for sampling. We observe that Ozaki obtains more reasonable samples using 1/6 or 1/3 of the iterations of Langevin dynamics. The second column uses the optimal step size for Ozaki, and the third column uses the same step size and setting for Langevin dynamics.
The fourth column uses the optimal step size for Langevin dynamics.} \label{fig:app:2d_sampling_dataset1} \end{figure} \subsection{MNIST} \label{app:ozaki_mnist} We use the U-Net implementation from this repository \url{https://github.com/ermongroup/ncsn}. We train the models until convergence on the corresponding MNIST training set using a learning rate of $0.0002$ and the Adam optimizer. We use 2 GPUs during training. As shown in \cite{song2019generative}, sampling images from score-based models trained with DSM is challenging when $\sigma$ is small due to the ill-conditioned estimated scores in the low density data region. In our experiments, we use a slightly larger $\sigma=0.5$ to avoid the issues of training and sampling from $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ as discussed in~\cite{song2019generative}. We train $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ and $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ jointly with \cref{eq:joint_objective}. For experiments on class label changes, we select 10 images with different class labels from the MNIST test set. For each image, we initialize 1000 sampling chains from it. We consider two sampling methods in this section: Langevin dynamics and the Ozaki method. For the generated images, we first denoise the sampled results with \cref{eq:twiddie_formula} and then use a pretrained classifier, which achieves 99.5\% accuracy on the MNIST test set, to classify the labels of the generated images in \Figref{fig:mnist_fix_init_1}. \section{Broader Impact} Our work provides a way to approximate high order derivatives of the data distribution. The proposed approach allows for applications such as uncertainty quantification in denoising and improved sampling speed for Langevin dynamics. Uncertainty quantification in denoising could be useful for medical image diagnosis. Higher order scores might provide new insights into detecting adversarial or out-of-distribution examples, which are important real-world applications. Score-based generative models can have both positive and negative impacts depending on the application. For example, score-based generative models can be used to generate high-quality images that are hard to distinguish from real ones by humans, which could be used to deceive humans in malicious ways (``deepfakes''). \section{Background} \subsection{Scores of a distribution} \begin{definition} Given a probability density $p(\ensuremath{\mathbf{x}}\xspace)$ over $\mathbb{R}^D$, we define the $k$-th order score $\tens{s}_{k}(\ensuremath{\mathbf{x}}\xspace):\mathbb{R}^D\to \otimes^{k}\mbb{R}^D$, where $\otimes^{k}$ denotes $k$-fold tensor multiplications, to be a tensor with the $(i_1, i_2, \dots, i_k)$-th index given by $[\etens{s}_{k}(\ensuremath{\mathbf{x}}\xspace)]_{i_1 i_2 \dots i_k} \triangleq \frac{\partial^k}{\partial x_{i_1} \partial x_{i_2} \cdots \partial x_{i_k}} \log p({\mathbf{x}})$, where $(i_1, i_2, \dots, i_k) \in \{1, \cdots, D\}^k$. \end{definition} As an example, when $k=1$, the \emph{first order score} is the gradient of $\log p(\ensuremath{\mathbf{x}}\xspace)$ w.r.t. $\ensuremath{\mathbf{x}}\xspace$, defined as $\tens{s}_{1}(\ensuremath{\mathbf{x}}\xspace)\triangleq\nabla_{\ensuremath{\mathbf{x}}\xspace}\log p(\ensuremath{\mathbf{x}}\xspace)$. Intuitively, this is a vector field of the steepest ascent directions for the log-density.
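For instance, for a multivariate Gaussian $p(\ensuremath{\mathbf{x}}\xspace)=\mathcal{N}(\ensuremath{\mathbf{x}}\xspace;\bm{\mu},\bm{\Sigma})$ we have $\tens{s}_{1}(\ensuremath{\mathbf{x}}\xspace)=-\bm{\Sigma}^{-1}(\ensuremath{\mathbf{x}}\xspace-\bm{\mu})$ and $\tens{s}_{2}(\ensuremath{\mathbf{x}}\xspace)=-\bm{\Sigma}^{-1}$, while all scores of order $k\geq3$ vanish, since $\log p(\ensuremath{\mathbf{x}}\xspace)$ is quadratic in $\ensuremath{\mathbf{x}}\xspace$.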
Note that the definition of \emph{first order score} matches the definition of (Stein) \emph{score}~\cite{hyvarinen2005estimation}. When $k=2$, the \emph{second order score} is the Hessian of $\log p(\ensuremath{\mathbf{x}}\xspace)$ w.r.t. $\ensuremath{\mathbf{x}}\xspace$. It captures the curvature of the log-density and, together with $\tens{s}_{1}(\ensuremath{\mathbf{x}}\xspace)$, provides a better local approximation to $\log p(\ensuremath{\mathbf{x}}\xspace)$. \subsection{Denoising score matching} Given a data distribution $p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$ and a model distribution $p(\ensuremath{\mathbf{x}}\xspace;{\bm{\theta}})$, the \emph{score} functions of $p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$ and $p(\ensuremath{\mathbf{x}}\xspace;{\bm{\theta}})$ are defined as $\tens{s}_{1}(\ensuremath{\mathbf{x}}\xspace)\triangleq\nabla_\ensuremath{\mathbf{x}}\xspace \log p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$ and $\tens{s}_{1}(\ensuremath{\mathbf{x}}\xspace;{\bm{\theta}})\triangleq\nabla_\ensuremath{\mathbf{x}}\xspace \log p(\ensuremath{\mathbf{x}}\xspace;{\bm{\theta}})$, respectively. Denoising score matching (DSM)~\cite{vincent2011connection} perturbs a data sample $\ensuremath{\mathbf{x}}\xspace\sim p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$ with a pre-specified noise distribution $q_\sigma(\ensuremath{{\tilde{\mathbf{x}}}}\xspace \mid \ensuremath{\mathbf{x}}\xspace)$ and then estimates the \emph{score} of the perturbed data distribution $q_\sigma(\ensuremath{{\tilde{\mathbf{x}}}}\xspace) = \int q_\sigma(\ensuremath{{\tilde{\mathbf{x}}}}\xspace \mid \ensuremath{\mathbf{x}}\xspace)p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace) d\ensuremath{\mathbf{x}}\xspace$, which we denote by $\tilde{\tens{s}}_{1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\triangleq \nabla_{\ensuremath{{\tilde{\mathbf{x}}}}\xspace} \log q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$. DSM uses the following objective: \begin{equation} \label{eq: denoising score matching objective} \frac{1}{2}\ensuremath{\mathbb{E}}\xspace_{ p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{q_\sigma(\ensuremath{{\tilde{\mathbf{x}}}}\xspace \mid \ensuremath{\mathbf{x}}\xspace)}[\Vert \tilde{\tens{s}}_{1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}) - \nabla_\ensuremath{{\tilde{\mathbf{x}}}}\xspace \log q_\sigma(\ensuremath{{\tilde{\mathbf{x}}}}\xspace \mid \ensuremath{\mathbf{x}}\xspace) \Vert_2^2]. \end{equation} It is shown that under certain regularity conditions, minimizing \cref{eq: denoising score matching objective} is equivalent to minimizing the score matching~\cite{hyvarinen2005estimation} loss between $\tilde{\tens{s}}_{1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ and $\tilde{\tens{s}}_{1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$~\cite{vincent2011connection} defined as \begin{equation} \label{eq:fisher_divergence} \frac{1}{2} \ensuremath{\mathbb{E}}\xspace_{ p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{q_\sigma(\ensuremath{{\tilde{\mathbf{x}}}}\xspace \mid \ensuremath{\mathbf{x}}\xspace)}[\Vert \tilde{\tens{s}}_{1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})-\tilde{\tens{s}}_{1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace) \Vert_2^2 ].
\end{equation} When $q_\sigma(\ensuremath{{\tilde{\mathbf{x}}}}\xspace \mid \ensuremath{\mathbf{x}}\xspace)=\mathcal{N}(\tilde \ensuremath{\mathbf{x}}\xspace|\ensuremath{\mathbf{x}}\xspace,\sigma^2 I)$, the objective becomes \begin{equation} \label{eq:dsm_gaussian} {\mathcal{L}}_{\text{DSM}}({\bm{\theta}})=\frac{1}{2}\ensuremath{\mathbb{E}}\xspace_{p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{q_\sigma(\ensuremath{{\tilde{\mathbf{x}}}}\xspace \mid \ensuremath{\mathbf{x}}\xspace)}\bigg[\Big\Vert \tilde{\tens{s}}_{1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}) + \frac{1}{\sigma^2}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace-\ensuremath{\mathbf{x}}\xspace) \Big\Vert_2^2\bigg]. \end{equation} Optimizing \cref{eq:dsm_gaussian} can, intuitively, be understood as predicting $\frac{\ensuremath{{\tilde{\mathbf{x}}}}\xspace-\ensuremath{\mathbf{x}}\xspace}{\sigma^2}$, the added ``noise'' up to a constant, given the noisy input $\ensuremath{{\tilde{\mathbf{x}}}}\xspace$, and is thus related to denoising. Estimating the score of the noise perturbed distribution $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ instead of the original (clean) data distribution $p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$ allows DSM to approximate scores more efficiently than other methods~\cite{hyvarinen2005estimation,song2019sliced}. When $\sigma$ is close to zero, $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace) \approx p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$, so the score of $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ estimated by DSM will be close to that of $p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$. When $\sigma$ is large, the estimated score for $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ plays a crucial role in denoising~\cite{saremi2018deep} and learning score-based generative models~\cite{song2019generative,song2020improved}. \subsection{Tweedie's formula} Given a prior density $p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$, a noise distribution $q_{\sigma}(\tilde \ensuremath{\mathbf{x}}\xspace|\ensuremath{\mathbf{x}}\xspace)=\mathcal{N}(\tilde \ensuremath{\mathbf{x}}\xspace|\ensuremath{\mathbf{x}}\xspace,\sigma^2 I)$, and the noisy density $q_{\sigma}(\tilde \ensuremath{\mathbf{x}}\xspace)=\int p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)q_{\sigma}(\tilde \ensuremath{\mathbf{x}}\xspace|\ensuremath{\mathbf{x}}\xspace)d\ensuremath{\mathbf{x}}\xspace$, Tweedie's formula~\cite{robbins2020empirical,efron2011tweedie} provides a closed-form expression for the posterior expectation (the first moment) of $\ensuremath{\mathbf{x}}\xspace$ conditioned on $\tilde{\ensuremath{\mathbf{x}}\xspace}$: \begin{equation} \label{eq:twiddie_formula} \ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace\mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace] = \ensuremath{{\tilde{\mathbf{x}}}}\xspace + \sigma^2 \tilde{\tens{s}}_{1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), \end{equation} where $\tilde{\tens{s}}_{1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\triangleq \nabla_{\ensuremath{{\tilde{\mathbf{x}}}}\xspace} \log q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$. \Eqref{eq:twiddie_formula} implies that given a ``noisy'' observation $\ensuremath{{\tilde{\mathbf{x}}}}\xspace\sim q_{\sigma}(\tilde \ensuremath{\mathbf{x}}\xspace)$, one can compute the expectation of the ``clean'' datapoint $\ensuremath{\mathbf{x}}\xspace$ that may have produced $\ensuremath{{\tilde{\mathbf{x}}}}\xspace$.
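As a sanity check, consider the scalar case with $p_{\text{data}}(x)=\mathcal{N}(x\mid 0,\tau^2)$: then $q_{\sigma}(\tilde{x})=\mathcal{N}(\tilde{x}\mid 0,\tau^2+\sigma^2)$, so $\tilde{s}_{1}(\tilde{x})=-\tilde{x}/(\tau^2+\sigma^2)$ and \cref{eq:twiddie_formula} gives $\ensuremath{\mathbb{E}}\xspace[x\mid\tilde{x}]=\tilde{x}-\sigma^2\tilde{x}/(\tau^2+\sigma^2)=\frac{\tau^2}{\tau^2+\sigma^2}\tilde{x}$, which is exactly the posterior mean of a Gaussian prior under Gaussian noise.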
As a result, \Eqref{eq:twiddie_formula} has become an important tool for denoising~\cite{saremi2019neural, saremi2018deep}. We provide the proof in \cref{app:proof}. A less widely known fact is that Tweedie's formula can be generalized to provide higher order moments of $\ensuremath{\mathbf{x}}\xspace$ given $\tilde{\ensuremath{\mathbf{x}}\xspace}$, which we will leverage to derive the objective for learning higher order scores. \section{Conclusion} We propose a method to directly estimate \emph{high order scores} of a data density from samples. We first study the connection between Tweedie's formula and denoising score matching (DSM) through the lens of least squares regression. We then leverage Tweedie's formula on higher order moments, which allows us to generalize denoising score matching to estimate scores of any desired order. We demonstrate empirically that models trained with the proposed method can approximate second order scores more efficiently and accurately than applying automatic differentiation to a learned first order score model. In addition, we show that our models can be used to quantify uncertainty in denoising and to improve the mixing speed of Langevin dynamics via Ozaki discretization for sampling synthetic data and natural images. Besides the applications studied in this paper, it would be interesting to study the application of high order scores for out-of-distribution detection. Due to limited computational resources, we only consider low-resolution image datasets in this work. However, as a direct next step, we can apply our method to higher-resolution image datasets and explore its application to improve the sampling speed of score-based models~\cite{song2019generative,song2020improved,ho2020denoising} with Ozaki sampling. In general, when approximating the high-order scores with a diagonal or a low rank matrix, our training cost is comparable to standard denoising score matching, which is scalable to higher dimensional data. A larger rank typically requires more computation but could give better approximations to second-order scores. While we focused on images, this approach is likely applicable to other data modalities such as speech. \section{Uncertainty Quantification with Second Order Score Models} \label{sec:uncertainty} Our second order score model $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ can capture and quantify the uncertainty of denoising on synthetic and real-world image datasets, based on the following result obtained by combining \cref{eq:twiddie_formula,eq: second order multivariate expectation}: \begin{equation} \label{eq:s2_cov} \operatorname{Cov}[\ensuremath{\mathbf{x}}\xspace\mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace] \triangleq \ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace\x^{\mkern-1.5mu\mathsf{T}}\mid\ensuremath{{\tilde{\mathbf{x}}}}\xspace]-\ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace\mid\ensuremath{{\tilde{\mathbf{x}}}}\xspace]\ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace\mid\ensuremath{{\tilde{\mathbf{x}}}}\xspace]^{\mkern-1.5mu\mathsf{T}}=\sigma^4 \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace) + \sigma^2I. \end{equation} By estimating $\operatorname{Cov}[\ensuremath{\mathbf{x}}\xspace\mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace]$ via $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$, we gain insights into how pixels are correlated with each other under denoising settings, and which pixels have large uncertainty.
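In practice, both the posterior mean and the posterior covariance can be read off directly from the learned score models. The following is a minimal PyTorch-style sketch of this computation, where \texttt{s1\_model} and \texttt{s2\_model} are hypothetical trained first and second order score networks (not the exact implementation used in our experiments): \begin{verbatim}
import torch

def posterior_moments(x_tilde, s1_model, s2_model, sigma):
    # Posterior mean via Tweedie's formula and posterior covariance via the
    # identity Cov[x | x_tilde] = sigma^4 * s2(x_tilde) + sigma^2 * I.
    s1 = s1_model(x_tilde)                       # first order score, shape (D,)
    s2 = s2_model(x_tilde)                       # second order score, shape (D, D)
    mean = x_tilde + sigma ** 2 * s1
    cov = sigma ** 4 * s2 + sigma ** 2 * torch.eye(x_tilde.shape[-1])
    return mean, cov
\end{verbatim}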
To examine the uncertainty given by our $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace; {\bm{\theta}})$, we perform the following experiments (details in \cref{app:uncertainty}). \textbf{Synthetic experiments}~ We first consider 2-d synthetic datasets shown in \cref{fig:2d_denoising}, where we train $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ and $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ jointly with $\mathcal{L}_{\text{joint}}$. Given the trained score models, we estimate $\ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace\mid\ensuremath{{\tilde{\mathbf{x}}}}\xspace]$ and $\operatorname{Cov}[\ensuremath{\mathbf{x}}\xspace\mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace]$ using \cref{eq:twiddie_formula} and \cref{eq:s2_cov}. We approximate the posterior distribution $p(\ensuremath{\mathbf{x}}\xspace|\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ with a conditional normal distribution $\mathcal{N}(\ensuremath{\mathbf{x}}\xspace\mid\ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace\mid\ensuremath{{\tilde{\mathbf{x}}}}\xspace],\operatorname{Cov}[\ensuremath{\mathbf{x}}\xspace\mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace])$. We compare our result with that of \cref{eq:twiddie_formula}, which only utilizes $\tilde{\tens{s}}_1$ (see \cref{fig:2d_denoising}). We observe that unlike \cref{eq:twiddie_formula}, which is a point estimator, the incorporation of covariance matrices (estimated by $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$) captures uncertainty in denoising. \begin{figure} \vspace{-1\baselineskip} \centering \begin{subfigure}[b]{0.119\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/rings_data.png} \caption*{Data} \label{fig:2d_denoising_1} \end{subfigure} \begin{subfigure}[b]{0.119\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/rings_noisy.png} \caption*{Noisy} \end{subfigure} \begin{subfigure}[b]{0.119\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/rings_s1.png} \caption*{Only $\tilde{\tens{s}}_1$} \end{subfigure} \begin{subfigure}[b]{0.119\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/rings_s2.png} \caption*{With $\tilde{\tens{s}}_2$} \end{subfigure} \begin{subfigure}[b]{0.119\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/2spirals_data.png} \caption*{Data} \label{fig:2d_denoising_1} \end{subfigure} \begin{subfigure}[b]{0.119\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/2spirals_noisy_data_0.15.png} \caption*{Noisy} \end{subfigure} \begin{subfigure}[b]{0.119\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/2spirals_s1_0.15.png} \caption*{Only $\tilde{\tens{s}}_1$} \end{subfigure} \begin{subfigure}[b]{0.119\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/2spirals_s2.png} \caption*{With $\tilde{\tens{s}}_2$} \end{subfigure} \caption{Denoising 2-d synthetic data. The incorporation of $\tilde{\tens{s}}_2$ improves uncertainty quantification.} \vspace{-1\baselineskip} \label{fig:2d_denoising} \end{figure} \textbf{Covariance diagonal visualizations}~We visualize the diagonal of the estimated $\operatorname{Cov}[\ensuremath{\mathbf{x}}\xspace\mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace]$ for MNIST and CIFAR-10~\cite{krizhevsky2009learning} in \cref{fig:diagonal}. 
We find that the diagonal values are in general larger for pixels near the edges, where there are multiple possibilities corresponding to the same noisy pixel. The diagonal values are smaller for the background pixels, where there is less uncertainty. We also observe that covariance matrices corresponding to smaller noise scales tend to have smaller values on the diagonals, implying that the more noise an image has, the more uncertain the denoised results are. \begin{figure} \centering \includegraphics[width=\textwidth]{main_figures/cov_diag.pdf} \caption{Visualizations of the estimated covariance matrix diagonals on MNIST and CIFAR-10. For CIFAR-10 images, we visualize the diagonal for R, G, B channels separately. Images corrupted with more noise tend to have larger covariance values, indicating larger uncertainty in denoising. Pixels in the background have smaller values than pixels near edges, indicating more confident denoising. } \vspace{-0.6\baselineskip} \label{fig:diagonal} \end{figure} \textbf{Full covariance visualizations}~We visualize the eigenvectors (sorted by eigenvalues) of $\operatorname{Cov}[\ensuremath{\mathbf{x}}\xspace\mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace]$ estimated by $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ in \cref{fig:pca}. We observe that they can correspond to different digit identities, indicating uncertainty in the identity of the denoised image. This suggests $\operatorname{Cov}[\ensuremath{\mathbf{x}}\xspace\mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace]$ can capture additional information about uncertainty beyond its diagonal. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{main_figures/selected_pca.pdf} \caption{Eigenvectors of the estimated covariance matrix on MNIST. The first column shows the noisy images ($\sigma=0.5$) and the second column shows clean images. The remaining columns show the first 19 eigenvectors, plus the 30th, 80th and 200th eigenvectors of the matrix. We can see digits 7 and 9 in the eigenvectors corresponding to the noisy 7, and digits 4 and 9 in the second row, which implies that the estimated covariance matrix can capture different possibilities of the denoising results. } \vspace{-1\baselineskip} \label{fig:pca} \end{figure} \section{Sampling with Second Order Score Models} \label{sec:sampling_with_s2} Here we show that our second order score model $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ can be used to improve the mixing speed of Langevin dynamics sampling. \input{new_table_figure} \subsection{Background on the sampling methods} \textbf{Langevin dynamics}~Langevin dynamics~\cite{bussi2007accurate,welling2011bayesian} samples from $p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$ using the first order score function $\tens{s}_{1}(\ensuremath{\mathbf{x}}\xspace)$. Given a prior distribution $\pi(\ensuremath{\mathbf{x}}\xspace)$, a fixed step size ${\epsilon}>0$ and an initial value $\tilde{\ensuremath{\mathbf{x}}\xspace}_0\sim\pi(\ensuremath{\mathbf{x}}\xspace)$, Langevin dynamics updates the samples iteratively as follows: \begin{equation} \small \tilde{\ensuremath{\mathbf{x}}\xspace}_t=\tilde{\ensuremath{\mathbf{x}}\xspace}_{t-1}+\frac{{\epsilon}}{2}{\tens{s}}_1(\tilde{\ensuremath{\mathbf{x}}\xspace}_{t-1})+\sqrt{{\epsilon}}\ensuremath{\mathbf{z}}\xspace_t, \end{equation} where $\ensuremath{\mathbf{z}}\xspace_t\sim\mathcal{N}(0,I)$.
As ${\epsilon}\to0$ and $t\to\infty$, $\tilde{\ensuremath{\mathbf{x}}\xspace}_t$ becomes a sample from $p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$ under suitable conditions. \textbf{Ozaki sampling}~Langevin dynamics with Ozaki discretization~\cite{stramer1999langevin} leverages second order information in $\tens{s}_2({\mathbf{x}})$ to pre-condition Langevin dynamics: \begin{equation} \label{eq:ozaki_o} \tilde{\ensuremath{\mathbf{x}}\xspace}_t=\tilde{\ensuremath{\mathbf{x}}\xspace}_{t-1}+M_{t-1} {\tens{s}}_1(\tilde{\ensuremath{\mathbf{x}}\xspace}_{t-1})+\Sigma_{t-1}^{1/2}\ensuremath{\mathbf{z}}\xspace_t, \; \ensuremath{\mathbf{z}}\xspace_t\sim\mathcal{N}(0,I) \end{equation} where $M_{t-1}=(e^{{\epsilon} {\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace_{t-1})}-I){\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace_{t-1})^{-1}$ and $\Sigma_{t-1}=(e^{2{\epsilon} {\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace_{t-1})}-I){\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace_{t-1})^{-1}$. It is shown that under certain conditions, this variation can improve the convergence rate of Langevin dynamics~\cite{dalalyan2019user}. In general, \cref{eq:ozaki_o} is expensive to compute due to the matrix inversion, exponentiation and square root involved, so we simplify \cref{eq:ozaki_o} by approximating ${\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace_{t-1})$ with its diagonal in practice. In our experiments, we only consider Ozaki sampling with ${\tens{s}}_2$ replaced by its diagonal in \cref{eq:ozaki_o}. As we use a small $\sigma$, $\tilde{\tens{s}}_1\approx {\tens{s}}_1$ and $\tilde{\tens{s}}_2\approx {\tens{s}}_2$. We observe that $\text{diag}(\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}))$ in Ozaki sampling can be computed in parallel with $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ on modern GPUs, making the wall-clock time per iteration of Ozaki sampling comparable to that of Langevin dynamics. Since we only use the diagonal of $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ in sampling, we can directly learn the diagonal of $\tilde{\tens{s}}_2(\tilde{\ensuremath{\mathbf{x}}\xspace})$ with \cref{eq:joint_objective_diagonal}.
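For concreteness, the two update rules compare as follows in a minimal PyTorch-style sketch of the diagonal simplification above; \texttt{s1} and \texttt{s2\_diag} are assumed callables returning the first order score and the diagonal of the second order score, and this is a sketch rather than the exact implementation used in our experiments: \begin{verbatim}
import torch

def langevin_step(x, s1, eps):
    # x <- x + (eps / 2) * s1(x) + sqrt(eps) * z,  with z ~ N(0, I)
    z = torch.randn_like(x)
    return x + 0.5 * eps * s1(x) + eps ** 0.5 * z

def ozaki_step_diag(x, s1, s2_diag, eps):
    # Diagonal Ozaki update: with d = diag(s2(x)), the matrix functions
    # M = (exp(eps * s2) - I) s2^{-1} and Sigma = (exp(2 * eps * s2) - I) s2^{-1}
    # reduce to elementwise operations on d.
    d = s2_diag(x)
    z = torch.randn_like(x)
    M = torch.expm1(eps * d) / d
    Sigma = torch.expm1(2.0 * eps * d) / d
    return x + M * s1(x) + Sigma.sqrt() * z
\end{verbatim} Since the diagonal of the second order score is typically negative, both elementwise factors above are positive, so the square root is well defined.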
\begin{figure} \vspace{-0.5\baselineskip} \centering \begin{subfigure}[b]{0.16\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/two_mode_data.png} \caption{Data} \label{fig:2d_sampling_dataset2_1} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/ozaki_50.png} \caption{50 iterations} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/ozaki_100.png} \caption{100 iterations} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/ozaki_200.png} \caption{200 iterations} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/ozaki_300.png} \caption{300 iterations} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/ozaki_400.png} \caption{400 iterations} \end{subfigure} \vfill \begin{subfigure}[b]{0.16\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/two_mode_init.png} \caption{Initialization} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/langevin_50.png} \caption{50 iterations} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/langevin_100.png} \caption{100 iterations} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/langevin_200.png} \caption{200 iterations} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/langevin_300.png} \caption{300 iterations} \end{subfigure} \begin{subfigure}[b]{0.16\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/langevin_400.png} \caption{400 iterations} \end{subfigure} \caption{ Sampling a two-mode distribution. We use the same step size ${\epsilon}=0.01$ for both methods. We observe that Ozaki sampling converges faster than Langevin sampling. } \label{fig:2d_sampling_dataset2} \end{figure} \subsection{Synthetic datasets} \label{sec:ozaki_synthetic} We first consider 2-d synthetic datasets in \cref{fig:2d_sampling_dataset1} to compare the mixing speed of Ozaki sampling with Langevin dynamics. We search for the optimal step size for each method and observe that Ozaki sampling can use a larger step size and converge faster than Langevin dynamics (see \cref{fig:2d_sampling_dataset1}). We use the optimal step size for both methods and report the smallest effective sample size (ESS) across all dimensions~\cite{song2017nice,girolami2011riemann} in \cref{tab:ess}. We observe that Ozaki sampling has better ESS values than Langevin dynamics, implying a faster mixing speed. Even when using the same step size, Ozaki sampling still converges faster than Langevin dynamics on the two-mode Gaussian dataset we consider (see \cref{fig:2d_sampling_dataset2}). In all the experiments, we use $\sigma=0.1$ and we provide more experimental details in Appendix~\ref{app:ozaki_sampling}. \begin{figure} \centering \includegraphics[width=\linewidth]{main_figures/mnist_samples_denoised.pdf} \caption{ Sampling on MNIST. We observe that Ozaki sampling converges faster than Langevin dynamics. We use step size ${\epsilon}=0.02$ and initialize the chain with Gaussian noise for both methods.
} \vspace{-1\baselineskip} \label{fig:mnist_samples} \end{figure} \begin{figure}[!t] \centering \begin{subfigure}[b]{0.26\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/mnist_classification_diversity.jpg} \caption{Percentage of changes in class label w.r.t. iterations. } \label{fig:mnist_fix_init_1} \end{subfigure}\hfill \begin{subfigure}[b]{0.68\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/mnist_ozaki.pdf} \caption{Different chains initialized with the same image (left panel) after 1000 update iterations with step size ${\epsilon}=0.03$.} \label{fig:mnist_fix_init_2} \end{subfigure} \caption{Sample diversity analysis. The number in parentheses in \cref{fig:mnist_fix_init_1} denotes the step size. We initialize the chains with MNIST test images and report the percentage of images that have changed class labels from the initialization w.r.t. sampling iterations. We observe that Ozaki sampling has more diverse samples. } \vspace{-1\baselineskip} \label{fig:mnist_fix_init} \end{figure} \subsection{Image datasets} Ozaki discretization with learned $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace; {\bm{\theta}})$ produces more diverse samples and improves the mixing speed of Langevin dynamics on image datasets (see \cref{fig:mnist_samples}). To see this, we select ten different digits from the MNIST test set and initialize 1000 different sampling chains for each image. We update the chains with Ozaki sampling and report the percentage of images that have class label changes after a fixed number of sampling iterations in \cref{fig:mnist_fix_init_1}. We compare the results with Langevin dynamics under the same setting and observe that Ozaki sampling has more diverse samples within the same chain in a fixed number of iterations. We provide more details in Appendix~\ref{app:ozaki_sampling}. \section{Introduction} The first order derivative of the log data density function, also known as the \emph{score}, has found many applications including image generation~\cite{song2019generative,song2020improved,ho2020denoising}, image denoising~\cite{saremi2018deep,saremi2019neural} and audio synthesis~\cite{kong2020diffwave}. Denoising score matching (DSM)~\cite{vincent2011connection} provides an efficient way to estimate the \emph{score} of the data density from samples and has been widely used for training score-based generative models~\cite{song2019generative,song2020improved} and denoising~\cite{saremi2018deep,saremi2019neural}. High order derivatives of the data density, which we refer to as \emph{high order scores}, provide a more accurate local approximation of the data density (\emph{e.g}\onedot, its curvature) and enable new applications.
For instance, high order scores can improve the mixing speed of certain sampling methods~\cite{dalalyan2019user,sabanis2019higher,mou2019high}, similar to how high order derivatives accelerate gradient descent in optimization~\cite{martens2015optimizing}. In denoising problems, given a noisy datapoint, high order scores can be used to compute high order moments of the underlying noise-free datapoint, thus providing a way to quantify the uncertainty in denoising. Existing methods for score estimation~\cite{hyvarinen2005estimation, vincent2011connection, song2019sliced, zhou2020nonparametric}, such as denoising score matching~\cite{vincent2011connection}, focus on estimating the \emph{first order} score (\emph{i.e}\onedot, the Jacobian of the log density). In principle, high order scores can be estimated from a learned first order score model (or even a density model) via automatic differentiation. However, this approach is computationally expensive for high dimensional data and score models parameterized by deep neural networks. For example, given a $D$-dimensional distribution, computing the $(n+1)$-th order score value from an existing $n$-th order score model by automatic differentiation is on the order of $D$ times more expensive than evaluating the latter~\cite{song2019sliced}. Moreover, computing higher-order scores by automatic differentiation might suffer from large estimation error, since a small training loss for the first order score does not always lead to a small estimation error for high order scores. To overcome these limitations, we propose a new approach which directly models and estimates high order scores of a data density from samples. We draw inspiration from Tweedie's formula~\cite{efron2011tweedie,robbins2020empirical}, which connects the score function to a denoising problem, and show that denoising score matching (DSM) with Gaussian noise perturbation can be derived from Tweedie's formula using least squares regression. We then provide a generalized version of Tweedie's formula which allows us to further extend denoising score matching to estimate high order scores. In addition, we provide variance reduction techniques to improve the optimization of these newly introduced high order score estimation objectives. With our approach, we can directly parameterize high order scores and learn them efficiently, sidestepping expensive automatic differentiation. While our theory and estimation method are applicable to scores of any order, we focus on the \emph{second order score} (\emph{i.e}\onedot, the Hessian of the log density) for empirical evaluation. Our experiments show that models learned with the proposed objective can approximate second order scores more accurately than applying automatic differentiation to lower order score models. Our approach is also more computationally efficient for high dimensional data, achieving up to $500\times$ speedups for second order score estimation on MNIST. In denoising problems, there could be multiple clean datapoints consistent with a noisy observation, and it is often desirable to measure the uncertainty of denoising results. As second order scores are closely related to the covariance matrix of the noise-free data conditioned on the noisy observation, we show that our estimated second order scores can provide extra insights into the solution of denoising problems by capturing and quantifying the uncertainty of denoising.
We further show that our model can be used to improve the mixing speed of Langevin dynamics for sampling synthetic data and natural images. Our empirical results on second order scores, a special case of the general approach, demonstrate the potential and applications of our method for estimating high order scores. \section*{Acknowledgements} The authors would like to thank Jiaming Song and Lantao Yu for constructive feedback. This research was supported by NSF (\#1651565, \#1522054, \#1733686), ONR (N000141912145), AFOSR (FA95501910024), ARO (W911NF-21-1-0125) and Sloan Fellowship. \bibliographystyle{plain} \section{Estimating Higher Order Scores by Denoising} Below we demonstrate that DSM can be derived from Tweedie's formula~\cite{efron2011tweedie,robbins2020empirical}. By leveraging the generalized Tweedie's formula on high order moments of the posterior, we extend DSM to estimate higher order score functions. \subsection{DSM in the view of Tweedie's formula}\label{sec:tweedie_to_dsm} The optimal solution to the least squares regression problem \begin{equation} \label{eq:least_square_dsm} \min_{\bm{\theta}} \ensuremath{\mathbb{E}}\xspace_{ p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)} \ensuremath{\mathbb{E}}\xspace_{ q_{\sigma}(\tilde \ensuremath{\mathbf{x}}\xspace|\ensuremath{\mathbf{x}}\xspace)} [\Vert \tens{h}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace; {\bm{\theta}})-\ensuremath{\mathbf{x}}\xspace\Vert_2^2] \end{equation} is well-known to be the conditional expectation $\tens{h}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace; {\bm{\theta}}^\ast)=\ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace \mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace]$. If we parameterize $\tens{h}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace; {\bm{\theta}}) = \ensuremath{{\tilde{\mathbf{x}}}}\xspace + \sigma^2 \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace; {\bm{\theta}})$ where $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace; {\bm{\theta}})$ is a first order score model with parameter ${\bm{\theta}}$, the least squares problem in \cref{eq:least_square_dsm} becomes equivalent to the DSM objective: \begin{equation} \min_{\bm{\theta}} \ensuremath{\mathbb{E}}\xspace_{p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)} \ensuremath{\mathbb{E}}\xspace_{ q_{\sigma}(\tilde \ensuremath{\mathbf{x}}\xspace|\ensuremath{\mathbf{x}}\xspace)}[\Vert \sigma^2 \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace; {\bm{\theta}}) + \ensuremath{{\tilde{\mathbf{x}}}}\xspace -\ensuremath{\mathbf{x}}\xspace\Vert_2^2]= \min_{\bm{\theta}} 2\sigma^4 \cdot {\mathcal{L}}_{\text{DSM}}({\bm{\theta}}).\label{eq:tweedie_dsm} \end{equation} From Tweedie's formula, we know the optimal ${\bm{\theta}}^*$ satisfies $\tens{h}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}^\ast) = \ensuremath{{\tilde{\mathbf{x}}}}\xspace + \sigma^2 \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}^\ast) = \ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace \mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace] = \ensuremath{{\tilde{\mathbf{x}}}}\xspace + \sigma^2 \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$, from which we can conclude that $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace; {\bm{\theta}}^\ast) = \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$. This proves that minimizing the DSM objective in \cref{eq:tweedie_dsm} recovers the first order score. There are other ways to derive DSM. 
For example, \cite{raphan2011least} provides a proof based on Bayesian least squares without relying on Tweedie's formula. Stein's Unbiased Risk Estimator (SURE)~\citep{stein1981estimation} can also provide an alternative proof based on integration by parts. Compared to these methods, our derivation can be easily extended to learn high order scores, leveraging a more general version of Tweedie's formula. \subsection{Second order denoising score matching} As a warm-up, we first consider the second order score, and later generalize to any desired order. Leveraging Tweedie's formula on $\ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace \ensuremath{\mathbf{x}}\xspace^{\mkern-1.5mu\mathsf{T}} \mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace]$ and $\ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace \mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace]$, we obtain the following theorem. \begin{restatable}[]{theorem}{second_order_dsm} \label{thm:second_order_dsm} Given a D-dimensional distribution $p(\ensuremath{\mathbf{x}}\xspace)$ and $q_{\sigma}(\tilde{\ensuremath{\mathbf{x}}\xspace})\triangleq \int p(\ensuremath{\mathbf{x}}\xspace)q_{\sigma}(\tilde \ensuremath{\mathbf{x}}\xspace|\ensuremath{\mathbf{x}}\xspace)d\ensuremath{\mathbf{x}}\xspace$, we have \begin{align} \label{eq:s2_polynomial_naive} &~\ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace\x^{\mkern-1.5mu\mathsf{T}} \mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace] = \tens{f}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, \tilde{\tens{s}}_2)\\ \label{eq:s2_polynomial} &~\ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace\x^{\mkern-1.5mu\mathsf{T}} - \ensuremath{\mathbf{x}}\xspace\ensuremath{{\tilde{\mathbf{x}}}}\xspace^{\mkern-1.5mu\mathsf{T}} - \ensuremath{{\tilde{\mathbf{x}}}}\xspace\ensuremath{\mathbf{x}}\xspace^{\mkern-1.5mu\mathsf{T}} \mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace] = \tens{h}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, \tilde{\tens{s}}_2), \end{align} where $\tens{f}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, \tilde{\tens{s}}_2)$ and $\tens{h}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, \tilde{\tens{s}}_2)$ are polynomials of $\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ defined as \begin{align} \label{eq: second order multivariate expectation} &~\tens{f}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, \tilde{\tens{s}}_2)=\ensuremath{{\tilde{\mathbf{x}}}}\xspace\xt^{\mkern-1.5mu\mathsf{T}} + \sigma^2\ensuremath{{\tilde{\mathbf{x}}}}\xspace \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)^{\mkern-1.5mu\mathsf{T}} + \sigma^2\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\ensuremath{{\tilde{\mathbf{x}}}}\xspace^{\mkern-1.5mu\mathsf{T}} + \sigma^4 \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)+\sigma^4 \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)^{\mkern-1.5mu\mathsf{T}} + \sigma^2I,\\ \label{corollary:s2_least_square} &~\tens{h}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, \tilde{\tens{s}}_2) = -\ensuremath{{\tilde{\mathbf{x}}}}\xspace \ensuremath{{\tilde{\mathbf{x}}}}\xspace^{\mkern-1.5mu\mathsf{T}} + \sigma^4 \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace) + \sigma^4 
\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)^{\mkern-1.5mu\mathsf{T}} + \sigma^2 I. \end{align} Here $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ and $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ denote the first and second order scores of $q_\sigma(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$. \end{restatable} In \cref{thm:second_order_dsm}, \cref{eq: second order multivariate expectation} is directly given by Tweedie's formula on $\ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace\x^{\mkern-1.5mu\mathsf{T}} \mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace]$, and \cref{corollary:s2_least_square} is derived from Tweedie's formula on both $\ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace \mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace]$ and $\ensuremath{\mathbb{E}}\xspace[\ensuremath{\mathbf{x}}\xspace\x^{\mkern-1.5mu\mathsf{T}} \mid \ensuremath{{\tilde{\mathbf{x}}}}\xspace]$. Given a noisy sample $\ensuremath{{\tilde{\mathbf{x}}}}\xspace$, Theorem~\ref{thm:second_order_dsm} relates the second order moment of $\ensuremath{\mathbf{x}}\xspace$ to the first order score $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ and second order score $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ of $q_\sigma(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$. A detailed proof of Theorem~\ref{thm:second_order_dsm} is given in Appendix~\ref{app:proof}. In the same way as how we derive DSM from Tweedie's formula in \cref{sec:tweedie_to_dsm}, we can obtain higher order score matching objectives with \cref{eq: second order multivariate expectation} and \cref{corollary:s2_least_square} as a least squares problem. \begin{restatable}[]{theorem}{second_dsm_loss_theorem} \label{eq:second_dsm_loss_theorem} Suppose the first order score $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ is given, we can learn a second order score model $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace; {\bm{\theta}})$ by optimizing the following objectives \begin{align} \label{eq:s2_least_square_naive} &~{\bm{\theta}}^\ast =\argmin_{\bm{\theta}} \ensuremath{\mathbb{E}}\xspace_{p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{ q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\ensuremath{\mathbf{x}}\xspace)}\bigg[\Big\Vert{ \ensuremath{\mathbf{x}}\xspace\x^{\mkern-1.5mu\mathsf{T}}-\tens{f}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace,\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}))}\Big\Vert_2^2\bigg],\\ \label{eq:s2_least_square} &~{\bm{\theta}}^\ast =\argmin_{\bm{\theta}} \ensuremath{\mathbb{E}}\xspace_{p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{ q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\ensuremath{\mathbf{x}}\xspace)}\bigg[\Big\Vert{ \ensuremath{\mathbf{x}}\xspace\x^{\mkern-1.5mu\mathsf{T}} - \ensuremath{\mathbf{x}}\xspace\ensuremath{{\tilde{\mathbf{x}}}}\xspace^{\mkern-1.5mu\mathsf{T}} - \ensuremath{{\tilde{\mathbf{x}}}}\xspace\ensuremath{\mathbf{x}}\xspace^{\mkern-1.5mu\mathsf{T}} - \tens{h}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace,\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}))}\Big\Vert_2^2\bigg] \end{align} where $\tens{f}(\cdot)$ and $\tens{h}(\cdot)$ are polynomials defined in \cref{eq: second order 
multivariate expectation} and \cref{corollary:s2_least_square}. Assuming the model has an infinite capacity, then the optimal parameter ${\bm{\theta}}^\ast$ satisfies $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}^\ast)=\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ for almost any $\ensuremath{{\tilde{\mathbf{x}}}}\xspace$. \end{restatable} Here \cref{eq:s2_least_square_naive} and \cref{eq:s2_least_square} correspond to the least squares objective of \cref{eq:s2_polynomial_naive} and \cref{eq:s2_polynomial} respectively, and have the same set of solutions assuming sufficient model capacity. In practice, we find that \cref{eq:s2_least_square} has a much simpler form than \cref{eq:s2_least_square_naive}, and will therefore use \cref{eq:s2_least_square} in our experiments. \subsection{High order denoising score matching} Below we generalize our approach to even higher order scores by (i) leveraging Tweedie's formula to connect higher order moments of $\ensuremath{\mathbf{x}}\xspace$ conditioned on $\ensuremath{{\tilde{\mathbf{x}}}}\xspace$ to higher order scores of $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$; and (ii) finding the corresponding least squares objective. \begin{restatable}[]{theorem}{high_order_dsm} \label{thm:high_order_dsm} $\ensuremath{\mathbb{E}}\xspace[\otimes^{n}\ensuremath{\mathbf{x}}\xspace|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]=\tens{f}_n(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, ..., \tilde{\tens{s}}_{n})$, where $\otimes^{n}\ensuremath{\mathbf{x}}\xspace\in \mathbb{R}^{D^n}$ denotes $n$-fold tensor multiplications, $\tens{f}_n(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, ..., \tilde{\tens{s}}_{n})$ is a polynomial of $\{\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), ..., \tilde{\tens{s}}_{n}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)\}$ and $\tilde{\tens{s}}_k(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ represents the $k$-th order score of $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)=\int p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)q_{\sigma}(\tilde \ensuremath{\mathbf{x}}\xspace|\ensuremath{\mathbf{x}}\xspace)d\ensuremath{\mathbf{x}}\xspace$. \end{restatable} \cref{thm:high_order_dsm} shows that there exists an equality between (high order) moments of the posterior distribution of $\ensuremath{\mathbf{x}}\xspace$ given $\ensuremath{{\tilde{\mathbf{x}}}}\xspace$ and (high order) scores with respect to $\ensuremath{{\tilde{\mathbf{x}}}}\xspace$. To get some intuition, for $n=2$ the polynomial $\tens{f}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1, \tilde{\tens{s}}_{2})$ is simply the function $\tens{f}$ in \cref{eq: second order multivariate expectation}. In \cref{app:proof}, we provide a recursive formula for obtaining the coefficients of $\tens{f}_n$ in closed form. Leveraging Theorem~\ref{thm:high_order_dsm} and the least squares estimation of $\ensuremath{\mathbb{E}}\xspace[\otimes^{k}\ensuremath{\mathbf{x}}\xspace|\ensuremath{{\tilde{\mathbf{x}}}}\xspace]$, we can construct objectives for approximating the $k$-th order scores $\tilde{\tens{s}}_k(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ as in the following theorem. 
\begin{restatable}[]{theorem}{high_order_dsm_general_objective} \label{thm:high_order_dsm_general_objective} Given score functions $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace),...,\tilde{\tens{s}}_{k-1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$, a $k$-th order score model $\tilde{\tens{s}}_k(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$, and \begin{equation*} {\bm{\theta}}^{*}=\argmin_{\bm{\theta}} \ensuremath{\mathbb{E}}\xspace_{p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\ensuremath{\mathbf{x}}\xspace)}[\Vert \otimes^{k}\ensuremath{\mathbf{x}}\xspace -\tens{f}_k(\ensuremath{{\tilde{\mathbf{x}}}}\xspace, \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), ..., \tilde{\tens{s}}_{k-1}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace), \tilde{\tens{s}}_k(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}))\Vert^2]. \end{equation*} We have $\tilde{\tens{s}}_{k}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}^{*})=\tilde{\tens{s}}_k(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ for almost all $\ensuremath{{\tilde{\mathbf{x}}}}\xspace$. % \end{restatable} As previously discussed, when $\sigma$ approaches $0$ such that $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace) \approx p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$, $\tilde{\tens{s}}_{k}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}^{*})$ well-approximates the $k$-th order score of $p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$. \section{Learning Second Order Score Models} \label{sec:learn_score_models} Although our theory can be applied to scores of any order, we focus on second order scores for empirical analysis. In this section, we discuss the parameterization and empirical performance of the learned second order score models. \subsection{Instantiating objectives for second order score models} In practice, we find that \cref{eq:s2_least_square} has a much simpler expression than \cref{eq:s2_least_square_naive}. Therefore, we propose to parameterize $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ with a model $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$, and optimize $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ with \cref{eq:s2_least_square}, which can be simplified to the following after combining \cref{corollary:s2_least_square} and \cref{eq:s2_least_square}: \begin{equation} \label{eq:second_dsm_loss} {\mathcal{L}}_{\text{$D_2$SM}}({\bm{\theta}}) \triangleq \ensuremath{\mathbb{E}}\xspace_{ p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{ q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\ensuremath{\mathbf{x}}\xspace)}\bigg[\Big\Vert{ \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})+\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})^{\mkern-1.5mu\mathsf{T}} + \frac{I - \ensuremath{\mathbf{z}}\xspace\z^{\mkern-1.5mu\mathsf{T}}}{\sigma^2} }\Big\Vert_2^2\bigg], \end{equation} where $\ensuremath{\mathbf{z}}\xspace \triangleq \frac{\ensuremath{{\tilde{\mathbf{x}}}}\xspace-\ensuremath{\mathbf{x}}\xspace}{\sigma}$. 
Note that \cref{eq:second_dsm_loss} requires knowing the first order score $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)$ in order to train the second order score model $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace; {\bm{\theta}})$. We therefore use the following hybrid objective to simultaneously train both $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ and $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$: \begin{equation} \label{eq:joint_objective} {\mathcal{L}}_{\text{joint}}({\bm{\theta}})={\mathcal{L}}_{\text{$D_2$SM}}({\bm{\theta}}) + \gamma\cdot {\mathcal{L}}_{\text{DSM}}({\bm{\theta}}), \end{equation} where ${\mathcal{L}}_{\text{DSM}}({\bm{\theta}})$ is defined in \cref{eq:dsm_gaussian} and $\gamma\in\mathbb{R}_{>0}$ is a tunable coefficient. The expectation for ${\mathcal{L}}_{\text{$D_2$SM}}({\bm{\theta}})$ and ${\mathcal{L}}_{\text{DSM}}({\bm{\theta}})$in \cref{eq:joint_objective} can be estimated with samples, and we optimize the following unbiased estimator \begin{equation} \label{eq:l_joint_sample} \resizebox{0.9\hsize}{!}{ $\displaystyle \hat{{\mathcal{L}}}_{\text{joint}}({\bm{\theta}}) = \frac{1}{N}\sum_{i=1}^{N}\bigg[\Big\Vert{ \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace_i;{\bm{\theta}})+\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace_i;{\bm{\theta}})\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace_i;{\bm{\theta}})^{\mkern-1.5mu\mathsf{T}} + \frac{I - \ensuremath{\mathbf{z}}\xspace_i\ensuremath{\mathbf{z}}\xspace_i^{\mkern-1.5mu\mathsf{T}}}{\sigma^2} }\Big\Vert_2^2+ \frac{\gamma}{2} \Big\Vert \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace_i;{\bm{\theta}}) + \frac{\ensuremath{\mathbf{z}}\xspace_i}{\sigma} \Big\Vert_2^2 \bigg],$ } \end{equation} where we define $\ensuremath{\mathbf{z}}\xspace_i \triangleq \frac{\ensuremath{{\tilde{\mathbf{x}}}}\xspace_i-\ensuremath{\mathbf{x}}\xspace_i}{\sigma}$, and $\{\ensuremath{{\tilde{\mathbf{x}}}}\xspace_i\}_{i=1}^{N}$ are samples from $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace)=\int p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)q_{\sigma}(\tilde \ensuremath{\mathbf{x}}\xspace|\ensuremath{\mathbf{x}}\xspace)d\ensuremath{\mathbf{x}}\xspace$ which can be obtained by adding noise to samples from $p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$. Similarly to DSM, when $\sigma \to 0$, the optimal model $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}^*)$ that minimizes \cref{eq:l_joint_sample} will be close to the second order score of $p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$ because $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace) \approx p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$. When $\sigma$ is large, the learned $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ can be applied to tasks such as uncertainty quantification for denoising, which will be discussed in \cref{sec:uncertainty}. 
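For concreteness, a minimal PyTorch-style sketch of the estimator in \cref{eq:l_joint_sample} might look as follows; the interfaces \texttt{score1} and \texttt{score2} (returning tensors of shape $(N,D)$ and $(N,D,D)$, respectively) are illustrative assumptions rather than the implementation used for the experiments.
\begin{verbatim}
import torch

def joint_loss(score1, score2, x, sigma, gamma=1.0):
    # Monte Carlo estimate of L_joint: D2SM residual + (gamma/2) * DSM residual.
    N, D = x.shape
    z = torch.randn_like(x)                    # z_i ~ N(0, I)
    x_tilde = x + sigma * z                    # noisy sample from q_sigma(.|x)
    s1 = score1(x_tilde)                       # (N, D)    first order score model
    s2 = score2(x_tilde)                       # (N, D, D) second order score model

    eye = torch.eye(D, device=x.device).expand(N, D, D)
    s1_outer = s1.unsqueeze(2) * s1.unsqueeze(1)   # s1 s1^T
    z_outer = z.unsqueeze(2) * z.unsqueeze(1)      # z z^T

    res2 = s2 + s1_outer + (eye - z_outer) / sigma**2   # D2SM residual
    res1 = s1 + z / sigma                               # DSM residual

    loss2 = res2.pow(2).flatten(1).sum(dim=1)   # squared Frobenius norm per sample
    loss1 = res1.pow(2).sum(dim=1)
    return (loss2 + 0.5 * gamma * loss1).mean()
\end{verbatim}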
For downstream tasks that require only the diagonal of $\tilde{\tens{s}}_2$, we can instead optimize a simpler objective \begin{gather} \label{eq:joint_objective_diagonal} {\mathcal{L}}_{\text{joint-diag}}({\bm{\theta}})\triangleq {\mathcal{L}}_{\text{$D_2$SM-diag}}({\bm{\theta}}) + \gamma\cdot {\mathcal{L}}_{\text{DSM}}({\bm{\theta}}),~~\text{where}\\ \label{eq:joint_diagonal_samples} \resizebox{0.9\hsize}{!}{ $\displaystyle {\mathcal{L}}_{\text{$D_2$SM-diag}}({\bm{\theta}}) \triangleq \ensuremath{\mathbb{E}}\xspace_{p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace|\ensuremath{\mathbf{x}}\xspace)}\bigg[\Big\Vert{\text{diag}( \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})) + \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})\odot \tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})+\frac{\textbf{1} - \ensuremath{\mathbf{z}}\xspace\odot \ensuremath{\mathbf{z}}\xspace}{\sigma^2}}\Big\Vert_2^2\bigg]$. } \end{gather} Here $\text{diag}(\cdot)$ denotes the diagonal of a matrix and $\odot$ denotes element-wise multiplication. Optimizing \cref{eq:joint_objective_diagonal} only requires parameterizing $\text{diag}( \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}))$, which can significantly reduce the memory and computational cost for training and running the second order score model. Similar to $\hat{{\mathcal{L}}}_{\text{joint}}({\bm{\theta}})$, we estimate the expectation in \cref{eq:joint_diagonal_samples} with empirical means. \subsection{Parameterizing second order score models} \label{sec:low_rank_parameterization} In practice, the performance of learning second order scores is affected by model parameterization. As many real world data distributions (\emph{e.g}\onedot, images) tend to lie on low dimensional manifolds~\cite{narayanan2010sample,dasgupta2008random,saul2003think}, we propose to parametrize $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ with low rank matrices defined as below \begin{equation*} \tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}}) = \bm{\alpha}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})+\bm{\beta}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})\bm{\beta}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})^{\mkern-1.5mu\mathsf{T}}, \end{equation*} where $\bm{\alpha}(\cdot;{\bm{\theta}}):\mathbb{R}^D\to\mathbb{R}^{D\times D}$ is a diagonal matrix, $\bm{\beta}(\cdot;{\bm{\theta}}):\mathbb{R}^D\to\mathbb{R}^{D\times r}$ is a matrix with shape $D\times r$, and $r\le D$ is a positive integer. \subsection{Antithetic sampling for variance reduction} As the standard deviation of the perturbed noise $\sigma$ approximates zero, training score models with denoising methods could suffer from a high variance. 
% Inspired by a variance reduction method for DSM~\cite{wang2020wasserstein,song2021train}, we propose a variance reduction method for $D_2$SM{} \begin{equation*} \label{eq:l_joint_sample_vr} \resizebox{0.92\hsize}{!}{ $\displaystyle\mathcal{L}_{\text{$D_2$SM-VR}} = \ensuremath{\mathbb{E}}\xspace_{\ensuremath{\mathbf{x}}\xspace\sim p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)}\ensuremath{\mathbb{E}}\xspace_{\ensuremath{\mathbf{z}}\xspace\sim \mathcal{N}(0,I)}\bigg[\tens{\psi}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace_{+})^2+\tens{\psi}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace_{-})^2 +2\frac{\ensuremath{\mathbf{I}}\xspace-\ensuremath{\mathbf{z}}\xspace\z^{\mkern-1.5mu\mathsf{T}}}{\sigma}\odot(\tens{\psi}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace_{+}) + \tens{\psi}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace_{-}) -2\tens{\psi}(\ensuremath{\mathbf{x}}\xspace))\bigg]$,} \end{equation*} where $\ensuremath{{\tilde{\mathbf{x}}}}\xspace_{+}=\ensuremath{\mathbf{x}}\xspace+\sigma \ensuremath{\mathbf{z}}\xspace$, $\ensuremath{{\tilde{\mathbf{x}}}}\xspace_{-}=\ensuremath{\mathbf{x}}\xspace-\sigma \ensuremath{\mathbf{z}}\xspace$ and $\tens{\psi}=\tilde{\tens{s}}_2+\tilde{\tens{s}}_1\tilde{\tens{s}}_1^{\mkern-1.5mu\mathsf{T}}$. Instead of using independent noise samples, we apply antithetic sampling and use two correlated (opposite) noise vectors centered at $\ensuremath{\mathbf{x}}\xspace$. Similar to \cref{eq:joint_objective}, we define $\mathcal{L}_{\text{joint-VR}} =\mathcal{L}_{\text{$D_2$SM-VR}}+\gamma\cdot\mathcal{L}_{\text{DSM-VR}}$, where $\mathcal{L}_{\text{DSM-VR}}$ is proposed in ~\cite{wang2020wasserstein}. We empirically study the role of variance reduction (VR) in training models with DSM and $D_2$SM{}. We observe that VR is crucial for both DSM and $D_2$SM{} when $\sigma$ is approximately zero, but is optional when $\sigma$ is large enough. To see this, we consider a 2-d Gaussian distribution $\mathcal{N}(0,I)$ and train $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ and $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ with DSM and $D_2$SM{} respectively. We plot the learning curves in \cref{fig:vr_1,fig:vr_2}, and visualize the first dimension of the estimated scores for multiple noise scales $\sigma$ in \cref{fig:vr_3,fig:vr_4}. We observe that when $\sigma=0.001$, both DSM and $D_2$SM{} have trouble converging after a long period of training, while the VR counterparts converge quickly (see \cref{fig:variance_reduction}). When $\sigma$ gets larger, DSM and $D_2$SM{} without VR can both converge quickly and provide reasonable score estimations (\cref{fig:vr_3,fig:vr_4}). We provide extra details in \cref{app:analysis}. 
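To make the antithetic construction explicit, the following sketch (with the same assumed \texttt{score1}/\texttt{score2} interfaces as above) evaluates $\tens{\psi}=\tilde{\tens{s}}_2+\tilde{\tens{s}}_1\tilde{\tens{s}}_1^{\mkern-1.5mu\mathsf{T}}$ at the coupled points $\ensuremath{\mathbf{x}}\xspace\pm\sigma\ensuremath{\mathbf{z}}\xspace$ and at $\ensuremath{\mathbf{x}}\xspace$, reusing a single noise draw for the pair; the scaling of the coupling term follows the displayed $D_2$SM-VR objective.
\begin{verbatim}
import torch

def d2sm_vr_loss(score1, score2, x, sigma):
    # Antithetic sampling: reuse one noise draw z for the pair x + sigma*z, x - sigma*z.
    N, D = x.shape
    z = torch.randn_like(x)
    x_plus, x_minus = x + sigma * z, x - sigma * z

    def psi(inp):                              # psi = s2 + s1 s1^T, shape (N, D, D)
        s1, s2 = score1(inp), score2(inp)
        return s2 + s1.unsqueeze(2) * s1.unsqueeze(1)

    eye = torch.eye(D, device=x.device).expand(N, D, D)
    z_outer = z.unsqueeze(2) * z.unsqueeze(1)
    # Scaling taken from the displayed D2SM-VR objective
    # (the non-VR loss in Eq. (second_dsm_loss) uses 1/sigma^2).
    coupling = (eye - z_outer) / sigma

    p_plus, p_minus, p_x = psi(x_plus), psi(x_minus), psi(x)
    val = p_plus.pow(2) + p_minus.pow(2) + 2 * coupling * (p_plus + p_minus - 2 * p_x)
    return val.flatten(1).sum(dim=1).mean()
\end{verbatim}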
\begin{figure}[h] \vspace{-0.5\baselineskip} \centering \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/second_dsm_gaussian_1e_3.png} \caption{$D_2$SM{} loss} \label{fig:vr_1} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/second_dsm_vr_gaussian_1e_3.png} \caption{$D_2$SM{}-VR loss } \label{fig:vr_2} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/s1_predicted_new.png} \caption{Estimated $\tilde{\tens{s}}_1$} \label{fig:vr_3} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{main_figures/s2_predicted_new.png} \caption{Estimated $\tilde{\tens{s}}_2$} \label{fig:vr_4} \end{subfigure} \caption{From left to right: (a) $D_2$SM{} loss without variance reduction ($\sigma=10^{-3}$). (b) $D_2$SM{} loss with variance reduction ($\sigma=10^{-3}$). (c) Estimated $\tilde{\tens{s}}_1$. (d) Estimated $\tilde{\tens{s}}_2$, where the estimation for $D_2$SM{} ($0.001$) is too far from the ground truth to appear on the plot. } \vspace{-1\baselineskip} \label{fig:variance_reduction} \end{figure} \subsection{The accuracy and efficiency of learning second order scores} \label{sec:l2} We show that the proposed method can estimate second order scores more efficiently and accurately than those obtained by automatic differentiation of a first order score model trained with DSM. We observe in our experiments that $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ jointly optimized via $\hat{\mathcal{L}}_{\text{joint}}$ or $\hat{\mathcal{L}}_{\text{joint-diag}}$ has a comparable empirical performance as trained directly by DSM, so we optimize $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ and $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ jointly in later experiments. We provide additional experimental details in Appendix~\ref{app:analysis}. \textbf{Learning accuracy}~We consider three synthetic datasets whose ground truth scores are available---a 100-dimensional correlated multivariate normal distribution and two high dimensional mixture of logistics distributions in \cref{tab:mse_synthetic}. We study the performance of estimating $\tilde{\tens{s}}_2$ and the diagonal of $\tilde{\tens{s}}_2$. For the baseline, we estimate \emph{second order scores} by taking automatic differentiation of $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ trained jointly with $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ using \cref{eq:l_joint_sample} or \cref{eq:joint_diagonal_samples}. As mentioned previously, $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ trained with the joint method has the same empirical performance as trained directly with DSM. For our method, we directly evaluate $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$. We compute the mean squared error between estimated \emph{second order scores} and the ground truth score of the \emph{clean} data since we use small $\sigma$ and $q_{\sigma}(\ensuremath{{\tilde{\mathbf{x}}}}\xspace) \approx p_{\text{data}}(\ensuremath{\mathbf{x}}\xspace)$ (see Table~\ref{tab:mse_synthetic}). 
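The baseline in Table~\ref{tab:mse_synthetic}---recovering second order scores by differentiating a first order score model---can be sketched as follows; the per-sample loop over \texttt{torch.autograd.functional.jacobian} is what makes it slow for high-dimensional data, whereas the direct model needs only a single forward pass. Shapes and model interfaces are again assumptions for illustration.
\begin{verbatim}
import torch
from torch.autograd.functional import jacobian

def second_scores_via_autodiff(score1, x_tilde):
    # Baseline: Jacobian of the first order score model, one sample at a time.
    out = []
    for xi in x_tilde:
        J = jacobian(lambda v: score1(v.unsqueeze(0)).squeeze(0), xi)  # (D, D)
        out.append(J)
    return torch.stack(out)                    # (N, D, D)

def second_scores_direct(score2, x_tilde):
    # Ours: a single forward pass of the learned second order score model.
    return score2(x_tilde)                     # (N, D, D)
\end{verbatim}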
We observe that $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ achieves better performance than the gradients of $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$. \begin{table}[h] \vspace{-1\baselineskip} \caption{Mean squared error between the estimated \emph{second order scores} and the ground truth on $10^5$ test samples. Each setup is trained with three random seeds and multiple noise scales $\sigma$. } \begin{center} {\setlength{\extrarowheight}{1.8pt} \begin{adjustbox}{max width=\linewidth} \begin{tabular}{cccc||cccc} \Xhline{3\arrayrulewidth} Methods & $\sigma=0.01$ &$\sigma=0.05$ & $\sigma=0.1$ & Methods & $\sigma=0.01$ & $\sigma=0.05$ & $\sigma=0.1$ \\ \hline \multicolumn{4}{c||}{Multivariate normal (100-d)} & \multicolumn{4}{c}{ Mixture of logistics diagonal estimation (50-d, 20 mixtures)} \\ \hline % $\tilde{\tens{s}}_1$ grad (DSM) &43.80$\pm$0.012 &43.76$\pm$0.001 & 43.75$\pm$0.001 & $\tilde{\tens{s}}_1$ grad (DSM-VR) &26.41$\pm$0.55 &26.13$\pm$ 0.53 & 25.39$\pm$ 0.50 \\ $\tilde{\tens{s}}_1$ grad (DSM-VR) &9.40$\pm$0.049 &9.39$\pm$0.015 & 9.21$\pm$0.020 & $\tilde{\tens{s}}_2$ (Ours) &\textbf{18.43$\pm$ 0.11} & \textbf{18.50$\pm$ 0.25} & \textbf{17.88$\pm$0.15}\\ \cline{5-8} $\tilde{\tens{s}}_2$ (Ours, $r=15$) & 7.12$\pm$ 0.319 &6.91$\pm$0.078 &7.03$\pm$0.039 &\multicolumn{4}{c}{Mixture of logistics diagonal estimation (80-d, 20 mixtures)} \\ \cline{5-8} $\tilde{\tens{s}}_2$ (Ours, $r=20$) & 5.24$\pm$0.065 &5.07$\pm$0.047 &5.13$\pm$0.065 &$\tilde{\tens{s}}_1$ grad (DSM-VR) &32.80$\pm$ 0.34 &32.44$\pm$ 0.30 &31.51$\pm$ 0.43 \\ $\tilde{\tens{s}}_2$ (Ours, $r=30$) & \textbf{1.76$\pm$0.038} &\textbf{2.05$\pm$0.544} &\textbf{1.76$\pm$0.045} &$\tilde{\tens{s}}_2$ (Ours) &\textbf{21.68$\pm$ 0.18} & \textbf{22.23$\pm$0.08} &\textbf{22.18$\pm$ 0.08} \\ \Xhline{3\arrayrulewidth} \end{tabular} \end{adjustbox} } \end{center} \vspace{-0.5\baselineskip} \label{tab:mse_synthetic} \end{table} \textbf{Computational efficiency}~\label{sec:efficiency}Computing the gradients of $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ via automatic differentiation can be expensive for high dimensional data and deep neural networks. To see this, we consider two models---a 3-layer MLP and a U-Net~\cite{ronneberger2015u}, which is used for image experiments in the subsequent sections. We consider a 100-d data distribution for the MLP model and a 784-d data distribution for the U-Net. We parameterize $\tilde{\tens{s}}_1$ and $\tilde{\tens{s}}_2$ with the same model architecture and use a batch size of $10$ for both settings. We report the wall-clock time averaged in 7 runs used for estimating second order scores during test time on a TITAN Xp GPU in Table~\ref{tab:efficiency}. We observe that $\tilde{\tens{s}}_2(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ is 500$\times$ faster than using automatic differentiation for $\tilde{\tens{s}}_1(\ensuremath{{\tilde{\mathbf{x}}}}\xspace;{\bm{\theta}})$ on the MNIST dataset. \section{Related Work} \label{app:related_work} Existing methods for score estimation focus mainly on estimating the first order score of the data distribution. For instance, score matching~\cite{hyvarinen2005estimation} approximates the first order score by minimizing the Fisher divergence between the data distribution and model distribution. 
Sliced score matching~\cite{song2019sliced} and finite-difference score matching~\cite{pang2020efficient} provide alternatives to estimating the first order score by approximating the score matching loss~\cite{hyvarinen2005estimation} using Hutchinson's trace estimator~\cite{hutchinson1989stochastic} and finite difference respectively. Denoising score matching (DSM)~\cite{vincent2011connection} estimates the first order score of a noise perturbed data distribution by predicting the added perturbed "noise" given a noisy observation. However, none of these methods can directly model and estimate higher order scores. In this paper we study DSM from the perspective of Tweedie's formula and propose a method for estimating high order scores. There are also other ways to derive DSM without using Tweedie's formula. For example, \cite{raphan2011least} provides a proof based on Bayesian least squares estimation. Stein's Unbiased Risk Estimator (SURE)~\citep{stein1981estimation} can also provide an alternative proof based on integration by parts. In contrast, our derivation, which leverages a general version of Tweedie's formula on high order moments of the posterior, can be extended to directly learning high order scores.
\section{Introduction} This paper is concerned with a variable exponent Picone identity in the context of sub-Riemannian geometry. We derive a nonlinear Picone identity which allows us to study some qualitative properties of the principal eigenvalue of the $p(x)$-sub-Laplacian with respect to general vector fields on smooth manifolds. As by-products, we also derive Hardy type inequalities and Caccioppoli estimates with variable exponents. These results appear here for the first time, even in the Euclidean setting. In recent years, several authors have devoted their research to the study of variable exponent elliptic equations and systems with $p(x)$-growth conditions in the Euclidean setting, with many interesting results \cite{Alv,Deng,Fan,FZZ,HHV,MV}. Models involving $p(x)$-growth conditions arise from physical processes such as nonlinear elasticity theory, electrorheological fluids, image processing, etc.\ \cite{AMS,AMSo,Ru}. It has been observed that the $p(x)$-Laplacian is similar in many respects to the classical $p$-Laplacian ($p$ constant), but it lacks certain vital properties such as homogeneity. This makes the nonlinearity considerably more complicated, and many of the known approaches to the $p$-Laplacian no longer work for the $p(x)$-Laplacian. It is therefore interesting to consider the $p(x)$-Laplacian in the sub-elliptic setting and investigate which of the known results for constant $p$ hold for variable exponents. Let $M$ be an $n$-dimensional smooth manifold equipped with a volume form $dx$ and let $\{X_k\}_{k=1}^N$, $n\ge N$, be a family of vector fields defined on $M$. Consider the operator $$\mathscr{L}_X := \sum_{k=1}^N X^*_kX_k,$$ which is a second-order differential operator usually called the canonical sub-Laplacian. This operator is related to the operator for the sum of squares of vector fields, and it is well known to be locally hypoelliptic if the commutators of the vector fields $\{X_k\}_{k=1}^N$ generate the tangent space of $M$ as a Lie algebra, due to H\"ormander's pioneering work \cite{Hom}. We denote the horizontal gradients for general vector fields by $$\nabla_X = (X_1, \cdots, X_N) \ \ \ \text{and} \ \ \ \nabla_X^* = (X_1^*, \cdots, X_N^*),$$ where $X_k$ and its formal adjoint $X_k^*$ are respectively given by $$X_k = \sum_{j=1}^n a_{kj}(x)\frac{\partial}{\partial x_j} \ \ \text{and}\ \ X_k^* = -\sum_{j=1}^n \frac{\partial}{\partial x_j} (a_{kj}(x)), \ \ k= 1, \cdots, N.$$ There are a number of examples of manifolds on which such vector fields can be defined. For example, we mention, among others, Carnot groups, Heisenberg groups, Engel groups, and the Grushin plane (which does not even possess a group structure). Interested readers can see the book \cite{RS} for more examples and detailed discussions of the sub-Laplacian and its various extensions in each case. In the case $M=\mathbb{R}^n$, $dx$ is the Lebesgue measure, and $\nabla_X=\nabla$ and $\mathscr{L}_X =\Delta$ are the usual Euclidean gradient and Laplacian, respectively. Let $p:\bar{\Omega} \to \mathbb{R}$ be a continuous function with $p(x)>1$ for $x\in \bar{\Omega}\subset M$. We define the $p(x)$-sub-Laplacian for general vector fields on $M$ by the formula $$\mathscr{L}_pu:= \nabla^*_X(|\nabla_X u|^{p(x)-2}\nabla_Xu),$$ where $u$ is a smooth function. If $p(x)=p$ ($p$ constant), the operator $\mathscr{L}_pu$ becomes the $p$-sub-Laplacian $ \nabla^*_X(|\nabla_X u|^{p-2}\nabla_Xu)$. Throughout, $|x|$ stands for the Euclidean length of $x=(x_1,\cdots,x_n)$.
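As a concrete illustration of this setup, one of the examples mentioned above is the Grushin plane: $M=\mathbb{R}^2$ with the vector fields $X_1=\frac{\partial}{\partial x_1}$ and $X_2=x_1\frac{\partial}{\partial x_2}$, so that $\nabla_X=(\partial_{x_1}, x_1\partial_{x_2})$ and, with the adjoint convention above, the $p(x)$-sub-Laplacian takes the form $$\mathscr{L}_pu = -\frac{\partial}{\partial x_1}\Big(|\nabla_X u|^{p(x)-2}\frac{\partial u}{\partial x_1}\Big) - x_1\frac{\partial}{\partial x_2}\Big(|\nabla_X u|^{p(x)-2}\,x_1\frac{\partial u}{\partial x_2}\Big).$$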
As mentioned earlier, various partial differential equations with variable exponent growth conditions have appeared in the literature (see \cite{Alv,Deng,Fan, FZZ,HHV,MV} for instance), but there is a scarcity of such mathematical models in the subelliptic setting. In this paper, however, we consider the indefinite weighted Dirichlet eigenvalue problem for the $p(x)$-sub-Laplacian on $\Omega\subset M$, $p(x)>1$, \begin{align}\label{eq1} \left. \begin{array}{ll} -\nabla^*_X(|\nabla_X u|^{p(x)-2}\nabla_Xu) = \lambda g(x)|u|^{p(x)-2} u, & \ x \in \Omega,\\ \ \ \ u>0, &\ x \in \Omega,\\ \ \ \ u=0, &\ x \in \partial\Omega, \end{array} \right. \end{align} and discuss some properties of the eigenvalue $\lambda \in \mathbb{R}^+$ and the corresponding eigenfunction $u(x)$ in certain Sobolev spaces with variable exponents \cite{DHHR,CFRW,FZ}. It is well known in the classical setting ($p(x)=p$ constant and $M=\mathbb{R}^n$) that Problem \eqref{eq1} possesses a closed set of eigenvalues containing a nondecreasing sequence of nonnegative eigenvalues $\{\lambda_k\}$ which grows to $+\infty$ as $k\to +\infty$, and that the first nonzero eigenvalue is simple and isolated. Due to the complicated nonlinearity of the $p(x)$-Laplacian and the inhomogeneity of the corresponding variable exponent norm, some of the results of the classical case may fail to hold, or hold only under restrictive assumptions. In \cite{FZZ}, the authors studied \eqref{eq1} (with $g(x)=1$, $M=\mathbb{R}^n$), showed the existence of infinitely many eigenvalues, and established sufficient conditions for the infimum of the spectrum (called the principal eigenvalue), $$\lambda_{1,p} = \inf _{u\ne 0}\frac{\int_\Omega |\nabla u|^{p(x)}dx}{ \int_\Omega |u|^{p(x)}dx}, \ \ \ p(x)>1,$$ to be zero and positive, respectively. The property that $\lambda_{1,p}>0$ is very useful in analysis and applications. Motivated by \cite{FZZ}, we assume the existence of $\lambda_{1,p}>0$ for \eqref{eq1} and prove its uniqueness, monotonicity, simplicity and isolatedness. The variable exponent Picone identity (discussed in Section \ref{sec2}) plays a crucial role in our proofs. The Picone identity is a very useful tool in the study of qualitative properties of solutions of differential equations, and for this reason several linear and nonlinear Picone type identities have been derived to handle differential equations of various types. The Picone identity was originally developed by Mauro Picone in 1910 to prove the Sturm comparison principle and oscillation theory for a system of differential equations. This identity was later extended to partial differential equations involving the Laplacian by Allegretto \cite{Al1} and the $p$-Laplacian by Allegretto and Huang \cite{AH} to establish, among others, existence and nonexistence of positive solutions, Sturmian comparison principles, Liouville type theorems, Hardy inequalities and some profound results involving $p$-Laplace equations and systems. Precisely, Allegretto \cite{Al1} proved that, for nonnegative differentiable functions $u$ and $v$ with $v \neq 0$, the following formula \begin{align}\label{e11} |\nabla u|^2+\frac{u^2}{v^2}|\nabla v|^2-2\frac{u}{v}\nabla u\nabla v= |\nabla u|^2-\nabla\left(\frac{u^2}{v}\right)\nabla v \ge 0 \end{align} holds. Allegretto and Huang \cite{AH} extended \eqref{e11} to handle $p$-Laplace equations and eigenvalue problems involving the $p$-Laplacian.
Their identity reads as follows, for $u\ge 0$, $v>0$, then \begin{align}\label{e12} |\nabla u|^p+(p-1)\frac{u^p}{v^p}|\nabla v|^p &-p\frac{u^{p-1}}{v^{p-1}}|\nabla v|^{p-2}\nabla v\nabla u = R_p(u,v), \end{align} where \begin{align*} R_p(u,v):=|\nabla u|^p-\nabla\left(\frac{u^p}{v^{p-1}}\right)|\nabla v|^{p-2}\nabla v \ge 0. \end{align*} Several extensions and generalization of Picone identity have been established in order to handle more general elliptic operators. Tyagi \cite{Ty} and Bal \cite{Ba} established nonlinear versions of \eqref{e11} and its $p$-Laplace analogue \eqref{e12}, respectively, with several applications, (see also \cite{DT,Fe,Tir}). For other interesting extension of Picone type identities one can find \cite{Ja3,Ja4} (for Finsler $p$-Laplacian with application to Caccioppoli inequality), \cite{RSS1,RSS2,RSS3} (for general vector fields and $p$-sub-Laplacian with applications to Grushin plane, Heisenberg group, Stratified Lie groups), \cite{NZW} (for $p$-sub-Laplacian on Heisenberg group and applications to Hardy inequalities), \cite{SY} (for nonlinear Picone identities for anisotropic $p$-sub-Laplacian and $p$-biLaplacian with applications to horizontal Hardy inequalities and weighted eigenvalue problem on Stratified Lie groups). Allegretto \cite{Al3} established variable exponent Picone type identity for differentiable functions $v>0$, $0\le u\in C^\infty_0(\Omega)$ and continuous $p(x)>1$ as follows: \begin{align}\label{e14} \frac{|\nabla u|^{p(x)}}{p(x)} &- \nabla\left[\frac{u^{p(x)}}{p(x)v^{p(x)-1}} \right]|\nabla v|^{p(x)-2}\nabla v \nonumber\\ & = \frac{|\nabla u|^{p(x)}}{p(x)} - \left(\frac{u}{v}\right)^{p(x)-1}|\nabla v|^{p(x)-2}\nabla v\nabla u +\frac{p(x)-1}{p(x)} \left(\frac{u}{v}|\nabla v| \right)^{p(x)}\\ & \hspace{1cm} + \frac{1}{p(x)} \frac{u^{p(x)}}{v^{p(x)-1}}|\nabla v|^{p(x)-2} \left[\frac{1}{p(x)} - \ln\left(\frac{u}{v}\right)\right]\nabla v\nabla p(x) \ge 0\nonumber \end{align} on the assumption that $\nabla v\nabla p(x)=0$. He used the inequality to prove Barta theorem and some other results. Later, Yoshida \cite{Yo1} (see also \cite{Yo2,Yo3}) established similar Picone identities for quasilinear and half-linear elliptic equations involving $p(x)$-Laplacian and pseudo $p(x)$-Laplacian, and consequently developed Sturmian comparison theory. Most recently, Feng and Han \cite{FH}, motivated by Allegretto \cite{Al3} proved a modified form of \eqref{e14} and showed that \begin{align} |\nabla u|^{p(x)} - \nabla\left(\frac{u^{p(x)}}{v^{p(x)-1}} \right)|\nabla v|^{p(x)-2}\nabla v \ge 0 \end{align} if $\nabla v\nabla p(x)=0$ a.e in $\Omega$, with equality if and only if $\nabla(u/v)=0$ in $\Omega$. They proved monotonicity of principal eigenvalue $\lambda_{1,p}$ and a variable exponent Barta inequality for $p(x)$-Laplacian in the form \begin{align*} \lambda_{1,p} \ge \inf_{x\in \Omega}\left[\frac{\Delta_pv}{v^{p(x)-1}}\right], \ \ \ \Omega\subset \mathbb{R}^n, \end{align*} where $\Delta_p:=-\nabla(|\nabla v|^{p(x)-2}\nabla v)$, on the assumption that $\nabla v\nabla p(x)=0$. In this paper, we derive new generalized variable exponent Picone type identities for general vector fields in the sub-Riemannian settings. The derived generalized identity contains some known identities in various setting as will be discussed in Section \ref{sec2}. Consequently, we give several applications to qualitative properties of the principal eigenvalue of $p(x)$-sub-Laplacian. 
Here, we are concerned with uniqueness, simplicity, monotonicity and isolatedness of the Dirichlet principal eigenvalue. These are discussed in Section \ref{sec3}. Lastly, motivated by \cite{RSS1}, we derive, as a consequence of the Picone identity, sub-elliptic variable exponent Caccioppoli estimates of the form \begin{align*} \int_\Omega \phi^{p(x)}|\nabla_Xv|^{p(x)}dx \le (p^+)^{p^+} \int_\Omega v^{p(x)} |\nabla_X\phi|^{p(x)} dx \end{align*} for every nonnegative test function $\phi \in C^\infty_0(\Omega)$, where $v$ is a sub-solution in $\Omega\subset M$ and $p^+:=ess\sup p(x)$. \section{Nonlinear variable exponent Picone identity} \label{sec2} Here we give the statement and the proof of the nonlinear Picone identity with variable exponent, which is the main result of this section. First, we state some hypotheses adopted in this section (and of course throughout the paper) and Young's inequality in the forms that will be applied here and later. Let $M$ be an $n$-dimensional smooth manifold and $\Omega$ any domain in $M$; $p(x)>1$ is a continuous function on $\bar{\Omega}$, and $p'(x) = p(x)/(p(x)-1)$ is the H\"older conjugate of $p(x)$. \begin{lemma}\label{lemY} (Classical Young's inequality) Let $s\ge 0$, $t\ge0$, and $p(x)>1$ such that $1/p(x)+1/p'(x)=1$. There holds the inequality \begin{align}\label{Y1} st \le \frac{s^{p(x)}}{p(x)} + \frac{t^{p'(x)}}{p'(x)} \end{align} with equality if and only if $s^{p(x)}=t^{p'(x)}$. \end{lemma} \noindent Inequality \eqref{Y1} is the classical Young's inequality, which can be recast in the following form. \begin{lemma}\label{Y2} (Modified Young's inequality) Let $\Phi(x),\Psi(x)\ge 0$, $p(x)>1$ such that $1/p(x)+1/p'(x)=1$ and $\varepsilon:\Omega\to \mathbb{R}^+$ be a continuous and bounded function. There holds the inequality \begin{align}\label{Y2} \Phi\Psi^{p(x)-1} \le \frac{\Phi^{p(x)}}{p(x)\varepsilon(x)^{p(x)-1}} + \frac{p(x)-1}{p(x)}\varepsilon(x) \Psi^{p(x)} \end{align} for a.e. $x\in \Omega$. \end{lemma} \proof Applying the classical Young's inequality \eqref{Y1} with $$ s = \frac{\Phi}{\varepsilon(x)^{\frac{p(x)-1}{p(x)}}} \ \ \ \text{and}\ \ \ t = \left(\Psi \varepsilon(x)^{\frac{1}{p(x)}}\right)^{p(x)-1},$$ we have \begin{align*} \Phi\Psi^{p(x)-1}& = \left(\frac{\Phi}{\varepsilon(x)^{\frac{p(x)-1}{p(x)}}}\right) \left(\Psi \varepsilon(x)^{\frac{1}{p(x)}}\right)^{p(x)-1}\\ &\le \frac{\Phi^{p(x)}}{p(x)\varepsilon(x)^{p(x)-1}} + \frac{p(x)-1}{p(x)}\left(\Psi \varepsilon(x)^{\frac{1}{p(x)}}\right)^{p(x)}. \end{align*} \qed Next is the variable exponent Picone identity. \begin{theorem}\label{Pic-thm} Let $u\ge 0$ and $v>0$ be nonconstant differentiable functions a.e. in $\Omega$. Suppose $p:\bar{\Omega}\to (0,\infty)$ is a $C^1$-function with $p(x)>1$, and $f:(0,\infty)\to (0,\infty)$ is a $C^1$-function satisfying $f(y)>0$ and $f'(y)\ge (p(x)-1)\left[f(y)^{\frac{p(x)-2}{p(x)-1}}\right]$ for $y>0$. Define \begin{align}\label{e23} L(u,v) &= |\nabla_Xu|^{p(x)}- \frac{u^{p(x)}\ln u}{f(v)} |\nabla_Xv|^{p(x)-2}\nabla_Xv\nabla_Xp(x) \nonumber \\ &-p(x)\frac{u^{p(x)-1}}{f(v)} |\nabla_Xv|^{p(x)-2}\nabla_Xv \nabla_Xu + \frac{u^{p(x)}f'(v)}{(f(v))^2} |\nabla_Xv|^{p(x)} \end{align} and \begin{align}\label{e24} R(u,v) = |\nabla_Xu|^{p(x)}- \nabla_X\left(\frac{u^{p(x)}}{f(v)} \right)|\nabla_Xv|^{p(x)-2}\nabla_Xv. \end{align} Then \begin{enumerate} \item $L(u,v)=R(u,v)$. \item Moreover $L(u,v)\ge 0$ if $\nabla_Xv\nabla_Xp(x)\equiv 0$. \item Furthermore, $L(u,v)=0$ a.e. in $\Omega$ if and only if $\nabla_X(u/v)=0$ a.e. in $\Omega$.
\end{enumerate} \end{theorem} \proof By direct computation we have \begin{align*} R(u,v) & = |\nabla_Xu|^{p(x)}- \left(\frac{\nabla_X(u^{p(x)})}{f(v)} - \frac{u^{p(x)}\nabla_X(f(v))}{(f(v))^2} \right)|\nabla_Xv|^{p(x)-2}\nabla_Xv\\ &= |\nabla_Xu|^{p(x)} - \frac{u^{p(x)}\ln u \nabla_Xp(x) +p(x)u^{p(x)-1}\nabla_Xu}{f(v)} |\nabla_Xv|^{p(x)-2}\nabla_Xv\\ & \hspace{2cm} + \frac{u^{p(x)}f'(v)}{(f(v))^2} |\nabla_Xv|^{p(x)}\\ & = L(u,v), \end{align*} which proves $(1)$ of the theorem. Next we verify $L(u,v)\ge 0$. Rewriting the expression for $L(u,v)$ as follows \begin{align*} L(u,v) &= |\nabla_Xu|^{p(x)}-p(x)\frac{u^{p(x)-1}}{f(v)} |\nabla_Xv|^{p(x)-1}|\nabla_Xu | + \frac{u^{p(x)}f'(v)}{(f(v))^2} |\nabla_Xv|^{p(x)} \\ & \ \ \ +p(x)\frac{u^{p(x)-1}}{f(v)} \left(|\nabla_Xv| |\nabla_Xu| - \nabla_Xv\nabla u \right) - \frac{u^{p(x)}\ln u}{f(v)} |\nabla_Xv|^{p(x)-2}\nabla_Xv\nabla_Xp(x)\\ & = p(x)\left(\frac{|\nabla_Xu|^{p(x)}}{p(x)} +\frac{p(x)-1}{p(x)} \left[\frac{(u|\nabla_Xv|)^{p(x)-1}}{f(v)} \right]^{\frac{p(x)}{p(x)-1}}\right) + \frac{u^{p(x)}f'(v)}{(f(v))^2} |\nabla_Xv|^{p(x)}\\ & \ \ \ -(p(x)-1) \left[\frac{(u|\nabla_Xv|)^{p(x)-1}}{f(v)} \right]^{\frac{p(x)}{p(x)-1}} -p(x)\frac{u^{p(x)-1}}{f(v)} |\nabla_Xv|^{p(x)-1}|\nabla_Xu | \\ &\ \ \ +p(x)\frac{u^{p(x)-1}}{f(v)} \left(|\nabla_Xv| |\nabla_Xu| - \nabla_Xv\nabla u \right) - \frac{u^{p(x)}\ln u}{f(v)} |\nabla_Xv|^{p(x)-2}\nabla_Xv\nabla_Xp(x)\\ & = L_1(u,v) \ \ + \ \ L_2(u,v) \ \ + \ \ L_3(u,v) \ \ + \ \ L_4(u,v), \end{align*} where \begin{align*} L_1(u,v)&:= p(x)\left(\frac{|\nabla_Xu|^{p(x)}}{p(x)} +\frac{p(x)-1}{p(x)} \left[\frac{(u|\nabla_Xv|)^{p(x)-1}}{f(v)} \right]^{\frac{p(x)}{p(x)-1}}\right)\\ & \hspace{3cm} -p(x)\frac{u^{p(x)-1}}{f(v)} |\nabla_Xv|^{p(x)-1}|\nabla_Xu |, \end{align*} \begin{align*} L_2(u,v)&:= \frac{u^{p(x)}f'(v)}{(f(v))^2} |\nabla_Xv|^{p(x)} - (p(x)-1 )\left[\frac{(u|\nabla_Xv|)^{p(x)-1}}{f(v)} \right]^{\frac{p(x)}{p(x)-1}}, \end{align*} \begin{align*} L_3(u,v)&:= p(x)\frac{u^{p(x)-1}}{f(v)} \left(|\nabla_Xv| |\nabla_Xu| - \nabla_Xv\nabla u \right), \end{align*} \begin{align*} L_4(u,v)&:= - \frac{u^{p(x)}\ln u}{f(v)} |\nabla_Xv|^{p(x)-2}\nabla_Xv\nabla_Xp(x). \end{align*} Applying the Young's inequality \eqref{Y1}, choosing $s=|\nabla_X u|$ and $\disp t= \frac{(u|\nabla_Xv|)^{p(x)-1}}{f(v)}$, we obtain \begin{align*} p(x)\frac{u^{p(x)-1}}{f(v)} & |\nabla_Xv|^{p(x)-1}|\nabla_Xu | \\ & \le p(x)\left(\frac{|\nabla_Xu|^{p(x)}}{p(x)} +\frac{p(x)-1}{p(x)} \left[\frac{(u|\nabla_Xv|)^{p(x)-1}}{f(v)} \right]^{\frac{p(x)}{p(x)-1}}\right), \end{align*} implying that $L_1(u,v)\ge 0$ with equality if and only if there is equality in the Young's inequality, that is, $\Phi=\Psi^{\frac{1}{p(x)-1}}$. Applying the assumption $f'(y)\ge (p(x)-1)\left[f(y)^{\frac{p(x)-2}{p(x)-1}}\right]$, we have \begin{align*} \frac{u^{p(x)}f'(v)}{(f(v))^2} |\nabla_Xv|^{p(x)} \ge (p(x)-1 )\left[\frac{(u|\nabla_Xv|)^{p(x)-1}}{f(v)} \right]^{\frac{p(x)}{p(x)-1}}, \end{align*} which implies that $L_2(u,v)\ge0$ with equality if and only if\\ $f'(y)= (p(x)-1)\left[f(y)^{\frac{p(x)-2}{p(x)-1}}\right]$. Clearly, $L_3(u,v)\ge 0$ by reverting to the inequality $|\nabla_Xv| |\nabla_Xu| - \nabla_Xv\nabla_X u\ge 0$. By the virtue of the assumption that $\nabla_Xv\nabla_Xp(x)\equiv 0$, we have also $L_4(u,v)\equiv 0$. Putting all of these together we obtain that $L(u,v)\ge 0$ a.e. in $\Omega$. 
Observe that $L(u,v)=0$ holds if and only if \begin{align}\label{a23} |\nabla_Xu| = \frac{u}{f(v)^{\frac{1}{p(x)-1}}} |\nabla_Xv|, \end{align} \begin{align}\label{a24} f'(y)= (p(x)-1)\left[f(y)^{\frac{p(x)-2}{p(x)-1}}\right], \end{align} and \begin{align}\label{a25} |\nabla_Xv| |\nabla_Xu| = \nabla_Xv\nabla_X u. \end{align} Upon solving for \eqref{a24} we get $f(v)=v^{p(x)-1}$. If $\nabla_X(u/v)=0$ then there exists a positive constant, say $\alpha>0$ such that $u=\alpha v$, then equality \eqref{a25} holds. Combining $f(v)=v^{p(x)-1}$ and $u=\alpha v$, then \eqref{a23} holds. We can now conclude that $L(u,v)=0$ implies $\nabla_X(u/v)=0$. Indeed, if $L(u,v)(x_0)=0$, $x_0\in \Omega$, there are two cases to consider, namely; the case $u(x_0)\neq 0$ and the case $u(x_0)=0$.\\ (a)\ If $u(x_0)\neq 0$, then $L(u,v)=0$ for all $x_0\in \Omega$, that is, $L_1(u,v)=0$, $L_2(u,v)=0$ and $L_3(u,v)=0$, and we conclude that \eqref{a23}, \eqref{a24} and \eqref{a25} hold, which when combined gives $u=\alpha v$ a.e. for some constant $\alpha>0$ and $\nabla_X(u/v)=0$ for all $x_0\in \Omega$.\\ (b)\ If $u(x_0) = 0$, we denote $\Omega^*=\{x\in \Omega: u(x)=0\}$, and suppose $\Omega^*\neq \Omega$. Here $u(x_0)=\alpha v(x_0)$ implies $\alpha=0$ since $u(x_0)=0$ and $v(x_0)>0$. By the first case (Case (a)) we know that $u(x)=\alpha v(x)$ and $u(x)\neq 0$ for all $x\in \Omega\setminus\Omega^*$, then it is impossible that $\alpha=0$. This contradiction implies that $\Omega^*=\Omega$. \qed \begin{remark} Theorem \ref{Pic-thm} generalizes many known results. For examples: \begin{enumerate} \item If $M=\mathbb{R}^n$ and $f(v)=v^{p(x)-1}$ in \eqref{e23} and \eqref{e24}. Then, we obtain the variable exponent Picone identity of Allegretto \cite{Al3} and Feng and Han \cite{FH}. \item If $p(x)=p$, $f(v)=v^{p-1}$ in \eqref{e23} and \eqref{e24}, then our result covers Allegretto and Huang's \cite{AH} ($M=\mathbb{R}^n$), Niu, Zhang and Wang \cite{NZW} (Heisenberg group), Ruzhansky, Sabitbek and Suragan \cite{RSS1} (for general vector fields). \item If we allow $p(x)=p$ in \eqref{e23} and \eqref{e24}, we then recover Bal \cite{Ba} in the Euclidean setting and Suragan and Yessirkegenov \cite{SY} in the setting of stratified Lie groups. \end{enumerate} \end{remark} \section{Applications}\label{sec3} In order to discuss generalized solutions, we need some concepts from the theory of variable Lebesgue and Sobolev spaces. Detailed description of these spaces can be found in \cite{CFRW,DHHR,FZ}. \subsection*{Variable Lebesgue spaces} Let $\Omega\subset M$ be an open domain and $E(\Omega)$ denotes the set of all equivalence classes of measurable real-valued functions defined on $\Omega$ being equal almost everywhere. 
\begin{definition} The variable exponent Lebesgue space $L^{p(\cdot)}(\Omega)$ is defined as $$L^{p(\cdot)}(\Omega) = \left\{u\in E(\Omega) : \int_\Omega|u(x)|^{p(\cdot)} dx < \infty\right\}$$ equipped with the (Luxemburg) norm $$\|u\|_{L^{p(\cdot)}(\Omega)}= \inf\left\{t>0: \int_\Omega \left| \frac{u(x)}{t}\right|^{p(x)} dx \le 1 \right\}.$$ The variable exponent Sobolev space $W^{1, p(\cdot)}(\Omega)$ is defined as $$W^{1, p(\cdot)}(\Omega) = \{u\in L^{p(\cdot)}(\Omega): \nabla_Xu \in L^{p(\cdot)}(\Omega)\}$$ equipped with the norm $$\|u\|_{W^{1, p(\cdot)}(\Omega)}= \|u\|_{L^{p(\cdot)}(\Omega)} + \|\nabla_Xu\|_{L^{p(\cdot)}(\Omega)}.$$ \end{definition} Denoted by $W_0^{1, p(\cdot)}(\Omega)$ the closure of $C^\infty_0(\Omega)$ in $W^{1, p(\cdot)}(\Omega)$ with respect to the norm $$\|u\|_{W_0^{1, p(\cdot)}(\Omega)}= \|\nabla_Xu\|_{L^{p(\cdot)}(\Omega)}.$$ It can be clearly seen that $L^{p(\cdot)}(\Omega)$, $W^{1, p(\cdot)}(\Omega)$ and $W_0^{1, p(\cdot)}(\Omega)$ are all separable and reflexive Banach spaces in their respectful norms if $1<\inf p(x)<\sup p(x)<\infty$ in $\Omega$. \subsection*{Eigenvalue problem for $p(x)$-Laplacian} Let $\Omega\subset M$ be a bounded domain with smooth boundary $\partial\Omega$. We suppose a continuous function $p:\bar{\Omega}\to \mathbb{R}^+$, $p(x)>1$ is such that $$1<p^-:=ess\inf_{x\in\bar{\Omega}} p(x)\le p(x) \le p^+:= ess\sup_{x\in\bar{\Omega}} p(x)<\infty.$$ Now consider the indefinite weighted Dirichlet eigenvalue problem for $p(x)$-Laplacian \begin{align}\label{e31} \left. \begin{array}{ll} -\nabla^*_X(|\nabla_X u|^{p(x)-2}\nabla_Xu) = \lambda g(x)|u|^{p(x)-2} u, & \ x \in \Omega,\\ \ \ \ u>0, &\ x \in \Omega,\\ \ \ \ u=0, &\ x \in \partial\Omega, \end{array} \right. \end{align} where $\Omega$ is as defined above, $g(x)$ is a positive bounded function and $p:\bar{\Omega}\to (1,\infty)$ is a continuous function for $x\in \bar{\Omega}$. \begin{definition} Let $\lambda\in \mathbb{R}^+$ and $u\in W_0^{1,p(x)}(\Omega)$, the pair $(u,\lambda)$ is called a solution of \eqref{e31} if \begin{align}\label{e32} \int_\Omega|\nabla_Xu|^{p(x)-2}\langle\nabla_Xu,\nabla_X\phi\rangle dx - \lambda \int_\Omega g(x)|u|^{p(x)-2} u\phi dx =0 \end{align} for all $\phi \in W_0^{1,p(x)}(\Omega)$. If $(u,\lambda)$ is a solution of \eqref{e31}, we call $\lambda$ an eigenvalue, and $u$ an eigenfunction corresponding to $\lambda$. Similarly, by the sup-solution and sub-solution of \eqref{e31}, we mean the pair $(u,\lambda)$ such that \begin{align}\label{e33} \int_\Omega|\nabla_Xu|^{p(x)-2}\langle\nabla_Xu,\nabla_X\phi\rangle dx - \lambda \int_\Omega g(x)|u|^{p(x)-2} u\phi dx \ge 0 \end{align} and \begin{align}\label{e34} \int_\Omega|\nabla_Xu|^{p(x)-2}\langle\nabla_Xu,\nabla_X\phi\rangle dx - \lambda \int_\Omega g(x)|u|^{p(x)-2} u\phi dx \le 0 \end{align} for all $\phi \in W_0^{1,p(x)}(\Omega)$, respectively. \end{definition} Denote the principal eigenvalue of \eqref{e31} (the least positive eigenvalue) by $\lambda_{1,p}:= \lambda_{1,p}(\Omega)$, clearly for the solution $(u,\lambda)$ and $u\neq 0$, we get $$\lambda_{1,p} =\inf_{u\in \in W_0^{1,p(x)}(\Omega)\setminus\{0\}} \frac{\int_\Omega|\nabla_Xu|^{p(x)}dx}{\int_\Omega g(x)|u|^{p(x)}dx}.$$ In the case $p(x)=p$(constant), it is well known that $\lambda_{1,p}(\Omega)$ given above is the first eigenvalue of $p$-Laplacian (with $g(x)=1$, $\Omega\subset\mathbb{R}^n$), which must be positive. But this is not true for general $p(x)$ in the sense that $\lambda_{1,p}$ may be zero \cite{FZ}. 
Nevertheless, Fan, Zhang and Zhao in \cite{FZZ} have proved the existence of infinitely many eigenvalues $p(x)$-Laplacian and established sufficient conditions for $\lambda_{1,p}(\Omega)>0$ (see also Franzina and Lindqvist \cite{FL}). Motivated by \cite{FZZ}, we are able to assume the existence of $\lambda_{1,p}>0$ in the rest of this section. \subsection{Variable exponent Hardy type inequality} \begin{proposition}\label{Pro33} Let $\Omega\subset M$ be an open bounded domain. Suppose that a function $v\in C^\infty_0(\Omega)$ satisfies $\nabla_Xv\nabla_Xp(x)\equiv 0$ and \begin{align}\label{e35} \left. \begin{array}{ll} -\mathscr{L}_pv= \mu a(x)f(v) & \ \ \text{in}\ \Omega,\\ \ \ \ u>0 &\ \ \text{in}\ \Omega,\\ \ \ \ u=0 &\ \ \text{on}\ \partial\Omega, \end{array} \right. \end{align} where $f:\mathbb{R}^+\to\mathbb{R}^+$ is $C^1$ and satisfies $f'(y)\ge(p(x)-1)\left[f(y)^{\frac{p(x)-2}{p(x)-1}} \right]$, $\mu>0$ is a constant, $a(x)$ is a positive continuous function. Then there holds \begin{align*} \int_\Omega|\nabla_Xu|^{p(x)}dx\ge \mu \int_\Omega a(x)|u|^{p(x)}dx \end{align*} for any $0\le u\in C^1_0(\Omega)$. \end{proposition} \proof Since $v>0$ and solves \eqref{e35} in $\Omega$, that is, $v\in W_0^{1,p(x)}(\Omega)$. For a given a $\epsilon>0$, we set $\phi=\frac{|u|^{p(x)}}{f(v+\epsilon)}$. By the definition of solution \eqref{e32} we compute \begin{align*} \mu \int_\Omega a(x)f(v) \frac{|u|^{p(x)}}{f(v+\epsilon)} dx &\le \int_\Omega |\nabla_Xv|^{p-2}\nabla_X v \nabla_X\left(\frac{|u|^{p(x)}}{f(v+\epsilon)}\right) dx\\ & = \int_\Omega \left[|\nabla_X u|^{p(x)}-R(u,v+\epsilon)\right]dx\\ & = \int_\Omega|\nabla_X u|^{p(x)}dx -\int_\Omega L(u,v+\epsilon)dx. \end{align*} Taking the limit as $\epsilon \to 0^+$, applying Fatou's Lemma and Lebesgue dominated convergence theorem respectively on the left hand side and right hand side of the last expression, we obtain \begin{align*} 0\le \int_\Omega |\nabla_X u|^{p(x)}- \mu \int_\Omega a(x)|u|^{p(x)}dx -\int_\Omega L(u,v)dx. \end{align*} Therefore we have \begin{align*} 0\le \int_\Omega|\nabla_X u|^{p(x)}dx- \mu \int_\Omega a(x)|u|^{p(x)} dx \end{align*} since $L(u,v)\ge 0$ almost everywhere in $\Omega$. This therefore completes the proof. \qed \begin{corollary} Suppose there exists $\lambda>0$ and a strictly positive sup-solution of \eqref{e31}. Then \begin{align}\label{e36} \int_\Omega|\nabla_X u|^{p(x)}dx \ge \lambda \int_\Omega g(x)|u|^{p(x)} dx \end{align} for all $u\in W_0^{1,p(x)}(\Omega)$. \end{corollary} \proof Applying Proposition \ref{Pro33} by setting $a(x)\equiv g(x)$, $\mu =\lambda$ and $f(v)=|v|^{p(x)-2}v$ , then one arrives at the conclusion \eqref{e35} at once. \qed \subsection{Principal frequency and domain monotonicity} \begin{proposition}\label{Pro35} Let there exists $\lambda$ and a strictly positive sup-solution $v \in W_0^{1,p(x)}(\Omega)$ of \eqref{e31}. Then we have \begin{align}\label{e37} \int_\Omega|\nabla_X u|^{p(x)}dx \ge \lambda \int_\Omega g(x)|u|^{p(x)} dx \end{align} and \begin{align}\label{e38} \lambda_{1,p}(\Omega)\ge \lambda \end{align} for all $u \in W_0^{1,p(x)}(\Omega)$. \end{proposition} \proof Suppose there exists $\lambda>0$, since $v$ is strictly positive sup-solution of \eqref{e31} in $\Omega$, we have \begin{align}\label{e39} \int_\Omega|\nabla_Xv|^{p(x)-2}\langle\nabla_Xv,\nabla_X\phi\rangle dx \ge \lambda \int_\Omega g(x)|v|^{p(x)-2} v\phi dx \end{align} for all $\phi \in W_0^{1,p(x)}(\Omega)$. 
For a given small $\epsilon>0$, setting $\phi=\frac{|u|^{p(x)}}{(v+\epsilon)^{p(x)-1}}$ into \eqref{e39}. Then, following the proof of the Proposition \ref{Pro33}, we arrive at \eqref{e37}. Now, let $u_1 \in W_0^{1,p(x)}(\Omega)$ be the eigenfunction corresponding to the principal eigenvalue $\lambda_{1,p}(\Omega)$. We have \begin{align}\label{e310} \int_\Omega|\nabla_Xu_1|^{p(x)-2}\langle\nabla_Xu_1,\nabla_X\phi\rangle dx = \lambda_{1,p} \int_\Omega g(x)|u_1|^{p(x)-2} u_1\phi dx \end{align} for any $ \phi \in W_0^{1,p(x)}(\Omega)$. Choosing $\epsilon>0$ (small) we can define via Picone identity that \begin{align}\label{e311} 0\le L(u_1,v+\epsilon)=R(u_1,v+\epsilon), \ \ v>0. \end{align} Integrating \eqref{e311} over $\Omega$ and then using \eqref{e39} with $\disp \phi=\frac{|u_1|^p}{f(v+\epsilon)}$ and \eqref{e310} with $\phi=u_1$, we obtain \begin{align*} 0& \le \int_\Omega L(u_1,v+\epsilon)dx = \int_\Omega R(u_1,v+\epsilon)dx\\ &=\int_\Omega|\nabla_X u_1|^{p(x)}dx - \int_\Omega \nabla_X\left(\frac{|u_1|^p}{f(v+\epsilon)}\right) |\nabla_X v|^{p(x)-2}\nabla_X vdx\\ &=\int_\Omega|\nabla_X u_1|^{p(x)}dx + \int_\Omega \frac{|u_1|^p}{f(v+\epsilon)} \nabla_X^*( |\nabla_X v|^{p(x)-2}\nabla_X) vdx\\ &\le \lambda_{1,p}(\Omega)\int_\Omega g(x)|u_1|^{p(x)}dx - \lambda\int_\Omega g(x)\frac{|u_1|^p}{f(v+\epsilon)}|v|^{p(x)-2}vdx. \end{align*} As usual, taking the limit as $\epsilon \to 0^+$, applying Fatou's Lemma and Lebesgue dominated convergence theorem, setting $f(v)=v^{p(x)-1}$, we arrive at \[0\le (\lambda_{1,p}(\Omega)-\lambda)\int_\Omega g(x)|u_1|^pdx,\] which implies $\lambda_{1,p}(\Omega)\ge\lambda$. \qed As a corollary to the last proposition, we show strict monotonicity of the principal eigenvalue with respect to domain monotonicity. \begin{corollary} Let $\lambda_{1,p}(\Omega)>0$ be the principal eigenvalue of $\mathscr{L}_p$ on $\Omega$. Suppose $\Omega_1\subset\Omega_2\subset\Omega$ and $\Omega_1\neq\Omega_2$. Then \begin{align*} \lambda_{1,p}(\Omega_1)> \lambda_{1,p}(\Omega_2) \end{align*} if they both exist. \end{corollary} \proof Let $u_1$ and $u_2$ be positive eigenfunctions corresponding to $\lambda_{1,p}(\Omega_1)$ and $\lambda_{1,p}(\Omega_2)$, respectively. Clearly with $\phi\in C^\infty_0(\Omega)$, we have by Picone identity that \[0\le \int_\Omega L(\phi,u_2)dx = \int_\Omega R(\phi,u_2)dx.\] Replacing $\phi$ by $u_1$ and applying Proposition \ref{Pro35} we have \[\lambda_{1,p}(\Omega_1)- \lambda_{1,p}(\Omega_2)\ge 0.\] If we have $\lambda_{1,p}(\Omega_1)= \lambda_{1,p}(\Omega_2)$, then $L(u_1,u_2)=0$ a.e. in $\Omega$ and thus $u_1=\alpha u_2$ for some constant $\alpha>0$. However, this is impossible when $\Omega_1\subset\Omega_2$ and $\Omega_1\neq\Omega_2$. \qed Next is the uniqueness and simplicity results. \subsection{Uniqueness and simplicity of $\lambda_{1,p}(\Omega)$} \begin{proposition}\label{Pro37} Let there exists $\lambda>0$ and a strictly positive solution $v \in W_0^{1,p(x)}(\Omega)$ of \eqref{e31}. Then we have \[\lambda_{1,p}(\Omega)= \lambda.\] Moreover, let $u_1$ be the corresponding eigenfunction to $\lambda_{1,p}(\Omega)$. Then any other $u \in W_0^{1,p(x)}(\Omega)$ corresponding to $\lambda_{1,p}(\Omega)$ is a constant multiple of $u_1$. \end{proposition} \proof Let $u_1\in W_0^{1,p(x)}(\Omega)$ be the eigenfunction corresponding to $\lambda_{1,p}(\Omega)$ and $u$ be a positive solution of \eqref{e31}. 
Applying the Picone identity with $\epsilon>0$ (small) as follows: \begin{align*} 0&\le \int_\Omega L(u,u_1+\epsilon) dx\\ &=\int_\Omega|\nabla_X u|^{p(x)}dx + \int_\Omega \frac{u^{p(x)}}{f(u_1+\epsilon)} \nabla_X^*( |\nabla_X u_1|^{p(x)-2}\nabla_X) u_1dx\\ &= \lambda\int_\Omega g(x)|u|^{p(x)}dx - \lambda_{1,p}(\Omega)\int_\Omega g(x)\frac{u^{p(x)}}{(u_1+\epsilon)^{p(x)-1}}|u_1|^{p(x)-2}u_1dx, \end{align*} where we have set $f(u_1+\epsilon)=(u_1+\epsilon)^{p(x)-1}$. Taking the limit as $\epsilon \to 0^+$ and applying Fatou's Lemma and the Lebesgue dominated convergence theorem, we obtain \[\lambda_{1,p}(\Omega)\le\lambda.\] On the other hand, by Proposition \ref{Pro35} we have \[\lambda_{1,p}(\Omega)\ge\lambda.\] This therefore implies that $\lambda_{1,p}(\Omega)=\lambda$, which proves the uniqueness part. Now by the hypothesis of the theorem we have for $\phi,\psi \in C^\infty_0(\Omega)$ that \begin{align}\label{e312} \int_\Omega|\nabla_Xu|^{p(x)-2}\langle\nabla_Xu,\nabla_X\phi\rangle dx = \lambda_{1,p} \int_\Omega g(x)|u|^{p(x)-2} u\phi dx, \end{align} \begin{align}\label{e313} \int_\Omega|\nabla_Xu_1|^{p(x)-2}\langle\nabla_Xu_1,\nabla_X\psi\rangle dx = \lambda_{1,p} \int_\Omega g(x)|u_1|^{p(x)-2} u_1\psi dx. \end{align} Inserting $\phi=u$ and $\psi=\frac{|u|^{p(x)}}{(u_1+\epsilon)^{p(x)-1}}$ into \eqref{e312} and \eqref{e313}, respectively, and sending $\epsilon \to 0^+$, we arrive at \begin{align*} \int_\Omega|\nabla_Xu|^{p(x)} dx& = \lambda_{1,p}\int_\Omega g(x)|u|^{p(x)} dx \\ &=\int_\Omega|\nabla_Xu_1|^{p(x)-2} \nabla_Xu_1 \nabla_X \Big(\frac{|u|^{p(x)}}{u_1^{p(x)-1}}\Big)dx, \end{align*} which implies (by choosing $f(u_1)=u_1^{p(x)-1}$) \begin{align*} \int_\Omega R(u,u_1)dx = \int_\Omega L(u,u_1)dx =0 \end{align*} and consequently, $\nabla_X(u/u_1)=0$, i.e., $u=\alpha u_1$ for some constant $\alpha>0$. \qed The next proposition shows the sign-changing nature of any eigenfunction associated with an eigenvalue other than $\lambda_{1,p}(\Omega)$. \begin{proposition} Any eigenfunction $v$ corresponding to an eigenvalue $\lambda\neq \lambda_{1,p}(\Omega)$ changes sign. \end{proposition} \proof By contradiction we suppose that $v>0$ does not change sign (the case $v\le 0$ can be handled similarly). Let $\phi>0$ be an eigenfunction corresponding to $\lambda_{1,p}(\Omega)$. Choosing any $\epsilon>0$ as before and applying the Picone identity, we have \begin{align*} 0&\le \int_\Omega L(\phi,v+\epsilon) dx\\ & =\int_\Omega\left[|\nabla_X \phi|^{p(x)} -\nabla_X\Big(\frac{\phi^{p(x)}}{f(v+\epsilon)}\Big)|\nabla_Xv|^{p(x)-2}\nabla_X v \right]dx\\ & =\int_\Omega |\nabla_X \phi|^{p(x)}dx + \int_\Omega \frac{\phi^{p(x)}}{f(v+\epsilon)} \mathscr{L}_pv dx. \end{align*} Since $\int_\Omega|\nabla_X\phi|^{p(x)}dx=\lambda_{1,p}(\Omega)\int_\Omega g(x)|\phi|^{p(x)}dx$ and $\frac{\phi^{p(x)}}{(v+\epsilon)^{p(x)-1}}$ is admissible in the weak formulation of \eqref{e31} satisfied by $(v,\lambda)$, we arrive at \[0\le\lambda_{1,p}(\Omega)\int_\Omega g(x)|\phi|^{p(x)} dx - \lambda\int_\Omega \frac{\phi^{p(x)}}{f(v+\epsilon)} g(x) |v|^{p(x)-2}v dx.\] Setting $f(v+\epsilon)=(v+\epsilon)^{p(x)-1}$ and letting $\epsilon \to 0^+$ in the last inequality as usual, we obtain \[0\le (\lambda_{1,p}-\lambda)\int_\Omega g(x) \phi^{p(x)} dx,\] which is a contradiction: since $\int_\Omega g(x)\phi^{p(x)}dx=1>0$, this forces $\lambda\le\lambda_{1,p}(\Omega)$, which is impossible for an eigenvalue $\lambda\neq\lambda_{1,p}(\Omega)$ because $\lambda_{1,p}(\Omega)$ is the least eigenvalue. Thus $v$ must change sign. \qed \section{Variable exponent Caccioppoli estimates for general vector fields} In this section, the Picone identity is applied to prove some variable exponent Caccioppoli estimates for general vector fields.
Recall that $$1<p^-:=\operatorname*{ess\,inf}_{x\in\bar{\Omega}} p(x)\le p(x) \le p^+:= \operatorname*{ess\,sup}_{x\in\bar{\Omega}} p(x)<\infty.$$ Without risk of confusion, and for the sake of simplicity, we write $p:=p(x)$ and $q:=q(x)$. We also denote $q^-:= \operatorname*{ess\,inf}_{x\in\bar{\Omega}} q(x)$ and $q^+:= \operatorname*{ess\,sup}_{x\in\bar{\Omega}} q(x)$. \begin{theorem}\label{thm41} Let $v$ be a positive sub-solution of \eqref{e31} in $\Omega\subset M$. Then for every fixed $q(x)>p(x)-1$, $p(x)>1$, $\nabla_Xv\nabla_Xp(x)=0$, $\nabla_Xv\nabla_Xq(x)=0$ and $\lambda \in \mathbb{R}$, we have \begin{align} \int_\Omega v^{q-p}\phi^p|\nabla_Xv|^pdx \le C^{p^+}_{p,q}\int_\Omega v^q |\nabla_X\phi|^pdx + C_{\lambda,p,q} \int_\Omega g(x)v^q\phi^pdx \end{align} for every nonnegative function $\phi\in C^\infty_0(\Omega)$, where\\ $$C^{p^+}_{p,q}:=\left( \frac{p^+}{q^--p^++1}\right)^{p^+} \ \ \ \text{and}\ \ \ C_{\lambda,p,q}:= \left( \frac{\lambda p^+}{q^--p^++1}\right).$$ \end{theorem} \proof Let $u=v^{q/p}\phi$, where $\phi$ is a nonnegative test function and $v$ is a sub-solution of \eqref{e31}. We compute \begin{align*} \nabla_X\left(v^{ q/p}\phi\right) &= \phi\nabla_X(v^{q/p})+v^{q/p}\nabla_X\phi\\ & = \phi v^{q/p}\ln v\left(\frac{\nabla_Xq}{p}-\frac{q\nabla_Xp}{p^2}\right)+\frac{q}{p}v^{\frac{q-p}{p}}\phi\nabla_Xv + v^{q/p}\nabla_X\phi \end{align*} so that \begin{align*} \langle\nabla_Xv, \nabla_X\left(v^{ q/p}\phi\right)\rangle & = \phi v^{q/p}\ln v\left(\frac{\nabla_Xq}{p}-\frac{q\nabla_Xp}{p^2}\right)\nabla_Xv \\ & \hspace{1cm} +\frac{q}{p}v^{\frac{q-p}{p}}\phi|\nabla_Xv|^2 + v^{q/p}\langle\nabla_X\phi,\nabla_Xv\rangle. \end{align*} Now using the fact that $v$ is a sub-solution of \eqref{e31} and the condition that $\nabla_Xv\nabla_Xp(x)\equiv 0$ and $\nabla_Xv\nabla_Xq(x)\equiv 0$ in the Picone identity $L(u,v)\ge 0$, we have \begin{align}\label{e42} 0 & \le \int_\Omega L(v^{q/p}\phi,v)dx\nonumber\\ & = \int_\Omega |\nabla_X\left(v^{ q/p}\phi\right)|^pdx +\int_\Omega \frac{f'(v)}{(f(v))^2} |v^{q/p}|^p|\phi\nabla_Xv|^p dx\nonumber \\ & \ \ \ - \int_\Omega q \frac{|v^{q/p}\phi|^{p-1}}{f(v)}\phi v^{\frac{q-p}{p}} |\nabla_Xv|^pdx\\ & \ \ \ - \int_\Omega p \frac{|v^{q/p}\phi|^{p-1}}{f(v)} v^{q/p} |\nabla_Xv|^{p-2} \langle\nabla_X\phi,\nabla_Xv\rangle dx.\nonumber \end{align} Considering the condition $f'(v)\ge (p(x)-1)\left[f(v)^{\frac{p(x)-2}{p(x)-1}}\right]$, we can then choose $f(v)=v^{p(x)-1}$. Then \eqref{e42} reads \begin{align}\label{e43} 0 \le & \int_\Omega |\nabla_X\left(v^{ q/p}\phi\right)|^pdx +\int_\Omega (p-1) v^{q-p} |\phi\nabla_Xv|^p dx - \int_\Omega q v^{q-p} |\phi\nabla_Xv|^pdx \nonumber \\ & - \int_\Omega p |v^{\frac{q-p}{p}}\phi|^{p-1} v^{q/p} |\nabla_Xv|^{p-2} \langle\nabla_X\phi,\nabla_Xv\rangle dx. \end{align} Using the $\varepsilon(x)$-modified version of Young's inequality in Lemma \ref{Y2} with $\Phi=v^{q/p}|\nabla_X\phi|$ and $\Psi=v^{\frac{q-p}{p}}\phi|\nabla_Xv|$, we can estimate the last term of \eqref{e43} as follows \begin{align}\label{e44} - \int_\Omega p |v^{\frac{q-p}{p}}\phi|^{p-1} & v^{q/p} |\nabla_Xv|^{p-2} \langle\nabla_X\phi,\nabla_Xv\rangle dx \nonumber \\ & \le \int_\Omega p |v^{\frac{q-p}{p}}\phi|^{p-1} |\nabla_Xv|^{p-1} v^{q/p} |\nabla_X\phi| dx \nonumber \\ &\le \int_\Omega \varepsilon^{1-p}v^q |\nabla_X\phi|^pdx +\int_\Omega \varepsilon(p-1) v^{q-p} |\phi\nabla_Xv|^pdx, \end{align} where $\varepsilon(x)$ is a continuous bounded function on $\Omega$, which will be chosen later.
Substituting \eqref{e44} into \eqref{e43} we get \begin{align*} 0 & \le \int_\Omega |\nabla_X\left(v^{ q/p}\phi\right)|^pdx - \int_\Omega [q-p+1-\varepsilon(p-1)] v^{q-p} |\phi\nabla_Xv|^pdx\\ & \hspace{1cm} + \int_\Omega \varepsilon^{1-p}v^q |\nabla_X\phi|^pdx \\ & \le \lambda \int_\Omega g(x) |v^{ q/p}\phi|^pdx - \mathcal{C}^1_{\epsilon,p,q} \int_\Omega v^{q-p} |\phi\nabla_Xv|^pdx + \mathcal{C}^2_{\epsilon,p} \int_\Omega v^q |\nabla_X\phi|^pdx, \end{align*} where we have used $\disp \int_\Omega|\nabla_Xu|^{p(x)}dx \le \lambda \int_\Omega g(x)|u|^{p(x)}dx$ for the sub-solution of \eqref{e31}. Here \begin{align*} \mathcal{C}^1_{\epsilon,p,q} := q^--p^++1-\bar{\varepsilon}(p^+-1) \ \ \text{and}\ \ \mathcal{C}^2_{\epsilon,p} : = \bar{\varepsilon}^{1-p^+}, \end{align*} where $\bar{\varepsilon}:=\sup_\Omega\varepsilon(x)$. \noindent Rearranging the last inequality we arrive at \begin{align*} \int_\Omega v^{q-p} |\phi\nabla_Xv|^pdx \le \frac{ \mathcal{C}^2_{\epsilon,p}}{\mathcal{C}^1_{\epsilon,p,q} } \int_\Omega v^q |\nabla_X\phi|^pdx + \frac{\lambda}{\mathcal{C}^1_{\epsilon,p,q} } \int_\Omega g(x) |v^{ q/p}\phi|^pdx. \end{align*} We can now choose a suitable number $\bar{\varepsilon}$ as $\disp \bar{\varepsilon}:= \frac{q^--p^++1}{p^+}$ and then compute \begin{align*} \frac{1}{\mathcal{C}^1_{\epsilon,p,q} } & := \frac{1}{q^--p^++1-\bar{\varepsilon}(p^+-1)} = \frac{p^+}{q^--p^++1}, \quad\text{so that}\quad \frac{\lambda}{\mathcal{C}^1_{\epsilon,p,q}} = \frac{\lambda p^+}{q^--p^++1}=C_{\lambda,p,q}, \\ \frac{ \mathcal{C}^2_{\epsilon,p}}{\mathcal{C}^1_{\epsilon,p,q} } &: = \frac{ \bar{\varepsilon}^{1-p^+}}{q^--p^++1-\bar{\varepsilon}(p^+-1)} = \left( \frac{p^+}{q^--p^++1}\right)^{p^+} = C^{p^+}_{p,q}. \end{align*} The proof is therefore complete. \qed \begin{corollary} Let $v$ be a positive sub-solution of \eqref{e31} in $\Omega$. Suppose that $g(x)\equiv 0$ or $\lambda=0$, and that $p(x)=q(x)$ in $\Omega$. Then we have \begin{align*} \int_\Omega \phi^{p(x)}|\nabla_Xv|^{p(x)}dx \le (p^+)^{p^+} \int_\Omega v^{p(x)} |\nabla_X\phi|^{p(x)} dx \end{align*} for every nonnegative function $\phi \in C^\infty_0(\Omega)$. \end{corollary}
1,108,101,563,075
arxiv
\section{Introduction} Galaxies are not distributed randomly in the cosmic web \citep{Joeveer78,Zeldovich82,Shandarin83,Einasto84,Bond96,Aragon10}, but are arranged in filaments and sheets surrounding cosmic voids and connecting clusters of galaxies \citep{Pimbblet04,Aragon10,Jasche10,Tempel14b,Cautun14}. The properties of filaments affect the abundance, shape, and evolution of galaxies \citep{Aragon07b,Hahn07a,Hahn07,Libeskind12,Libeskind13,Cautun13,Tempel13,Tempel13b}, and depend on the properties of the initial density fluctuations generated in the very early Universe. Therefore probes of the large-scale filaments enable us to test current physical and cosmological theories. Probes of the filaments require efficient algorithms for finding filaments. Several sophisticated algorithms for identifying filaments, both in the three dimensions and two dimensions, have been developed. Generally, there are three types of algorithms to identify a filament based on: (1) the distribution of galaxies or clusters of galaxies \citep{Peebles80,Peacock98,Novikov06,Aragon07a,Sousbie11,Shandarin12,Cautun13}; (2) the gravitational tidal tensor -- the Hessian of the gravitational potential \citep{Lee08,Forero09,Bond10a,Bond10b,Wang12}; and (3) the velocity field induced from the dynamics of the underlying density field \citep{Hahn07,Shandarin11,Hoffman12,Libeskind12,Libeskind15,Falco14}. Recently, \cite{Falco14} found that the line-of-sight velocities $v_{\rm{los}}$ of the galaxies around a cluster of galaxies with the projected distances to the cluster center satisfying $2.5r_{\rm{vir}}\lesssim r\lesssim 8r_{\rm{vir}}$, where $r_{\rm{vir}}$ is the virial radius, are as a function of $r$, if the galaxies are arranged in one filament or sheet, since these galaxies are gravitationally affected by the cluster. Therefore they plotted the $(r,v_{\rm{los}})$ map for the galaxies, and identified the filamentary structures on the map as filaments or sheets (see the paper of Falco et al. 2014 for details). They proved that the line-of-sight velocities can be probes of filaments and sheets. In this paper, we develop a new method of identifying filaments using the orientations of galaxies. \cite{Tempel13b,Tempel15} found that the spin axes of bright spiral galaxies have a weak tendency to be aligned parallel to filaments, while the major axes of elliptical/S0 galaxies are significantly aligned with their host filaments. Therefore the galaxies alignment may be an additional probe of filaments. We describe the method, and apply it to the galaxies assembled around the Coma cluster in section 2. The results are compared with the detected standard filaments using the method of \cite{Falco14} in this section, and discussed in section 3. In section 4, we summarize the work. We adopt the WMAP7 cosmological parameters: $\Omega_{\rm{M}}=0.27$, $\Omega_{\rm{\Lambda}}=0.73$, and $H_0=71\ \rm km\ s^{-1}\ Mpc^{-1}$. \section{Identification of Filaments by Galaxies Alignment} \subsection{Galaxies in a filament} For the galaxies arranged in a filament, the major axes of red galaxies and spin axes of blue galaxies are related to their host filaments, e.g., the major axes of the elliptical/S0 (red) galaxies are significantly aligned with the orientations of the filament \citep{Tempel15}. Therefore, according to the distribution of orientations of the projected red galaxies, we may identify the filaments in two-dimensional (2D) images. 
In order to quantitatively indicate the orientations of the projected ellipses, we define the east direction in the celestial coordinate system as the $x$-axis and the north direction as the $y$-axis, and define the position angle of a red galaxy, $\xi$, as the angle between the major axis of the projected ellipse and the $x$-axis, as shown in Fig.~\ref{ske_pa}. The range of $\xi$ is $0\--90^{\circ}$. \begin{figure} \centering \includegraphics[scale=0.4]{ske.pdf} \caption{Illustrations of the coordinate system, position angle $\xi$, and projected distance $r$. O denotes the center of a cluster of galaxies.} \label{ske_pa} \end{figure} Analogous to the work of \cite{Falco14} using the $(r,v_{\rm{los}})$ map, we can plot the $(r, \xi)$ map for the projected red galaxies in an image, where the horizontal coordinate $r$ denotes the distance from a projected ellipse to the origin of the coordinates, and the vertical coordinate is the position angle $\xi$. Since we aim to identify the large-scale filaments around a cluster, the origin of the coordinates is preferentially set as the center of a cluster. The $(r, \xi)$ map makes full use of the 2D information on the orientations and positions of the red galaxies. If the red galaxies are not arranged in some filament structures, $\xi$ should be uniformly random within $0\--90^{\circ}$. Conversely, if the galaxies are located in a filament, we should expect a non-random distribution of $\xi$ and an inhomogeneous distribution of $r$. This manner of detecting filaments is called the location-alignment method (LAM). \subsection{Data Analysis} We will apply LAM to the data of the galaxies around the Coma cluster, and then compare the results with the filaments obtained by the method of \cite{Falco14}. We take the galaxy NGC~4874 as the center of the Coma cluster \citep{Kent82} and the origin of the coordinates, which is located at RA = $12^{\rm{h}}59^{\rm{m}}35^{\rm{s}}.7$, and DEC = $+27^{\circ}57'33''$. The galaxies used here are selected from the Sloan Digital Sky Survey data release 12 (SDSS DR12; Alam et al. 2009) with $r$-band model magnitudes (modelMag), $m_r$, satisfying $12\ {\rm{mag}}<m_r\leq 18\ \rm{mag}$, classified as galaxies photometrically by the SDSS data reduction pipelines (i.e., photoObj.type=3), and distributed within $9^{\circ}$ from the position of the Coma center and with redshifts between 0.01 and 0.037 (i.e., velocities along the line of sight are between 3000 and 11000~km~$\rm{s}^{-1}$; Falco et al. 2014). 2590 galaxies around the Coma cluster are obtained. In order to obtain the position angles of the red galaxies, we first plot the $u-r$ versus $r$ color-magnitude diagram (CMD) and the distribution of colors, $u-r$, of the 2590 galaxies, which are shown in Fig.~\ref{cmd}. Second, we select the red galaxies. The red sequence \citep{Bower92} is formed by old elliptical galaxies whose spectra show similar $4000\ \AA$ breaks (a characteristic of old stellar populations ubiquitous in elliptical galaxies; Pereira \& Kuhn 2005) resulting from photospheric absorptions of heavy elements, and the $u$ and $r$ filters nicely probe the spectral region across the $4000\AA$ break. A linear fit to the red sequence, \begin{equation} (u-r)=k\cdot r+d, \label{lf} \end{equation} where $k\simeq -0.106$ and $d\simeq 4.154$ are the slope and intercept, respectively, is applied to obtain the slope of the ridgeline \citep{Bower92} in the CMD.
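For concreteness, a minimal sketch of how such a ridgeline fit can be carried out is given below; the array names and the initial color window used to isolate red-sequence candidates before fitting are illustrative assumptions, not part of the analysis of this paper.
\begin{verbatim}
import numpy as np

def fit_ridgeline(r_mag, u_mag, initial_window=(2.2, 3.2)):
    """Least-squares fit of the red-sequence ridgeline (u-r) = k*r + d.

    r_mag, u_mag   : arrays of r- and u-band model magnitudes.
    initial_window : crude (u-r) color cut isolating likely red-sequence
                     members before the fit (an assumption of this sketch).
    """
    color = u_mag - r_mag
    cand = (color > initial_window[0]) & (color < initial_window[1])
    k, d = np.polyfit(r_mag[cand], color[cand], deg=1)  # slope, intercept
    return k, d   # the text quotes k ~ -0.106 and d ~ 4.154
\end{verbatim}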
As shown in the distribution of colors of the galaxies, the colors of the red and blue galaxies approximately follow two Gaussian distributions, and the standard error $\sigma\simeq 0.2\ \rm{mag}$ for the colors of the red galaxies is obtained, therefore we select the galaxies confined within $2\sigma\simeq 0.4\ \rm{mag}$ around the ridgeline as the red galaxies, which are known as the collections of elliptical/S0 galaxies and dwarf ``ellipticals'' \citep{Hung10}. The rest bluer galaxies are mostly spiral galaxies. The selection criterion satisfies the Butcher-Oemler condition, i.e., blue galaxies are those at least 0.2 mag bluer than the cluster ridgeline \citep{Butcher84}. Subsequently, $\xi$ of the red galaxies are calculated by the parameters ``photoObj.deVPhi\_r'' (parameter of the de Vaucouleurs surface brightness fitting model) from the SDSS data reduction pipelines, since the de Vaucouleurs $R^{1/4}$ surface brightness profile fits well to many elliptical galaxies \citep{Vaucouleurs48}. \begin{figure} \centering \includegraphics[scale=0.49]{cmd.pdf} \caption{The distribution of color and CMD of the galaxies around the Coma cluster. $u-r$ color is plotted against the $r$ magnitude of the galaxies. The elliptical/S0 galaxies and spiral galaxies are denoted by red and blue points, respectively. The black solid line denotes the ridgeline of red galaxies, and the two dashed lines confine the galaxies within $2\sigma$ color range around the ridgeline.} \label{cmd} \end{figure} Our goal is to find structures confined in a relatively small region in the $x-y$ plane. Fig.~\ref{pa_filament_detect} shows the Coma cluster and its environment up to $9^{\circ}$ from the cluster center. The 536 galaxies distributed within about $1.3^{\circ}$ (corresponding to the virial radius $r_{\rm{vir}}\simeq2.3\ \rm{Mpc}$; Keshet et al. 2014) are mainly the members of Coma cluster and should be removed. Additionally, since the method of \cite{Falco14} can only detect the galaxies in filaments with $r\gtrsim 2.5r_{\rm{vir}}$, therefore the galaxies distributed within $r\lesssim 3^{\circ}$ are also removed in order to compare our results with those obtained by the method of \cite{Falco14}. All of the removed galaxies are denoted by the points in the central yellow round region in Fig.~\ref{pa_filament_detect}. The image of the galaxies except the central removed galaxies is split into eight wedges, and our goal is to detect the filaments in two wedges (W1 and W2), which are denoted by the points of the orange regions in Fig.~\ref{pa_filament_detect}. For each selected wedge, the $(r, \xi)$ map of the red galaxies in this wedge is obtained; the $(r, \xi)$ map is divided into $9\times9$ cells with $\Delta r\times \Delta \xi=1^{\circ}\times 10^{\circ}$, where $\Delta r$ and $\Delta \xi$ are the bins of $r$ and $\xi$, respectively. We need to compare the galaxy number density $n_i$ in each cell $i$, with the expected number of background galaxies in this cell $n^{\rm{bg}}_i$, to determine whether there is an excess in the $(r, \xi)$ map. The five background wedges are represented by the green regions shown in Fig.~\ref{pa_filament_detect}, and we artificially set $\xi$ of the background red galaxies uniformly random in $0\--90^{\circ}$ instead of their real orientations, since the background galaxies may be also arranged in filaments and thus grouped into some specific $\xi$ ranges. In other words, we only use the information of the background density, but not their orientations. 
We exclude the two wedges (yellow wedges) adjacent to each selected wedge, since any structure in the selected wedge might stretch to the closest two wedges. \begin{figure*} \centering \includegraphics[scale=0.6]{w_f.pdf} \caption{The Coma cluster and its environment up to $9^{\circ}$ from the cluster center. The left and right panels show all the galaxies (red and blue) and only the red galaxies in this region, respectively. The region within $3^{\circ}$ from the cluster center is denoted by the central yellow round. The two selected wedges (W1 and W2), corresponding background wedges, and two wedges adjacent to each selected wedge, are colored orange, green, and yellow, respectively.} \label{pa_filament_detect} \end{figure*} The excess in cell $i$ is defined as \begin{equation} m_i=\frac{n_i-n_i^{\rm{bg}}/5}{n_i^{\rm{bg}}/5}. \label{ov_eq} \end{equation} If $m_i>0$, it means that there is an excess in the cell $i$. In order to select statistically significant excesses, we repeatedly set $\xi$ of the background galaxies $10^5$ times. Each time, the cells with $m_i>0$ are selected. During the $10^5$ times of tests, some cells are always selected, but some other cells are selected only a few times. Finally, we only choose the cells with the accumulated selected times larger than a given criteria, for instance $1\sigma$ (about 68264 times), $2\sigma$ (about 95456 times), and $3\sigma$ (about 99725 times); if a cell is chosen, all of the red galaxies in the cell are selected. Representatively, we plot the $(r,\xi)$ maps and the final chosen-cells with the confidence level of $3\sigma$ for the two selected wedges, as shown in Fig.~\ref{pas}. For each $\xi$ bin (bin of $10^{\circ}$), we also plot the histogram of $r$ of the red galaxies in this $\xi$ bin, i.e., the number of red galaxies as a function of $r$ (as denoted by the orange histogram in Fig.~\ref{pas}), and the histogram of the simulated background galaxies in this $\xi$ bin, i.e., $\bar{n}^{\rm{bg}}/5$ as a function of $r$ (as denoted by the green histogram in Fig.~\ref{pas}), where $\bar{n}^{\rm{bg}}_i$ is the mean value of $n^{\rm{bg}}_i$ during the $10^5$ times of tests. The final chosen-cells are denoted by the blue bins. \begin{figure*} \centering \includegraphics[scale=0.6]{cell_pas.pdf} \caption{The left and right panels show the $(r,\xi)$ map and distribution of $r$ of the red galaxies for W1 and W2, respectively. The top panel shows the $(r,\xi)$ maps for the red galaxies in the two selected wedges. The red galaxies inside and outside the central $3.0^{\circ}$ region are denoted by the black and orange points, respectively. The selected cells with the confidence level of $3\sigma$ are colored blue. The lower nine panels show the distributions of $r$ of the red galaxies within the nine $\xi$ bins. The $3.0^{\circ}$ region from the cluster center is colored gray. The orange and green histograms denote the distributions of $r$ of the red galaxies in the selected and background wedges, respectively. The final chosen cells are also denoted by the blue bins.} \label{pas} \end{figure*} \subsection{Results and Comparison} As described in introduction, \cite{Falco14} developed a method of identifying large-scale filaments and sheets around a cluster using the $v_{\rm{los}}$ and 2D positions of galaxies $r$, which has been proved to be robust for both the simulations and observations \citep{Falco14,Lee15}. 
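Before turning to this comparison, we give for concreteness a minimal sketch of the LAM cell-selection step described above; the array names, the use of \texttt{numpy}, and the handling of empty background cells are illustrative assumptions rather than the code actually used for our analysis.
\begin{verbatim}
import numpy as np

def select_cells(r_sel, xi_sel, r_bg, n_bg_wedges=5, n_trials=10**5,
                 frac=0.99725,
                 r_edges=np.arange(0.0, 10.0, 1.0),
                 xi_edges=np.arange(0.0, 100.0, 10.0)):
    """Monte Carlo selection of (r, xi) cells with a positive excess m_i.

    r_sel, xi_sel : projected distances (deg) and position angles (deg) of the
                    red galaxies in the selected wedge.
    r_bg          : projected distances of red galaxies in the background
                    wedges; their xi are re-drawn uniformly in each trial.
    frac          : fraction of trials in which a cell must show an excess,
                    e.g. ~0.997 for the 3-sigma confidence level.
    """
    n_sel, _, _ = np.histogram2d(r_sel, xi_sel, bins=[r_edges, xi_edges])
    times_selected = np.zeros_like(n_sel)
    for _ in range(n_trials):
        xi_bg = np.random.uniform(0.0, 90.0, size=len(r_bg))
        n_bg, _, _ = np.histogram2d(r_bg, xi_bg, bins=[r_edges, xi_edges])
        expected = n_bg / n_bg_wedges
        with np.errstate(divide="ignore", invalid="ignore"):
            m = (n_sel - expected) / expected   # excess as defined in the text
            # occupied cells with an empty background count as excesses
            times_selected += (m > 0)
    return times_selected >= frac * n_trials    # mask of finally chosen cells
\end{verbatim}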
In this paper, we use the method of \cite{Falco14} to identify the filaments in the same two wedges around the Coma cluster, and treat the detected filaments as the ``standard'' ones to be compared with the results by LAM. The detailed process of detecting the standard filaments can be found in \cite{Falco14}. The red galaxies in W1 and W2 detected by LAM are shown in Figs.~\ref{w1} and \ref{w2}, and denoted by the blue diamonds. The upper-left, upper-right, lower-left panels of the two figures show the results with the criteria $1\sigma$, $2\sigma$ and $3\sigma$, respectively; and the lower-right panels show the red galaxies of the standard filaments (denoted by the blue triangles) in the corresponding wedges using the method of \cite{Falco14}. It is worth noting that the standard filaments can only be detected with the $1\sigma$ confidence level. \begin{figure*} \centering \includegraphics[scale=0.6]{w1_f.pdf} \caption{The selected red galaxies by LAM with the $1\sigma$, $2\sigma$, and $3\sigma$ confidence levels for W1 are denoted by the blue diamonds in the upper-left, upper-right, and lower-left panels, respectively. The detected standard filaments by the method of Falco et al. (2014) are denoted by the blue triangles in the lower-right panel.} \label{w1} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.6]{w2_f.pdf} \caption{Analogous to Fig.~\ref{w1}, this figure shows the selected red galaxies by LAM and method of Falco et al. (2014) for W2.} \label{w2} \end{figure*} In Table~\ref{N_comp}, the filaments obtained by LAM are compared with the standard filaments in the two wedges. For each wedge, we count the numbers of galaxies in the detected filaments ($N_{\rm{det}}$) by LAM and standard filaments ($N_{\rm{std}}$), respectively, and the number of galaxies in both the detected and standard filaments ($N_{\rm{c}}$). Thus the fractions of the duplicated galaxies in the detected filaments, $N_{\rm{c}}/N_{\rm{det}}$, and in the standard filaments, $N_{\rm{c}}/N_{\rm{std}}$, are obtained. The latter suggests the detection efficiency of LAM: a higher $N_{\rm{c}}/N_{\rm{std}}$ indicates a more effective LAM. \begin{table} \scriptsize \begin{tabular}{@{}cccc|c|c@{}} \hline \hline \multicolumn{6}{c}{W1}\\ \hline & \multicolumn{3}{c}{LAM} & Alignment Only & No Redshifts\\ & $1\sigma$ & $2\sigma$ & $3\sigma$ & $1\sigma$ & $1\sigma$\\ \hline $N_{\rm{std}}$ & 59 & 59 & 59 & 59 & 59\\ $N_{\rm{det}}$ & 96 & 81 & 79 & 97 & 87\\ $N_{\rm{c}}$ & 56 & 45 & 44 & 41 & 49\\ $N_{\rm{c}}/N_{\rm{std}}$ & 95.0\% & 76.3\% & 74.6\% & 69.5\% & 83.0\%\\ $N_{\rm{c}}/N_{\rm{det}}$ & 58.3\% & 55.6\% & 55.7\% & 42.3\% & 56.3\%\\ \hline \hline \multicolumn{6}{c}{W2}\\ \hline & \multicolumn{3}{c}{LAM} & Alignment Only & No Redshifts\\ & $1\sigma$ & $2\sigma$ & $3\sigma$ & $1\sigma$ & $1\sigma$\\ \hline $N_{\rm{std}}$ & 66 & 66 & 66 & 66 & 66\\ $N_{\rm{det}}$ & 79 & 66 & 60 & 61 & 69\\ $N_{\rm{c}}$ & 63 & 55 & 54 & 42 & 58\\ $N_{\rm{c}}/N_{\rm{std}}$ & 95.4\% & 83.3\% & 81.8\% & 63.6\% & 87.9\%\\ $N_{\rm{c}}/N_{\rm{det}}$ & 79.7\% & 83.3\% & 90.0\% & 68.8\% & 84.0\%\\ \hline \hline \end{tabular} \caption{The results of the detected galaxies for W1 and W2. The second to fourth columns list the results by LAM with the $1\sigma$, $2\sigma$, and $3\sigma$ confidence levels, respectively. The fifth column lists the results with the $1\sigma$ confidence level, when we only use the information of galaxy alignments.
The last column lists the results without restricting the redshifts of galaxies.} \label{N_comp} \end{table} According to Table~\ref{N_comp}, two points are concluded. First, with $1\sigma$ confidence level, the detection efficiency of LAM (denoted by $N_{\rm{c}}/N_{\rm{std}}$) is better than $95\%$, indicating that LAM effectively find out most galaxies found by the method of \cite{Falco14} in the filaments also with $1\sigma$ confidence level. The detection efficiency of LAM decreases with the increasing confidence level, but is always better than $75\%$, suggesting that LAM is still valid with such high confidence levels. Second, $N_{\rm{c}}/N_{\rm{det}}$ is relatively low, suggesting that LAM detects some other galaxies in filaments which cannot be found by the method of \cite{Falco14}. Finally, we plot the orientations (denoted by the black bars) of the selected red galaxies by LAM with the $3\sigma$ confidence level for each wedge in the top panels of Fig.~\ref{f_ori}. In order to clearly show how many filaments have been detected and the overall orientations of the detected filaments, we divide the two wedges into $1^{\circ}\times1^{\circ}$ cells, as shown in Fig.~\ref{f_ori}, and calculate the average orientation of the selected red galaxies in each cell, which is denoted by a red bar in the middle panels of Fig.~\ref{f_ori}. The overall orientations of filaments are clearly revealed by the red bars. According to the average orientations, there are two main filaments in W1 (F1 and F2), and also two filaments, or more precisely, two sheets (S1 and S2; the filaments are located in the planes of the sheets; Falco et al. 2014; Tempel \& Libeskind 2013) in W2. These filaments are circled by the blue lines in Fig.~\ref{f_ori}. For each filament, we also plot the distribution of orientations of the red bars in the filament region, as shown in the bottom panels of Fig.~\ref{f_ori}. Kolmogorov-Smirnov test (K-S test) is performed to detect the deviation of each orientation distribution from a uniform distribution, and the $p$ values of all of the four distributions are small ($p=0.13$ for F1, $p=0.21$ for F2, $p=0.20$ for S1, and $p=0.05$ for S2), suggesting that the orientations of the red galaxies in each filament are indeed anisotropic. \begin{figure*} \centering \includegraphics[scale=0.7]{f_detect.pdf} \caption{The left and right panels show the orientations of the red galaxies and distribution of each filament in W1 and W2, respectively. The top panels show the orientations of the selected red galaxies with the $3\sigma$, which are denoted by the black bars. The dashed grids divide W1 and W2 into $1^{\circ}\times 1^{\circ}$ cells. The middle panels show the average orientations (denoted by the red bars) of the black bars in each cell. The four filaments F1, F2, S1, and S2 are circled by the blue lines. The bottom panels show the distribution histograms of the orientations of red bars in each filament region, and the results of K-S test are illustrated by the $p$ values.} \label{f_ori} \end{figure*} \section{Discussion} The method of \cite{Falco14} can only identify the filaments gravitationally bound to the clusters of galaxies, i.e., only the galaxies whose radial velocities are influenced by the gravity from the cluster matter can be detected; whereas LAM can theoretically find all of the filaments either influenced or not by the gravity of the clusters, since the orientations of red galaxies are not related directly to the gravity of the clusters of galaxies. 
Therefore the low value of $N_{\rm{c}}/N_{\rm{det}}$ in W1 might indicate that there are some galaxies located in filaments but not or only weakly influenced by the gravity of the Coma cluster in W1. \subsection{Effect of alignment of red galaxies} We are interested in whether the selected galaxies are due to both the alignments and inhomogeneous distributions of the galaxies in the 2D image, or just due to the latter. For example, perhaps the detected filament (sheet) S1 in W2 is identified because of the high number density of galaxies in the $360\--540\ \rm{arcmin}$ radius range rather than the alignments of red galaxies. In order to test the effect of alignment, the galaxies in the selected wedges themselves are treated as the background galaxies; meanwhile we artificially set $\xi$ of the background galaxies uniformly random in $0\--90^{\circ}$ and test the excess $m_i'=(n_i-n_i^{\rm{bg}})/n_i^{\rm{bg}}$ in each cell for $10^5$ times. The results are also listed in Table~\ref{N_comp}. We find that the filaments can also be identified with the $1\sigma$ confidence level; however the detection efficiency ($N_{\rm{c}}/N_{\rm{std}}$) is much worse than that by LAM. The effect of alignments can be characterized by the fraction $f=N_{\rm{c}}'/N_{\rm{c}}$, where $N_{\rm{c}}$ and $N_{\rm{c}}'$ are the numbers of detected galaxies by LAM and alignments only; $f=73.2\%$ for W1, and $f=66.7\%$ for W2. Therefore the galaxies selected by alignments account for substantial parts of the galaxies selected by LAM, suggesting that the alignments of red galaxies play an important role in LAM. \subsection{LAM without redshift information} The galaxies alignments and 2D distributions of galaxies are independent of the redshifts, therefore we expect that LAM should be still effective without the information of redshifts of the galaxies. However in this case, some interlopers, i.e., the foreground and background galaxies, will be included. The orientations of the foreground and background galaxies should be isotropic, if there is no strong gravitational lens in front of these galaxies, and the galaxies are not arranged in other filaments. Therefore the interlopers may be contaminations for LAM. With these in mind, we retrieve the catalog of galaxies around the Coma cluster (within $9^{\circ}$) from SDSS DR12 again, without restricting the range of redshifts. In order to reduce the contamination of the interlopers in the image, the range of the $r$ band magnitudes of the selected red galaxies need to be carefully chosen. Here we only use the bright red galaxies within $12\ {\rm{mag}}<m_r<16\ \rm{mag}$ to detect the filaments, where the faint end of $16\ \rm{mag}$ is brighter than the previous value of $18\ \rm{mag}$, since the alignments of the brighter red galaxies are more significant \citep{Tempel15}, and a brighter faint end induces fewer background interlopers; the other selecting criteria of the red galaxies remain unchanged. We apply LAM to the new sample of red galaxies, and the result for each wedge (W1 and W2) is also compared with the standard filaments, which is listed in the last column of Table~\ref{N_comp}. We find that the results for both W1 and W2 are slightly worse than the results of redshifts restricted; however, the detection efficiency (83.0\% for W1, 87.9\% for W2) is also acceptable. 
Additionally, $N_{\rm{c}}/N_{\rm{det}}$ (56.3\% for W1, 84.0\% for W2) for each wedge is also close to that with the redshifts restricted (58.3\% for W1, 79.7\% for W2), implying that the selected filaments without the redshifts information of galaxies are almost the same as those with the redshifts restricted. We find that, in W1 and W2, 24 and 15 galaxies of the selected red galaxies by LAM (with $1\sigma$ confidence level) are not found without the redshifts information, respectively. Among these red galaxies which are not found without using the redshifts information, 18 (for W1) and 11 (for W2) galaxies are fainter than $m_r=16\ \rm{mag}$, suggesting that these galaxies are not found is mainly due to the fact that their magnitudes are out of range. Therefore the results without the redshifts information are slightly worse than the results of LAM, because some faint ($m_r>16\ \rm{mag}$) galaxies are not included in the sample ($12\ {\rm{mag}}<m_r<16\ \rm{mag}$) we used here. \subsection{Alignment of ``background" galaxies?} Without restricting the redshifts of galaxies, we select $N_{\rm{det}}=87$ red galaxies in W1, and $N_{\rm{det}}=69$ galaxies in W2. Among these selected red galaxies, 72 (for W1) and 64 (for W2) red galaxies were found by LAM before, and only 15 (for W1) and 5 (for W2) red galaxies were not found by LAM before; meanwhile among these red galaxies which were not found by LAM, 11 (for W1) and 4 (for W2) galaxies are located at the redshifts $z>0.037$ (i.e., the given lower redshift limit of background galaxies), suggesting that almost all of the galaxies which were not found by LAM are possible background interlopers. We directly plot the orientations of the possible background interlopers for W1 and W2 in Fig.~\ref{ori_inter}, and find that the orientations of the possible background interlopers tend to be parallel to the overall orientations of filaments. In order to test whether the tendency is caused by physical reasons or just by accident, we need to study the orientations of the all possible background red galaxies ($z>0.037$, $12\ {\rm{mag}}<m_r<16\ \rm{mag}$) behind the four filaments/sheets. \begin{figure*} \centering \includegraphics[scale=0.7]{ori_inter.pdf} \caption{The upper and lower panels show the orientations of ``background" interlopers in W1 and W2, respectively. The red bars denoting the overall orientations of filaments are the same as those in the middle panels of Fig.~\ref{f_ori}, and the green bars denote the orientations of the ``background" interlopers in the sample of selected red galaxies without restricting redshifts.} \label{ori_inter} \end{figure*} First, we choose all possible background red galaxies ($z>0.037$, $12\ {\rm{mag}}<m_r<16\ \rm{mag}$) in the four filament regions in the 2D image as circled by the blue lines in Fig.~\ref{f_ori}, and plot their orientations in the upper panels of Fig.~\ref{ori_bg}. Second, we calculate the average orientation of the possible background galaxies in each $1^{\circ}\times1^{\circ}$ spatial cell, with the same manner as used for the filament galaxies (see section 2.3 for detail), and plot the average orientations in the middle panels of Fig.~\ref{ori_bg}. Finally, analogous to the lower panels in Fig.~\ref{f_ori}, we plot the distribution histogram of the average orientations in each filament region, and use K-S test to test the deviation from a uniform distribution; $p$ values are shown in the lower panels of Fig.~\ref{ori_bg}. 
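The per-cell averaging and uniformity test just described can be sketched as follows; the plain arithmetic averaging of $\xi$ within each cell and the function names are assumptions of this illustration (an axial/circular mean would be an alternative convention), and the sketch is not necessarily the exact code used for our analysis.
\begin{verbatim}
import numpy as np
from scipy import stats

def cell_mean_orientations(x_deg, y_deg, xi_deg, cell=1.0):
    """Average position angle of the galaxies falling in each cell x cell (deg) bin."""
    ix = np.floor(np.asarray(x_deg) / cell).astype(int)
    iy = np.floor(np.asarray(y_deg) / cell).astype(int)
    groups = {}
    for i, j, xi in zip(ix, iy, xi_deg):
        groups.setdefault((i, j), []).append(xi)
    return np.array([np.mean(v) for v in groups.values()])

def ks_p_value(mean_xi, lo=0.0, hi=90.0):
    """Two-sided K-S p-value of the cell-averaged orientations vs. Uniform(lo, hi)."""
    return stats.kstest(mean_xi, "uniform", args=(lo, hi - lo)).pvalue
\end{verbatim}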
\begin{figure*} \centering \includegraphics[scale=0.7]{ori_bg.pdf} \caption{The orientations of the red ``background" galaxies in each filament region and corresponding distribution of average orientations of the ``background" galaxies. Analogous to Fig.~\ref{f_ori}, the left and right panels correspond to W1 and W2, respectively. The top panels show the orientations of the red ``background" ($z>0.037$) galaxies in each filament region, which are denoted by the green bars. The dashed grids divide W1 and W2 into $1^{\circ}\times 1^{\circ}$ cells. The middle panels show the average orientations (denoted by the cyan bars) of the green bars in each cell. The four filament regions F1, F2, S1, and S2 are circled by the blue lines. The bottom panels show the distribution histograms of the orientations of cyan bars in each filament region, and the results of K-S test are illustrated by the $p$ values.} \label{ori_bg} \end{figure*} We find that the orientation distribution of the possible background galaxies in F1 region significantly deviates from a uniform distribution ($p=0.09$), whereas the distributions of possible background galaxies in F2, S1, and S2 regions are more likely to be uniform; meanwhile, the most significant excess in the average orientation distribution of the possible background galaxies behind F1 is located at about $40^{\circ}\--50^{\circ}$, which is consistent with the excess location of the red galaxies in F1 filament (see the lower left panel of Fig.~\ref{f_ori}). Therefore the selection of the possible background interlopers in F1 region by LAM is more likely due to a physical reason rather than accident. For F2 region, the uniform distribution ($p=0.40$) may be due to the small number of statistics. Therefore for detecting F2, the effect of the possible background galaxies is not clear. For S1 and S2 regions, the possible background interlopers are more likely to be selected by accident, because of the high values of $p$ ($p=0.97$ for S1 region, $p=0.85$ for S2 region). Therefore for detection of S1 and S2, the possible background interlopers are contaminations. There are some possible mechanisms that may result in the relatively significant alignment signal of the possible background galaxies behind F1. For example, according to many previous papers about the effect of gravitational lensing, theoretically the images of background galaxies might be stretched along the orientation of the foreground filament by the shear of the filament (e.g., Higuchi et al. 2014). However, the gravitational lensing by filaments is extremely weak \citep{Dolag06,Mead10}, and changes the observed ellipticity of a background galaxy by only $2|\gamma|\simeq 0.01$ \citep{Waerbeke01}, where $\gamma$ is the shear of gravitational lensing. Therefore it is very unlikely that gravitational lensing contributes to the detected significant alignment signal shown in the lower left panel of Fig.~\ref{ori_bg}. Another possibility is that if some member galaxies in F1 are classified as background galaxies even if $z>0.037$, these galaxies may result in the detected alignment signal. In order to test this possibility, we plot the distribution of relative redshifts $z-z_0$ of all red galaxies in F1 region, as shown in Fig.~\ref{dis_z}; $z$ and $z_0=0.02393$ are the observed redshifts of a red galaxy and NGC~4874 (center of Coma cluster), respectively. 
The dot-dashed and dashed lines denote $z=z_0$ (i.e., the redshift of the Coma center) and $z=0.037$ (i.e., the redshift lower limit of the possible background galaxies), respectively. If there is no filament in F1 region, we should expect a decreasing number of galaxies with increasing $z-z_0$, in the range of $z>z_0$. However, a significant excess is located at about $0.0342<z<0.0377$ (denoted by the grey bin in Fig.~\ref{dis_z}), indicating that there may be a filament (F1). Therefore a significant fraction of the member galaxies in F1 are indeed classified as ``background'' galaxies, which may explain the significant alignment signal of ``background'' galaxies. \begin{figure} \centering \includegraphics[scale=0.49]{dis_z.pdf} \caption{Distribution histogram of relative redshifts of all the galaxies in the F1 region in the 2D image. The dot-dashed and dashed lines denote $z=z_0=0.02393$ and $z=0.037$, respectively. The excess with the range of $0.0342<z<0.0377$ is denoted by the grey bin.} \label{dis_z} \end{figure} To further test whether the excess in Fig.~\ref{dis_z} really reveals a filament (F1), we use LAM again with the red galaxies in F1 region with $0.0342<z<0.0377$ (the redshift range of the excess in Fig.~\ref{dis_z}), and plot the orientations of the final selected red galaxies (with $3\sigma$ confidence) and distribution of orientations in Fig.~\ref{ori_nf}. We find that F1 is indeed detected with these galaxies, suggesting that the excess in Fig.~\ref{dis_z} results from the member galaxies in F1. The $p$ value ($p=0.09$) is smaller than that with $0.01<z<0.037$ ($p=0.13$ as shown in Fig.~\ref{f_ori}), also indicating that the red galaxies with $0.037<z<0.0377$ in F1 region are indeed member galaxies of F1. \begin{figure} \centering \includegraphics[scale=0.49]{ori_nf.pdf} \caption{Analogous to Fig.~\ref{f_ori}, from the upper to lower panels, we plot the orientations of the selected (with $3\sigma$ confidence) red galaxies with $0.0342<z<0.0377$ in F1 region, average orientation in each spatial cell, and distribution of the average orientations. The result of K-S test is illustrated by the $p$ value.} \label{ori_nf} \end{figure} Therefore for W1, we need to subtract the galaxies with $0.037<z<0.0377$ from the eleven selected background galaxies shown in Fig.~\ref{ori_inter}. After removing these galaxies, we find that the contaminations by the remaining background interlopers are not strong when LAM is used without the information of galaxy redshifts, since there are only seven ($8.0\%$ of the total selected galaxies) and four ($5.8\%$ of the total selected galaxies) background interlopers in W1 and W2, respectively. Therefore, an advantage of LAM is that the method is independent of the redshifts of galaxies, and thus can be applied to detecting filaments at relatively high redshifts, where there are photometric images but a lack of spectroscopic data and thus the previous algorithms for detecting filaments (e.g., those using the velocity field) fail. \section{Summary and Conclusion} Since the red galaxies are preferentially aligned with their host filaments, we developed a new method, called the location-alignment method (LAM), of detecting filaments around clusters of galaxies, which uses both the alignments of the red galaxies and their distributions in 2D images. For the first time, the orientations of red galaxies are used to identify filaments.
We applied LAM to the environment of the Coma cluster, and compared our results with the ``standard'' filaments detected by the line-of-sight velocities and 2D positions of the galaxies \citep{Falco14}. We summarize our results as follows. 1) LAM can effectively find the filaments around a cluster with the $1\sigma$ confidence level, and even at relatively higher confidence levels. 2) Four filaments (two of which are located in sheets) are found in the two selected regions (W1 and W2). 3) The alignment of red galaxies is important in LAM, since a substantial part ($73.2\%$ for W1, and $66.7\%$ for W2) of the selected red galaxies is due to the alignment. 4) Applying LAM to the samples of bright (brighter than $16\ \rm{mag}$) red galaxies without the information of redshifts, we find that the filaments can still be detected (for W1 and W2, 83.0\% and 87.9\% of the red galaxies in the filaments are detected, respectively). The contaminations by background interlopers are not strong. In conclusion, there are two main advantages of LAM: (1) LAM can clearly reveal the number and overall orientations of the detected filaments; (2) LAM is independent of the redshifts of galaxies; therefore, we can use LAM to select the red galaxies in filaments among the sample of galaxies without the information of redshifts. Thus LAM can be applied at relatively high redshifts where there are photometric images but a lack of spectroscopic data. \section*{Acknowledgments} We thank the referee very much for his/her helpful discussion. SNZ acknowledges partial funding support by 973 Program of China under grant 2014CB845802, by the National Natural Science Foundation of China under grant Nos. 11133002 and 11373036, by the Qianren start-up grant 292012312D1117210, and by the Strategic Priority Research Program ``The Emergence of Cosmological Structures'' of the Chinese Academy of Sciences under grant No. XDB09000000. \bibliographystyle{mn2e}
1,108,101,563,076
arxiv
\section{Introduction} \label{sec::introduction}\input{./section-introduction.tex} \section{Problem set up and literature review} \label{sec::problem-set-up-and-literature-review}\input{./section-problem-statement.tex} \section{Edgeworth expansions for network moments} \label{section::moments-and-Edgeworth-expansion}\input{./section-Edgeworth.tex} \section{Theoretical and methodological applications} \label{sec::applications}\input{./section-applications.tex} \section{Simulations} \label{sec::simulations}\input{./section-simulations.tex} \section{Discussion} \label{sec::discussion}\input{./section-discussion.tex} \section*{Acknowledgments} \input{./section-acknowledgments.tex} \begin{supplement} \sname{Supplement for}\label{suppA} \stitle{``Edgeworth expansions for network moments''} \slink[url]{URL to be added} \sdescription{The supplementary material contains: (1). Definition of $\sigma_w$ in Lemma \ref{lemma::term-approx}-\eqref{lemma::term-approx-Delta-hat-conditional-normal}; (2). All proofs; and (3). Additional simulation results and accompanying interpretations.} \end{supplement} \bibliographystyle{imsart-nameyear} \subsection{Higher-order accuracy of node sub- and re-sampling network bootstraps} \label{subsec::bootstrap-accuracy} One important corollary of our results is the first higher-order accuracy proof of some network bootstrap schemes. For a network bootstrap scheme that produces an estimated $\hat{U}_{n^*}^b$ and its jackknife\footnote{Here, we use the jackknife estimator in the bootstrap for a better connection with existing literature in the proof.} variance estimator $\hat{S}_{n^*}^*$, define $\hat T_{n^*}^*=(\hat U_{n^*}^b-\hat U_n)/\hat{S}_{n^*}^*$. We are going to justify the following two schemes. \begin{enumerate}[(a).] \item Sub-sampling \citep{bhattacharyya2015subsampling}: randomly sample $n^*$ nodes from $\{1,\ldots,n\}$ \emph{without replacement}, and compute $\hat{T}^*_{n^*}$ from the induced sub-network of $A$. \item Re-sampling \citep{green2017bootstrapping}: randomly sample $n$ nodes from $\{1,\ldots,n\}$ \emph{with replacement}, and compute $\hat{T}^*_{n^*}$ from the induced sub-network of $A$. \end{enumerate} \begin{remark} Notice that \cite{green2017bootstrapping} did not study the studentized form, and \cite{bhattacharyya2015subsampling} proposed a different variance estimator (what they call ``$\hat{\sigma}_{B_i}$''). Our justifications focus on the \emph{sampling schemes} combined with some natural formulation, not necessarily the same formulation as in these two papers. \end{remark} \begin{remark} As noted in \cite{green2017bootstrapping}, scheme (b) can be viewed as our data generation procedure described in Sections \ref{subsec::problem-setup-graphon-model} and \ref{subsec::problem-setup-network-moments} but with the graphon $f$ replaced by the adjacency-induced graphon $A(u,v) = A_{\lceil nu\rceil, \lceil nv\rceil}$, where $\lceil y\rceil :=\mathrm{Ceiling}(y)$. {We discuss their scheme (a) in Section \ref{sec::discussion}.} This may seem similar to $f$-based data generation, but in fact they are distinct. The graphon $A(\cdot,\cdot)$ inherits the binary nature of $A$ and will necessarily yield a lattice $g_1^*(X_1^*)$ regardless of the original graphon $f$ and the motif ${\cal R}$, rendering most classical Edgeworth analysis techniques inapplicable. But the real obstacle is that the bootstrapped network data from $A(\cdot,\cdot)$ have no edge-wise observational errors (i.e. no counterpart to the randomness in $A|W$).
Consequently, $\hat{T}_{n^*}^*$ loses the self-smoothing feature that $\hat{T}_n$ enjoys. \end{remark} \begin{thm}\label{thm::bootstrap-accuracy} Assume $g_1(X_1)$ satisfies a Cramer's condition such that $\limsup_{t\to\infty}\left|\mathbb{E}\left[ e^{{\mathbbm{i}} t g_1(X_1)\cdot \xi_1^{-1}} \right]\right|<1$. Under the conditions of Theorem \ref{thm::main-empirical}, we conclude for the following bootstrap schemes: \begin{enumerate}[(a).] \item Sub-sampling: choosing $n^* \asymp n$ and $n-n^* \asymp n$, we have \begin{equation} \label{theorem::application::boot-sub-sampling} \left\| F_{\hat{T}_{n^*}^*}(u) - F_{\hat{T}_{n^*(1-n^*/n)}}(u) \right\|_\infty = o_p\left((n^*)^{-1/2}\right) = o_p(n^{-1/2}). \end{equation} \item Re-sampling: choosing $n^* = n$, we have \begin{equation} \label{theorem::application::boot-re-sampling} \left\| F_{\hat{T}_{n^*}^*}(u) - F_{\hat{T}_{n^*}}(u) \right\|_\infty = o_p\left((n^*)^{-1/2}\right) = o_p(n^{-1/2}). \end{equation} \end{enumerate} \end{thm} \begin{remark} In the proof of Theorem \ref{thm::bootstrap-accuracy}, we combined our main results with the results of \cite{bloznelis2003edgeworth} for finite population U-statistics. It is important to notice that all existing works on finite populations assume a non-lattice distribution with the population size growing to infinity; see condition (1.13) in Theorem 1 of \cite{bloznelis2003edgeworth}. Consequently, the higher-order accuracy of some network bootstraps has so far only been proved under Cramer's condition. \end{remark} Part (a) of Theorem \ref{thm::bootstrap-accuracy} quantifies the \emph{effective sample size} in the sub-sampling network bootstrap: sampling $n^*$ out of $n$ nodes without replacement, the resulting bootstrap $\hat{T}_{n^*}^*$ approximates the distribution of $\hat{T}_m$ where $m = \left\{n^*/n\cdot (1-n^*/n)\right\}\times n$. Consequently, in order to approach the sampling distribution of $\hat{T}_n$ with higher-order accuracy using sub-sampling \cite{bhattacharyya2015subsampling}, one must have an observed network of at least $4n$ nodes, from which she shall repeatedly sub-sample $2n$ nodes without replacement. \subsection{One-sample t-test for network moments under general null graphon models} \label{subsec::one-sample t-test} In this and the next subsection, we showcase how our results immediately lead to useful inference procedures for network moments. For a given motif $R$, we test its population mean frequency $\mu_n$. Since $\mu_n$ depends on $n$ through $\rho_n$, we formulate the hypotheses as follows: \begin{center} $H_0: \mu_n = c_n$, versus $H_a: \mu_n \neq c_n$. \end{center} where $c_n$ is a speculated value of $\mu_n = \mathbb{E}[h(A_{1,\ldots,r})]$. In practice, $c_n$ may come from a prior study on a similar data set or fitting a speculated model to the data (for concrete examples on $c_n$ guesses, see Section 6.1 of \cite{bhattacharyya2015subsampling}). Here for simplicity we only discuss a two-sided alternative, and one-sided cases are exactly similar. The p-value can be formulated using our empirical Edgeworth expansion $\hat{G}_n(\cdot)$ in \eqref{Edgeworth::empirical}: \begin{equation} \textrm{Estimated p-value} = 2\cdot \min\left\{ \hat{G}_n(t^{\mathrm{(obs)}}), 1-\hat{G}_n(t^{\mathrm{(obs)}}) \right\}.
\label{eqn::application::one-sample-test} \end{equation} where $t^{\mathrm{(obs)}} := (\hat{u}_n^{\mathrm{(obs)}} - {c_n})/\hat{s}_n^{\mathrm{(obs)}}$, and $\hat{u}_n^{\mathrm{(obs)}}$ and $\hat{s}_n^{\mathrm{(obs)}}$ are the observed $\hat{U}_n$ and $\hat{S}_n$, respectively. We have the following explicit Type-II error rate. \begin{thm} \label{theorem::application-one-sample-test} Under the conditions of Theorem \ref{thm::main-empirical}, we have the following results: \begin{enumerate} \item The Type-I error rate of test \eqref{eqn::application::one-sample-test} is $\alpha + {{O\left({\cal M}(\rho_n,n; R)\right)}}$. \item The Type-II error rate of this test is ${o}(1)$ when $|c_n-{\mu_n}| = \omega\left(\rho_n^s\cdot n^{-1/2}\right)$. \end{enumerate} \end{thm} \begin{remark} The null model we study is complementary to the degenerate Erdos-Renyi null model in \citep{lei2016goodness, gao2017testinga, gao2017testingb}. The scientific questions are also different: they test model goodness-of-fit whereas we test population moment values. Notice that distinct network models may possibly share some common population moments. These approaches also use very different methods and analysis techniques. \end{remark} \subsection{Cornish-Fisher confidence intervals for network moments} \label{subsec::cornish-fisher-CI} Noticing that $\hat{G}_n$ is almost never a valid CDF, in order to preserve the higher-order accuracy of $\hat{G}_n$, we use the Cornish-Fisher expansion \citep{cornish1938moments, fisher1960percentile} to approximate the quantiles of $F_{\hat{T}_n}$. A Cornish-Fisher expansion is the inversion of an Edgeworth expansion, and its validity hinges on the validity of its corresponding Edgeworth expansion. Using the technique of \cite{hall1983inverting}, we have \begin{thm} \label{theorem::application-CI} { For any $\alpha\in(0,1)$, define the lower $\alpha$ quantile of the distribution of $\hat{T}_n$ by \begin{equation} q_{\hat{T}_n;\alpha} := \arg\inf_{q\in\mathbb{R}}\ F_{\hat T_n}(q)\geq \alpha \end{equation} } { and define the approximation \begin{align} \hat{q}_{\hat{T}_n;\alpha}&:=z_{\alpha}-\frac{1}{\sqrt{n}\cdot \hat{\xi}_1^3} \cdot \Bigg\{ \frac{ 2z_{\alpha}^2 + 1}6\cdot \hat{\mathbb{E}}[g_1^3(X_1)]\notag\\ & + \frac{r-1}2\cdot \left( z_{\alpha}^2 + 1 \right)\hat{\mathbb{E}}[g_1(X_1)g_1(X_2)g_2(X_1,X_2)]\Bigg\}, \label{formula::quantile-for-CI} \end{align} where $z_{\alpha} := \Phi^{-1}(\alpha)$. Then under the conditions of Theorem \ref{thm::main-empirical}, we have the following results \begin{enumerate}[(a).] \item The discrepancy between nominal and actual percentage-below for $q_{\hat T_n;\alpha}$ is bounded by \begin{equation} |F_{\hat T_n}(q_{\hat T_n;\alpha}) - \alpha| = {O\left({\cal M}(\rho_n,n; R)\right)} \label{application::CI-quantile-discrepancy} \end{equation} \item A ``horizontal'' error bound: \begin{equation} \left| \hat{q}_{\hat{T}_n;\alpha} - q_{\hat{T}_n;\alpha} \right| = {{\tilde{O}_p}\left({\cal M}(\rho_n,n; R)\right)} \label{application::CI-q-q-hat-error-bound} \end{equation} \item A uniform ``vertical'' error bound \begin{equation} \mathbb{P}(\hat{T}_n\leq \hat{q}_{\hat{T}_n;\alpha}) = \alpha + {O\left({\cal M}(\rho_n,n; R)\right)}. \label{application::CI-coverage-error-bound} \end{equation} \end{enumerate} } \end{thm} { The vertical error bound describes the approximation error between the nominal and actual coverage probabilities, whereas the horizontal error bound governs the approximation of quantiles. 
Using the vertical error bound, a $1-\alpha$ two-sided symmetric Cornish-Fisher confidence interval for estimating $\mu_n$ can be easily constructed as follows} \begin{equation} \left( {\hat{U}_n}-\hat{q}_{\hat{T}_n;1-\alpha/2}\cdot {\hat S_n}, {\hat{U}_n}-\hat{q}_{\hat{T}_n;\alpha/2}\cdot {\hat S_n} \right) \label{formula::EEE-confidence-interval} \end{equation} and by Theorem \ref{theorem::application-CI}, we know this CI has a $1-\alpha + {{O\left({\cal M}(\rho_n,n; R)\right)}}$ coverage probability. One-sided confidence intervals can be constructed similarly. \subsection{Outline and core ideas to analyze $\hat{T}_n$} \label{subsec::Edgeworth-outline} Our key discovery is that the studentized noisy U-statistic $\hat{T}_n$ can be decomposed as follows: \begin{equation} \hat{T}_n = \tilde{T}_n + {\check{\Delta}_n + {{\tilde{O}_p}\left({\cal M}(\rho_n,n; R)\right)}}, \label{eqn::T-hat-main-decomposition} \end{equation} where $\tilde{T}_n$, {to be formally defined in \eqref{def::T-tilde},} can be roughly understood as a studentized noiseless U-statistic {who could be approximately decomposed into an $O_p(1)$ linear part and an $O_p(n^{-1/2})$ quadratic part}, ${\check{\Delta}_n} \approx N(0,\sigma^2\asymp (\rho_n\cdot n)^{-1})$, and {recall the symbol ${\tilde{O}_p}$ from Section \ref{subsec::tOp}}. {Here the remainder term would be ignorable if the network is mildly dense. In the sparse regime, the remainder will dominate both ${\check\Delta_n}$ and the quadratic part of $\tilde T_n$.} Our decomposition \eqref{eqn::T-hat-main-decomposition} is a renaissance of the spirits of \cite{singh1981asymptotic} and \cite{lahiri1993bootstrapping}, but with the following crucial {conceptual} differences. First and most important, the error term ${\check\Delta_n}$ in our formula is \emph{not} artificial, but naturally a constituting component of $\hat{T}_n$. Therefore, the smoother does \emph{not} distort the objective distribution, that is, $\hat{T}_n$ is \emph{self-smoothed}. The second difference lies in the bandwidth of the smoothing error term. {Since the smoothing error terms in \cite{singh1981asymptotic} and \cite{lahiri1993bootstrapping} are artificial, the user is at the freedom to choose these bandwidths. In our setting, the bandwidth of the smoothing term $(\rho_n\cdot n)^{-1/2}$ is not managed by the user, but governed by the network sparsity. Therefore, when Cramer's condition fails, we make the very mild sparsity assumption that $\rho_n=O((\log n)^{-1})$ to ensure enough smoothing effect. } This echoes the lower bound on Gausssian bandwidth in \cite{lahiri1993bootstrapping}. We also need $\rho_n$ to be lower bounded {to effectively bound the remainder term}, see Lemma \ref{lemma::term-approx}{-\eqref{lemma::term-approx-Delta-hat-conditional-normal}}. Third, our error term ${\check\Delta_n}$ is \emph{dependent} on $\tilde{T}_n$ through $W$. {Last, the proof technique of \cite{singh1981asymptotic} is inapplicable to our setting due to the quadratic part in $\tilde T_n$; and \cite{lahiri1993bootstrapping} obtains an $o(n^{-1/2})$ error bound\footnote{{The $o(n^{-1/2})$ error bound in \cite{lahiri1993bootstrapping} holds on some ${\frak B}\subset \mathbb R$ with ``diminishing boundary'', while our error bounds hold on the entire $\mathbb{R}$.}}, while we aim at stronger results under a more complicated U-statistic setting with degree-two terms.} In our proofs, we carefully handle {these challenges} with original analysis. 
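As a purely illustrative aside, the objects entering this decomposition can also be probed numerically: one may simulate a small graphon model, evaluate the noiseless and noisy motif statistics $U_n$ and $\hat U_n$, and inspect a realization of the self-smoothing component $\hat U_n - U_n$. The graphon, motif, and implementation in the sketch below are hypothetical choices made only for this illustration and are not part of the formal development.
\begin{verbatim}
import numpy as np

def simulate(n, rho, rng):
    """X_i ~ Unif(0,1), W_ij = rho * X_i * X_j (toy graphon), A_ij ~ Bernoulli(W_ij)."""
    x = rng.uniform(size=n)
    w = rho * np.outer(x, x)
    np.fill_diagonal(w, 0.0)
    upper = np.triu(rng.uniform(size=(n, n)) < w, k=1)
    a = (upper | upper.T).astype(float)
    return w, a

def triangle_moment(m):
    """Average of m_ij * m_ik * m_jk over all node triples (m has zero diagonal)."""
    n = m.shape[0]
    return np.trace(m @ m @ m) / 6.0 / (n * (n - 1) * (n - 2) / 6.0)

rng = np.random.default_rng(0)
w, a = simulate(n=300, rho=0.5, rng=rng)
u_noiseless = triangle_moment(w)   # U_n, randomness coming from X only
u_noisy = triangle_moment(a)       # hat{U}_n, additionally carries the A|W noise
print(u_noisy - u_noiseless)       # one realization of the self-smoothing component
\end{verbatim}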
\subsection{Decomposition of the stochastic variations of $\hat{U}_n$} \label{subsec::U-hat} To simplify narration, here we focus on analyzing $\hat{U}_n$, and the analysis of $\hat{T}_n$ is conceptually similar. The stochastic variations in $\hat{U}_n=U_n+(\hat{U}_n-U_n)$ {come} from two sources: the randomness in $U_n$ due to $W$ and ultimately $X_1,\ldots,X_n$, and the randomness in $\hat{U}_n-U_n$ due to $A|W$, the edge-wise observational errors. The stochastic variations in $U_n$ as a conventional noiseless U-statistic is well-understood due to Hoeffding's decomposition \citep{hoeffding1948class}: \begin{align} U_n -\mu_n &=\frac{r}{n} \sum_{i=1}^n g_1(X_i) + \frac{r(r-1)}{n(n-1)}\sum_{1\leq i<j\leq n} g_2(X_i,X_j) + {{\tilde{O}_p}(\rho_n^s\cdot n^{-3/2}\log^{3/2}n)} \label{eqn::Hoeffding's decomposition} \end{align} where $g_1,\ldots,g_r$ are defined as follows. To avoid complicated subscripts, without confusion we define $g_k$'s for special indexes $(i_1,\ldots,i_r)=(1,\ldots,r)$. For indexes $1$, $k=\{2,\ldots,r-1\}$ (only when $r\geq 3$) and $r$, define $g_1(x_1) := \mathbb{E}[h(X_1,\ldots,X_r)|X_1=x_1] - \mu_n$, $g_k(x_1,\ldots,x_k) := \mathbb{E}[h(X_1,\ldots,X_r)|X_1=x_1,\ldots,X_{{k}}=x_{{k}}] - \mu_n - \sum_{k'=1}^{k-1} \sum_{1\leq i_1<\ldots <i_{k'}\leq r} g_{k'}(x_{i_1},\ldots,x_{i_{k'}})$ for $2\leq k\leq r-1$ and $g_r(x_1,\ldots,x_r) := h(x_1,\ldots,x_r) - \mu_n$. From classical literature, we know that $\mathbb{E}[g_k(X_{i_1},\ldots,X_{i_k})|\{X_i:i\in{\cal I}_k\subset\{i_1,\ldots,i_k\}\}] = 0$, where the strict subset ${\cal I}_k$ could be $\emptyset$, and $\mathrm{Cov}\left( g_k(X_{i_1},\ldots,X_{i_k}), g_\ell(X_{j_1},\ldots,X_{j_\ell}) \right)=0$ unless $k=\ell$ and $\{i_1,\ldots,i_k\} = \{j_1,\ldots,j_\ell\}$. Consequently, the linear part in the Hoeffding's decomposition is dominant. Define \begin{equation} \xi_1^2 := \mathrm{Var}(g_1(X_1)). \label{def::xi_1} \end{equation} We focus on discussing the stochastic variations in $\hat{U}_n-U_n$. The typical treatment in network bootstrap literature is to simply bound and ignore this component, such as Lemma 7 in \cite{green2017bootstrapping}. {In sharp contrast, by carefully quantifying its impact,} we shall reveal its key smoothing effect by a refined analysis. To better understand the impact of $\hat{U}_n-U_n$, let us inspect two simple {and illustrative} examples. {In these examples, we inspect $\hat T_n$ and use the fact that $\hat S_n\asymp \sigma_n\asymp \rho_n^s\cdot n^{-1/2}$ (by Lemma~\ref{lemma::term-approx})}. \begin{example} \label{example::motif-edge} Let $R$ be an edge with $r=2$ and $s=1$, and $\hat{U}_n$ is simply the sample edge density. By definition, all $h(A_{i_1,i_2})-h(W_{i_1,i_2})$ terms are mutually independent given $W$. Then {the asymptotic behavior of the self-smoother term is} $$ \frac{\hat{U}_n - U_n}{{\hat S_n}} \stackrel{d}{\to} N\left(0, \sigma_{\frac{\hat{U}_n-U_n}{{\hat S_n}}\big|W}^2 {\asymp(\rho_n\cdot n)^{-1}} \right) $$ {at} a uniform $O(\rho_n^{-1/2}\cdot n^{-1})$ Berry-Esseen CDF approximation error {rate}. \end{example} The next example shows that the insight of Example \ref{example::motif-edge} generalizes. \begin{example} \label{example::motif-triangle} Let $R$ be a triangular motif with $r=3, s=3$, and $\hat{U}_n$ is the empirical triangle frequency. 
We can decompose $\hat{U}_n - U_n$ as follows: \begin{align} &\frac{\hat{U}_n - U_n}{{\hat S_n}} = \frac1{\binom{n}3} \sum_{1\leq i_1<i_2<i_3\leq n} \frac{\left\{h(A_{i_1,i_2,i_3})-h(W_{i_1,i_2,i_3})\right\}}{{\hat S_n}} \notag\\ =& \frac1{\binom{n}3} \sum_{1\leq i_1<i_2<i_3\leq n} \frac{ \left( W_{i_1i_2} + \eta_{i_1i_2} \right)\left( W_{i_1i_3} + \eta_{i_1i_3} \right)\left( W_{i_2i_3} + \eta_{i_2i_3} \right) - W_{i_1i_2}W_{i_1i_3}W_{i_2i_3} }{{\hat S_n}}\notag\\ =& {\frac1{\binom{n}3}\Bigg\{ \sum_{\substack{1\leq i_1<i_2\leq n\\1\leq i_3\leq n\\i_3\neq i_1,i_2}} \frac{ W_{i_1i_3}W_{i_2i_3} \eta_{i_1i_2} + W_{i_1i_2}\eta_{i_1i_3}\eta_{i_2i_3}}{\hat S_n} + \sum_{1\leq i_1<i_2<i_3\leq n} \frac{\eta_{i_1i_2}\eta_{i_1i_3}\eta_{i_2i_3}}{\hat S_n} \Bigg\}}\notag\\ =& { \underbrace{\frac1{\binom{n}2}\sum_{1\leq i<j\leq n}\left(\frac{3\sum_{\substack{1\leq k\leq n\\k\neq i,j}}W_{ik}W_{jk}}{(n-2)\hat S_n}\right)\eta_{ij} }_{\textrm{Linear part}} } + { \underbrace{ \frac1{\binom{n}3}\sum_{\substack{1\leq i<j\leq n\\1\leq k\leq n\\k\neq i,j}} \frac{W_{ij}}{\hat S_n} \eta_{ik}\eta_{jk} }_{\textrm{Quadratic part}} }\notag\\ &+ { \underbrace{ \frac1{\binom{n}3}\sum_{1\leq i<j<k\leq n} \frac1{\hat S_n}\eta_{ij}\eta_{ik}\eta_{jk} }_{\textrm{Cubic part}} }\notag \end{align} where $\eta_{ij}:= A_{ij}-W_{ij}$. { The linear part is $\asymp \rho_n^{-1/2}\cdot n^{-1/2}$, the quadratic part is ${\tilde{O}_p}(\rho_n^{-1}\cdot n^{-1}\log^{1/2}n)$ and the cubic part is ${\tilde{O}_p}(\rho_n^{-3/2}\cdot n^{-1}\log^{1/2}n)$. We make two observations. First, the linear part in this example has the same asymptotic order as the linear part in Example \ref{example::motif-edge}. This is not a coincidence and will be formalized by Lemma \ref{lemma::term-approx}-\eqref{lemma::term-approx-Delta-hat-conditional-normal}. In other words, regardless of the shape of $R$, the linear part in such decomposition always provides smoothing effect at the same magnitude. Second, different from Example \ref{example::motif-edge}, we now have higher-degree remainder terms. The linear part nicely always dominates the quadratic part; but it only dominates the cubic part when $\rho_n=\omega(n^{-1/2}\log^{1/2}n)$. } \end{example} The insights of the two examples are generalized in Lemma \eqref{lemma::term-approx}-\eqref{lemma::term-approx-Delta-hat-conditional-normal}. When the network is moderately dense, the linear part in $\hat{U}_n-U_n$ dominates. Consequently, the overall contribution of the stochastic variations in $A|W$ approximates Gaussian {at} an $O(\rho_n^{-1/2}\cdot n^{-1})$ Berry-Esseen {error rate}. \subsection{Studentization form} \label{subsec::our-method-studentization-form} The understanding of $\hat{U}_n$ in Section \ref{subsec::U-hat} prepares us to fully specify $\hat{T}_n = (\hat{U}_n - \mu_n)/\hat{S}_n$. {Recall that we still need to} design $\hat{S}_n$. In $\mathrm{Var}(\hat{U}_n) = \mathbb{E}[\mathrm{Var}(\hat{U}_n|W)] + \mathrm{Var}(\mathbb{E}[\hat{U}_n|W])$, we observe $\mathrm{Var}(\hat{U}_n|W)\asymp \rho_n^{2s-1}\cdot n^{-2}$ and $\mathrm{Var}(\mathbb{E}[\hat{U}_n|W]) = \mathrm{Var}(U_n) \asymp \rho_n^{2s} \cdot n^{-1}$. We shall assume $\rho_n \cdot n\to \infty$, so $\sigma_n^2=\mathrm{Var}(U_n) = \mathrm{Var}(\mathbb{E}[\hat{U}_n|W])$ dominates. There are two main choices of $\hat{S}_n$. 
The conventional choice for studentizing noiseless U-statistics \citep{callaert1981order, helmers1991edgeworth, putter1998empirical} {uses} the jackknife estimator \begin{equation} n\cdot \hat{S}_{n;\textrm{jackknife}}^2 := (n-1)\sum_{i=1}^n\left( \hat{U}_n^{(-i)} - \hat{U}_n \right)^2, \label{def::jackknife-variance-estimator} \end{equation} where $\hat{U}_n^{(-i)}$ is $\hat{U}_n$ calculated on the induced sub-network of $A$ with node $i$ removed. Despite its conceptual straightforwardness, the jackknife estimator unnecessarily complicates analysis. Therefore, we use an estimator with a simpler formulation. In $\mathrm{Var}(\hat{U}_n) = \sigma_n^2 + O(\rho_n^{2s-1}n^{-2}) = r^2\xi_1^2/n + O(\rho_n^{2s-1}n^{-2})$, replace $\xi_1$ by its moment estimator. {Specifically, recall that $\xi_1^2 = \mathrm{Var}(g_1(X_1)) = \mathbb{E}[(\mathbb{E}[h(X_1,\ldots,X_r)|X_1]-\mu_n)^2]$. Replacing $\mathbb{E}[h(X_1,\ldots,X_r)|X_1]$ and $\mu_n$ by their estimators based on observable data, we can} design $\hat{S}_n$ as follows \begin{equation} n\cdot \hat{S}_n^2 := \frac{r^2}n\sum_{i=1}^n \underbrace{\Bigg\{\frac1{\binom{n-1}{r-1}}\sum_{\substack{1\leq i_1<\cdots<i_{r-1}\leq n\\i_1,\ldots,i_{r-1}\neq i}} h(A_{i,i_1,\ldots,i_{r-1}}) - \hat{U}_n \Bigg\}^2}_{ \textrm{Estimates }\xi_1^2 = \mathrm{Var}(g_1(X_1))}. \label{def::our-variance-estimator} \end{equation} We will show in Theorem \ref{thm::jackknife} that {$|\hat S_n^2-\hat S_{n;\textrm{jackknife}}^2|$ is ignorable, while our estimator $\hat S_n$ is computationally more efficient than the jackknife estimator.} Next, we expand $\hat{T}_n$. For simplicity, define the following shorthand \begin{align} { U_n^\#} := \frac{1}{\sqrt{n}\cdot \xi_1}\sum_{i=1}^n g_1(X_i), \quad&\Delta_n := \frac{r-1}{\sqrt{n}(n-1)\xi_1} \sum_{1\leq i<j\leq n} g_2(X_i,X_j), \label{def::delta}\\ \hat{\Delta}_n:=(\hat{U}_n-U_n)/\sigma_n,\quad\delta_n := (\hat{\sigma}_n^2&-\sigma_n^2)/\sigma_n^2, \quad\textrm{ and }\quad \hat{\delta}_n := (\hat{S}_n^2-\hat{\sigma}_n^2)/\sigma_n^2,\notag \end{align} where in \eqref{def::delta}, the technical intermediate term $\hat{\sigma}_n^{ 2}$ is defined as \begin{align} n\cdot \hat{\sigma}_n^2 &:= \frac{r^2}n\sum_{i=1}^n\Bigg\{\frac1{\binom{n-1}{r-1}}\sum_{\substack{1\leq i_1<\cdots<i_{r-1}\leq n\\i_1,\ldots,i_{r-1}\neq i}} h(W_{i,i_1,\ldots,i_{r-1}}) - U_n \Bigg\}^2.\notag \end{align} We now show that $\hat{T}_n$ can be expanded as follows. \begin{align} \hat{T}_n &= \left( { U_n^\#} + \Delta_n + \hat{\Delta}_n + {{\tilde{O}_p}(n^{-1}\log^{3/2}n)} \right)\cdot \left( 1+\hat{\delta}_n + \delta_n \right)^{-1/2}\notag\\ &= \tilde{T}_n + {\check{\Delta}_n} + \textrm{Remainder}, \label{main::hat-T_n::expansion} \end{align} where {$\check\Delta_n$ encodes the ``linear part'' of $\hat\Delta_n$ (see Lemma \ref{lemma::term-approx}-\eqref{lemma::term-approx-hat-delta}), recall the definition of ${\tilde{O}_p}$ from Section \ref{subsec::tOp}, and define} \begin{align} \tilde{T}_n := { U_n^\#} + \Delta_n - \frac12 { U_n^\#}\cdot \delta_n \label{def::T-tilde} \end{align} { The remainder in \eqref{main::hat-T_n::expansion} consists of the remainder terms from both $U_n-\mu_n$ and $\hat U_n-U_n$ approximations. We will show that it is ${\tilde{O}_p}({\cal M}(\rho_n,n;R))$.} The form \eqref{main::hat-T_n::expansion} is partially justified by the Taylor expansion $(1+x)^{-1/2}= 1-x/2 + O(x^2)$, with $x:= (\hat S_n^2-\sigma_n^2)/\sigma_n^2 = O_p(n^{-1/2})$ \citep{maesono1997edgeworth}; and a complete justification comes from our main lemma, i.e.
Lemma \ref{lemma::term-approx}. {Now recalling the definition of acyclic and cyclic $R$ shapes from Definition \ref{definition::acyclic-cyclic-R}, the definition of ${\cal M}(\rho_n,n;R)$ from definition \ref{definition::M} in Section \ref{sec::introduction}, and the definition of ${\tilde{O}_p}$, we are ready to state our main lemma. } \begin{lem} \label{lemma::term-approx} Assume the following conditions hold: \begin{enumerate}[(i).] \item $\rho_n^{-s}\cdot \xi_1 >C>0$, where $C>0$ is a universal constant, \label{condition::xi_1-bounded-away-from-zero} \item { $\rho_n = \omega(n^{-1})$ for acyclic $R$, or $\rho_n = \omega(n^{-2/r})$ for cyclic $R$, } \label{condition::rho_n} \end{enumerate} We have the following results: \begin{enumerate}[(a)] \item\label{lemma::term-approx-Tn-Unstar-Deltan} $\dfrac{U_n-{\mu_n}}{\sigma_n} = { U_n^\#} + \Delta_n + {{\tilde{O}_p}(n^{-1}\cdot \log^{3/2} n)} $, \item\label{lemma::term-approx-Delta-hat-conditional-normal} We have $$ \hat{\Delta}_n =\frac{(\hat{U}_n-U_n)}{\sigma_n} = {\check{\Delta}_n + \check{R}_n}, $$ { where $\check\Delta_n$ and $\check{R}_n$ satisfy \begin{align} \check{R}_n &= {{\tilde{O}_p}\left({\cal M}(\rho_n,n; R)\right)} \label{eqn::R-check-bound} \\ \left\| F_{{\check{\Delta}_n}|W}(u) - \right. &\left.F_{N(0,(\rho_n\cdot n)^{-1}\sigma_w^2)}(u) \right\|_\infty = {{\tilde{O}_p}}\left(\rho_n^{-1/2}\cdot n^{-1}\right). \label{eqn::Gaussian-term-convergence} \end{align} where the order control in \eqref{eqn::Gaussian-term-convergence} is ${\tilde{O}_p}(\cdot)$ rather than $O(\cdot)$ due to the randomness in $W$. }The definition of $\sigma_w$ is lengthy and formally stated in Section \ref{appendix::def::sigma_w} in Supplemental Material. As $n\to\infty$, we have $\sigma_w\stackrel{p}\asymp 1$. \item\label{lemma::term-approx-hat-delta} $\hat{\delta}_n = {{{\tilde{O}_p}\left({\cal M}(\rho_n,n; R)\right)}} $, \item\label{lemma::term-approx-delta} We have \begin{equation} \delta_n = \frac1n\sum_{i=1}^n\frac{g_1^2(X_i)-\xi_1^2}{\xi_1^2} + \dfrac{2(r-1)}{n(n-1)}\sum_{\substack{1\leq \{i,j\}\leq n\\i\neq j}}\dfrac{g_1(X_i)g_2(X_i,X_j)}{\xi_1^2} + {{\tilde{O}_p}}(n^{-1}{\cdot \log n}).\notag \end{equation} \end{enumerate} \end{lem} \begin{remark} \label{remark::3-1-degenerate} Assumption \eqref{condition::xi_1-bounded-away-from-zero} is a standard non-degeneration assumption in literature. It {is different from a smoothness assumption on graphon $f$}\footnote{{Smooth graphon: $f$ is called \emph{smooth}, if there exists a measure-preserving map $\varrho: [0,1]\to[0,1]$ such that $f(\varrho(\cdot), \varrho(\cdot))$ is a smooth function. See \cite{gao2015rate, zhang2017estimating} for more details.}}. A globally smooth Erdos-Renyi graphon leads to a degenerate $g_1(X_1)$ {that $\xi_1^2=\mathrm{Var}(g_1(X_1))=0$}. In the degenerate setting, both the standardization/studentization and the analysis would be very different. Asymptotic results for $r=2,3$ motifs under an Erdos-Renyi graphon {have been} established by \cite{gao2017testinga, gao2017testingb}. Degenerate U-statistics are outside the scope of this paper. \end{remark} \begin{remark} { \label{remark::rho-n-lower-bound} We note that Lemma \ref{lemma::term-approx} only requires the weak assumption on $\rho_n$ (see Assumption \ref{condition::rho_n}). This assumption matches the classical sparsity assumptions in network bootstrap literature \citep{bickel2011method, bhattacharyya2015subsampling, green2017bootstrapping}. 
Using Lemma \ref{lemma::term-approx}, we prove a higher-order error bound of the Edgeworth expansion in Theorem \ref{thm::main-theorem} with a stronger density assumption; while in Theorem \ref{thm::sparse-main-theorem}, we prove a novel modified Berry-Esseen bound for the normal approximation. Both downstream theorems significantly improve over the best existing results. } \end{remark} \begin{remark} Lemma \ref{lemma::term-approx}-\eqref{lemma::term-approx-Tn-Unstar-Deltan} and \eqref{lemma::term-approx-delta} are similar to results in classical literature on Edgeworth expansion for noiseless U-statistics, but here we account for $\rho_n$. Parts \eqref{lemma::term-approx-Delta-hat-conditional-normal} and \eqref{lemma::term-approx-hat-delta} are {new results} unique to the network setting. Especially in the proof of part \eqref{lemma::term-approx-Delta-hat-conditional-normal}, we significantly refine the analysis of the randomness in $A|W$ in \cite{bhattacharyya2015subsampling} and \cite{green2017bootstrapping}. \end{remark} \begin{remark} \label{remark::knowing-rho_n} Our result \eqref{eqn::Gaussian-term-convergence} in Lemma \ref{lemma::term-approx}-\eqref{lemma::term-approx-Delta-hat-conditional-normal} {is different from} Theorem 1 of \cite{bickel2011method}. {Here we distinguish} three quantities: the true $\rho_n$, $\tilde{\rho}_n = \mathrm{Mean}(W_{ij})$ and $\hat{\rho}_n = \mathrm{Mean}(A_{ij})$. The convergence rate of $\hat{\rho}_n\to \tilde{\rho}_n$ is much faster than that of $\tilde{\rho}_n\to\rho_n$. {The convergence rate here would contribute to the eventual CDF approximation error bound, therefore it is important that }our result \eqref{eqn::Gaussian-term-convergence} corresponds to $\hat{\rho}_n\to \tilde{\rho}_n$, thus avoiding the bottleneck. In contrast, \cite{bickel2011method} and later \cite{lunde2019subsampling} focus on $\hat{\rho}_n\to \rho_n$. \end{remark} \begin{table}[h!] \centering \vspace{-1em} \caption{Summary of the main components in $\hat T_n$} \begin{tabular}{c|c|c|c}\hline Component & Order of std. dev. & \bigcell{c}{Impacts\\ Edgeworth formula} & \bigcell{c}{Smoothing\\effect}\\\hline $U_n^{\#}$ & $1$ & Yes & No\\ $\Delta_n-\frac12 U_n^{\#}\cdot \delta_n$ & $n^{-1/2}$ & Yes & No\\ \rule{0pt}{3ex}$\check\Delta_n$ & $(\rho_n\cdot n)^{-1/2}$ & No & Yes\\ Remainder & ${{\tilde{O}_p}\left({\cal M}(\rho_n,n; R)\right)}$ & No & No\\\hline \end{tabular} \label{tab::Tn-hat-summary} \end{table} Overall, Lemma \ref{lemma::term-approx} clarifies the asymptotic orders of the leading terms in the expansion of $\hat{T}_n$. In fact, Lemma \ref{lemma::term-approx} {has a parallel version for a jackknife $\hat{S}_{n}$} in view of Theorem \ref{thm::jackknife}, but we do not present it due to the page limit. { We conclude this subsection with a summary table (Table \ref{tab::Tn-hat-summary}) of the main components in $\hat T_n$. Notice that although the smoother $\check\Delta_n$ is $\Omega(n^{-1/2})$, it does \emph{not} distort the $n^{-1/2}$ term in the Edgeworth expansion formula. A similar phenomenon is observed in the i.i.d. setting, see \cite{singh1981asymptotic} (equation (2.8)) and \cite{lahiri1993bootstrapping} (Section 2.2). } \subsection{Population and empirical Edgeworth expansions for network moments} In this subsection, we present our main theorems.
\begin{thm}[Population network Edgeworth expansion] \label{thm::main-theorem} Define \begin{align} G_n(x) &:= \Phi(x) + \frac{\phi(x)}{\sqrt{n}\cdot \xi_1^3} \cdot \Bigg\{ \frac{ 2 x^2 + 1}{6} \cdot\mathbb{E}[g_1^3(X_1)]\notag\\ & + \frac{r-1}2\cdot \left( x^2 + 1 \right)\mathbb{E}[g_1(X_1)g_1(X_2)g_2(X_1,X_2)]\Bigg\}, \label{eqn::main-theorem-1} \end{align} where $\Phi(x)$ and $\phi(x)$ are the CDF and PDF of $N(0,1)$. { Assume condition (\ref{condition::xi_1-bounded-away-from-zero}) of Lemma \ref{lemma::term-approx} hold, and replace condition \eqref{condition::rho_n} by a stronger assumption that either $R$ is acyclic and $\rho_n = \omega(n^{-1/2})$, or $R$ is cyclic and $\rho_n = \omega(n^{-1/r})$. } Additionally, assume either $\rho_n = O((\log n)^{-1})$ or Cramer's condition $\limsup_{t\to\infty}\left| \mathbb{E}\left[ e^{{\mathbbm{i}} t g_1(X_1)\cdot \xi_1^{-1}} \right] \right|<1$ holds. We have \begin{equation} \left\| F_{\hat{T}_n}(x) - G_n(x) \right\|_\infty = {O\left({\cal M}(\rho_n,n; R)\right)}.\notag \end{equation} \end{thm} \begin{remark} The assumed $\rho_n$'s upper bound in absence of Cramer's condition serves to sufficiently boost the smoothing power of $\check{\Delta}_n$, quantified in Lemma \ref{lemma::term-approx}-\eqref{eqn::Gaussian-term-convergence}. This assumption is unlikely improvable, since its required Gaussian variance $(\rho_n\cdot n)^{-1}= \Omega(\log n\cdot n^{-1})$ matches the minimum Gaussian standard deviation requirement $\Omega((\log n)^{1/2}\cdot n^{-1/2})$ in Remark 2.4 in \cite{lahiri1993bootstrapping} for the i.i.d. setting. \end{remark} In \eqref{eqn::main-theorem-1}, the Edgeworth coefficients depend on true population moments. In practice, they need to be estimated from data. Define \begin{align} \hat{g}_1(X_i) &:= \frac1{\binom{n-1}{r-1}}\sum_{\substack{1\leq i_1<\ldots<i_{r-1}\leq n\\i_1,\ldots,i_{r-1}\neq i}}h(A_{i,i_1,\ldots,i_{r-1}}) - \hat{U}_n,\notag\\ \hat{g}_2(X_i,X_j) &:= \frac1{\binom{n-2}{r-2}}\sum_{\substack{1\leq i_1<\ldots<i_{r-2}\leq n\\i_1,\ldots,i_{r-2}\neq i,j}}h(A_{i,j,i_1,\ldots,i_{r-2}}) - \hat{U}_n - \hat{g}_1(X_i) - \hat{g}_1(X_j),\notag \end{align} where we write ``$\hat{g}_1(X_i)$'' rather than ``$\hat{g_1(X_i)}$'' for cleanness. We stress that the evaluation of $\hat{g}_1(X_i)$ and $\hat{g}_2(X_i,X_j)$ does \emph{not} require knowing the latent $X_i,X_j$. 
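As a concrete illustration (a sketch only, not the code used for our numerical results), the following \texttt{numpy} snippet computes $\hat U_n$, $\hat g_1(X_i)$ and $\hat g_2(X_i,X_j)$ for the triangle motif ($r=s=3$) directly from the observed adjacency matrix; these plug-in quantities feed the coefficient estimators given next.
\begin{verbatim}
# Sketch: hat-U_n, hat-g_1 and hat-g_2 for the triangle motif,
# computed from A alone (0/1, symmetric, zero diagonal).
import numpy as np
from math import comb

def triangle_g_hats(A):
    n = A.shape[0]
    A2 = A @ A                                 # common-neighbor counts
    tri_pair = A * A2                          # triangles through each pair (i, j)
    U_hat = tri_pair.sum() / 6 / comb(n, 3)    # sample triangle frequency
    tri_node = np.diag(A @ A2) / 2             # triangles containing node i
    g1_hat = tri_node / comb(n - 1, 2) - U_hat
    g2_hat = (tri_pair / (n - 2) - U_hat
              - g1_hat[:, None] - g1_hat[None, :])
    np.fill_diagonal(g2_hat, 0.0)              # only pairs with i != j are used
    return U_hat, g1_hat, g2_hat
\end{verbatim}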
The Edgeworth coefficients can be estimated by \begin{align} \hat{\xi}_1^2 := \frac{n\cdot \hat{S}_n^2}{r^2} = \frac1n\sum_{i=1}^n\hat{g}_1^2(X_i),&\quad\textrm{ and }\quad \hat{\mathbb{E}}\left[ g_1^3(X_1) \right] := \frac1n\sum_{i=1}^n \hat{g}_1^3(X_i),\notag\\ \hat{\mathbb{E}}\left[ g_1(X_1)g_1(X_2)g_2(X_1,X_2) \right] &:= \frac1{\binom{n}2} \sum_{1\leq i<j\leq n}\hat{g}_1(X_i)\hat{g}_1(X_j)\hat{g}_2(X_i,X_j).\notag \end{align} \begin{thm}[Empirical network Edgeworth expansion] \label{thm::main-empirical} Define the empirical Edgeworth expansion as follows: \begin{align} \hat{G}_n(x) &:= \Phi(x) + \frac{\phi(x)}{\sqrt{n}\cdot \hat{\xi}_1^3} \cdot \Bigg\{ \frac{ 2 x^2 + 1}{6} \cdot \hat{\mathbb{E}}[g_1^3(X_1)]\notag\\ & + \frac{r-1}2\cdot \left( x^2 + 1 \right)\hat{\mathbb{E}}[g_1(X_1)g_1(X_2)g_2(X_1,X_2)]\Bigg\}, \label{Edgeworth::empirical} \end{align} Under the conditions of Theorem \ref{thm::main-theorem}, we have \begin{equation} \left\| F_{\hat{T}_n}(x) - \hat{G}_n(x) \right\|_\infty = {{\tilde{O}_p}}({\cal M}(\rho_n,n;R)).\notag \end{equation} \end{thm} \begin{remark} {Another approach to estimate the unknown coefficients in Edgeworth expansion is by bootstrap.} The concentration of $\hat{G}_n\to G_n$ should not be confused with the concentration $\hat{G}_n^*\to \hat{G}_n$, where $\hat{G}_n^*$ is the expansion with bootstrap-estimated coefficients. See literature regarding the i.i.d. setting \citep{helmers1991edgeworth,maesono1997edgeworth}. In $\hat{G}_n^*\to \hat{G}_n$, the convergence rate is not a concern, because, without constraining computation cost, one can let the number of bootstrap samples grow arbitrarily fast, so the proof of bootstrap concentration only requires consistency, but our proof regarding $\hat{G}_n\to G_n$ requires careful rate calculations. \end{remark} Next, we show that different choices of the variance estimators for studentization represent no essential discrepancy. \begin{thm}[Studentizing by a jackknife variance estimator \eqref{def::jackknife-variance-estimator}] \label{thm::jackknife} Define $$ \hat{T}_{n;\mathrm{jackknife}} := \frac{\hat{U}_n-\mu_n}{\hat{S}_{n;\mathrm{jackknife}}}. $$ Under the assumptions of Theorem \ref{thm::main-theorem}, we have \begin{align} | \hat{S}_n - \hat{S}_{n;\mathrm{jackknife}} | &= O(\hat S_n\cdot n^{-1}), \label{proofeqn::jackknife-temp}\\ \left\| F_{\hat{T}_{n;\mathrm{jackknife}}}(x) - G_n(x) \right\|_\infty &= {O\left({\cal M}(\rho_n,n; R)\right)},\notag\\ \left\| F_{\hat{T}_{n;\mathrm{jackknife}}}(x) - \hat{G}_n(x) \right\|_\infty &= {{\tilde{O}_p}}({\cal M}(\rho_n,n;R)).\notag \end{align} \end{thm} Theorem \ref{thm::jackknife} states that on statistical properties, one does not need to differentiate between $\hat{T}_n$ and $\hat{T}_{n;\mathrm{jackknife}}$. {The evaluation of $\hat S_{n;\textrm{jackknife}}$ costs $O(n^{r+1})$ time because each individual $\hat U_n^{(-i)}$ costs $O(n^r)$; whereas our estimator $\hat S_n$ costs $O(n^r)$. Our estimator also has a more convenient form for theoretical analysis.} \subsection{Remarks on non-smooth graphons and a comparison table of our results with literature} Our results do not assume graphon smoothness or low-rankness. This aligns with the literature on noiseless U-statistics but sharply contrasts network inferences based on model parameter estimation such as \cite{hoff2002latent,lei2016goodness} and network bootstraps based on model estimation \citep{green2017bootstrapping, levin2019bootstrapping}. 
Notice that the concept ``non-smoothness'' usually emphasizes ``not assuming smoothness'' rather than explicitly describing irregularity. It is a very useful tool for modeling networks with high structural complexity or unbalanced observations; examples include: (1) a small group of \emph{outlier} nodes that behave differently from the main network patterns \cite{cai2015robust}; (2) in networks exhibiting ``core-periphery'' structures \citep{della2013profiling, zhang2015identification}, we may wish to relax structural assumptions on periphery nodes due to scarcity of observations; and (3) networks generated from a mixture model \citep{newman2007mixture} with many small-probability mixing components may appear non-smooth in these parts. Unfortunately, existing research on practical methods for non-smooth graphons is rather limited due to the obvious technical difficulty, but exceptions include \citep{choi2017co}. Our results send the surprising message that under mild conditions, the sampling distribution of a network moment is still \emph{smooth} and can be \emph{accurately} approximated, even if the graphon is non-smooth. { \subsection{Sparse networks} \label{subsec::sparse-networks} We have been focusing on mildly sparse networks, but many networks tend to be sparse \citep{guedon2016community}. In this subsection, we investigate the following sparsity regime \begin{equation} \rho_n:\ \begin{cases} n^{-1}\prec \rho_n \preceq n^{-1/2}, &\textrm{ for acyclic }R\\ n^{-2/r}\prec \rho_n \preceq n^{-1/r}, &\textrm{ for cyclic }R\\ \end{cases} \label{rho_n-sparse-regime} \end{equation} It turns out that the Berry-Esseen bound would deteriorate and be worse than the conventional $n^{-1/2}$ in the i.i.d. and the noiseless U-statistic settings. The exact reason is technical and will be better seen in the proof of Theorem \ref{thm::sparse-main-theorem}, but the intuitive explanation is that if $\rho_n$ is too small, the higher-degree ($\geq 2$) random errors in $\hat U_n-U_n$, caused by the randomness in $A|W$, diminish too slowly compared to the scale of the denominator of $\hat T_n$. If the network sparsity $\rho_n$ falls below the typically assumed lower bounds: $n^{-1}$ for acyclic $R$ and $n^{-2/r}$ for cyclic $R$ \citep{bickel2011method, bhattacharyya2015subsampling, green2017bootstrapping}, then no known consistency guarantee exists. \begin{thm}\label{thm::sparse-main-theorem} Under the conditions of Lemma \ref{lemma::term-approx}, except replacing Condition \eqref{condition::rho_n} by \eqref{rho_n-sparse-regime}, we have the following modified Berry-Esseen bound \begin{align} \left\| F_{\hat T_n}(u) - G_n(u) \right\|_\infty &\asymp \left\| F_{\hat T_n}(u) - \Phi(u) \right\|_\infty= {O\left({\cal M}(\rho_n,n; R)\right)} \bigwedge o(1),\notag \end{align} where recall that $\Phi(\cdot)$ is the CDF of $N(0,1)$. Moreover, $$ \left\| F_{\hat T_n}(u) - \hat G_n(u) \right\|_\infty = \begin{cases} {{\tilde{O}_p}\left({\cal M}(\rho_n,n; R)\right)}, &\textrm{ if }\rho_n=\omega(n^{-1}\log^{1/2} n),\\ o_p(1), &\textrm{ if }\rho_n = \omega(n^{-1}) \end{cases} $$ \end{thm} In the sparse regime, we cannot control the uniform CDF approximation error bound below $n^{-1/2}$. Consequently, using the Edgeworth expansion would not bring asymptotic merit compared to a simple $N(0,1)$ approximation. On the other hand, the conclusion of Theorem \ref{thm::sparse-main-theorem} connects the error bound results for dense and sparse regimes.
Interestingly, as the order of $\rho_n$ decreases from $n^{-1/2}$ to $n^{-1}$ for acyclic $R$, or from $n^{-1/r}$ to $n^{-2/r}$ for cyclic $R$, we see a gradual depreciation in the uniform CDF approximation error from the order of $n^{-1/2}$ to merely uniform consistency. The classical literature only studied the boundary cases ($\rho_n=\omega(n^{-1})$ or $\rho_n=\omega(n^{-2/r})$, depending on $R$), and our result here reveals the complete picture. } We conclude this section by comparing our results to some representative works in classical and very recent literature. \begin{table}[ht!] \centering \vspace{-1em} \caption{Comparison of CDF approximation methods for noisy/noiseless studentized U-statistics} \label{table::comparison-of-results} \begin{adjustbox}{center} \begin{tabular}{|c|c|c|c|c|c|c|}\hline Method & \bigcell{c}{U-stat.\\type} & \bigcell{c}{Popul.\\momt.\footnotemark\label{footnote::knownpopmoments}} & \bigcell{c}{Smooth\\graphon} & \bigcell{c}{Non lat.\\/Cramer} & \bigcell{c}{Network sparsity\\assumption on $\rho_n$\footnotemark\label{footnote::sparsity-assumption}} & \bigcell{c}{CDF approx.\\error rate}\\\hline \multirow{3}{*}{\bigcell{c}{Our method\\(empirical Edgeworth)}} & \multirow{3}{*}{Noisy} & \multirow{3}{*}{No} & \multirow{3}{*}{No} & If yes & $\omega(n^{{-2/r}})$(C); $\omega(n^{{-1}})$(Ac)\footnotemark\label{footnote::motifshape} & \rule{0pt}{3ex}{${{\tilde{O}_p}\left({\cal M}(\rho_n,n; R)\right)} \wedge o_p(1)$} {\bf (H)}\footnotemark\label{footnote::higher-order-accuracy} \rule[-1.2ex]{0pt}{0pt}\\\cline{5-7} & & & & If no & \bigcell{c}{$\omega(n^{{-2/r}})$(C); $\omega(n^{{-1}})$(Ac)\\and $O\left((\log n)^{-1}\right)$(C, Ac)} & {${{\tilde{O}_p}\left({\cal M}(\rho_n,n; R)\right)} \wedge o_p(1)$} {\bf (H)}\\\hline \bigcell{c}{Node re-/sub- sampling\\justified by our theory} & Noisy & No & No & Yes & $\omega(n^{-1/r})$(C); $\omega(n^{-1/2})$(Ac) & $o_p(n^{-1/2})$ {\bf (H)}\\\hline \citet{bickel2011method} & Noisy & No\footnotemark\label{footnote::knowing-true-rho} & No & No & $\omega(n^{-2/r})$(C); $\omega(n^{-1})$(Ac) & Consistency \\\hline \citet{bhattacharyya2015subsampling} & Noisy & No & No & No & $\omega(n^{-2/r})$(C); $\omega(n^{-1})$(Ac) & Consistency\\\hline \citet{green2017bootstrapping} & Noisy & No & Mixed\footnotemark\label{footnote::greenshalizismoothness} & No & $R$ is Ac; or $\omega(n^{-1/(2r)})$(C)\footnotemark\label{footnote::greenshalizisparsity} & Consistency\\\hline \citet{levin2019bootstrapping} & Noisy & No & {Low-rank\footnotemark\label{footnote::levinlevinaassumptions}} & No & $\omega(n^{-1}\cdot \log n)$ (Ac*)\footnotemark\label{footnote::levinalevinaresults} & Consistency\\\hline Bickel, Gotze and Zwet \cite{bickel1986edgeworth} & Noiseless & Yes & No & Yes & Not applicable& ${o}(n^{-1})$ {\bf (H)}\\\hline Bentkus, Gotze and Zwet \cite{bentkus1997edgeworth} & Noiseless & Yes & No & Yes & Not applicable& ${O}(n^{-1})$ {\bf (H)}\\\hline Putter and Zwet \cite{putter1998empirical} & Noiseless & No & No & Yes & Not applicable & $o_p(n^{-1/2})$ {\bf (H)}\\\hline \citet{bloznelis2003edgeworth} & Noiseless & No & No & Yes & Not applicable& $o_p(n^{-1/2})$ {\bf (H)}\\\hline \end{tabular} \end{adjustbox} \vspace{-1em} \end{table} \footnotetext[\getrefnumber{footnote::knownpopmoments}]{``Yes'' means need to know the population moments that appear in Edgeworth coefficients, i.e. 
$\xi_1$, $\mathbb{E}[g_1^3(X_1)]$ and $\mathbb{E}[g_1(X_1)g_1(X_2)g_2(X_1,X_2)]$.} \footnotetext[\getrefnumber{footnote::sparsity-assumption}]{To compare $\rho_n$ assumptions, see our Remark \ref{remark::rho-n-lower-bound}} \footnotetext[\getrefnumber{footnote::motifshape}]{(C): {\bf c}yclic $R$; (Ac): {\bf ac}yclic $R$.} \footnotetext[\getrefnumber{footnote::higher-order-accuracy}]{Recall ${\cal M}(\rho_n,n; R)$ defined in \eqref{M} {and ${\tilde{O}_p}$ defined in Section \ref{subsec::tOp}}. {\bf (H)}: higher-order accuracy results. ``Consistency'': only convergence, no error rate.} \footnotetext[\getrefnumber{footnote::knowing-true-rho}]{In \cite{bickel2011method, bhattacharyya2015subsampling, lunde2019subsampling}, $\hat{U}_n-\mu_n$ was rescaled by $\rho_n$ and $n$. Whether assuming the knowledge of the true $\rho_n$ or not does not matter for their $o_p(1)$ error bound, but it would make a difference if an $o_p(n^{-1/2})$ or finer bound is desired. See our Remark \ref{remark::knowing-rho_n}.} \footnotetext[\getrefnumber{footnote::greenshalizismoothness}]{The bootstrap based on denoised $A$ requires smoothness. See Theorem 2 of \cite{green2017bootstrapping}.} \footnotetext[\getrefnumber{footnote::greenshalizisparsity}]{It seems their assumption for cyclic $R$ was a typo, and $\rho_n=\omega(n^{-2/r})$ should suffice. Also, they used \cite{bhattacharyya2015subsampling} in their proof, which requires $\rho_n = \omega(n^{-1})$ for (Ac).} \footnotetext[\getrefnumber{footnote::levinlevinaassumptions}]{\cite{levin2019bootstrapping} assumed the graphon rank is low and known.} \footnotetext[\getrefnumber{footnote::levinalevinaresults}]{(Ac*): They require the motif to be either acyclic or an $r$-cycle, see their Theorem 4. Their Theorem 3 requires condition (8) that only holds when $R$ is a clique.} \subsection{Overview} \emph{Network moments} are the frequencies of particular patterns, called \emph{motifs}, that repeatedly occur in networks \citep{milo2002network, alon2007network, rubinov2010complex}. {Examples include} triangles, stars and wheels. They provide {succinct and} informative sketches of potentially very high-dimensional network population distributions. Pioneered by \cite{bickel2011method, lovasz2009very}, the \emph{method of moments} for network data has become a powerful tool for frequentist nonparametric network inferences \citep{ambroise2012new, maugis2017statistical, wegner2018identifying, ali2016comparison, lunde2019subsampling, matsushita2020jackknife}. Compared to model-based network inference methods \citep{lei2016goodness, bickel2016hypothesis, wang2017likelihood, li2018two}, moment method enjoy{s} several unique advantages. First, {network moments play important roles in network modeling.} They are the building blocks of the well-known exponential random graph models (ERGM) \citep{hunter2008ergm, yan2016asymptotics}. {More generally, under an exchangeable network assumption,} the deep theory by \cite{bickel2011method} (Theorem 3) and \cite{borgs2010moments} (Theorem 2.1) show that knowing all population moments can uniquely determine the network model up to weak isomorphism, despite no explicit inversion formula is yet available. {From an inference perspective,} the evaluation of network moments is completely model-free, making them objective evidences for the specification{, validation} and comparison of network models \citep{broido2019scale, Seshadhri201911030, tsiotas2019detecting, ouadah2019degree}. 
Second, {network moments can be very efficiently computed, easily allowing parallel computing. This is a crucial advantage} in a big data era, where business and industry networks could contain $10^5\sim 10^7$ or even more nodes \citep{clauset2004finding, snapnets} and computation efficiency becomes a substantive practicality concern. Model-fitting based network inference methods might face challenges in handling huge networks, while moment method equipped with proper sampling techniques \citep{rohe2019critical, crane2018probabilistic} {will scale more comfortably (also see our comment in Section \ref{sec::discussion})}. Third, many network moments {and their derived functionals} are {important structural features of great practical} interest. Examples include clustering coefficient \citep{holland1971transitivity, watts1998collective}, degree distribution \citep{prvzulj2007biological, stephen2009explaining}, transitivity \citep{rohe2013blessing}, and {more listed in Table A.1 in \cite{rubinov2010complex}.} Despite the {importance and raising interest in} network moment method, the answer to the following core question remains under-explored: \begin{center} \smallskip \emph{What is the sampling distribution of a network moment?} \smallskip \end{center} For a given network motif ${\cal R}$, let $\hat{U}_n$ denote its sample relative frequency {(see \eqref{def::h(A)} for a formal definition)} with expectation $\mu_n:=\mathbb{E}[\hat{U}_n]$. Let $\hat{S}_n^2$ be an estimator of $\mathrm{Var}(\hat{U}_n)$ to be specified later. We are mainly interested in finding the distribution of the studentized form $\hat{T}_n := (\hat{U}_n - \mu_n)/\hat{S}_n$. It is well-known that under the widely-studied \emph{exchangeable network} model, $\hat{T}_n\stackrel{d}{\to} N(0,1)$ uniformly \citep{bickel2011method, bhattacharyya2015subsampling, green2017bootstrapping}, but usually, $N(0,1)$ only provides a {rough characterization of $F_{\hat T_n}$}, and one naturally yearns for a finer approximation. To this end, several network bootstrap methods have been recently proposed \citep{bickel2011method, bhattacharyya2015subsampling, green2017bootstrapping, levin2019bootstrapping, lunde2019subsampling} in an attempt to address this question, and they quickly inspired many follow-up works \cite{thompson2016using, tang2017semiparametric, gel2017bootstrap, chen2019bootstrap}, which clearly reflects the need of {an accurate approximation by data analysts}. However, compared to their empirical effectiveness, the theoretical {foundation} of network bootstraps remains weak. Almost all existing justifications of network bootstraps critically depend on the following type of results { \begin{align*} |\hat{U}_n^*-\hat{U}_n|=o_p(n^{-1/2}), &\quad \textrm{ and }\quad |\hat{U}_n-U_n|=o_p(n^{-1/2});\\ \textrm{ or similarly, }\left|\hat{T}_n^* - \hat{T}_n\right| = o_p(1), &\quad \textrm{ and }\quad |\hat{T}_n-T_n|=o_p(1); \end{align*}} where $\hat{U}_n^*$ or $\hat{T}_n^*$ are bootstrapped statistics. {Then the validity of network bootstraps is implied by the {well-known} asymptotic normality of $ U_n$ or $T_n$} \citep{bhattacharyya2015subsampling, green2017bootstrapping, lunde2019subsampling}. However, this approach cannot show whether network bootstraps have any accuracy advantage over a simple normal approximation, especially considering the much higher computational costs of bootstraps. In this paper, we propose the first provable \emph{higher-order} approximation to the sampling distribution of a given studentized network moment. 
{To our knowledge, we are the first to realize that in fact the noisy $\hat U_n$ and $\hat T_n$ are usually more analytically tractable than the noiseless versions $U_n$ and $T_n$. This sharply contrasts the existing literature on network bootstraps that attempt to prove the asymptotic properties of $\hat U_n$ by reducing it to $U_n$.} We briefly summarize our main results into an informal theorem {here. It turns out that the error bound depends on the shape of the motif $R$.} { \begin{defn}[Acyclic and cyclic motifs, see also \cite{bickel2011method,bhattacharyya2015subsampling, green2017bootstrapping, levin2019bootstrapping}] \label{definition::acyclic-cyclic-R} A motif $R$ is called \emph{acyclic}, if its edge set is a subset of an $r$-tree. The motif is called \emph{cyclic}, if it is \emph{connected} and contains at least one cycle. In other words, a \emph{cyclic} motif is connected but not a tree. \end{defn} \begin{defn} \label{definition::M} To simplify the statements of our method's error bound under different motif shapes, especially in Table \ref{table::comparison-of-results} and proof steps, define the following shorthand \begin{equation} {\cal M}(\rho_n,n; R):= \begin{cases} \left(\rho_n\cdot n \right)^{-1}{\cdot \log^{1/2} n+n^{-1}\cdot \log^{3/2}n}, & \textrm{ For acyclic }R\\ \rho_n^{-r/2}\cdot n^{-1}{\cdot \log^{1/2} n+n^{-1}\cdot \log^{3/2}n}, & \textrm{For cyclic }R \end{cases} \label{M} \end{equation} \end{defn} } Now we are ready to present the informal statement of our main results. \begin{thm}[Informal statement of main results] \label{theorem::introduction-main} Assume the network is generated from an exchangeable network model. Define the Edgeworth expansion for a given network moment ${\cal R}$ with $r$ nodes $s$ edges as follows: \begin{align*} G_n(x) &:= \Phi(x) + \frac{\phi(x)}{\sqrt{n}\cdot \xi_1^3} \cdot \Bigg\{ \frac{ 2 x^2 + 1}{6} \cdot\mathbb{E}[g_1^3(X_1)]\notag\\ & + \frac{r-1}2\cdot \left( x^2 + 1 \right)\mathbb{E}[g_1(X_1)g_1(X_2)g_2(X_1,X_2)]\Bigg\}, \end{align*} where $\Phi, \phi$ are the CDF and PDF of $N(0,1)$, and the {estimable coefficients components $\xi_1$, $\mathbb{E}[g_1^3(X_1)]$ and $\mathbb{E}[g_1(X_1)g_1(X_2)g_2(X_1,X_2)]$,} to be defined in Section \ref{section::moments-and-Edgeworth-expansion}, {only depend} on the graphon $f$ and the motif $R$. Let $\rho_n$ denote the network sparsity parameter. {In the dense regime, where we assume:} \begin{enumerate} \item $\rho_n^{-2s}\cdot \mathrm{Var}(g_1(X_1))\geq\textrm{constant}>0$; \item $\rho_n = \omega(n^{-1/2})$ for acyclic $R$, or $\rho_n = \omega(n^{-1/r})$ for cyclic $R$; \item Either $\rho_n\preceq (\log n)^{-1}$, or $\limsup_{t\to\infty}\left|\mathbb{E}\left[e^{{\mathbbm{i}} t g_1(X_1)/\xi_1}\right]\right|<1$; \end{enumerate} we have \begin{equation} \left\| F_{\hat{T}_n}(u) - G_n(u) \right\|_\infty = {O\left({\cal M}(\rho_n,n; R)\right)}, \label{thmeqn::introduction-main} \end{equation} where $\|H(u)\|_\infty := \sup_{u\in\mathbb{R}}\left| H(u) \right|$, and ${\cal M}(\rho_n,n;R)$, defined in \eqref{M}, satisfies ${\cal M}(\rho_n,n;R)\ll n^{-1/2}$. Under the same conditions, the empirical Edgeworth expansion $\hat{G}_n$ with estimated coefficients (see \eqref{Edgeworth::empirical}) satisfies { \begin{equation} \mathbb{P}\left(\left\| F_{\hat{T}_n}(u) - \hat{G}_n(u) \right\|_\infty > C\cdot {\cal M}(\rho_n,n;R)\right) = O(n^{-1}). \label{thmeqn::introduction-main-empirical} \end{equation} for a large enough absolute constant $C$. 
In the sparse regime, where we replace condition 2 by: \begin{enumerate} \item[2'.] $n^{-1}\prec \rho_n\preceq n^{-1/2}$ for acyclic $R$, or $n^{-2/r}\prec \rho_n\preceq n^{-1/r}$ for cyclic $R$, \end{enumerate} a simple $N(0,1)$ approximation achieves the following Berry-Esseen bound: \begin{align} \label{thmeqn::introduction-main-sparse} \left\| F_{\hat T_n}(u) - G_n(u) \right\|_\infty & \asymp \left\| F_{\hat T_n}(u) - \Phi(u) \right\|_\infty = {O\left({\cal M}(\rho_n,n; R)\right)} \bigwedge o(1).\notag \end{align} Moreover, we have $$ \left\| F_{\hat T_n}(u) - \hat G_n(u) \right\|_\infty = \begin{cases} {{\tilde{O}_p}\left({\cal M}(\rho_n,n; R)\right)}, &\textrm{ if }\rho_n=\omega(n^{-1}\log^{1/2} n),\\ o_p(1), &\textrm{ if }\rho_n = \omega(n^{-1}) \end{cases} $$ That is, in the sparse regime, the empirical Edgeworth expansion has the same order of approximation error as $N(0,1)$.} \end{thm} \subsection{Our contributions} Our contributions are three-fold. First, we establish the first accurate distribution approximations for network moments \eqref{thmeqn::introduction-main}. {The results} originated from our {discovery of the surprisingly blessing} roles that network noise and sparsity jointly play in this setting. {Our work reveals a new dimension to the understanding of these two components in network analysis.} Second, we propose a provably highly accurate and computationally efficient empirical Edgeworth approximation \eqref{thmeqn::introduction-main-empirical} for practical use. Third, our results {enable} accurate and fast nonparametric network inference procedures. To understand the strength of our main results \eqref{thmeqn::introduction-main} and \eqref{thmeqn::introduction-main-empirical}, notice that for mildly sparse networks, we achieve \emph{higher-order accuracy} in distribution approximation \emph{without non-lattice or smoothness assumption}. To our best knowledge, the non-lattice assumption is universally required to achieve higher-order accuracy in all literature {for similar settings}. However, this assumption is violated by some popular network models such as stochastic block model, arguably {one of the most widely-used} network models. Waiving the graphon smoothness assumption makes our approach a powerful tool for model-free exploratory network analysis and for analyzing networks with {high complexity and} irregularities. { In the sparse regime, our modified Berry-Esseen bound \eqref{thmeqn::introduction-main-sparse} significantly improves over the previous best known bound $o(1)$ in existing literature \citep{bickel2011method, bhattacharyya2015subsampling, green2017bootstrapping, levin2019bootstrapping} and fills a large missing part in the big picture. As the network sparsity $\rho_n$ declines from $n^{-1/2}$ towards $n^{-1}$ for acyclic $R$ and from $n^{-1/r}$ towards $n^{-2/r}$ for cyclic $R$, our result reveals a gradually depreciating uniform error bound. When $\rho_n$ hits the minimum assumption boundary, our result matches the uniform consistency result in classical literature. } Our key insight is {to view} the sample network moment $\hat{U}_n$ as a \emph{noisy U-statistic}, where ``noise'' refers to edge-wise observational errors in $A$. Our analysis reveals the connection and differences between the noisy and the conventional \emph{noiseless} U-statistic settings. 
We discover the surprisingly blessing roles that the two typically-hated factors, namely, \emph{edge-wise observational errors} and \emph{network sparsity} jointly play in this setting: \begin{enumerate} \item The edge-wise errors behave like a smoother that tames potential distribution discontinuity due to a lattice or discrete network population\footnote{More precisely speaking, such irregularity is jointly induced by both the network population distribution and the shape of the motif, but the former is usually the determining factor.}; \item Network sparsity boosts the smoothing effect of the error term to a sufficient level such that $F_{\hat{T}_n}$ becomes analytically tractable. \end{enumerate} {At first sight, the smoothing effect of edge-wise errors is rather counter-intuitive, since generating $A$ from $W$ \emph{discretizes} edge probabilities from numbers originally in a continuum $[0,1]$ to binary entries. How could this eventually yield a smoothing effect? In Section \ref{subsec::U-hat}, we present two simple examples to illustrate the intuitive reason.} In our proofs, we present original analysis to carefully quantify the impact of such smoothing effect. Our analysis techniques are very different from those in network bootstrap papers \cite{bhattacharyya2015subsampling, green2017bootstrapping, levin2019bootstrapping, lunde2019subsampling}. It seems unlikely that our assumptions can be substantially relaxed since they match the well-known minimum conditions in related settings in \cite{lahiri1993bootstrapping}. Our empirical Edgeworth expansion \eqref{thmeqn::introduction-main-empirical} is very fast, much more scalable than network bootstraps, and easily permits parallel computing. We showcase three applications of our theory. We present the first proof of the higher-order accuracy of some mainstream network bootstrap techniques under certain conditions, which their original proposing papers did not prove. Our results also enable rich future works on accurate and {computationally} very efficient network inferences. We present two immediate applications to testing and Cornish-Fisher type confidence interval for network moments with explicit accuracy guarantees. \subsection{Paper organization} The rest of this paper is organized as follows. In Section \ref{sec::problem-set-up-and-literature-review}, we formally set up the problem and provide a detailed literature review. In Section \ref{section::moments-and-Edgeworth-expansion}, we present our core ideas, derive the Edgeworth expansions and establish their uniform approximation error bounds. We discuss different versions of the studentization form. {We also present our modified Berry-Esseen theorem for the sparse regime.} In Section \ref{sec::applications}, we present three applications of our results: bootstrap accuracy, one-sample test, and one-sample Cornish-Fisher confidence interval. In Section \ref{sec::simulations}, {we conduct three simulations to evaluate the performance of our method from various aspects}. Section \ref{sec::discussion} discusses interesting implications of our results and future work. { \subsection{Big-O and small-o notation system} \label{subsec::tOp} In this paper, we will make frequent references to the big-O and small-o notation system. We use the same definitions of $O(\cdot)$, $o(\cdot)$, $\Omega(\cdot)$ and $\omega(\cdot)$ as that in standard mathematical analysis, and the same $O_p(\cdot)$ and $o_p(\cdot)$ as that in probability theory. 
For a random variable $Z$ and a deterministic sequence $\{\alpha_n\}$, define ${\tilde{O}_p}(\cdot)$ as follows \begin{align} Z := {\tilde{O}_p}(\alpha_n), & \textrm{ if } \mathbb{P}(|Z|\geq C\alpha_n) = O(n^{-1}) \textrm{ for some constant $C>0$.} \end{align} This is similar to ``o$_p$'' in \cite{maesono1997edgeworth} (see the remark beneath Lemma 2) and Assumption (A1) in \cite{lai1993edgeworth}. For technical reasons, in this paper, we do not need to define a corresponding $\tilde{o}_p(\cdot)$ sign. } \subsection{Exchangeable networks and graphon model} \label{subsec::problem-setup-graphon-model} The base model of this paper is the exchangeable network model \citep{diaconis2007graph, bickel2009nonparametric}. Exchangeability describes the unlabeled nature of many networks in social, knowledge and biological contexts, where node indices do not carry meaningful information. It is a very rich family that contains many popular models as special cases, including the stochastic block model and its variants \footnote{{Here we follow the convention of \cite{bickel2009nonparametric} and view community memberships as randomly sampled from a multinomial distribution.}} \citep{holland1983stochastic, zhao2012consistency, zhang2016minimax, airoldi2008mixed, karrer2011stochastic, zhang2014detecting, jing2020community}, the configuration model \citep{chung2002connected, newman2009random}, latent space models \citep{hoff2002latent, grover2016node2vec} and general smooth graphon models \citep{choi2012stochastic, gao2015rate, zhang2017estimating}\footnote{{Smooth graphon: we can simply think that a graphon is called ``smooth'' if $f(\cdot,\cdot)$ is a smooth function. In the rigorous definition, $f$ is smooth if $f(\psi(\cdot),\psi(\cdot))$ is smooth under some measure-preserving map $\psi:[0,1]\to[0,1]$, see \cite{bickel2009nonparametric, gao2015rate, zhang2017estimating}.}}. {In this paper, we base our study on the following exchangeable network model called \emph{graphon model}. The framework is closely related to the Aldous-Hoover representation for infinite matrices \citep{aldous1981representations, hoover1979relations}. Under a graphon model,} the $n$ nodes correspond to latent space positions $X_1,\ldots,X_n\stackrel{\textrm{i.i.d.}}{\sim}$~Uniform$[0,1]$. Network generation is governed by a measurable latent graphon function $f(\cdot,\cdot): [0,1]^2\to [0,1]$, $f(x,y)=f(y,x)$ that encodes all structures. The edge probability between nodes $(i,j)$ is \begin{equation} W_{ij} = W_{ji} := \rho_n\cdot f(X_i,X_j); \quad 1\leq i<j\leq n, \label{graphon-model::W} \end{equation} where the sparsity parameter $\rho_n\in(0,1)$ absorbs the constant factor, and we fix {$\int_{[0,1]^2}f(u,v){\mathrm{d}} u{\mathrm{d}} v=$~constant}. We only observe the adjacency matrix $A$: \begin{equation} A_{ij}=A_{ji}|W \sim \textrm{Bernoulli}(W_{ij}), \forall 1\leq i<j\leq n. \label{graphon-model::A|W} \end{equation} The model defined by \eqref{graphon-model::W} and \eqref{graphon-model::A|W} has a well-known issue that both $f$ and $\{X_1,\ldots,X_n\}$ are only identifiable up to equivalence classes \citep{chan2014consistent}. This may pose significant challenges for model-based network inference methods, {especially those based on parameter estimation}. On the other hand, network moments are permutation-invariant and thus clearly immune to this identification issue{, making them attractive candidates as inference objectives}.
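To make the sampling mechanism \eqref{graphon-model::W} and \eqref{graphon-model::A|W} concrete, here is a minimal Python sketch (for illustration only; the graphon \texttt{f\_block} below is a hypothetical two-block example):
\begin{verbatim}
# Sketch: draw (X, W, A) from a graphon model with sparsity rho_n.
import numpy as np

def sample_graphon_network(n, rho_n, f, seed=None):
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=n)                    # latent positions X_1, ..., X_n
    W = rho_n * f(X[:, None], X[None, :])      # W_ij = rho_n * f(X_i, X_j)
    np.fill_diagonal(W, 0.0)                   # no self-loops
    upper = np.triu(rng.uniform(size=(n, n)) < W, k=1)
    A = (upper | upper.T).astype(int)          # A_ij | W ~ Bernoulli(W_ij)
    return X, W, A

# Hypothetical two-block graphon: probability 0.6 within block 1, 0.2 otherwise.
f_block = lambda u, v: np.where((u < 0.5) & (v < 0.5), 0.6, 0.2)
X, W, A = sample_graphon_network(n=200, rho_n=1.0, f=f_block, seed=0)
\end{verbatim}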
\subsection{Network moment statistics} \label{subsec::problem-setup-network-moments} To formalize network moments, it is more convenient to first define the sample version and then the population version. Each network moment is indexed by the corresponding motif ${\cal R}$. For simplicity, we follow the convention to focus on connected motifs. Let $R$ represent the adjacency matrix of ${\cal R}$ with $r$ nodes and $s$ edges. For any $r$-node sub-network $A_{i_1,\ldots,i_r}$ of $A$, define \begin{equation} h(A_{i_1,\ldots,i_r}) := \mathbbm{1}_{[A_{i_1,\ldots,i_r}{\sqsupseteq} R]}\footnote{Since we consider an arbitrary but fixed $R$ throughout this paper, without causing confusion, we drop the dependency on $R$ in symbols such as $h$ to simplify notation.},\quad \textrm{ for all }1\leq i_1<\cdots<i_r\leq n, \label{def::h(A)} \end{equation} Here, ``$A_{i_1,\ldots,i_r}{{\sqsupseteq}} R$'' means there exists a permutation map $\pi:\{1,\ldots,r\}\to\{1,\ldots,r\}$, such that $A_{i_1,\ldots,i_r} {\geq} R_\pi$, {where the ``$\geq$'' is entry-wise} and $R_\pi$ is defined as $(R_\pi)_{ij}:= R_{\pi(i)\pi(j)}$. { Our definition of $h(A_{i_1,\ldots,i_r})$ here is similar to the ``$Q(R)$'' defined in \cite{bickel2011method}. One can similarly define \begin{equation} \tilde h(A_{i_1,\ldots,i_r}) := \mathbbm{1}_{[A_{i_1,\ldots,i_r}\cong R]},\quad \textrm{ for all }1\leq i_1<\cdots<i_r\leq n, \label{def::htilde(A)} \end{equation} where ``$A_{i_1,\ldots,i_r}\cong R$'' means there exists a permutation map $\pi:\{1,\ldots,r\}\to\{1,\ldots,r\}$, such that $A_{i_1,\ldots,i_r} = R_\pi$. The definition of $\tilde h$ corresponds to the ``$P(R)$'' studied in \cite{bickel2011method, bhattacharyya2015subsampling}, and \cite{green2017bootstrapping}. As noted by \cite{bickel2011method}, each $h$ can be explicitly expressed as a linear combination of $\tilde h$ terms, and vice versa. Therefore, they are usually treated with conceptual equivalence in literature, and most existing papers would choose one of them to study.} { For technical cleanness, in this paper we focus on $h$. We believe our method is also applicable to analyzing $\tilde h$, but the analysis is much more complicated and we leave it to future work. } Define the \emph{sample network moment} as \begin{equation} \hat{U}_n := \dfrac1{\binom{n}r}\sum_{1\leq i_1<\cdots<i_r\leq n} h(A_{i_1,\ldots,i_r}), \label{def::sample-moments} \end{equation} Then we define the \emph{sample-population version} and \emph{population version} of $\hat{U}_n$ to be $U_n := \mathbb{E}[ \hat{U}_n|W ]$ and $\mu_n := \mathbb{E}[U_n] = \mathbb{E}[\hat{U}_n ]$, respectively. We refer to $\hat{U}_n$ as the \emph{noisy} U-statistic, and call $U_n := \binom{n}r^{-1}\sum_{1\leq i_1<\cdots<i_r\leq n} h(W_{i_1,\ldots,i_r}) = \binom{n}r^{-1}\sum_{1\leq i_1<\cdots<i_r\leq n} h(X_{i_1},\ldots,X_{i_r})$\footnotemark\label{footnote::def::h} the (conventional) \emph{noiseless} U-statistic. Similar to the {insight that studentization is key to achieve higher-order accurate approximations} in the i.i.d. setting (Section 3.5 of \cite{wasserman2006all}), we study \begin{equation} \hat{T}_n := \frac{\hat{U}_n - \mu_n}{\hat{S}_n},\notag \end{equation} where $\hat{S}_n$ will be specified later {in \eqref{def::jackknife-variance-estimator} and \eqref{def::our-variance-estimator}}. 
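For readers who prefer code to notation, a brute-force sketch of $\hat U_n$ in \eqref{def::sample-moments}, with $h$ as in \eqref{def::h(A)}, is given below; it is for illustration only, since it enumerates all $\binom{n}{r}$ node subsets and all $r!$ permutations and therefore only suits small motifs and small networks.
\begin{verbatim}
# Sketch: brute-force evaluation of hat-U_n for a given motif R.
import numpy as np
from itertools import combinations, permutations
from math import comb

def h_contains(A_sub, R):
    # h(A_sub) = 1 iff A_sub contains R up to a node permutation.
    r = R.shape[0]
    return any(np.all(A_sub[np.ix_(p, p)] >= R)
               for p in permutations(range(r)))

def U_hat(A, R):
    n, r = A.shape[0], R.shape[0]
    total = sum(h_contains(A[np.ix_(idx, idx)], R)
                for idx in combinations(range(n), r))
    return total / comb(n, r)

R_triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])   # example motif
\end{verbatim}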
We can similarly {standardize} or studentize the noiseless U-statistic $U_n$ by $\check{T}_n:=(U_n-\mu_n)/\sigma_n$ and $T_n:=(U_n-\mu_n)/S_n$, respectively, where $\sigma_n^2:=\mathrm{Var}(U_n)$ and $S_n^2$ is a {$\sqrt{n}$-consistent estimator\footnotemark\label{footnote::def::sqrt-n-consistency} for $\sigma_n^2$, which will be specified later}. \footnotetext[\getrefnumber{footnote::def::h}]{{Here, without causing confusion, we slightly abused the notation of $h(\cdot)$, letting it take either $W$ or $X$ as its argument, noticing that $W$ is determined by $X_1,\ldots,X_n$.}} \footnotetext[\getrefnumber{footnote::def::sqrt-n-consistency}]{{$\sqrt{n}$-consistency of $S_n^2$ means that $\sqrt{n}(S_n^2-\sigma_n^2)=o_p(1)$, see \cite{bhattacharyya2015subsampling, levin2019bootstrapping} for definition.}} \subsection{Edgeworth expansions for i.i.d. data and noiseless U-statistics} Edgeworth expansion \citep{edgeworth1905law, wallace1958asymptotic} refines the central limit theorem. It is the supporting pillar in the justification of the bootstrap's higher-order accuracy. In this subsection, we review the literature on Edgeworth expansions for i.i.d. data and {conventional noiseless} U-statistics, due to their close connection. Under mild conditions, the one-term Edgeworth expansion for {the sample mean of $n$} i.i.d. mean-zero and unit-variance $X_1,\ldots,X_n$ {reads} $F_{n^{{1/2}}(\bar{X} - \mathbb{E}[X_1])/\sigma_X}(u) = \Phi(u) - n^{-1/2}\cdot \mathbb{E}[X_1^3](u^2-1)\phi(u)/6 + O(n^{-1})$, where $\Phi$ and $\phi$ are the CDF and PDF of $N(0,1)$, respectively. Edgeworth terms of even higher orders can be derived \citep{hall2013bootstrap} but are not meaningful in practice unless we know a few true population moments. The minimax rate for estimating $\mathbb{E}[X_1^3]$ is $O_p(n^{-1/2})$, so $O(n^{-1})$ is the best practical remainder bound for an Edgeworth expansion. For further references, see \cite{bickel1974edgeworth, sakov2000edgeworth, bhattacharya1978validity, hall1987edgeworth, hall1993edgeworth, babu1984one} and textbooks \cite{hall2013bootstrap, davison1997bootstrap, wasserman2006all}. The literature on Edgeworth expansions for U-statistics concentrates on the noiseless version. In the early 1980s, \cite{callaert1978berry, janssen1981rate, callaert1981order} established the asymptotic normality of {the standardized} and {the studentized U-statistics, respectively, both} with $O(n^{-1/2})$ Berry-Esseen type bounds. Then \cite{callaert1980edgeworth, bickel1986edgeworth,lai1993edgeworth} approximated degree-two (i.e. $r=2$) standardized U-statistics with an $o(n^{-1})$ remainder {with known population moments}, and \cite{bentkus1997edgeworth} established an $O(n^{-1})$ bound under relaxed conditions for more general symmetric statistics. Later, \cite{helmers1991edgeworth, putter1998empirical} studied empirical Edgeworth expansions (EEE) {with estimated coefficients} and established $o(n^{-1/2})$ bounds. For finite populations, \cite{babu1985edgeworth, bloznelis2001orthogonal, bloznelis2002edgeworth, bloznelis2003edgeworth} established the earliest results, and we will use some of their results in our analysis of network bootstraps. An incomplete list of other notable works {on Edgeworth expansions for noiseless U-statistics with various finite moment assumptions} includes \cite{bentkus1994lower, hall1995uniform, jing2003edgeworth, maesono1997edgeworth, bentkus2009normal, jing2010unified}. \subsection{The non-lattice condition and lattice Edgeworth expansions in the i.i.d.
setting} \label{subsec::Edgeworth-lattice-iid} A major assumption called the \emph{non-lattice condition} is critical for achieving $o(n^{-1/2})$ accuracy in Edgeworth expansions {and is needed by} all results in the i.i.d. setting {without} oracle moment knowledge and all results for noiseless U-statistics, but this condition is clearly not required {for} an $O(n^{-1/2})$ accuracy bound\footnote{Simply use a Berry-Esseen theorem.}. A random variable $X_1$ is called \emph{lattice} if it is supported on $\{a + bk: k\in\mathbb{Z}\}$ for some $a,b\in\mathbb{R}$ where $b\neq 0$. General discrete distributions are ``nearly lattice'' \footnote{``A discrete distribution is nearly-lattice'': a discrete distribution, if not already lattice, can be viewed as a lattice distribution with diminishing periodicity.}. A distribution is essentially \emph{non-lattice} if it contains a continuous component. In many works, the non-lattice condition is replaced by the stronger Cramer's condition \citep{cramer1928composition}: \begin{equation} \limsup_{t\to\infty}\left| \mathbb{E}\left[ e^{{\mathbbm{i}} t X_1} \right] \right| <1.\notag \end{equation} For U-statistics, this condition is imposed on $g_1(X_1):=\mathbb{E}[h(X_1,\ldots,X_r)|X_1] - \mu_n$. Cramer's condition can be relaxed \citep{angst2017weak, mattner2017optimal, song2016ordering, song2018uniform} towards a non-lattice condition, but all {existing} relaxations come at the price of essentially degraded error bounds \footnote{{To our knowledge,} existing {results} assuming only non-lattice{ness} achieve no better than $o(n^{-1/2})$ approximation errors. {For example,} \cite{bentkus1997edgeworth} replaces {the RHS} ``1'' in Cramer's condition by $1-q$ {and assumes} it holds for $t\preceq n^{1/2}$. {They} obtain an error bound proportional to $q^{-2}$. Another example is \cite{bloznelis2002edgeworth}. It replaces \cite{bentkus1997edgeworth}'s $t$ range by $t\preceq \pi$ {(their $\pi$ is a variable)} and obtains an error bound proportional to $q^{-2}\pi^{-2}$. Also see the comment {beneath} equation (4.7) of \cite{putter1998empirical}.}. Therefore, for simplicity, in Theorems \ref{thm::main-theorem} and \ref{thm::bootstrap-accuracy}, we use Cramer's condition to represent the non-lattice setting. However, in network analysis, Cramer's condition {may be a strong assumption, for the following reasons. First, it} is violated by the stochastic block model, {a very popular and important} network model. In a block model, $g_1(X_1)$ only depends on node $1$'s community membership, and is thus discrete. {Second, this} condition is difficult to check in practice. {Third, }some smooth models may even {induce a lattice $g_1(X_1)$ under certain} motifs {and a non-lattice $g_1(X_1)$ under a different motif}. For example, under the graphon model $f(x,y):= 0.3+0.1\cdot \mathbbm{1}_{[x>1/2; y>1/2]} + 0.1 \sin\left( 2\pi(x+y) \right)$, {$g_1(X_1)$ is lattice} when $R$ is an edge, but it is non-lattice when $R$ is a triangle. Next, we {review existing treatments of the Edgeworth expansion in the lattice case that sparked} the key inspiration for our work. {In the current literature, in the lattice case, we could approximate the CDF of an i.i.d.
sample mean at higher-order accuracy, where the lattice Edgeworth expansion would contain an order $n^{-1/2}$ jump function; whereas, to the best of our knowledge, no analogous result exists for U-statistics.} {Available} approaches can be categorized into two main streams: (1) adding an artificial error term to the sample mean to smooth out the lattice-induced discontinuity \citep{singh1981asymptotic, lahiri1993bootstrapping}; and (2) formulating the lattice version of the Edgeworth expansion with a jump function \citep{singh1981asymptotic}. The seminal work \cite{singh1981asymptotic} adds a uniform error of bandwidth $n^{-1/2}$, and by {inverting} its impact {on} the smoothed distribution function, it {explicitly} formulates the lattice Edgeworth expansion with an $O(n^{-1})$ remainder. Another classical work \cite{lahiri1993bootstrapping} uses a normal {artificial} error instead of a uniform one and shows that the Gaussian bandwidth must be $\omega((\log n/ n)^{1/2})$ and $o(1)$ to provide a sufficient smoothing effect without {causing} an $\omega(n^{-1/2})$ distribution distortion. Other notable works include \cite{woodroofe1988singh, kolassa1990edgeworth, babu1989edgeworth}, {in which, \cite{woodroofe1988singh} and \cite{kolassa1990edgeworth} also formulate lattice Edgeworth expansions in the i.i.d. univariate setting, and \cite{babu1989edgeworth} studies Edgeworth expansions for the sample mean of i.i.d. random vectors, where some dimensions are lattice and the others are non-lattice.} {Despite the significant achievements of these treatments, latticeness remains an obstacle in practice. The difficulties are two-fold.} {On the one hand, if we introduce an artificial error to smooth the distribution, it will unavoidably} bring an $\Omega(n^{-1/2})$ distortion to the original distribution\footnote{To see this, simply notice that the original distribution contains $n^{-1/2}$ jumps, but the smoothed distribution does not, {so an $o(n^{-1/2})$ approximation error is impossible} \citep{bickel1986edgeworth}.}. {On the other hand, the exact formulation of a lattice Edgeworth expansion contains} an $n^{-1/2}$ jump term. In many examples, such as the bootstrap, the jump locations depend on the true population variance, laying an uncrossable $\Omega(n^{-1/2})$ barrier for practical CDF approximation. {For more details, see page 91 of \cite{hall2013bootstrap}.} \subsection{{Simulation 1: Higher-order accuracy of empirical Edgeworth expansion}} \label{subsec::simulation-1} {In the first simulation, }our numerical studies focus on the CDF $F_{\hat{T}_n}$. In an illustrative example, we simulate with a lattice $g_1(X_1)$ and show the distinction between $F_{\hat{T}_n}$ and $F_{T_n}$, which clearly illustrates the self-smoothing effect of $\hat{T}_n$. Then we systematically compare the performance of our empirical Edgeworth expansion to benchmark methods, which demonstrates the clear advantage of our method in both accuracy and computational efficiency. We begin by describing the basic settings. We vary the network size $n$ over an exponentially spaced set $n\in\{10,20,40,80,160\}$. Synthetic network data are generated from three graphons marked by their code-names in our figures: (1). \texttt{"BlockModel":} This is an ordinary stochastic block model with $K=2$ equal-sized communities and the following edge probabilities $B = (0.6,0.2;0.2,0.2)$; (2). \texttt{"SmoothGraphon":} Graphon 4 in \cite{zhang2017estimating}, i.e. $f(u,v):=(u^2+v^2)/3\cdot \cos(1/(u^2+v^2))+0.15$. This graphon is smooth and full-rank \citep{zhang2017estimating}; (3).
\texttt{"NonSmoothGraphon"}\citep{choi2017co}: We set up a high-fluctuation area in a smooth $f$ to emulate the sampling behavior of a non-smooth graphon, as follows $$ f(u,v):=0.5 \cos\left\{ 0.1/((u-1/2)^2+(v-1/2)^2)^{-1} + 0.01 \right\}\max\{u,v\}^{2/3}+0.4. $$ We test the {four} simplest motifs: \emph{edge}, \emph{triangle}, \emph{V-shape}\footnote{A ``V-shape'' is the motif obtained by disconnecting one edge in a triangle. In the language of \cite{bickel2011method}, it is a 2-star.}{, and a \emph{three-star} among 4 nodes with edge set $\{(1,2),(1,3),(1,4)\}$}. The main computation bottleneck lies in the evaluation of $F_{\hat{T}_n}$. Network bootstraps also becomes costly as $n$ increases. The benchmarks are: 1. $N(0,1)$ (its computation time is deemed zero and not compared to others); 2. sub-sampling by \cite{bhattacharyya2015subsampling} with $n^*=n/2$; 3. re-sampling $A$ by \cite{green2017bootstrapping}; 4. latent space bootstrap called ``ASE plug-in'' defined in Theorem 2 of \cite{levin2019bootstrapping}. Notice that we equipped \cite{levin2019bootstrapping} with an adaptive network rank estimation\footnote{Consequently, our enhanced version of this benchmark can decently denoise some smooth but high-rank graphons, in view of the remarks in \cite{zhang2017estimating} and the results of \cite{xu2018rates}.} by USVT \citep{chatterjee2015matrix}. For each (graphon, motif, $n$) tuple, we first evaluate the true sampling distribution of $\hat{T}_n$ by a Monte-Carlo approximation that samples $n_{\mathrm{MC}}:= 10^6$ networks from the graphon. Next we start $30$ repeated experiments: in each iteration, we sample $A$ from the graphon and approximate $F_{\hat{T}_n}$ by all methods, in which we draw $n_{\mathrm{boot}}=2000$ bootstrap samples for each bootstrap method -- notice that this is 10 times that in \cite{levin2019bootstrapping}. We compare \begin{equation} \mathrm{Error}(\hat{F}_{\hat{T}_n}) := \sup_{u\in[-2,2]; 10u\in\mathbb{Z}}\left|\hat{F}_{\hat{T}_n}(u) - F_{\hat{T}_n}(u)\right|. \label{simu::loss-function} \end{equation} \begin{remark} We need many Monte-Carlo repetitions, because the uniform accuracy of the empirical CDF of an i.i.d. sample is only $O_p(n_{\mathrm{MC}}^{-1/2})$ \citep{dvoretzky1956asymptotic, kosorok2007introduction}, and for the noiseless and noisy U-statistic setting, the bound might be worse than the i.i.d. setting due to dependency\footnote{This is not to be confused with the Edgeworth approximation error bound. In this Monte Carlo procedure, both the true and approximate $F_{\hat{T}_n}$ are oracle.}. Therefore, we set $n_{\mathrm{MC}}\gg \max(n^2)= 160^2$ to prevent the errors of the compared methods being dominated by the error of the Monte-Carlo procedure; while keep our simulations reproducible with moderate computation cost. We did find smaller $n_{\mathrm{MC}}$ such as $10^5$ to cloud the performance of our method. \end{remark} \begin{figure}[h!] \centering \includegraphics[width=0.95\textwidth]{./illustrate.png} \caption{CDF curves of the studentization forms and approximations. Network size $n=80$. The graphon is the ``\texttt{BlockModel}'' we described earlier in this section, and the motif is triangular. Each bootstrap method draws 500 random samples. 
\texttt{TrueA} is $F_{\hat{T}_n}$; \texttt{TrueAJack} is $F_{\hat{T}_{n;\mathrm{jackknife}}}$; \texttt{TrueW} is $F_{T_n}$; \texttt{Edgeworth} is our empirical Edgeworth expansion; \texttt{Re-sample} is node re-sampling $A$ in \cite{green2017bootstrapping}; \texttt{Sub-sample} is node sub-sampling $A$ in \cite{bhattacharyya2015subsampling}; \texttt{Levin-Levina} is the ``ASE plug-in'' bootstrap in \cite{levin2019bootstrapping}.} \label{fig::illustration-1} \end{figure} Now we present the results, beginning with the illustrative simulation for just one specific setting. Figure \ref{fig::illustration-1} shows the distribution approximation curves under a block model graphon that yields a lattice $g_1(X_1)$. Lines correspond to the population CDF of $\hat{T}_n$, its jackknife version and noiseless version, all evaluated by Monte-Carlo procedures; our proposed empirical Edgeworth expansion; and benchmarks. We make two main observations. First, \texttt{TrueA} and \texttt{TrueAJack} are almost indistinguishable, echoing our Theorem \ref{thm::jackknife}; meanwhile, they are both smooth and rather different from the step-function \texttt{TrueW}. This clearly demonstrates the self-smoothing feature of $\hat{T}_n$ in the lattice case. If we changed the graphon to a smooth one, these curves would all be smooth and close to each other. Second, we observe the higher accuracy of our empirical Edgeworth expansion compared to competing methods. In fact, when we repeated this experiment multiple times, our method gave significantly more stable approximations than the bootstraps. Next, we conduct a systematic comparison of the performances of all methods across many settings. We mainly vary three factors: graphon type, motif type, and network size, over the previously described ranges. Due to the page limit, our experimental results under different network sparsity levels are deferred to the Supplemental Material; here we keep $\rho_n=1$. Results are shown in Figure \ref{fig::numerical-1} (error) and Figure \ref{fig::numerical-2} (time cost), where error bars show standard deviations. \begin{figure}[htbp!] \begin{adjustwidth}{-\oddsidemargin-1.5in}{-\rightmargin-2in} \centering \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Error_BlockModel_Edge_sparsity_10_0} \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Error_SmoothGraphon_Edge_sparsity_10_0} \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Error_NonSmoothGraphon_Edge_sparsity_10_0}\\ \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Error_BlockModel_Triangle_sparsity_10_0} \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Error_SmoothGraphon_Triangle_sparsity_10_0} \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Error_NonSmoothGraphon_Triangle_sparsity_10_0}\\ \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Error_BlockModel_Vshape_sparsity_10_0} \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Error_SmoothGraphon_Vshape_sparsity_10_0} \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Error_NonSmoothGraphon_Vshape_sparsity_10_0}\\ \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Error_BlockModel_ThreeStar_sparsity_10_0} \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Error_SmoothGraphon_ThreeStar_sparsity_10_0} \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Error_NonSmoothGraphon_ThreeStar_sparsity_10_0} \end{adjustwidth} \caption{{CDF approximation errors.
Both axes are log(e)-scaled.} {\bf Motifs:} row 1: {\tt Edge}; row 2: {\tt Triangle}; row 3: {\tt Vshape}{; row 4: {\tt ThreeStar}}. \tred{Red solid curve marked circle}: our method (empirical Edgeworth); black dashed curve marked down-triangle: $N(0,1)$ approximation; \textcolor{green}{green dashed curve marked up-triangle}: re-sampling of $A$ in \cite{green2017bootstrapping}; \tblue{blue dashed curve marked plus}: \cite{bhattacharyya2015subsampling} sub-sampling $\asymp n$ nodes; \textcolor{magenta}{magenta dashed line with square markers}: ASE plug-in bootstrap in \cite{levin2019bootstrapping}.} \label{fig::numerical-1} \end{figure} \begin{figure}[htbp!] \begin{adjustwidth}{-\oddsidemargin-1.5in}{-\rightmargin-2in} \centering \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Time_BlockModel_Edge_sparsity_10_0} \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Time_SmoothGraphon_Edge_sparsity_10_0} \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Time_NonSmoothGraphon_Edge_sparsity_10_0}\\ \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Time_BlockModel_Triangle_sparsity_10_0} \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Time_SmoothGraphon_Triangle_sparsity_10_0} \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Time_NonSmoothGraphon_Triangle_sparsity_10_0}\\ \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Time_BlockModel_Vshape_sparsity_10_0} \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Time_SmoothGraphon_Vshape_sparsity_10_0} \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Time_NonSmoothGraphon_Vshape_sparsity_10_0}\\ \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Time_BlockModel_ThreeStar_sparsity_10_0} \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Time_SmoothGraphon_ThreeStar_sparsity_10_0} \includegraphics[width=0.4\textwidth]{./Figures/New-numplot_Time_NonSmoothGraphon_ThreeStar_sparsity_10_0} \end{adjustwidth} \caption{{Time costs (in seconds) of all methods.} Both axes are log(e)-scaled. {\bf Motifs:} row 1: {\tt Edge}; row 2: {\tt Triangle}; row 3: {\tt Vshape}{; row 4: {\tt ThreeStar}}. \tred{Red solid curve marked circle}: our method (empirical Edgeworth); \textcolor{green}{green dashed curve marked up-triangle}: re-sampling of $A$ in \cite{green2017bootstrapping}; \tblue{blue dashed curve marked plus}: \cite{bhattacharyya2015subsampling} sub-sampling $\asymp n$ nodes; \textcolor{magenta}{magenta dashed line with square markers}: ASE plug-in bootstrap in \cite{levin2019bootstrapping}. We regard the $N(0,1)$ approximation as having zero time cost, so it does not appear in the time cost plot.} \label{fig::numerical-2} \end{figure} In all experiments, our empirical Edgeworth expansion approach exhibited clear advantages over benchmark methods in all aspects: the absolute values of errors, the diminishing rates of errors, and computational efficiency. Our method exhibits higher-order accuracy, with error-decay slopes steeper than $-1/2$ and much steeper than those of the other methods. In terms of computational efficiency, our method is the second cheapest after the simple $N(0,1)$ approximation (which requires no computation) and much faster than network bootstraps. It typically costs about $e^{-5}\approx 1/150$ the time of sub-sampling and about $e^{-7}\approx 1/1000$ the time of re-sampling. Our method only needs one run and does not require repeated sampling.
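The loss in \eqref{simu::loss-function} is simple to evaluate once Monte-Carlo draws of $\hat{T}_n$ from its true sampling distribution are available; the sketch below (Python with \texttt{numpy}; argument names and the optional use of \texttt{scipy} are illustrative) scores any approximating CDF on the same grid.
\begin{verbatim}
import numpy as np

def sup_cdf_error(mc_draws, approx_cdf):
    """sup over u in [-2,2] (10u integer) of |F_hat(u) - F_true(u)|,
    where F_true is the empirical CDF of the Monte-Carlo draws."""
    grid = np.arange(-20, 21) / 10.0
    f_true = np.array([np.mean(mc_draws <= u) for u in grid])
    f_hat = np.array([approx_cdf(u) for u in grid])
    return np.max(np.abs(f_hat - f_true))

# e.g. scoring the plain N(0,1) benchmark:
# from scipy.stats import norm
# err = sup_cdf_error(mc_draws_of_T_hat, norm.cdf)
\end{verbatim}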
Notice that there is no simple rule to judge the difficulty of different scenarios, which jointly depends on the graphon and the motif through an implicit and complex relationship. In our experience, the triangle may be more difficult than the V-shape under some graphons, but easier under some others, and this comparison may vary from method to method. Answering this question requires calculation of the population Edgeworth expansion up to an $o(n^{-1})$ remainder, and the leading term in the remainder of the one-term Edgeworth expansion would then quantify the real difficulty. But the calculation is very complicated and outside the scope of this paper. We did not observe the higher-order accuracy of the bootstrap methods that our theoretical results predict. One likely reason is that the numerical accuracy is limited by the largest $n_{\mathrm{boot}}$ our computing servers can afford. We did see an observable improvement in the performances of network bootstraps as we increased $n_{\mathrm{boot}}$ from the 200 suggested by \cite{levin2019bootstrapping} to the current 2000. But further increasing $n_{\mathrm{boot}}$ will also increase their time costs and potentially memory usage. We ran each experiment on 36 parallel Intel(R) Xeon(R) X5650 CPU cores at 2.67\,GHz with 12\,MB cache and 2\,GB RAM. It took roughly 3$\sim$8 hours to run each experiment that produces one individual plot in Figures \ref{fig::numerical-1} and \ref{fig::numerical-2}. { \subsection{Simulation 2: Finite-sample performance of Cornish-Fisher confidence interval} \label{subsec::simulation-2::CI} In this simulation, we numerically assess the performance of our Cornish-Fisher confidence interval, compared to benchmark methods. Throughout this subsection, we set $\alpha=0.2$ and focus on symmetric two-sided confidence intervals. We inherit most simulation settings from Section \ref{subsec::simulation-1} with some modifications we now clarify. The main difference is that in this simulation, we must conduct many repeated experiments in order to accurately evaluate the coverage probability (each iteration produces a binary outcome of whether the CI contains the population parameter). We repeated the experiment 10000 times for our method and the normal approximation, and 500 times for the much slower bootstrap methods. Due to computational limitations, while we can keep the same number of Monte Carlo evaluations, in order to repeat the entire experiment 500 times and accurately evaluate the actual CI coverage rates of the bootstraps, we have to reduce their number of bootstrap samples to 500 (still more than the 200 in \cite{levin2019bootstrapping}). We evaluate three performance measures: {\tt coverage}: \emph{actual coverage probability}; {\tt length}: \emph{confidence interval length}; and {\tt time}: \emph{computation time in seconds}. \input{Figures/Table_CI_n_80_sparsity_10_0_graphon_BlockModel.txt} \input{Figures/Table_CI_n_80_sparsity_10_0_graphon_SmoothGraphon.txt} \input{Figures/Table_CI_n_80_sparsity_10_0_graphon_NonSmoothGraphon.txt} Due to the page limit, here we present the results for the setting $n=80$ and $\rho_n=1$ in Tables \ref{table::n=80::sparse-dense::graphon-block model} (block model), \ref{table::n=80::sparse-dense::graphon-smooth graphon} (smooth graphon) and \ref{table::n=80::sparse-dense::graphon-non-smooth graphon} (non-smooth graphon). Each entry is formatted ``mean(standard deviation)''. We defer the remaining results to the Supplemental Materials.
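For readers implementing an interval of this type, the sketch below shows a generic one-term Cornish-Fisher construction (Python with \texttt{numpy}/\texttt{scipy}); it is only a schematic illustration with a user-supplied estimate \texttt{p1} of the order-$n^{-1/2}$ Edgeworth polynomial, and is not a substitute for the exact coefficient we use, which is defined in \eqref{formula::quantile-for-CI} and \eqref{formula::EEE-confidence-interval}.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def cornish_fisher_ci(u_hat, s_hat, n, p1, alpha=0.2):
    """Two-sided CI for mu_n based on the studentized statistic
    (u_hat - mu_n)/s_hat.  p1(z): estimated polynomial of the
    n^{-1/2} Edgeworth term (placeholder supplied by the user)."""
    z_lo, z_hi = norm.ppf(alpha / 2), norm.ppf(1 - alpha / 2)
    q_lo = z_lo - p1(z_lo) / np.sqrt(n)   # Cornish-Fisher corrected quantiles
    q_hi = z_hi - p1(z_hi) / np.sqrt(n)
    return (u_hat - s_hat * q_hi, u_hat - s_hat * q_lo)
\end{verbatim}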
Our method exhibits very accurate actual coverage probabilities, consistently close to the nominal confidence level, and is the only method that always achieves a coverage error of $\leq 0.010$ across all settings. It also produces competitively short confidence intervals, again reflecting the high accuracy of the method. The comparison of computational efficiency between different methods echoes the qualitative results in Figure \ref{fig::numerical-2}, despite slightly different settings, and confirms our method's huge speed advantage over all bootstrap methods. We remark that for the three-star motif, we used a double for-loop in evaluating $\hat{\mathbb{E}}[g_1(X_1)g_1(X_2)g_2(X_1,X_2)]$ for ease of derivation, at the price of speed. It is interesting to observe that under the setting of this simulation, our empirical Edgeworth expansion method always produces the same interval length as the normal approximation. This is not a coincidence in view of \eqref{formula::quantile-for-CI}, \eqref{formula::EEE-confidence-interval} and that $z_{\alpha/2}^2=z_{1-\alpha/2}^2$. In other words, our two-sided Edgeworth confidence interval is a bias-corrected version (by mean-shift) of the corresponding ordinary CLT confidence interval. } \subsection{Simulation 3: Numerical evaluation of the finite-sample impact of sparsity} \label{subsec::simulation-3::sparse-networks} In this part, we conduct numerical studies to evaluate the finite-sample performance of our method compared to benchmarks as the network grows sparser under fixed $n$. Although we tested different network sparsity settings in Simulation \ref{subsec::simulation-1} (see Supplemental Material), it is still interesting to more directly illustrate the impact of $\rho_n$ for each fixed network size. The simulation setup carries over the same set of graphon models, motif shapes, and compared methods from Simulation \ref{subsec::simulation-1}. Here, for simplicity, we only tested $n = 80, 160$ and varied $\rho_n$ in $\{1\textrm{ (``dense'')},n^{-1/4},n^{-1/2},n^{-1}\}$. \begin{figure}[htbp!] \begin{adjustwidth}{-\oddsidemargin-1.5in}{-\rightmargin-2in} \centering \includegraphics[width=0.4\textwidth]{./Figures/Sparse-New-numplot_Error_BlockModel_Edge_n_160} \includegraphics[width=0.4\textwidth]{./Figures/Sparse-New-numplot_Error_SmoothGraphon_Edge_n_160} \includegraphics[width=0.4\textwidth]{./Figures/Sparse-New-numplot_Error_NonSmoothGraphon_Edge_n_160}\\ \includegraphics[width=0.4\textwidth]{./Figures/Sparse-New-numplot_Error_BlockModel_Triangle_n_160} \includegraphics[width=0.4\textwidth]{./Figures/Sparse-New-numplot_Error_SmoothGraphon_Triangle_n_160} \includegraphics[width=0.4\textwidth]{./Figures/Sparse-New-numplot_Error_NonSmoothGraphon_Triangle_n_160}\\ \includegraphics[width=0.4\textwidth]{./Figures/Sparse-New-numplot_Error_BlockModel_Vshape_n_160} \includegraphics[width=0.4\textwidth]{./Figures/Sparse-New-numplot_Error_SmoothGraphon_Vshape_n_160} \includegraphics[width=0.4\textwidth]{./Figures/Sparse-New-numplot_Error_NonSmoothGraphon_Vshape_n_160}\\ \includegraphics[width=0.4\textwidth]{./Figures/Sparse-New-numplot_Error_BlockModel_ThreeStar_n_160} \includegraphics[width=0.4\textwidth]{./Figures/Sparse-New-numplot_Error_SmoothGraphon_ThreeStar_n_160} \includegraphics[width=0.4\textwidth]{./Figures/Sparse-New-numplot_Error_NonSmoothGraphon_ThreeStar_n_160} \end{adjustwidth} \caption{ {Impact of sparsity on approximation errors, $n=160$.} Both axes are log(e)-scaled.
{\bf Motifs:} row 1: {\tt Edge}; row 2: {\tt Triangle}; row 3: {\tt Vshape}{; row 4: {\tt ThreeStar}}. \tred{Red solid curve marked circle}: our method (empirical Edgeworth); black dashed curve marked down-triangle: $N(0,1)$ approximation; \textcolor{green}{green dashed curve marked up-triangle}: re-sampling of $A$ in \cite{green2017bootstrapping}; \tblue{blue dashed curve marked plus}: \cite{bhattacharyya2015subsampling} sub-sampling $\asymp n$ nodes; \textcolor{magenta}{magenta dashed line with square markers}: ASE plug-in bootstrap in \cite{levin2019bootstrapping}.} \label{fig::numerical-sparsity-main} \end{figure} Figure \ref{fig::numerical-sparsity-main} shows the CDF approximation errors under different $\rho_n$ settings for $n=160$. In line with our theory's prediction, we observe that as the network grows sparser, our method's performance degrades and gradually regresses to that of the normal approximation. Due to the page limit, we defer the approximation error plots for $n=80$ and the time cost plots for both $n$ settings to the Supplemental Materials.
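For completeness, the data generation used throughout Simulations 1--3 can be summarized in a few lines; the sketch below (Python with \texttt{numpy}; function and variable names are illustrative) samples one adjacency matrix from a $\rho_n$-sparsified graphon.
\begin{verbatim}
import numpy as np

def sample_graphon(n, f, rho_n=1.0, seed=None):
    """One network from graphon f with sparsity rho_n: latent X_i ~ U(0,1),
    independent edges A_ij ~ Bernoulli(rho_n * f(X_i, X_j))."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(size=n)
    p = rho_n * f(x[:, None], x[None, :])
    a = np.triu(rng.uniform(size=(n, n)) < p, 1).astype(int)
    return a + a.T

# e.g. the "SmoothGraphon" above at n = 160 with rho_n = n^{-1/2}:
f_smooth = lambda u, v: (u**2 + v**2) / 3 * np.cos(1.0 / (u**2 + v**2)) + 0.15
A = sample_graphon(160, f_smooth, rho_n=160 ** (-0.5), seed=1)
\end{verbatim}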
1,108,101,563,077
arxiv
\section{Introduction} Electromagnetically induced transparency (EIT) \cite{harris-first-EIT} is a quantum interference effect in which coherence between atomic states is used to reduce the absorption in a window around an atomic resonance, while simultaneously generating large dispersion and third order nonlinear susceptibilities within the induced transparency window \cite{fleischhauer-review, scully-text}. The simplest realization of an EIT system consists of a driven $\Lambda$-type atomic system wherein the excited state is coupled to one ground state---the auxiliary state---via a strong control beam; meanwhile the system is probed by a weak laser beam near resonance with the transition between the excited state and the as-yet uncoupled ground state. In these simple systems, the characteristic EIT interference effects arise because the control beam induces a splitting of the excited state into a doublet of ``dressed'' states, providing two interfering excitation pathways for the probe beam. The null in the probe absorption at the normal atomic resonance frequency is a result of destructive quantum interference between these two pathways. The resulting dispersion has a very large slope for frequencies within the transparency window around the absorption null, leading to extremely slow probe beam group velocities \cite{lukin+imamoglu, leonhardt-primer}. Slow light propagation of this sort has been observed in a variety of media, including hot atomic gases \cite{lukin+scully} and atomic Bose-Einstein condensates \cite{hau-first-observation}. Several groups have expanded on the basic EIT system to further explore the optical properties of coherently prepared atoms. In general, these modifications to standard EIT have involved coupling additional states to the three levels of standard EIT, via new lasers or RF/microwave fields. One class of these has led to EIT's sister phenomenon, electromagnetically induced absorption, which occurs when the primary ground state is strongly coupled to an additional, off-manifold state, forming an $N$-configuration \cite{yudin-eia, akulshin-eia}. Others involve levels on the same electronic manifold as the EIT ground states. One such modification considers a four level configuration, in which an additional ground state is coupled to the auxiliary state via an RF or optical field. The resulting doublet constructively interferes with the excited state to produce a new absorption feature in the EIT spectrum, with properties controllable by tuning the Rabi frequency of the additional field \cite{lukin-dark-resonances}. These so-called ``interacting dark resonances'' have been observed in several experiments \cite{yu, zhu, wei+manson2}, and they have been suggested for the coherent control of group velocity in the UV regime \cite{mahmoudi-dark-resonances}. Additional theoretical and experimental work has shown that the feature arising from the interaction between the dark states persists even in the presence of considerable Doppler broadening \cite{lukin+yelin-doppler, scully-doppler}. More recently, in an attempt to explain experimental results \cite{wei+manson2}, the absorption spectrum of a similar four level atom was considered, where now the additional state was coupled to the ground state that is probed by the probe laser \cite{wei-autler-townes}.
Other modifications have involved coupling the two ground states of the $\Lambda$ atom (or other 3-level atomic configurations, such as cascades) to each other, via an RF field \cite{wei+manson1, fleischhauer-full-analytic, korsunsky}. In these cases, a variety of new phenomena are possible, including efficient frequency conversion for the probe beam. Alternatively, rather than introduce additional hyperfine levels, recent work has explored the modifications to EIT resulting from coherent tunneling of $\Lambda$ atoms in a double-well Bose-Einstein condensate, forming an effective six-level configuration \cite{weatherall+search-BEC}. Here, space-like separated atoms can be used to modify the optical response of a sample in one well. In the current contribution, we consider the more general coherent interactions that occur when both of the $\Lambda$ atom ground states are coherently coupled to additional states. These interacting dressed states generate new features in the EIT absorption spectrum, whose widths and locations in frequency space can be controlled by independently modulating the Rabi frequencies of the two new coupling fields, providing markedly better control over the optical properties of an atomic sample than possible with previous implementations. Moreover, we show how, by controlling the intensity of one of the new coupling fields, one can generate group velocities almost 100 times slower than otherwise possible in a given EIT system with a fixed control laser intensity. Whereas other studies have considered replacing either one or the other ground state with a driven doublet, no general treatment has appeared in which both ground states are driven. Thus, the phenomenon of the interaction between the doublets---which is what permits the fine tuning of the resonance locations within the transparency window---has yet to be explored. The ability to tune the relative position of the narrow resonances is what allows for the enhanced control over the dispersion within the transparency window. The current paper includes an analytic statement of the linear susceptibility of the five level atom, from which both the absorption coefficient and dispersion can be simply derived. The remainder of this paper is organized as follows. In section \ref{the_model}, we describe our model for the five level atom, dressed by two RF/microwave fields and a strong control laser. We derive the master equation in a partially dressed basis and show that in the steady state, the coherence term in the master equation relevant to the optical properties of the probe beam permits an analytic solution. In section \ref{properties}, we derive the system's linear susceptibility, $\chi^{(1)}$, from which the optical properties of the sample can be extracted. Here we also analyze the dispersion and group velocity of the probe laser. Finally, in section IV, we comment on present experimental constraints. An appendix includes the full equations of motion for the relevant terms of the master equation. \section{The Model}\label{the_model} We consider a five level atom as depicted in Fig. \ref{Fig_Model}, where two pairs of ground states ($\{\ket{b},\ket{b'}\}$ and $\{\ket{c},\ket{c'}\}$) interact with a single excited state, $\ket{a}$. Direct transitions between these ground states are assumed to be electric dipole forbidden (i.e.,
$\ket{b}\not\leftrightarrow\ket{c'}$ or $\ket{b}\not\leftrightarrow\ket{c}$, etc.); moreover, to avoid possible degeneracies, we assume that any degeneracy of the states is lifted by an external magnetic field. In analogy to a standard EIT configuration, a strong laser of field amplitude $\mathcal{E}_{\mu}$ and of fixed frequency $\omega_{\mu}$ propagates near the $\ket{c}\leftrightarrow\ket{a}$ transition. Here, we study the propagation of a weak probe laser ($\mathcal{E}_p\ll\mathcal{E}_{\mu}$) of variable frequency $\omega_p$ near the $\ket{b}\leftrightarrow\ket{a}$ transition. The couplings between the atomic levels are characterized by their complex Rabi frequencies, which are defined in terms of the lasers driving the transitions. The $\ket{c}\leftrightarrow\ket{a}$ Rabi frequency, $\Omega_{\mu}$, is given by $\hbar\Omega_{\mu}e^{-i\phi_{\mu}}=\mathcal{E}_{\mu}D_{ac}$, while the $\ket{b}\leftrightarrow\ket{a}$ Rabi frequency is similarly given by $\hbar\Omega_{p}e^{-i\phi_{p}}=\mathcal{E}_{p}D_{ab}$. $D_{ij}=e\bra{i}\mathbf{x}\cdot\epsilon\ket{j}$ is the dipole moment matrix element in the direction of the laser polarization, $\epsilon$, for the $\ket{j}\leftrightarrow\ket{i}$ transition. Additionally, two RF/microwave fields drive magnetic dipole transitions between members of each hyperfine ground state pair: one, of frequency $\nu_{b}$ and Rabi frequency $\Omega_b$, drives the $\ket{b}\leftrightarrow\ket{b'}$ transition; another, of frequency $\nu_{c}$ and Rabi frequency $\Omega_c$, drives the $\ket{c}\leftrightarrow\ket{c'}$ transition. From here on we will refer to the new fields as RF fields for the sake of simplicity. $\Omega_{\mu}$, $\Omega_{b}$, $\Omega_{c}$, and $\Omega_p$ are all taken to be real. \begin{figure} \includegraphics[width=.8\columnwidth,angle=270,trim= 0in 1in 2in 1in,clip]{Fig1} \caption{\label{Fig_Model}Our 5-level model. $\ket{b}$, $\ket{b'}$, $\ket{c}$, and $\ket{c'}$ are all in the same ground state manifold; $\ket{a}$ is an excited state. In analogy to standard EIT, $\ket{a}$ and $\ket{c}$ are coupled by a strong control beam, $\Omega_{\mu}$. The pairs $\ket{b}$, $\ket{b'}$ and $\ket{c}$, $\ket{c'}$ are coupled by RF/microwave fields, $\Omega_b$ and $\Omega_c$, respectively. A weak probe beam, $\Omega_p$, propagates near the $\ket{b}\leftrightarrow\ket{a}$ transition. } \end{figure} We take the base vectors for the five atomic levels $\ket{a}$, $\ket{b}$, $\ket{b'}$, $\ket{c}$, and $\ket{c'}$ to form a basis for the relevant subspace of our overall Hilbert space. Then, in a frame rotating at the frequencies of the fields, defined to avoid explicit time-dependence, we can write the state vector as \begin{equation} \ket{\tilde{\Psi}(t)}=\psi_a(t)\ket{a}+\tilde{\psi}_b(t)\ket{b}+\tilde{\psi}_{b'}(t)\ket{b'}+\tilde{\psi}_c(t)\ket{c}+\tilde{\psi}_{c'}(t)\ket{c'}, \end{equation} where \begin{subequations} \begin{align} \tilde{\psi}_b&=e^{-i\phi_p}e^{-i\nu_p t}\psi_b\\ \tilde{\psi}_{b'}&=e^{-i\phi_p+i\phi_b}e^{-i\nu_p t+i\nu_b t}\psi_{b'}\\ \tilde{\psi}_c&=e^{-i\phi_{\mu}}e^{-i\nu_{\mu} t}\psi_c\\ \tilde{\psi}_{c'}&=e^{-i\phi_{\mu}+i\phi_c}e^{-i\nu_{\mu}t+i\nu_c t}\psi_{c'}.
\end{align} \end{subequations} The Hamiltonian in the rotating frame is then given by: \begin{align} \tilde{\mathcal{H}}=&\frac{\hbar}{2}\left(\omega_a\ket{a}\bra{a}+(\omega_b+\nu_p)\ket{b}\bra{b}+(\omega_{b'}+\nu_p-\nu_b)\ket{b'}\bra{b'}\right.\notag\\ &+(\omega_c+\nu_{\mu})\ket{c}\bra{c}+(\omega_{c'}+\nu_{\mu}-\nu_c)\ket{c'}\bra{c'}-\Omega_{\mu} \ket{a}\bra{c}\notag\\ &-\Omega_{b}\ket{b'}\bra{b} -\Omega_{c}\ket{c'}\bra{c} \left.-\Omega_{p}\ket{a}\bra{b}\right)+\text{h.c.} \end{align} To accommodate the possibility of strong fields driving the hyperfine transitions, we will want to keep $\Omega_{b}$ and $\Omega_{c}$ to all orders in our calculation. To do so, it is convenient to work in a partially dressed basis, in which the subspaces of the effective Hamiltonian corresponding to the two ground state doublets are diagonalized. We will explicitly diagonalize the $\{\ket{b},\ket{b'}\}$ subspace; the $\{\ket{c},\ket{c'}\}$ subspace behaves identically mathematically and therefore all results can be inferred from those for the $\{\ket{b},\ket{b'}\}$ subspace. The $\{\ket{b},\ket{b'}\}$ subspace of the Hamiltonian can be written as a sum of its diagonal and traceless parts: \begin{equation} \tilde{\mathcal{H}}_{bb'}=\frac{\hbar}{2}\left((2\omega_b+\Delta_{b}+2\nu_p)\mathbf{I}+ \begin{pmatrix} -\Delta_{b} &-\Omega_{b}\\ -\Omega_{b} &\Delta_{b} \end{pmatrix}\right), \end{equation} where $\Delta_{b}=\omega_{b'}-\omega_{b}-\nu_b$ is the detuning of the RF field driving the $\ket{b}\leftrightarrow\ket{b'}$ transition. The matrix diagonalizing $\tilde{\mathcal{H}}_{bb'}$ will be a member of $SO(2)$, so it can be written in terms of a rotation angle in the $\{\ket{b},\ket{b'}\}$ plane, $\theta_b$: \begin{equation} D_b=\begin{pmatrix} \cos\theta_b &\sin\theta_b\\ -\sin\theta_b &\cos\theta_b \end{pmatrix}, \end{equation} where \begin{subequations} \begin{align} \cos\theta_b &=\left(\frac{1+\Delta_{b}/\Omega_{b}^{\text{eff}}}{2}\right)^{1/2}\\ \sin\theta_b &=\left(\frac{1-\Delta_{b}/\Omega_{b}^{\text{eff}}}{2}\right)^{1/2}\\ \Omega_{b}^{\text{eff}}&=\sqrt{\Delta_{b}^2+\Omega_{b}^2}. \end{align} \end{subequations} Applying this transformation produces dressed states $\ket{B}=\cos\theta_b\ket{b}+\sin\theta_b\ket{b'}$ and $\ket{B'}=-\sin\theta_b\ket{b}+\cos\theta_b\ket{b'}$ in the diagonalized basis, with energy eigenvalues $\mp \hbar\Omega_{b}^{\text{eff}}/2$ relative to the common shift. Combining the transformations for the two subspaces, $D_b$ and $D_c$, into a block diagonal matrix, we find the diagonalization matrix, $D$, which can be used to determine the partially dressed Hamiltonian: \begin{widetext} \begin{equation} D\tilde{\mathcal{H}}D^{\dagger}=\frac{\hbar}{2} \begin{pmatrix} 2\omega_a &-\Omega_{p}\cos\theta_b &\Omega_p\sin\theta_b &-\Omega_{\mu}\cos\theta_c &\Omega_{\mu}\sin\theta_c\\ -\Omega_{p}\cos\theta_b &2\omega_b+\Delta_{b}+2\nu_p-\Omega_{b}^{\text{eff}} &0 &0 &0\\ \Omega_p\sin\theta_b &0 &2\omega_b+\Delta_{b}+2\nu_p+\Omega_{b}^{\text{eff}} &0 &0\\ -\Omega_{\mu}\cos\theta_c &0 &0 &2\omega_c+\Delta_{c}+2\nu_{\mu}-\Omega_{c}^{\text{eff}} &0\\ \Omega_{\mu}\sin\theta_c &0 &0 &0 &2\omega_c+\Delta_{c}+2\nu_{\mu}+\Omega_{c}^{\text{eff}} \end{pmatrix}. \end{equation} \end{widetext} At this point we move from a wave function description of the atom to a density matrix, $\tilde{\rho}_{ij}$, that allows us to incorporate decay and decoherence into our model.
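The rotation and eigenvalues quoted above are easy to verify numerically; a minimal sketch (Python with \texttt{numpy}, arbitrary test values) is:
\begin{verbatim}
import numpy as np

Delta_b, Omega_b = 0.7, 1.3                 # arbitrary test detuning and Rabi frequency
Omega_eff = np.hypot(Delta_b, Omega_b)      # sqrt(Delta_b**2 + Omega_b**2)
c = np.sqrt((1 + Delta_b / Omega_eff) / 2)  # cos(theta_b)
s = np.sqrt((1 - Delta_b / Omega_eff) / 2)  # sin(theta_b)
D_b = np.array([[c, s], [-s, c]])
M = 0.5 * np.array([[-Delta_b, -Omega_b],   # traceless part of H_bb' (units of hbar)
                    [-Omega_b,  Delta_b]])
print(np.round(D_b @ M @ D_b.T, 12))        # -> diag(-Omega_eff/2, +Omega_eff/2)
\end{verbatim}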
Since the linear response of the probe is determined by the coherence between the states $\ket{a}$ and $\ket{b}$, $\tilde{\rho}_{ab}=\cos\theta_b\tilde{\rho}_{aB}-\sin\theta_b\tilde{\rho}_{aB'}$, the primary task will be to calculate the steady state solutions of the partially dressed density matrix elements, $\tilde{\rho}_{aB}$ and $\tilde{\rho}_{aB'}$, to linear order in the probe field. In this paper we consider a closed system in which population from the excited state $|a\rangle$ decays to the included ground states via spontaneous emission, and the ground states are stable with respect to decay. Consequently, population is conserved within our 5 level basis and therefore we do not include pumping of the states. Note that this restricts our discussion to cold atomic gases, where atoms do not quickly pass through the region of interaction with the lasers, which allows us to ignore pumping and decay due to atom transits through the interaction region. For the ground states, we include pure dephasing decoherence between states due to collisions, stray magnetic fields, etc. The total spontaneous emission rate is given by $\gamma_a$, and we assume that $\ket{a}$ can decay with equal likelihood to $\ket{b}$, $\ket{b'}$, or $\ket{c}$ at the rate $\gamma_a/3$, but, because of optical dipole selection rules, it cannot decay to the fourth ground state, $\ket{c'}$. We denote pure dephasing decoherence rates between two states by $\tilde{\gamma}_{ij}$. We assume that the difference in the dephasing between $\ket{a}$ and the two members of the doublet $\{\ket{b},\ket{b'}\}$ is small, i.e. $|\tilde{\gamma}_{ab}-\tilde{\gamma}_{ab'}|\ll \gamma_a/6$, so that we can set $\gamma_{ab}=\gamma_{ab'}=\gamma_a/6+\tilde{\gamma}_{ab}$ for the total decoherence rate. This approximation is justified by experimental parameters where $\gamma_a \sim 10^{7}\ \mathrm{s}^{-1}$ while collisional decoherence rates can be anywhere from a few tens of $\mathrm{kHz}$ down to hundreds of $\mathrm{Hz}$ or less for quantum degenerate atomic gases or with buffer gases \cite{pfau,brandt,erhard}. Additionally, we take the decoherence between the dressed state doublets to be of two types, which we allow to vary independently: we have $\tilde{\gamma}_{cb}=\tilde{\gamma}_{cb'}=\gamma_C$ and $\tilde{\gamma}_{c'b}=\tilde{\gamma}_{c'b'}=\gamma_{C'}$.
These two assumptions lead to a considerable simplification of the equations of motion in the partially dressed basis, with the relevant contributions being: \begin{align*} \dot{\tilde{\rho}}_{aB}&\sim-\gamma_{ab}\tilde{\rho}_{aB}\\ \dot{\tilde{\rho}}_{aB'}&\sim-\gamma_{ab}\tilde{\rho}_{aB'}\\ \dot{\tilde{\rho}}_{CB}&\sim-(\gamma_C\cos^2\theta_c+\gamma_{C'}\sin^2\theta_c)\tilde{\rho}_{CB}\\ &\quad-(\gamma_{C'}-\gamma_{C})\cos\theta_c\sin\theta_c\tilde{\rho}_{C'B}\\ \dot{\tilde{\rho}}_{C'B}&\sim-(\gamma_C\sin^2\theta_c+\gamma_{C'}\cos^2\theta_{c})\tilde{\rho}_{C'B}\\ &\quad-(\gamma_{C'}-\gamma_{C})\cos\theta_c\sin\theta_c\tilde{\rho}_{CB}\\ \dot{\tilde{\rho}}_{CB'}&\sim-(\gamma_C\cos^2\theta_c+\gamma_{C'}\sin^2\theta_c)\tilde{\rho}_{CB'}\\ &\quad-(\gamma_{C'}-\gamma_C)\cos\theta_c\sin\theta_c\tilde{\rho}_{C'B'}\\ \dot{\tilde{\rho}}_{C'B'}&\sim-(\gamma_C\sin^2\theta_c+\gamma_{C'}\cos^2\theta_c)\tilde{\rho}_{C'B'}\\ &\quad-(\gamma_{C'}-\gamma_C)\cos\theta_c\sin\theta_c\tilde{\rho}_{CB'} \end{align*} While this second assumption may seem strange, it both permits an analytic solution and retains enough flexibility to explore the effects of different decoherence pathways on the new resonances, as will be shown later. Note that we do not include $\gamma_{ac}, \gamma_{ac'}$ in this discussion since they do not enter into the solution for $\tilde{\rho}_{ab}$ to first order in the probe field. We assume that all of the atoms are initially in either $\ket{b}$ or $\ket{b'}$. Since we are solving to linear order in the probe under the assumption $\Omega_p \ll \gamma_{ab}$, many of the terms in the density matrix can be discarded since they only develop population at order $(\Omega_p/\gamma_{ab})^2$ or higher in perturbation theory. For example, $\rho_{aa}\sim\Omega^2_{p}/\gamma^2_{ab}\approx 0$, while the $\{\ket{c},\ket{c'}\}$ manifold only develops population at order $\Omega_{p}^2\Omega_{\mu}^2/\gamma_{ab}^4$, which is also negligible. We can then take $\rho_{cc}\approx\rho_{c'c'}\approx\rho_{cc'}\approx\rho_{ac}\approx\rho_{ac'}\approx0$, and likewise their complex conjugates. Note, however, that the assumption of a weak probe and thus of negligible occupation of $\ket{a}$, $\ket{c}$, and $\ket{c'}$ amounts only to the assumption that $\Omega_p\ll\gamma_{ab}$. In particular, it places no constraints on the strengths of $\Omega_b$ and $\Omega_c$. With these assumptions, we find two systems of equations that are decoupled from each other, which we include in the appendix. Also included in the equations in the appendix are those for the subspace $\{\ket{B},\ket{B'}\}$, which we note are decoupled from the others under the assumption that all terms higher than first order in $\Omega_p$ are negligible, and can therefore be solved separately. The terms $\tilde{\rho}_{BB}$, $\tilde{\rho}_{B'B'}$, $\tilde{\rho}_{BB'}$, and $\tilde{\rho}_{B'B}$, which appear in the other equations, act as source terms. In the special case of an on-resonance field driving the $\ket{b}\leftrightarrow\ket{b'}$ transition, we have $\theta_b=\pi/4$ and $\Omega_b^{\text{eff}}=\Omega_b$; thus, Eqs.
\ref{bbfull} simplify to: \begin{align*} i\dtime{\tilde{\rho}_{BB}}=&-\frac{i\tilde{\gamma}_{bb'}}{2}(\tilde{\rho}_{BB}-\tilde{\rho}_{B'B'})\\ i\dtime{\tilde{\rho}_{B'B'}}=&\frac{i\tilde{\gamma}_{bb'}}{2}(\tilde{\rho}_{BB}-\tilde{\rho}_{B'B'})\\ i\dtime{\tilde{\rho}_{BB'}}=&(-\Omega_b-\frac{i\tilde{\gamma}_{bb'}}{2})\tilde{\rho}_{BB'}+\frac{i\tilde{\gamma}_{bb'}}{2}\tilde{\rho}_{B'B}\\ i\dtime{\tilde{\rho}_{B'B}}=&(\Omega_b-\frac{i\tilde{\gamma}_{bb'}}{2})\tilde{\rho}_{B'B}+\frac{i\tilde{\gamma}_{bb'}}{2}\tilde{\rho}_{BB'} \end{align*} Since $\partial\tilde{\rho}_{BB}/\partial t$ and $\partial\tilde{\rho}_{B'B'}/\partial t$ depend only on the difference between $\tilde{\rho}_{BB}$ and $\tilde{\rho}_{B'B'}$, these equations are solved by equal constants---since we have assumed that initially all of the atoms are in the $\{\ket{b},\ket{b'}\}$ manifold, we have $\tilde{\rho}_{BB}=\tilde{\rho}_{B'B'}=1/2$. As for the coherence terms, we can write their coupled equation of motion as a matrix: \begin{equation} i\frac{\partial}{\partial t}\begin{pmatrix} \tilde{\rho}_{BB'}\\ \tilde{\rho}_{B'B} \end{pmatrix} =\left(-\frac{i\tilde{\gamma}_{bb'}}{2}\mathbf{I}+\begin{pmatrix} -\Omega_b &\frac{i\tilde{\gamma}_{bb'}}{2}\\ \frac{i\tilde{\gamma}_{bb'}}{2} &\Omega_b\end{pmatrix}\right)\begin{pmatrix} \tilde{\rho}_{BB'}\\ \tilde{\rho}_{B'B} \end{pmatrix}. \end{equation} The solution for $\tilde{\rho}_{BB'}$ and $\tilde{\rho}_{B'B}$ can be written as linear combinations of the eigenvectors of the matrix, which have the time dependence $X_1=C_1 e^{(i A-B)t}$ and $X_2=C_2 e^{(-iA-B)t}$ with $A=\sqrt{4\Omega_b^2-\tilde{\gamma}^2_{bb'}}/2$ and $B=\tilde{\gamma}_{bb'}/2$. For $t\gg 1/\tilde{\gamma}_{bb'}$, both of these go to zero, which gives the steady state solutions $\tilde{\rho}_{BB'}=\tilde{\rho}_{B'B}=0$. To solve for $\tilde{\rho}_{aB}$ and $\tilde{\rho}_{aB'}$, we assume that the $\{\ket{B},\ket{B'}\}$ manifold has been given time to reach the steady state we have described above.
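The exponents $\pm iA-B$ appearing above are just the eigenvalues of the evolution matrix for $(\tilde{\rho}_{BB'},\tilde{\rho}_{B'B})$; a quick numerical check (Python with \texttt{numpy}, arbitrary test values) is:
\begin{verbatim}
import numpy as np

Omega_b, gamma = 0.8, 0.3          # test values for Omega_b and gamma_tilde_bb'
K = np.array([[-Omega_b, 0.5j * gamma],
              [0.5j * gamma, Omega_b]])
G = -0.5 * gamma * np.eye(2) - 1j * K        # dX/dt = G X
A = np.sqrt(4 * Omega_b**2 - gamma**2) / 2
print(np.sort_complex(np.linalg.eigvals(G)))  # eigenvalues of the evolution matrix
print(-gamma / 2 - 1j * A, -gamma / 2 + 1j * A)  # expected: -B -/+ iA
\end{verbatim}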
Plugging these into the master equation for $\tilde{\rho}_{aB}$ and $\tilde{\rho}_{aB'}$, and assuming further that the RF field coupling the $\ket{c}\leftrightarrow\ket{c'}$ transition is also on resonance, $\Delta_c=0$ (giving $\theta_c=\pi/4$), we find the simplified systems: \begin{subequations}\label{on-resonance-approximation1} \begin{align} i\dtime{\tilde{\rho}_{aB}}=&\left(\Delta_p+\frac{\Omega_b}{2}-i\gamma_{ab}\right)\tilde{\rho}_{aB} -\frac{\Omega_p\sqrt{2}}{8} \notag\\ &- \frac{\Omega_{\mu}\sqrt{2}}{4}\left(\tilde{\rho}_{CB}-\tilde{\rho}_{C'B}\right)\\ i\dtime{\tilde{\rho}_{CB}}=&\left(\Delta_{p}-\Delta_{\mu}-\frac{\Omega_c}{2}+\frac{\Omega_b}{2}-\frac{i}{2}(\gamma_{C}+\gamma_{C'})\right)\tilde{\rho}_{CB}\notag\\ &-\frac{\Omega_{\mu}\sqrt{2}}{4}\tilde{\rho}_{aB} -\frac{i}{2}(\gamma_{C'}-\gamma_C)\tilde{\rho}_{C'B}\\ i\dtime{\tilde{\rho}_{C'B}}=&\left(\Delta_{p}-\Delta_{\mu}+\frac{\Omega_c}{2}+\frac{\Omega_b}{2}-\frac{i}{2}(\gamma_C+\gamma_{C'})\right)\tilde{\rho}_{C'B} \notag\\ &+\frac{\Omega_{\mu}\sqrt{2}}{4}\tilde{\rho}_{aB}-\frac{i}{2}(\gamma_{C'}-\gamma_C)\tilde{\rho}_{CB} \end{align} \end{subequations} and \begin{subequations}\label{on-resonance-approximation2} \begin{align} i\dtime{\tilde{\rho}_{aB'}}=&\left(\Delta_p-\frac{\Omega_b}{2}-i\gamma_{ab}\right)\tilde{\rho}_{aB'} +\frac{\Omega_p\sqrt{2}}{8}\notag\\ &-\frac{\Omega_{\mu}\sqrt{2}}{4}\left(\tilde{\rho}_{CB'}-\tilde{\rho}_{C'B'}\right)\\ i\dtime{\tilde{\rho}_{CB'}}=&\left(\Delta_{p}-\Delta_{\mu}-\frac{\Omega_c}{2}-\frac{\Omega_b}{2}-\frac{i}{2}(\gamma_C+\gamma_{C'})\right)\tilde{\rho}_{CB'} \notag\\ &-\frac{\Omega_{\mu}\sqrt{2}}{4}\tilde{\rho}_{aB'}-\frac{i}{2}(\gamma_{C'}-\gamma_C)\tilde{\rho}_{C'B'}\\ i\dtime{\tilde{\rho}_{C'B'}}=&\left(\Delta_{p}-\Delta_{\mu}+\frac{\Omega_c}{2}-\frac{\Omega_b}{2}-\frac{i}{2}(\gamma_C+\gamma_{C'})\right)\tilde{\rho}_{C'B'} \notag\\ &+\frac{\Omega_{\mu}\sqrt{2}}{4}\tilde{\rho}_{aB'} -\frac{i}{2}(\gamma_{C'}-\gamma_C)\tilde{\rho}_{CB'}, \end{align} \end{subequations} where $\Delta_p=\omega_a-\omega_b-\nu_p$ and $\Delta_{\mu}=\omega_a-\omega_c-\nu_{\mu}$ are the probe and control beam detunings, respectively. Eqs. \ref{on-resonance-approximation1} and \ref{on-resonance-approximation2} have identical structures; both can be solved analytically by writing the systems as matrix equations of the form $\partial X/\partial t=-\mathbf{M}\cdot X(t)+A$ and noting that equations of this form have steady state solutions $\lim_{t\to\infty} X(t)=\mathbf{M}^{-1}\cdot A$. We then extract the steady state solutions to $\tilde{\rho}_{aB}$ and $\tilde{\rho}_{aB'}$ from the resulting vectors. \section{Optical properties of the 5-level system}\label{properties} The complex linear susceptibility, expanded about the $\ket{a}\leftrightarrow\ket{b}$ transition, is given by \begin{equation} \chi^{(1)}=\frac{2 \sigma(\mathbf{r}) N D_{ab}}{\epsilon_0\mathcal{E}_p}\tilde{\rho}_{ab}. \end{equation} $\chi^{(1)}$ determines both the absorption coefficient, $\alpha(\Delta_p)=k_p\Im[\chi^{(1)}(\Delta_p)]$, and the index of refraction, $n(\Delta_p)\approx(1+\Re[\chi^{(1)}(\Delta_p)])^{1/2}$. Considerations arising from the particular experimental set-up would determine the density profile, $\sigma(\mathbf{r})$, and the number density of atoms, $N$, which are included to formally account for the particular optical thickness of the sample. In our analysis we will focus on the reduced susceptibility $\tilde{\chi}^{(1)}=\epsilon_0\hbar\gamma_{ab}\chi^{(1)}/2D_{ab}^2N\sigma(\mathbf{r})=\gamma_{ab}\tilde{\rho}_{ab}/\Omega_p$.
We then arrive at the following analytic expression for $\tilde{\chi}^{(1)}$, \begin{widetext} \begin{align} \tilde{\chi}^{(1)}=&\frac{\gamma_{ab}}{2}\left(\frac{(\Delta_{\mu}-\Delta_p+i\gamma_{C'}-\Omega_b/2) (\Delta_{\mu}-\Delta_p+i\gamma_{C}-\Omega_b/2)-\Omega_c^2/4}{(\Delta_p-i\gamma_{ab}+\Omega_b/2) ((\Delta_{\mu}-\Delta_p+i\gamma_{C'}-\Omega_b/2) (\Delta_{\mu}-\Delta_p+i\gamma_{C}-\Omega_b/2)-\Omega_c^2/4)+(\Delta_{\mu}-\Delta_p+i\gamma_{C'}-\Omega_b/2) \Omega_{\mu}^2/4}\right.\notag\\ &\left.+\frac{(\Delta_{\mu}-\Delta_p+i\gamma_{C'}+\Omega_b/2) (\Delta_{\mu}-\Delta_p+i\gamma_{C}+\Omega_b/2)-\Omega_c^2/4}{(\Delta_p-i\gamma_{ab}-\Omega_b/2) ((\Delta_{\mu}-\Delta_p+i\gamma_{C'}+\Omega_b/2) (\Delta_{\mu}-\Delta_p+i\gamma_{C}+\Omega_b/2)-\Omega_c^2/4)+(\Delta_{\mu}-\Delta_p+i\gamma_{C'}+\Omega_b/2) \Omega_{\mu}^2/4}\right) \end{align} \end{widetext} It is worth pointing out that in the limit that $\Omega_b,\Omega_c\rightarrow 0$, we recover the standard expression for the coherence for EIT in a $\Lambda$ system, \[ \tilde{\chi}^{(1)}\rightarrow\frac{\gamma_{ab}}{2}\frac{\Delta_p-i\tilde{\gamma}_{cb}}{(\Delta_p-i\gamma_{ab})(\Delta_p-i\tilde{\gamma}_{cb})-(\Omega_{\mu}/2)^2}. \] Further analysis requires us to estimate the values for the important variables in the problem. The Rabi frequencies of the coupling laser and RF fields are experimentally tunable over a large range. For the spontaneous emission rate, we take $\gamma_a=10^7\ \mathrm{s}^{-1}$ and will measure the Rabi frequencies and detunings in units of $\gamma_{ab}$. For the ground state dephasing rates, the range of values is limited primarily by collision rates and is therefore temperature and density dependent. However, for concreteness, we assume that $\tilde{\gamma}_{ab}$, $\gamma_{C}$, and $\gamma_{C'}$ are in the range $10^{3}$--$10^{4}\ \mathrm{s}^{-1}$. \begin{figure} \includegraphics[width=1.0\columnwidth]{Fig2.eps} \caption{\label{new_features}(Color Online) (a) The full spectrum of the susceptibility showing the imaginary part, $\Im[\tilde{\chi}^{(1)}]$ (blue dashed line), and the real part, $\Re[\tilde{\chi}^{(1)}]$ (red solid line). (b) A close-up of one of the new narrow features. Here, $\Omega_b=\Omega_c=\gamma_{ab}/10$ and $\Omega_{\mu}=2\gamma_{ab}$, while $\gamma_{C}=\gamma_{C'}=0$.} \end{figure} \begin{figure} \includegraphics[width=1.0\columnwidth]{Fig3.eps} \caption{\label{fig-varyOb} Imaginary part of the susceptibility, $\Im[\tilde{\chi}^{(1)}]$, plotted as a function of $\Delta_p$ and $\Omega_b$. The separation of the new features varies with $\Omega_b$, the Rabi frequency of the RF field coupling the states $\ket{b}$ and $\ket{b'}$. Here, we show the dependence for $0\leq\Omega_b\leq\gamma_{ab}/10$, with $\gamma_C=\gamma_{C'}=\gamma_{ab}\times 10^{-3}$, $\Omega_{\mu}=2\gamma_{ab}$, and $\Omega_c=\gamma_{ab}/10$. In the intensity plot, lighter color denotes larger $\Im[\tilde{\chi}^{(1)}]$.} \end{figure} Examples of the real and imaginary parts of the susceptibility are shown in Fig. \ref{new_features} for $\Delta_{\mu}=0$, while Fig. \ref{fig-varyOb} displays the spectrum's dependence on the Rabi frequency $\Omega_b$. We see that the additional levels manifest themselves as two narrow resonances located inside the EIT transparency window. In general, for arbitrary $\Delta_b$, the new resonances are symmetrically located about $\Delta_p=0$ at the locations $\Delta_p=\pm \Omega_b^{\text{eff}}/2$.
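For readers who wish to reproduce the spectra discussed below, the analytic expression above can be evaluated directly; a minimal sketch (Python with \texttt{numpy}; all rates and detunings in units of $\gamma_{ab}$, default parameter values chosen to match Fig. \ref{new_features}) is:
\begin{verbatim}
import numpy as np

def chi_tilde(dp, dmu=0.0, om_mu=2.0, om_b=0.1, om_c=0.1,
              g_ab=1.0, g_C=0.0, g_Cp=0.0):
    """Reduced susceptibility (Delta_c = 0) as a function of probe detuning dp."""
    x = dmu - dp
    n1 = (x + 1j*g_Cp - om_b/2) * (x + 1j*g_C - om_b/2) - om_c**2/4
    d1 = (dp - 1j*g_ab + om_b/2) * n1 + (x + 1j*g_Cp - om_b/2) * om_mu**2/4
    n2 = (x + 1j*g_Cp + om_b/2) * (x + 1j*g_C + om_b/2) - om_c**2/4
    d2 = (dp - 1j*g_ab - om_b/2) * n2 + (x + 1j*g_Cp + om_b/2) * om_mu**2/4
    return 0.5 * g_ab * (n1/d1 + n2/d2)

dp = np.linspace(-4, 4, 4001)   # probe detuning scan
chi = chi_tilde(dp)             # chi.imag ~ absorption, chi.real ~ dispersion (cf. Fig. 2)
\end{verbatim}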
For $\Omega_{\mu},\gamma_{ab}\gg \Omega_b,\Omega_c,\gamma_{C},\gamma_{C'}$, their shape is approximately Lorentzian, given by (for $\Delta_{\mu}=0$): \begin{equation} \Im[\tilde{\chi}^{(1)}]\approx\frac{\gamma_{ab}\Omega_c^2}{2\Omega_{\mu}}\left(\frac{\Omega_c^2/\Omega_{\mu}^2+\gamma_{C'}/\gamma_{ab}}{(\Delta_p\mp\Omega_b/2)^2 +(\gamma_{ab}(\Omega_c^2/\Omega_{\mu}^2+\gamma_{C'}/\gamma_{ab}))^2}\right) \label{narrowlorentzian} \end{equation} in the vicinity of $\Delta_p\approx \pm \Omega_b/2$. In the limit as $\gamma_{C'}\rightarrow 0$, this expression reduces to a Lorentzian with FWHM of $2\gamma_{ab}\Omega_c^2/\Omega_{\mu}^2$, which agrees with what has been obtained by Lukin {\em et al.} \cite{lukin-dark-resonances} and Mahmoudi {\em et al.} \cite{mahmoudi-dark-resonances} for interacting dark resonances in a 4 level system with no ground state dephasing. In the case of nonzero $\gamma_{C'}$, the width of the features, \begin{equation} \Gamma_n=2\gamma_{ab}(\Omega_c^2/\Omega_{\mu}^2+\gamma_{C'}/\gamma_{ab}), \label{linewidth} \end{equation} is the sum of the `power broadening' term $2\gamma_{ab}\Omega_c^2/\Omega_{\mu}^2$ and the dephasing rate for $|c'\rangle$, while the height is given by \begin{equation}\label{feature_height} \Im[\tilde{\chi}^{(1)}(\pm\Omega_b)]=\frac{\Omega_c^2\Omega_{\mu}}{2(\gamma_{ab}\Omega_c^2+\Omega_{\mu}^2\gamma_{C'})}. \end{equation} These new resonances have a simple interpretation in terms of the dressed states $|B\rangle$ and $|B'\rangle$, which are coupled via the probe laser to the 3 level system $|c'\rangle \leftrightarrow |c\rangle \leftrightarrow |a\rangle$. The energies of $|B\rangle$ and $|B'\rangle$ are $\hbar\omega_{B,B'}=\hbar\omega_b\pm \hbar\Omega_b^{\text{eff}}/2$ so that, in the absence of the control laser, the $|b\rangle \leftrightarrow |a\rangle$ absorption line would be split into an Autler-Townes doublet located at $\omega_{a}- \omega_{B}$ and $\omega_a-\omega_{B'}$. The $|c'\rangle \leftrightarrow |c\rangle \leftrightarrow |a\rangle$ system is isomorphic to a $\Lambda$ atom. Again assuming $\Delta_c=0$ and $\Delta_{\mu}=0$, we then have the following eigenstates for the $\{|a\rangle,|c\rangle,|c'\rangle\}$ subsystem, \begin{eqnarray} |a_+\rangle &=&\frac{1}{\sqrt{2}}\left(\sin\theta|a\rangle+|c\rangle+\cos\theta|c'\rangle \right) \\ |a_-\rangle &=&\frac{1}{\sqrt{2}}\left(\sin\theta|a\rangle-|c\rangle+\cos\theta|c'\rangle \right) \\ |a_0\rangle &=& \cos\theta |a\rangle -\sin\theta|c'\rangle \end{eqnarray} where $\tan\theta=-\Omega_{\mu}/\Omega_c$. The energies of the states $|a_{\pm}\rangle$ are $E_{\pm}=\hbar\omega_a \pm \hbar\sqrt{\Omega_{\mu}^2+\Omega_c^2}/2$ while $|a_0\rangle$ has energy $E_0=\hbar\omega_a$. As one can see, $|a_0\rangle$ is the same type of dark state that appears in STIRAP and coherent population trapping. In this case, this induced dark state is a superposition of $|a\rangle$ and $|c'\rangle$ {\em but not} $|c\rangle$. Since it is decoupled from the control laser, there will not be any destructive quantum interference in the probe absorption for transitions to $|a_0\rangle$. Transitions from the $\{ |B\rangle, |B'\rangle \}$ manifold to $|a_0\rangle$ will then exhibit absorption {\em resonances} at $\omega_a-\omega_{B,B'}$, which correspond to the new narrow resonances. By contrast, destructive interference created by the control would lead to nulls in the absorption at $\omega_a-\omega_{B,B'}$ when $\Omega_c=0$. Fig.
\ref{dressedstatefig} shows a schematic diagram of the energy levels of the dressed ground state manifold $\{|B\rangle, |B'\rangle \}$, which are coupled to all three states of the excited state manifold $\{ |a_+\rangle, |a_-\rangle, |a_0\rangle \}$ via the probe. All in all, there are six transitions that should appear as resonances in the absorption spectrum. The transitions to the two `bright' states $|a_\pm\rangle$ correspond to the main absorption peaks located at $\Delta_p\approx \pm\Omega_{\mu}/2$ for $\Omega_{\mu}\gg \Omega_c,\Omega_b$. Notice that each of these resonances actually consists of a pair of resonances separated by a distance $\Omega_b$, but only when $\Omega_b > \gamma_{ab}$ can these pairs be individually resolved, as shown in Fig. \ref{six-peaks}. \begin{figure} \includegraphics[width=1.0\columnwidth]{Fig4.eps} \caption{\label{dressedstatefig} Energy level diagram that indicates transitions induced by the probe laser between the ground state manifold $\{ |B\rangle, |B'\rangle \}$ and the excited state manifold $\{ |a_+\rangle, |a_-\rangle, |a_0\rangle \}$. Transitions to the dark state $|a_0\rangle$ are indicated by dashed lines. The energy of the bare state $|b\rangle$ is also shown for reference.} \end{figure} \begin{figure} \includegraphics[width=1.0\columnwidth]{Fig5.eps} \caption{\label{six-peaks}Imaginary part of the susceptibility, $\Im[\tilde{\chi}^{(1)}]$, as a function of $\Delta_p$. Here, $\Omega_{\mu}=2\gamma_{ab}$, $\Omega_b=2.2\gamma_{ab}$, and $\Omega_c=1.8\gamma_{ab}$ have been chosen so that all six absorption resonances are simultaneously visible. See text and Fig. \ref{dressedstatefig}.} \end{figure} The independence of $|a_0\rangle$ from $|c\rangle$ explains why the line widths of the new features depend only on the dephasing between states in the $\{\ket{b},\ket{b'}\}$ manifold and $\ket{a}$ and $\ket{c'}$, but are independent of dephasing relative to the state $\ket{c}$, as one can see from Eq. \ref{narrowlorentzian}, which is independent of $\gamma_C$. Fig. \ref{varygammacprime} shows how $\Im[\tilde{\chi}^{(1)}]$ varies with $\gamma_{C'}$. Moreover, the decrease in the height of the features with increasing $\gamma_{C'}$, as in Eq. \ref{feature_height}, reflects the fact that the dephasing decreases the probability of transitions to the dark state $\ket{a_0}$. \begin{figure} \includegraphics[width=1.0\columnwidth]{Fig6.eps} \caption{\label{varygammacprime} (Color Online) Imaginary part of the susceptibility, $\Im[\tilde{\chi}^{(1)}]$, as a function of $\Delta_p$, showing a close-up of the narrow resonances. Here we show the dependence of the new features on the dephasing, $\gamma_{C'}$. The broad dashes represent the case where $\gamma_{C'}=0$, the short dashed line takes $\gamma_{C'}=\gamma_{ab}\times10^{-3}$, and the dotted line represents $\gamma_{C'}=\gamma_{ab}\times 10^{-2}$. In all cases, $\Omega_b=\Omega_c=\gamma_{ab}/10$ and $\Omega_{\mu}=2\gamma_{ab}$.} \end{figure} These new narrow resonances offer the possibility of additional control of the dispersion, $\partial \Re[\chi^{(1)}]/\partial \omega_p$, and group velocity, \[ v_g(\omega_p)=\frac{c}{n+(\omega_p/2n)(\partial\Re[\chi^{(1)}]/\partial \omega_p)} \] inside the normal EIT transparency window, because of the large normal dispersion in the vicinity of the narrow resonances, whose positions and widths are controlled by $\Omega_b$ and $\Omega_c$, respectively.
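The dispersion slope that enters $v_g$ can be obtained from the susceptibility by numerical differentiation; a minimal illustration (assuming the \texttt{chi\_tilde} function from the earlier sketch, with detunings in units of $\gamma_{ab}$) is:
\begin{verbatim}
import numpy as np

def dRe_chi(dp, **kw):
    """Numerical derivative of Re[chi_tilde] with respect to Delta_p."""
    eps = 1e-9
    return (chi_tilde(dp + eps, **kw).real - chi_tilde(dp - eps, **kw).real) / (2 * eps)

dp = np.linspace(-1e-4, 1e-4, 4001)               # narrow window around Delta_p = 0
slope_5lvl = dRe_chi(dp, om_b=1e-4, om_c=1e-4)    # RF fields on (cf. Fig. 7 parameters)
slope_eit  = dRe_chi(dp, om_b=0.0,  om_c=0.0)     # 'standard EIT' reference
# In the slow-light regime v_g is inversely proportional to the dispersion slope,
# so slope_eit / slope_5lvl approximates the ratio v_g / v_EIT across this window.
ratio = slope_eit / slope_5lvl
\end{verbatim}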
By controlling the RF Rabi frequencies, the narrow resonances can be selectively positioned inside of the normal transparency window in order to locally control the dispersion over a relatively small frequency range. To be more specific, we note that in `standard EIT' (by which we mean the case where $\Omega_b=\Omega_c=0$ but with all other parameters, including the control laser, being the same), the observable effects of EIT including a transparency window and slow light occur when $|\Omega_{\mu}|^2\gg \gamma_{ab}\gamma_{cb}$ but at the same time $\partial \Re[\chi^{(1)}]/\partial \omega_p \propto 2\gamma_{ab}/|\Omega_{\mu}|^2$ for $\Delta_p\approx 0$ \cite{fleischhauer-review}. This implies that although lowering $\Omega_{\mu}$ will decrease the group velocity, decreasing the intensity of the control laser to $|\Omega_{\mu}|^2< \gamma_{ab}\gamma_{cb}$ will result in a complete loss of the transparency window. Let us return now to our 5 level system and focus specifically on the case where $\Omega_{\mu},\gamma_{ab} \gg \Omega_b,\Gamma_n$ so that the narrow resonances are clearly visible in between the Autler-Townes doublet induced by the control laser. In this case the narrow resonances have line widths given by Eq. \ref{linewidth}, so that $\Omega_b\gg \Gamma_n$ is required in order to have a clearly defined window around $\Delta_p=0$. The natural line width of the excited state only contributes through the power broadening term $2(\Omega_c/\Omega_{\mu})^2\gamma_{ab} \ll \gamma_{ab}$ for $\Omega_c\ll \Omega_{\mu}$ and can easily be made much smaller than the ground state dephasing rates. As a result, the primary limit on the dispersion and absorption in the middle of the resonances arises from the ground state dephasing alone, requiring $\Omega_b \gg \gamma_{C'}$. This limit is a much weaker condition than for the control laser in standard EIT when $\gamma_{ab}\gg \gamma_{cb}$ and allows for much narrower transparency windows embedded inside of the standard EIT window. In our system, at $\Delta_p=0$ and assuming $\Omega_b,\Omega_c,\gamma_C,\gamma_{C'}\ll \Omega_{\mu},\gamma_{ab}$, the absorption and dispersion are, to lowest nonvanishing orders of $\Omega_b$, $\Omega_c$, $\gamma_C$, and $\gamma_{C'}$, given by \begin{widetext} \begin{equation} \Im[\tilde{\chi}^{(1)}(0)]=\gamma_{ab}\frac{4\gamma_{ab}(\Omega_b^2-\Omega_c^2)^2 +(\gamma_C\Omega_b^2+\gamma_{C'}\Omega_c^2)\Omega_{\mu}^2}{\Omega_b^2(\Omega_b^2-\Omega_c^2)^2-2(\Omega_b^4+(\gamma_{ab}\gamma_{C'}+\Omega_b^2)\Omega_c^2)\Omega_{\mu}^2+\Omega_b^2\Omega_{\mu}^4} \end{equation} and \begin{equation} \partial \Re[\tilde{\chi}^{(1)}]/\partial \Delta_p=\gamma_{ab}\frac{-8\gamma_{ab}(\gamma_C\Omega_b^4+(\gamma_C+3\gamma_{C'})\Omega_b^2\Omega_{c}^2 -\gamma_{C'}\Omega_c^4)\Omega_{\mu}^2-\Omega_b^2(\Omega_b^2+\Omega_c^2)\Omega_{\mu}^4}{\Omega_{\mu}^2(8\gamma_{ab}(\gamma_C\Omega_b^2+\gamma_{C'}\Omega_c^2) +\Omega_b^2(-2\Omega_b^2+\Omega_c^2+\Omega_{\mu}^2))^2}. \end{equation} \end{widetext} From these formulae, in the symmetric case that $\Omega_b=\Omega_c$ and $\gamma_{C}=\gamma_{C'}$, $\Im[\tilde{\chi}^{(1)}(0)]\approx(2\gamma_{C}\gamma_{ab}/\Omega_{\mu}^2)(1+2(\gamma_{ab}\gamma_C+2\Omega_c^2)/\Omega_{\mu}^2)$ while $\partial \Re[\tilde{\chi}^{(1)}]/\partial \Delta_p\approx -(2\gamma_{ab}/\Omega_{\mu}^2)(1-16\gamma_{ab}\gamma_C/\Omega_{\mu}^2)$, which is identical to standard EIT to lowest order.
However, slightly off resonance in the vicinity of the narrow resonances, where the absorption is still negligible, the effect of these resonances on the dispersion is still very significant. In Figs. \ref{fig-groupvelocity} and \ref{fig-groupvelocity2} we consider the effect of the RF Rabi frequencies on the group velocity in the vicinity of the narrow features at $\Delta_p\approx 0$ by plotting the ratio of the group velocity for $\Omega_b,\Omega_c \neq 0$ to the group velocity for standard EIT. Figure \ref{fig-groupvelocity} shows the group velocity near $\Delta_p=0$ for zero decoherence and small RF Rabi frequencies, $\Omega_b=\Omega_c=0.0001\gamma_{ab}=1~\text{kHz}$, which leads to an almost 100-fold reduction in the group velocity in a window of about $50~\text{Hz}$ where the absorption is negligible. Figure \ref{fig-groupvelocity2} shows the reduction of the group velocity and absorption for larger RF Rabi frequencies and finite ground state decoherence. As one can see, even in the presence of decoherence, one can achieve a reduction in the group velocity close to a factor of 10 in a frequency window of $\sim 50~\text{kHz}$ for the parameters in Fig. \ref{fig-groupvelocity2}, while at the same time having relatively small absorption. At the same time, the dispersion is highly nonlinear, implying that any pulse propagating near the narrow resonances could experience significant reshaping. \begin{figure} \includegraphics[width=1.0\columnwidth]{Fig7} \caption{\label{fig-groupvelocity} Reduction of the group velocity $v_{g}$ in a window near $\Delta_p=0$ for very small RF Rabi frequencies $\Omega_b=\Omega_c=0.0001\gamma_{ab}$, $\Omega_\mu=2\gamma_{ab}$, and no decoherence, $\gamma_C=\gamma_{C'}=0$. The group velocity is measured relative to the group velocity for $\Omega_b=\Omega_c=0$ but with all other parameters being the same, which we denote as $v_{\text{EIT}}$.} \end{figure} \begin{figure} \includegraphics[width=1.0\columnwidth]{Fig8} \caption{\label{fig-groupvelocity2} (Color Online) (a) The group velocity, $v_{g}$, in a window near $\Delta_p=0$ for $\Omega_\mu=2\gamma_{ab}$ and $\Omega_b=\Omega_c=0.01\gamma_{ab}$ and $\gamma_C=\gamma_{C'}=0.0001\gamma_{ab}$ (Blue long dash line); $\Omega_b=\Omega_c=0.02\gamma_{ab}$ and $\gamma_C=\gamma_{C'}=0.0001\gamma_{ab}$ (Red short dash line); and $\Omega_b=\Omega_c=0.01\gamma_{ab}$ and $\gamma_C=\gamma_{C'}=0$ (Gold dotted line). In each case, the group velocity is measured relative to the group velocity for $\Omega_b=\Omega_c=0$ but with all other parameters being the same, which we denote as $v_{\text{EIT}}$ to mean `standard EIT'. We note that the group velocity can readily be related to the delay time of a light pulse by $\tau_d=L(1/v_{g}-1/c)$ where $L$ is the thickness of the sample. (b) The imaginary part of the susceptibility for the same parameters as in panel (a).} \end{figure} \section{Discussion and conclusion}\label{discussion} One important aspect that has been ignored in our discussion is that of Doppler broadening. To this end, we note that the current analysis is limited to that of a Doppler free geometry involving co-propagating probe and control lasers \cite{fleischhauer-review}. Doppler shifts will be unimportant for RF/microwave fields. Another alternative to avoid Doppler broadening would be to use laser cooled atoms in an optical trap \cite{pfau}.
Additionally, we propose here that the following energy levels of $^{87}$Rb could be used to implement the 5-level geometry (the analogous transitions, with a different principal quantum number, could also be used for $^{23}$Na): $\ket{a}=\ket{5P_{3/2},F=3,m_f=1}$, $\ket{b}=\ket{5S_{1/2},F=2,m_f=2}$, $\ket{b'}=\ket{5S_{1/2},F=2,m_f=1}$, $\ket{c}=\ket{5S_{1/2},F=2,m_f=0}$, and $\ket{c'}=\ket{5S_{1/2},F=1,m_f=0,\pm 1}$, where the probe laser has $\sigma^-$ polarization while the control laser is $\sigma^{+}$ polarized. We note that the five level system could also be implemented using the $F=1 \rightarrow F'=0$ transition, namely: $\ket{a}=\ket{5P_{3/2},F=0,m_f=0}$, $\ket{b}=\ket{5S_{1/2},F=1,m_f=1}$, $\ket{b'}=\ket{5S_{1/2},F=1,m_f=0}$, $\ket{c}=\ket{5S_{1/2},F=1,m_f=-1}$, and $\ket{c'}=\ket{5S_{1/2},F=2,m_f=-2,-1,0}$, again with a $\sigma^{-}$ polarized probe and a $\sigma^{+}$ polarized control laser. We note that an additional off-resonant laser could be used to shift the energy level of $|b'\rangle$ relative to $|b\rangle$ and $|c\rangle$ via the AC Stark effect so that an RF field would only resonantly induce spin flips between $|b\rangle$ and $|b'\rangle$ and not between $|b'\rangle$ and $|c\rangle$. In conclusion, we have analyzed the linear susceptibility in a five level system and shown that the emergence of two dark resonances inside of the EIT transparency window offers enhanced control over the dispersion and absorption spectrum due to the controllable widths and separation of the resonances. We have argued that narrower transparency windows and slower group velocities can be achieved using this scheme as compared to standard EIT. This work is supported in part by the National Science Foundation. \section{Appendix A} The full statement of the equations of motion determining $\tilde{\rho}_{aB}$ and $\tilde{\rho}_{aB'}$, approximated by Eqs. \ref{on-resonance-approximation1} and \ref{on-resonance-approximation2} for the case when $\Delta_b=\Delta_c=0$, and so $\theta_b=\theta_c=\pi/4$, are given here in their general form. We have neglected terms proportional to $\tilde{\rho}_{aa}$, $\tilde{\rho}_{cc}$, $\tilde{\rho}_{c'c'}$, $\tilde{\rho}_{ac}$, $\tilde{\rho}_{ac'}$, and $\tilde{\rho}_{cc'}$, for reasons given in the body of the paper.
We find: \begin{widetext} \begin{subequations} \begin{align} i\dtime{\tilde{\rho}_{aB}}=&\left(\Delta_p-\frac{\Delta_b}{2}+\frac{\Omega_b^{\text{eff}}}{2}-i\gamma_{ab}\right)\rho_{aB} -\frac{\Omega_p}{2}\left(\cos\theta_b\rho_{BB}-\sin\theta_b\rho_{B'B}\right) -\frac{\Omega_{\mu}}{2}\left(\cos\theta_c\rho_{CB}-\sin\theta_c\rho_{C'B}\right)\\ i\dtime{\tilde{\rho}_{CB}}=&\left(\Delta_{p}-\Delta_{\mu}+\frac{\Delta_c}{2}-\frac{\Delta_b}{2}-\frac{\Omega_c^{\text{eff}}}{2}+\frac{\Omega_b^{\text{eff}}}{2}-i(\gamma_C\cos^2\theta_c+\gamma_{C'}\sin^2\theta_c)\right)\rho_{CB} \nonumber \\ -&\frac{\Omega_{\mu}}{2}\cos\theta_c\rho_{aB}-i(\gamma_{C'}-\gamma_{C})\cos\theta_c\sin\theta_c\tilde{\rho}_{C'B}\\ i\dtime{\tilde{\rho}_{C'B}}=&\left(\Delta_{p}-\Delta_{\mu}+\frac{\Delta_c}{2}-\frac{\Delta_b}{2}+\frac{\Omega_c^{\text{eff}}}{2}+\frac{\Omega_b^{\text{eff}}}{2}-i(\gamma_C\sin^2\theta_c+\gamma_{C'}\cos^2\theta_{c})\right)\rho_{C'B} \nonumber \\ +&\frac{\Omega_{\mu}}{2}\sin\theta_c\rho_{aB}-i(\gamma_{C'}-\gamma_{C})\cos\theta_c\sin\theta_c\tilde{\rho}_{CB} \end{align} \end{subequations} and \begin{subequations} \begin{align} i\dtime{\tilde{\rho}_{aB'}}=&\left(\Delta_p-\frac{\Delta_b}{2}-\frac{\Omega_b^{\text{eff}}}{2}-i\gamma_{ab}\right)\rho_{aB'} +\frac{\Omega_p}{2}\left(\sin\theta_b\rho_{B'B'}-\cos\theta_b\rho_{BB'}\right) -\frac{\Omega_{\mu}}{2}\left(\cos\theta_c\rho_{CB'}-\sin\theta_c\rho_{C'B'}\right)\\ i\dtime{\tilde{\rho}_{CB'}}=&\left(\Delta_{p}-\Delta_{\mu}+\frac{\Delta_c}{2}-\frac{\Delta_b}{2}-\frac{\Omega_c^{\text{eff}}}{2}-\frac{\Omega_b^{\text{eff}}}{2}-i(\gamma_C\cos^2\theta_c+\gamma_{C'}\sin^2\theta_c)\right)\rho_{CB'} \nonumber \\ -&\frac{\Omega_{\mu}}{2}\cos\theta_c\rho_{aB'}-i(\gamma_{C'}-\gamma_C)\cos\theta_c\sin\theta_c\tilde{\rho}_{C'B'}\\ i\dtime{\tilde{\rho}_{C'B'}}=&\left(\Delta_{p}-\Delta_{\mu}+\frac{\Delta_c}{2}-\frac{\Delta_b}{2}+\frac{\Omega_c^{\text{eff}}}{2}-\frac{\Omega_b^{\text{eff}}}{2}-i(\gamma_C\sin^2\theta_c+\gamma_{C'}\cos^2\theta_c)\right)\rho_{C'B'} \nonumber \\ +&\frac{\Omega_{\mu}}{2}\sin\theta_c\rho_{aB'}-i(\gamma_{C'}-\gamma_C)\cos\theta_c\sin\theta_c\tilde{\rho}_{CB'}. \end{align} \end{subequations} \begin{subequations} \label{bbfull} \begin{align} i\dtime{\tilde{\rho}_{BB}}=&-\frac{i\gamma_{bb'}}{2}(\sin^2\theta_b(\tilde{\rho}_{BB}-\tilde{\rho}_{B'B'}) +\sin2\theta_b\cos2\theta_b(\tilde{\rho}_{BB'}+\tilde{\rho}_{B'B})) \\ i\dtime{\tilde{\rho}_{B'B'}}=&\frac{i\gamma_{bb'}}{2}(\sin^2\theta_b(\tilde{\rho}_{BB}-\tilde{\rho}_{B'B'}) +\sin2\theta_b\cos2\theta_b(\tilde{\rho}_{BB'}+\tilde{\rho}_{B'B})) \\ i\dtime{\tilde{\rho}_{BB'}}=&(-\Omega_b^{\text{eff}}-\frac{i\gamma_{bb'}}{4}(3+\cos4\theta_b))\tilde{\rho}_{BB'} +\frac{i\gamma_{bb'}}{4}\left((1-\cos4\theta_b)\tilde{\rho}_{B'B} +\sin4\theta_b(\tilde{\rho}_{B'B'}-\tilde{\rho}_{BB})\right) \\ i\dtime{\tilde{\rho}_{B'B}}=&(\Omega_b^{\text{eff}}-\frac{i\gamma_{bb'}}{4}(3+\cos4\theta_b))\tilde{\rho}_{B'B} +\frac{i\gamma_{bb'}}{4}\left((1-\cos4\theta_b)\tilde{\rho}_{BB'} +\sin4\theta_b(\tilde{\rho}_{B'B'}-\tilde{\rho}_{BB})\right) \end{align} \end{subequations} \end{widetext}
1,108,101,563,078
arxiv
\section{Introduction} \IEEEPARstart{M}{etaheuristics} have shown effectiveness in solving complex optimization problems from various fields, such as manufacturing \cite{Alam2003Process}, scheduling \cite{Potvin1996Vehicle}, bioinformatics \cite{App-ModuleIdentification4}, and economics \cite{App-PortfolioOptimization4}. In contrast to mathematical programming methods that require an explicit objective function \cite{Operator-GD}, metaheuristics provide a high-level methodology for solving problems in a black-box manner. So far, there have been a variety of algorithms inspired by the biological mechanisms in natural evolution, such as genetic algorithms (GA) \cite{Holland1992Adaptation}, differential evolution (DE) \cite{Operator-DE}, evolution strategies \cite{algorithm-CMAES}, and evolutionary programming \cite{Operator-FEP}. Moreover, many swarm intelligence based algorithms have also been proposed, including particle swarm optimization (PSO) \cite{Operator-PSO}, ant colony optimization \cite{algorithm-ACO}, and artificial bee colony algorithms \cite{Operator-ABC}, among many others. \IEEEpubidadjcol Existing metaheuristics exhibit quite different search behaviors and optimization performance, which are mainly determined by the manually designed variation operators \cite{Survey2,Okabe2005Theoretical,Survey1}, i.e., the strategies for receiving the decision vectors of parents and generating the decision vectors of offspring solutions. For example, the GA was designed according to the theory of evolution and the law of inheritance, and it generates offspring by the crossover between two parents and the mutation of a single offspring solution. The crossover and mutation operators provide a powerful exploration ability \cite{Eiben1998Evolutionary}, making GA good at handling multimodal landscapes \cite{Su2020Non}. The DE mutates each solution according to the weighted difference between two other solutions, which yields good performance on problems with complicated variable linkages \cite{MOEA/D-DE}. The covariance matrix adaptation evolution strategy (CMA-ES) generates new solutions by sampling a multivariate normal distribution model adaptively learned from the population, showing high performance on many real-world applications \cite{Rodemann2018Industrial}. Inspired by the choreography of bird flocking, the PSO updates each particle according to its personal best particle and the global best particle, which leads to a high speed of convergence \cite{MOPSO}. Among the strengths of existing variation operators, independence of the search space is crucial to the robustness and generalization of metaheuristics, since metaheuristics do not rely on specific characteristics of problems. A search space independent operator exhibits the same performance on a problem under arbitrary search space transformations, including translation (i.e., $x^\prime=x+b$), scaling (i.e., $x^\prime=ax$), and rotation (i.e., $\mathbf{x}^\prime=\mathbf{x}M$). Existing studies have shed some light on the sufficient conditions of achieving these invariance properties. For instance, CMA-ES is scale invariant since its step-sizes are set proportionally to the distance to the optimum found so far \cite{Jebalia2011Log}, and the mutation operator of DE is rotation invariant since it is the weighted sum of multiple parents \cite{Caraffini2019Study}. Nevertheless, the mathematical definitions of all three properties have not been given, and the sufficient and necessary condition of achieving them is still unknown.
Therefore, it is difficult to analyze whether an operator is translation, scale, and rotation invariant theoretically, or to consider these properties in designing new operators explicitly. To address this issue, this work aims to deduce the generic form (i.e., sufficient and necessary condition) of translation, scale, and rotation invariant operators, which can be used to judge whether an operator possesses these invariance properties and to guide the design of new operators. Based on the deduced generic form, this work proposes a principled approach for designing new operators with invariance properties. In contrast to the parameter tuning of metaheuristics \cite{Karafotias2015Parameter,Huang2019Survey}, the off-line recommendation of metaheuristics \cite{Tian2020Recommender}, and the on-line combination of metaheuristics \cite{framework-AutoMOEA}, the proposed approach is not based on any existing metaheuristic but searches for totally new operators. Moreover, the proposed approach does not utilize any existing optimizer or classifier (e.g., F-Race for parameter tuning \cite{Birattari2002Racing}, artificial neural network for metaheuristic recommendation \cite{Rosenblatt1958Perceptron}, and sum-of-ranks multiarmed bandit algorithm for metaheuristic combination \cite{Fialho2002Toward}). By contrast, it is a self-contained approach that can search for new operators by itself. The main components of this work include the following three aspects: \begin{itemize} \item\textbf{Theoretical analysis:} To illustrate the importance of translation invariance, scale invariance, and rotation invariance, their effects on the search behavior and performance of metaheuristics are investigated. Then, the sufficient and necessary condition of achieving these properties is mathematically derived, which reveals the generic form of search space independent operators. \item\textbf{New approach:} A principled approach to automated design of variation operators is proposed, termed AutoV. Based on the deduced generic form of variation operators, AutoV converts the search for high-performance operators into an optimization problem, in which the decision variables are the parameters in the operators. This way, AutoV can solve optimization problems without relying on any existing operators. \item\textbf{Experimental study:} The variation operator found by AutoV is embedded in a simple evolutionary framework and compared to eight classical or state-of-the-art metaheuristics. The experimental results show that the operator found by AutoV can obtain the best results on various challenging benchmark problems; in particular, it outperforms the winner of the CEC competition, which employs multiple operators with complex adaptation strategies. The results indicate that AutoV has the potential to replace the laborious process of manually designing new metaheuristics. \end{itemize} The rest of this paper is organized as follows. Section~II analyzes the effects of the three invariance properties, and Section~III deduces their sufficient and necessary condition. Section~IV presents the proposed principled approach, and Section~V gives the experimental studies. Finally, Section~VI concludes this paper.
\section{Effects of Invariance Properties} This work focuses on the variation operators for the following continuous optimization problem: \begin{equation} \begin{aligned} \min&\ \ \ f(\mathbf{x})\\ {\rm s.\,t.}&\ \ \ \mathbf{l}\leq \mathbf{x}\leq \mathbf{u} \end{aligned} \ , \end{equation} where $\mathbf{x}=(x_1,x_2,\dots,x_D)$ is a decision vector denoting a solution for the problem, $\mathbf{l}=(l_1,l_2,\dots,l_D)$ denotes the lower bound, $\mathbf{u}=(u_1,u_2,\dots,u_D)$ denotes the upper bound, and $D$ is the number of decision variables. To solve such problems in a black-box manner, a variety of variation operators have been proposed to generate solutions without using any specific information except for the lower and upper bounds. This section first introduces some representative variation operators, then presents the definitions of translation invariance, scale invariance, and rotation invariance and analyzes their effects by examples and experimental studies. \subsection{Variation Operators in Metaheuristics} An operator generally receives the decision vectors of one or more parents, then outputs the decision vectors of one or more offspring solutions. For example, the simulated binary crossover (SBX) operator \cite{Operator-SBX} used in GA uses two parents $\mathbf{x}_1$ and $\mathbf{x}_2$ to generate two offspring solutions $\mathbf{o}_1$ and $\mathbf{o}_2$ each time: \begin{equation} \left \{ \begin{aligned} o_{1d} = 0.5\left[ (1+\beta)x_{1d}+(1-\beta)x_{2d}\right] \\ o_{2d} = 0.5\left[ (1-\beta)x_{1d}+(1+\beta)x_{2d}\right] \\ \end{aligned} \ ,\ 1\leq d \leq D \right., \label{equ:SBX} \end{equation} where $o_{1d}$ denotes the $d$-th variable of solution $\mathbf{o}_1$ and $\beta$ is a random number obeying a special distribution. The mutation operator of DE/rand/1/bin \cite{Operator-DE} uses three parents $\mathbf{x}_1$, $\mathbf{x}_2$, and $\mathbf{x}_3$ to generate an offspring solution $\mathbf{o}$ each time: \begin{equation} o_{d} = x_{1d} + F\cdot(x_{2d}-x_{3d})\ ,\ 1\leq d \leq D\ , \label{equ:DE} \end{equation} where $F$ is a parameter controlling the amplification of the difference between $\mathbf{x}_2$ and $\mathbf{x}_3$. In contrast to the random parameter $\beta$ in SBX that varies on each dimension, the parameter $F$ in DE is a predefined constant. The operator of CMA-ES \cite{algorithm-CMAES} generates offspring by sampling a multi-variate normal distribution: \begin{equation} \mathbf{o} = \mathbf{x}_m + \sigma\cdot\mathcal{N}(\mathbf{0},C)\ , \label{equ:CMAES} \end{equation} where $\mathbf{x}_m$ is the weighted sum of all solutions, $\sigma$ is a vector of iteratively updated step-sizes, and $C$ is a covariance matrix updated according to the current population. In general, the initial $\sigma$ can be set to $0.6(\mathbf{u}-\mathbf{l})$ \cite{Igel2007Covariance}. Similar to CMA-ES, the operator of fast evolutionary programming (FEP) \cite{Operator-FEP} generates offspring by sampling a single-variate normal distribution: \begin{equation} o_d = x_d + \eta_d\cdot\mathcal{N}(0,1)\ ,\ 1\leq d \leq D\ , \label{equ:FEP} \end{equation} where $\mathbf{\eta}$ is a vector of self-adaptive standard deviations related to each solution $\mathbf{x}$, whose elements can be initialized to 3 \cite{Operator-FEP}. It can be found that these operators generate offspring by using distinct formulas. 
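To make the formulas above concrete, the following minimal Python/NumPy sketch implements the SBX crossover of (\ref{equ:SBX}) and the DE/rand/1 mutation of (\ref{equ:DE}). It is only an illustration: the helper names are ours, the distribution of $\beta$ follows one common choice from the SBX literature with a typical distribution index of 20, and $F=0.5$ matches the usual DE setting.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sbx(x1, x2, eta=20):
    # SBX: beta is drawn per dimension from a
    # spread-factor distribution controlled by eta
    u = rng.random(x1.shape)
    beta = np.where(u <= 0.5,
                    (2 * u) ** (1 / (eta + 1)),
                    (2 * (1 - u)) ** (-1 / (eta + 1)))
    o1 = 0.5 * ((1 + beta) * x1 + (1 - beta) * x2)
    o2 = 0.5 * ((1 - beta) * x1 + (1 + beta) * x2)
    return o1, o2

def de_rand_1(x1, x2, x3, F=0.5):
    # DE/rand/1 mutation: the same constant F
    # is applied on every dimension
    return x1 + F * (x2 - x3)

x1, x2, x3 = rng.uniform(-10, 10, (3, 5))
print(sbx(x1, x2)[0])
print(de_rand_1(x1, x2, x3))
\end{verbatim}
The contrast that matters for the rest of this section is already visible here: $\beta$ is an independent random number on each dimension, whereas $F$ is a single constant shared by all dimensions.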
Generally, an operator can be regarded as a function $h(x_{1d},x_{2d},\dots)$ performed on each dimension $d$, where $x_{1d},x_{2d},\dots$ contain the decision variables of parents, the lower bound, and the upper bound. Note that other parameters (e.g., $\beta$ in SBX and $C$ in CMA-ES) are ignored for simplicity. In the following, we investigate how the invariance properties influence the search behavior and performance of operators. \subsection{Effects of Translation Invariance} According to \cite{Hansen2011Impacts}, a variation operator $h(x_{1d},x_{2d},\dots)$ being invariant to a search space transformation $\mathcal{T}$ means that $h(\mathcal{T}(x_{1d}),\mathcal{T}(x_{2d}),\dots)=\mathcal{T}(h(x_{1d},x_{2d},\dots))$. A translation of the search space can be regarded as adding a constant to each decision variable, i.e., $\mathcal{T}(x)=x+b$, hence the translation invariance property can be defined as follows: \begin{definition}[\textbf{Translation invariance}] A variation operator $h(x_{1d},x_{2d},\dots)$ is translation invariant if and only if \begin{equation} h(x_{1d}+b,x_{2d}+b,\dots)=h(x_{1d},x_{2d},\dots)+b \end{equation} holds for any real constant $b$. \end{definition} It is not difficult to find that all the operators of SBX, DE, CMA-ES, and FEP are translation invariant. In particular, the SBX operator described in (\ref{equ:SBX}) can be rewritten as \begin{equation} h_{sbx}(x_{1d},x_{2d})=x_{1d}+0.5(1\pm\beta)(x_{2d}-x_{1d})\ , \label{equ:SBX2} \end{equation} hence \begin{equation} \begin{aligned} &h_{sbx}(x_{1d}+b,x_{2d}+b)\\ &\qquad=x_{1d}+b+0.5(1\pm\beta)(x_{2d}+b-x_{1d}-b)\\ &\qquad=h_{sbx}(x_{1d},x_{2d})+b \end{aligned} \end{equation} and the operator is translation invariant. By contrast, if the SBX operator is modified to \begin{equation} h_{sbx^\prime}(x_{1d},x_{2d})=0.1x_{1d}+0.5(1\pm\beta)(x_{2d}-x_{1d})\ , \end{equation} then \begin{equation} \begin{aligned} &h_{sbx^\prime}(x_{1d}+b,x_{2d}+b)\\ &\qquad=0.1x_{1d}+0.1b+0.5(1\pm\beta)(x_{2d}+b-x_{1d}-b)\\ &\qquad\neq h_{sbx^\prime}(x_{1d},x_{2d})+b \end{aligned} \end{equation} and the operator is no longer translation invariant. Obviously, the modified SBX$^\prime$ operator is likely to evolve the population toward the origin. \begin{figure}[!t] \centering \subfloat[SBX based GA (translation invariant)]{\includegraphics[width=0.5\linewidth]{invariance1.eps}}\hfil \subfloat[SBX$^\prime$ based GA]{\includegraphics[width=0.5\linewidth]{invariance2.eps}} \caption{Convergence profiles of SBX and SBX$^\prime$ based GAs with a fixed random number seed on $f(x_1,x_2)=x_1^4+x_2^4$ with and without translation.} \label{fig:invariance1} \end{figure} To better illustrate this fact, Fig.~\ref{fig:invariance1} depicts the convergence profiles of SBX and SBX$^\prime$ based GAs on the problem $f(x_1,x_2)=x_1^4+x_2^4$ with and without translation, where the random number seed is fixed and both the population size and the number of generations are set to 10. It can be found that the SBX based GA exhibits the same search behavior and always converges to the global optima of the three problems, whereas the SBX$^\prime$ based GA always converges to the origin. In short, the SBX operator is translation invariant but SBX$^\prime$ is not. Furthermore, Table~\ref{tab:example1} lists the mean and standard deviation of the minimum objective values obtained by SBX and SBX$^\prime$ based GAs on six benchmark problems \cite{Su2020Non}, averaged over 30 runs.
The six problems have the same global optimum $(0,0,\dots,0)$, while the global optimum is changed to $(-6,-6,\dots,-6)$ if the problems are translated by $x^\prime=x+6$. For the original problems, the SBX$^\prime$ based GA significantly outperforms the SBX based GA since the former can quickly converge to the origin. For the translated problems, however, the performance of the SBX$^\prime$ based GA deteriorates considerably since it cannot converge to the translated global optimum; by contrast, the performance of the SBX based GA remains unchanged since the SBX operator is translation invariant. \begin{table}[!t] \footnotesize \renewcommand{\arraystretch}{1.3} \centering \caption{Minimum Objective Values Obtained by SBX and SBX$^\prime$ Based GAs on Six Problems with and without Translation.} \label{tab:example1} \setlength{\tabcolsep}{0.5mm}{ \begin{tabular}{c|c|c} \hline Original problem&SBX based GA&\multirow{2}{*}{SBX$^\prime$ based GA}\\ ($x_1,\dots,x_{30}\in[-10,10]$)&(translation invariant)\\ \hline Schwefel's Function 2.22&5.4310e+0 (2.13e+0)&\hl{1.0965e-49 (1.29e-49)}\\ Schwefel's Function 2.21&4.5792e+0 (1.21e+0)&\hl{8.3896e-49 (1.23e-48)}\\ Quartic Function&1.6425e+3 (1.03e+3)&\hl{2.9477e-4 (2.66e-4)}\\ Griewank Function&4.7700e-1 (1.87e-1)&\hl{0.0000e+0 (0.00e+0)}\\ Ackley's Function&3.3728e+0 (3.75e-1)&\hl{8.8818e-16 (0.00e+0)}\\ Rastrigin's Function&4.2269e+1 (1.08e+1)&\hl{0.0000e+0 (0.00e+0)}\\ \hline\hline Translated problem&SBX based GA&\multirow{2}{*}{SBX$^\prime$ based GA}\\ ($x^\prime=x+6$)&(translation invariant)\\ \hline Schwefel's Function 2.22&\hl{5.2237e+0 (1.88e+0)}&1.0083e+12 (2.18e+12)\\ Schwefel's Function 2.21&\hl{4.9401e+0 (1.24e+0)}&5.0000e+0 (0.00e+0)\\ Quartic Function&\hl{1.9875e+3 (1.62e+3)}&2.3718e+5 (1.53e+4)\\ Griewank Function&\hl{4.6346e-1 (1.17e-1)}&1.1523e+0 (4.76e-3)\\ Ackley's Function&\hl{3.5154e+0 (7.18e-1)}&1.2639e+1 (5.98e-4)\\ Rastrigin's Function&\hl{4.5695e+1 (1.01e+1)}&7.4895e+2 (1.83e-1)\\ \hline \end{tabular}} \end{table} \subsection{Effects of Scale Invariance} Similarly, a scaling of the search space can be regarded as multiplying each decision variable by a constant, i.e., $\mathcal{T}(x)=ax$, hence the scale invariance property can be defined as follows: \begin{definition}[\textbf{Scale invariance}] A variation operator $h(x_{1d},x_{2d},\dots)$ is scale invariant if and only if \begin{equation} h(ax_{1d},ax_{2d},\dots)=a\cdot h(x_{1d},x_{2d},\dots) \end{equation} holds for any real constant $a$. \end{definition} It can be found that the operators of SBX, DE, and CMA-ES are scale invariant but the operator of FEP is not. Considering that the step-size $\sigma$ is proportional to the extent of the decision space, the operator of CMA-ES described in (\ref{equ:CMAES}) can be rewritten as \begin{equation} h_{cmaes}(x_{md},l_d,u_d)=x_{md}+(u_d-l_d)\cdot\mathcal{N}(0,{\sigma^\prime_d}^2c)\ , \label{equ:example1} \end{equation} where $\sigma^\prime_d$ is a parameter within $(0,1]$ and $c$ is related to the covariance matrix $C$. Therefore, \begin{equation} \begin{aligned} &h_{cmaes}(ax_{md},al_d,au_d)\\ &\qquad=ax_{md}+(au_d-al_d)\cdot\mathcal{N}(0,{\sigma^\prime_d}^2c)\\ &\qquad=a\cdot h_{cmaes}(x_{md},l_d,u_d)\ , \end{aligned} \end{equation} which means that the operator is scale invariant.
On the other hand, the operator of FEP described in (\ref{equ:FEP}) can be rewritten as \begin{equation} h_{fep}(x_{d})=x_{d}+\mathcal{N}(0,\eta_d^2)\ , \end{equation} hence \begin{equation} h_{fep}(ax_{d})=ax_{d}+\mathcal{N}(0,\eta_d^2)\neq a\cdot h_{fep}(x_{d})\ , \end{equation} which means that the operator is not scale invariant. \begin{figure}[!t] \centering \subfloat[CMA-ES (scale invariant)]{\includegraphics[width=0.5\linewidth]{invariance3.eps}}\hfil \subfloat[FEP]{\includegraphics[width=0.5\linewidth]{invariance4.eps}} \caption{Convergence profiles of CMA-ES and FEP with a fixed random number seed on $f(x_1,x_2)=x_1^4+x_2^4$ with and without scaling.} \label{fig:invariance2} \end{figure} Fig.~\ref{fig:invariance2} plots the convergence profiles of CMA-ES and FEP on $f(x_1,x_2)=x_1^4+x_2^4$ with and without scaling. It can be seen that the search behaviors of CMA-ES are the same on the three problems with different scales, whereas the search behaviors of FEP are quite different. Moreover, Table~\ref{tab:example2} presents the mean and standard deviation of the minimum objective values obtained by CMA-ES and FEP on six benchmark problems. Obviously, CMA-ES is competitive with FEP on the original problems, while CMA-ES dramatically outperforms FEP on the scaled problems. This is because the operator of CMA-ES is scale invariant and thus exhibits the same performance on a problem with different scales; by contrast, the operator of FEP is not scale invariant and thus is sensitive to the scales of problems. \begin{table}[!t] \footnotesize \renewcommand{\arraystretch}{1.3} \centering \caption{Minimum Objective Values Obtained by CMA-ES and FEP on Six Problems with and without Scaling.} \label{tab:example2} \setlength{\tabcolsep}{0.8mm}{ \begin{tabular}{c|c|c} \hline Original problem&CMA-ES&\multirow{2}{*}{FEP}\\ ($x_1,\dots,x_{30}\in[-10,10]$)&(scale invariant)\\ \hline Schwefel's Function 2.22&1.8226e+1 (3.38e+0)&\hl{1.6919e+1 (5.30e+0)}\\ Schwefel's Function 2.21&1.0000e+1 (0.00e+0)&\hl{4.6087e+0 (2.29e-1)}\\ Quartic Function&\hl{2.5024e+1 (9.92e+0)}&1.4005e+3 (1.69e+3)\\ Griewank Function&\hl{3.2242e-1 (5.74e-2)}&8.5491e-1 (1.18e-1)\\ Ackley's Function&\hl{2.7634e+0 (1.71e-1)}&5.7111e+0 (5.97e-1)\\ Rastrigin's Function&2.1628e+2 (4.87e+0)&\hl{1.6904e+2 (2.52e+1)}\\ \hline\hline Scaled problem&CMA-ES&\multirow{2}{*}{FEP}\\ ($x^\prime=10x$)&(scale invariant)\\ \hline Schwefel's Function 2.22&\hl{1.9390e+1 (2.70e+0)}&4.6714e+10 (5.47e+10)\\ Schwefel's Function 2.21&\hl{1.3107e+0 (1.36e-1)}&8.4873e+0 (5.56e-1)\\ Quartic Function&\hl{2.6711e+1 (2.22e+1)}&3.8536e+5 (2.35e+4)\\ Griewank Function&\hl{2.8246e-1 (9.96e-2)}&1.1561e+0 (8.63e-3)\\ Ackley's Function&\hl{3.0355e+0 (3.42e-1)}&1.3471e+1 (5.21e-1)\\ Rastrigin's Function&\hl{2.1651e+2 (1.40e+1)}&9.2797e+2 (3.64e+1)\\ \hline \end{tabular}} \end{table} \subsection{Effects of Rotation Invariance} A rotation of the search space can be regarded as a matrix multiplication of each decision vector, i.e., $\mathcal{T}(\mathbf{x})=\mathbf{x}M$, hence the rotation invariance property can be defined as follows: \begin{definition}[\textbf{Rotation invariance}] A variation operator $h(\mathbf{x}_1,\mathbf{x}_2,\dots)$ is rotation invariant if and only if \begin{equation} h(\mathbf{x}_1M,\mathbf{x}_2M,\dots)=h(\mathbf{x}_1,\mathbf{x}_2,\dots)M \end{equation} holds for any orthogonal matrix $M$.
\end{definition} Note that here the decision variables on all dimensions $\mathbf{x}_1,\mathbf{x}_2,\dots$ rather than those on a single dimension $x_{1d},x_{2d},\dots$ should be considered. It can be deduced that the mutation operator of DE is rotation invariant while the operators of SBX, CMA-ES, and FEP are not. In particular, the mutation operator of DE described in (\ref{equ:DE}) can be rewritten as \begin{equation} h_{de}(\mathbf{x}_1,\mathbf{x}_2,\mathbf{x}_3)=\mathbf{x}_1+F\cdot(\mathbf{x}_2-\mathbf{x}_3)\ , \end{equation} hence \begin{equation} \begin{aligned} h_{de}(\mathbf{x}_1M,\mathbf{x}_2M,\mathbf{x}_3M)=&\ \mathbf{x}_1M+F\cdot(\mathbf{x}_2M-\mathbf{x}_3M)\\ =&\ h_{de}(\mathbf{x}_1,\mathbf{x}_2,\mathbf{x}_3)M\ , \end{aligned} \end{equation} and the operator is rotation invariant. By contrast, the SBX operator described in (\ref{equ:SBX2}) can be rewritten as \begin{equation} h_{sbx}(\mathbf{x}_1,\mathbf{x}_2)=\mathbf{x}_1+(\mathbf{x}_2-\mathbf{x}_1)B\ , \end{equation} where \begin{equation} \footnotesize B=\left(\begin{array}{cccc} 0.5(1\pm\beta_1)&0&\cdots&0\\ 0&0.5(1\pm\beta_2)&\cdots&0\\ \cdots&\cdots&\cdots&\cdots\\ 0&0&\cdots&0.5(1\pm\beta_D)\\ \end{array}\right)\normalsize\ . \end{equation} Therefore, \begin{equation} h_{sbx}(\mathbf{x}_1M,\mathbf{x}_2M)=\mathbf{x}_1M+(\mathbf{x}_2-\mathbf{x}_1)MB \end{equation} and \begin{equation} h_{sbx}(\mathbf{x}_1,\mathbf{x}_2)M=\mathbf{x}_1M+(\mathbf{x}_2-\mathbf{x}_1)BM\ . \end{equation} That is, $h_{sbx}(\mathbf{x}_1M,\mathbf{x}_2M)=h_{sbx}(\mathbf{x}_1,\mathbf{x}_2)M$ holds only if $MB=BM$, i.e., $\beta_1=\beta_2=\dots=\beta_D$, which is almost impossible since they are independent random numbers. Hence, the SBX operator is not rotation invariant. \begin{figure}[!t] \centering \subfloat[DE (rotation invariant)]{\includegraphics[width=0.5\linewidth]{invariance5.eps}}\hfil \subfloat[SBX based GA]{\includegraphics[width=0.5\linewidth]{invariance6.eps}} \caption{Convergence profiles of DE and SBX based GA with a fixed random number seed on $f(x_1,x_2)=x_1^4+x_2^4$ with and without rotation, where $M$ is a randomly generated orthogonal matrix.} \label{fig:invariance3} \end{figure} Fig.~\ref{fig:invariance3} shows the convergence profiles of the mutation based DE and SBX based GA on $f(x_1,x_2)=x_1^4+x_2^4$ with and without rotation. As can be seen, the search behavior of DE remains unchanged on the two problems, where the convergence profile is rotated together with the search space. On the contrary, the search behavior of GA is unstable and the convergence profiles are different on the two problems. As can be further observed from Table~\ref{tab:example3}, the performance of DE is similar to that of GA on the six benchmark problems without rotation and much better than that of GA on the rotated problems, which is consistent with the fact that the mutation operator of DE is rotation invariant while the SBX operator is not. In fact, the superiority of DE on problems with complicated variable linkages \cite{MOEA/D-DE} is mainly due to its rotation invariance property.
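For readers who prefer a computational check of the rotation invariance property defined above, the following minimal Python/NumPy sketch (an illustration only; the random seed, dimensionality, and $\beta$ values are arbitrary choices of ours) compares the DE mutation with an SBX-style recombination that uses a different weight on each dimension.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
D = 5
# random orthogonal matrix M via QR decomposition
M, _ = np.linalg.qr(rng.standard_normal((D, D)))
x1, x2, x3 = rng.standard_normal((3, D))

def de_mutation(x1, x2, x3, F=0.5):
    # same weight F on every dimension
    return x1 + F * (x2 - x3)

def sbx_like(x1, x2, beta):
    # a different weight on each dimension
    return x1 + 0.5 * (1 + beta) * (x2 - x1)

beta = rng.standard_normal(D)

# check h(x1 M, x2 M, ...) == h(x1, x2, ...) M
print(np.allclose(de_mutation(x1 @ M, x2 @ M, x3 @ M),
                  de_mutation(x1, x2, x3) @ M))   # True
print(np.allclose(sbx_like(x1 @ M, x2 @ M, beta),
                  sbx_like(x1, x2, beta) @ M))    # False in general
\end{verbatim}
The first check passes for any orthogonal $M$ because the DE mutation is a fixed linear combination of its parents, whereas the second fails because the dimension-wise weights do not commute with the rotation, mirroring the condition $MB=BM$ derived above.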
\begin{table}[!t] \footnotesize \renewcommand{\arraystretch}{1.3} \centering \caption{Minimum Objective Values Obtained by DE and SBX Based GA on Six Problems with and without Rotation.} \label{tab:example3} \setlength{\tabcolsep}{0.8mm}{ \begin{tabular}{c|c|c} \hline Original problem&DE&\multirow{2}{*}{SBX based GA}\\ ($x_1,\dots,x_{30}\in[-10,10]$)&(rotation invariant)\\ \hline Schwefel's Function 2.22&1.9799e+1 (2.08e+0)&\hl{5.4310e+0 (2.13e+0)}\\ Schwefel's Function 2.21&\hl{1.5589e+0 (1.87e-1)}&4.5792e+0 (1.21e+0)\\ Quartic Function&\hl{1.8303e+2 (9.67e+1)}&1.6425e+3 (1.03e+3)\\ Griewank Function&4.9032e-1 (1.01e-1)&\hl{4.7700e-1 (1.87e-1)}\\ Ackley's Function&3.7861e+0 (2.96e-1)&\hl{3.3728e+0 (3.75e-1)}\\ Rastrigin's Function&2.4783e+2 (2.34e+1)&\hl{4.2269e+1 (1.08e+1)}\\ \hline\hline Rotated problem&DE&\multirow{2}{*}{SBX based GA}\\ ($\mathbf{x}^\prime=\mathbf{x}M$)&(rotation invariant)\\ \hline Schwefel's Function 2.22&\hl{1.7401e+1 (3.38e+0)}&2.8069e+1 (4.32e+0)\\ Schwefel's Function 2.21&\hl{1.5535e+0 (2.21e-1)}&2.0725e+0 (4.82e-1)\\ Quartic Function&\hl{1.2403e+2 (7.80e+1)}&9.8479e+2 (1.24e+3)\\ Griewank Function&\hl{4.6714e-1 (7.22e-2)}&4.9478e-1 (1.20e-1)\\ Ackley's Function&\hl{3.8477e+0 (2.24e-1)}&4.0805e+0 (5.49e-1)\\ Rastrigin's Function&\hl{2.4588e+2 (1.23e+1)}&2.5095e+2 (1.12e+1)\\ \hline \end{tabular}} \end{table} \section{Sufficient and Necessary Condition of Achieving Invariance Properties} It can be concluded from the above analysis that the three invariance properties are critical to the robustness of operators, and we can judge whether an operator possesses these properties according to their definitions. However, it is still tricky to consider them in the design of new operators explicitly. Therefore, this section deduces the generic form of translation, scale, and rotation invariant operators in three steps. \subsection{Theoretical Derivation} Firstly, let $\mathbf{w}=(w_1,w_2,\dots)=(x_{1d},x_{2d},\dots)$; then a translation invariant operator should satisfy \begin{equation} h(\mathbf{w}+b)=h(\mathbf{w})+b\ . \label{equ:deduce1} \end{equation} To find the generic form of $h(\mathbf{w})$, it is first expanded in a first order Taylor series at an arbitrary point $\mathbf{w}_0$: \begin{equation} h(\mathbf{w})=h(\mathbf{w}_0)+(\mathbf{w}-\mathbf{w}_0)\nabla h(\mathbf{w}_0^\prime)\ , \end{equation} where $\nabla=(\frac{\partial}{\partial w_1},\frac{\partial}{\partial w_2},\dots)^T$ and $\mathbf{w}_0^\prime$ is an unknown point. Based on (\ref{equ:deduce1}), we have \begin{equation} \begin{aligned} &h(\mathbf{w}_0)+(\mathbf{w}+b-\mathbf{w}_0)\nabla h(\mathbf{w}_0^\prime)\\ &\qquad=h(\mathbf{w}_0)+(\mathbf{w}-\mathbf{w}_0)\nabla h(\mathbf{w}_0^{\prime\prime})+b\ , \end{aligned} \label{equ:deduce2} \end{equation} where $\mathbf{w}_0^\prime$ and $\mathbf{w}_0^{\prime\prime}$ are unknown points determined by $\mathbf{w}_0$. Since (\ref{equ:deduce2}) holds for any $b$, we only consider the components including $b$ in (\ref{equ:deduce2}): \begin{equation} bI\nabla h(\mathbf{w}_0^\prime)=b\ , \end{equation} where $I$ is a vector of ones. That is, \begin{equation} I\nabla h(\mathbf{w}_0^\prime)=1 \end{equation} holds for any $\mathbf{w}_0^\prime$, which means that \begin{equation} \frac{\partial h}{\partial w_1}+\frac{\partial h}{\partial w_2}+\cdots=1\ . \label{equ:deduce3} \end{equation} Obviously, (\ref{equ:deduce3}) is a quasilinear first-order nonhomogeneous partial differential equation \cite{PDE}.
Let $h=h(\mathbf{w})$ be the solution of (\ref{equ:deduce3}) determined by a function $g(\mathbf{w},h)=0$; then the following homogeneous partial differential equation can be obtained: \begin{equation} \frac{\partial g}{\partial h}+\frac{\partial g}{\partial w_1}+\frac{\partial g}{\partial w_2}+\cdots=0\ , \label{equ:deduce4} \end{equation} and thus the following first integrals can be obtained: \begin{equation} \left\{ \begin{aligned} h-w_1=&\ c_1\\ w_1-w_2=&\ c_2\\ w_2-w_3=&\ c_3\\ \cdots\ \ \cdots \end{aligned} \right., \end{equation} Therefore, the solution of (\ref{equ:deduce4}) is \begin{equation} g=g(h-w_1,w_1-w_2,w_2-w_3,\dots)\ . \end{equation} Since $g=0$, there must exist a function $\psi$ such that \begin{equation} h-w_1=\psi(w_1-w_2,w_2-w_3,\dots)\ , \end{equation} which is equivalent to \begin{equation} h(\mathbf{w})=w_1+\psi(w_2-w_1,w_3-w_2,\dots)\ . \label{equ:deduce5} \end{equation} Moreover, it is obvious that (\ref{equ:deduce5}) satisfies (\ref{equ:deduce1}), hence (\ref{equ:deduce5}) is a sufficient and necessary condition of (\ref{equ:deduce1}), and the following theorem can be given: \begin{theorem}[\textbf{Translation invariant operator}] A continuously differentiable variation operator $h(x_{1d},x_{2d},\dots)$ is translation invariant if and only if it has the following form: \begin{equation} h(x_{1d},x_{2d},\dots)=x_{1d}+\psi(x_{2d}-x_{1d},x_{3d}-x_{2d},\dots)\ , \end{equation} where $\psi$ can be any continuously differentiable function. \end{theorem} Secondly, a scale invariant operator should satisfy \begin{equation} h(a\mathbf{w})=a\cdot h(\mathbf{w})\ . \label{equ:deduce9} \end{equation} According to (\ref{equ:deduce5}), a translation and scale invariant operator should satisfy \begin{equation} \begin{aligned} &aw_1+\psi(aw_2-aw_1,aw_3-aw_2,\dots)\\ &\qquad=aw_1+a\cdot\psi(w_2-w_1,w_3-w_2,\dots)\ . \end{aligned} \end{equation} Letting $\mathbf{v}=(v_1,v_2,\dots)=(w_2-w_1,w_3-w_2,\dots)$, we have \begin{equation} \psi(a\mathbf{v})=a\cdot \psi(\mathbf{v}), \end{equation} and the first order Taylor series expansion at an arbitrary point $\mathbf{v}_0$ is \begin{equation} \begin{aligned} &\psi(\mathbf{v}_0)+(a\mathbf{v}-\mathbf{v}_0)\nabla \psi(\mathbf{v}_0^\prime)\\ &\qquad=a\psi(\mathbf{v}_0)+a(\mathbf{v}-\mathbf{v}_0)\nabla \psi(\mathbf{v}_0^{\prime\prime})\ , \end{aligned} \label{equ:deduce6} \end{equation} where $\mathbf{v}_0^\prime$ and $\mathbf{v}_0^{\prime\prime}$ are unknown points determined by $\mathbf{v}_0$. Since (\ref{equ:deduce6}) holds for any $a$, we only consider the components including $a$ in (\ref{equ:deduce6}), that is, \begin{equation} \psi(\mathbf{v}_0)=\mathbf{v}_0\nabla \psi(\mathbf{v}_0^\prime) \end{equation} must hold for any $\mathbf{v}_0$ and $\mathbf{v}_0^\prime$, which means that \begin{equation} v_1\frac{\partial \psi}{\partial v_1}+v_2\frac{\partial \psi}{\partial v_2}+\cdots=\psi\ . \end{equation} Letting $g(\mathbf{v},\psi)=0$, the following homogeneous partial differential equation can be obtained: \begin{equation} \psi\frac{\partial g}{\partial \psi}+v_1\frac{\partial g}{\partial v_1}+v_2\frac{\partial g}{\partial v_2}+\cdots=0\ , \label{equ:deduce7} \end{equation} and thus the following first integrals can be obtained: \begin{equation} \left\{ \begin{aligned} \ln \psi-\ln v_1=&\ c_1\\ \ln v_1-\ln v_2=&\ c_2\\ \ln v_2-\ln v_3=&\ c_3\\ \cdots\ \ \cdots \end{aligned} \right., \end{equation} Therefore, the solution of (\ref{equ:deduce7}) is \begin{equation} g=g(\ln \psi-\ln v_1,\ln v_1-\ln v_2,\ln v_2-\ln v_3,\dots)\ .
\end{equation} Since $g=0$, there must exist a function $\varphi$ such that \begin{equation} \ln \psi-\ln v_1=\varphi(\ln v_1-\ln v_2,\ln v_2-\ln v_3,\dots)\ , \end{equation} hence the generic form of $\psi(\mathbf{v})$ can be determined: \begin{equation} \ln \psi(\mathbf{v})=\ln v_1 + \varphi\left(\ln \frac{v_1}{v_2},\ln \frac{v_2}{v_3},\dots\right)\ . \end{equation} Let $\varphi(u_1,u_2,\dots)=\ln \phi(e^{-u_1},e^{-u_2},\dots)$, then \begin{equation} \psi(\mathbf{v})=v_1\phi\left(\frac{v_2}{v_1},\frac{v_3}{v_2},\dots\right)\ . \end{equation} According to (\ref{equ:deduce5}), we have \begin{equation} h(\mathbf{w})=w_1+(w_2-w_1)\phi\left(\frac{w_3-w_2}{w_2-w_1},\frac{w_4-w_3}{w_3-w_2},\dots\right). \label{equ:deduce8} \end{equation} Moreover, it is obvious that (\ref{equ:deduce8}) satisfies both (\ref{equ:deduce1}) and (\ref{equ:deduce9}), hence (\ref{equ:deduce8}) is a sufficient and necessary condition of (\ref{equ:deduce1}) and (\ref{equ:deduce9}), and the following theorem can be given: \begin{theorem}[\textbf{Translation and scale invariant operator}] A continuously differentiable variation operator $h(x_{1d},x_{2d},\dots)$ is translation and scale invariant if and only if it has the following form: \begin{equation} \footnotesize h(x_{1d},x_{2d},\dots)=x_{1d}+(x_{2d}-x_{1d})\phi\left(\frac{x_{3d}-x_{2d}}{x_{2d}-x_{1d}},\frac{x_{4d}-x_{3d}}{x_{3d}-x_{2d}},\dots\right), \end{equation} where $\phi$ can be any continuously differentiable function. \end{theorem} Thirdly, a rotation invariant operator should satisfy \begin{equation} \footnotesize h\left(\sum_{i=1}^{D}m_{id}x_{1i},\sum_{i=1}^{D}m_{id}x_{2i},\dots\right)=\sum_{i=1}^{D}m_{id}\cdot h(x_{1i},x_{2i},\dots)\ , \label{equ:deduce10} \end{equation} where $m_{id}\in M$ and $d=1,\dots,D$. Let $\phi(\mathbf{w})=\varphi(w_1,\prod_{i=1}^{2}w_i,\prod_{i=1}^{3}w_i,\dots)$, then (\ref{equ:deduce8}) is equivalent to \begin{equation} h(\mathbf{w})=w_1+(w_2-w_1)\varphi\left(\frac{w_3-w_2}{w_2-w_1},\frac{w_4-w_3}{w_2-w_1},\dots\right)\ . \label{equ:deduce12} \end{equation} According to (\ref{equ:deduce10}), a translation, scale, and rotation invariant operator should satisfy \begin{equation} \begin{aligned} &\sum_{i=1}^{D}m_{id}x_{1i}+\left(\sum_{i=1}^{D}m_{id}x_{2i}-\sum_{i=1}^{D}m_{id}x_{1i}\right)\\ &\cdot\varphi\left(\frac{\sum_{i=1}^{D}m_{id}x_{3i}-\sum_{i=1}^{D}m_{id}x_{2i}}{\sum_{i=1}^{D}m_{id}x_{2i}-\sum_{i=1}^{D}m_{id}x_{1i}},\dots\right)\\ &=\sum_{i=1}^{D}m_{id}\left[x_{1i}+(x_{2i}-x_{1i})\varphi\left(\frac{x_{3i}-x_{2i}}{x_{2i}-x_{1i}},\dots\right)\right]\ , \end{aligned} \end{equation} which is equivalent to \begin{equation} \begin{aligned} &\sum_{i=1}^{D}m_{id}(x_{2i}-x_{1i})\cdot\varphi\left(\frac{\sum_{i=1}^{D}m_{id}(x_{3i}-x_{2i})}{\sum_{i=1}^{D}m_{id}(x_{2i}-x_{1i})},\dots\right)\\ &=\sum_{i=1}^{D}m_{id}(x_{2i}-x_{1i})\varphi\left(\frac{m_{id}(x_{3i}-x_{2i})}{m_{id}(x_{2i}-x_{1i})},\dots\right)\ .
\end{aligned} \end{equation} Let $\mathbf{u}_i=(u_{i1},u_{i2},\dots)=(m_{id}(x_{2i}-x_{1i}),m_{id}(x_{3i}-x_{2i}),\dots)$, we have \begin{equation} \sum_{i=1}^{D}u_{i1}\cdot\varphi\left(\frac{\sum_{i=1}^{D}u_{i2}}{\sum_{i=1}^{D}u_{i1}},\dots\right)=\sum_{i=1}^{D}u_{i1}\varphi\left(\frac{u_{i2}}{u_{i1}},\dots\right)\ , \end{equation} and the first order Taylor series expansion at an arbitrary point $\mathbf{u}_0=(u_{01},u_{02},\dots)$ is \begin{equation} \begin{aligned} &\sum_{i=1}^{D}u_{i1}\cdot\left[\varphi(\mathbf{u}_0)+\left(\frac{\sum_{i=1}^{D}u_{i2}}{\sum_{i=1}^{D}u_{i1}}-u_{01},\dots\right)\nabla \varphi(\mathbf{u}_0^\prime)\right]\\ &=u_{11}\cdot\left[\varphi(\mathbf{u}_0)+\left(\frac{u_{12}}{u_{11}}-u_{01},\dots\right)\nabla \varphi(\mathbf{u}_0^{\prime\prime})\right]\\ &+u_{21}\cdot\left[\varphi(\mathbf{u}_0)+\left(\frac{u_{22}}{u_{21}}-u_{01},\dots\right)\nabla \varphi(\mathbf{u}_0^{\prime\prime\prime})\right]\\ &+\cdots\ , \end{aligned} \end{equation} which can be simplified as \begin{equation} \begin{aligned} &\left(\sum_{i=1}^{D}u_{i2}-u_{01}\sum_{i=1}^{D}u_{i1},\dots\right)\nabla \varphi(\mathbf{u}_0^\prime)\qquad\\ &\qquad=\left(u_{12}-u_{01}u_{11},\dots\right)\nabla \varphi(\mathbf{u}_0^{\prime\prime})\\ &\qquad+\left(u_{22}-u_{01}u_{21},\dots\right)\nabla \varphi(\mathbf{u}_0^{\prime\prime\prime})\\ &\qquad+\cdots\ , \end{aligned} \label{equ:deduce11} \end{equation} where $\mathbf{u}_0^\prime,\mathbf{u}_0^{\prime\prime},\dots$ are unknown points determined by $\mathbf{u}_0$. Since (\ref{equ:deduce11}) holds for any $u_{12},u_{22},\dots$, we have \begin{equation} \nabla\varphi(\mathbf{u}_0^\prime)=\nabla\varphi(\mathbf{u}_0^{\prime\prime})=\nabla\varphi(\mathbf{u}_0^{\prime\prime\prime})=\cdots=\mathbf{c}\ , \end{equation} and the form of $\varphi(\mathbf{u})$ can only be \begin{equation} \varphi(\mathbf{u})=c_0+c_1u_1+c_2u_2+\cdots\ , \end{equation} where $c_0,c_1,c_2,\dots$ are constants. According to (\ref{equ:deduce12}), \begin{equation} h(\mathbf{w})=w_1+c_0(w_2-w_1)+c_1(w_3-w_2)+c_2(w_4-w_3)+\cdots\ . \end{equation} Let \begin{equation} \left\{ \begin{aligned} &\ 1-c_0=r_1\\ &c_0-c_1=r_2\\ &c_1-c_2=r_3\\ &\cdots\ \ \cdots\\ \end{aligned} \right.\ , \end{equation} we have \begin{equation} h(\mathbf{w})=r_1w_1+r_2w_2+r_3w_3+\cdots \label{equ:deduce13} \end{equation} and $r_1+r_2+r_3+\cdots=1$. Moreover, it is obvious that (\ref{equ:deduce13}) satisfies (\ref{equ:deduce1}), (\ref{equ:deduce9}), and (\ref{equ:deduce10}), hence (\ref{equ:deduce13}) is a sufficient and necessary condition of (\ref{equ:deduce1}), (\ref{equ:deduce9}), and (\ref{equ:deduce10}), and the following theorem can be given: \begin{theorem}[\textbf{Translation, scale, and rotation invariant operator}] A continuously differentiable variation operator $h(x_{1d},x_{2d},\dots)$ is translation, scale, and rotation invariant if and only if it has the following form: \begin{equation} h(x_{1d},x_{2d},\dots)=r_1x_{1d}+r_2x_{2d}+r_3x_{3d}+\cdots\ , \label{equ:example2} \end{equation} where $r_1,r_2,r_3,\dots$ can be any real constants satisfying $r_1+r_2+r_3+\cdots=1$. \end{theorem} \subsection{Remarks} According to the above theorem, the following three corollaries can be given: \begin{enumerate} \item An operator satisfying (\ref{equ:example2}) is scale invariant. \item An operator satisfying (\ref{equ:example2}) with $r_1+r_2+r_3+\cdots=1$ is scale and translation invariant. \item An operator satisfying (\ref{equ:example2}) with $r_1,r_2,r_3,\dots$ being constants is scale and rotation invariant. 
\end{enumerate} The theoretical proofs of these corollaries are given in the following. \begin{corollary} If a variation operator satisfies \begin{equation} h(x_{1d},x_{2d},\dots)=r_1x_{1d}+r_2x_{2d}+r_3x_{3d}+\cdots\ , \end{equation} then it is scale invariant. \end{corollary} \begin{IEEEproof} Since \begin{equation} \begin{aligned} h(ax_{1d},ax_{2d},\dots)=&\ ar_1x_{1d}+ar_2x_{2d}+ar_3x_{3d}+\cdots\\ =&\ ah(x_{1d},x_{2d},\dots)\ , \end{aligned} \end{equation} the operator is scale invariant. \end{IEEEproof} \begin{corollary} If a variation operator satisfies \begin{equation} h(x_{1d},x_{2d},\dots)=r_1x_{1d}+r_2x_{2d}+r_3x_{3d}+\cdots \end{equation} with $r_1+r_2+r_3+\cdots=1$, then it is scale and translation invariant. \end{corollary} \begin{IEEEproof} Since \begin{equation} \begin{aligned} &h(x_{1d}+b,x_{2d}+b,\dots)\\ &\quad=r_1(x_{1d}+b)+r_2(x_{2d}+b)+r_3(x_{3d}+b)+\cdots\\ &\quad=r_1x_{1d}+r_2x_{2d}+r_3x_{3d}+\cdots+(r_1+r_2+r_3+\cdots)b\\ &\quad=h(x_{1d},x_{2d},\dots)+b\ , \end{aligned} \end{equation} according to Corollary 1, the operator is scale and translation invariant. \end{IEEEproof} \begin{corollary} If a variation operator satisfies \begin{equation} h(x_{1d},x_{2d},\dots)=r_1x_{1d}+r_2x_{2d}+r_3x_{3d}+\cdots \end{equation} with $r_1,r_2,r_3,\dots$ being constants, then it is scale and rotation invariant. \end{corollary} \begin{IEEEproof} Since $r_1,r_2,r_3,\dots$ are constants, we have \begin{equation} h(\mathbf{x}_1,\mathbf{x}_2,\dots)=r_1\mathbf{x}_1+r_2\mathbf{x}_2+r_3\mathbf{x}_3+\cdots\ . \end{equation} Therefore, \begin{equation} \begin{aligned} &h(\mathbf{x}_1M,\mathbf{x}_2M,\dots)\\ &\qquad=r_1(\mathbf{x}_1M)+r_2(\mathbf{x}_2M)+r_3(\mathbf{x}_3M)+\cdots\\ &\qquad=(r_1\mathbf{x}_1)M+(r_2\mathbf{x}_2)M+(r_3\mathbf{x}_3)M+\cdots\\ &\qquad=h(\mathbf{x}_1,\mathbf{x}_2,\dots)M\ , \end{aligned} \end{equation} according to Corollary 1, the operator is scale and rotation invariant. \end{IEEEproof} Generally, most existing operators including those in GA, DE, and CMA-ES satisfy the conditions of the second corollary and are thus scale and translation invariant. However, the operators in GA and CMA-ES are not rotation invariant since the weights (i.e., $\beta$ in (\ref{equ:SBX}) and $\mathcal{N}(0,{\sigma^\prime_d}^2c)$ in (\ref{equ:example1})) vary on different dimensions. By contrast, the mutation operator of DE is rotation invariant since the weights (i.e., $F$ in (\ref{equ:DE})) keep unchanged on all dimensions. It should be noted that a rotation invariant operator does not necessarily lead to a rotation invariant metaheuristic, and vice versa. For example, the mutation operator of DE is rotation invariant, but DE is not rotation invariant due to the crossover operator with $CR<1$ \cite{Caraffini2019Study}. The operator of CMA-ES is not rotation invariant, but CMA-ES is rotation invariant due to the rotation angle adaptive covariance matrix $C$ \cite{algorithm-CMAES}. In fact, a rotation invariant operator may not obtain good performance since offspring solutions can only be the linear combinations of parents. In practice, the rotation invariance can be achieved by rotating offspring solutions according to a covariance matrix learnt from the population \cite{algorithm-CMAES,Pan2021Adaptive}. \section{Principled Approach for Designing Operators} Equation (\ref{equ:example2}) gives the generic form of translation, scale, and rotation invariant operators, but it does not specify the optimal values of the weights $r_1,r_2,r_3,\dots$.
Therefore, this section proposes a principled approach for designing new operators, which searches for high-performance operators by optimizing the weights. \subsection{Parameterization of Variation Operators} In order to introduce randomness, the proposed approach AutoV represents each weight by an independent normal distribution, and it optimizes the mean and variance of each normal distribution instead of the weight. Formally, the proposed AutoV searches for operators of the following form: \begin{equation} \begin{aligned} h(x_{1d},\dots,x_{td})=&\ \sum_{i=1}^{t}r_ix_{id}\\ {\rm s.\,t.}\ \ \sum_{i=1}^{t}r_i=&\ 1,\ r_{i}\sim\mathcal{N}(\mu_i,\sigma_i^2) \end{aligned} \ , \label{equ:AutoV1} \end{equation} where $\mu_i\in[-1,1]$ and $\sigma_i\in[0,1]$ are the parameters to be optimized. Note that the value of $r_i$ remains unchanged for $d=1,\dots,D$, hence $r_i$ is still a constant for generating each offspring solution. In the following, we demonstrate that an algorithm equipped with the above operator can converge to the global optimum of continuous optimization problems if given sufficient time. \begin{theorem} Given a continuous function $f(\mathbf{x})$ and its global optimal solution $\mathbf{x}^\ast$, for any $\epsilon>0$ and any metaheuristic based on an elite strategy and a variation operator satisfying \begin{equation} \begin{aligned} h(x_{1d},\dots,x_{td})=&\ (1-\sum_{i=2}^{t}r_i)x_{1d}+\sum_{i=2}^{t}r_ix_{id}\\ {\rm s.\,t.}\ \ r_{i+1}\sim&\ \mathcal{N}(\mu_i,\sigma_i^2),\ i=1,\dots,t-1 \end{aligned} \ , \label{equ:AutoV1b} \end{equation} we have \begin{equation} \lim_{g\rightarrow\infty}P(|f(\mathbf{x}^g)-f(\mathbf{x}^\ast)|<\epsilon)=1\ , \end{equation} where $\mathbf{x}^g$ denotes the best solution in the population at the $g$-th generation. \end{theorem} \begin{IEEEproof} Regarding the evolutionary procedure as a finite-dimension Markov chain \cite{Fogel1994Asymptotic}, let \begin{equation} \begin{aligned} s_1:&\ |f(\mathbf{x}^g)-f(\mathbf{x}^\ast)|<\epsilon\\ s_2:&\ |f(\mathbf{x}^g)-f(\mathbf{x}^\ast)|\geq\epsilon \end{aligned} \ , \end{equation} then the one-step transition probabilities $P(s_1|s_1)=1$ and $P(s_2|s_1)=0$, since the elite strategy always retains the best solution. Now we focus on the transition probabilities $P(s_1|s_2)$ and $P(s_2|s_2)$. Since $f$ is a continuous function, there must exist an $\varepsilon>0$ such that $|f(\mathbf{x})-f(\mathbf{x}^\ast)|<\epsilon$ when $\|\mathbf{x}-\mathbf{x}^\ast\|_\infty<\varepsilon$, where $\|\mathbf{x}-\mathbf{x}^\ast\|_\infty$ denotes the largest difference between $\mathbf{x}$ and $\mathbf{x}^\ast$ over all the $D$ dimensions. According to equation (\ref{equ:AutoV1b}), \begin{equation} \begin{aligned} h(x_{1d},\dots,x_{td})=&\ (1-\sum_{i=2}^{t}r_i)x_{1d}+\sum_{i=2}^{t}r_ix_{id}\\ =&\ x_{1d}+\sum_{i=2}^{t}r_i(x_{id}-x_{1d})\\ =&\ x_{1d}+\sum_{i=2}^{t}r_i^\prime\ , \end{aligned} \end{equation} where \begin{equation} r^\prime_{i+1}\sim\mathcal{N}(\mu_i(x_{(i+1)d}-x_{1d}),\sigma_i^2(x_{(i+1)d}-x_{1d})^2)\ . \end{equation} Therefore, \begin{equation} \begin{aligned} &P\left(\|h(\mathbf{x}_1,\dots,\mathbf{x}_t)-\mathbf{x}^\ast\|_\infty<\varepsilon\right)\\ =&\ \prod_{d=1}^{D}P\left(|h(x_{1d},\dots,x_{td})-x_d^\ast|<\varepsilon\right)\\ =&\ \prod_{d=1}^{D}P\left(\left|x_{1d}+\sum_{i=2}^{t}r_{id}^\prime-x_d^\ast\right|<\varepsilon\right)\\ =&\ \prod_{d=1}^{D}P\left(x_d^\ast-x_{1d}-\varepsilon<\sum_{i=2}^{t}r_{id}^\prime<x_d^\ast-x_{1d}+\varepsilon\right)\ .
\end{aligned} \end{equation} Since all the $r_{(i+1)d}^\prime$ ($i=1,\dots,t-1$) are normally distributed, \begin{equation} \begin{aligned} &P\left(\frac{x_d^\ast-x_{1d}-\varepsilon}{t-1}<r_{(i+1)d}^\prime<\frac{x_d^\ast-x_{1d}+\varepsilon}{t-1}\right)\\ =&\ \int_{\frac{x_d^\ast-x_{1d}-\varepsilon}{t-1}}^{\frac{x_d^\ast-x_{1d}+\varepsilon}{t-1}}\frac{e^{-\frac{(c-\mu_i(x_{(i+1)d}-x_{1d}))^2}{2\sigma_i^2(x_{(i+1)d}-x_{1d})^2}}}{\sigma_i|x_{(i+1)d}-x_{1d}|\sqrt{2\pi}}dc>0\ , \end{aligned} \end{equation} and the joint occurrence of these events for $i=1,\dots,t-1$ implies $x_d^\ast-x_{1d}-\varepsilon<\sum_{i=2}^{t}r_{id}^\prime<x_d^\ast-x_{1d}+\varepsilon$, and thus \begin{equation} P\left(\|h(\mathbf{x}_1,\dots,\mathbf{x}_t)-\mathbf{x}^\ast\|_\infty<\varepsilon\right)>0\ , \end{equation} which further indicates that $P(s_1|s_2)>0$. Since $P(s_1|s_2)+P(s_2|s_2)=1$, we have $P(s_2|s_2)<1$. Therefore, \begin{equation} \lim_{g\rightarrow\infty}\left[P(s_2|s_2)\right]^g=0\ , \end{equation} which means that the probability of $s_1$, i.e., $|f(\mathbf{x}^g)-f(\mathbf{x}^\ast)|<\epsilon$, tends to 1 as $g\rightarrow\infty$. \end{IEEEproof} It is worth noting that some existing operators include multiple parameter sets to be selected with given probabilities (e.g., $\beta$ in SBX), which provide complex search behaviors and can better balance exploitation and exploration. Hence, the ensemble of multiple parameter sets is also adopted in AutoV. Moreover, the first weight $r_1$ can be omitted since it can be directly set to $1-r_2-r_3-\cdots$. To summarize, an operator in AutoV is determined by the following matrix \begin{equation} \left( \begin{aligned} \mu_{12},\sigma_{12},\mu_{13},\sigma_{13},\dots,\mu_{1t},\sigma_{1t},p_1\\ \mu_{22},\sigma_{22},\mu_{23},\sigma_{23},\dots,\mu_{2t},\sigma_{2t},p_2\\ \dots\ \ \dots\ \ \ \ \ \ \ \ \ \ \ \ \ \\ \mu_{k2},\sigma_{k2},\mu_{k3},\sigma_{k3},\dots,\mu_{kt},\sigma_{kt},p_k \end{aligned} \right)\ , \label{equ:AutoV2} \end{equation} where $\mu_{ji}$ and $\sigma_{ji}$ denote the mean and variance of weight $r_i$ in the $j$-th set, respectively, and $p_j$ denotes the probability of selecting the $j$-th set. When generating a decision variable of an offspring solution, the roulette-wheel selection is first used to select a parameter set (i.e., one row of (\ref{equ:AutoV2})) according to the probabilities $p_j$. Then, the decision variable is generated by sampling the given normal distribution. In this way, the search for high-performance operators can be formulated as a continuous optimization problem, whose decision vector contains all the elements in (\ref{equ:AutoV2}) and whose objective is the performance of the corresponding operator. Generally, it is easy to measure the fitness by investigating the performance of a metaheuristic equipped with the operator, but it is difficult to optimize the decision vector since AutoV does not rely on any prior knowledge, i.e., none of the existing optimizers are used to evolve a population for solving the problem. To address this issue, the proposed AutoV suggests a novel evolutionary procedure, in which the population can be evolved by itself. \subsection{Procedure of the Principled Approach} \begin{figure*}[!t] \centering \includegraphics[width=1\linewidth]{flow.eps} \caption{Procedure of AutoV.} \label{fig:framework} \end{figure*} The procedure of the proposed AutoV is detailed in Fig.~\ref{fig:framework} and Algorithm~\ref{alg:main}. Given a benchmark problem for performance measurement, AutoV first randomly initializes a population $P$ and evaluates each solution in $P$, where each solution denotes a parameter matrix defined in (\ref{equ:AutoV2}).
At each generation, AutoV selects a number of parents from $P$ via binary tournament selection, and then uses the parents to generate an offspring population by using the variation operator parameterized by the best solution in $P$. Afterwards, the offspring population is combined with $P$, and half the solutions with better fitness survive for the next generation. The proposed AutoV repeats the above steps until the termination criterion is fulfilled, and returns the solution with the best fitness as the found variation operator. \begin{algorithm}[t] \caption{Procedure of AutoV} \label{alg:main} \SetKwComment{Comment}{//}{} \KwIn{$f$ (a benchmark problem)} \KwOut{$\mathbf{p}$ (the best solution)} $P\leftarrow$ Randomly initialize a population, where each solution is a parameter matrix defined in (\ref{equ:AutoV2})\; \For{each $\mathbf{x}\in P$} { Evaluate the objective value of $\mathbf{x}$ by $Evaluation(\mathbf{x},f)$\; } \While{termination criterion is not fulfilled} { $P^\prime\leftarrow$ Select parents from $P$ via binary tournament selection\; $\mathbf{p}\leftarrow$ The best solution in $P$\; $O\leftarrow$ Use the operator $\mathbf{p}$ to generate offspring based on parents $P^\prime$\; \For{each $\mathbf{o}\in O$} { Evaluate the objective value of $\mathbf{o}$ by $Evaluation(\mathbf{o},f)$\; } $P\leftarrow P\cup O$\; $P\leftarrow$ Half the solutions in $P$ with better fitness\; } $\mathbf{p}\leftarrow$ The best solution in $P$\; \textbf{return} $\mathbf{p}$\; \end{algorithm} Since the goal of AutoV is to create high-performance variation operators, it uses the best operator found so far to evolve the population for finding a better operator, and the better operator can then evolve the population for finding a much better operator, where none of the existing metaheuristics are used. For the fitness evaluation of each candidate operator, a simple metaheuristic is established by adopting the candidate operator. As presented in Algorithm~\ref{alg:main2}, the best objective value found by the established metaheuristic on the given benchmark problem is regarded as the fitness of the candidate operator. Besides, we find that a candidate operator may be over-optimized for the given benchmark problem, which means that it may hit a very good solution of the given benchmark problem by accident even though it cannot evolve the population steadily. To solve this issue, AutoV executes the established metaheuristic for multiple runs and uses the median value of the found best objective values as the fitness.
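As an illustration of how an operator encoded by (\ref{equ:AutoV2}) produces offspring inside Algorithm~\ref{alg:main} and Algorithm~\ref{alg:main2}, the following minimal sketch (written in Python with NumPy; the array layout, function name, and example values are our own and are not taken from any released implementation) performs the roulette-wheel selection of a parameter set and the normal sampling of the weights, with $\sigma_{ji}$ used as a standard deviation in accordance with $r_i\sim\mathcal{N}(\mu_i,\sigma_i^2)$:
\begin{verbatim}
import numpy as np

def sample_variable(param_matrix, x1d, other_parents, rng):
    # param_matrix : (k, 2*(t-1)+1) array, each row holding
    #                [mu_2, sigma_2, ..., mu_t, sigma_t, p].
    # x1d          : decision variable of the first parent.
    # other_parents: variables of the remaining t-1 parents,
    #                e.g. [x2d, l_d, u_d] for the h_3 form.
    p = param_matrix[:, -1] / param_matrix[:, -1].sum()  # roulette wheel
    row = param_matrix[rng.choice(param_matrix.shape[0], p=p)]
    mu, sigma = row[0:-1:2], row[1:-1:2]                 # means / std devs
    r = rng.normal(mu, sigma)                            # weights r_2,...,r_t
    # r_1 is fixed implicitly so that all weights sum to one
    return (1.0 - r.sum()) * x1d + np.dot(r, other_parents)

rng = np.random.default_rng(0)
matrix = rng.uniform(0.0, 1.0, size=(10, 7))   # k = 10 sets, t = 4 parents
print(sample_variable(matrix, 0.3, [0.7, -5.0, 5.0], rng))
\end{verbatim}
Depending on the design, the sampled weights can be redrawn for every decision variable or kept fixed over all $D$ dimensions of an offspring, as in the remark below (\ref{equ:AutoV1}).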
\begin{algorithm}[t] \caption{$Evaluation(\mathbf{p},f)$} \label{alg:main2} \SetKwComment{Comment}{//}{} \KwIn{$\mathbf{p}$ (parameter matrix of an operator), $f$ (a benchmark problem)} \KwOut{$fit$ (fitness of $\mathbf{p}$)} $fit\leftarrow\emptyset$\; \For{$run=1$ to $maxRun$} { $P\leftarrow$ Randomly initialize a population, where each solution is a decision vector for $f$\; \For{each $\mathbf{x}\in P$} { Evaluate the objective of $\mathbf{x}$ by $f(\mathbf{x})$\; } \While{termination criterion is not fulfilled} { $P^\prime\leftarrow$ Select parents from $P$ via binary tournament selection\; $O\leftarrow$ Use the operator $\mathbf{p}$ to generate offspring based on parents $P^\prime$\; \For{each $\mathbf{o}\in O$} { Evaluate the objective of $\mathbf{o}$ by $f(\mathbf{o})$; } $P\leftarrow P\cup O$\; $P\leftarrow$ Half the solutions in $P$ with better fitness\; } $\mathbf{x}\leftarrow$ The best solution in $P$\; $fit\leftarrow fit\cup\{f(\mathbf{x})\}$\; } $fit\leftarrow$ Median value of $fit$\; \textbf{return} $fit$\; \end{algorithm} In the experiments, we consider the following five different sets of parents as the input of (\ref{equ:AutoV1}) for generating one offspring solution: \begin{equation} \left\{ \begin{aligned} h_1=&\ h(x_{1d},x_{2d})\\ h_2=&\ h(x_{1d},l_d,u_d)\\ h_3=&\ h(x_{1d},x_{2d},l_d,u_d)\\ h_4=&\ h(x_{1d},x_{2d},x_{3d})\\ h_5=&\ h(x_{1d},x_{2d},x_{3d},l_d,u_d)\\ \end{aligned} \right., \label{equ:operatorfunction} \end{equation} where $x_{1d},x_{2d},x_{3d}$ denote the decision variables of three parents, $l_d$ denotes the lower bound, and $u_d$ denotes the upper bound on the $d$-th dimension. \section{Experimental Studies} \begin{table}[!t] \renewcommand{\arraystretch}{1.2} \footnotesize \centering \caption{Parameter Settings of the Compared Metaheuristics, Where $D$ is the Number of Decision Variables, $\mathbf{u}$ is the Upper Bound, $\mathbf{l}$ is the Lower Bound, and $n=100$ is the Population Size.} \label{tab:setting} \setlength{\tabcolsep}{0.8mm}{ \begin{tabular}{c|l} \hline Metaheuristic&\makebox[5cm][c]{\ \ \ \ \ \ \ Parameter setting}\\ \hline GA \cite{Holland1992Adaptation}&crossover probability: 1,\\ (based on SBX \cite{Operator-SBX} and&mutation probability: $1/D$,\\ polynomial mutation \cite{Operator-polynomial-mutation})&distribution index: 20\\ \hline PSO \cite{Operator-PSO}&inertia weight: 0.4\\ \hline DE \cite{Operator-DE}&\multirow{2}{*}{$CR=0.9$, $F=0.5$}\\ (DE/rand/1/bin)&\\ \hline &initial $\sigma=0.6(\mathbf{u}-\mathbf{l})$,\\ \multirow{2}{*}{CMA-ES \cite{algorithm-CMAES}}&$w_i=\log(\mu+0.5)-\log(i)$, $i\in[1,\mu]$,\\ &$\mu=0.5n$, $c_\sigma=0.15$,\\ &$d_\sigma=1.15$, $c_c=0.105$\\ \hline FEP \cite{Operator-FEP}&initial $\eta=3$\\ \hline CSO \cite{algorithm-CSO}&\multirow{3}{*}{social factor: 0.1}\\ (competitive swarm&\\ optimizer)&\\ \hline SHADE \cite{Algorithm-SHADE}&\multirow{2}{*}{\ --}\\ (parameter adaptative DE)&\\ \hline IMODE \cite{Algorithm-IMODE}&Minimum population size: 4\\ (winner of CEC'2020)&Ratio of archive size: 2.6\\ \hline \end{tabular} } \end{table} To verify the effectiveness of the proposed AutoV, the performance of the operators found by AutoV is first studied. 
Then, the best operator is compared with eight metaheuristics, where GA \cite{Holland1992Adaptation}, PSO \cite{Operator-PSO}, DE \cite{Operator-DE}, CMA-ES \cite{algorithm-CMAES}, and FEP \cite{Operator-FEP} are classical metaheuristics, CSO \cite{algorithm-CSO} is a competitive swarm optimizer for large-scale optimization, and SHADE \cite{Algorithm-SHADE} and IMODE \cite{Algorithm-IMODE} are hybrid metaheuristics based on multiple operators with parameter adaptation. Based on the settings suggested in the original literature of the compared metaheuristics, we finely tune their parameters for a relatively good performance, where the detailed parameter settings are listed in Table~\ref{tab:setting}. \subsection{Comparison Between the Operators Found by AutoV} For each of the five functions given in (\ref{equ:operatorfunction}), the proposed AutoV optimizes it with a population size of 100 for 1000 generations. As for the fitness evaluation of each candidate operator, the population size is set to 100, the number of generations is set to 100, the number of parameter sets $k$ is set to 10, and the number of runs $maxRun$ is set to 9. Besides, the Rastrigin's Function \cite{Su2020Non} is adopted as the benchmark problem for performance measurement. To compare the performance of the found operators with different functions $h_1$--$h_5$, they are embedded in the simple metaheuristic presented in Algorithm~\ref{alg:main2} and tested on eight benchmark problems with 30 decision variables. These benchmark problems have a variety of unimodal, multimodal, or flat landscapes, whose definitions can be found in \cite{Su2020Non}. For all the metaheuristics, the population size is set to 100 and the number of generations is set to 100. Table~\ref{tab:exp0} lists the minimum objective values found by the five metaheuristics averaged over 30 runs, where the compared operators exhibit similar performance and the operator $h_3$ has slightly better overall performance than the others. It is worth noting that the operator $h_5$ does not obtain the best overall performance, though the function $h_5$ has more parents and is expected to perform better. This is because the function $h_5$ contains more parameters to be optimized, which hinders AutoV from finding high-performance operators. \begin{table}[!t] \renewcommand{\arraystretch}{1.2} \footnotesize \centering \caption{Minimum Objective Values Obtained by the Operators with Different Functions $h_1$--$h_5$ Found by AutoV on Eight Benchmark Problems. 
Best Results are Highlighted.} \label{tab:exp0} \setlength{\tabcolsep}{1.2mm}{ \begin{tabular}{cccccc} \toprule Problem&$h_1$&$h_2$&$h_3$&$h_4$&$h_5$\\ \midrule Schwefel's&\multirow{2}{*}{\hl{.07e-5}}&\multirow{2}{*}{5.27e+0}&\multirow{2}{*}{2.07e-2}&\multirow{2}{*}{8.72e-3}&\multirow{2}{*}{1.02e-1}\\ Function 2.22\\ \hline Schwefel's&\multirow{2}{*}{1.65e+1}&\multirow{2}{*}{1.19e+1}&\multirow{2}{*}{\hl{1.66e+0}}&\multirow{2}{*}{8.75e+0}&\multirow{2}{*}{1.67e+0}\\ Function 2.21\\ \hline Quartic&\multirow{2}{*}{2.01e-2}&\multirow{2}{*}{1.32e-1}&\multirow{2}{*}{1.12e-2}&\multirow{2}{*}{\hl{9.02e-3}}&\multirow{2}{*}{1.16e-2}\\ Function\\ \hline Generalized\\ Griewank&\hl{1.99e-3}&2.93e+0&2.81e-2&4.46e-1&2.66e-1\\ Function\\ \hline Generalized\\ Schwefel's&-9.17e+3&-1.04e+4&\hl{-1.16e+4}&-5.70e+3&-5.28e+3\\ Function 2.26\\ \hline Ackley's&\multirow{2}{*}{\hl{5.47e-4}}&\multirow{2}{*}{4.77e+0}&\multirow{2}{*}{1.79e-2}&\multirow{2}{*}{8.94e-1}&\multirow{2}{*}{2.05e-1}\\ Function\\ \hline Rosenbrock's&\multirow{2}{*}{1.20e+4}&\multirow{2}{*}{1.17e+5}&\multirow{2}{*}{9.09e+3}&\multirow{2}{*}{8.80e+3}&\multirow{2}{*}{\hl{3.82e+3}}\\ Function\\ \hline Rastrigin's&\multirow{2}{*}{3.76e+1}&\multirow{2}{*}{3.96e+1}&\multirow{2}{*}{\hl{3.88e+0}}&\multirow{2}{*}{9.99e+0}&\multirow{2}{*}{7.86e+0}\\ Function\\ \bottomrule \end{tabular} } \end{table} \begin{figure}[!t] \centering \subfloat{\includegraphics[width=1\linewidth]{exp0.eps}} \caption{Minimum objective values obtained by the operator with function $h_3$ and different numbers of parameter sets $k$ found by AutoV.} \label{fig:exp0} \end{figure} \begin{table}[!t] \renewcommand{\arraystretch}{0.9} \scriptsize \centering \caption{Minimum Objective Values Obtained by the Operator with Function $h_3$ and Different Benchmark Problems for Fitness Evaluation. Best Results are Highlighted.} \label{tab:exp02} \setlength{\tabcolsep}{0.8mm}{ \begin{tabular}{ccccc} \toprule \multirow{3}{*}{Problem}&Optimized on&Optimized on&Optimized on&Optimized on\\ &Schwefel's&Schwefel's&Ackley's&Rastrigin's\\ &Function 2.22&Function 2.21&Function&Function\\ \midrule Schwefel's&\multirow{2}{*}{\hl{2.46e-6}}&\multirow{2}{*}{7.00e-2}&\multirow{2}{*}{4.91e-6}&\multirow{2}{*}{2.07e-2}\\ Function 2.22\\ \hline Schwefel's&\multirow{2}{*}{1.46e+1}&\multirow{2}{*}{\hl{1.36e+0}}&\multirow{2}{*}{1.21e+1}&\multirow{2}{*}{1.66e+0}\\ Function 2.21\\ \hline Quartic&\multirow{2}{*}{1.29e-2}&\multirow{2}{*}{1.23e-2}&\multirow{2}{*}{1.44e-2}&\multirow{2}{*}{\hl{1.12e-2}}\\ Function\\ \hline Generalized&\multirow{3}{*}{6.89e-3}&\multirow{3}{*}{7.02e-2}&\multirow{3}{*}{\hl{1.48e-3}}&\multirow{3}{*}{2.81e-2}\\ Griewank\\ Function\\ \hline Generalized&\multirow{3}{*}{-9.35e+3}&\multirow{3}{*}{-9.76e+3}&\multirow{3}{*}{-9.55e+3}&\multirow{3}{*}{\hl{-1.16e+4}}\\ Schwefel's\\ Function 2.26\\ \hline Ackley's&\multirow{2}{*}{1.00e-4}&\multirow{2}{*}{5.77e-2}&\multirow{2}{*}{\hl{5.71e-5}}&\multirow{2}{*}{1.79e-2}\\ Function\\ \hline Rosenbrock's&\multirow{2}{*}{6.03e+3}&\multirow{2}{*}{\hl{3.89e+3}}&\multirow{2}{*}{7.64e+3}&\multirow{2}{*}{9.09e+3}\\ Function\\ \hline Rastrigin's&\multirow{2}{*}{2.36e+1}&\multirow{2}{*}{1.59e+1}&\multirow{2}{*}{2.91e+1}&\multirow{2}{*}{\hl{3.88e+0}}\\ Function\\ \bottomrule \end{tabular} } \end{table} \begin{table*}[!t] \renewcommand{\arraystretch}{0.7} \centering \caption{Minimum Objective Values Obtained by Nine Metaheuristics on 13 Small-Scale Benchmark Problems with 30 Decision Variables. 
Best Results are Highlighted.} \label{tab:exp1} \resizebox{\textwidth}{!}{ \begin{tabular}{cccccccccc} \toprule Small-scale&\multirow{2}{*}{GA \cite{Holland1992Adaptation}}&\multirow{2}{*}{PSO \cite{Operator-PSO}}&\multirow{2}{*}{DE \cite{Operator-DE}}&\multirow{2}{*}{CMA-ES \cite{algorithm-CMAES}}&\multirow{2}{*}{FEP \cite{Operator-FEP}}&\multirow{2}{*}{CSO \cite{algorithm-CSO}}&\multirow{2}{*}{SHADE \cite{Algorithm-SHADE}}&\multirow{2}{*}{IMODE \cite{Algorithm-IMODE}}&\multirow{2}{*}{AutoV}\\ problems \cite{Operator-FEP}&\\ \midrule \multirow{2}{*}{$f_1$}&1.4973e+1 $-$&3.5453e+3 $-$&5.0032e+3 $-$&1.2391e+2 $-$&9.2809e+3 $-$&1.2988e+3 $-$&2.2958e+1 $-$&2.9105e+0 $-$&\hl{8.3702e-1}\\ &(2.0053e+0)&(1.2242e+3)&(5.7205e+2)&(2.6004e+1)&(2.7503e+3)&(5.0772e+2)&(7.4054e+0)&(9.0907e-1)&\hl{(5.333e-1)}\\[1mm] \multirow{2}{*}{$f_2$}&8.3310e-1 $-$&3.2170e+1 $-$&5.1595e+1 $-$&8.6103e+0 $-$&5.1291e+1 $-$&1.5504e+1 $-$&6.2312e+0 $-$&3.2305e-1 $-$&\hl{1.2081e-1}\\ &(1.7114e-1)&(6.6473e+0)&(1.5480e+1)&(3.3770e+0)&(1.0920e+1)&(3.1662e+0)&(1.1158e+0)&(1.0412e-1)&\hl{(2.556e-2)}\\[1mm] \multirow{2}{*}{$f_3$}&1.0621e+4 $-$&7.8076e+3 $-$&6.4156e+3 $-$&5.0816e+3 $-$&2.1959e+4 $-$&1.7542e+3 $\approx$&1.6461e+3 $\approx$&\hl{6.6967e+2 $+$}&2.0131e+3\\ &(2.6732e+3)&(2.9985e+3)&(1.1614e+3)&(1.2254e+3)&(3.5707e+3)&(5.5302e+2)&(5.3479e+2)&\hl{(1.3529e+2)}&(8.307e+2)\\[1mm] \multirow{2}{*}{$f_4$}&2.3222e+1 $-$&2.4005e+1 $-$&3.4503e+1 $-$&8.7765e+0 $-$&5.3464e+1 $-$&1.1521e+1 $-$&9.1224e+0 $-$&8.8346e+0 $-$&\hl{6.4179e+0}\\ &(3.6870e+0)&(3.0366e+0)&(4.1625e+0)&(1.5806e+0)&(9.3989e+0)&(2.0034e+0)&(1.4948e+0)&(2.0574e+0)&\hl{(1.953e+0)}\\[1mm] \multirow{2}{*}{$f_5$}&1.2336e+3 $-$&5.5668e+5 $-$&1.6919e+6 $-$&5.1996e+3 $-$&5.8033e+6 $-$&1.5258e+5 $-$&7.2347e+2 $-$&\hl{1.2120e+2 $\approx$}&2.0580e+2\\ &(6.8051e+2)&(2.7471e+5)&(1.0907e+6)&(2.8736e+3)&(2.9157e+6)&(7.7532e+4)&(3.3376e+2)&\hl{(6.9918e+1)}&(1.981e+2)\\[1mm] \multirow{2}{*}{$f_6$}&1.9375e+1 $-$&3.4820e+3 $-$&4.5314e+3 $-$&1.3163e+2 $-$&8.8923e+3 $-$&1.0266e+3 $-$&3.2250e+1 $-$&3.6250e+0 $-$&\hl{0.0000e+0}\\ &(3.9978e+0)&(1.1301e+3)&(8.0852e+2)&(3.2824e+1)&(3.1734e+3)&(4.3859e+2)&(7.7965e+0)&(2.1998e+0)&\hl{(0.000e+0)}\\[1mm] \multirow{2}{*}{$f_7$}&9.1447e-2 $-$&8.2203e-1 $-$&1.1222e+0 $-$&8.2239e-2 $-$&5.2040e+1 $-$&5.4577e-2 $-$&5.8113e-2 $-$&8.4790e-2 $-$&\hl{2.0943e-2}\\ &(3.8031e-2)&(2.2065e-1)&(3.6673e-1)&(2.6534e-2)&(2.7550e+1)&(2.1989e-2)&(1.4618e-2)&(2.7932e-2)&\hl{(8.192e-3)}\\[1mm] \multirow{2}{*}{$f_8$}&-1.4025e+4 $\approx$&-8.2308e+3 $-$&-1.1684e+4 $-$&-1.2799e+4 $-$&-1.0531e+4 $-$&-1.2285e+4 $-$&-1.4023e+4 $\approx$&\hl{-1.4575e+4 $+$}&-1.3852e+4\\ &(5.0347e+2)&(1.2979e+3)&(5.6921e+2)&(2.8077e+2)&(8.2223e+2)&(4.2013e+2)&(8.5233e+1)&\hl{(2.0746e+2)}&(6.717e+2)\\[1mm] \multirow{2}{*}{$f_9$}&\hl{1.1504e+1 $+$}&1.5864e+2 $-$&2.7333e+2 $-$&2.2788e+2 $-$&2.6914e+2 $-$&1.0901e+2 $-$&1.6771e+2 $-$&2.3842e+1 $-$&1.4546e+1\\ &\hl{(3.3186e+0)}&(1.7533e+1)&(1.4085e+1)&(2.3562e+1)&(2.7653e+1)&(1.7125e+1)&(1.4089e+1)&(5.6385e+0)&(2.163e+0)\\[1mm] \multirow{2}{*}{$f_{10}$}&1.7023e+0 $-$&1.2329e+1 $-$&1.3329e+1 $-$&4.0097e+0 $-$&1.4671e+1 $-$&7.6077e+0 $-$&2.8370e+0 $-$&1.2297e+0 $-$&\hl{2.1621e-1}\\ &(3.1651e-1)&(1.3322e+0)&(8.5004e-1)&(5.6144e-1)&(7.3884e-1)&(1.1400e+0)&(2.1827e-1)&(3.8807e-1)&\hl{(5.119e-2)}\\[1mm] \multirow{2}{*}{$f_{11}$}&1.1563e+0 $-$&2.8566e+1 $-$&4.6051e+1 $-$&2.1092e+0 $-$&1.1710e+2 $-$&1.0340e+1 $-$&1.2644e+0 $-$&1.0158e+0 $-$&\hl{6.7928e-1}\\ 
&(4.0584e-2)&(3.2736e+0)&(1.3901e+1)&(2.8414e-1)&(3.2432e+1)&(3.2046e+0)&(7.1606e-2)&(3.6145e-2)&\hl{(1.999e-1)}\\[1mm] \multirow{2}{*}{$f_{12}$}&2.8786e+1 $-$&1.0624e+3 $-$&7.1756e+4 $-$&4.0254e+1 $-$&4.9839e+6 $-$&1.3627e+2 $-$&3.1790e+1 $-$&7.2029e+1 $-$&\hl{1.5847e+0}\\ &(1.5748e+1)&(1.3982e+3)&(6.5424e+4)&(5.5667e+0)&(5.9492e+6)&(3.5766e+1)&(9.1497e+0)&(2.4378e+1)&\hl{(1.976e+0)}\\[1mm] \multirow{2}{*}{$f_{13}$}&7.1895e+0 $-$&4.3602e+5 $-$&2.1898e+6 $-$&1.0939e+1 $-$&1.2093e+7 $-$&2.1808e+3 $-$&4.3094e+0 $-$&4.3502e+0 $-$&\hl{1.0075e-1}\\ &(7.2480e+0)&(5.1304e+5)&(1.7255e+6)&(2.7540e+0)&(1.0145e+7)&(5.2706e+3)&(1.5238e+0)&(2.3093e+0)&\hl{(4.216e-2)}\\[1mm] \hline \multicolumn{1}{c}{$+/-/\approx$}&1/11/1&0/13/0&0/13/0&0/13/0&0/13/0&0/12/1&0/11/2&2/10/1&\\ \bottomrule \end{tabular} } \begin{tablenotes} \footnotesize \item["\dag"] '$+$', '$-$' and '$\approx$' indicate that the result is significantly better, significantly worse, and statistically similar to that obtained by AutoV. \end{tablenotes} \end{table*} To study the influence of the number of parameter sets $k$, Fig.~\ref{fig:exp0} plots the performance of the metaheuristic with operator $h_3$ and $k=1,5,10,15,20$ on the eight benchmark problems, where $k=10$ leads to better overall performance than the other settings. On the one hand, a small value of $k$ provides a few different search behaviors, and thus leads to a low performance limit. On the other hand, although a large value of $k$ provides many different search behaviors, it leads to a large number of parameters that are difficult to be optimized. As a consequence, $k=10$ is a proper setting for finding high-performance operators. \begin{table*}[!t] \renewcommand{\arraystretch}{0.7} \centering \caption{Minimum Objective Values Obtained by Nine Metaheuristics on 13 Rotated Small-Scale Benchmark Problems with 30 Decision Variables. 
Best Results are Highlighted.} \label{tab:exp2} \resizebox{\textwidth}{!}{ \begin{tabular}{cccccccccc} \toprule Rotated&\\ small-scale&GA \cite{Holland1992Adaptation}&PSO \cite{Operator-PSO}&DE \cite{Operator-DE}&CMA-ES \cite{algorithm-CMAES}&FEP \cite{Operator-FEP}&CSO \cite{algorithm-CSO}&SHADE \cite{Algorithm-SHADE}&IMODE \cite{Algorithm-IMODE}&AutoV\\ problems \cite{Operator-FEP}&\\ \midrule \multirow{2}{*}{$f_1$}&1.7644e+1 $-$&3.1210e+3 $-$&4.3448e+3 $-$&1.2422e+2 $-$&9.2516e+3 $-$&1.2894e+3 $-$&2.9700e+1 $-$&2.0650e+0 $-$&\hl{7.1625e-1}\\ &(5.3457e+0)&(8.2974e+2)&(8.3965e+2)&(2.9570e+1)&(3.9378e+3)&(6.7999e+2)&(1.0900e+1)&(1.0838e+0)&\hl{(5.403e-1)}\\[1mm] \multirow{2}{*}{$f_2$}&1.9315e+1 $-$&3.7244e+1 $-$&4.9671e+1 $-$&1.3854e+1 $-$&8.1606e+1 $-$&1.4780e+1 $-$&1.8746e+1 $-$&1.5749e+0 $\approx$&\hl{1.5246e+0}\\ &(1.0234e+1)&(1.8184e+1)&(8.5749e+0)&(6.8676e+0)&(9.7066e+0)&(3.0538e+0)&(2.6580e+0)&(4.2566e-1)&\hl{(5.190e-1)}\\[1mm] \multirow{2}{*}{$f_3$}&9.9581e+3 $-$&5.3590e+3 $-$&5.3369e+3 $-$&1.3929e+4 $-$&2.2100e+4 $-$&1.5239e+3 $\approx$&1.7530e+3 $\approx$&\hl{8.5084e+2 $+$}&2.1193e+3\\ &(2.6366e+3)&(1.4792e+3)&(1.3071e+3)&(3.9351e+3)&(5.0673e+3)&(9.7521e+2)&(4.7904e+2)&\hl{(2.5578e+2)}&(7.960e+2)\\[1mm] \multirow{2}{*}{$f_4$}&1.4962e+1 $-$&2.4756e+1 $-$&3.6628e+1 $-$&7.8044e+0 $-$&4.6193e+1 $-$&1.2863e+1 $-$&8.1105e+0 $-$&5.5831e+0 $-$&\hl{3.2227e+0}\\ &(5.9860e+0)&(4.8244e+0)&(4.7317e+0)&(9.4482e-1)&(2.4146e+0)&(1.8236e+0)&(8.4983e-1)&(1.2362e+0)&\hl{(1.742e+0)}\\[1mm] \multirow{2}{*}{$f_5$}&2.8300e+4 $-$&8.9721e+5 $-$&1.7625e+6 $-$&9.3046e+3 $-$&5.2830e+6 $-$&1.1350e+5 $-$&2.8513e+3 $-$&2.9329e+2 $\approx$&\hl{2.0175e+2}\\ &(4.8151e+4)&(3.4766e+5)&(7.4240e+5)&(6.0657e+3)&(2.3632e+6)&(6.7006e+4)&(3.4473e+3)&(1.6130e+2)&\hl{(1.540e+2)}\\[1mm] \multirow{2}{*}{$f_6$}&2.3375e+1 $-$&3.6026e+3 $-$&5.0828e+3 $-$&1.1925e+2 $-$&8.6534e+3 $-$&1.2864e+3 $-$&3.2250e+1 $-$&1.3750e+1 $-$&\hl{2.5000e+0}\\ &(3.7009e+0)&(6.8520e+2)&(1.1280e+3)&(2.9011e+1)&(2.1819e+3)&(5.8652e+2)&(6.5629e+0)&(7.1464e+0)&\hl{(2.204e+0)}\\[1mm] \multirow{2}{*}{$f_7$}&8.1742e-2 $-$&8.6616e-1 $-$&1.1051e+0 $-$&7.1672e-2 $-$&4.8419e+1 $-$&1.1136e-1 $-$&6.3319e-2 $-$&4.9230e-2 $-$&\hl{1.7558e-2}\\ &(2.9263e-2)&(4.3408e-1)&(6.6114e-1)&(3.4086e-2)&(2.8474e+1)&(5.2390e-2)&(1.6838e-2)&(2.2971e-2)&\hl{(5.800e-3)}\\[1mm] \multirow{2}{*}{$f_8$}&\hl{-7.6960e+3 $\approx$}&-6.4301e+3 $-$&-5.2233e+3 $-$&-7.6329e+3 $\approx$&-7.4567e+3 $\approx$&-7.5311e+3 $\approx$&-5.5305e+3 $-$&-7.1206e+3 $\approx$&-7.3751e+3\\ &\hl{(4.2092e+2)}&(7.3448e+2)&(4.2305e+2)&(5.6710e+2)&(6.5244e+2)&(2.9851e+2)&(2.9295e+2)&(2.0444e+2)&(2.951e+2)\\[1mm] \multirow{2}{*}{$f_9$}&7.0727e+1 $-$&1.4357e+2 $-$&2.6278e+2 $-$&2.2499e+2 $-$&3.0943e+2 $-$&1.0129e+2 $-$&2.1434e+2 $-$&8.0586e+1 $-$&\hl{4.1289e+1}\\ &(1.4655e+1)&(1.9674e+1)&(1.3668e+1)&(2.5540e+1)&(1.3695e+1)&(1.1932e+1)&(1.3079e+1)&(1.7108e+1)&\hl{(1.364e+1)}\\[1mm] \multirow{2}{*}{$f_{10}$}&2.9250e+0 $-$&1.1749e+1 $-$&1.3413e+1 $-$&4.3282e+0 $-$&1.5657e+1 $-$&7.3892e+0 $-$&3.0076e+0 $-$&2.3761e+0 $-$&\hl{9.6926e-1}\\ &(1.0910e-1)&(5.8613e-1)&(8.0644e-1)&(4.3019e-1)&(1.5809e+0)&(9.1288e-1)&(3.2128e-1)&(5.8902e-1)&\hl{(6.627e-1)}\\[1mm] \multirow{2}{*}{$f_{11}$}&1.1551e+0 $-$&2.9789e+1 $-$&4.5249e+1 $-$&1.9904e+0 $-$&1.1321e+2 $-$&1.2782e+1 $-$&1.1978e+0 $-$&9.4447e-1 $-$&\hl{7.6236e-1}\\ &(2.4035e-2)&(8.3839e+0)&(8.3656e+0)&(2.2665e-1)&(2.2152e+1)&(4.1098e+0)&(7.2707e-2)&(7.3867e-2)&\hl{(1.343e-1)}\\[1mm] \multirow{2}{*}{$f_{12}$}&3.5756e+1 $-$&1.5007e+3 $-$&1.9844e+5 $-$&3.7031e+1 $-$&1.2756e+6 
$-$&1.3627e+2 $-$&3.4071e+1 $-$&7.2069e+1 $-$&\hl{1.4346e+0}\\ &(1.7714e+1)&(1.6007e+3)&(1.8928e+5)&(6.1593e+0)&(1.1497e+6)&(6.7341e+1)&(5.6141e+0)&(1.6143e+1)&\hl{(1.133e+0)}\\[1mm] \multirow{2}{*}{$f_{13}$}&4.1967e+0 $-$&6.7210e+5 $-$&3.8989e+6 $-$&1.0194e+1 $-$&9.2196e+6 $-$&7.5595e+3 $-$&5.1567e+0 $-$&6.4904e+0 $-$&\hl{1.4557e-1}\\ &(3.5806e+0)&(4.4019e+5)&(2.7548e+6)&(3.2998e+0)&(7.2385e+6)&(2.0845e+4)&(1.6990e+0)&(1.5676e+0)&\hl{(6.230e-2)}\\[1mm] \hline \multicolumn{1}{c}{$+/-/\approx$}&0/12/1&0/13/0&0/13/0&0/12/1&0/12/1&0/11/2&0/12/1&1/9/3&\\ \bottomrule \end{tabular} } \begin{tablenotes} \footnotesize \item["\dag"] '$+$', '$-$' and '$\approx$' indicate that the result is significantly better, significantly worse, and statistically similar to that obtained by AutoV. \end{tablenotes} \end{table*} Furthermore, the influence of the benchmark problem on fitness evaluation is studied. Table~\ref{tab:exp02} lists the performance of the metaheuristic with operator $h_3$ optimized on four benchmark problems, including the Schwefel's Function 2.22 with a unimodal landscape, the Schwefel's Function 2.21 with a flat landscape, and the Ackley's Function and Rastrigin's Function with multimodal landscapes. It can be found that the four metaheuristics obtain the best performance on the benchmark problem for fitness evaluation, while they obtain quite similar performance on the other problems. In the following experiments, the metaheuristic with operator $h_3$ and $k=10$ optimized on the Rastrigin's Function is used as a representative of AutoV to be compared with existing metaheuristics on various problems. Details of the variation operator found by AutoV are presented in the following, where the components with $r_i\sim\mathcal{N}(0,0)$ are ignored for simplicity: \begin{equation} \scriptsize \begin{aligned} &h(x_1,x_2,l,u)=\\ &\begin{aligned} \left\{ \begin{aligned} &(1-r_2)x_1+r_2x_2,&r_2\sim\mathcal{N}(0.4753,0.0103)\\ &&{\rm if}\ p<0.219\\ &(1-r_2)x_1+r_2x_2,&r_2\sim\mathcal{N}(0.0034,0.0006)\\ &&{\rm if}\ 0.219\leq p<0.436\\ &(1-r_2)x_1+r_2x_2,&r_2\sim\mathcal{N}(0.9999,0.0072)\\ &&{\rm if}\ 0.436\leq p<0.644\\ &(1-r_2)x_1+r_2x_2,&r_2\sim\mathcal{N}(0.9988,0.0167)\\ &&{\rm if}\ 0.644\leq p<0.802\\ &(1-r_2)x_1+r_2x_2,&r_2\sim\mathcal{N}(-0.0070,0.0056)\\ &&{\rm if}\ 0.802\leq p<0.960\\ &(1-r_2-r_3-r_4)x_1+r_2x_2+r_3l+r_4u,&r_2\sim\mathcal{N}(0.7293,0.8055)\\ &&r_3\sim\mathcal{N}(0,0.0014)\\ &&r_4\sim\mathcal{N}(0,0.0014)\\ &&{\rm if}\ 0.960\leq p<0.992\\ &(1-r_2-r_3-r_4)x_1+r_2x_2+r_3l+r_4u,&r_2\sim\mathcal{N}(0.0392,0.3185)\\ &&r_3\sim\mathcal{N}(0,0.0351)\\ &&r_4\sim\mathcal{N}(0,0.0351)\\ &&{\rm if}\ 0.992\leq p<0.997\\ &(1-r_2-r_3-r_4)x_1+r_2x_2+r_3l+r_4u,&r_2\sim\mathcal{N}(-0.9998,0.2478)\\ &&r_3\sim\mathcal{N}(0,0.0011)\\ &&r_4\sim\mathcal{N}(0,0.0011)\\ &&{\rm if}\ 0.997\leq p<0.998\\ &(1-r_2-r_3-r_4)x_1+r_2x_2+r_3l+r_4u,&r_2\sim\mathcal{N}(-0.8547,0.1174)\\ &&r_3\sim\mathcal{N}(0,0.0452)\\ &&r_4\sim\mathcal{N}(0,0.0452)\\ &&{\rm if}\ 0.998\leq p<0.999\\ &(1-r_2-r_3-r_4)x_1+r_2x_2+r_3l+r_4u,&r_2\sim\mathcal{N}(-0.5621,0.1059)\\ &&r_3\sim\mathcal{N}(0,0.0003)\\ &&r_4\sim\mathcal{N}(0,0.0003)\\ &&{\rm if}\ p\geq0.999\\ \end{aligned} \right., \end{aligned} \end{aligned} \end{equation} where $p$ is a uniformly distributed random value sampled in $[0,1]$. It can be found from the definition that the operator contains ten functions whose parameters obey different distributions, which provide a variety of search behaviors that may be effective for different problems. 
It is worth noting that the ten functions are selected with different probabilities, where the operator uses only $x_1$ and $x_2$ in most cases but rarely uses the lower bound $l$ and the upper bound $u$. This is similar to the genetic algorithm, which performs the crossover between two parents with a large probability for global search and performs the mutation on a single solution with a small probability for local search, where the lower and upper bounds are involved in the mutation operator (e.g., polynomial mutation \cite{Operator-polynomial-mutation}) but ignored in the crossover operator (e.g., SBX \cite{Operator-SBX}). \subsection{Comparison on Small-Scale Benchmark Problems} Then, the proposed AutoV (i.e., the metaheuristic with operator $h_3$) is compared with eight existing metaheuristics on 13 small-scale benchmark problems with 30 decision variables, where the definitions of these problems can be found in \cite{Operator-FEP}. For all the compared metaheuristics, the population size is set to 100 and the number of function evaluations is set to 10000. Table~\ref{tab:exp1} shows the means and standard deviations of the minimum objective values found by the nine metaheuristics, averaged over 30 runs. It can be found that the proposed AutoV obtains the best overall performance, which gains the best results on 9 out of 13 problems. The Wilcoxon rank sum test \cite{Derrac2011Practical} with a significance level of 0.05 is adopted to perform statistical analysis, where AutoV is significantly better than GA, PSO, DE, CMA-ES, FEP, CSO, SHADE, IMODE on 11, 13, 13, 13, 13, 12, 11, and 10 problems, respectively. Furthermore, Fig.~\ref{fig:exp1} depicts the convergence trajectories of the compared metaheuristics on the Step Function and the Penalized Function, where AutoV converges faster than the other metaheuristics. \begin{figure}[!t] \centering \subfloat{\includegraphics[width=1\linewidth]{exp1.eps}} \caption{Convergence trajectories of nine metaheuristics on two problems.} \label{fig:exp1} \end{figure} \begin{table*}[!t] \renewcommand{\arraystretch}{0.7} \centering \caption{Minimum Objective Values Obtained by Nine Algorithms on the CEC'2013 Large-Scale Benchmark Problems with About 1000 Decision Variables. 
Best Results are Highlighted.} \label{tab:exp3} \resizebox{\textwidth}{!}{ \begin{tabular}{cccccccccc} \toprule CEC'2013\\ large-scale&GA \cite{Holland1992Adaptation}&PSO \cite{Operator-PSO}&DE \cite{Operator-DE}&CMA-ES \cite{algorithm-CMAES}&FEP \cite{Operator-FEP}&CSO \cite{algorithm-CSO}&SHADE \cite{Algorithm-SHADE}&IMODE \cite{Algorithm-IMODE}&AutoV\\ problems \cite{li2013benchmark}\\ \midrule \multirow{2}{*}{$f_1$}&1.1824e+9 $-$&1.7274e+11 $-$&1.1820e+11 $-$&1.3230e+10 $-$&6.2225e+10 $-$&1.6506e+11 $-$&9.2604e+8 $-$&1.5672e+10 $-$&\hl{7.5437e+8}\\ &(1.5088e+8)&(8.2047e+9)&(5.7453e+9)&(5.9476e+8)&(8.8355e+9)&(6.4000e+9)&(9.2525e+7)&(1.8277e+9)&\hl{(3.599e+7)}\\[1mm] \multirow{2}{*}{$f_2$}&\hl{7.0854e+3 $+$}&4.8900e+4 $-$&4.2656e+4 $-$&1.1326e+4 $+$&8.4194e+4 $-$&4.4704e+4 $-$&1.8598e+4 $-$&2.4830e+4 $-$&1.2736e+4\\ &\hl{(5.5474e+2)}&(1.6515e+3)&(9.8044e+2)&(2.6942e+2)&(5.6859e+2)&(7.5103e+2)&(1.1309e+3)&(6.9359e+2)&(3.996e+2)\\[1mm] \multirow{2}{*}{$f_3$}&1.9790e+1 $-$&2.1096e+1 $-$&2.1011e+1 $-$&2.1334e+1 $-$&2.1457e+1 $-$&2.0977e+1 $-$&1.7116e+1 $-$&2.0013e+1 $-$&\hl{7.6972e+0}\\ &(1.0792e-1)&(3.6567e-2)&(6.4041e-3)&(2.0615e-2)&(8.6678e-3)&(1.9539e-2)&(2.1804e-1)&(1.6968e-2)&\hl{(2.709e-1)}\\[1mm] \multirow{2}{*}{$f_4$}&7.0214e+11 $-$&3.8264e+12 $-$&2.8759e+11 $-$&1.8046e+12 $-$&7.4290e+11 $-$&2.0687e+12 $-$&\hl{4.1680e+10 $+$}&3.5605e+11 $-$&1.7434e+11\\ &(3.6361e+11)&(1.0018e+12)&(4.4452e+10)&(2.2407e+11)&(4.1395e+11)&(5.9504e+11)&\hl{(7.7120e+9)}&(3.1439e+10)&(3.356e+10)\\[1mm] \multirow{2}{*}{$f_5$}&8.0929e+6 $-$&3.3051e+7 $-$&1.6648e+7 $-$&1.1814e+7 $-$&2.0252e+7 $-$&2.7747e+7 $-$&6.6297e+6 $-$&1.0851e+7 $-$&\hl{5.5157e+6}\\ &(1.1105e+6)&(3.3554e+6)&(1.2160e+6)&(3.5863e+5)&(1.9684e+6)&(2.3903e+6)&(8.5226e+5)&(6.3028e+5)&\hl{(7.817e+5)}\\[1mm] \multirow{2}{*}{$f_6$}&7.6705e+5 $-$&1.0156e+6 $-$&9.6778e+5 $-$&1.0725e+6 $-$&7.7643e+5 $-$&9.9765e+5 $-$&1.0802e+5 $-$&2.7032e+5 $-$&\hl{8.3894e+4}\\ &(4.2587e+4)&(6.2467e+3)&(1.9218e+4)&(1.4402e+3)&(6.3779e+4)&(5.1452e+3)&(1.1582e+4)&(4.2843e+4)&\hl{(9.288e+3)}\\[1mm] \multirow{2}{*}{$f_7$}&7.8310e+9 $-$&4.9258e+13 $-$&2.3870e+12 $-$&1.1084e+10 $-$&2.8185e+11 $-$&2.2211e+13 $-$&8.9123e+8 $\approx$&1.3523e+10 $-$&\hl{8.4166e+8}\\ &(2.9908e+9)&(2.3576e+13)&(7.2173e+11)&(9.4838e+9)&(1.4497e+11)&(8.4947e+12)&(1.9320e+8)&(1.6130e+9)&\hl{(7.997e+7)}\\[1mm] \multirow{2}{*}{$f_8$}&2.9461e+16 $-$&1.6998e+17 $-$&\hl{1.2728e+13 $+$}&4.7204e+16 $-$&6.1725e+15 $\approx$&3.2323e+16 $-$&3.7727e+13 $+$&5.4599e+15 $\approx$&4.4372e+15\\ &(1.4136e+16)&(9.1594e+16)&\hl{(1.9676e+12)}&(6.0409e+15)&(3.8102e+15)&(3.0414e+16)&(3.0187e+13)&(2.5604e+15)&(1.770e+15)\\[1mm] \multirow{2}{*}{$f_9$}&7.5728e+8 $-$&2.5298e+9 $-$&1.2725e+9 $-$&8.4502e+8 $-$&1.6577e+9 $-$&2.0244e+9 $-$&5.8016e+8 $-$&7.7668e+8 $-$&\hl{4.4601e+8}\\ &(1.2373e+8)&(2.0469e+8)&(6.3827e+7)&(4.5495e+7)&(1.3394e+8)&(2.1854e+8)&(1.1320e+8)&(4.1594e+7)&\hl{(8.503e+7)}\\[1mm] \multirow{2}{*}{$f_{10}$}&4.4221e+7 $-$&8.7041e+7 $-$&2.1046e+6 $-$&9.6156e+7 $-$&1.7440e+7 $-$&7.6458e+7 $-$&1.2506e+6 $-$&7.8125e+5 $\approx$&\hl{4.8728e+5}\\ &(1.3077e+7)&(2.6524e+6)&(6.4391e+5)&(4.6215e+5)&(3.3440e+6)&(5.2418e+6)&(1.4290e+4)&(2.8894e+5)&\hl{(4.870e+5)}\\[1mm] \multirow{2}{*}{$f_{11}$}&7.4972e+11 $-$&3.0398e+15 $-$&1.1061e+14 $-$&4.0919e+11 $-$&1.1334e+13 $-$&8.9529e+14 $-$&\hl{8.7764e+9 $+$}&4.4608e+11 $-$&1.6943e+11\\ &(4.5447e+11)&(1.4680e+15)&(9.8496e+13)&(1.0871e+11)&(5.1094e+12)&(3.6702e+14)&\hl{(4.5005e+9)}&(1.6708e+11)&(1.144e+11)\\[1mm] \multirow{2}{*}{$f_{12}$}&2.0599e+10 $-$&1.9723e+12 
$-$&1.5925e+12 $-$&1.3548e+10 $-$&2.6665e+12 $-$&1.6723e+12 $-$&3.3481e+10 $-$&6.4235e+11 $-$&\hl{7.6900e+8}\\ &(3.9186e+9)&(8.1522e+10)&(3.3820e+10)&(1.3457e+10)&(1.0994e+11)&(2.5065e+10)&(5.8270e+9)&(1.9268e+10)&\hl{(2.144e+7)}\\[1mm] \multirow{2}{*}{$f_{13}$}&4.0277e+10 $-$&1.9190e+15 $-$&5.4725e+13 $-$&8.3330e+10 $-$&3.0193e+12 $-$&1.0082e+15 $-$&\hl{1.0448e+10 $+$}&6.6059e+10 $-$&1.7051e+10\\ &(1.4024e+10)&(8.0609e+14)&(1.7016e+13)&(1.4211e+10)&(1.6034e+12)&(6.2874e+14)&\hl{(2.3651e+9)}&(9.5957e+9)&(4.590e+9)\\[1mm] \multirow{2}{*}{$f_{14}$}&1.0031e+12 $-$&4.5075e+15 $-$&1.2185e+14 $-$&9.6175e+11 $-$&1.2889e+13 $-$&1.1075e+15 $-$&\hl{7.7424e+10 $+$}&7.2323e+11 $-$&2.3734e+11\\ &(3.8285e+11)&(3.3588e+15)&(7.0856e+13)&(4.0377e+11)&(4.1028e+12)&(4.3764e+14)&\hl{(1.3029e+10)}&(8.4437e+10)&(7.915e+10)\\[1mm] \multirow{2}{*}{$f_{15}$}&5.7586e+7 $-$&1.8722e+15 $-$&3.4395e+14 $-$&9.9936e+8 $-$&8.8170e+13 $-$&8.7705e+14 $-$&3.0449e+7 $\approx$&6.1844e+12 $-$&\hl{2.7840e+7}\\ &(1.5169e+7)&(2.4943e+14)&(4.9325e+13)&(1.1950e+9)&(2.5655e+13)&(1.2736e+14)&(4.6006e+6)&(1.4191e+12)&\hl{(2.630e+6)}\\[1mm] \hline \multicolumn{1}{c}{$+/-/\approx$}&1/14/0&0/15/0&1/14/0&1/14/0&0/14/1&0/15/0&5/8/2&0/13/2&\\ \bottomrule \end{tabular} } \begin{tablenotes} \footnotesize \item["\dag"] '$+$', '$-$' and '$\approx$' indicate that the result is significantly better, significantly worse, and statistically similar to that obtained by AutoV. \end{tablenotes} \end{table*} While the 13 benchmark problems are already translated and scaled, they are further rotated by randomly generated orthogonal matrices and challenge the nine metaheuristics. As can be seen from the experimental results listed in Table~\ref{tab:exp2}, the superiority of the proposed AutoV becomes more significant, where AutoV outperforms the other metaheuristics on 11 out of 13 problems. As a consequence, the superiority of AutoV over some classical and state-of-the-art metaheuristics can be verified. Besides, it also implies that AutoV is more effective than the approaches based on the recommendation and combination of existing metaheuristics, whose performance can hardly go beyond the best existing metaheuristic on each problem. \subsection{Comparison on Large-Scale Benchmark Problems} Lastly, the proposed AutoV and the eight existing metaheuristics are compared on the 15 CEC'2013 large-scale benchmark problems \cite{li2013benchmark}. These benchmark problems contain approximately 1000 decision variables and a variety of landscape functions, transformations, and interactions between variables, posing stiff challenges to general metaheuristics. For all the compared algorithms, the population size is set to 100 and the number of function evaluations is set to 120000. Table~\ref{tab:exp3} presents the means and standard deviations of the minimum objective values found by the compared metaheuristics, averaged over 30 runs. It can be observed from the statistical results that the proposed AutoV also exhibits better overall performance than the others on the large-scale benchmark problems, achieving the best results on 9 out of 15 problems. It is noteworthy that although SHADE and IMODE suggest many complex search strategies for the combination of multiple operators and adaptation of parameters, they are still underperformed by AutoV that only contains a simple operator designed automatically. Therefore, the proposed AutoV offers bright prospects to the design of metaheuristics, which can potentially replace the laborious manual design process. 
\section{Conclusions} To reduce the reliance on human expertise in designing metaheuristics, this paper has analyzed the importance of translation, scale, and rotation invariance to the robustness of operators, and deduced the generic form of translation-, scale-, and rotation-invariant operators. Based on the deduced generic form, this paper has proposed a principled approach to search for high-performance operators automatically. In contrast to the automated design approaches based on existing metaheuristics, the proposed approach does not rely on any existing techniques, and it achieves competitive performance against some state-of-the-art metaheuristics on complex and large-scale optimization problems. The experimental results have demonstrated the effectiveness and potential of the automated design of variation operators, and further research on this topic is highly desirable. Firstly, it is reasonable to search for high-performance operators based on more complex functions (e.g., functions that include more parents or consider the update of velocity), where more effective operators are expected to be found. Secondly, since the proposed approach searches for operators according to their performance on a benchmark problem, it is reasonable to adopt more representative and practical problems to find high-performance operators for general problems or specific types of problems. Thirdly, it is interesting to develop novel approaches for automatically designing selection strategies for metaheuristics. \bibliographystyle{IEEEtran}
\section{#1}} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \numberwithin{equation}{section} \begin{document} \title{\textbf{Quantum Corrections for ABGB Black Hole}} \author{M. Sharif \thanks{[email protected]} and Wajiha Javed \\ Department of Mathematics, University of the Punjab,\\ Quaid-e-Azam Campus, Lahore-54590, Pakistan.} \date{} \maketitle \begin{abstract} In this paper, we study quantum corrections to the temperature and entropy of a regular Ay\'{o}n-Beato-Garc\'{\i}a-Bronnikov black hole solution by using the tunneling approach beyond the semiclassical approximation. We use the first law of black hole thermodynamics as a differential of entropy with two parameters, mass and charge. It is found that the leading order correction to the entropy is of logarithmic form. In the absence of charge, i.e., $e=0$, these corrections approximate the corresponding corrections for the Schwarzschild black hole. \end{abstract} {\bf Keywords:} Black holes; Semiclassical entropy; Quantum tunneling.\\ {\bf PACS numbers:} 04.70.Dy; 04.70.Bw; 11.25.-w \section{Introduction} According to General Relativity, a black hole (BH) absorbs all the light that hits its horizon and reflects nothing, just like a perfect black body in thermodynamics. Using quantum field theory in curved spacetime, Hawking (1974) suggested that a BH, like a black body with a finite temperature, emits radiation from the event horizon, now known as Hawking radiation. Several attempts (Hartle and Hawking 1976; Gibbons and Hawking 1977) have been made to visualize the Hawking radiation spectrum by using the quantum mechanics of a scalar particle. However, tunneling (Parikh 2004; Parikh and Wilczek 2000; Srinivasan and Padmanabhan 1999) provides the best way to visualize the source of radiation. The essential idea of the tunneling mechanism is that a particle-antiparticle pair is formed close to the horizon inside a BH. According to this picture, in the presence of an electric field, particles are able to penetrate energy barriers by following trajectories that are not allowed classically. When a particle with positive energy crosses the horizon, it appears as Hawking radiation. When a particle with negative energy tunnels inwards, it is absorbed by the BH; hence the BH mass decreases and ultimately vanishes. Similarly, the motion of the particle may be along outgoing or ingoing radial null geodesics. For outgoing and ingoing motion, the corresponding action becomes complex and real, respectively, whereas classically a particle can only fall behind the horizon. The emission rate of the tunneling particle from the BH is associated with the imaginary part of the action which, in turn, is related to the Boltzmann factor for the emission at the Hawking temperature. Cognola et al. (1995) investigated the first quantum correction to the entropy for an eternal $4D$ extremal Reissner-Nordstr\"{o}m (RN) BH by using conformal transformation techniques. Bytsenko et al. (1998a) suggested that the Schwarzschild-de Sitter BH could be generated by the back-reaction of dilaton coupled matter in the early universe, as a solution of the quantum corrected equations of motion. Bytsenko et al. (1998b) evaluated the first quantum correction to the Bekenstein-Hawking entropy by using the Chern-Simons representation of $3D$ gravity. Bytsenko et al. (2001) calculated the first quantum correction to the finite temperature partition function for a self-interacting massless scalar field by using dimensional regularization zeta-function techniques.
Elizalde et al. (1999) investigated the existence of a quantum process (anti-evaporation) opposite to the Hawking radiation (evaporation) as evidence for supersymmetry. Nojiri and Odintsov (1999a, 1999b, 2000, 2001) studied quantum properties of $2D$ charged BHs and the BTZ BH. They found a quantum corrected $2D$ charged BH solution. Also, they evaluated the quantum corrections to the mass, charge, Hawking temperature and BH entropy. They discussed quantum corrections to the thermodynamics (and geometry) of the Schwarzschild-(anti) de Sitter BHs by using the large $N$ one-loop anomaly induced effective action for dilaton coupled matter. The same authors also discussed the quantum correction to the entropy of an expanding universe. There are two modifications of the tunneling approach, namely, the Parikh-Wilczek radial null geodesic method (Parikh 2004; Parikh and Wilczek 2000) and the Hamilton-Jacobi method (Srinivasan and Padmanabhan 1999). Recently, based on the Hamilton-Jacobi method, Banerjee and Majhi (2008) developed a tunneling formalism beyond the semiclassical approximation. They computed quantum corrections to the Hawking temperature $T=\frac{\kappa_0}{2\pi}$ and the Bekenstein-Hawking entropy $S_{BH}=\frac{A}{4\hbar}$ (Bekenstein 1972). The first law of thermodynamics also holds in the context of quantum corrections. When quantum effects are considered, the area law of BH entropy should undergo corrections; according to loop quantum gravity, \begin{equation} S=S_{BH}+\alpha\ln S_{BH}+\cdots. \end{equation} Loop quantization reproduces the Bekenstein-Hawking entropy of the BH. This formalism has been applied to various BHs (Banerjee and Modak 2009; Modak 2009; Zhu et al. 2009a) and the FRW universe model (Zhu et al. 2009b). Banerjee and Modak (2009) gave a simple approach to obtain the entropy for any stationary BH. Akbar and Saifullah (2010, 2011) studied quantum corrections to the entropy and horizon area for the Kerr-Newman, charged rotating BTZ and Einstein-Maxwell dilaton-axion BHs. Recently, Larra\~{n}aga (2011a, 2011b) extended this work to a charged BH of string theory and to the Kerr-Sen BH. Majhi (2009), with his collaborator Samanta (2010), analyzed Hawking radiation as tunneling of a Dirac particle, a photon and a gravitino through an event horizon by applying the Hamilton-Jacobi method beyond the semiclassical approximation. Chen \emph{et al.} (2011) investigated the corrected Hawking temperature and entropy for various BHs, the FRW universe model and neutral black rings. Jamil and Darabi (2011) studied quantum corrections to the Hawking temperature, entropy and Bekenstein-Hawking entropy-area relation for a braneworld BH by using the tunneling approach beyond the semiclassical approximation. In a recent paper, we have explored these quantum corrections for a Bardeen regular BH (Sharif and Javed 2010). Also, we have discussed the thermodynamics of the Bardeen BH in noncommutative space (Sharif and Javed 2011). This paper investigates temperature and entropy corrections for the Ay\'{o}n-Beato-Garc\'{\i}a-Bronnikov (ABGB) BH, which generalizes the corresponding corrections for the Schwarzschild BH (Banerjee and Majhi 2008). The motivation to study the ABGB black hole is that the location of its horizons can be expressed in terms of the Lambert function, which is useful in the discussion of the extremal configurations. Outside the event horizon, this BH solution closely resembles the RN geometry in both its local and global structure.
Matyjasek (2008) described this BH solution (which exists as a perturbative solution whose various characteristics acquire corrections) by using the quadratic gravity equations. Here we skip the details of the formulation as they are given in many papers, e.g., (Sharif and Javed 2010), which contains the basic material used to evaluate corrections to the entropy and temperature. The plan of the paper is as follows. In Section \textbf{2}, we evaluate semiclassical thermodynamical quantities (temperature and entropy) for the ABGB regular BH. Section \textbf{3} provides the corrections to these quantities. Finally, in the last section, we summarize the results. \section{Thermodynamical Quantities} When particles with positive energy tunnel across the horizon, a BH loses its mass. The tunneling amplitude of particles emitted by a BH in the form of Hawking radiation can be calculated for a charged regular BH. The general line element of a spherically symmetric BH is given by \begin{equation} {ds}^2=-F{dt}^2+F^{-1}{dr}^2+r^2{d\theta}^2+r^2\sin^2\theta{d\phi}^2, \label{14} \end{equation} where $F=1-\frac{2M(r)}{r}$. This metric can be reduced to well-known BHs for special choices of $M(r)$. Ay\'{o}n-Beato and Garc\'{\i}a (1999) and Bronnikov (2000) formulated a solution of the coupled system of equations of non-linear electrodynamics and gravity representing a class of BHs. This is given by \begin{equation} M(r)=m\left[1-\tanh\left(\frac{e^2}{2mr}\right)\right],\label{15} \end{equation} where $m$ is the mass and $e$ is either the electric or magnetic charge. This solution describes a regular static spherically symmetric configuration which reduces to the Schwarzschild solution for $e=0$. The ABGB regular BH solution has a spherical event horizon at $F(r_+)=0$ or $r_+=2M(r_+)$, where $r_+$ is the horizon radius. Replacing the value of $M$, $F(r)$ takes the following form \begin{equation} F(r)=1-\frac{2m}{r}\left[1-\tanh\left(\frac{e^2}{2mr}\right)\right], \label{16} \end{equation} whose roots are given in (Matyjasek 2007, 2008), and the horizon area is (Larra\~{n}aga 2011a, 2011b) \begin{equation} A=\int\sqrt{g_{\theta\theta}g_{\phi\phi}}d\theta d\phi=4\pi r_+^2.\label{24} \end{equation} In terms of a power series, the ABGB solution turns out to be \begin{equation} F(r)=1-\frac{2m}{r}+\frac{e^2}{r^2}-\frac{e^6}{12m^2r^4}+ \textsl{O}(\frac{1}{r^6}). \end{equation} Notice that $F(r)$ differs from the Reissner-Nordstr\"{o}m (RN) solution by terms of order $\textsl{O}(e^6)$. For small $e$, we can neglect terms of order $\textsl{O}(e^6)$ and onward, and hence the solution exactly reduces to the RN solution. Here we assume that the terms of order $\textsl{O}(\frac{1}{r^6})$ and higher can be neglected. Thus $F(r)$ can be written as follows \begin{equation} F(r)=1-\frac{2m}{r}+\frac{e^2}{r^2}-\frac{e^6}{12m^2r^4}.\label{17} \end{equation} From this equation, $F(r)=0$ leads to a cubic equation in $m$, i.e., \begin{equation} m^3-\frac{r}{2}(1+\frac{e^2}{r^2})m^2+\frac{e^6}{24r^3}=0.
\end{equation} Using \emph{Cardan's solution} (Nickalls 1993), we can evaluate the only real root of this equation, i.e., \begin{eqnarray} m&=&\frac{r_+}{6}(1+\frac{e^2}{r_+^2})+\left[\frac{1}{2}\left(-\frac{e^6}{24r_+^3}+ \frac{2r_+^3}{216}(1+\frac{e^2}{r_+^2})^3\right.\right.\nonumber\\&+& \left.\left.\sqrt{\frac{e^{12}}{576r_+^6}-\frac{e^6}{1296} (1+\frac{e^2}{r_+^2})^3}\right)\right]^{\frac{1}{3}}+\left[\frac{1}{2} \left(-\frac{e^6}{24r_+^3}+\frac{2r_+^3}{216}(1+\frac{e^2}{r_+^2})^3 \right.\right.\nonumber\\ &-&\left.\left.\sqrt{\frac{e^{12}}{576r_+^6}- \frac{e^6}{1296}(1+\frac{e^2}{r_+^2})^3}\right)\right]^{\frac{1}{3}}.\label{h} \end{eqnarray} For $e=0$, this reduces to Schwarzschild case whose horizon radius is $r_+=2m$. Now we simplify Eq.(\ref{h}) by Taylor series upto first order approximation. The term in Eq.(\ref{h}) can be written as \begin{equation} \sqrt{\frac{e^{12}}{576r_+^6}-\frac{e^6}{1296}(1+\frac{e^2}{r_+^2})^3}\approx \frac{\sqrt{5}e^6}{72r_+^3}-\frac{e^4}{12\sqrt{5}r_+}-\frac{e^2r_+}{12\sqrt{5}}- \frac{r_+^3}{36\sqrt{5}}. \end{equation} Consequently, the value of $m$ will become \begin{eqnarray} m&\approx&\frac{r_+}{6}(1+\frac{e^2}{r_+^2})+r_+(\frac{1}{216}- \frac{1}{72\sqrt{5}})^{\frac{1}{3}} \left[1+\frac{1}{3(\frac{1}{216}-\frac{1}{72\sqrt{5}})} \left\{\frac{e^6}{r_+^6}\right.\right.\nonumber\\ &\times&\left.\left.(-\frac{7}{432}+ \frac{\sqrt{5}}{144})+ \frac{e^4}{r_+^4}(\frac{1}{72}-\frac{1}{24\sqrt{5}})+ \frac{e^2}{r_+^2}(\frac{1}{72}-\frac{1}{24\sqrt{5}})\right\}\right] \nonumber\\&+&r_+(\frac{1}{216}+\frac{1}{72\sqrt{5}})^{\frac{1}{3}} \left[1+\frac{1}{3(\frac{1}{216}+ \frac{1}{72\sqrt{5}})} \left\{-\frac{e^6}{r_+^6}(\frac{7}{432}+\frac{\sqrt{5}}{144}) \right.\right.\nonumber\\ &+&\left.\left.\frac{e^4}{r_+^4} (\frac{1}{72}+\frac{1}{24\sqrt{5}})+\frac{e^2}{r_+^2}(\frac{1}{72}+ \frac{1}{24\sqrt{5}})\right\}\right].\label{b} \end{eqnarray} Notice that if the term $(\frac{1}{216}- \frac{1}{72\sqrt{5}})^{\frac{1}{3}}$ is solved by Taylor series (which is a divergent series) then the ABGB regular BH mass exactly reduces to the Schwarzschild BH mass $m=0.5r$, otherwise it can be defined as \begin{equation} m=0.3r_++\frac{0.9e^2}{r_+}-\frac{0.8e^6}{r_+^5}+\frac{0.7e^4}{r_+^3}.\label{a} \end{equation} For $e=0$, this expression approximately leads to the Schwarzschild BH mass. The semiclassical Hawking temperature $T_H$ (Akbar 2007; Kothawala et al. 2007) is \begin{equation} T_H={\frac{\hbar F'(r)}{4\pi}}|_{r=r_+}=\frac{\hbar}{2\pi} \left(\frac{m}{r_+^2}-\frac{e^2}{r_+^3}+\frac{e^6}{6m^2r_+^5}\right), \label{18} \end{equation} where $F'(r)$ denotes derivative of $F$ with respect to $r$. The values of $F$ and $m$ are given by Eqs.(\ref{17}) and (\ref{a}) respectively. 
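The next step, substituting Eq.(\ref{a}) into Eq.(\ref{18}), can be cross-checked symbolically. The following minimal sketch (an illustrative check of ours, written in Python with SymPy, with the rounded coefficients of Eq.(\ref{a}) treated as exact rationals) performs the expansion in powers of $e$:
\begin{verbatim}
import sympy as sp

e, r, hbar = sp.symbols('e r_+ hbar', positive=True)

# horizon mass m(r_+) with the rounded coefficients 0.3, 0.9, 0.8, 0.7
m = sp.Rational(3, 10)*r + sp.Rational(9, 10)*e**2/r \
    - sp.Rational(4, 5)*e**6/r**5 + sp.Rational(7, 10)*e**4/r**3

# semiclassical temperature T_H = (hbar/2 pi)(m/r^2 - e^2/r^3 + e^6/(6 m^2 r^5))
T_H = hbar/(2*sp.pi)*(m/r**2 - e**2/r**3 + e**6/(6*m**2*r**5))

# series in e up to (and excluding) order e^6
print(sp.expand(sp.series(T_H, e, 0, 6).removeO()))
\end{verbatim}
With this rounding, the expansion gives $\frac{\hbar}{2\pi}\left(\frac{0.3}{r_+}-\frac{0.1e^2}{r_+^3}+\frac{0.7e^4}{r_+^5}\right)$, in agreement, to the quoted precision, with the result stated next.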
Substituting this value of $m$ in Eq.(\ref{18}) and simplifying, it follows that \begin{eqnarray} T_H&=&\frac{\hbar}{2\pi} \left(\frac{0.3}{r_+}-\frac{0.1e^2}{r_+^3}+\frac{0.6999e^4}{r_+^5}+\textsl{O}(\frac{1}{r_+^7}) \right).\label{c} \end{eqnarray} The electric potential is given by (Akbar and Siddiqui 2007) \begin{equation} \Phi=\frac{\partial m}{\partial e}|_{r=r_+}=-4.8 \frac{e^5}{r_+^5}+2.8\frac{e^3}{r_+^3}+1.8\frac{e}{r_+}.\label{19} \end{equation} The semiclassical entropy has the form \begin{equation} S_0(m,e)=\int\frac{dm}{T_H}=\frac{2\pi}{\hbar} \int\frac{dm}{(\frac{0.3}{r_+}-\frac{0.1e^2}{r_+^3}+\frac{0.6999e^4}{r_+^5})}.\label{20} \end{equation} To evaluate this integral, we use Eq.(\ref{a}) which yields \begin{equation} dm=\left(0.3-\frac{0.9e^2}{r_+^2}+\frac{4e^6}{r_+^6}-\frac{2.1e^4}{r_+^4}\right)dr_+.\label{21} \end{equation} Inserting this value in Eq.(\ref{20}), we obtain \begin{equation} S_0=\frac{2\pi}{\hbar}\int\left(r_+-\frac{2.6667e^2}{r_+}-\frac{10.3333e^4}{r_+^3}+ \frac{18e^6}{r_+^5}+\textsl{O} (\frac{1}{r_+^7})\right)dr_+.\label{abgb1} \end{equation} Integrating this equation, it follows that \begin{equation} S_0=\frac{\pi}{\hbar}\left(-5.3333e^2\ln r_++r_+^2+\frac{10.3333e^4}{r_+^2}-\frac{9e^6}{r_+^4}+ \textsl{O}(\frac{1}{r_+^6})\right).\label{g} \end{equation} It is interesting to mention here that for $e=0$ and $\hbar=1$, we recover the Bekenstein-Hawking area law, i.e., $S_0=\frac{A}{4}$. \section{Corrections to the Thermodynamical Quantities} Here we work out the corrected form of Hawking temperature and the corresponding entropy for the charged regular BH by taking into account the quantum effects on the thermodynamical quantities. \subsection{Hawking Temperature Corrections} The expression for the semiclassical Hawking temperature (\ref{c}) can be written as \begin{equation} T_H\approx\frac{\hbar}{2\pi} \left(\frac{0.3}{r_+}-\frac{0.1e^2}{r_+^3}+\frac{0.7e^4}{r_+^5}\right).\label{d} \end{equation} The corrected temperature is given by (Sharif and Javed 2010) \begin{equation} T=T_H\left(1-\frac{\beta\hbar}{m^2}\right),\label{22222} \end{equation} where $\beta$ is given by \begin{equation} \beta=-\frac{1}{360\pi}\left(-N_0-\frac{7}{4}N_{\frac{1}{2}} +13 N_1+\frac{233}{4}N_{\frac{3}{2}}-212 N_2\right), \end{equation} $N_s$ refers to the number of spin $s$ fields (Banerjee and Majhi 2008). Inserting the value of $m$ in Eq.(\ref{22222}), it follows that \begin{equation} T=T_H\left[1-\frac{\beta\hbar}{0.09r_+^2}\left\{1-2\left(\frac{3e^2}{r_+^2}- \frac{2.6667e^6}{r_+^6}+\frac{2.3333e^4}{r_+^4}\right)\right\}\right]. \label{e} \end{equation} Using Eq.(\ref{d}) in (\ref{e}), we obtain the quantum correction of temperature $T$ by neglecting the higher order terms \begin{equation} T\approx\frac{\hbar}{2\pi}\left(\frac{0.3}{r}-\frac{0.1e^2}{r^3}+ \frac{0.7e^4}{r^5}\right)-\frac{\beta{\hbar}^2}{0.18\pi r^2}\left( \frac{0.3}{r}-\frac{1.9e^2}{r^3}-\frac{0.1e^4}{r^5}\right).\label{f} \end{equation} For $e=0$, this implies that $T=\frac{0.15\hbar}{\pi r}\left(1- \frac{11.11\beta\hbar}{r^2}\right)$, which approaches to the corrected Hawking temperature of the Schwarzschild BH. \subsection{Entropy Corrections} Here, we evaluate the quantum corrections to the entropy of the ABGB charged regular BH. The corrected form of entropy is (Sharif and Javed 2010) \begin{equation} S(r,t)=S_0(r,t)\left(1+\sum_{i}\alpha_i\frac{\hbar^i}{m^{2i}}\right). 
\label{2} \end{equation} In terms of horizon radius, this can be written as \begin{equation} S(r,t)=S_0(r,t) \left( 1+\sum_{i}\frac{\alpha_i\hbar^i}{(0.3r_++\frac{0.9e^2}{r_+}- \frac{0.8e^6}{r_+^5}+ \frac{0.7e^4}{r_+^3})^{2i}} \right), \label{3} \end{equation} where the semiclassical entropy can be written from Eq.(\ref{g}) as \begin{equation} S_0\approx\frac{\pi}{\hbar}\left(-5.3333e^2\ln r_++r_+^2+\frac{10.3333e^4}{r_+^2}-\frac{9e^6}{r_+^4}\right). \end{equation} The corrected form of the Hawking temperature is (Sharif and Javed 2010) \begin{equation} T=T_H{\left( 1+\sum_{i}\frac{\alpha_i\hbar^i}{(0.3r_++\frac{0.9e^2}{r_+}-\frac{0.8e^6}{r_+^5}+ \frac{0.7e^4}{r_+^3})^{2i}} \right)}^{-1}.\label{i} \end{equation} Using the first law of thermodynamics, $dm=TdS+\Phi de$, we can write the condition for the exact differential as \begin{equation} \frac{\partial}{\partial e}\left(\frac{1}{T}\right)= \frac{\partial}{\partial m}\left(-\frac{\Phi}{T}\right).\label{10} \end{equation} Inserting the value of corrected temperature, it follows that \begin{eqnarray} &&\frac{\partial}{\partial e}\left(\frac{1}{T_H}\right){\left( 1+\sum_{i}\frac{\alpha_i\hbar^i}{(0.3r_++\frac{0.9e^2}{r_+}-\frac{0.8e^6}{r_+^5}+ \frac{0.7e^4}{r_+^3})^{2i}} \right)}\nonumber\\&=& \frac{\partial}{\partial m}\left(-\frac{\Phi}{T_H}\right){\left( 1+\sum_{i}\frac{\alpha_i\hbar^i}{(0.3r_++\frac{0.9e^2}{r_+}-\frac{0.8e^6}{r_+^5}+ \frac{0.7e^4}{r_+^3})^{2i}} \right)}\label{}. \end{eqnarray} Using the exact differential condition, the entropy in the integral form is \begin{equation} S(m,e)=\int\frac{1}{T}dm-\int\frac{\Phi}{T}de-\int\left(\frac{\partial}{\partial e}\left(\int\frac{1}{T}dm\right)\right)de.\label{11} \end{equation} Substituting the value of corrected temperature, the corresponding corrected entropy will become \begin{eqnarray} S(m,e)&=&\int\frac{1}{T_H}{\left( 1+\sum_{i}\frac{\alpha_i\hbar^i}{(0.3r_++\frac{0.9e^2}{r_+}-\frac{0.8e^6}{r_+^5}+ \frac{0.7e^4}{r_+^3})^{2i}} \right)}dm\nonumber\\&-&\int\frac{\Phi}{T_H}{\left( 1+\sum_{i}\frac{\alpha_i\hbar^i}{(0.3r_++\frac{0.9e^2}{r_+}-\frac{0.8e^6}{r_+^5}+ \frac{0.7e^4}{r_+^3})^{2i}} \right)}de\nonumber\\ &-&\int\left(\frac{\partial}{\partial e}\left(\int\frac{1}{T_H}\left( 1+\sum_{i}\right.\right.\right.\nonumber\\&&\left.\left.\left.\frac{\alpha_i\hbar^i} {(0.3r_++\frac{0.9e^2}{r_+}-\frac{0.8e^6}{r_+^5}+ \frac{0.7e^4}{r_+^3})^{2i}} \right)dm\right)\right)de.\label{abc} \end{eqnarray} We can simplify these complicated integrals by employing the exactness criterion (Sharif and Javed 2010). Consequently, this reduces to \begin{equation} S(m,e)=\int\frac{1}{T_H}{\left( 1+\sum_{i}\frac{\alpha_i\hbar^i}{(0.3r_++\frac{0.9e^2}{r_+}-\frac{0.8e^6}{r_+^5}+ \frac{0.7e^4}{r_+^3})^{2i}} \right)}dm\label{} \end{equation} which can be written in expanded form as \begin{eqnarray}\label{1} S(m,e)&=&\int\frac{1}{T_H}dm+\int{\frac{\alpha_1\hbar}{T_H(0.3r_++\frac{0.9e^2}{r_+}- \frac{0.8e^6}{r_+^5}+\frac{0.7e^4}{r_+^3})^{2}}}dm\nonumber\\&+& \int{\frac{\alpha_2\hbar^2}{T_H(0.3r_++\frac{0.9e^2}{r_+}-\frac{0.8e^6}{r_+^5}+ \frac{0.7e^4}{r_+^3})^{4}}}dm\nonumber\\ &+&\int{\frac{\alpha_3\hbar^3}{T_H(0.3r_++\frac{0.9e^2}{r_+}-\frac{0.8e^6}{r_+^5}+ \frac{0.7e^4}{r_+^3})^{6}}}dm+....\nonumber\\ &=&I_1+I_2+I_3+I_4+.... \end{eqnarray} The first integral $I_1$ has been evaluated in Eq.(\ref{g}) and $I_2, I_3,...$ are corrections due to quantum effects. 
Thus \begin{equation} I_2=2\pi\alpha_1\int\frac{(0.3-\frac{0.9e^2}{r_+^2}+\frac{4e^6}{r_+^6}- \frac{2.1e^4}{r_+^4})}{( \frac{0.3}{r_+}-\frac{0.1e^2}{r_+^3}+\frac{0.7e^4}{r_+^5})(0.3r_++ \frac{0.9e^2}{r_+}-\frac{0.8e^6}{r_+^5}+\frac{0.7e^4}{r_+^3})^2}dr_+, \label{} \end{equation} \begin{equation} I_3=2\pi\alpha_2\hbar\int\frac{(0.3-\frac{0.9e^2}{r_+^2}+\frac{4e^6}{r_+^6}- \frac{2.1e^4}{r_+^4})}{( \frac{0.3}{r_+}-\frac{0.1e^2}{r_+^3}+\frac{0.7e^4}{r_+^5})(0.3r_++\frac{0.9e^2}{r_+}- \frac{0.8e^6}{r_+^5}+\frac{0.7e^4}{r_+^3})^4}dr_+. \label{} \end{equation} In general, we can write for $k>3$ \begin{eqnarray} I_k&=&2\pi\alpha_{k-1}\hbar^{k-2} \int\left(\frac{(0.3-\frac{0.9e^2}{r_+^2}+\frac{4e^6}{r_+^6}-\frac{2.1e^4}{r_+^4})}{( \frac{0.3}{r_+}-\frac{0.1e^2}{r_+^3}+\frac{0.7e^4}{r_+^5})}\right.\nonumber\\ &\times&\left.\frac{1}{(0.3r_++ \frac{0.9e^2}{r_+}-\frac{0.8e^6}{r_+^5}+\frac{0.7e^4}{r_+^3})^{2(k-1)}}\right)dr_+. \label{} \end{eqnarray} Replacing all the values in Eq.(\ref{1}), it follows that \begin{eqnarray} S(m,e)&=&2\pi\hbar^{-1}\int\frac{(0.3-\frac{0.9e^2}{r_+^2}+\frac{4e^6}{r_+^6}- \frac{2.1e^4}{r_+^4})}{( \frac{0.3}{r_+}-\frac{0.1e^2}{r_+^3}+\frac{0.7e^4}{r_+^5})}dr_+\nonumber\\&+& 2\pi\alpha_1\int\frac{(0.3-\frac{0.9e^2}{r_+^2}+\frac{4e^6}{r_+^6}-\frac{2.1e^4}{r_+^4})}{( \frac{0.3}{r_+}-\frac{0.1e^2}{r_+^3}+\frac{0.7e^4}{r_+^5})(0.3r_++\frac{0.9e^2}{r_+}- \frac{0.8e^6}{r_+^5}+ \frac{0.7e^4}{r_+^3})^2}dr_+\nonumber\\&+&\sum_{k>2} 2\pi\alpha_{k-1}\hbar^{k-2} \int\left(\frac{(0.3-\frac{0.9e^2}{r_+^2}+\frac{4e^6}{r_+^6}-\frac{2.1e^4}{r_+^4})}{( \frac{0.3}{r_+}-\frac{0.1e^2}{r_+^3}+\frac{0.7e^4}{r_+^5})}\right.\nonumber\\&\times&\left. \frac{1}{(0.3r_++\frac{0.9e^2}{r_+}- \frac{0.8e^6}{r_+^5}+\frac{0.7e^4}{r_+^3})^{2(k-1)}}\right)dr_+. \label{123asbc} \end{eqnarray} This gives the quantum correction to the entropy for a ABGB charged BH. For $e=0$, Eq.(\ref{123asbc}) reduces to \begin{equation} S=\frac{A}{4\hbar}+\frac{\pi\alpha_1{\ln A}}{(0.3)^2}-\frac{4\pi^2\hbar\alpha_2}{(0.3)^4A}+...,\label{efg} \end{equation} where $A$ is given by Eq.(\ref{24}). This is approximately similar to the corrected entropy of the Schwarzschild BH (Banerjee and Majhi 2008). It is worth mentioning here that the first term of Eq.(\ref{efg}) is the semiclassical Bekenstein-Hawking area law, i.e., $S_{BH}=\frac{A}{4\hbar}$, while the remaining terms are due to quantum corrections. Thus, $S_{BH}$ is modified by quantum effects. Integrating Eq.(\ref{123asbc}), it follows that \begin{eqnarray} S(m,e)&=&\pi\hbar^{-1}\left(-5.3333e^2\ln r_++r_+^2+\frac{10.3333e^4}{r_+^2} \right.\nonumber\\ &-&\left.\frac{9e^6}{r_+^4}\right)+\pi\alpha_1\left(22.22\ln r_++ \frac{96.22e^2}{r_+^2}\right.\nonumber\\ &+&\left.\frac{27.78e^4}{r_+^4}\right)+\frac{2\pi\hbar{\alpha_2}}{(0.3)^4} \left(-\frac{0.5}{r_+^2}+\frac{3.67e^2}{r_+^4}\right)+.... 
\label{123456} \end{eqnarray} The entropy (\ref{123asbc}) in terms of $A$ is given as follows \begin{eqnarray} S(m,e)&=&\frac{\hbar^{-1}}{4}\int\frac{(1-\frac{3e^2}{(\frac{A}{4\pi})}+ \frac{13.3333e^6}{(\frac{A}{4\pi})^3}-\frac{7e^4}{(\frac{A}{4\pi})^2})} {(1-\frac{0.3333e^2}{(\frac{A}{4\pi})}+\frac{2.3333e^4}{(\frac{A}{4\pi})^2})}dA \nonumber\\&+& \frac{\alpha_1\pi}{(0.3)^2} \int\left(\frac{(1-\frac{3e^2}{(\frac{A}{4\pi})}+ \frac{13.3333e^6}{(\frac{A}{4\pi})^3}-\frac{7e^4}{(\frac{A}{4\pi})^2})} {(1-\frac{0.3333e^2}{(\frac{A}{4\pi})}+\frac{2.3333e^4}{(\frac{A}{4\pi})^2}) }\right.\nonumber\\&\times&\left.\frac{1}{(1+\frac{3e^2}{(\frac{A}{4\pi})}- \frac{2.6667e^6}{(\frac{A}{4\pi})^3}+ \frac{2.3333e^4}{(\frac{A}{4\pi})^2})^2}\right)dA\nonumber\\&+& \sum_{k>2}\frac{2^{2k-4}\hbar^{k-2}\alpha_{k-1}(\pi)^{k-1}}{(0.3)^{2k-2}} \int\left(\frac{(1-\frac{3e^2}{(\frac{A}{4\pi})}+ \frac{13.3333e^6}{(\frac{A}{4\pi})^3}-\frac{7e^4}{(\frac{A}{4\pi})^2})} {A^{k-1}(1-\frac{0.3333e^2}{\frac{A}{4\pi}}+\frac{2.3333e^4}{(\frac{A}{4\pi})^2}) }\right.\nonumber\\&\times& \left.\frac{1}{(1+\frac{3e^2}{(\frac{A}{4\pi})}- \frac{2.6667e^6}{(\frac{A}{4\pi})^3}+ \frac{2.3333e^4}{(\frac{A}{4\pi})^2})^{2k-2}}\right)dA.\label{erdsa} \end{eqnarray} \begin{eqnarray} S(m,e)&=&\frac{\hbar^{-1}}{4}\int\left(1-\frac{2.66666e^2}{(\frac{A}{4\pi})}- \frac{10.3334e^4}{(\frac{A}{4\pi})^2}+\frac{18e^6}{(\frac{A}{4\pi})^3} \right.\nonumber\\&+&\left.\textsl{O}(\frac{1}{A^4})\right)dA+ \frac{\alpha_1\pi}{(0.3)^2} \int\frac{1}{A}\left(1-\frac{8.6667e^2}{(\frac{A}{4\pi})}+ \frac{5e^4}{(\frac{A}{4\pi})^2} \right.\nonumber\\ &+&\left.\frac{60.8855e^6}{(\frac{A}{4\pi})^3}+\textsl{O}(\frac{1}{A^4})\right)dA+ \frac{4\alpha_2\pi^2\hbar}{(0.3)^4} \int\frac{1}{A^2}\left(1-\frac{14.6667e^2}{(\frac{A}{4\pi})} \right.\nonumber\\&+&\left. \frac{20.3335e^4}{(\frac{A}{4\pi})^2}+\frac{103.775e^6}{(\frac{A}{4\pi})^3}+ \textsl{O}(\frac{1}{A^4})\right)dA+....\label{erdsa} \end{eqnarray} When we take $e=0$, this equation leads to Eq.(\ref{efg}). Solving Eq.(\ref{erdsa}), we obtain \begin{eqnarray} S(m,e)&=&\frac{\hbar^{-1}}{4}\left(A+\frac{1631.78e^4}{A}-\frac{17859.6e^6}{A^2}+ \textsl{O}(\frac{1}{A^3})\right)\nonumber\\&+&\frac{\alpha_1\pi}{(0.3)^2} \left(\ln A+\frac{108.909e^2}{A}-\frac{394.785e^4}{A^2}+\textsl{O}(\frac{1}{A^3})\right) \nonumber\\&+&\frac{4\alpha_2 \pi^2 \hbar}{(0.3)^4} \left(-\frac{1}{A}+\frac{92.1536e^2}{A^2}+\textsl{O}(\frac{1}{A^3})\right)+.... \label{rtyu} \end{eqnarray} \section{Outlook} The semiclassical entropy and temperature of the BH should be corrected due to quantum effects. The tunneling formalism beyond semiclassical approximation is one of the approaches which provides quantum corrections to these thermodynamical quantities of a BH. In general, the corrected form of entropy has a logarithmic leading order term. The entropy of the BH can be calculated by using various methods. For instance, Wald's technique (Wald 1993; Iyer and Wald 1994; Jacobson et al. 1994) is suitable for higher curvature theories while some techniques (Whitt 1985; Audretsch et al. 1993; Jacobson et al. 1995) are based on the field redefinition and Visser's (1992, 1993a, 1993b) Euclidean approach. In this paper, we use quantum tunneling approach beyond semiclassical approximation to study the quantum corrections of temperature and entropy for the ABGB charged regular BH. 
For this purpose, first of all, we have evaluated the semiclassical temperature and entropy that reduce to the temperature and entropy of the Schwarzschild case (Banerjee and Majhi 2008) for $e=0$. For zero charge, the quantum corrections to the temperature and entropy approximate, respectively, the corrected temperature (Banerjee and Majhi 2008) and the corrected entropy (\ref{efg}) of the Schwarzschild case. It is interesting to mention here that the leading order entropy correction of the charged regular BH turns out to be a logarithmic term, which is expected due to quantum effects. The other terms involve ascending powers of the inverse of the area (Banerjee and Majhi 2008). The Bekenstein-Hawking entropy-area relationship also reduces to that of the Schwarzschild case when we take zero charge. It is worthwhile to note that the quantum corrections to the thermodynamical quantities, i.e., temperature and entropy, given by Eqs.(3.5) and (3.21) respectively, reduce to the classical temperature and entropy ((2.13) and (2.18)) when the correction terms vanish. We would like to point out here that the semiclassical thermodynamical quantities and their corresponding corrections have been evaluated by using a Taylor expansion up to the first order approximation. These approximations are valid only for a specific ratio of $e$ and $r$. Consequently, quantum corrections of temperature and entropy at a specific ratio of $e$ and $r$ do not represent the full class of corrections corresponding to the semiclassical values of temperature and entropy. Hence quantum corrections to the thermodynamical quantities are not larger than the semiclassical thermodynamical quantities. Finally, we mention that the entropy of this BH solution has also been discussed by Matyjasek. However, he used Wald's and Visser's Euclidean approaches (Matyjasek 2008). In our work, we have analyzed the issue of quantum corrections by using the Hamilton-Jacobi method beyond the semiclassical approximation. \vspace{0.25cm} {\bf Acknowledgment} \vspace{0.25cm} We would like to thank the Higher Education Commission, Islamabad, Pakistan, for its financial support through the {\it Indigenous Ph.D. 5000 Fellowship Program Batch-IV}. \section*{References} Akbar, M.: Chin. Phys. Lett. {\bf 24}, 1158(2007)\\ Akbar, M., Saifullah, K.: Eur. Phys. J. C {\bf 67}, 205(2010)\\ Akbar, M., Saifullah, K.: Gen. Relativ. Gravit. {\bf 43}, 933(2011)\\ Akbar, M., Siddiqui, A.A.: Phys. Lett. B {\bf 656}, 217(2007)\\ Audretsch, J., et al.: Phys. Rev. D {\bf 47}, 3303(1993)\\ Ayon-Beato, E., Garcia, A.: Phys. Lett. B {\bf 464}, 25(1999)\\ Banerjee, R., Majhi, B.R.: J. High Energy Phys. {\bf 06}, 095(2008)\\ Banerjee, R., Modak, S.K.: J. High Energy Phys. {\bf 0905}, 063(2009)\\ Bekenstein, J.D.: Nuovo Cimento Lett. {\bf 4}, 737(1972)\\ Bronnikov, K.A.: Phys. Rev. Lett. {\bf 85}, 4641(2000)\\ Bytsenko, A.A., et al.: Phys. Lett. B {\bf 443}, 121(1998a)\\ Bytsenko, A.A., et al.: Phys. Rev. D {\bf 57}, 4917(1998b)\\ Bytsenko, A.A., et al.: Phys. Rev. D {\bf 64}, 105024(2001)\\ Chen, Y.-X., et al.: Europhys. Lett. {\bf 95}, 10008(2011)\\ Cognola, G., et al.: Phys. Rev. D {\bf 52}, 4548(1995)\\ Elizalde, E., et al.: Phys. Rev. D {\bf 59}, 061501(1999)\\ Gibbons, G.W., Hawking, S.W.: Phys. Rev. D {\bf 15}, 2752(1977)\\ Hawking, S.W.: Nature {\bf 248}, 30(1974)\\ Hartle, J.B., Hawking, S.W.: Phys. Rev. D {\bf 13}, 2188(1976)\\ Iyer, V., Wald, R.M.: Phys. Rev. D {\bf 50}, 846(1994)\\ Jacobson, T., et al.: Phys. Rev. D {\bf 49}, 6587(1994)\\ Jacobson, T., et al.: Phys. Rev.
D {\bf 52}, 3518(1995)\\ Jamil, M., Darabi, F.: Int. J. Theor. Phys. {\bf 50}, 2460(2011)\\ Kothawala, D., et al.: Phys. Lett. B {\bf 652}, 338(2007)\\ Larra\~{n}aga, A.: Commun. Theor. Phys. {\bf 55}, 72(2011a)\\ Larra\~{n}aga, A.: Pramana J. Phys. {\bf 76}, 553(2011b)\\ Majhi, B.R.: Phys. Rev. D {\bf 79}, 044005(2009)\\ Majhi, B.R., Samanta, S.: Annals Phys. {\bf 325}, 2410(2010)\\ Matyjasek, J.: Phys. Rev. D {\bf 76}, 084003(2007)\\ Matyjasek, J.: Acta Physica Polonica B {\bf 39}, 3(2008)\\ Modak, S.K.: Phys. Lett. B {\bf 671}, 167(2009)\\ Nickalls, R.W.D.: The Mathematical Gazette. {\bf 77}, 354(1993)\\ Nojiri, S., Odintsov, S.D.: Phys. Rev. D {\bf 59}, 044003(1999a)\\ Nojiri, S., Odintsov, S.D.: Phys. Lett. B {\bf 463}, 57(1999b)\\ Nojiri, S., Odintsov, S.D.: Int. J. Mod. Phys. A {\bf 15}, 989(2000)\\ Nojiri, S., Odintsov, S.D.: Int. J. Mod. Phys. A {\bf 16}, 1015(2001)\\ Parikh, M.K.: Gen. Relativ. Gravit. {\bf 36}, 2419(2004) [Int. J. Mod. Phys.\\ D {\bf 13}, 2351(2004)]\\ Parikh, M.K., Wilczek, F.: Phys. Rev. Lett. {\bf 85}, 5042(2000)\\ Sharif, M., Javed, W.: J. Korean Phys. Soc. {\bf 57}, 217(2010)\\ Sharif, M., Javed, W.: Canadian J. Phys. (to appear, 2011)\\ Srinivasan, K., Padmanabhan, T.: Phys. Rev. D {\bf 60}, 024007(1999)\\ Visser, M.: Phys. Rev. D {\bf 46}, 2445(1992)\\ Visser, M.: Phys. Rev. D {\bf 48}, 583(1993a)\\ Visser, M.: Phys. Rev. D {\bf 48}, 5697(1993b)\\ Wald, R.M.: Phys. Rev. D {\bf 48}, 3427(1993)\\ Whitt, B.: Phys. Rev. D {\bf 32}, 379(1985)\\ Zhu, T., et al.: Corrected Entropy of High Dimensional Black Holes.\\ arXiv:0906.4194v2 (2009a)\\ Zhu, T., et al.: JCAP {\bf0908}, 010(2009b)\\ \end{document}
\section{Introduction} In 1696, Johann Bernoulli posed a famous problem to the readers of Acta Eruditorum: ``{\em{Given two points $A$ and $B$ in a vertical plane, what is the curve traced out by a point (particle) acted on only by gravity, which starts at $A$ and reaches $B$ in the shortest time?}}" \citep{andre1696acta}. The solution of this problem was shown to be a cycloid connecting $A$ and $B$ and was probably responsible for the inception of calculus of variations, which has had a significant impact on the evolution of modern science and engineering. We are interested in a fluid dynamic variant of this question where the point particle is replaced by a fluid-filled cylinder. The problem is interesting in that the system dynamics is a function of the time history of angular velocity of the cylinder. This memory dependence of the system is what is likely to make the brachistochrone of a fluid-filled cylinder different from a cycloid. This property has been illustrated by \citet{supekar2014dynamics} in the context of discussing the dynamics of a fluid-filled cylinder rolling down a straight inclined plane. The classical brachistochrone problem has been extended by previous researchers. For example, \citet{extbrac} introduced field varying gravity, \citet{optcontrol} included the effect of coulomb friction between the curve and the particle and \cite{cherkasov2017range} includes both coulomb and viscous friction. Most relevantly, \cite{diss} included the effect of a non-conservative fluid drag force in a simple Stokesian form (as though the point particle was moving in a universe of a viscous fluid). \cite{supekar2014dynamics} discussed the full effects of fluid dynamics on the motion of a fluid-filled cylinder moving down an inclined plane. Despite the long history of the brachistochrone problem, it has not been extended formally into the realm of fluid dynamics except by modeling drag effects phenomenologically. We attempt to close this gap in the literature by invoking the coupled equations of fluid dynamics and rigid body mechanics to calculate the brachistochrone for a fluid-filled cylindrical shell (herein referred to as the bottle) with the fluid viscosity being non-negligible. Specifically, we ask the following question. \begin{displayquote} \textit{Given two points A and B in a vertical plane, what is the curve traced out by the center of mass of a fluid-filled bottle acted on only by gravity, on which the center of mass starts at A and reaches B in the shortest time?} \end{displayquote} The answer to this question will have a broader impact on a wide range of fluid dynamic problems. Problems where a performance measure (say, total flow rate) of a fluid dynamical process is to be optimized while accounting for viscous dissipation are potential candidates which are likely to benefit from the approach discussed herein. For example, consider a related problem where one desires to calculate the shape of an axisymmetric smooth funnel of known inlet and outlet cross-sectional circular areas and of a fixed height difference between the inlet and outlet cross-sections, such that the funnel has a maximum rate of gravitational drainage of a fluid. It is well known that flow through a funnel (or, analogously, through a pressure swirl atomizer) exhibits multiple states as has been discussed by \citet{Taylor1948,Taylor1950} and \cite{Binnie1950}. This is due to the three-dimensional nature of the boundary layer \citep{Bloor1977}. 
Therefore, the brachistochrone funnel shape is likely to be a non-trivial function of the fluid dynamic parameters. A second related fluid brachistochrone problem could involve calculating the shape of a `water slide' connecting two points which maximizes the flow rate. We are motivated in this work to lay out the general optimal control based mathematical framework to obtain solutions to such problems. \section{Problem formulation} \begin{figure} \begin{center} \includegraphics[width=\textwidth,trim={0cm 0cm 0cm 0cm},clip]{fig1} \end{center} \caption{Schematic of the problem statement. The fluid-filled bottle is released from $A$ and allowed to roll towards $B$ subject to acceleration due to gravity in the $-y'$ direction. $\hat{e}_r$ and $\hat{e}_\theta$ define a local co-ordinate system fixed to the center of mass and translating with it. An illustrative instantaneous nonlinear azimuthal velocity profile is indicated for the fluid inside the bottle. The curve shown as a solid line is the surface on which the cylinder would roll without slipping, while the curve shown as a dotted line is the locus of the center of mass. The surface (solid line) is offset by a distance $R$ from the locus of the center of mass (dotted line).} \label{fig:schematic} \end{figure} Consider a bottle comprising a thin cylindrical shell of radius $R$, length $h$ and mass $M$, fully filled with a fluid of kinematic viscosity $\nu$ and density $\rho$. Now, consider such a bottle rolling down an inclined curved path $y'=f(x')$ from a point $(0,0)$ (without loss of generality) to an end point $(x'_f,y'_f)$ as shown in figure \ref{fig:schematic}. The dynamics of this body are governed by the Navier-Stokes equations for the fluid and the equations of rigid body motion for the shell. Appropriate interface conditions couple these two sets of equations. In the following section, we derive these governing equations. \subsection{Governing equations of dynamics} The axisymmetric motion of a fluid due to a rotating cylinder was studied by \cite{batchelor} in the usual polar coordinate $(r,\theta)$ description. The corresponding equations written in the stationary frame of reference fixed at the center of the cylinder are \begin{subequations} \begin{equation} \frac{\rho v^2}{r} = \frac{\partial p}{\partial r}, \label{ge1} \end{equation} \begin{equation} \frac{\partial v}{\partial t} = \nu \frac{\partial}{\partial r} \left(\frac{1}{r} \frac{\partial}{\partial r}\left(rv\right)\right). \label{ge2} \end{equation} \label{ge} \end{subequations} \noindent In the above equations, $v$ is the azimuthal velocity of the fluid and $p$ is the pressure field. If the center of mass $G$ is accelerating with an acceleration $\mathbf{a}$, a superposed pressure field $p_s$ results due to the d'Alembert force. This field is given by $\nabla p_s=\rho \mathbf{a}$. However, the velocity field would remain unaltered. The corresponding equations of motion of the bottle are \begin{subequations} \begin{equation} (m+M)R\dot{\Omega} = (m+M)g \sin{\gamma(t)}-f, \label{fbd1} \end{equation} \begin{equation} MR^2\dot{\Omega} = fR-T. \label{fbd2} \end{equation} \label{rbe} \end{subequations} \noindent Here, $m=\rho \pi R^2h$ is the mass of the fluid contained in the shell and $f$ is the contact friction. In addition, $T$ is the torque exerted on the shell wall by the fluid as a result of the wall shear stress and is given by \begin{equation} T= 2\pi \mu R^2h \left.\left( \frac{\partial v}{\partial r}-\frac{v}{r} \right) \right|_{r=R}.
\label{ShearStress} \end{equation} \noindent It is to be noted that $p_s$ does not contribute to the torque exerted by the fluid on the wall. The boundary conditions for equations \eqref{ge} are \begin{subequations} \begin{equation} v(R,t)=R \Omega(t), \label{cBC} \end{equation} \begin{equation} \frac{\partial v}{\partial r}(0,t)=0. \end{equation} \end{subequations} \noindent It can be seen that equation \eqref{cBC} provides the necessary coupling between the fluid motion and that of the shell. Since the fluid bottle is assumed to start from rest and the fluid is assumed to be quiescent initially, \begin{equation} \Omega(0)=0 \qquad \textrm{and} \qquad v(r,0)=0. \label{ic} \end{equation} Equations \eqref{ge} to \eqref{ic} pose an initial boundary value problem (IBVP) which has been solved in closed form by \citet{batchelor} for a constant angular velocity condition $(\Omega(t)=\Omega_0)$. In contrast, the angular velocity of the cylinder in our case evolves in time, as governed by equations \eqref{rbe}. \citet{supekar2014dynamics} have extended the closed form solution of \citet{batchelor} to the case of a time varying angular velocity using Duhamel's theorem. They studied the case of a cylinder rolling down an inclined plane and showed that the velocity field is given by \begin{equation} v(r,t) = \int_{0}^{t}\left( r+2R\sum_{n=1}^{\infty}\frac{J_1\left(\lambda_n\frac{r}{R}\right)}{\lambda_n J_0(\lambda_n)} \exp{\left(-\lambda_n^2 \frac{\nu (t-\tau)}{R^2}\right)} \right) \dot{\Omega}(\tau) d\tau. \label{VelocityEquation} \end{equation} \noindent A brief discussion of the formal extension of Batchelor's solution to the present case can be found in Appendix \ref{labelB}. We extend their solution to the case where the fluid-filled bottle encounters varying angles of inclination along the curved path. For our case, using equation \eqref{VelocityEquation}, equations \eqref{rbe} reduce to \begin{subequations} \begin{equation} (m+2M) R^2\dot{\Omega}(t) = -T(t) + (m+M) g R \sin(\gamma(t)), \label{t1} \end{equation} \begin{equation} T(t) = 4\pi \nu \rho R^2 h \sum_{n=1}^{\infty}{\int_{0}^{t}{\exp{\left(-\lambda^2_n\frac{\nu (t-\tau)}{R^2}\right)}\dot{\Omega}(\tau)}}d\tau. \label{t2} \end{equation} \label{TorqueEquation} \end{subequations} \noindent Here, $\lambda_n$ is the $n^{th}$ positive root of the Bessel function of the first kind $J_1$. We choose to non-dimensionalize the above equations with $\frac{R^2}{\nu}$ as the time scale, $R$ as the length scale and $\rho \pi R^2 h$ as the mass scale. The resulting non-dimensional equations are: \begin{subequations} \begin{equation} \label{eq 1} (1+2\pi_m)\dot{\Omega}(t) = -4 \mathcal{T}(t) + (1+\pi_m)\pi_g \sin(\gamma(t)), \end{equation} \noindent where, \begin{equation} \label{eq 2} \mathcal{T}(t) = \sum_{n=1}^{\infty}{\int_{0}^{t}{\exp(-\lambda^2_n(t-\tau))\dot{\Omega}(\tau)}}d\tau. \end{equation} \label{ndeqns} \end{subequations} \noindent In addition, the equations governing the kinematics of the center of mass of the fluid bottle are \begin{subequations} \begin{equation} \dot{x}(t) = \Omega(t) \cos\gamma(t), \end{equation} \begin{equation} \dot{y}(t) = -\Omega(t) \sin\gamma(t). \end{equation} \label{eq 3} \end{subequations} \noindent Two non-dimensional parameters are identified to govern the dynamics of the fluid-filled bottle. They are \begin{align} \pi_m = \frac{M}{m} \qquad \textrm{and} \qquad \pi_g=\frac{g R^3}{\nu^2}.
\label{nd} \end{align} Here, $\pi_m$ is the ratio of the cylinder mass to the fluid mass and is a measure of the relative inertia of the two masses, while $\pi_g$ is a measure of the ratio of the viscous time scale to the gravitational time scale. In addition, the co-ordinate variables $(x,y)$ are taken to be dimensionless as well, written in units of $R$. Finally, we define $(x_f,y_f)=(\frac{x'_f}{R},\frac{y'_f}{R})$ as the dimensionless terminal point of the brachistochrone. It may be noted that we are interested in calculating the trajectory traced by the center of mass of the fluid-bottle system. The actual inclined curve over which the bottle moves is obtained by offsetting the calculated curve by a distance $R$. \subsection{Variational formulation of the brachistochrone problem} We now formulate the brachistochrone problem in an optimal control framework as discussed in \cite{Liberzon}. In this framework, the position $\big(x(t),y(t)\big)$ and angular velocity $\Omega(t)$ of the fluid bottle comprise the state space, and the instantaneous inclination of the candidate curve $\gamma(t)$ is the control action that determines the trajectory of the fluid bottle. The problem is to determine the control action $\gamma(t)$ which will minimize the total time of descent ($t_f$) of the fluid bottle from a point A to point B. Mathematically, the corresponding objective function can be written as \begin{equation} \minimize_{\gamma(t) \in \left[-\frac{\pi}{2}\textrm{, }\frac{\pi}{2}\right]} J(\gamma(t))=\int_0^{t_f} dt, \label{opt} \end{equation} \noindent such that \begin{equation} \begin{array}{l} x(0)=x_0,\qquad y(0)=y_0,\\ x(t_f)=x_f,\qquad y(t_f)=y_f. \end{array} \label{const} \end{equation} \noindent Equation \eqref{opt} along with equations \eqref{ndeqns}, \eqref{eq 3} and \eqref{const} as constraints results in a well-posed optimal control problem for determining the brachistochrone for a fluid-filled bottle. We use direct methods of optimal control to numerically solve this optimization problem. In this approach, we discretize the equations \eqref{ndeqns} to \eqref{const} using an explicit forward difference scheme in time. It must be recalled that the velocity field is known in closed form and its effect on the motion of the fluid bottle is completely described by equation \eqref{eq 2}. \section{Results} \begin{figure} \begin{center} \centering \captionsetup[subfigure]{labelformat=empty,labelsep=none} \subfloat[(a)]{\includegraphics[width=\textwidth]{fig2a}}\\ \subfloat[(b)]{\includegraphics[width=0.8\textwidth,clip]{fig2b}} \caption{Figure (a) shows brachistochrone curves (shown in dotted lines) computed for terminal points $(x_f,y_f)$ at $B_1(2,-3)$, $B_2(5,-1)$ and $B_3(10,-1)$ and $\pi_g=10^3$ and $\pi_m=0.1$. The solid black curve in each case is the brachistochrone for a non-dissipative point mass, i.e., a cycloid passing through the origin and the terminal point.
Figure (b) is similar to (a) except for $(x_f,y_f)$ at $B_4(250,-10)$, $B_5(500,-10)$ and $B_6(1000,-10)$. The brachistochrone of the fluid bottle shows a significant deviation from a cycloid for large $x_f$ (see Figure (b)).} \label{fig2} \end{center} \end{figure} \begin{figure} \begin{center} \centering \captionsetup[subfigure]{labelformat=empty,labelsep=none} \subfloat[(a)]{\includegraphics[width=0.5\textwidth,trim={13cm 0cm 13cm 0cm},clip]{fig3a}} \subfloat[(b)]{\includegraphics[width=0.5\textwidth,trim={12.2cm 0cm 12.2cm 0cm},clip]{fig3b}} \caption{Plots (a) and (b) show the variation of the brachistochrone with $\pi_g = \frac{g R^3}{\nu^2}$ at $\pi_m = 0.1$ and $\pi_m = \frac{M}{m}$ at $\pi_g = 10^3$ respectively for $(x_f,y_f)=(10,-1)$. The curves vary monotonically with $\pi_m$ whereas the behaviour is non-monotonic with respect to $\pi_g$.} \label{fig3} \end{center} \end{figure} We will begin with a discussion of the overall system dynamics and the competing objectives governing the optimization problem. In Bernoulli's classical problem where mechanical energy is conserved, the only objective is to minimize the time taken to travel from the start point (say $A$) to an end point (say $B$). In this case, the brachistochrone is governed by an interplay between maximization of kinetic energy and minimization of distance traversed. In contrast, the fluid dynamic variant considered herein is a non-conservative system. Therefore, a third competing objective arises where the fluid-filled bottle needs to conserve energy to reach the end point, while still maximizing kinetic energy. The nature of this objective is best understood by considering a case of large $x_f$. Under this condition, the fluid-filled bottle is required to (first) reach the end point in spite of viscous dissipation. We will first discuss the geometry of the brachistochrone in relation to a cycloid as a function of $x_f$, for both small as well as large $x_f$. Figure \ref{fig2} shows the brachistochrone for these two sets of values of $x_f$. Figure \ref{fig2}(a) shows the brachistochrone curves for three different $x_f$ in the small $x_f$ regime. All brachistochrone curves begin at $(0,0)$ and are shown as dotted lines. The solid lines are plots of a cycloid passing between the same two end-points. As can be seen in this figure, the brachistochrone is different from the cycloid for end-points $B_2(5,-1)$ and $B_3(10,-1)$ while it is almost coincident for end point $B_1(2,-3)$. In addition, one can observe that the initial slope of the curve near $(0,0)$ is steeper for the cycloid than for the brachistochrone of the fluid-filled bottle. This is a result of the restraining effect of the drag torque in the fluid-filled bottle. For end-points $B_2$ and $B_3$, the brachistochrone requires that the bottle as well as the fluid inside accelerate and then decelerate. In contrast, for end-point $B_1$, the fluid as well as the bottle are only required to accelerate. As a consequence of this physics, fluid dynamic memory effects characterized by the integral in equation \eqref{eq 2} begin to play a dominant role. Therefore, for a given starting point, as $y_f$ decreases while maintaining $x_f$ constant, the brachistochrone tends to approach a cycloid. Figure \ref{fig2}(b) shows the brachistochrone curves for three different $x_f (= 250, 500, 1000)$ in the large $x_f$ regime. $y_f=-10$ in all three cases. As can be seen, the brachistochrone curves show increasing deviation from a cycloid as $x_f$ increases.
The undershoot in the brachistochrone is also lower because the objective of the cylinder (for large $x_f$) is increasingly focused on just reaching the end point (while still minimizing the time of travel). \subsection{Effect of $\pi_m$ and $\pi_g$} We will now discuss the effect of the two dimensionless parameters, $\pi_m$ and $\pi_g$ defined in equation \eqref{nd}, on the geometry of the brachistochrone. Figure \ref{fig3}(a) shows the brachistochrone curves between $A(0,0)$ and $B(10,-1)$ for varying values of $\pi_g$ while $\pi_m$ was maintained constant at $0.1$. As $\pi_g$ increases, the brachistochrone begins to deviate from a cycloid. Beyond a critical value, it begins to approach the cycloid again. In other words, the effect of $\pi_g$ is non-monotonic. This is because of the physics at the asymptotic limits of $\pi_g \to 0, \infty$. $\pi_g \to 0$ is equivalent to $\nu \to \infty$. Under this condition, the system behaves like a rigid cylinder, where it is well known that the brachistochrone is a cycloid. When $\nu \to 0$, again, the fluid plays no role and the brachistochrone coincides with a cycloid. While mathematically allowed, one must observe caution in taking the $\nu \to 0$ limit, since non-axisymmetric instabilities could play a role in determining the flow field. A mathematical analysis of these asymptotic limiting cases is included in Appendix \ref{labelA}. Figure \ref{fig3}(b) shows the brachistochrone curves between the same two points for varying values of $\pi_m$ while $\pi_g$ is maintained constant at $10^3$. $\pi_m>1$ indicates that the mass of the bottle is greater than that of the fluid. Not surprisingly, as $\pi_m \to \infty$, the brachistochrone coincides with a cycloid because the bottle inertia dominates the system dynamics. As $\pi_m$ decreases, the brachistochrone curves show increasing deviation from a cycloid. It would be interesting to identify the range of $\pi_m$ and $\pi_g$ values where the brachistochrone for a fluid bottle is likely to deviate from a cycloid passing through the same start and end points. In order to quantify this deviation, a geometric norm $\delta$ is defined as \begin{equation}\label{deviation} \delta = \int_{0}^{x_f} \left|f(x) - f_{c}(x)\right| dx. \end{equation} \noindent Here, $\delta$ is the area contained between the actual brachistochrone for the fluid bottle, $y=f(x)$, and a cycloid passing through $A(0,0)$ and $B(x_f,y_f)$, $y=f_{c}(x)$. In order to quantify a performance deviation, we also define a normalized time $T^*$ as a second measure. $T^*$ is a ratio given by \begin{equation} T^*=\frac{t_f}{t_c}, \label{kdev} \end{equation} \noindent where, $t_f$ is the total travel time of the fluid bottle on the brachistochrone between two points $A$ and $B$. $t_c$ is the time taken by a point particle on a cycloid passing through the same two points. This definition of $T^*$ is motivated by a desire to quantify the fluid's role in determining the overall objective of a brachistochrone, which is to transport a bottle between the two points in the fastest time. Since the fluid always exerts an opposing drag (see equation \eqref{eq 1}), $T^*$ is always greater than $1$. As discussed before, $\delta \to 0$ and $T^*$ asymptotes to constant values when $\pi_g \to 0, \infty$ (both for high and low viscosity fluids). From the above discussion, it is to be anticipated that at particular values of $\pi_g$, $\delta$ and $T^*$ would exhibit maxima.
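Both measures are straightforward to evaluate once a candidate curve $y=f(x)$ and its travel time $t_f$ are available. A minimal Python sketch is given below; the parametrization of the reference cycloid, the simple trapezoidal rule for equation \eqref{deviation}, and the expression of the point-particle descent time in the non-dimensional units introduced above (where the dimensionless gravitational acceleration is $\pi_g$) are implementation choices rather than part of the original formulation.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def cycloid_through(xf, yf, n=400):
    """Reference cycloid y = f_c(x) from (0,0) to (xf, yf), with yf < 0."""
    ratio = xf / abs(yf)
    fn = lambda th: (th - np.sin(th)) / (1.0 - np.cos(th)) - ratio
    theta_f = brentq(fn, 1e-6, 2.0*np.pi - 1e-6)  # final cycloid parameter
    a = abs(yf) / (1.0 - np.cos(theta_f))         # cycloid radius, in units of R
    th = np.linspace(1e-9, theta_f, n)
    return a*(th - np.sin(th)), -a*(1.0 - np.cos(th)), a, theta_f

def deviation_norm(x, f, f_c):
    """delta = integral of |f - f_c| dx, evaluated by the trapezoidal rule."""
    w = np.abs(f - f_c)
    return 0.5*np.sum((w[1:] + w[:-1])*np.diff(x))

# Benchmark for the terminal point B(10, -1) used in the text
xf, yf, pi_g = 10.0, -1.0, 1.0e3
xc, yc, a, theta_f = cycloid_through(xf, yf)
t_c = theta_f*np.sqrt(a/pi_g)   # point-particle descent time in units of R^2/nu
# Given a computed brachistochrone f(x) sampled on xc and its travel time t_f:
#   delta = deviation_norm(xc, f, yc)   and   T_star = t_f/t_c
print(a, theta_f, t_c)
\end{verbatim}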
We are interested in the behavior of the system near this value of $\pi_g$ where $\delta$ is maximum, since it is near that parametric value where fluid effects are most significant. \begin{figure} \begin{center} \centering \captionsetup[subfigure]{labelformat=empty,labelsep=none} \subfloat[(a)]{\includegraphics[width=0.5\textwidth,trim={12.2cm 0cm 12.2cm 0cm},clip]{fig4a}} \subfloat[(b)]{\includegraphics[width=0.5\textwidth,trim={9cm 0cm 9cm 0cm},clip]{fig4b}}\\ \subfloat[(c)]{\includegraphics[width=0.5\textwidth,trim={13cm 0cm 13cm 0cm},clip]{fig4c}} \subfloat[(d)]{\includegraphics[width=0.5\textwidth,trim={13.5cm 0cm 13.5cm 0cm},clip]{fig4d}} \caption{Figures (a) and (b) show the variation of $\delta$ and $T^*$ with $\pi_g = \frac{g R^3}{\nu^2}$ at different values of $\pi_m$. $\delta \to 0$ for both $\pi_g \to 0,\infty$ in (a). Also, $\delta$ is maximum for $\pi_g \approx 3980$ and is nearly invariant for all $\pi_m$. In (b), the locus of the point of maximum deviation is indicated with a red dotted curve. Figures (c) and (d) depict their variation versus $\pi_m = \frac{M}{m}$ for different values of $\pi_g$.} \label{pim-pig} \end{center} \end{figure} Figure \ref{pim-pig} is a plot of $\delta$ and $T^*$ versus $\pi_g$ and $\pi_m$. Figure \ref{pim-pig}(a) is a plot of $\delta$ versus $\pi_g$ for three different values of $\pi_m$. $\delta$ varies non-monotonically with $\pi_g$. For the case considered in this figure with a terminal point at $(10,-1)$ and $\pi_m=0.1$, the maximum deviation occurs at $\pi_g\approx 3981$. Figure \ref{pim-pig}(b) is a plot of $T^*$ versus $\pi_g$ for varying $\pi_m$. It can be seen from this plot that $T^*$ increases first and then decreases as $\pi_g$ increases. The reason for this behavior lies in the fact that in both the low and high viscosity cases, the dissipation due to the fluid vanishes. This implies that a fluid bottle in both these limits will be faster than at other intermediate $\pi_g$ values. As $\pi_m$ increases, $T^*$ becomes insensitive to $\pi_g$. Figure \ref{pim-pig}(c) is a plot of $\delta$ versus $\pi_m$ for three values of $\pi_g$. As $\pi_m$ increases, the mass of the fluid is decreasing in relation to the bottle mass. As expected, $\delta$ decreases and tends to $0$ as $\pi_m \to \infty$. The variation of the geometric norm $(\delta)$ with respect to $\pi_m$ shows a sigmoid-like behavior with an inflection point at $\pi_m\approx 1$. In the low viscosity limit, the fluid bottle is effectively a hoop with mass $M$ and a point mass with mass $m$ at the center. Whereas, in the high viscosity limit, the fluid bottle acts like a hoop of mass $M$ around a solid cylinder of mass $m$. Therefore, in these two limits, the dynamics of the fluid bottle reduce to rigid body dynamics, which leads to the brachistochrone being the cycloid itself. This is the reason the geometric norm $\delta$ vanishes in these two limits (high and low viscosity). Figure \ref{pim-pig}(d) is a plot of $T^*$ versus $\pi_m$ for the same three values of $\pi_g$. Again, as $\pi_m$ increases, $T^*$ increases and reaches an asymptotic limit of $\sqrt{2}$. The asymptotic value of $T^*$ is the square root of the ratio of the net inertial mass of a hoop to that of a particle, which is $\sqrt{2}$. See Appendix \ref{labelA} for a mathematical discussion of the origin of this limiting value.
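The quantities entering both norms follow from integrating the memory-kernel dynamics for a candidate control $\gamma(t)$. A minimal sketch of the explicit forward time-stepping described earlier is given below; rewriting the convolution in equation \eqref{eq 2} as relaxation equations for auxiliary variables $s_n$, together with the particular time step, mode truncation and test inclination, are our implementation choices.
\begin{verbatim}
import numpy as np
from scipy.special import jn_zeros

def roll_bottle(gamma_of_t, pi_m, pi_g, t_end, dt=1e-4, n_modes=20):
    """Explicit Euler integration of the non-dimensional dynamics.

    The torque term is obtained from auxiliary variables s_n obeying
    ds_n/dt = dOmega/dt - lambda_n^2 s_n, with T = sum_n s_n, which is an
    exact rewriting of the exponential-kernel convolution. The lambda_n
    are the positive zeros of J_1 entering the velocity series.
    """
    lam2 = jn_zeros(1, n_modes)**2
    Omega, x, y = 0.0, 0.0, 0.0
    s = np.zeros(n_modes)
    for k in range(int(t_end/dt)):
        gam = gamma_of_t(k*dt)
        Omega_dot = (-4.0*s.sum() + (1.0 + pi_m)*pi_g*np.sin(gam))/(1.0 + 2.0*pi_m)
        s += dt*(Omega_dot - lam2*s)
        x += dt*Omega*np.cos(gam)
        y -= dt*Omega*np.sin(gam)
        Omega += dt*Omega_dot
    return Omega, x, y

# Illustration: a fixed incline of 30 degrees (an arbitrary test control)
print(roll_bottle(lambda t: np.pi/6, pi_m=0.1, pi_g=1.0e3, t_end=0.5))
\end{verbatim}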
\begin{figure} \begin{center} \centering \captionsetup[subfigure]{labelformat=empty,labelsep=none} \includegraphics[scale=0.2,trim={5cm 0cm 45cm 0cm},clip]{fig5} \caption{Plot of dissipation fraction ($\phi$) versus $x_f$ for $y_f=-1$, $\pi_g=1000$ and $\pi_m=0.01$. Dotted lines indicate the two asymptotic regimes. For small $x_f$, $\phi \sim x_f^{0.64}$ and for large $x_f$, $\phi \sim x_f^{0.04}$.} \label{xf-diss} \end{center} \end{figure} \subsection{Role of viscous dissipation} The total initial available energy (in dimensional terms) in the bottle-fluid system is $(M+m)g|y'_f|$ in all cases. As $x_f$ increases, the brachistochrone is observed to deviate from a cycloid (see Figure \ref{fig2}(b)). We will argue herein that this is due to increasing total dissipative loss and show the consequences thereof. The shape of the optimal curve is determined by a competition between two objectives: (i) the desire to minimize travel time ($t_f$) and (ii) the need to (at least) reach the end point in the face of viscous losses. This competition is best quantified in terms of the variation of the viscous dissipation as a function of end point coordinate $x_f$. We define the total dissipation $\Phi$ as \begin{equation} \Phi=\int_0^{t_f} \mu {\left[ r \frac{\partial}{\partial r}\left( \frac{v}{r}\right)\right]}^2 dt, \end{equation} \noindent which is the total energy dissipated during the travel time $t_f$. We then calculate $\phi=\frac{\Phi}{|(M+m)gy'_f|}$, which represents the fraction of the initial potential energy that has been dissipated by viscous action. Figure \ref{xf-diss} is a plot of $\phi$ as a function of $x_f$ with $y_f$ set equal to $-1$. Two distinct asymptotic regimes can be identified for short and long $x_f$. For $x_f < 30$, $\phi \sim x_f^{0.64}$. This is the regime where the system's initial potential energy is more than sufficient to ensure that the bottle reaches the end point. Therefore, the brachistochrone is primarily governed by the time-minimization objective. On the other hand, for $x_f > 100 $, $\phi \sim x_f^{0.04}$ implying that $\phi$ is a much weaker function of $x_f$. In this regime, the brachistochrone is also governed by a desire to conserve the initial potential energy in order to ensure that the bottle reaches the end point, while still minimizing travel time. Under this action, the time-minimization objective takes a back seat in favor of reaching the end point. This is the reason for the two different slopes at the two asymptotic regimes in Figure \ref{xf-diss}. To draw an analogy to running, the bottle chooses to `sprint' for short $x_f$, while for long $x_f$, the bottle is trying to run a `marathon'. The shapes of the brachistochrone curves under these conditions are presented in Figure \ref{fig2}. The transition between the two states - where the brachistochrone is nearly a cycloid and where the brachistochrone deviates from a cycloid - happens near $x_f \approx 30$ in this case (which would be a function of $y_f$ and other system parameters). In conclusion, the fluid brachistochrone problem is significantly more interesting owing to the fact that viscous dissipation plays a non-monotonic and important role. Several interesting extensions of this study can be explored. For instance, \cite{aristoff2009elastochrone} introduced elastic forces of the inclined surface and discussed the motion of a rigid cylinder on such a straight deformable incline.
Also, \cite{balmforth2007dissipative} discussed the dissipative descent of a compound object in a gravitational field, again on a straight inclined plane. It would be interesting to explore brachistochrone versions of these problems. \section{Conclusion} We calculate the brachistochrone for the motion of a fluid-filled cylinder in a uniform gravitational field as a function of the properties of the fluid as well as the terminal points $(x_f,y_f)$. We define $\pi_g$ and $\pi_m$ as two dimensionless parameters governing the shape of the brachistochrone. $\pi_g$ is a ratio of the gravitational to viscous forces and $\pi_m$ is a ratio of the mass of the bottle to that of the fluid. We show that for $\pi_g \to 0,\infty$ as well as for $\pi_m \to \infty$, a cycloid is a good approximation of the brachistochrone. First, we find that the effects of the fluid on the deviation from a cycloid are significant only when $\pi_m<1$, i.e., when the fluid mass is greater than the mass of the bottle. We find that the maximum deviation from a cycloid occurs for $\pi_g\approx 10^3$ and $\pi_m \to 0$ for a wide range of terminal point conditions. For small $x_f$, the brachistochrone is nearly coincident with a cycloid. Under this condition, the system is not constrained by viscous dissipation to achieve a time-minimized solution. As $x_f$ increases, the bottle chooses to operate in a mode which attempts to reduce viscous losses. Under this condition, the objective is dominated by a need to reach the end condition while still minimizing travel time. All of the above analyses assume laminar flow inside the cylinder and are rigorously valid only in the high viscosity (low Reynolds number) regime. For the low viscosity regime of $\pi_g \to \infty$, this analysis can at best be treated as a heuristic. An interesting application of this work would be to study the brachistochrone motion of liquid drops on a super-hydrophobic surface as has been recently reported by \citet{tracks} and \cite{dropsbrac}. Interestingly, \cite{dropsbrac} compared the time of travel of a liquid droplet on a cycloid and a straight line. In addition, as discussed by \cite{janardan2015liquid}, liquid marbles are interesting fluid dynamic objects. Their brachistochrone motion in a gravitational field (ignoring drop deformation) is an example application where this work has practical implications. For example, a $2~mm$ diameter marble of a fluid whose kinematic viscosity is $10^{-5}~m^2s^{-1}$ corresponds to $\pi_m \approx 0$ and $\pi_g \approx 10^3$. For such a realistic drop, the deviation from a cycloid is significant and the framework described here provides a way to compute the brachistochrone. \bibliographystyle{jfm}
\section{Introduction} Black holes, for several reasons, are one of the most peculiar objects in nature. On the one hand, as mathematical constructions in general relativity (GR), they are remarkably simple in their rendition~\cite{wheeler1971introducing}, while on the other, they seem to possess thermodynamic properties~\cite{Bekenstein:1972tm,Bardeen:1973gs,Hawking:1974rv} which are usually ascribed to objects that have a microscopic structure. Over the years, the studies on black holes have bestowed us with insights into several branches of theoretical physics and mathematics. Moreover, with several recent~\cite{LIGOScientific:2018mvr,LIGOScientific:2020ibl,LIGOScientific:2021djp} and upcoming~\cite{Bailes:2021tot} gravitational-wave observations directly concerning black holes, it is expected that they may prove to be useful in verifying many important theoretical predictions in the field. A deeper understanding of black hole physics is also particularly crucial to the efforts towards a consistent theory of quantum gravity. In this regard, despite years of investigations, there is hardly any dispute that much is yet to be deciphered about black holes with quantum characteristics. The most notorious puzzle in the quantum physics of black holes is the black hole quantum information loss problem~\cite{Hawking:1976ra,Mathur:2009hf,Almheiri:2012rt}; it concerns the complete recovery of the initial quantum state that collapsed to form a black hole, from the radiation which is left behind and available to an asymptotic future observer. Several approaches have been suggested over the years to resolve this issue (see~\cite{Harlow:2014yka,Chakraborty:2017pmn,Perez:2017cmj,Raju:2020smc,RevModPhys.93.035002} for a recent review). Insights from condensed matter platforms have also resulted in intriguing quantum analogies which suggest that the final quantum state of black holes as perceived by an external observer could be a superfluid quantum condensate and, hence, leading to a viable alternative paradigm to understand the black hole evaporation process ~\cite{Manikandan:2018urq,Manikandan:2017zhw,Manikandan:2020kdx}. \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{fig1.pdf} \caption{(a) Black hole horizon as a perfect absorber, where all the modes propagating to the left of the potential barrier (with $l=2,~s=2$ in this example) are absorbed. (b) Echoes are created when the boundary condition in the near horizon region $x=x_{\delta}$ is replaced by a reflector. The reflectivity depends on the specifics of the modified gravity theory under consideration, for instance the proposal in Refs.~\cite{Bekenstein:2020mmb,Bekenstein:1995ju} suggests that the horizon area is quantized, such that the reflectivity is unity except for at the discrete frequencies given in Eq.~\eqref{spec}, where one expects absorption to occur. (c) Our proposal, where quantum interactions near the horizon facilitates mode conversions, leading to both particle-like and hole-like components for the echo.} \label{echo} \end{figure*} Further inputs to the notion that a black hole could be a condensate in a consistent quantum theory of gravity also comes from several independent considerations, see for instance~\cite{Dvali:2012rt,Dvali:2012gb,Dvali:2012en,Dvali:2011aa,Jacobson:1996zs}. Notwithstanding the elegance of such proposals and their conceptual implications, any consistent quantum description of black holes should yield measurable predictions. 
With the growing capabilities of gravitational-wave and other observations, there is increasing confidence that substantial inputs for the formulation of a quantum theory of gravity may be available in the decades to come~\cite{Amelino-Camelia:2009imt,Pikovski:2011zk,PhysRevD.99.124053,Bekenstein:2013ih,Bekenstein:2012yy,Bosso:2016ycv}. In view of this, through this work, we highlight the observational aspects of certain proposals that suggest a black hole could be a condensate of interacting microscopic degrees of freedom. In particular, since the quantum state of a condensate is characterized by very few parameters, such as the number density and macroscopic quantum phase of the condensate, such proposals are likely to have effective descriptions with a minimal number of parameters. Moreover, if available or upcoming observational facilities can probe the macroscopic quantum features of the condensate, one might gain useful insights about the quantum nature of black hole horizons. Here, we propose a scenario which precisely addresses this possibility, using the framework of the so-called gravitational echoes from a black hole spacetime~\cite{PhysRevLett.116.171101,Cardoso:2016oxy,Cardoso:2017cqb,Cardoso:2019apo,Wang:2019rcf}. The possibility of creation of multiple echoes of gravitational signals from black holes and exotic compact binaries has attracted significant attention in recent times~\cite{PhysRevLett.116.171101,Cardoso:2016oxy,Cardoso:2017cqb,Cardoso:2019apo,Wang:2019rcf,Dong:2020odp}. These echoes arise as a result of ordinary reflective boundary conditions imposed at the horizon. Multiple reflections of gravitational waves at the horizon and the photon sphere potential barrier result in a signal pattern reminiscent of the echoes formed as a result of multiple reflections of sound waves. Imposition of non-standard reflective boundary conditions at the horizon could be motivated by several proposed modifications of the black hole spacetime that arise, for instance, from quantum gravity considerations. For example, recently the authors of Ref.~\cite{Cardoso:2019apo} delineated an approach to verify the proposal in Refs.~\cite{Bekenstein:2020mmb,Bekenstein:1995ju} that a quantum black hole has a quantized horizon area spectrum, using gravitational wave observations. We review their approach in Sec.~\ref{gechoes}. In this article, we analyze the consequence of applying a different kind of boundary condition at the event horizon, where the near horizon region is treated as an Andreev reflector~\cite{Manikandan:2017zhw,Manikandan:2018urq,Manikandan:2020kdx,Jacobson:1996zs,osti_4071988}. While this consideration is primarily motivated by proposals that treat an evaporating black hole as a leaking superfluid quantum condensate, such modifications may also be understood as emerging from the gravitational self-interaction of test fields in the background of an evaporating black hole. In particular, when the test field is a tensor mode of perturbation, our analysis suggests a fundamentally new kind of gravitational wave echo that can be detected. Such modified boundary conditions may also be of relevance to exotic compact objects other than black holes such as neutron stars~\cite{Pani:2018flj,Urbano:2018nrs,Maggio:2019zyv}. See Fig.~\ref{echo} where we compare different echo frameworks with our proposal. The article is organized as follows. In Sec.~\ref{gechoes}, we review the general notion of echoes from a black hole in gravitational quantum physics.
In Sec.~\ref{sect_condensate}, we summarize arguments from the gravitational side which motivate our treatment of the near-horizon region as a mode-converting (Andreev reflecting) condensate. In Sec.~\ref{andreevbosons}, we review the Andreev reflection mechanism for a condensate of superfluid bosons. In Sec.~\ref{newkind}, we discuss how Andreev reflection can provide a novel contribution to the echo from a black hole spacetime. In Sec.~\ref{discuss}, we conclude by discussing the implications of our prediction for near-future gravitational observations. \section{Gravitational echoes\label{gechoes}} The relevant region of spacetime that we are focusing on is described by the following standard Schwarzschild metric: \begin{align} ds^{2}=-\left(1-{\frac {2GM}{r}}\right)\,dt^{2}+\left(1-{\frac {2GM}{r}}\right)^{-1}\,dr^{2}+r^{2}d\Omega ^{2}. \end{align} The master equation describing the dynamics of the perturbations of massless fields in the background of the Schwarzschild metric reads~\cite{Berti:2009kk,Regge:1957td,Thorne:1980ru,Fiziev:2005ki,Cardoso:2019apo}: \begin{align}\label{regge_wheeler} \left[\partial^2_t-\partial^2_x+V_l(x)\right]\psi(x,t)=\mathcal{S}(x,t), \end{align} where $\mathcal{S}(x,t)$ denotes a source term and we have also introduced the tortoise coordinate $x$ via $x=r+2M\log(r/2M-1)$. The effective potential $V_l$ takes the form (in the original Schwarzschild radial coordinate $r$)~\cite{Berti:2009kk,Regge:1957td,Cardoso:2019apo,Fiziev:2005ki,Thorne:1980ru}: \begin{align} V_l(r)=\left(1-\frac{2M}{r}\right)\left[\frac{l(l+1)}{r^2}+\frac{(1-s^2)2M}{r^3}\right]. \end{align} Here $s=0,1,2$ corresponds, respectively, to scalar, electromagnetic and gravitational perturbations. To resolve the dynamics of test fields described by Eq.~\eqref{regge_wheeler}, one can Fourier analyse the components of the test field $\psi(x,t)$. This results in the following form of Eq.~\eqref{regge_wheeler} in the Fourier space~\cite{Cardoso:2019apo}, \begin{align}\label{regge_wheelerw} \left[\omega^{2}-\partial^2_x+V_l(x)\right]\tilde{\psi}(x,\omega)=\tilde{\mathcal{S}}(x,\omega). \end{align} A requirement owing to causality is that the test fields $\psi(x,t)$ in the time domain obey the Sommerfeld boundary conditions $\partial_{t}\psi+\partial_{x}\psi=0$ as $x\rightarrow\infty$. This translates into the requirement that the Fourier components of test fields $\tilde{\psi}(x,\omega)$, as $x\rightarrow\infty$, behave as $\tilde{\psi}(x,\omega)\propto e^{i\omega x}$~\cite{Cardoso:2019apo}. The near horizon region is completely absorbing for a classical black hole, and therefore one traditionally assumes a completely ingoing boundary condition for test fields at the event horizon. It has been suggested that quantum mechanical corrections to the physics of the event horizon may challenge this notion. A well-known example of such a modification is the model pioneered in Refs.~\cite{Bekenstein:2020mmb,Bekenstein:1995ju}, which suggests that the horizon area $A$ is quantized, \begin{equation} A=\alpha l_{p}^{2} n, \end{equation} where $l_{p}=\sqrt{\hbar G/c^3}$ is the Planck length, $\alpha$ is a dimensionless coefficient (there is some indication that $1<\alpha<30$, see Ref.~\cite{Cardoso:2019apo}), and $n$ is an integer. Such a quantization of the area spectrum implies that the black hole area (and entropy thereof) can only change in discrete units.
Consequently, the frequency of test fields absorbed or emitted by a Schwarzschild black hole also has a discrete spectrum, given by~\cite{Bekenstein:2020mmb,Bekenstein:1995ju,Cardoso:2019apo}, \begin{equation} \omega_{n} = \frac{\alpha c\delta n}{16\pi r_{h}},\label{spec} \end{equation} where $\delta n$ indicates the change in the area quantum, $c$ is the speed of light, and $r_{h}$ is the Schwarzschild radius. A discussion of more general discrete spectra of black holes can be found in Ref.~\cite{Lochan:2015bha}. Such a quantum gravity modification for the near-horizon region implies that the Fourier components of test fields $\tilde{\psi}(x,\omega)$ satisfy a different boundary condition in the near-horizon region, given by~\cite{Cardoso:2019apo}, \begin{equation} \tilde{\psi}(x,\omega)\propto e^{i\omega x}+R(\omega)e^{-i\omega x}\quad;\quad x\sim x_{\delta}, \label{echoOld} \end{equation} where $R(\omega)$ is the modified reflectivity of the near-horizon region and $x_{\delta}$ is the position in tortoise coordinates corresponding to the Schwarzschild radial coordinate $r=r_{h}+\delta$, satisfying $\delta/r_{h}\ll 1$. For the proposal in Refs.~\cite{Bekenstein:2020mmb,Bekenstein:1995ju}, the reflectivity is expected to go to unity except at the discrete frequencies given in Eq.~\eqref{spec}, where absorption lines are expected~\cite{Cardoso:2019apo}. A consequence of a modified reflectivity $R(\omega)$ of the near-horizon region is the creation of gravitational echoes of test fields; the potential barrier $V_l(r)$ partially reflects and partially transmits test fields that are emitted or reflected from the near-horizon region, leading to an echo-like signal. As multiple reflections and transmissions occur, it is expected that the potential barrier $V_l(r)$ will have a filtering effect on the measured signal. For the proposal in Refs.~\cite{Bekenstein:2020mmb,Bekenstein:1995ju}, the measured signal at later times will have sharp absorption peaks at the frequencies $\omega=\omega_{n}$, given in Eq.~\eqref{spec}. As the test field can also be a gravitational perturbation, the prediction in Ref.~\cite{Cardoso:2019apo} is that the above filtering effect may be observed in gravitational wave detectors, and therefore could serve as a potential way to test the black hole area quantization proposed in Refs.~\cite{Bekenstein:2020mmb,Bekenstein:1995ju}. A recent work has in fact looked into the possibility of constraining the parameter $\alpha$ based on presently available gravitational wave observations~\cite{Laghi:2020rgl}. The analysis therein suggests that the information available from gravitational wave observations until October $1$, 2019 is not sufficient to fully support or disregard the proposal in Ref.~\cite{Cardoso:2019apo}. It is also worth mentioning that the black hole area quantization has interesting consequences for the inspiral phase, as has been discussed in Ref.~\cite{Datta:2021row}. Before we conclude this section, we would also like to briefly comment on the source term in Eq.~\eqref{regge_wheeler}. At first order in perturbation, a non-zero value of the source term $\mathcal{S}(x,t)$ signifies the presence of charges (like, for instance, the electromagnetic charge or mass) in the exterior of the black hole region. At higher orders in perturbation, however, the component of the stress-energy that corresponds to self-interaction can also contribute to the source term.
In the specific case of gravitational perturbations, such a contribution can arise from the gravitational-wave stress-energy and leads to, for instance, the well-known non-linear memory effect~\cite{Christodoulou:1991cr,Payne:1983rrr,Blanchet:1992br,Wiseman:1991ss}. Now imagine that the near horizon region, in fact, consists of a condensate of a particular matter field. In this case, we expect that the master equation for perturbation modes of that field contains a source term that corresponds to the contribution from the condensate. In the remainder of this article, we shall propose a toy model to study such a system. \section{Near horizon interactions and the condensate picture}\label{sect_condensate} We begin by considering a simple collapsing scenario, namely, the spherical collapse of a shell of massless scalar particles leading to the formation of a black hole. As is well known, the classical geometry describing this process can be obtained by stitching together patches of three exact solutions---(1) Minkowski spacetime (vacuum region inside the shell), (2) ingoing Vaidya spacetime (non-vacuum region inside the shell) and (3) the Schwarzschild spacetime (exterior of the shell). However, as conveyed by Hawking's seminal semiclassical arguments, the black hole also evaporates by emitting near-thermal radiation and in due course exhausts all of its mass (see Fig.~\ref{penrose}). We shall shortly focus on the in-fall of a massless scalar field mode (denoted by late infalling matter in Fig.~\ref{penrose}) into the black hole, long after the formation of the event horizon and, at the same time, early enough that not much of the black hole mass has evaporated away. Hence, the spacetime region of our interest lies somewhere inside the green circular region in Fig.~\ref{penrose}. The fact that the state of radiation in the asymptotic future is non-vacuum is usually discerned from the non-trivial Bogoliubov transformations connecting the appropriate `in' and `out' modes, as was done in Hawking's seminal work~\cite{Hawking:1975vcx}. To calculate the corresponding Bogoliubov coefficients, one first traces the evolution of positive energy modes on $\mathscr{I}^{+}$ into the past, all the way to $\mathscr{I}^{-}$. This procedure leads to a relation between the positive energy modes on $\mathscr{I}^{+}$ and $\mathscr{I}^{-}$, from which one can derive the Bogoliubov coefficients and, hence, the particle spectrum. However, tracing the evolution of the out-modes, in the conventional geometric optics approximation, ignores the gravitational interaction between the scalar field modes. It has been suggested that as a consequence of graviton mediated interactions, the Bogoliubov coefficients connecting the `in' and `out'-modes get modified, due to the non-trivial near-horizon scattering amplitude; it is also hoped that such a scenario leads to the resolution of the black hole quantum information loss problem~\cite{tHooft:1996rdg,tHooft:2015pce,Gaddam:2020mwe,Betzios:2016yaq,Betzios:2020xuj}. \begin{figure}[t] \centering \includegraphics[scale=.5]{fig2.pdf} \caption{The Penrose diagram for an evaporating black hole formed as a result of a spherical null-shell collapse. The dotted line at $r=r_{h}+\delta$, with $r_{h}$ being the Schwarzschild radius, models the interface of the strongly/weakly interacting regions of the Hawking radiation.
Alternatively, such an interface may also be relevant in approaches that model the black hole as a graviton condensate, wherein the same acquires the interpretation of a super-fluid/normal-fluid interface (see text for more detail). The focus of this work is to model the scattering of a late in-falling mode at the near-horizon region inside the green shaded circle.} \label{penrose} \end{figure} The gravitational interaction may also significantly change the dynamics of the late in-falling matter that just crosses the horizon after passing through the outgoing Hawking radiation. For convenience, we may regard two distinct manners in which the gravitational interaction manifests in this scenario. Firstly, owing to the graviton mediated interaction, the vacuum appropriate to $\mathscr{I}^{-}$ will evolve into a state that is quite different from the one generated by a simple Bogoliubov mapping of the kind considered originally by Hawking in~\cite{Hawking:1974rv}. Secondly, the gravitational field of the in-falling matter may engrave, on the Hawking radiation, the information about the in-falling state. A rather rudimentary picture for such an interaction could be visualized in the following way: the spacetime gets slightly modified by the infalling matter, leading to a slight deviation in the trajectory of the outgoing quanta and vice versa. A more formal realization of this picture was considered in Ref.~\cite{Dray:1984ha} to effectively capture the near-horizon scattering of Aichelburg-Sexl gravitational shockwaves. We shall now present a simple model to study the near-horizon dynamics of a late in-falling mode that incorporates the two above-mentioned effects of graviton mediated interactions. We start with the reasonable assumption that there is a critical distance beyond the horizon up to which the gravitational scattering between the scalar modes becomes relevant. In terms of the standard Schwarzschild radial coordinate $r$, with the event horizon at $r_h$, we shall assume this critical region of significant gravitational scattering to be within $r=r_h+\delta$, demarcated by the dotted curve in Fig.~\ref{penrose}. Therefore, the outgoing Hawking radiation has two distinct parts: (1) in which the graviton exchange among the scalar field modes is significant ($r<r_h+\delta$) and (2) in which the scalar field modes can be safely assumed to be freely propagating in the background black hole geometry. A brief detour to a problem in condensed matter is in order~\cite{Jacobson:1996zs,Manikandan:2017zhw,Manikandan:2018urq,Manikandan:2020kdx}. Recall that a superconductor-normal metal interface is characterized by a region of rapid decrease in the effective coupling of the phonon mediated interaction between electrons. On the superconductor side, the interaction gives rise to a gap, while the same is absent on the normal metal side. An analogous scenario occurs at a superfluid/normal fluid interface as well~\cite{2009PhRvL.102r0405Z}. In light of this, here we explore the possibility of modelling the surface at $r=r_h+\delta$, separating the outgoing Hawking radiation, as an interface between a condensate in the region $r<r_h+\delta$ and a coherent, almost free, distribution of particles in the region $r>r_h+\delta$. We also henceforth refer to the interactions within the aforementioned region as the ``horizon proximity effect", motivated by the analogy to the condensed matter setting~\cite{Manikandan:2017zhw,Manikandan:2018urq,Manikandan:2020kdx,Jacobson:1996zs}.
Since we are considering the matter degree of freedom to be a massless scalar field, it is reasonable to imagine the condensate (in $r<r_h+\delta$) as a superfluid and the outside Hawking radiation (in $r>r_h+\delta$) as the normal fluid. A remarkable consequence of this model is that the near horizon scattering of a late in-falling mode, with the outgoing Hawking radiation, can be modelled as the scattering problem near a superfluid/normal fluid interface. Note that our analysis suggests possible generalizations to electromagnetic and gravitational perturbations as well. The considerations that are to follow may also be applied to models that treat the black hole as a graviton condensate, for instance, along the lines of~\cite{Dvali:2011aa,Dvali:2012en}. In order to provide a better understanding of how assuming a quantum condensate description for the near-horizon region modifies the dynamics of test fields, we now look at the exact dynamics of quantum fluctuations of a condensate at a superfluid/normal fluid boundary. To this end, we shall closely follow the approach in~\cite{2009PhRvL.102r0405Z}. \section{Bosonic analogue of Andreev reflection\label{andreevbosons}} The dynamics of a condensate near a superfluid/normal fluid boundary, and of its quantum fluctuations, can be described starting from the Gross-Pitaevskii equation~\cite{gross1961structure,Pitaevskii1961}: \begin{eqnarray} i\partial_t\Psi(x,t)&=&-\frac{1}{2}\partial^2_x\Psi(x,t)+V(x)\Psi(x,t)\nonumber\\&+&g(x)\left|\Psi(x,t)\right|^2\Psi(x,t),\label{GP} \end{eqnarray} which is a non-linear generalization of the Schr\"{o}dinger equation that accounts for the inter-particle interaction, with $g(x)$ being the varying coupling strength. Now, let us consider a solution of the above equation that describes a small perturbation about an equilibrium wave function $\Psi_0$ describing a condensate. Such a solution takes the following form~\cite{2009PhRvL.102r0405Z}: \begin{align} \Psi(x,t)=\Psi_0(x) e^{-i\mu t}+\delta\Psi. \end{align} The perturbation $\delta\Psi$, in turn, can be assumed to have the form~\cite{2009PhRvL.102r0405Z,Zapata:2011ze}: \begin{align} \delta\Psi=e^{-i\mu t}\sum_{j}\left[u_{j}(x)e^{-i\omega_j t}-v^*_{j}(x)e^{i\omega_j t}\right], \end{align} with $u_j$ and $v_j$ satisfying the following coupled \textit{linear} differential equations, known as the Bogoliubov-de Gennes (BdG) equations~\cite{2009PhRvL.102r0405Z,Zapata:2011ze}: \begin{align}\label{BDG} \omega_j\begin{bmatrix} u_j\\ v_j \end{bmatrix} =\begin{bmatrix} \hat{H}&-ge^{i\phi}\left|\Psi_0(x)\right|^2\\ ge^{-i\phi}\left|\Psi_0(x)\right|^2&-\hat{H}^{*} \end{bmatrix} \begin{bmatrix} u_j\\ v_j \end{bmatrix}, \end{align} where \begin{align} \hat{H}=-\frac{1}{2}\partial^2_x-\mu+V(x)+2g(x)\left|\Psi_0(x)\right|^2, \end{align} and $\phi$ denotes the phase of $\Psi_0^2$, taken to be uniform in view of the condition $\delta/r_{h}\ll 1$. Microscopic approaches to studying the superfluid ground state reveal that $\phi$ is in fact a relative phase between different coherent occupations of pairing modes in the superfluid quantum ground state~\cite{Guo:2016ddz,fetter2003quantum}. The same holds for a Bardeen-Cooper-Schrieffer (BCS) superconductor~\cite{Bardeen:1957mv}, and this permits a second-quantized description of Andreev reflections by treating the phase and the charge/particle number as conjugate observables~\cite{Manikandan:2020kdx,PhysRevB.50.3139}.
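For orientation, it is useful to note the standard bulk limits of Eq.~\eqref{BDG}; the following is textbook material and not specific to Ref.~\cite{2009PhRvL.102r0405Z}. Deep inside a uniform condensate, far from any boundary, the stationary Gross-Pitaevskii equation fixes $g\left|\Psi_0\right|^2=\mu$ (with $V=0$), and a plane-wave ansatz $u,v\propto e^{ikx}$ in Eq.~\eqref{BDG} yields the familiar Bogoliubov dispersion relation \begin{align} \omega^{2}=\frac{k^{2}}{2}\left(\frac{k^{2}}{2}+2\mu\right), \end{align} whose propagating and evanescent roots are precisely the momenta $p^{-}_{\omega}$ and $p^{+}_{\omega}$ quoted in Eq.~\eqref{k_plus_minus} below. In a normal region, where $g\left|\Psi_0\right|^2\rightarrow0$, the $u$ and $v$ equations instead decouple, with $\omega=\pm(k^{2}/2-\mu)$, leading to the momenta $k^{\pm}_{\omega}$ of Eq.~\eqref{k_plus_minus}.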
In the more generic form of Eq.~\eqref{BDG}, obtained through the replacements $\omega\rightarrow i\partial_{t}$, $u_{j}\rightarrow\bar{u}(x,t)$ and $v_{j}\rightarrow\bar{v}(x,t)$, with $\bar{u}(x,t)$ and $\bar{v}(x,t)$ generic functions of $x$ and $t$, the solutions satisfy the following continuity equation: \begin{align}\label{continuity} \partial_t\left(\rho_{u}-\rho_{v}\right)+\partial_{x}\left(J_{u}+J_v\right)=0, \end{align} where the `currents' $J_{u/v}$ and `densities' $\rho_{u/v}$ are defined as: \begin{eqnarray} \rho_{u}&=&\bar{u}(x,t)\bar{u}^*(x,t), ~~\rho_{v}=\bar{v}(x,t)\bar{v}^*(x,t),\nonumber\\ J_{u}&=&\textrm{Im}\left[\bar{u}^*(x,t)\partial_{x}\bar{u}(x,t)\right],~~J_{v}=\textrm{Im}\left[\bar{v}^*(x,t)\partial_{x}\bar{v}(x,t)\right].\nonumber\\ \end{eqnarray} An important physical insight about the nature of the $u$ and $v$ modes is revealed by Eq.~\eqref{continuity}---while the excitation of a $u$-type mode introduces a positive change in the particle density of the condensate, that of a $v$-type mode induces a negative change. Hence, one might view the $v$-type modes as the bosonic analogue of holes. In fact, for this reason, we shall henceforth refer to the $v$-type modes as hole-like excitations/modes. More arguments to substantiate this notion are also provided in~\cite{2009PhRvL.102r0405Z}. Also note that, by integrating Eq.~\eqref{continuity} over all $x$ with the boundary condition that the currents vanish at infinity, we arrive at the condition $\int (\rho_{u}-\rho_{v})dx=\text{constant}$; the constant can be set to one as a normalization. Therefore the BdG equations also imply the Bogoliubov completeness relation for their components. When $\mu>0$, one can design a scenario in which the condensate is depleting by leaking into an ambient background of coherently distributed particles. Such a scenario was analysed in~\cite{2009PhRvL.102r0405Z} by Zapata and Sols, wherein it was also shown that the system exhibits a bosonic analogue of Andreev reflection. To briefly review this finding here, we start by assuming the following simple form for the potential: \begin{align} V(x)=Z\delta(x),\label{pot} \end{align} where $Z$ is the strength of the repulsive delta function at the interface. Further, assuming that the condensate is mostly confined to $x<0$, while the ambient distribution of particles is in the region $x>0$, the corresponding wave function can be approximated as: \begin{align} \left|\Psi_0(x)\right|\approx \sqrt{n_c}\Theta(-x)+\sqrt{n_b}\Theta(x),\label{eqpsi} \end{align} where $n_c$ and $n_b$ acquire the interpretation of the particle densities in the condensate and the ambient background regions, respectively, and $\Theta(x)$ is the Heaviside theta function. Following~\cite{2009PhRvL.102r0405Z}, we shall henceforth refer to the region of ambient background ($x>0$) as the normal side. Recall that on the condensate side we have $n_cg(x)\rightarrow\mu$, while on the normal side we have $n_bg(x)\rightarrow0$. With these inputs, one finds that a stationary solution of positive energy $\omega$ can be expanded in the set of vectors $\left\{\mathbf{e}_{u},\mathbf{e}_{v}\right\}$ on the normal side and $\left\{\mathbf{e}^{(+)}_{\omega},\mathbf{e}^{(-)}_{\omega}\right\}$ on the condensate side, where: \begin{align}\label{sol1} \mathbf{e}_{u}&=\begin{bmatrix} 1\\ 0 \end{bmatrix},\qquad \mathbf{e}_{v}=\begin{bmatrix} 0\\ 1 \end{bmatrix},\\ \mathbf{e}^{(\pm)}_{\omega}&=\frac{1}{\sqrt{e^{\pm2\theta_{\omega}}+1}}\begin{bmatrix} \pm e^{i\phi}e^{\pm\theta_{\omega}}\\ 1 \end{bmatrix}.
\end{align} We have dropped the index $j$ for simplicity, introduced the parameter $\theta_{\omega}$ through the definition $\theta_{\omega}=\sinh^{-1}(\omega/\mu)$, and retained the convenient $2\times 1$ matrix notation introduced in Eq.~\eqref{BDG}. \begin{widetext} The particular stationary solution $\left(u_{\omega}(x),v_{\omega}(x)\right)$ that describes a stream of quasi-particles being incident on the condensate region from the normal side takes the following form: \begin{align} \mathbf{e}_{u}u_{\omega}(x)+\mathbf{e}_{v}v_{\omega}(x)=\begin{cases} \mathbf{e}_u\left(e^{-ik^{+}_{\omega}x}+r_ne^{ik^{+}_{\omega}x}\right)+\mathbf{e}_{v}r_ae^{ik^{-}_{\omega}x}\quad&;\quad x>0\\ t_p \mathbf{e}^{(+)}_{\omega}e^{-ip^{-}_{\omega}x}+t_e\mathbf{e}^{(-)}_{\omega}e^{+p^{+}_{\omega}x}\quad&;\quad x<0 \end{cases} \end{align} \end{widetext} where the different momenta are given as follows: \begin{align}\label{k_plus_minus} k^{\pm}_{\omega}&=\sqrt{2\mu\left(1\pm\frac{\omega}{\mu}\right)},\\ p^{\pm}_{\omega}&=\sqrt{2}\sqrt{\sqrt{\omega^{2}+\mu^{2}}\pm \mu}. \end{align}The scattering amplitudes $r_n$, $t_p$ and $t_e$ correspond to normal reflection, transmission into the condensate and excitation of an evanescent mode, respectively~\cite{2009PhRvL.102r0405Z}. The amplitude of normal reflection $r_{n}$ will be nonzero for nonzero $Z$, the strength of the repulsive delta function at the interface. The situation we consider will have a non-zero $r_{n}$ by default, because one requires $Z\gg\sqrt{\mu}$ to ensure that the wavefunction in Eq.~\eqref{eqpsi} closely matches a stable solution of Eqs.~\eqref{GP} and \eqref{pot}~\cite{2009PhRvL.102r0405Z}. In addition, one finds that there is one more scattering amplitude, namely $r_a$, which is non-zero in general; when $\mu>0$ and $|\omega|<\mu$, this amplitude corresponds to the excitation of a propagating $v$-type (hole-like) mode on the normal side. Hence, the implication of $r_a\neq0$ is that there is a non-zero amplitude for the process of an incoming particle-like mode to get `absorbed' by the condensate, accompanied by the excitation of a propagating hole-like mode on the normal side---in close analogy to the well-known Andreev reflection process at superconductor/normal-metal interfaces~\cite{osti_4071988}. In light of this, $r_a$ is referred to as the amplitude for Andreev reflection~\cite{2009PhRvL.102r0405Z}. An important observation concerning this amplitude, relevant to our discussion, is that the ratio $r_a/r_n$ has the following form: \begin{align}\label{andreev_by_normal} \frac{r_a}{r_n}=e^{-i\phi}\left[e^{i\sigma(\omega,\mu,Z)}f(\omega,\mu,Z)\right], \end{align} where $\sigma(\omega,\mu,Z)$ and $f(\omega,\mu,Z)$ are real-valued functions of their arguments, independent of $\phi$. It also follows from the continuity equation [Eq.~\eqref{continuity}] and the completeness relation for the $u$ and $v$ components that the ratio $|r_{a}/r_{n}|\leq 1$. Eq.~\eqref{andreev_by_normal} also implies that Andreev reflection provides a promising window to probe the relative phase added by a condensate, which in turn is related to the macroscopic quantum state of the condensate. In the superconducting case, it has been suggested that the phase added upon Andreev reflections will have measurable consequences in the current fluctuations observable in a superconductor-normal metal-superconductor junction~\cite{PhysRevB.50.3139}.
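To make the above scattering problem fully explicit, note that for the step profile of Eq.~\eqref{eqpsi} the amplitudes $(r_{n},r_{a},t_{p},t_{e})$ follow from four linear matching conditions at $x=0$: continuity of $u$ and $v$, together with the derivative jump $\psi'(0^{+})-\psi'(0^{-})=2Z\psi(0)$ implied by the delta-function potential in Eq.~\eqref{pot}. The following minimal numerical sketch solves this linear system; it is our own illustration of the construction (function name, parameter values and matching conventions are ours, and it should not be read as code from Ref.~\cite{2009PhRvL.102r0405Z}):

\begin{verbatim}
import numpy as np

def andreev_amplitudes(omega, mu=1.0, Z=5.0, phi=0.3):
    # Amplitudes (r_n, r_a, t_p, t_e) for a particle-like mode incident from
    # the normal side (x>0) on the condensate (x<0), in the step approximation
    # of Eq. (eqpsi). Matching conditions assumed: u, v continuous at x=0 and
    # psi'(0+) - psi'(0-) = 2 Z psi(0). Valid for 0 < omega < mu.
    assert 0.0 < omega < mu
    kp = np.sqrt(2*mu*(1 + omega/mu))                            # k^+_omega
    km = np.sqrt(2*mu*(1 - omega/mu))                            # k^-_omega
    pp = np.sqrt(2.0)*np.sqrt(np.sqrt(omega**2 + mu**2) + mu)    # p^+_omega
    pm = np.sqrt(2.0)*np.sqrt(np.sqrt(omega**2 + mu**2) - mu)    # p^-_omega
    th = np.arcsinh(omega/mu)
    ep = np.array([np.exp(1j*phi + th), 1.0])/np.sqrt(np.exp(2*th) + 1)    # e^(+)
    em = np.array([-np.exp(1j*phi - th), 1.0])/np.sqrt(np.exp(-2*th) + 1)  # e^(-)
    # unknowns x = (r_n, r_a, t_p, t_e); rows: u, v continuity, u, v derivative jump
    A = np.array([
        [1.0, 0.0, -ep[0], -em[0]],
        [0.0, 1.0, -ep[1], -em[1]],
        [1j*kp - 2*Z, 0.0, 1j*pm*ep[0], -pp*em[0]],
        [0.0, 1j*km - 2*Z, 1j*pm*ep[1], -pp*em[1]]], dtype=complex)
    b = np.array([-1.0, 0.0, 2*Z + 1j*kp, 0.0], dtype=complex)
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    for w in (0.2, 0.5, 0.9):
        r_n, r_a, t_p, t_e = andreev_amplitudes(w)
        print(w, abs(r_a/r_n))
\end{verbatim}

Within this sketch one can verify numerically, for $0<\omega<\mu$, that $|r_{a}/r_{n}|\leq1$ and that $r_{a}/r_{n}$ carries the overall factor $e^{-i\phi}$ of Eq.~\eqref{andreev_by_normal}.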
Experimental observation of Andreev reflections from a superfluid may face additional challenges, given that hole-like excitations are defined with respect to an outgoing, coherent background of the superfluid. For example, various decoherence mechanisms in the superfluid background can lead to attenuation of a propagating hole-like mode, which is not accounted for in the discussion above. We may account for this phenomenologically by modifying the Andreev wavevector, $k_{\omega}^{-}\rightarrow k_{\omega}^{-}+i\kappa$, where $\kappa^{-1}$ is the length scale over which the superfluid background decoheres. Before we move on, we shall emphasize certain key points concerning our discussion so far. Note that we have deliberately considered a leaky condensate. The reason for this is that we are interested in the study of quantum black holes, which are believed to be well described by leaky bosonic condensates of gravitons in certain models, as for instance in~\cite{Dvali:2012en}. For the bosonic system that we described in this section, the leakage is guaranteed by the existence of hole-like propagating modes, with the corresponding momenta given by $k^{-}_{\omega}$, as in Eq.~\eqref{k_plus_minus}. Microscopically, the leakage may also be related to Andreev reflections from the interface, which allow modes to be exchanged between the normal fluid side and the superfluid: the time reverse of the Andreev reflection of a particle-like mode incident from the normal side is the Andreev reflection of a hole-like mode incident on the superfluid from the normal side, resulting in its mode-conversion into a particle-like mode. The depletion of a condensate via such a leakage can be identified with microscopic descriptions of black hole evaporation, as discussed in Refs.~\cite{Manikandan:2017zhw,Manikandan:2018urq,Manikandan:2020kdx,Jacobson:1996zs,tHooft:1996rdg}. Next, we shall see how the above points guide us towards a fundamentally new kind of gravitational echo, when the near-horizon region of a quantum black hole is modelled as a superfluid quantum condensate. \section{Possibility of a new kind of gravitational echo\label{newkind}} Equipped with the important insights gained from our detour to a purely condensed matter scenario, we now return to our original problem of interest, viz.\ the black hole. Recall that we began this article by introducing the Schwarzschild metric as the appropriate classical description of a spherically symmetric black hole spacetime. However, once we acknowledge that there is a finer description of a quantum black hole in terms of a graviton condensate, as for instance elaborated in~\cite{Dvali:2011aa,Dvali:2012en,Dvali:2012gb,Dvali:2012rt}, it is reasonable to picture the Schwarzschild metric as an effective, coarse-grained description. The nature of the corresponding fine-grained description is as yet unknown owing, of course, to the unavailability of a consistent quantum theory of gravity. Despite this, however, it may be possible to realize a mean field description of the graviton-condensate picture of the black hole, say, in terms of an appropriate generalization of the Gross-Pitaevskii equation~\cite{Alfaro:2016iqs,Cunillera:2017kwe}. An interesting implication of this, which seems not to have been adequately appreciated previously, is that the dynamics of the perturbation modes of a black hole near the horizon must be described by appropriate BdG-like equations, as opposed to, say, Eq.~\eqref{regge_wheeler}.
Naturally, this must be understood as a suitable extrapolation of the fact that perturbations of a condensate are described by the BdG equations, as we have seen in the last section. Let us see how one can proceed with such a proposal and, most importantly, make predictions with it. Motivated by our analysis based on the BdG equations in the condensed matter scenario, we expect that the modes of perturbations of the black hole, in the condensate picture, must be represented by a pair of functions $(u_{l,\omega}(x),v_{l,\omega}(x))$. In the following, we will suppress the $l$ index for simplicity. As in the case of the bosonic condensate considered in the previous section, $u$-type excitations correspond to particle-like modes while $v$-type excitations correspond to hole-like modes. Note that the interpretation of a hole-like mode, in the black hole context, is that it describes a propagating negative particle density, with respect to the ambient background of gravitons furnished by the Hawking radiation process. The coupled linear equations satisfied by $(u_{\omega}(x),v_{\omega}(x))$ are expected to depend on the details of the near-horizon scattering of gravitons. However, in order to understand the key qualitative features of this approach, we shall make some phenomenological assumptions. Note that, in the source-free case, the near horizon limit of Eq.~\eqref{regge_wheeler} leads to the well-known dispersion relation $k=\pm\omega$ for a propagating plane wave $e^{-i\omega t+ikx}$. In contrast, the existence of hole-like excitations, inferred from our condensed-matter based insights, manifests itself as a modification of the dispersion relation near $x=x_{\delta}+0^{+}$ to $k=\pm k^{\pm}_{\omega}$, where the index $\pm$ denotes $u$- and $v$-type modes, respectively. Therefore we expect that the near horizon region imparts the following boundary condition: \begin{eqnarray} \begin{bmatrix} u_{\omega}(x)\\ v_{\omega}(x) \end{bmatrix}&\propto&\mathbf{e}_{u}\left(e^{-ik_{\omega}^{+}x}+\mathcal{R}_{n}(\omega)e^{ik^{+}_{\omega}x}\right)\nonumber\\&+&\mathbf{e}_{v}\mathcal{R}_{a}(\omega)e^{ik^{-}_{\omega}x}\quad;\quad x\sim x_{\delta}. \label{echss} \end{eqnarray} Similar to the superfluid/normal fluid example we discussed before, $\mathbf{e}_{u/v}$ can be understood as appropriate normal basis vectors in the region $x>x_{\delta}$. A modification similar to the first term in the above expression is well understood in some cases, for instance, in the context of testing the proposal that the horizon area is quantized~\cite{Bekenstein:2020mmb,Bekenstein:1995ju}, where the normal reflectivity $\mathcal{R}_{n}(\omega)$ contains signatures of horizon area quantization and imprints them on the gravitational echo signal~\cite{Cardoso:2019apo} (see Sec.~\ref{gechoes}). In comparison, the suggestion of the second term in Eq.~\eqref{echss}, proportional to $\mathcal{R}_{a}(\omega)$ and resulting from mode conversions near the event horizon, is the main contribution of this article. It is worth emphasizing that some of the existing proposals already point to such a modification; for instance, it has been pointed out that the near-horizon region may facilitate mode conversions that resolve the trans-Planckian reservoir problem at the event horizon~\cite{Jacobson:1996zs}.
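As a simple consistency check, note that setting $\mathcal{R}_{a}(\omega)=0$ in Eq.~\eqref{echss}, together with $k^{\pm}_{\omega}\rightarrow\omega$, reduces the above to the familiar single-component boundary condition $u_{\omega}(x)\propto e^{-i\omega x}+\mathcal{R}_{n}(\omega)e^{i\omega x}$ used in standard analyses of gravitational echoes from partially reflective near-horizon surfaces; the hole-like term proportional to $\mathcal{R}_{a}(\omega)$ is thus the genuinely new ingredient introduced here.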
Superconducting and superfluid quantum information mirror analogies, primarily motivated by the black hole quantum final state proposal in Ref.~\cite{Horowitz:2003he}, also suggest such a modification of the boundary condition imposed by the near-horizon region on test fields, in order to resolve the black hole quantum information loss problem~\cite{Manikandan:2017zhw,Manikandan:2018urq,Manikandan:2020kdx}. We now proceed to discuss the observable consequences of the Andreev contribution---proportional to $\mathcal{R}_{a}(\omega)$ in Eq.~\eqref{echss}---on the gravitational echo signals. Although writing down the precise form of $\mathcal{R}_{a}(\omega)$ is beyond the scope of the present article, we make the following remarks based on the known physics of the condensed matter scenario discussed in Sec.~\ref{andreevbosons}. Primarily, we expect that the echo produced in the condensate picture of a quantum black hole will have a particle-like and a hole-like component, possibly out of phase with each other, offering a new window to probe the quantum nature of black hole horizons through gravitational echo measurements. The ratio $\mathcal{R}_{a}/\mathcal{R}_{n}$ at the horizon is expected to have a structure similar to that of Eq.~\eqref{andreev_by_normal}, where $\mathcal{R}_{n}$ is required to be nonzero for a stable leaking superfluid condensate of the kind discussed in Sec.~\ref{andreevbosons}~\cite{2009PhRvL.102r0405Z}. Moreover, since Andreev reflections are mode conversions facilitated by the ground state of a quantum superfluid, the corresponding contribution to the gravitational echo signal---if it exists---will be an observable signature of quantum gravity at lower energies. We expect that the quantum filtering effect of the potential barrier $V_{l}(r)$, which gives rise to the echo signal, can also help resolve this low-energy signature of the near-horizon quantum state and permit clever experimental schemes to detect it. Confirming the presence of a hole-like component in the detected signal will require a waveform comprising a minimum of four echoes. Additionally, the Andreev contribution to the echo may also be measurable as an enhancement of the particle-like component in every other echo (with a periodicity of two echoes). The enhancement results from the Andreev reflected hole-like component getting reflected back from the potential barrier $V_{l}(r)$ and subsequently Andreev reflecting at the horizon as a particle-like mode. As the phases added upon two successive Andreev reflections cancel each other, this will be observable as an enhancement of the normally-reflected component of the measured signal with a periodicity of two echoes. Finally, our analysis also suggests possible new experiments on the condensed matter side with exciting applications. Potential hills comparable to $V_{l}(r)$ may be engineered near superfluid/normal fluid interfaces and superconductor/normal metal junctions. The dynamics of quantum fluctuations of a leaky condensate in such modified potentials can then be used to probe echo-like signals from a superfluid (superconductor)/normal fluid (normal metal) boundary, and to investigate their quantum technology applications. \section{Discussion\label{discuss}} There are several interesting proposals that attempt to describe black holes in terms of quantum condensates~\cite{Dvali:2011aa,Dvali:2012en,Dvali:2012gb,Dvali:2012rt}.
The idea that the quantum state of a black hole may be effectively characterised by a simple many-body quantum wavefunction, such as the condensate ground state of a quantum superfluid, is also in harmony with the well-known fact that black holes are characterized by very few parameters in their classical rendition (such as their mass, charge and angular momentum). Analyses along this line have also been previously employed to address some of the important issues in proposing a quantum theory of gravity---(1) the trans-Planckian reservoir problem at the event horizon~\cite{Jacobson:1996zs}, and (2) the black hole quantum information loss problem~\cite{Manikandan:2017zhw,Manikandan:2018urq,Manikandan:2020kdx}---both by assigning a mode-converting mirror property to the event horizon, facilitated by Andreev reflections, which are well known to condensed matter physicists. The present article looked at a possible observational implication of such proposals, within the framework of gravitational echoes. A minimal model describing the black hole horizon as a condensate/normal fluid boundary reveals that echoes may be created by such spacetimes with both particle-like and hole-like components, going beyond the traditional gravitational echo scenario. Beyond the interest in black hole quantum physics, it is expected that the formalism may directly apply to a wide class of exotic compact objects. Before we conclude, we would like to point out some of the shortcomings of our proposal. Primarily, note that although some independent arguments are presented as to how the black hole horizon may behave as a condensate facilitating mode conversions, the proposal still falls short of presenting an exact correspondence between the two fields. Therefore, while the predictions we make offer a new paradigm to probe existing models, they do not necessarily imply that the models themselves accurately capture the quantum physics of black hole horizons. Secondly, we use a mean-field approach developed in Ref.~\cite{2009PhRvL.102r0405Z} to describe Andreev reflections and extend it to the gravitational echo framework. While such an approach adequately captures the essential details of the scattering process, it is desirable that a microscopic description also be provided, as Andreev reflections involve mode conversions between individual modes. We defer this analysis to future work. Finally, it is likely that the computation of the Andreev contribution to a gravitational echo may pose several challenges which the present article did not address. In addition, resolving the Andreev component from the other contributions to gravitational echoes may also present certain technical challenges. Here we briefly summarize some of these issues that we think could be relevant. \begin{itemize} \item As we have mentioned in Sec. \ref{newkind}, the Andreev contribution to the echo signal can be accounted for by considering a two-component vector-like waveform, with $u$ and $v$ components, as shown in Eq.~\eqref{echss}. The evolution of this 2-component object, in the first order perturbative approximation, is expected to be governed by a second-order linear differential equation. Although in the present paper we have alluded to the expected form of this equation in the near-horizon limit, motivated by the analogy with a superfluid/normal fluid interface, the explicit form of the equation should be dictated by, among possibly several other things, the details of scattering processes in the near-horizon limit.
Such details are also expected to guide us in fixing the correct near-horizon boundary conditions for the 2-component waveform. On the other hand, we expect the boundary condition at $r\rightarrow\infty$ for the $u$-component to coincide with the standard outgoing boundary condition, while fixing that of the $v$-component can be more challenging. A possible strategy is, once again, to gain insights from an appropriate condensed-matter analogue system. To this end, we expect, to some degree of approximation, that we can model the system by introducing an appropriate external potential on the normal side of the superfluid/normal-fluid interface. \item Recall that the Andreev reflected component is defined with respect to an ambient coherent background of the leakage furnished by the early Hawking radiation. This poses two additional major challenges for the computation and detection of the Andreev reflected component of the echo signal. First, various decoherence mechanisms in the background can adversely affect the propagation of the Andreev reflected component. A likely consequence is attenuation of the Andreev reflected component, which can be accounted for phenomenologically as discussed in Sec.~\ref{andreevbosons}. The second point is the weakness of the gravitational waves from the Hawking processes themselves. Although the Andreev mechanism provides a modified boundary condition for the near-horizon region, based on proposed microscopic details of how a black hole may evaporate~\cite{Manikandan:2020kdx,Manikandan:2017zhw,Manikandan:2018urq,Jacobson:1996zs}, the contribution Hawking processes make to metric perturbations is still expected to be rather weak. This is so because the observable rate of Hawking processes would still be largely dictated by the principles of black hole thermodynamics~\cite{Page_2005}, and this rate is not expected to be challenged significantly by the microscopic details of near-horizon processes. \end{itemize} In summary, while offering a new window to probe the quantum nature of black holes through gravitational echo measurements, resolving the Andreev contribution invites considerable further study of the theoretical modeling of various gravitational echo scenarios, and of their detection through feasible experiments. \section{Acknowledgements} The work of SKM was supported by the Wallenberg Initiative on Networks and Quantum Information (WINQ). KR was supported by the Research Associateship of Indian Association for the Cultivation of Science (IACS), Kolkata, India. The authors acknowledge insightful comments from Andrew N. Jordan, Sayak Dutta, Sumantha Chakraborty, and Kabir Chakravarti.
\section{Introduction} \setcounter{equation}{0} The Nambu-Jona-Lasinio (NJL) model \cite{Nambu:1961a} has received wide attention during the last decade \cite{Alkofer:1996}, following previous attempts \cite{Skyrme:1961,Adkins:1983} to describe the nucleon in the large-$N\sub{c}$ limit on the basis of mesonic degrees of freedom. In the NJL model, the mesons appear as effective degrees of freedom, parametrizing condensates of the basic fermion fields. The basic model is a quark model with a four-fermion interaction, and is therefore nonrenormalizable. The ultraviolet divergences are handled by introducing a cutoff which stays finite. Its functional form and numerical value, therefore, are relevant for the predictions of the model. Various cutoffs have been introduced; a good review of the different possibilities and their numerical impact is given in \cite{Doring:1992sj}. Previously, the Schwinger proper-time cutoff was used almost exclusively, until it was found \cite{Diakonov:1996b} that this regularization violates the momentum sum rule for the parton distributions, and that Pauli-Villars regularization is favored in this respect. The use of Pauli-Villars regularization opens the possibility of introducing a numerical technique for computing the effective action which has been developed previously \cite{Baacke:1990sb,Baacke:1992nh} and has been applied to various physical problems involving fluctuation determinants \cite{Baacke:1994aj,Baacke:1995bk}. It is based on using Euclidean Green functions instead of a summation over levels. Computing functional determinants and other expectation values involving the quark continuum by summing over levels and using Minkowski space wave functions requires the introduction of space boundaries in order to discretize the spectrum. The associated space cutoff has to be removed, introducing a numerical limiting procedure. This limiting procedure seems to be technically well under control, as can be seen, e.g., by comparing computations of the sphaleron determinant using level summation \cite{Carson:1990rf} and using Euclidean Green functions \cite{Baacke:1994aj}. Nevertheless, level summation is certainly not a very economical technique. At the same time, an alternative computational approach offers the possibility of obtaining the predictions of the model in an independent way. The idea of replacing level summation by integrals over Euclidean Green functions is actually a rather old one. It seems to go back to Wichmann and Kroll \cite{Wichmann}. It has been used extensively in calculations of the vacuum polarization in strong fields \cite{Wichmann,Rafelski} and of the Casimir effect \cite{Balian}. A related way of circumventing the discretization of the spectrum has been proposed by Moussallam \cite{Moussallam:1989uk}, who uses Minkowski space phase shifts. There are various methods to perform self-consistent computations \cite{Reinhardt:1988}. As the self-consistent determination of the meson profiles requires solving the equation of motion for the meson field, or, equivalently, for the chiral angle, it is advantageous to be able to compute the functional derivative of the effective action with respect to the meson field. A technique for computing such derivatives using Euclidean Green functions has been set up recently \cite{Baacke:1995hw} and used for a self-consistent computation of the bubble nucleation rate in the electroweak theory \cite{Surig:1998ne}. It is the purpose of this work to transfer these techniques, {\sf mutatis mutandis}, to the NJL model.
Apart from the self-consistent computation of the mesonic profiles of the nucleon we also formulate the sea quark contributions to various other observables, such as the moment of inertia, in terms of Euclidean Green functions. This latter aspect of our work is of course important; without it, one would have to go back to level summation in computing these observables and our technique would lose much of its attractiveness. The NJL model, as considered here, is not a unique theory; there are many versions of it; the most elaborate ones \cite{Zuckert:1994} include the $\rho$, $a_1$ and $\omega$ vector mesons as well, as was done before in the Skyrme model \cite{Lacombe:1986gi,Baacke:1987fg}. However, even within the restricted class of models in which only the $\pi-\sigma$ fields are taken into account, there is a wide variety. These models differ by the kind of regularization, as mentioned above, but also by the parts of the spectrum to which it is applied. In a renormalizable theory one would apply it only to the lowest perturbative contributions; in the NJL model it is applied to the finite parts as well. Furthermore, it was usually only applied to the quark sea, and not to the valence contribution. Some models also differ by the way in which the meson degrees of freedom are varied: the variation can be extended to both $\sigma$ and $\pi$ fields, or be restricted to the ``chiral circle'', the sphere defined by $\sigma^2+\mbox{\boldmath{$\pi$}\unboldmath}^2=f_\pi^2$. Unfortunately, the existence of nucleon solutions is not a robust property of the theory; such solutions are found only within a small subset of the different versions, a situation that must be considered unsatisfactory. We will not add to the ongoing discussion (see, e.g., \cite{Golli:1998rf}); we will use the version of the model recently employed by Pobylitsa {\em et al.}\ \cite{Pobylitsa:1998tk} for computing the parton distributions. It only takes into account the $\pi$ and $\sigma$ fields, which are varied on the chiral circle only, and Pauli-Villars regularization is applied to the quark sea, not to the filled bound state. \section{The model} \setcounter{equation}{0} Starting with the NJL-Lagrangian \cite{Nambu:1961a} \begin{equation} {{\cal L}}\sub{NJL}= {\bar{\psi}}(i\gamma^{\mu}\partial_{\mu}-m)\psi+ \frac{G}{2} \left[(\bar{\psi}\psi)^2+(\bar{\psi}i\gamma_5{\mbox{\boldmath{$\tau$}\unboldmath}}\psi)^2\right] \end{equation} one obtains after the standard bosonization procedure \begin{equation} \label{boson} S\sub{NJL}=\int d ^4 x\Biggl\{ \bar{\psi} \biggl[i\gamma^{\mu}\partial_{\mu}-g\Bigl(\sigma+i{\mbox{\boldmath{$\pi$}\unboldmath}}\cdot{\mbox{\boldmath{$\tau$}\unboldmath}} \gamma_5\Bigr)\biggr] \psi-\frac{\mu^2}{2}\left(\sigma^2+{\mbox{\boldmath{$\pi$}\unboldmath}}^2\right)+ \frac{m\mu^2}{g}\sigma\Biggr\}\,. \end{equation} The parameters of the model are the fermion self-coupling $G$, the quark mass $m$, and the cutoff scale $\Lambda$. In the bosonized version these parameters appear as the quark-meson coupling $g$ and the symmetry breaking mass parameter $\mu$. They are related to the basic parameters as $g=\mu \sqrt{G}$ and $ m\mu^2=gf_\pi m_\pi^2$. The latter equation expresses $\mu$ in terms of physical constants and of the coupling $g$, which remains a free parameter. A further relation is obtained from the gradient expansion of the effective action.
The resulting kinetic term of the pion field is normalized correctly if the Pauli-Villars cutoff is fixed as \begin{equation} \Lambda=M\sqrt{\exp\left(\frac{4\pi^2}{N\sub{c} g^2}\right)}\,. \label{lambda} \end{equation} If the $\sigma-\pi$ field is varied only on the chiral circle, the second term in Eq.\ (\ref{boson}) is absent. For static pion fields the action is proportional to the time $\tau$; the two remaining parts of the effective action then contribute to the energy as \begin{eqnarray} \label{SF} E\sub{fer}&=&\frac{1}{\tau}{\rm Tr}\,\log\Bigl[{-i\gamma^{\mu}\partial_{\mu} +\mbox{\boldmath{$M$}\unboldmath}(\mbox{\boldmath{$x$}\unboldmath})}\Bigr]\,,\\ E\sub{br}&=&-{m_\pi^2f_\pi}\int d ^3x \left(\sigma-f_\pi\right)\,. \end{eqnarray} Here the trace is taken over the quark sea and - in the case of baryons - over the filled bound states. $\mbox{\boldmath{$M$}\unboldmath}(\mbox{\boldmath{$x$}\unboldmath})$ is given - on the chiral circle - by \begin{eqnarray} \mbox{\boldmath{$M$}\unboldmath}(\mbox{\boldmath{$x$}\unboldmath})&=&g\Bigl[\sigma+i\gamma_5{\mbox{\boldmath{$\tau$}\unboldmath}}\cdot{\mbox{\boldmath{$\pi$}\unboldmath}}\Bigr]\nonumber\\ &=&M\Bigl[\exp\left\{i\gamma_5{\mbox{\boldmath{$\tau$}\unboldmath}}\cdot{\mbox{\boldmath{$\phi$}\unboldmath}}(\mbox{\boldmath{$x$}\unboldmath}) \right\}\Bigr]\nonumber\\ &=&M\Bigl[\cos(|\mbox{\boldmath{$\phi$}\unboldmath}(\mbox{\boldmath{$x$}\unboldmath})|) +i\gamma_5{\mbox{\boldmath{$\tau$}\unboldmath}}\cdot{\mbox{\boldmath{$\hat{\phi}$}\unboldmath}}(\mbox{\boldmath{$x$}\unboldmath})\sin(|\mbox{\boldmath{$\phi$}\unboldmath}(\mbox{\boldmath{$x$}\unboldmath})|)\Bigr]\,, \end{eqnarray} where we have introduced the ``dynamical quark mass'' $M=g f_\pi$. With the hedgehog ansatz ${\mbox{\boldmath{$\phi$}\unboldmath}}(\mbox{\boldmath{$x$}\unboldmath})=\mbox{\boldmath{$\hat x$}\unboldmath}\vartheta(r)$ the mass becomes \begin{eqnarray} \label{mass} \mbox{\boldmath{$M$}\unboldmath}(\mbox{\boldmath{$x$}\unboldmath})&=&M\Bigl[\cos(\vartheta(r)) +i\gamma_5{\mbox{\boldmath{$\tau$}\unboldmath}}\cdot{\mbox{\boldmath{$\hat x$}\unboldmath}}\sin(\vartheta(r))\Bigr]\;. \end{eqnarray} \section{Basic relations}\label{basics} \setcounter{equation}{0} Given the profile $\vartheta(r)$, the energy of the corresponding nucleon state consists of the symmetry breaking part that can be evaluated trivially, and the contributions of valence and sea quarks. In order to evaluate the valence quark contribution we have to find the bound state energy by solving, with appropriate boundary conditions, the Dirac equation \begin{equation} (i\nu-H)\psi_0(\mbox{\boldmath{$x$}\unboldmath})=0 \;. \end{equation} Here we have introduced the Dirac Hamiltonian \begin{equation}\label{hami} H=-i\balpha\cdot\mbox{\boldmath{$\nabla$}\unboldmath}+\gamma_0 \mbox{\boldmath{$M$}\unboldmath}(\mbox{\boldmath{$x$}\unboldmath}) \;. \end{equation} The computation of the quark sea contribution is more involved. We will recall here a method introduced previously \cite{Baacke:1990sb} in which the computation of the zero point energy is related to the Euclidean Green function \begin{equation} \label{Greensum} S\sub{E}(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath}',\nu)=\sum_{\alpha}\frac{\psi_\alpha(\mbox{\boldmath{$x$}\unboldmath}) \psi_\alpha^\dagger (\mbox{\boldmath{$x$}\unboldmath}')}{-i\nu+E_\alpha}\;. 
\end{equation} The subscript $\alpha$ is a formal notation for the discrete and continuum eigenstates of the Dirac Hamiltonian $H$; we also indicate the positive energy eigenstates by $\alpha>0$, the negative ones by $\alpha<0$, and the valence eigenstate by $\alpha=0$. $S\sub{E}$ satisfies the equation \begin{equation}\label{DGL1} (i\nu -H)S\sub{E}(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath}',\nu)=-\delta^3(\mbox{\boldmath{$x$}\unboldmath}-\mbox{\boldmath{$x$}\unboldmath}')\,. \end{equation} The zero point energy \begin{equation} E_{\rm sea} = \sum_{\alpha < 0}E_\alpha \end{equation} can be computed as a contour integral around the positive imaginary axis in the complex $\nu$-plane, see Fig. 1, as \begin{equation} E_{\rm sea} = \int_{C_-} \frac{d\nu}{2\pi i}\nu {\rm Tr}\, \int d^3x S\sub{E}(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath},\nu) \;.\end{equation} Deforming the contour to run along the real $\nu$ axis, and subtracting the zero point energy of the free Dirac operator $H_0=-i\balpha\cdot\mbox{\boldmath{$\nabla$}\unboldmath}+\gamma_0M$, the integral takes the form \begin{equation} E_{\rm sea} = -i\int_{-\infty}^\infty \frac{d\nu}{2\pi}\nu {\rm Tr}\, \int d^3x \left[S\sub{E}(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath},\nu)-S\sub{E,0}(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath},\nu) \right]\;.\end{equation} The original contour, and therefore also the deformed one, takes into account all negative energy states, whether they are continuum or bound states. The only state that has to be considered separately is the positive energy bound state that is occupied by the valence quarks and, therefore, should be included as well. This can be done by deforming the contour to the one presented in Fig. 1 as a dashed line. It is, however, more convenient to write the contributions of this state separately, as we will do here. It is convenient to introduce the bosonic Green function $G\sub{E}$ via \begin{equation} S\sub{E}=(i\nu+H)G\sub{E}\, \end{equation} which satisfies \begin{equation}\label{DGL2} \left(\nu^2+H^2\right)G\sub{E}(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath}',\nu)= \left[\nu^2-\Delta+M^2+{\cal{V}}(\mbox{\boldmath{$x$}\unboldmath})\right] G\sub{E}(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath}',\nu)=\delta^3(\mbox{\boldmath{$x$}\unboldmath}-\mbox{\boldmath{$x$}\unboldmath}') \end{equation} with the potential or vertex operator \begin{equation} {\cal{V}(\mbox{\boldmath{$x$}\unboldmath})}=i\mbox{\boldmath{$\gamma$}\unboldmath}\cdot\mbox{\boldmath{$\nabla$}\unboldmath}\mbox{\boldmath{$M$}\unboldmath}(\mbox{\boldmath{$x$}\unboldmath})\,. \end{equation} In terms of $G\sub{E}$ the energy can be written as \begin{equation} \label{zeropointbos} E_0=\int_{0}^\infty \frac{d \nu}{\pi}\nu^2 \int d ^3 x {\rm {Tr}} \left[G\sub{E}(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath},\nu)-G\sub{E,0}(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath},\nu)\right]\,. \end{equation} The expressions for the zero point energy and the subsequent manipulations are formal. Even after subtracting the free zero point energy, they only make sense if properly regularized.
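We also record, for completeness, the elementary step leading from the fermionic to the bosonic representation of the zero point energy: inserting $S\sub{E}=(i\nu+H)G\sub{E}$ into the $\nu$ integral above, the piece linear in $H$ drops out, since $G\sub{E}(\nu)=(\nu^2+H^2)^{-1}$ is even in $\nu$ and is multiplied by the odd factor $\nu$, so that formally \begin{equation} E_{\rm sea} = \int_{-\infty}^\infty \frac{d\nu}{2\pi}\,\nu^2 \int d^3x\, {\rm Tr} \left[G\sub{E}(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath},\nu)-G\sub{E,0}(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath},\nu)\right] \;, \end{equation} which, the integrand being even in $\nu$, reproduces Eq.\ (\ref{zeropointbos}).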
In order to understand the divergences of the subtracted zero point energy we use the resolvent expansion of the Green function with respect to the potential ${\cal V}$; inserting it into the expression for the zero point energy we obtain \begin{equation} \label{e0pert} E_0=\sum_{n=1}^\infty{\frac{(-1)^n}{2n} {\rm Tr}\, \int \frac{d\nu}{2\pi} \int \prod_{i=1} ^{n}{d^3x_iG\sub{E,0}(\mbox{\boldmath{$x$}\unboldmath}_i-\mbox{\boldmath{$x$}\unboldmath}_{i-1},\nu){\cal V}(\mbox{\boldmath{$x$}\unboldmath}_i)}} \end{equation} with $x_n=x_0$. We have subtracted the free zero point energy by omitting the zeroth order in the sum over $n$. The first order term vanishes after taking the trace, since $G\sub{E,0}$ is proportional to the unit matrix in Dirac space while ${\cal V}=i\mbox{\boldmath{$\gamma$}\unboldmath}\cdot\mbox{\boldmath{$\nabla$}\unboldmath}\mbox{\boldmath{$M$}\unboldmath}$ contains only the traceless Dirac structures $\gamma^i$ and $\gamma^i\gamma_5$; the only divergent term is the second order one. Explicitly, it takes the form \begin{equation} E_0^{(2)}=\frac{1}{4}\int\frac{d^3q}{(2\pi)^3} {\rm Tr}\, \tilde{\cal V}(\mbox{\boldmath{$q$}\unboldmath})\tilde{\cal V}(-\mbox{\boldmath{$q$}\unboldmath}) \int_0^1 dx \int\frac{d^4p}{(2\pi)^4} \frac{1}{\left[p^2+M^2+q^2x(1-x)\right]^2} \;, \end{equation} where $\tilde{\cal V}(\mbox{\boldmath{$q$}\unboldmath})$ is the Fourier transform of the potential. The logarithmically divergent integral can be defined by performing, under the integral, the Pauli-Villars subtraction \begin{equation} \label{E2reg} E_0^{(2)}=\frac{1}{4}\int\frac{d^3q}{(2\pi)^3} {\rm Tr}\, \tilde{\cal V}(\mbox{\boldmath{$q$}\unboldmath})\tilde{\cal V}(-\mbox{\boldmath{$q$}\unboldmath}) \int_0^1 dx \frac{1}{16 \pi^2}\left\{\ln \Lambda^2 - \ln\left[M^2+q^2x(1-x)\right]\right\} \;. \end{equation} Unlike in the case of renormalized perturbation theory, here the regularization is extended to the finite contributions as well, so that the regularized zero point energy reads \begin{equation}\label{regula} E\sub{0,reg} = E\sub{0}(M)-\frac{M^2}{\Lambda^2}E\sub{0}(\Lambda) \;, \end{equation} where the subtraction is understood to be performed under the $\mbox{\boldmath{$x$}\unboldmath}$ and $\nu$ integrals (see below), i.e., before the partial wave summation and the $\nu$ integration. The factor $M^2/\Lambda^2$ takes into account that the potential $\tilde{\cal V}(\mbox{\boldmath{$q$}\unboldmath})$ contains a factor $M$ in the first term, and a factor $\Lambda$ in the subtracted part; these prefactors have to be compensated in order to ensure the cancellation of the divergent integrals.
The perturbative expansion, Eq.\ (\ref{e0pert}), can be used \cite{Baacke:1995hw} to obtain an expression for the derivative of the zero point energy \begin{eqnarray} \nonumber \frac{\delta E_0}{\delta\phi_a(\mbox{\boldmath{$z$}\unboldmath})}&=& \sum_{n=1}^\infty {\frac{(-1)^n}{2} \int \frac{d\nu}{2\pi}} \,{\rm tr}\, \int d^3x_1 \frac{\delta {\cal V}(\mbox{\boldmath{$x$}\unboldmath}_1)}{\delta \phi_a (\mbox{\boldmath{$z$}\unboldmath})} \\ &&\times\int\left( \prod_{i=2} ^{n}{d^3x_i G\sub{E,0}(\mbox{\boldmath{$x$}\unboldmath}_i-\mbox{\boldmath{$x$}\unboldmath}_{i-1},\nu){\cal V}(\mbox{\boldmath{$x$}\unboldmath}_i)} \right) G\sub{E,0}(\mbox{\boldmath{$x$}\unboldmath}_1-\mbox{\boldmath{$x$}\unboldmath}_n,\nu)\;.\nonumber\\ \end{eqnarray} The perturbative sum can be recollected into the full nonperturbative Green function, so that \begin{eqnarray} \label{deriSeff} \frac{\delta E^{\overline{(2)}}_0} {\delta{\phi}_a(\mbox{\boldmath{$z$}\unboldmath})} =\,{\rm tr}\,\sum_{K,P} \int d ^3x\frac{\delta {\cal V}(\mbox{\boldmath{$x$}\unboldmath})}{\delta \phi_a (\mbox{\boldmath{$z$}\unboldmath})} \int_{0}^\infty \frac{d \nu}{2\pi} G\sub{E}^{\overline{(1)}}(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath},\nu)\;. \end{eqnarray} Here and in the following we use superscripts $(j)$ to indicate that the expression is of order $j$ in the potential ${\cal V}$, or its derivative. The symbol $\overline{(j)}$ indicates that the expression is evaluated to all orders in the potential starting with order $j$. So the exact zero point energy, after subtracting the zeroth order, and taking account of the vanishing of the first order, is of order $\overline{(2)}$, as indicated on the l.h.s.\ of the last equation. $G\sub{E}^{\overline{(1)}}$ on the r.h.s.\ includes all orders but the zeroth one. It remains to evaluate the derivative of ${\cal V}({\mbox{\boldmath{$x$}\unboldmath}})$ with respect to $\phi_a$. 
One obtains \begin{eqnarray} \frac{\delta {\cal V} (\mbox{\boldmath{$x$}\unboldmath})}{\delta \phi_a(\mbox{\boldmath{$z$}\unboldmath})} &=& i M\mbox{\boldmath{$\gamma$}\unboldmath}\cdot\mbox{\boldmath{$\nabla$}\unboldmath}_x \delta^3(\mbox{\boldmath{$x$}\unboldmath}-\mbox{\boldmath{$z$}\unboldmath}) \Biggl[i\gamma_5\tau_a \exp (i\gamma_5\,\mbox{\boldmath{$\tau$}\unboldmath}\cdot\mbox{\boldmath{$\phi$}\unboldmath}(\mbox{\boldmath{$x$}\unboldmath})) \\ && +i\gamma_5 \left(\tau_a- \phi_a(\mbox{\boldmath{$x$}\unboldmath})\,\mbox{\boldmath{$\tau$}\unboldmath} \cdot\mbox{\boldmath{$\hat{\phi}$}\unboldmath} (\mbox{\boldmath{$x$}\unboldmath}) \right)\frac {\sin|\mbox{\boldmath{$\phi$}\unboldmath}(\mbox{\boldmath{$x$}\unboldmath})|}{|\mbox{\boldmath{$\phi$}\unboldmath}(\mbox{\boldmath{$x$}\unboldmath})|}\Biggr]\,.\nonumber \end{eqnarray} Inserting the hedgehog ansatz, $\mbox{\boldmath{$\phi$}\unboldmath}(\mbox{\boldmath{$x$}\unboldmath})=\mbox{\boldmath{$\hat x$}\unboldmath} \vartheta(r)$ and using the fact that within the trace the expectation value of $\mbox{\boldmath{$\tau$}\unboldmath}$ must be parallel to $\mbox{\boldmath{$\hat{\phi}$}\unboldmath}$, i.e., $\tau_a \to \hat x_a \mbox{\boldmath{$\tau$}\unboldmath}\cdot \mbox{\boldmath{$\hat x$}\unboldmath}$, the derivative of the zero point energy takes the form \begin{eqnarray} \label{Seff31} \left.\frac{\delta E_0^{\overline{(2)}}}{\delta{\phi}_a(\mbox{\boldmath{$z$}\unboldmath})}\right| _{\mbox{\boldmath{$\hat{\phi}$}\unboldmath}(\mbox{\boldmath{$x$}\unboldmath})=\mbox{\boldmath{$\hat z$}\unboldmath}\vartheta(r)}&=& -M\,{\rm tr}\,\gamma_5 \tau_a \Bigl[\cos(\vartheta(r))-i\gamma_5\mbox{\boldmath{$\tau$}\unboldmath}\cdot {\mbox{\boldmath{$\hat z$}\unboldmath}}\sin(\vartheta(r))\Bigr] \nonumber \\ &&\hspace{10mm}\times\int_{0}^\infty \frac{d \nu}{2\pi}{\mbox{\boldmath{$\gamma$}\unboldmath}} \cdot{\mbox{\boldmath{$\nabla$}\unboldmath}_z}G^{\overline{(1)}}\sub{E}(\mbox{\boldmath{$z$}\unboldmath},\mbox{\boldmath{$z$}\unboldmath},\nu)\,. \end{eqnarray} The gradient acts on the Green function at equal arguments. It is taken after carrying out that limit. The Euclidean Green function can be expanded \cite{Kahana:1984} with respect to K-spin harmonics $\Xi^{K,K_z}_n$ (for details see Appendix A) as \begin{equation} G\sub{E}(\mbox{\boldmath{$z$}\unboldmath},\mbox{\boldmath{$z$}\unboldmath}',\nu)=\sum_{K,K_z,P} g^{K,P}_{mn}(r,r',\nu) \Xi^{K,K_z}_m(\mbox{\boldmath{$\hat z$}\unboldmath}) \otimes\Xi^{K,K_z \dagger}_n(\mbox{\boldmath{$\hat z$}\unboldmath}') \;.\end{equation} The radial Green functions form $ 4\times 4$ matrices; they can be written in terms of mode functions\footnote{ In the following we omit the K-spin and parity superscripts.} $f_n^{\alpha+}(\nu,r)$ and $f_n^{\alpha-}(\nu,r)$ which are solutions regular at $r=0$ and as $r\to \infty$, respectively, of a system of radial differential equations given in Appendix A. Explicitly, they are given by \begin{equation}\label{gtheta} g_{mn}(r,r',\nu)=\kappa\left[ \theta(r-r')f^{\alpha +}_{m} (\nu,r)f^{\alpha -}_{n}(\nu,r') +\theta(r'-r)f^{\alpha -}_{m} (\nu,r)f^{\alpha +}_{n}(\nu,r') \right]\;. \end{equation} The superscript $\alpha$ labels $4$ linearly independent solutions. 
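To illustrate the essence of this construction in the simplest possible setting, the following sketch computes the coincident-point Green function of a single-channel radial problem, $\left[\frac{d^2}{dr^2}+\frac{2}{r}\frac{d}{dr}-\frac{l(l+1)}{r^2}-\kappa^2-V(r)\right]g=-\delta(r-r')/r^2$, from a solution regular at the origin and one decaying at infinity. It is only a toy analogue of the representation used here: the actual computation involves the coupled four-component K-spin channels, and the potential, function names and normalization below are our own illustrative choices.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def radial_green_diag(V, l, kappa, r_grid, r_inf=40.0, eps=1e-4):
    # Coincident-point Green function g_l(r,r) built from a regular and a
    # decaying mode function, for the single-channel toy problem above.
    U = lambda r: l*(l + 1)/r**2 + kappa**2 + V(r)
    rhs = lambda r, y: [y[1], U(r)*y[0] - 2.0*y[1]/r]
    # solution regular at the origin, f ~ r^l
    y_reg = [eps**l, l*eps**(l - 1) if l > 0 else 0.0]
    reg = solve_ivp(rhs, (eps, r_inf), y_reg, dense_output=True,
                    rtol=1e-10, atol=1e-12)
    # solution decaying at infinity, f ~ exp(-kappa r)/r
    f_inf = np.exp(-kappa*r_inf)/r_inf
    y_dec = [f_inf, -(kappa + 1.0/r_inf)*f_inf]
    dec = solve_ivp(rhs, (r_inf, eps), y_dec, dense_output=True,
                    rtol=1e-10, atol=1e-12)
    g = []
    for r in r_grid:
        fm, dfm = reg.sol(r)     # regular solution and its derivative
        fp, dfp = dec.sol(r)     # decaying solution and its derivative
        W = fm*dfp - dfm*fp      # Wronskian; r^2 W is r-independent
        g.append(-fm*fp/(r**2*W))
    return np.array(g)

if __name__ == "__main__":
    r = np.linspace(0.2, 5.0, 25)
    V = lambda x: -2.0*np.exp(-x**2)   # illustrative potential, our choice
    dg = radial_green_diag(V, 0, 1.0, r) - radial_green_diag(lambda x: 0.0*x, 0, 1.0, r)
    print(dg)                          # subtracted Green function on the grid
\end{verbatim}

In the free case ($V=0$) this construction reproduces $g_{0}(r,r)\propto i_{l}(\kappa r)k_{l}(\kappa r)$, and the printed difference is the single-channel analogue of the subtracted integrand entering Eq.\ (\ref{BaaE}) below.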
In this basis, and using the reduced Green functions, the zero point energy takes the form {\cite{Baacke:1992nh}} \begin{equation} \label{BaaE} E_0^{\overline{(2)}}=N\sub{c}\int_0^\infty \frac{d\nu}{\pi}\nu^2\int d r\,r^2\sum_{K,P}(2K+1) \left[g_{11}^{\overline{(2)}}+ g_{22}^{\overline{(2)}}+ g_{33}^{\overline{(2)}}+g_{44}^{\overline{(2)}}\right]\;, \end{equation} where the Green functions are taken at $r=r'$. Analogously, the functional derivative of the energy is obtained as \begin{eqnarray} \label{dedphi} &&\frac{\delta E_0^{\overline{(2)}}}{\delta{\phi}_a(\mbox{\boldmath{$z$}\unboldmath})} \Biggr|_{\mbox{\boldmath{$\phi$}\unboldmath}(\mbox{\boldmath{$z$}\unboldmath})={\mbox{\boldmath{$\hat z$}\unboldmath}}\vartheta(r)}=\nonumber M\frac{N\sub{c}}{4\pi^2}{\hat z}_a\sum_{K,P}(-1)^K P \\&&\hspace{1.5cm} \times\int_0^\infty d \nu \Biggl\{\sin(\vartheta(r))\Biggl[(2K+1) \biggl(-g_{12}^{\overline{(1)}'}+g_{34}^{\overline{(1)}'}+\frac{2}{r} \left\{-g_{12}^{\overline{(1)}} +g_{34}^{\overline{(1)}}\right\}\biggr)\Biggr]\nonumber\\&&\hspace{1.5cm} -\cos(\vartheta(r))\Biggl[\frac{1}{2} \biggl(-g_{11}^{\overline{(1)}'}+g_{22}^{\overline{(1)}'}- g_{33}^{\overline{(1)}'} +g_{44}^{\overline{(1)}'}\biggr)\nonumber\\&&\hspace{2cm}+ \frac{1}{r}\biggl((K-1)g_{11}^{\overline{(1)}}+(K+1)g_{22}^{\overline{(1)}}+ K g_{33}^{\overline{(1)}} +(K+2)g_{44}^{\overline{(1)}}\biggr)\nonumber\\&&\hspace{2cm}+ 2\sqrt{K(K+1)}\biggl(-g_{14}^{\overline{(1)}'}- g_{23}^{\overline{(1)}'}-\frac{1}{r} \left\{3 g_{14}^{\overline{(1)}} +g_{23}^{\overline{(1)}}\right\}\biggr)\Biggr]\Biggr\}\;. \end{eqnarray} Both expressions have to be regulated as implied by Eq.\ (\ref{regula}). The NJL-soliton is a system with baryon number equal to one. Therefore, one has to add the bound state part of the fermionic energy \begin{equation} E_0\top{comp}=N\sub{c} E\top{bou}+E_0^{\overline{(2)}}\,. \end{equation} The eigenvalue equation for the bound state reads \begin{equation} \label{EWgl} \Bigl[-\Delta+{\cal V}^{0^+}(\mbox{\boldmath{$x$}\unboldmath})\Bigr]\psi_0=\omega_0^2\psi_0 \;. \end{equation} For $K^P=0^+$ the spinor $\psi_0$ is determined by two radial wave functions $h_0(r)$ and $j_0(r)$, corresponding to the components $u_3$ and $u_4$ for $K^P=0^+$ and for the bound state energy $E_0$; the potential ${\cal V}^{0+}$ is a $2\times 2$ matrix given in Appendix A. The eigenfunctions are normalized as $\int|\psi_0(\mbox{\boldmath{$x$}\unboldmath})|^2 d^3x=1$. Differentiating the bound state equation with respect to $\phi_a(\mbox{\boldmath{$z$}\unboldmath})$ and projecting with $\psi_0^\dagger$ one finds \begin{eqnarray} \frac{\delta}{\delta\phi_a (\mbox{\boldmath{$z$}\unboldmath})} \omega_0&=& \frac{1}{2\omega_0}\int d ^3x \psi_0^\dagger(\mbox{\boldmath{$x$}\unboldmath})\frac{\delta{\cal V}^{0^+}(\mbox{\boldmath{$x$}\unboldmath})}{\delta\phi_a(\mbox{\boldmath{$z$}\unboldmath})} \psi_0(\mbox{\boldmath{$x$}\unboldmath})\,. 
\end{eqnarray} Inserting the explicit expressions for the potential and the eigenfunctions, the derivative of the bound state energy takes the form \begin{eqnarray} \label{dedbou} &&\frac{\delta E\top{bou}}{\delta \phi_a(\mbox{\boldmath{$z$}\unboldmath})} \Biggr|_{\mbox{\boldmath{$\phi$}\unboldmath}(\mbox{\boldmath{$z$}\unboldmath})={\mbox{\boldmath{$\hat z$}\unboldmath}}\vartheta(r)} =\nonumber\\ &&\hspace{.8cm}M\frac{N\sub{c}}{4\pi\omega_0}{\hat z}_a\Biggl\{\sin(\vartheta(r)) \left[h_0'(r) j_0(r)+j_0'(r) h_0(r)+ \frac{2}{r}h_0(r)j_0(r)\right] \nonumber\\&&\hspace{1.8cm} -\cos(\vartheta(r)) \left[-h_0'(r) h_0(r)+j_0'(r)j_0(r)+\frac{2}{r}j_0^2(r)\right] \Biggl\}\;. \end{eqnarray} The bound state energy is convergent and thus does not need to be regularized; with a finite regulator, however, this choice is subject to some arbitrariness. The same argument would hold for any finite subset of sea quark states, or for the finite parts of higher order terms of the perturbative expansion. Finally, the mesonic part of the energy and its derivative have to be evaluated. This is straightforward, as they are given by simple analytical expressions. On the chiral circle one finds \begin{equation} E\top{br}=-4\pi m\sub{\pi}^2f\sub{\pi}^2\int d r\,r^2 \Bigl[\cos(\vartheta(r))-1\Bigr]\, \end{equation} and \begin{eqnarray} \label{dedmes} \frac{\delta E\top{br}}{\delta{\phi}_a(\mbox{\boldmath{$z$}\unboldmath})} \Biggr|_{\mbox{\boldmath{$\phi$}\unboldmath}(\mbox{\boldmath{$z$}\unboldmath})={\mbox{\boldmath{$\hat z$}\unboldmath}}\vartheta(r)}&=& m\sub\pi^2f\sub{\pi}^2{\hat z}_a\sin(\vartheta(r))\,. \end{eqnarray} \section{Perturbative expansion of the Green function} \setcounter{equation}{0} After reduction of Eq.\ (\ref{DGL2}) to K-spin partial waves (see Appendix A) the differential equation for the partial wave Green functions $g_{mn}(r,r',\nu)$ becomes \begin{eqnarray} \label{GGleich} &&\left[\delta_{nk}\left(\frac{d ^2}{d r^2}+\frac{2}{r} \frac{d }{d r}-\frac{K_n(K_n+1)}{r^2} -\kappa^2\right)-{\cal{V}}_{nk}(r)\right] g_{km} ( r,r',\nu)\nonumber\\&&= -\delta_{nm}\delta(r-r')/r^2 \,, \end{eqnarray} where $\kappa=\sqrt{\nu^2+M^2}$ and where, again, we suppress the partial wave indices $K$ and $P$. The potential ${\cal V}_{mn}$ depends on the K-spin; its explicit form is given in Appendix A. As already mentioned, we use for the numerical computation of the Green functions their standard expression (\ref{gtheta}) in terms of mode functions $f_n^{\alpha\pm}(\nu,r)$. The functions $f_n^{\alpha\pm}$ form $4\times 2$ linearly independent systems (index $\alpha\pm$) of $4$-component solutions (subscript $n$). A form independent of the choice of basis is given in \cite{Baacke:1992nh,Baacke:1995hw}. Here we use a special convenient basis; it is defined by splitting off the free solutions, i.e.\ the modified Bessel functions $b^{+}_{K_n}(\kappa r)\equiv k_{K_n}(\kappa r)$ and $b^{-}_{K_n}(\kappa r)\equiv i_{K_n}(\kappa r)$ via \footnote{ As for the mode functions $f_n^{\alpha\pm}$ we omit the $K$-spin and parity assignment for the truncated mode functions $h_n^{\alpha\pm}$.} \begin{equation} \label{fpm} f^{\alpha \pm}_n(\nu,r)=\left[\delta^{\alpha \pm}_n+ h^{\alpha\pm}_n(\nu,r)\right] b^{\pm}_{K_n}(\kappa r) \end{equation} and by imposing the boundary condition \begin{equation} \lim_{r\rightarrow \infty}h^{\alpha\pm}_n(\nu,r)=0\;.
\end{equation} The truncated mode-functions are obtained by solving the equations \begin{equation} \label{DGLh} \left[ \frac{d^2}{d r^2}+2 \left(\frac{1}{r}+\kappa\frac{b'^{\pm}_{K_n}(\kappa r)} {b^{\pm}_{K_n}(\kappa r)} \right) \frac{d}{d r} \right] h_n^{\alpha\pm}(\nu,r)= {\cal{V}}^K_{nm}(r) \left[ \delta^\alpha_m+h_m^{\alpha\pm}(\nu,r)\right] \frac{b^{\pm}_{K_m}(\kappa r)}{b^{\pm}_{K_n}(\kappa r)}\,, \end{equation} or, in a short form, \begin{equation} D h={\cal{V}}\left(1+h\right)\;. \end{equation} This equation can be used for a perturbative expansion. Obviously, the functions $h_n^{\alpha\pm}(\nu,r)$ vanish to zeroth order in ${\cal V}$, so, in the notation introduced in the previous section, they are of order $\overline{(1)}$. Once these solutions are known, the differential equation may be iterated to obtain the contribution of order $\overline{(2)}$ via \begin{eqnarray} D h^{\overline{(1)}}&=&{\cal{V}}\left(1+h^{\overline{(1)}}\right)\,,\\ \label{recur} D h^{\overline{(2)}}&=&{\cal{V}}h^{\overline{(1)}}\,. \end{eqnarray} In terms of these functions the expression (\ref{gtheta}) becomes, for $r>r'$, \begin{eqnarray} g_{nm}^{\overline{(1)}}(r,r',\nu)&=&\frac{\kappa}{2} \left[h_{n}^{\overline{(1)}m-}(\nu,r')+ h_{m}^{\overline{(1)}n+}(\nu,r)+ h_{n}^{\overline{(1)}\alpha-}(\nu,r') h_{m}^{\overline{(1)}\alpha+}(\nu,r) \right] \nonumber \\ &&\times k_{K_{m}}(\kappa r)i_{K_n}(\kappa r')\;, \\ g_{nm}^{\overline{(2)}}(r,r',\nu)&=&\frac{\kappa}{2} \left[h_{n}^{\overline{(2)}m-}(\nu,r')+ h_{m}^{\overline{(2)}n+}(\nu,r)+ h_{n}^{\overline{(1)}\alpha-}(\nu,r') h_{m}^{\overline{(1)}\alpha+}(\nu,r) \right] \nonumber \\&& \times k_{K_{m}}(\kappa r)i_{K_n}(\kappa r') \;. \end{eqnarray} These expressions are ready for being inserted into Eqs.~(\ref{BaaE}) and (\ref{dedphi}). In renormalized perturbation theory one would, using further iterations, reduce these expressions to order $\overline{(3)}$ and evaluate the second order analytically as in Eq.~(\ref{E2reg}). \section{Quantization of collective coordinates} \setcounter{equation}{0} The hedgehog system is invariant under K-spin, i.e., under combined space and isospin rotations. For a rotating hedgehog state the Hamiltonian is modified as \cite{Diakonov:1988} \begin{eqnarray} H_\Omega(\mbox{\boldmath{$\Omega$}\unboldmath})= H(\mbox{\boldmath{$x$}\unboldmath})-\frac{1}{2}\mbox{\boldmath{$\Omega$}\unboldmath}\cdot\mbox{\boldmath{$\tau$}\unboldmath}= -i\balpha\cdot\mbox{\boldmath{$\nabla$}\unboldmath}+\gamma_0 \mbox{\boldmath{$M$}\unboldmath}(\mbox{\boldmath{$x$}\unboldmath})-\frac{1}{2}\mbox{\boldmath{$\Omega$}\unboldmath}\cdot\mbox{\boldmath{$\tau$}\unboldmath}\, , \end{eqnarray} where $\mbox{\boldmath{$\Omega$}\unboldmath}$ is the angular velocity. Assuming $\mbox{\boldmath{$\Omega$}\unboldmath}$ to be small, the problem can be treated perturbatively. The first order in $\mbox{\boldmath{$\Omega$}\unboldmath}$ vanishes, in second order one obtains \begin{equation} S\sub{eff}=-\tau\left[{E_0(\mbox{\boldmath{$\Omega$}\unboldmath})+\frac 1 2 \Omega_a \theta_{ab} \Omega_b}\right]\,, \end{equation} where $\theta_{ab}$ is the moment of inertia \begin{equation} \left. \theta_{ab}=\frac{\delta^2 E_0(\mbox{\boldmath{$\Omega$}\unboldmath})} {\delta \Omega_a \delta \Omega_b}\right|_{\Omega=0}\,. \end{equation} $\theta_{ab}$ is proportional to the unit matrix, $\theta_{ab}= \theta \delta_{ab}$. The collective coordinate can be quantized in the usual way, leading to an extra term $J(J+1)/2\theta$ in the energy. 
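As a standard illustration of this quantization, identifying the lowest $J=T=1/2$ and $J=T=3/2$ states with the nucleon and the $\Delta$, respectively, the rotational energy $J(J+1)/2\theta$ yields \begin{equation} M_\Delta-M_N=\frac{1}{2\theta}\left[\frac{15}{4}-\frac{3}{4}\right]=\frac{3}{2\theta}\,, \end{equation} so that the moment of inertia computed below directly determines the $N$-$\Delta$ splitting.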
Taking the second derivative with respect to $\Omega$ of the fermion Green function \begin{equation} S\sub{E}^\Omega(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath}',\nu)=-\langle x|\frac{1}{i\nu -H_\Omega(\mbox{\boldmath{$\Omega$}\unboldmath})} |x' \rangle \end{equation} one obtains \begin{eqnarray} \frac{\delta^2 S^\Omega\sub{E}(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath}',\nu)}{\delta \Omega_a\delta \Omega_b} &=&-\langle x|\frac{1}{i\nu -H_\Omega(\mbox{\boldmath{$\Omega$}\unboldmath})}\frac{\tau_a}{2}\frac{1} {i\nu -H_\Omega(\mbox{\boldmath{$\Omega$}\unboldmath})} \frac{\tau_b}{2}\frac{1}{i\nu -H_\Omega(\mbox{\boldmath{$\Omega$}\unboldmath})}|x' \rangle \nonumber\\ &&-\langle x|\frac{1}{i\nu -H_\Omega(\mbox{\boldmath{$\Omega$}\unboldmath})}\frac{\tau_b}{2}\frac{1} {i\nu -H_\Omega(\mbox{\boldmath{$\Omega$}\unboldmath})} \frac{\tau_a}{2}\frac{1}{i\nu -H_\Omega(\mbox{\boldmath{$\Omega$}\unboldmath})}|x' \rangle \nonumber\\ \\ &=&-\frac{i}{4}\frac{\partial}{\partial \nu}\langle x| \tau_a\frac{1} {i\nu -H_\Omega(\mbox{\boldmath{$\Omega$}\unboldmath})} \tau_b\frac{1}{i\nu -H_\Omega(\mbox{\boldmath{$\Omega$}\unboldmath})}|x' \rangle \,. \end{eqnarray} Inserting this equation in Eq.\ (\ref{zeropointbos}) the tensor can be calculated via \begin{equation} \label{tm} \left.\frac{\delta^2 E_0}{\delta \Omega_a\delta \Omega_b}\right|_{\bOmegaA=0} =N\sub{c}\int_{-\infty}^\infty \frac{d \nu}{8\pi}\int d ^3 x \int d^3 x' {\rm {tr}} \left[\tau_a S\sub{E}^0(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath}',\nu)\tau_b S\sub{E}^0(\mbox{\boldmath{$x$}\unboldmath}',\mbox{\boldmath{$x$}\unboldmath},\nu) \right]\, ,\nonumber\\ \end{equation} where $S\sub{E}^0(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath}',\nu)$ is defined by Eq.\ (\ref{DGL1}). This expression can be rewritten with the "bosonic" Green function (\ref{DGL2}) \begin{eqnarray} \theta_{ab}&=&N\sub{c}\int_{-\infty}^\infty \frac{d\nu}{8\pi} \int d ^3 x \int d^3 x'\\ && \hspace{1cm}\times{\rm {tr}} \Bigl[\tau_a \left\{i\nu+H(\mbox{\boldmath{$x$}\unboldmath})\right\}G\sub{E}(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath}',\nu)\tau_b \left\{i\nu+H(\mbox{\boldmath{$x$}\unboldmath}')\right\}G\sub{E}(\mbox{\boldmath{$x$}\unboldmath}',\mbox{\boldmath{$x$}\unboldmath},\nu) \Bigr]\,.\nonumber \end{eqnarray} The bound state is occupied and, therefore, included into the negative continuum by choosing the dashed integration contour displayed in Fig. 1. If the Green function is expanded into eigenfunctions of the Hamiltonian as in Eq.~(\ref{Greensum}), it can be readily verified that this way one obtains transitions between the positive and negative continuum states as well as transitions between the bound state and the positive continuum as discussed in \cite{Goeke:1991fk}. Here this detailed structure is not explicit. As for the energy, we will consider the bound state contribution separately. We decompose the dashed contour into a contour running along the real $\nu$ axis and a small circle around the bound state pole. We will denote the former contribution which describes the continuum-continuum-transitions by a superscript $c-c$, and the bound state contributions by the superscript $b-c$ as it involves matrix elements between the bound state and the continuum. We begin with considering the continuum contributions. 
The partial wave reduction of the Green function $G\sub E (\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath}',\nu)$ has been introduced above and is discussed in Appendix A. Here we need the partial wave reduction for the fermion propagator $S\sub E=(i\nu +H)G\sub E$; we denote the associated matrix elements by $s^{K}_{ m n}(r,r',\nu)$. This Green function can again be decomposed into mode functions as in Eq.~\ref{gtheta}, one just has to replace the functions $f_n(\nu,r)$ by the fermionic mode functions $u_{n}(\nu,r)$. These are given explicitly in Appendix A. Furthermore, we have to take the trace with isospin matrices $\tau_a$. As $\theta_{ab} \propto \delta_{ab}$ it is sufficient to compute $\theta_{33}$. The action of $\tau_3$ on the K-spin harmonics is given in Appendix B. Using these expressions, the angular integration over $d\Omega_{r}$ and $d\Omega_{r'}$ and the summation over the third component of $K$ and $K'$ can be performed. $K'$ is fixed by the angular integration to the values $K$, $K-1$ or $ K+1$. Finally, one obtains \begin{eqnarray}\nonumber &&\int d\Omega_{r}\int d\Omega_{r'} {\rm {tr}} \Bigl[\tau_3 \left\{i\nu+H(\mbox{\boldmath{$x$}\unboldmath})\right\}G\sub{E}(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath}',\nu) \tau_3\left\{i\nu+H(\mbox{\boldmath{$x$}\unboldmath}')\right\}G\sub{E}(\mbox{\boldmath{$x$}\unboldmath}',\mbox{\boldmath{$x$}\unboldmath},\nu) \Bigr]= \\\nonumber&& \hspace{0cm}\sum_{P,K=1}^\infty \Biggl[\frac{(K+1)(2K+1)}{3K}\biggl\{ { s^{K}_{ 11}}({r}, { r'},\nu ){ s^{K}_{ 11}}({ r'}, {r},\nu) + { s^{K}_{ 22}}({r}, { r'},\nu){ s^{K}_{22}}({ r'}, {r},\nu) \\\nonumber&& \hspace{4.8cm}+{s^{K}_{ 12}}({r}, { r'},\nu){ s^{K}_{ 21}}({ r'}, {r},\nu) + { s^{K}_{ 21}}({r}, { r'},\nu){ s^{K}_{ 12}}({ r'}, {r},\nu) \biggr\}\\\nonumber&& \hspace{1.8cm}+\frac{(2K+1)K}{3(K+1)}\biggl\{ { s^{K}_{ 33}}({r}, { r'},\nu){ s^{K}_{ 33}}({ r'}, {r},\nu) + { s^{K}_{ 44}}({r}, { r'},\nu){ s^{K}_{ 44}}({ r'}, {r},\nu) \\\nonumber&& \hspace{4.8cm}+ {s^{K}_{34}}({r}, { r'},\nu){ s^{K}_{ 43}}({ r'},{r},\nu) + { s^{K}_{ 43}}({r}, { r'},\nu){ s^{K}_{ 34}}({ r'}, {r},\nu) \biggr\} \\&&\hspace{1.8cm}\nonumber -\frac{(2K+1)}{3}\biggl\{ { s^{K}_{ 14}}({r}, { r'},\nu){ s^{K}_{ 41}}({ r'}, {r},\nu) + { s^{K}_{ 41}}({r}, { r'},\nu){ s^{K}_{ 14}}({ r'}, {r},\nu) \\\nonumber&& \hspace{4.8cm}+ { s^{K}_{ 13}}({r}, { r'},\nu){ s^{K}_{31}}({r'},{r},\nu) + { s^{K}_{31}}({r}, { r'},\nu){ s^{K}_{13}}({ r'}, {r},\nu) \\\nonumber&& \hspace{4.8cm}+ { s^{K}_{32}}({r}, { r'},\nu){ s^{K}_{ 23}}({r'},{r},\nu) + { s^{K}_{23}}({r}, { r'},\nu){ s^{K}_{ 32}}({ r'}, {r},\nu) \\\nonumber&& \hspace{4.8cm}+ { s^{K}_{ 24}}({r}, { r'},\nu){ s^{K}_{ 42}}({r'},{r},\nu) + { s^{K}_{ 42}}({r}, { r'},\nu){ s^{K}_{24}}({ r'}, {r},\nu) \biggr\} \\\nonumber&&\hspace{1.8cm} +\frac{4K^2-1}{3K}\biggl\{ { s^{K}_{ 22}}({r}, { r'},\nu){ s^{K-1}_{ 44}}({ r'}, {r},\nu) + { s^{K}_{ 12}}({r},{ r'},\nu){ s^{K-1}_{ 43}}({ r'}, {r},\nu) \\\nonumber&& \hspace{3.8cm}+ {s^{K}_{21}}({r}, {r'},\nu){ s^{K-1}_{34}}({r'},{r},\nu) + { s^{K}_{ 11}}({r}, { r'},\nu){ s^{K-1}_{33}}({ r'}, {r},\nu) \\\nonumber&& \hspace{3.8cm}+{s^{K-1}_{33}}({r},{ r'},\nu){ s^{K}_{11}}({r'},{r},\nu) + { s^{K-1}_{ 44}}({r},{r'},\nu){ s^{K}_{22}}({ r'}, {r},\nu) \\\nonumber&& \hspace{3.8cm}+ {s^{K-1}_{34}}({r},{ r'},\nu){ s^{K}_{ 21}}({r'},{r},\nu) + { s^{K-1}_{ 43}}({r},{r'},\nu){ s^{K}_{12}}({ r'}, {r},\nu)\biggr\} \Biggr]\\ \,. 
\end{eqnarray} As we need this expression in order $\overline{(2)}$, the modified Green functions have to be inserted in order $\overline{(1)}$. One can take advantage of the factorization of the Green functions into mode functions, see~\ref{gthetatilde}, to rewrite this expression in the form \begin{eqnarray}\label{thetaH} && \theta\top{c-c}=\left.\frac{\delta^2 E_0} {\delta \Omega_a\delta \Omega_b}\right|_{\bOmegaA=0} =N\sub{c}\int_{-\infty}^\infty \frac{d \nu}{4\pi} \sum_{P,K=1}^\infty \int_0^\infty d r r^2\int_0^r d r' r'^2 \nonumber\\&&\hspace{1cm}\Biggl[ H^{+\alpha\beta}_{1K}(\nu,r)\left\{\frac{(K+1)(2K+1)}{3K} H^{-\beta\alpha}_{1K}(\nu,r')- \frac{2K+1}{3}H^{-\beta\alpha}_{2K}(\nu,r')\right\}\nonumber\\&&\hspace{1.2cm}+ H^{+\alpha\beta}_{2K}(\nu,r)\left\{\frac{K(2K+1)}{3(K+1)} H^{-\beta\alpha}_{2K}(\nu,r')- \frac{2K+1}{3}H^{-\beta\alpha}_{1K}(\nu,r')\right\}\nonumber\\&&\hspace{1.2cm}+ H^{+\alpha\beta}_{3K}(\nu,r)\left\{\frac{4K^2-1}{3K} H^{-\beta\alpha}_{4K}(\nu,r')\right\}\nonumber\\&&\hspace{1.2cm} +H^{+\alpha\beta}_{4K}(\nu,r)\left\{\frac{4K^2-1}{3K} H^{-\beta\alpha}_{3K}(\nu,r')\right\}\Biggr]\,. \end{eqnarray} The functions $ H^{\pm\alpha\beta}_{iK}(\nu,r)$ are defined as \begin{eqnarray} H^{\pm\alpha\beta}_{1K}(\nu,r)&=&\kappa\left[u_{ 1,K}^{\alpha\pm }(\nu,r) f_{1,K}^{\beta\pm}(\nu,r)+ u_{ 2,K}^{\alpha\pm}(\nu,r) f_{2,K}^{\beta\pm}(\nu,r)\right]\,,\\ H^{\pm\alpha\beta}_{2K}(\nu,r)&=&\kappa\left[u_{ 3,K}^{\alpha\pm}(\nu,r) f_{3,K}^{\beta\pm}(\nu,r)+u_{ 4,K}^{\alpha\pm}(\nu,r) f_{4,K}^{\beta\pm}(\nu,r)\right]\,,\\ H^{\pm\alpha\beta}_{3K}(\nu,r)&=&\kappa\left[u_{ 1,K}^{\alpha\pm}(\nu,r) f_{3,K-1}^{\beta\pm}(\nu,r)+u_{ 2,K}^{\alpha\pm}(\nu,r) f_{4,K-1}^{\beta\pm}(\nu,r)\right]\,,\\ H^{\pm\alpha\beta}_{4K}(\nu,r)&=&\kappa\left[u_{ 3,K-1}^{\alpha\pm}(\nu,r) f_{1,K}^{\beta\pm}(\nu,r)+u_{4,K-1}^{\alpha\pm}(\nu,r) f_{2,K}^{\beta\pm}(\nu,r)\right]\,, \end{eqnarray} in terms of the mode functions $f_n^{\alpha\pm}$ and $u_{\tilde n}^{\alpha\pm}$ defined in Appendix A. One has to combine the orders of $H^{\pm\alpha\beta}_{i}$ in such a way that the result is of total order $\overline{(2)}$: \begin{eqnarray} \theta_{ab}\top{c-c\overline{(1)}}&\sim&H^{+{\overline{(1)}}} H^{-{\overline{(1)}}}+H^{+{(0)}}H^{-{\overline{(1)}}}+ H^{+{\overline{(1)}}}H^{-{(0)}}\,. \end{eqnarray} In fact the first order part which has been included on the right hand side for practical convenience vanishes. The functions $H^{\pm\overline{(1)}}$ are obtained as \begin{eqnarray} H^{\pm{\overline{(1)}}}&\sim&f^{\pm{\overline{(1)}}}f^{\pm{\overline{(1)}}}+ f^{\pm{(0)}}f^{\pm{\overline{(1)}}}+ f^{\pm{\overline{(1)}}}f^{\pm{(0)}}\,; \end{eqnarray} the functions $H^{\pm (0)}$ are composed of free Bessel functions. Since the imaginary part of the integral of Eq.\ (\ref{thetaH}) is antisymmetric, the result equals twice the real part, integrated from $\nu = 0$ up to $\infty$. It is implied that the expressions have to be regularized by Pauli-Villars subtractions. Having presented the continuum-continuum contributions to the moment of inertia we now turn to the bound state contribution. This contribution is given, in terms of eigenfunctions of the Dirac operator by \cite{Goeke:1991fk,Wakamatsu:1991} \begin{eqnarray} \left.\frac{\partial^2\omega_0}{\partial \Omega_a\partial \Omega_a} \right|_{\bOmegaA=0} &=&\frac{N\sub{c}}{2}\sum_{m\ne {\rm bou}}\frac{\langle\psi_0|\tau_a| \psi_m\rangle\langle\psi_m|\tau_b|\psi_0\rangle} {E^m-E\top{bou}}\,. 
\end{eqnarray} Using the (\ref{Greensum}) for the Green function we find that this expression for the moment of inertia is identical to \begin{eqnarray} \theta\top{b-c}_{ab}=\left.\frac{\partial^2\omega_0} {\partial \Omega_a\partial \Omega_a}\right|_{\bOmegaA=0} &=&\frac{N\sub{c}}{2}{\rm {tr}} \int d^3x d^3x'\psi_0(\mbox{\boldmath{$x$}\unboldmath})\psi^\dagger_0(\mbox{\boldmath{$x$}\unboldmath}') \tau_a S(\mbox{\boldmath{$x$}\unboldmath}',\mbox{\boldmath{$x$}\unboldmath},-iE\top{bou})\tau_b\,.\nonumber\\ \end{eqnarray} The Euclidean Green function at any imaginary argument can again be related to the bosonic Green function via \begin{equation} S\sub{E}(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath}',-iE\top{bou})= (H+E\top{bou})G\sub{E}(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath}',iE\top{bou})\,. \end{equation} Since the energy of the bound state is smaller than the mass $M$, the calculation of the Green function is analogous to the one for the continuum part, Eq. (\ref{thetaH}), with \begin{equation} \kappa^2=M^2-E\sub{bou}^2\,. \end{equation} Furthermore, the valence state $0^+$ only couples to $K^P=1^+$ continuum states. Therefore, the expression for this contribution reduces to \begin{eqnarray} \theta\top{b-c}_{ab}&=&\frac{N\sub{c}} {2\kappa} \int_0^\infty d r r^2\Biggl[ {\cal{H}}^{+\alpha}_{3}(\nu,r)\int_0^r d r' r'^2{\cal{H}}^{-\alpha}_{4}(\nu,r') \nonumber\\&&\hspace{1cm}+ {\cal{H}}^{+\alpha}_{4}(\nu,r)\int_0^r d r' r'^2{\cal{H}}^{-\alpha}_{3}(\nu,r')\Biggr] \end{eqnarray} with \begin{eqnarray} {\cal{H}}^{\pm\alpha}_{3}(\nu,r)&=&\frac{\kappa}{2E\sub{bou}} \left[u_{1,1}^{\alpha\pm}(\nu,r)h_0(r) +u_{2,1}^{\alpha\pm}(\nu,r)j_0(r)\right]\,,\\ {\cal{H}}^{\pm\alpha}_{4}(\nu,r) &=&\kappa\left[f_{1,1}^{\alpha\pm}(\nu,r)h_0(r)+ f_{2,1}^{\alpha\pm}(\nu,r)j_0(r)\right]\,. \end{eqnarray} In the first of these equations we have used the Dirac equation to relate the bosonized wave functions $f_{i,0}$ of the bound state to the fermionic components $h_0$ and $j_0$. As for the bound state contribution to the energy, this part of the moment of inertia is finite and will not be Pauli-Villars subtracted. Adding the $c-c$ and $b-c$ contributions the moment of inertia is given by \begin{equation} \theta=\theta\top{b-c}+\theta\top{c-c}(M)- \frac{M^2}{\Lambda^2}\theta\top{c-c}(\Lambda)\,. \end{equation} Proceeding in an analogous way one can obtain expressions for the expectation values of other observables as well. Expressions for $\langle\Sigma_3\rangle$ and $\langle L_3\rangle$ in terms of mode sums have been derived in \cite{Wakamatsu:1991}. The spin expectation value is given by \begin{equation} \langle\Sigma_3\rangle= -\frac{1}{\theta}\frac{N\sub{c}}{2} \sum_{m,n} \frac{\langle \psi_n|\tau_3|\psi_m \rangle \langle \psi_m|\Sigma_3|\psi_n \rangle}{E_m-E_n} \;. \end{equation} It can be rewritten in terms of the Euclidean Green function as \begin{equation} \langle \Sigma_3\rangle =-\frac{N\sub{c}}{\theta}\int_{-\infty}^\infty \frac{d \nu}{8\pi}\int d ^3 x \int d^3 x' {\rm {tr}} \left[\tau_3 S\sub{E}^0(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath}',\nu)\sigma_3 S\sub{E}^0(\mbox{\boldmath{$x$}\unboldmath}',\mbox{\boldmath{$x$}\unboldmath},\nu) \right] \end{equation} where the contour again includes the bound state. The bound state and continuum contributions are obtained separately, as above. \section{Observables} \setcounter{equation}{0} The expressions for the energy and for the moment of inertia have already been presented. 
The mass of the low-lying baryons with K-spin 0 is given by the sum of static and rotational energy as \begin{equation} M_{J}=E_0+\frac{J(J+1)}{2\theta}\,, \end{equation} therefore \begin{equation} \label{massnuk} M\sub{N}=E_0+\frac{3}{8\theta}\end{equation} and \begin{equation} \label{massdiff} M_\Delta-M\sub{N}=\frac{15}{8\theta}-\frac{3}{8\theta}=\frac{3}{2\theta}\,. \end{equation} The nucleon sigma term is defined as \begin{equation} \Sigma=m_0\int d ^3 x \langle\overline{q}q\rangle\,. \end{equation} As shown in \cite{Meissner:1990} it is given simply by the symmetry breaking part of the energy as \begin{equation} \Sigma=E\top{br}\,. \end{equation} The experimental value is $45\pm9$ MeV \cite{Gasser:1991ce}. The pion-nucleon coupling constant can be obtained \cite{Adkins:1983} from the long range behavior of the meson profile \begin{equation} C=f_\pi \lim_{r\rightarrow \infty} r^2 \sin(\vartheta)\frac{\exp(m_\pi r)} {1+m_\pi r} \end{equation} as \begin{equation} \label{gacont} g\sub{\pi NN}=\frac{8}{3}\pi M\sub{N} C\,. \end{equation} The axial vector coupling constant is given \cite{Wakamatsu:1990ag,Meissner:1991} by the expectation value \begin{equation} g\sub{A}=\langle p\uparrow|\gamma_0\tau_3\gamma_3\gamma_5|p\uparrow\rangle \;.\end{equation} It consists of a valence state contribution \begin{eqnarray} \label{ga_val} g\sub{A}\top{bou}&=&-\frac{N\sub{c}}{3}\,{\rm tr}\,\int d ^3 x \left(\gamma_0\gamma_3\gamma_5\tau_3\right)\psi_0(\mbox{\boldmath{$x$}\unboldmath}) \psi^\dagger_0(\mbox{\boldmath{$x$}\unboldmath}) \nonumber \\&=&\frac{N\sub{c}}{3}\int d r r^2\left[h^2(r)-\frac{1}{3}j^2(r)\right] \end{eqnarray} and a continuum part which, using the Euclidean Green function, can be written as \begin{eqnarray} \label{ga_sea} g\sub{A}\top{con}&=&-\frac{N\sub{c}}{3}\,{\rm tr}\,\int d ^3 x\int_{-\infty}^\infty \frac{d \nu}{2\pi} \left(\gamma_0\gamma_3\gamma_5\tau_3\right)S\sub{E}(\mbox{\boldmath{$x$}\unboldmath},\mbox{\boldmath{$x$}\unboldmath},\nu) \nonumber \\&=& -\frac{2}{9}N\sub{c}\int_0^\infty \frac{d \nu}{2\pi}\int_0^\infty d r r^2 \sum_{K^P} \left[ \left(2K+1\right)s_{11}(r,r,\nu) - \left(2K-1\right)s_{22}(r,r,\nu)\nonumber\right. \\&&\left.\hspace{4.2cm} - \left(2K+3\right)s_{33}(r,r,\nu) + \left(2K+1\right)s_{44}(r,r,\nu)\nonumber\right. \\&&\left.\hspace{4.2cm} +4\sqrt{K\left(K+1\right)} \left\{s_{23}(r,r,\nu)+s_{32}(r,r,\nu) \right\} \right]\,.\nonumber\\ \end{eqnarray} Pauli-Villars subtraction is implied. With the Goldberger-Treiman relation the axial vector coupling constant can also be calculated via \begin{equation} g\sub{A}\top{G-T}=\frac{f_\pi}{M\sub{N}}g\sub{\pi NN} \;. \end{equation} The quadratic radius of \begin{eqnarray} \label{rhoch2} \langle R^2 \rangle\sub{bou}&=&\int_0^\infty r^4 d r\left\{ h_0^2(r)+j_0^2(r) \right\}\,,\\ \langle R^2 \rangle\sub{con}&=&\frac{-1}{2\pi}\int_0^\infty d \nu \int_0^\infty r^4 d r\sum_{K^P}\left\{s_{11}+ s_{22}+s_{33}+s_{44}\right\} \end{eqnarray} has to be compared with an experimental value of $0.62$ fm$^2$. The continuum part is convergent, but so small that regularizing the integral does not change the result. \section{Numerics} \setcounter{equation}{0} We have numerically implemented the expressions for the energy and its functional derivative presented in section 3 in the way described in \cite{Baacke:1992nh} for the energy, and in \cite{Baacke:1995hw} for its functional derivative. 
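The type of radial problem this involves can be illustrated by a minimal one-channel toy version of the inward integration for a truncated mode function, cf.\ Eq.\ (\ref{DGLh}); the potential, the value of $\kappa$ and the matching radius in the sketch below are invented for illustration only and do not correspond to the actual $4\times 4$ K-spin system:
\begin{verbatim}
# Minimal one-channel analogue of the inward integration for a truncated
# mode function h(nu,r), cf. Eq. (DGLh).  For the decaying free solution
# b(x) = k_0(x) ~ exp(-x)/x one has kappa*b'(kappa r)/b(kappa r) = -kappa - 1/r,
# so the first-derivative coefficient reduces to -2*kappa and the equation
# becomes  h'' - 2*kappa*h' = V(r)*(1 + h),  with h -> 0 for r -> infinity.
# The potential V(r) is an invented short-range profile, illustration only.

import numpy as np
from scipy.integrate import solve_ivp

kappa = 1.0                      # sqrt(nu^2 + M^2), in units of the quark mass
R_max = 25.0                     # starting point of the inward integration

def V(r):
    return -2.0 * np.exp(-r)     # toy stand-in for the chiral background

def rhs(r, y):
    h, dh = y
    return [dh, V(r) * (1.0 + h) + 2.0 * kappa * dh]

# boundary condition h(R_max) = h'(R_max) = 0, integrate towards r -> 0
sol = solve_ivp(rhs, (R_max, 1e-3), [0.0, 0.0],
                dense_output=True, rtol=1e-9, atol=1e-12)

for r in (5.0, 2.0, 1.0, 0.5):
    print(f"r = {r:4.1f}   h(r) = {sol.sol(r)[0]: .6e}")
\end{verbatim}
In the actual computation the corresponding coupled $4\times 4$ system is solved in each K-spin and parity channel.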
The iteration proceeds as follows: For a given meson profile $\vartheta(r)$ one computes the mode functions and evaluates the functional derivative of the energy. One then requires the vanishing of the functional derivative,
\begin{equation} \label{eom} {\hat z}_a \left( \frac{\delta E^{\overline{(2)}}_{0,M}}{\delta{\phi}_a(\mbox{\boldmath{$z$}\unboldmath})} +\frac{\delta E\top{bou}}{\delta{\phi}_a(\mbox{\boldmath{$z$}\unboldmath})} +\frac{\delta E\top{br}}{\delta{\phi}_a(\mbox{\boldmath{$z$}\unboldmath})} -\frac{M^2}{\Lambda^2}\frac{\delta E^{\overline{(2)}}_{0,\Lambda}} {\delta{\phi}_a(\mbox{\boldmath{$z$}\unboldmath})} \right) _{\mbox{\boldmath{$\phi$}\unboldmath}(\mbox{\boldmath{$z$}\unboldmath})={\mbox{\boldmath{$\hat z$}\unboldmath}}\vartheta(r)}=0 \;. \end{equation}
As can be seen from Eqs. (\ref{dedphi}), (\ref{dedbou}) and (\ref{dedmes}), this equation takes the form
\begin{equation} \frac{\delta E}{\delta{\phi}_a(\mbox{\boldmath{$z$}\unboldmath})}\Biggr| _{\mbox{\boldmath{$\phi$}\unboldmath}(\mbox{\boldmath{$z$}\unboldmath})={\mbox{\boldmath{$\hat z$}\unboldmath}}\vartheta(r)} =\hat {z}_a \left[A(r) \cos(\vartheta(r))+B(r)\sin(\vartheta(r))\right]\;. \end{equation}
The coefficient functions $A(r)$ and $B(r)$ are the results of the numerical computation. Extremizing the energy by requiring the functional derivative to vanish fixes $\vartheta(r)$ via $\tan(\vartheta(r))=-A(r)/B(r)$. This profile is used as the input for the next iteration. This method of iteration has been used previously in Ref.~\cite{Meissner:1991}.
The functions $h^{\alpha\pm}(\nu,r)$ have been computed in order $\overline{(1)}$ and $\overline{(2)}$ by solving (\ref{DGLh}) and its recursion (\ref{recur}) using a Runge-Kutta scheme. The accuracy of these solutions was checked by means of the Wronskian relation, which was constant to at least $6$ significant digits. The sum over the K-spin was extended to $K_{\rm max}=16$ during the iteration and to $K_{\rm max}=20$ for the final result. It is straightforward to derive, e.g., using asymptotic expansions for the lowest order perturbative contributions, that the power behaviour in angular momentum should be, after regularization, $K^{-3}$. In Fig. 2a we show the terms in the sum over angular momenta for the integrand of the energy at $\nu=1$ with a power fit $A K^{-3}+B K^{-4}$. An analogous example is given for the $\nu$ integrand of $g_A$, Eq. (\ref{ga_sea}), at $\nu=0$. Using this power fit it is then straightforward to include the sum above $K_{\rm max}$. So, to a very good approximation, the angular momentum sum runs, effectively, up to $K=\infty$. Likewise the integral over $\nu$ can be extended to $\nu = \infty$ by using a power fit. The $\nu$ integrand for the energy is displayed in Fig. 2c. Here the expected power behaviour is $\nu^{-2}$ and, based on the power fit displayed in this figure, we have included the contribution of the integral above $\nu = 5 M$. The numerical results for the energy and other static parameters are presented in Table \ref{tab1} and in Figs. 3 - 7.
\section{Results and conclusions} \setcounter{equation}{0} We have presented here a self-consistent computation of the nucleon ground state in the Nambu-Jona-Lasinio model. In contrast to most previous calculations we have used a Pauli-Villars cutoff. It has been shown recently \cite{Diakonov:1996b} that such a cutoff is favored by parton sum rules.
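Before discussing the results further, the tail treatment of the angular momentum sums of the previous section can be made more explicit with a stand-alone sketch; the sample terms below are invented and are not the actual data of Fig.~2a:
\begin{verbatim}
# Fit the last computed terms t_K with A*K^-3 + B*K^-4 and add the analytic
# remainder of the sum beyond K_max (illustration with invented sample terms).

import numpy as np
from scipy.special import zeta        # zeta(s, q) is the Hurwitz zeta function

K = np.arange(10, 21)
t = 0.8 * K**-3.0 - 1.5 * K**-4.0     # invented "computed" terms

# linear least squares for A and B
M = np.column_stack([K**-3.0, K**-4.0])
(A, B), *_ = np.linalg.lstsq(M, t, rcond=None)

# sum_{K = K_max+1}^{infinity} (A*K^-3 + B*K^-4) via Hurwitz zeta functions
K_max = K[-1]
tail = A * zeta(3.0, K_max + 1) + B * zeta(4.0, K_max + 1)
print(f"A = {A:.4f}  B = {B:.4f}  tail beyond K_max = {tail:.3e}")
\end{verbatim}
An analogous fit, with the leading $\nu^{-2}$ power, is used for the tail of the $\nu$ integration.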
As the main object of this work we have introduced an alternative method of numerical computation, based on Euclidean Green functions instead of the use of the quark eigenfunctions for real energies. Our method has the advantage that it is not necessary to discretize the continuous spectrum of the sea quarks. While this discretization and the associated limiting procedure seem to be well under control, finite boundaries may introduce spurious effects, as discussed in \cite{Wakamatsu:1991}. Although our results essentially confirm those of other groups, this agreement is by no means trivial.
Besides presenting the analytical framework for the computation of self-consistent profiles, based on explicit expressions for the energy and its functional derivative, we have also derived explicit expressions for other observables. Again the quark sea contributions can be formulated in terms of the Euclidean Green function. Our numerical results are presented in Table \ref{tab1} and plotted in Figs. 3 to 7. In Table \ref{tab1} we also give some results obtained for $g=4$ in Ref. \cite{Doring:1992sj} with the same regularization. In view of the difference between the numerical approaches the agreement is very satisfactory. This agreement holds as well for the mesonic profiles $\pi(r)$, plotted in Fig. 3. In Figs. 4 - 7 we plot the various parts of the energy, the axial vector coupling $g_A$, the mesonic profiles $\vartheta(r)$ and the moment of inertia, as functions of the coupling $g$. One sees that a value of $g \simeq 4$ is preferred by the comparison with experiment.
The nucleon mass is still too high, but somewhat lower than the one obtained with the Schwinger proper-time cutoff. A lower value for $M\sub{N}$ can be obtained by minimizing the sum of fluctuation energy and rotational energy \cite{Schleif:1997}. We have not followed this issue here. It can be inferred from multiparticle dynamics (see, e.g. \cite{Ring:1980}) that one should subtract an energy corresponding to the c.m.~motion of the quarks, which would result in a mass near the physical value. For the present self-consistent condensate-quark state there is, however, no compelling proof for such a procedure. Such a subtraction is therefore performed by some authors \cite{Pobylitsa:1992bk}, but it has not become a general standard.
A major difference from other publications on the chiral quark model is observed for the axial coupling constant $g\sub{A}$, which is computed from the quark current and for which we obtain values between $1.15$ and $1.37$, as shown in Fig. 5. Most other authors have used the Schwinger proper-time cutoff; they obtain results well below $1$. However, similar values for $g\sub{A}$ have been obtained recently by Golli {\em et al.} \cite{Golli:1998rf}, who use a Gaussian cutoff for the effective quark mass. The bound state contribution agrees with the one given, e.g., in \cite{Doring:1992sj}. The major part of the increase of $g\sub{A}$ with $g$ comes from the continuum part. The values for $g\sub{A}$ obtained from the asymptotic behavior of the soliton profile via the Goldberger-Treiman relation are somewhat lower than those computed directly from the quark currents; they are in the range between $1.15$ and $1.17$. While a small violation of the Goldberger-Treiman relation is expected, the appreciable increase of this violation with $g$ is somewhat troublesome. As to the variation of the axial vector coupling with the coupling $g$, we find a monotonic increase.
The bound state contribution decreases with $g$; this is, however, overcompensated by an increasing continuum contribution. This trend is different from the one found by other authors (see, e.g., \cite{Doring:1992sj,Wakamatsu:1993up}), who mostly use the Schwinger proper-time cutoff. It agrees, however, with the one found in \cite{Doring:1992sj} for their Pauli-Villars I cutoff, as we may infer from the two values $g\sub{A}=0.94$ at $g=3.85$ and $g\sub{A}=0.96$ at $g=4$. The dependence of $g\sub{A}$ on $g$ found in Ref. \cite{Golli:1998rf} with the Gaussian mass cutoff again shows the same trend as ours.\footnote{The correspondence between their parameters and ours is somewhat involved; in the range considered here their values of $1/G$ vary monotonically with our coupling $g$, as do their and our values for the nucleon mass.} In the same way the trend of the isoscalar quadratic radius $\langle R^2 \rangle$ with $g$ is found opposite to the one in other cutoff schemes; it again agrees with the Pauli-Villars I results of \cite{Doring:1992sj}. So it seems that it is the regularization scheme and not some numerical deficiency which causes the differences in the trends, an unsatisfactory situation which requires further investigation.
While the absolute value of $g\sub{A}$ obtained here suggests a satisfactory agreement with experiment (see, e.g., \cite{Matsinos:1998wp}), it has to be taken into consideration that $g\sub{A}$ is only calculated in ${\cal O}(\Omega^0)$. As shown in \cite{Wakamatsu:1993up} the next orders in $\Omega$ lead to additional contributions of $g^1\sub{A}\sim 0.4$ and $g^2\sub{A}\sim 0.2$, if computed with the Schwinger proper-time regularisation. Thus in fact $g\sub{A}$ is overestimated here, as in nearly all regularization schemes, as discussed in \cite{Wakamatsu:1993up}. The problem is not resolved entirely, however, as apparently the expansion in $\Omega$ does not converge well, so that higher corrections may still modify the results appreciably.
In conclusion we have presented here a new approach to computing self-consistent meson profiles and static observables of the nucleon in the Nambu-Jona-Lasinio model, using a Pauli-Villars cutoff. The agreement with previous analyses using the same cutoff but different numerical methods is satisfactory in general; some results given here are new. In view of the fact that our numerical procedure is rather economical we think that it is worthwhile to pursue its application, e.g., to alternative versions of the model or to similar self-consistency problems.
\section*{Acknowledgements} H.S. thanks the Graduiertenkolleg ``Er\-zeug\-ung und Zerf\"alle von Elementar\-teil\-chen'' for financial support.
\begin{appendix} \section{Partial waves in the $K$ spin basis}\label{anhangXi} \setcounter{equation}{0} The expansion with respect to K-spin harmonics $\Xi_{ij}$ \cite{Kahana:1984} \begin{eqnarray}\label{greenexp} \psi_{_{K,K_z,P}}(\mbox{\boldmath{$x$}\unboldmath}) = \left(\begin{array}{rlcrl} u^{_{K,P}}_1(r)\!\!&\!\!\Xi^{_{K,K_z}}_{1}(\mbox{\boldmath{$\hat x$}\unboldmath})\!\!&\!\!+\!\!&\!\! u_4^{_{K,P}}(r)\!\! &\!\!\Xi^{_{K,K_z}}_{4}(\mbox{\boldmath{$\hat x$}\unboldmath})\\ u_2^{_{K,P}}(r)\!\!&\!\!\Xi^{_{K,K_z}}_{2}(\mbox{\boldmath{$\hat x$}\unboldmath})\!\!&\!\!+\!\!&\!\! u_3^{_{K,P}}(r)\!\! &\!\!\Xi^{_{K,K_z}}_{3}(\mbox{\boldmath{$\hat x$}\unboldmath}) \end{array}\right)\,\nonumber\\ \end{eqnarray} written here for parity $(-1)^{(K+1)}$, reduces the Dirac equation to radial equations for four coupled partial waves $u_i$.
The Hamiltonian acting on the radial wave functions with parity $(-1)^{(K+1)}$ via \begin{equation} H^{K^P}_{ij}u_j=E u_i \end{equation} is given by \begin{eqnarray} \nonumber {\cal{H}}^{K^P}&=&\left\{ \begin{array}{cccc} 0&-\displaystyle\frac{d}{dr}-\frac{K+1}{r}&0&0\\ \displaystyle\frac{d}{dr}-\frac{K-1}{r}&0&0&0\\0&0&0&- \displaystyle\frac{d}{dr}-\frac{K+2}{r}\\0&0& \displaystyle\frac{d}{dr}-\frac{K}{r}&0\\ \end{array}\right\}\\ &+&\left\{ \begin{array}{cccc} C(r)&-cS&sS(r)&0\\-cS(r)&-C(r) &0&-sS(r)\\sS(r)&0&-C(r)&-cS(r)\\0&-sS(r)&-cS(r)&C(r)\\ \end{array}\right\}\,, \end{eqnarray} where \begin{eqnarray} S(r)&=&M\sin(\vartheta(r))\,,\\ C(r)&=&M\cos(\vartheta(r))\,,\\ s&=&2\sqrt{K(K+1)}/(2K+1)\,,\\ c&=&1/(2K+1)\,. \end{eqnarray} We ``square'' the Dirac equation to obtain an effective Klein-Gordon equation \begin{equation} \label{boswaveq} \left(-\Delta_{ij}^K+M^2\delta_{ij}+{\cal{V}}^K_{ij}\right)f_j=E^2 f_i\,, \end{equation} where \begin{equation} \Delta^K_{ij}=\delta_{ij}\frac{1}{r^2}\frac{d}{dr}r^2\frac{d}{dr}- \frac{K_i(K_i+1)}{r^2}\,. \end{equation} The orbital angular momenta $K_i$ are given by $K_1=K-1$, $K_2=K_3=K$ and $K_4=K+1$. The bosonized wave functions refer to the same K-spin basis (\ref{greenexp}) as the fermionic ones. The potential for the parity $(-1)^{K+1}$ is given by \begin{eqnarray} \label{pottot} &&{\cal{V}}^K=\\&&\nonumber \left\{ \begin{array}{cccc}\displaystyle c\left(S'+\frac{2KS}{r}\right)&C'&0&\displaystyle s\left(S'-\frac{S}{r}\right)\\ C'&c\displaystyle\left(-S'+\frac{2KS}{r}\right)&\displaystyle s\left(S'+\frac{S}{r}\right)&0\\0& \displaystyle s\left(S'+\frac{S}{r}\right)& \displaystyle c\left(S'+\frac{2(K+1)S}{r}\right)&-C'\\ \displaystyle s\left(S'-\frac{S}{r}\right)&0&-C'& \displaystyle c\left(-S'+\frac{2(K+1)S}{r}\right)\\ \end{array}\right\} \,, \end{eqnarray} where \begin{eqnarray} S'(r)&=&M\frac{d}{dr}\sin(\vartheta(r))\,,\\ C'(r)&=&M\frac{d}{dr}\cos(\vartheta(r))\,. \end{eqnarray} For the parity $(-1)^K$ the sign of the mass has to be changed. While above we have formulated the Dirac and effective Klein-Gordon equations for wave functions with real minkowskian energies $E$ we mainly work with euclidean mode functions whose argument we denote with $\nu=-iE$. So the mode equations are analogous to (\ref{boswaveq}) with $E^2 \to - \nu^2$. In term of these mode finctions the bosonic Green function is given by (\ref{gtheta}). The fermionic euclidean mode functions are related to the bosonized ones via \begin{eqnarray} \label{modmod} u_1&=&\left(i\nu+C\right)f_{1}-\frac{K+1}{r}f_{2}-f_{2}' -cSf_{2}+sSf_{3}\,,\\ u_2&=&\left(i\nu-C\right)f_{2}-\frac{K-1}{r}f_{1}+f_{1}' -cSf_{1}-sSf_{4}\,,\\ u_3&=&\left(i\nu-C\right)f_{3}-\frac{K+2}{r}f_{4}-f_{4}' -cSf_{4}+sSf_{1}\,,\\ u_4&=&\left(i\nu+C\right)f_{4}-\frac{K}{r}f_{3}+f_{3}' -cSf_{3}-sSf_{2} \end{eqnarray} for the parity $(-1)^{(K+1)}$, thus the Green function becomes complex. With these radial functions the fermionic Green function $S_E({\bf x},{\bf x}',\nu)$ in the K-Spin basis reads \begin{equation}\label{gthetatilde} s_{nm}(r,r',\nu)=\kappa\left[ \theta(r-r'){u}^{\alpha +}_n (\nu,r) f^{\alpha -}_{m}(\nu,r') +\theta(r'-r){u}^{\alpha -}_n (\nu,r) f^{\alpha +}_{m}(\nu,r') \right]\,. 
\end{equation} \section{Some relations for K-spin harmonics} \setcounter{equation}{0} The action of $\tau_3$, $\sigma_3$ and $\sigma_3\tau_3$ on the K-spin harmonics \cite{Kahana:1984} is \begin{eqnarray} \tau_3\Xi^{K,K_z}_{1}&=&\frac{K_z}{K}\Xi^{K,K_z}_{1}- \frac{\sqrt{K^2-K_z^2}}{K}\Xi^{K-1,K_z}_{3 }\,,\\ \tau_3\Xi^{K,K_z}_{2}&=&\frac{K_z}{K}\Xi^{K,K_z}_{2}- \frac{\sqrt{K^2-K_z^2}}{K}\Xi^{K-1,K_z}_{4 }\,,\\ \tau_3\Xi^{K,K_z}_{3}&=&-\frac{K_z}{K+1}\Xi^{K,K_z}_{3}- \frac{\sqrt{(K+1)^2-K_z^2}}{K+1}\Xi^{K+1,K_z}_{1 }\,,\\ \tau_3\Xi^{K,K_z}_{4}&=&-\frac{K_z}{K+1}\Xi^{K,K_z}_{4}- \frac{\sqrt{(K+1)^2-K_z^2}}{K+1}\Xi^{K+1,K_z}_{2 }\,, \\ \sigma_3\Xi^{K,K_z}_{1}&= &\frac{K_z}{K}\Xi^{K,K_z}_{1} -2\frac{\sqrt{K-1}\sqrt{K^2-K_z^2}}{(2K-1)\sqrt{K}}\Xi^{K-1,K_z}_{2} +\frac{\sqrt{K^2-K_z^2}}{(2K-1){K}}\Xi^{K-1,K_z}_{3} \,,\\ \sigma_3\Xi^{K,K_z}_{2}&=& -2\frac{\sqrt{K}\sqrt{(K+1)^2-K_z^2}}{(2K+1)\sqrt{K+1}}\Xi^{K+1,K_z}_{1} -\frac{K_z(2K-1)}{K(2K+1)}\Xi^{K,K_z}_{2} \\ \nonumber && +2\frac{K_z}{(2K+1)\sqrt{K(K+1)}}\Xi^{K,K_z}_{3} -\frac{\sqrt{K^2-K_z^2}}{K(2K+1)}\Xi^{K-1,K_z}_{4} \,,\\ \sigma_3\Xi^{K,K_z}_{3}&= &\frac{\sqrt{(K+1)^2-K_z^2}}{(2K+1)(K+1)}\Xi^{K+1,K_z}_{1} +2\frac{K_z}{(2K+1)\sqrt{K(K+1)}}\Xi^{K,K_z}_{2} \\ \nonumber&& +\frac{K_z(2K+3)}{(2K+1)(K+1)}\Xi^{K,K_z}_{3} -2\frac{\sqrt{K^2-K_z^2}\sqrt{K+1}}{(2K+1)\sqrt{K}}\Xi^{K-1,K_z}_{4} \,,\\ \sigma_3\Xi^{K,K_z}_{4}&= &-\frac{\sqrt{(K+1)^2-K_z^2}}{(2K+3)(K+1)}\Xi^{K+1,K_z}_{2} \\ \nonumber&& -2\frac{\sqrt{(K+1)^2-K_z^2}\sqrt{K+2}}{(2K+3)\sqrt{K+1}}\Xi^{K+1,K_z}_{3} -\frac{K_z}{K+1}\Xi^{K,K_z}_{4} \,,\\ \sigma_3\tau_3\Xi^{K,K_z}_{1}&= &-\frac{K-2K_z^2}{K(2K-1)}\Xi^{K,K_z}_{1} -2\frac{K_z\sqrt{K^2-K_z^2}}{(2K-1)\sqrt{K(K-1)}}\Xi^{K-1,K_z}_{2} \\ \nonumber&& -2\frac{K_z\sqrt{K^2-K_z^2}}{K(2K-1)}\Xi^{K-1,K_z}_{3} +2\frac{\sqrt{(K^2-K_z^2)((K-1)^2-K_z^2)}}{(2K-1)\sqrt{K(K-1)}} \Xi^{K-2,K_z}_{4} \,,\\ \sigma_3\tau_3\Xi^{K,K_z}_{2}&= &-2\frac{K_z\sqrt{(K+1)^2-K_z^2}}{(2K+1)\sqrt{K(K+1)}}\Xi^{K+1,K_z}_{1} +\frac{K-2K_z^2}{K(2K+1)}\Xi^{K,K_z}_{2} \\ \nonumber&& +2\frac{K^2+K-K_z^2}{(2K+1)\sqrt{K(K+1)}}\Xi^{K,K_z}_{3} +2\frac{K_z\sqrt{K^2-K_z^2}}{K(2K+1)}\Xi^{K-1,K_z}_{4} \,,\\ \sigma_3\tau_3\Xi^{K,K_z}_{3}&= &-2\frac{K_z\sqrt{(K+1)^2-K_z^2}}{(2K+1)(K+1)}\Xi^{K+1,K_z}_{1} +2\frac{K^2+K-K_z^2}{(2K+1)\sqrt{K(K+1)}}\Xi^{K,K_z}_{2} \\ \nonumber&& -\frac{K+1+2K_z^2}{(2K+1)(K+1)}\Xi^{K,K_z}_{3} +2\frac{K_z\sqrt{K^2-K_z^2}}{(2K+1)\sqrt{K(K+1)}}\Xi^{K-1,K_z}_{4} \nonumber\,,\\ \sigma_3\tau_3\Xi^{K,K_z}_{4}&= & 2\frac{\sqrt{((K+1)^2-K_z^2) ((K+2)^2-K_z^2)}}{(2K+3)\sqrt{(K+1)(K+2)}}\Xi^{K+2,K_z}_{1} \\&& +2\frac{K_z\sqrt{(K+1)^2-K_z^2}}{(2K+3)(K+1)}\Xi^{K+1,K_z}_{2} \nonumber\\&& +2\frac{K_z\sqrt{(K+1)^2-K_z^2}}{(2K+3)\sqrt{(K+1)(K+2)}}\Xi^{K+1,K_z}_{3} +\frac{1+K+2K_z^2}{(2K+3)(K+1)}\Xi^{K,K_z}_{4} \nonumber\,. \end{eqnarray} \end{appendix} \newpage
\section{Introduction} In the quark model, all kinds of mesons are classified by the spin-parity quantum numbers $J^P$. For example, $J^{P}=0^{-}$ denotes pseudoscalar mesons and $J^{P}=2^{+}$ represents tensor mesons. The p-wave tensor mesons that we study in this paper include the isovector mesons $a_{2}(1320)$, the isodoublet states $K_{2}^{*}(1430)$ and the two isosinglet mesons $f_{2}(1270)$, $f_{2}^{\prime}(1525)$ \cite{jpg37075021,wwprd83014008}. For these nine tensor mesons, both the orbital angular momentum and the total spin of the quarks are equal to 1. Because of the requirement of Bose statistics for the tensor meson, the light-cone distribution amplitudes of tensor mesons are antisymmetric under the interchange of the momentum fractions of the quark and anti-quark in the flavor SU(3) limit \cite{zheng1,zheng2}.
Recently, several experimental measurements of charmless B decay modes involving a light tensor meson (T) in the final state have been reported \cite{prd82011502,prl97201802,prd79052005,prd78012004,prl96251803, prd72072003,prd71092003,ifc32229,prl101161801,prd79072006,prd78052005,prd80112001,prd75012006,prd78092008}. These decays have been studied in the naive factorization approach \cite{prd491645,prd555581,prd59077504,epjc22683,epjc22695,prd67014002,jpg36095004,arxiv1004.1928, arxiv1010.3077}, with which it can be easily shown that $\langle 0\mid j^{\mu}\mid T \rangle =\,0$, where $j^{\mu}$ is the $(V\pm A)$ or $(S\pm P)$ current \cite{zheng1,zheng2,epjc22683,epjc22695}. The factorizable amplitude with a tensor meson emitted therefore vanishes, so these decays are prohibited in the naive factorization approach. The branching ratios predicted in the naive factorization approach are too small compared with the experimental results, which implies the importance of nonfactorizable and annihilation type contributions. A recent analysis in the QCD factorization (QCDF) approach \cite{zheng2} confirmed this. It is worth mentioning that the perturbative QCD (PQCD) approach \cite{wang7,prd63074009} is almost the only method to calculate these kinds of diagrams without fitting to experiments.
In this work we shall study charmless $B_{u(d)}\,\rightarrow\,P\,T$ decays in the perturbative QCD approach based on the $k_{T}$ factorization. Due to the heavy mass of the B meson, the two light mesons produced in the decay move very fast in the rest frame of the B meson. The light quarks in the final state mesons are all collinear, while the light spectator quark from the B meson is soft. Therefore there must be a hard gluon to kick the light spectator quark in the B meson to form a fast moving light meson. In this case, the hard process dominates the decay amplitude, which makes it perturbatively calculable. By keeping the transverse momentum of the quarks, the end point singularity in the collinear factorization can be eliminated. Double logarithms appear in the QCD radiative corrections due to the additional energy scale introduced by the transverse momentum. By using the renormalization group equation, the double logarithms can be resummed into the Sudakov factor, which effectively suppresses the endpoint contributions of the meson distribution amplitudes in the small momentum region and makes the perturbative calculation reliable. The annihilation diagrams can also be calculated perturbatively in the PQCD approach; they have been shown to provide the dominant strong phase for the direct CP asymmetry in $B$ decays \cite{laoban}.
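The role of the transverse momentum at the endpoint, mentioned above, can be visualized with a toy example; the flat distribution amplitude, the $1/x$ kernel and the value of $k_T$ below are purely illustrative and are not the actual PQCD hard functions used later:
\begin{verbatim}
# Toy illustration of the endpoint behaviour: with a 1/x hard kernel the
# collinear convolution diverges logarithmically as the lower cutoff on x is
# reduced, while keeping a transverse momentum kT renders it finite.

from scipy.integrate import quad

mB  = 5.28                              # GeV
kT  = 0.5                               # GeV, illustrative transverse momentum
phi = lambda x: 1.0                     # flat DA, illustration only

def collinear(eps):
    return quad(lambda x: phi(x) / (x * mB**2), eps, 1.0)[0]

with_kt = quad(lambda x: phi(x) / (x * mB**2 + kT**2), 0.0, 1.0)[0]

for eps in (1e-2, 1e-4, 1e-6):
    print(f"collinear, x > {eps:g}: {collinear(eps):.4f}")
print(f"with kT kept          : {with_kt:.4f}")
\end{verbatim}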
Phenomenologically, the PQCD approach has successfully predicted the direct CP asymmetry in hadronic $B$ decays \cite{laoban} and the branching ratios of pure annihilation type $B$ decays \cite{cdlv}. This paper is organized as follows. In Sec.II, we present the formalism and wave functions of the considered B meson decays. Then we perform the perturbative calculations for considered decay channels with the PQCD approach in Sec.III. The numerical results and phenomenological analysis are given in Sec.IV. Sec.V contains the main conclusions and a short summary. Finally, Appendix A contains input parameters and distribution amplitudes used in this paper and Appendix B gives various functions that enter the factorization formulae in the PQCD approach. \section{FORMALISM AND WAVE FUNCTIONs} The related weak effective Hamiltonian $H_{eff}$ \cite{rmp68} for charmless $b\rightarrow d(s)$ transitions can be written as \begin{eqnarray} H_{eff}=\frac{G_{F}}{\sqrt{2}}\left\{\sum_{i=1}^{2}\,C_{i}(\mu) V_{ub}^{*}V_{uD}O_{i}^{u}(\mu)\,-\,V_{tb}^{*}V_{tD} \sum_{j=3}^{10}\,C_{j}(\mu)O_{j}(\mu)\right\}, \end{eqnarray} where $V_{ub}$, $V_{uD}$, $V_{tb}$ and $V_{tD}$ are CKM matrix elements, $D$ denotes the light down quark d or s, and $C_{i(j)}(\mu)$ are Wilson coefficients at the renormalization scale $\mu$. $O_{i(j)}(\mu)$ are the well known effective tree (penguin) operators \cite{rmp68}. The non-leptonic B meson decays involve three energy scales, including the electroweak scale $M_W$, b quark mass scale $M_B$ and the factorization scale $\sqrt{\overline{\Lambda}M_B}$, where $\overline{\Lambda} \equiv M_B-m_b$. When the energy scale is higher than the W boson mass $M_W$, the physics is the electroweak interaction which can be calculated perturbatively. The physics from $M_W$ scale to $M_B$ scale is described by the Wilson coefficients of effective four quark operators, which is the resummation of leading logarithm by renormalization equations. The physics between $M_B$ scale and the factorization scale is calculated by the hard part calculation in the PQCD approach. The physics below the factorization scale is described by the hadronic wave functions of mesons, which are nonperturbative but universal for all decay processes. In the PQCD approach, the decay amplitude can be factorized into the convolution of the Wilson coefficients, the hard scattering kernel and the light-cone wave functions of mesons characterized by different scales, respectively. Then, for $B\,\rightarrow\,M_{2}M_{3}$ decays, the decay amplitude is conceptually written as the convolution, \begin{eqnarray} \mathcal {A}\;\sim\;&&\int\,dx_{1}dx_{2}dx_{3}b_{1}db_{1}b_{2}db_{2}b_{3}db_{3}\nonumber\\ &&\times Tr[C(t)\Phi_{B}(x_{1},b_{1})\Phi_{M_{2}}(x_{2},b_{2}) \Phi_{M_{3}}(x_{3},b_{3})H(x_{i},b_{i},t)S_{t}(x_{i})e^{-S(t)}], \end{eqnarray} where $x_i$ is the longitudinal momentum fractions of valence quarks, $b_{i}$ is the conjugate space coordinate of the transverse momentum $k_{iT}$ of the light quarks, and $t$ is the largest scale in function $H(x_{i},b_{i},t)$. By using the renormalization group equations, the large logarithms $\ln(m_{W}/t)$ are included in the Wilson coefficients $C(t)$. By the threshold resummation, the large double logarithms $\ln^{2}x_{i}$ are summed to give $S_{t}(x_{i})$ which smears the end-point singularities on $x_{i}$ \cite{prd66094010}. The last term, $e^{-S(t)}$, is the Sudakov factor which suppresses the soft dynamics effectively \cite{prd57443}. 
Thus it makes the perturbative calculation of the hard part $H$ applicable at intermediate scale, i.e., $m_{B}$ scale. We will work in the B meson rest frame and employ the light-cone coordinates for momentum variables. So the B meson momentum is chosen as $P_{1}=\frac{m_{B}}{\sqrt{2}}(1,\,1,\,\textbf{0}_{T})$. For the non-leptonic charmless $B\,\rightarrow\,M_{2}M_{3}$ decays, we assume that the $M_{2}$($M_{3}$) meson moves in the plus(minus) z direction carrying the momentum $P_{2}$($P_{3}$). Then the momenta are given by \begin{eqnarray} P_{2}=\frac{m_{B}}{\sqrt{2}}(1-r_{3}^{2},\,r_{2}^{2},\,\textbf{0}_{T}),\;P_{3}=\frac{m_{B}} {\sqrt{2}}(r_{3}^{2},\,1-r_{2}^{2},\,\textbf{0}_{T}), \end{eqnarray} where $r_{2}=\frac{m_{M_{2}}}{m_{B}}$ and $r_{3}=\frac{m_{M_{3}}}{m_{B}}$. The (light-) quark momenta in B , $M_{2}$ and $M_{3}$ mesons are defined as $k_{1}$, $k_{2}$ and $k_{3}$, respectively. We choose \begin{eqnarray} k_{1}=(x_{1}P_{1}^{+},0,\textbf{k}_{1T}),\;k_{2}=(x_{2}P_{2}^{+},0,\textbf{k}_{2T}), \;k_{3}=(0,x_{3}P_{3}^{-},\textbf{k}_{3T}). \end{eqnarray} For a tensor meson, the polarization tensor $\epsilon_{\mu\nu}(\lambda)$ with helicity $\lambda$ can be constructed via the polarization vectors of a vector meson \cite{zheng1,zheng2}. They are given by \begin{eqnarray} \epsilon^{\mu\nu}(\pm2)\,&\equiv&\,\epsilon(\pm1)^{\mu}\epsilon(\pm1)^{\nu},\nonumber\\ \epsilon^{\mu\nu}(\pm1)\,&\equiv&\,\sqrt{\frac{1}{2}}\left[\epsilon(\pm1)^{\mu}\epsilon(0)^{\nu}\, +\,\epsilon(0)^{\mu}\epsilon(\pm1)^{\nu}\right],\nonumber\\ \epsilon^{\mu\nu}(0)\,&\equiv&\,\sqrt{\frac{1}{6}}\left[\epsilon(+1)^{\mu}\epsilon(-1)^{\nu}\, +\,\epsilon(-1)^{\mu}\epsilon(+1)^{\nu}\right]\,+\,\sqrt{\frac{2}{3}}\epsilon(0)^{\mu} \epsilon(0)^{\nu}. \end{eqnarray} With the tensor meson moving on the plus direction of the z-axis, the polarization vectors of the vector meson are chosen as \begin{eqnarray} \epsilon^{\mu}(0)=\frac{1}{\sqrt{2}m_{T}}(k_{0}+k_{3},\,k_{0}-k_{3},\,0,\,0),\; \epsilon^{\mu}(\pm1)=\frac{1}{\sqrt{2}}(0,\,0,\,1,\,\pm i), \end{eqnarray} where $k_{0}$ denotes the energy and $k_{3}$ is the magnitude of the tensor meson momentum in the B meson rest frame. The polarization tensor satisfies the relations \cite{zheng1,zheng2} \begin{eqnarray} \epsilon^{\mu\nu}(\lambda)=\epsilon^{\nu\mu}(\lambda), & \epsilon^{\mu}_{\mu}(\lambda)=0, & \nonumber\\ \epsilon^{\mu\nu}(\lambda)P_{\mu}=\epsilon^{\mu\nu}(\lambda)P_{\nu}=0, & \qquad \epsilon_{\mu\nu}(\lambda) (\epsilon^{\mu\nu}(\lambda^{\prime}))^{*}=\delta_{\lambda\lambda^{\prime}}. & \end{eqnarray} In the following calculation, we define a new polarization vector $\epsilon_{T}$ for the considered tensor meson for convenience \cite{wwprd83014008}, \begin{eqnarray} \epsilon_{T}(\lambda)=\frac{1}{m_{B}}\epsilon_{\mu\nu}(\lambda)P_{B}^{\nu}, \end{eqnarray} which satisfies \begin{eqnarray} \epsilon_{T\mu}(\pm2)=0, \quad \epsilon_{T\mu}(\pm1)=\frac{\epsilon(0)\cdot P_{B}\epsilon_{\mu}(\pm1)}{\sqrt{2} m_{B}}, \quad \epsilon_{T\mu}(0)=\frac{\sqrt{\frac{2}{3}}\epsilon(0)\cdot P_{B}\epsilon(0)}{m_{B}}. \end{eqnarray} One can find that the new vector $\epsilon_{T}$ is similar to the polarization vector $\epsilon$ of a vector meson, regardless of the related constants \cite{wwprd83014008}. In the PQCD approach, we should choose the proper wave functions for the B meson and light mesons to calculate the decay amplitude. 
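As a quick numerical check of the light-cone kinematics defined above (with the convention $a\cdot b\,=\,a^{+}b^{-}+a^{-}b^{+}-\textbf{a}_{T}\cdot\textbf{b}_{T}$), one may verify momentum conservation and the approximate on-shell conditions; the meson masses in the sketch below are sample values chosen only to illustrate the bookkeeping:
\begin{verbatim}
# Check P1 = P2 + P3 and the (approximate) on-shell conditions for the
# light-cone momenta defined above.  Masses are sample values (GeV).

import numpy as np

mB, m2, m3 = 5.279, 1.430, 0.140
r2, r3 = m2 / mB, m3 / mB

def dot(a, b):
    # a = (a_plus, a_minus, a_x, a_y)
    return a[0]*b[1] + a[1]*b[0] - a[2]*b[2] - a[3]*b[3]

P1 = mB / np.sqrt(2) * np.array([1.0,         1.0,         0.0, 0.0])
P2 = mB / np.sqrt(2) * np.array([1.0 - r3**2, r2**2,       0.0, 0.0])
P3 = mB / np.sqrt(2) * np.array([r3**2,       1.0 - r2**2, 0.0, 0.0])

print("P1 = P2 + P3 :", np.allclose(P1, P2 + P3))
print("P1^2 =", dot(P1, P1), "  mB^2 =", mB**2)
print("P2^2 =", dot(P2, P2), "  m2^2 =", m2**2)   # equal up to O(r^2) corrections
print("P3^2 =", dot(P3, P3), "  m3^2 =", m3**2)   # equal up to O(r^2) corrections
\end{verbatim}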
Because the B meson is a pseudoscalar heavy meson, the two structure ($\gamma_{\mu}\gamma_{5}$) and $\gamma_{5}$ components remain as leading contributions \cite{wwprd83014008}. Thus the B meson wave function $\Phi_{B}$ is written as \begin{eqnarray} \Phi_{B}=\frac{i}{\sqrt{6}}\left[\left(\makebox[-1.5pt][l]{/}P+m_{B}\right)\gamma_{5}\phi_{B}(x)\right]. \end{eqnarray} For the distribution amplitude, we can choose \begin{eqnarray} \phi_{B}(x,b)=N_{B}x^{2}(1-x)^{2}\exp\left[-\frac{1}{2}\left(\frac{m_{B}x}{\omega_{B}}\right)^{2} \,-\,\frac{\omega_{B}^{2}b^{2}}{2}\right], \end{eqnarray} where $N_{B}$ is the normalization constant. For the light pseudoscalar meson (P), the wave function is generally defined as \begin{eqnarray} \Phi_{P}(x)\,=\,\frac{i}{\sqrt{6}}\gamma_{5}\left\{\makebox[-1.5pt][l]{/}P\phi_{P}^{A}(x)+m_{0}^{P} \phi_{P}^{P}(x) +m_{0}^{P}(\makebox[0pt][l]{/}n\makebox[0pt][l]{/}v-1)\phi_{P}^{T}(x)\right\}, \end{eqnarray} where $\phi_{P}^{A,P,T}$ and $m_{0}^{P}$ are the distribution amplitudes and chiral scale parameter of the pseudoscalar mesons, respectively. $x$ denotes the momentum fraction carried by the quark in the meson, and $n=(1,\,0,\,\textbf{0})$ and $v=(0,\,1,\,\textbf{0})$ are dimensionless light-like unit vectors pointing to the plus and minus directions, respectively. The wave functions for a generic tensor meson are defined by \cite{wwprd83014008} \begin{eqnarray} &&\Phi_{T}^{L}\,=\,\frac{1}{\sqrt{6}}\left[m_{T}\makebox[0pt][l]{/}\epsilon_{\bullet L}^{*}\phi_{T}(x)\,+\,\makebox[0pt][l]{/}\epsilon_{\bullet L}^{*}\makebox[-1.5pt][l]{/}P\phi_{T}^{t}(x)+m_{T}^{2}\frac{\epsilon_{\bullet}\cdot v}{P\cdot v}\phi_{T}^{s}(x)\right],\nonumber\\ &&\Phi_{T}^{\perp}\,=\,\frac{1}{\sqrt{6}}\left[m_{T}\makebox[0pt][l]{/}\epsilon_{\bullet \perp}^{*}\phi_{T}^{v}(x)\,+\,\makebox[0pt][l]{/}\epsilon_{\bullet \perp}^{*}\makebox[-1.5pt][l]{/}P\phi_{T}^{T}(x)\,+\,m_{T}i\epsilon_{\mu\nu\rho\sigma} \gamma_{5}\gamma^{\mu}\epsilon_{\bullet \perp}^{* \nu}n^{\rho}v^{\sigma}\phi_{T}^{a}(x)\right]. \end{eqnarray} Here $n$ is the moving direction of the tensor meson, and $v$ is the opposite direction. We adopt the convention $\epsilon^{0123}=1$. The vector $\epsilon_{\bullet}\,\equiv\,\frac{\epsilon_{\mu\nu}v^{\nu}}{P\cdot\, v}m_T$ is related to the polarization tensor. The distribution amplitudes can be given by \cite{wwprd83014008,zheng1,zheng2} \begin{eqnarray} &\phi_{T}(x)\,=\,\frac{f_{T}}{2\sqrt{2N_{c}}}\phi_{\|}(x),\;&\phi_{T}^{t}\,=\,\frac{f_{T}^{\perp}}{2\sqrt{2N_{c}}}h_{\|}^{(t)}(x), \nonumber\\ &\phi_{T}^{s}(x)\,=\,\frac{f_{T}^{\perp}}{4\sqrt{2N_{c}}}\frac{d}{dx}h_{\|}^{(s)}(x),\;& \phi_{T}^{T}(x)\,=\,\frac{f_{T}^{\perp}}{2\sqrt{2N_{c}}}\phi_{\perp}(x),\nonumber\\ &\phi_{T}^{v}(x)\,=\,\frac{f_{T}}{2\sqrt{2N_{C}}}g_{\perp}^{(v)}(x),\;&\phi_{T}^{a}(x)\,=\,\frac{f_{T}}{8\sqrt{2N_{c}}}\frac{d}{dx}g_{\perp}^{(a)}(x). \end{eqnarray} The asymptotic twist-2 distribution amplitude is given by \begin{eqnarray} \phi_{\|,\perp}(x)\,=\,30x(1-x)(2x-1). \end{eqnarray} The twist-3 distribution amplitudes are also asymptotic and the forms are chosen as \cite{wwprd83014008,zheng1,zheng2} \begin{eqnarray} &h_{\|}^{(t)}(x)\,=\,\frac{15}{2}(2x-1)(1-6x+6x^{2}), &h_{\|}^{(s)}(x)\,=\,15x(1-x)(2x-1),\nonumber\\ &g_{\perp}^{(a)}(x)\,=\,20x(1-x)(2x-1), &g_{\perp}^{(v)}(x)\,=\,5(2x-1)^{3}. 
\end{eqnarray} \section{Perturbative calculation}\label{sec:bcdv} \begin{figure}[htb] \begin{center} \vspace{-1cm} \centerline{\epsfxsize=10 cm \epsffile{PTdiagram.ps}} \vspace{-3cm} \caption{Diagrams contributing to the $B\,\rightarrow\,PT$ decays, with a pseudoscalar meson emitted.} \label{fig:lodiagram} \end{center} \end{figure} In this section, we will calculate the hard part $H(t)$, which includes the effective four quark operators and the necessary hard gluon connecting the four quark operator with the spectator quark \cite{lvepjc23275}. There are 8 types of diagrams contributing to the $B\,\rightarrow\,PT$ decays, shown in Fig.1. From the first two diagrams of Fig.1, (1a) and (1b), by perturbative QCD calculations, we gain the decay amplitudes for factorizable emission contribution. For $(V-A)(V-A)$ current, the amplitude is written as, \begin{eqnarray} \mathcal {A}_{eT}^{LL}&=&-8\sqrt{\frac{2}{3}} \pi C_{F} f_{P} M_{B}^{4} \int_{0}^{1}\,dx_{1}dx_{3}\,\int_{0}^{\infty}\,b_{1}db_{1}b_{3}db_{3}\,\phi_{B}(x_{1},b_{1})\nonumber\\ &&\times\left\{\left[\phi_{T}(x_{3})(x_{3}+1)-(\phi_{T}^{s}(x_{3})+\phi_{T}^{t}(x_{3}))r_{T}(2x_{3}-1) \right]h_{ef}(x_{1},x_{3},b_{1},b_{3})E_{ef}(t_{a})\right.\nonumber\\ &&\left.+\left[2r_{T}\phi_{T}^{s}(x_{3})\right]h_{ef}(x_{3},x_{1},b_{3},b_{1})E_{ef}(t_{b})\right\}, \label{ef} \end{eqnarray} where $r_{T}\,=\,\frac{m_{T}}{m_{B}}$, and $C_{F}\,=\,\frac{4}{3}$. $f_{P}$ is the decay constant of the pseudoscalar meson. The function $h_{ef}$, $t_{a,b}$ and $E_{ef}$ can be found in Appendix B. Form Eq.\ref{ef}, we can obtain the $\langle T|V-A|B\rangle$ transition form factor in the PQCD approach. The operators $O_{5},O_{6},O_{7}$, and $O_{8}$ have the structure of $(V-A)(V+A)$. In some decay modes, some of these operators will contribute to the decay amplitude. Because only the axial part of $(V+A)$ current will contribute to the pseudoscalar meson production, we have \begin{eqnarray} \mathcal {A}_{eT}^{LR}\,=\,-\mathcal {A}_{eT}^{LL}. \end{eqnarray} In some cases, in order to get the right color structure, we must do a Fierz transformation for these operators. So we obtain $(S-P)(S+P)$ operators from $(V-A)(V+A)$ ones. The decay amplitude is, \begin{eqnarray} \mathcal {A}_{eT}^{SP}&=&16\sqrt{\frac{2}{3}}C_{F}f_{P}\pi m_{B}^{4}\int_{0}^{1}\,dx_{1}dx_{3}\int_{0}^{\infty}\,b_{1}db_{1}b_{3}db_{3}\cdot\phi_{B}(x_{1},b_{1})\nonumber\\ &&\times \left\{\left[\phi_{T}(x_{3})+r_{T}(\phi_{T}^{s}(x_{3})(x_{3}+2)-\phi_{T}^{t}(x_{3})x_{3})\right]r_{0}h_{ef} (x_{1},x_{3},b_{1},b_{3})E_{ef}(t_{a})\right.\nonumber\\ &&\left. +\left[2r_{T}r_{0}\phi_{T}^{s}(x_{3})\right]h_{ef}(x_{3},x_{1},b_{3},b_{1})E_{ef}(t_{b})\right\}, \end{eqnarray} where $r_{0}=m_{0}^{P}/m_{B}$. For the non-factorizable diagrams Fig.(1c) and (1d), the amplitudes involve all three wave functions. The integration of $b_{3}$ can be performed through $\delta$ function $\delta(b_{1}-b_{3})$, leaving only integration of $b_{1}$ and $b_{2}$. For the (V-A)(V-A), (V-A)(V+A) and (S-P)(S+P) type operators, the amplitudes are \begin{eqnarray} \mathcal {M}_{eT}^{LL}&=&\frac{32}{3}C_{F}\pi m_{B}^{4}\int_{0}^{1}\,dx_{1}dx_{2}dx_{3}\int_{0}^{\infty}\,b_{1}db_{1}b_{2}db_{2}\,\phi_{B}(x_{1},b_{1})\phi_{P}^{A}(x_{2})\nonumber\\ &&\times \left\{\left[\phi_{T}(x_{3})(x_{2}-1)+(\phi_{T}^{s}(x_{3})-\phi_{T}^{t}(x_{3}))r_{T}x_{3}\right]\right.\nonumber\\ &&\cdot \left. 
h_{enf}(x_{1},1-x_{2},x_{3},b_{1},b_{2})E_{enf}(t_{c})\right.\nonumber\\ &&\left.+\left[\phi_{T}(x_{3})(x_{2}+x_{3})-(\phi_{T}^{s}(x_{3})+\phi_{T}^{t}(x_{3}))r_{T}x_{3}\right]\right.\nonumber\\ && \cdot \left. h_{enf}(x_{1},x_{2},x_{3},b_{1},b_{2})E_{enf}(t_{d})\right\},\label{eq22} \end{eqnarray} \begin{eqnarray} \mathcal {M}_{eT}^{LR}&=&-\frac{32}{3}C_{F}\pi r_{0}m_{B}^{4}\int_{0}^{1}\,dx_{1}dx_{2}dx_{3}\int_{0}^{\infty}b_{1}db_{1}b_{2}db_{2}\,\phi_{B}(x_{1},b_{1})\nonumber\\ &&\times\left\{\left[\phi_{P}^{T}(x_{2})(\phi_{T}(x_{3})(x_{2}-1)+r_{T}(\phi_{T}^{t}(x_3)(-x_{2} +x_{3}+1)+\phi_{T}^{s}(x_{3})(x_{2}+x_{3}-1)))\right.\right.\nonumber\\ &&\left.\left.+\phi_{P}^{P}(\phi_{T}(x_{2}-1)+r_{T}(\phi_{T}^{s}(x_{3})(x_{2}-x_{3}-1)-\phi_{T}^{t}(x_{3})(x_{2}+x_{3}-1)))\right]\right.\nonumber\\ &&\left.\cdot h_{enf}(x_{1},1-x_{2},x_{3},b_{1},b_{2})E_{enf}(t_{c})\right.\nonumber\\ &&\left.+\left[\phi_{P}^{P}(x_{2})(\phi_{T}(x_{3})x_{2}+r_{T}(\phi_{T}^{t}(x_{3})(x_{3}-x_{2})+\phi_{T}^{s}(x_{3})(x_{2}+x_{3})))\right.\right.\nonumber\\ &&\left.\left.+\phi_{P}^{T}(r_{T}(\phi_{T}^{s}(x_{3})(x_{3}-x_{2})+\phi_{T}^{t}(x_{3})(x_{2}+x_{3}))-\phi_{T}(x_{3})x_{2})\right]\right.\nonumber\\ &&\cdot\left.h_{enf}(x_{1},x_{2},x_{3},b_{1},b_{2})E_{enf}(t_{d})\right\}, \end{eqnarray} \begin{eqnarray} \mathcal {M}_{eT}^{SP}&=&-\frac{32}{3}C_{F}\pi m_{B}^{4}\int_{0}^{1}\,dx_{1}dx_{2}dx_{3}\int_{0}^{\infty}\,b_{1}db_{1}b_{2}db_{2}\,\phi_{B}(x_{1},b_{1})\phi_{P}^{A}(x_{2})\nonumber\\ &&\times \left\{\left[\phi_{T}(x_{3})(x_{2}-x_{3}-1)+(\phi_{T}^{s}(x_{3})+\phi_{T}^{t}(x_{3}))r_{T}x_{3}\right]\right.\nonumber\\ &&\cdot \left. h_{enf}(x_{1},1-x_{2},x_{3},b_{1},b_{2})E_{enf}(t_{c})\right.\nonumber\\ &&\left.+ \left[\phi_{T}(x_{3})x_{2}+(\phi_{T}^{t}-\phi_{T}^{s})r_{T}x_{3}\right]\right.\nonumber\\ && \cdot \left. h_{enf}(x_{1},x_{2},x_{3},b_{1},b_{2})E_{enf}(t_{d})\right\}. \end{eqnarray} The factorizable annihilation diagrams Fig.(1e) and (1f), the three kinds of decay amplitudes for these two diagrams are \begin{eqnarray} \mathcal {A}_{aT}^{LL}&=&8\sqrt{\frac{2}{3}}C_{F}f_{B}\pi m_{B}^{4}\int_{0}^{1}\,dx_{2}dx_{3}\int_{0}^{\infty}\,b_{2}db_{2}b_{3}db_{3}\nonumber\\ &&\times\left\{\left[2\phi_{P}^{P}(x_{2})r_{T}r_{0}(\phi_{T}^{s}(x_{3})(x_{3}-2)-\phi_{T}^{t}(x_{3}) x_{3})-\phi_{P}^{A}(x_{2})\phi_{T}(x_{3})(x_{3}-1)\right]\right.\nonumber\\ &&\left.\cdot h_{af}(x_{2},1-x_{3},b_{2},b_{3})E_{af}(t_{e})\right.\nonumber\\ &&\left.+\left[2\phi_{T}^{s}(x_{3})r_{T}r_{0}(\phi_{P}^{T}(x_{2})(x_{2}-1)+\phi_{P}^{P}(x_{2})(x_{2}+1)) -\phi_{P}^{A}(x_{2})\phi_{T}(x_{3})x_{2}\right]\right.\nonumber\\ &&\cdot\left.h_{af}(1-x_{3},x_{2},b_{3},b_{2})E_{af}(t_{f})\right\}, \end{eqnarray} \begin{eqnarray} \mathcal {A}_{aT}^{LR}=-\mathcal {A}_{aT}^{LL}, \end{eqnarray} \begin{eqnarray} \mathcal {A}_{aT}^{SP}&=&16\sqrt{\frac{2}{3}}C_{F}f_{B}\pi m_{B}^{4}\int_{0}^{1}\,dx_{2}dx_{3}\int_{0}^{\infty}\,b_{2}db_{2}b_{3}db_{3}\nonumber\\ &&\times\left\{\left[2\phi_{P}^{P}(x_{2})\phi_{T}(x_{3})r_{0}+\phi_{P}^{A}(x_{2})(\phi_{T}^{s}(x_{3}) +\phi_{T}^{t}(x_{3}))r_{T}(x_{3}-1)\right]\right.\nonumber\\ &&\cdot\left.h_{af}(x_{2},1-x_{3},b_{2},b_{3})E_{af}(t_{e})\right.\nonumber\\ &&\left.-\left[x_{2}r_{0}\phi_{T}(x_{3})(\phi_{P}^{T}(x_{2})-\phi_{P}^{P}(x_{2}))+2\phi_{P}^{A}(x_{2})\phi_{T}^{s}(x_{3})r_{T}\right]\right.\nonumber\\ &&\cdot\left.h_{af}(1-x_{3},x_{2},b_{3},b_{2})E_{af}(t_{f})\right\}. \end{eqnarray} For the non-factorizable annihilation diagrams Fig.(1g) and (1h), all three wave functions are involved in the amplitudes. 
The integration of $b_{3}$ can be performed by the $\delta$ function $\delta(b_{2}-b_{3})$. The expressions of contributions of these two diagrams are \begin{eqnarray} \mathcal {M}_{aT}^{LL}&=&\frac{32}{3}C_{F}\pi m_{B}^{4}\int_{0}^{1}\,dx_{1}dx_{2}dx_{3}\int_{0}^{\infty}\,b_{1}db_{1}b_{2}db_{2}\,\phi_{B}(x_{1},b_{1})\nonumber\\ &&\times \left\{\left[-r_{T}r_{0}\left(\phi_{P}^{T}(x_{2})(\phi_{T}^{s}(x_{3})(x_{2}-1+x_{3})-\phi_{T}^{t}(x_{3}) (x_{2}-1-x_{3}))\right.\right.\right.\nonumber\\ &&\left.\left.\left.+\phi_{P}^{P}(x_{2})(\phi_{T}^{t}(x_{3})(1-x_{2}-x_{3})+\phi_{T}^{s}(x_{3})(x_{2}-x_{3}+3)) \right)+\phi_{P}^{A}(x_{2})\phi_{T}(x_{3})x_{2}\right]\right.\nonumber\\ &&\left.\cdot h_{anf1}(x_{1},x_{2},x_{3},b_{1},b_{2})E_{anf}(t_{g})\right.\nonumber\\ &&\left.+\left[r_{T}r_{0}\left(\phi_{P}^{P}(x_{2})(\phi_{T}^{s}(x_{3})(x_{2}-x_{3}+1)+\phi_{T}^{t}(x_{3})(x_{2}+x_{3}-1))\right.\right.\right.\nonumber\\ &&\left.\left.\left.-\phi_{P}^{T}(x_{2})(\phi_{T}^{t}(x_{3})(x_{2}-x_{3}+1)+\phi_{T}^{s}(x_{3})(x_{2}+x_{3}-1))\right)\right.\right.\nonumber\\ &&\left.\left.+\phi_{P}^{A}(x_{2})\phi_{T}(x_{3})(x_{3}-1)\right]h_{anf2}(x_{1},x_{2},x_{3},b_{1},b_{2})E_{anf}(t_{h})\right\}, \end{eqnarray} \begin{eqnarray} \mathcal {M}_{aT}^{LR}&=&\frac{32}{3}C_{F}\pi m_{B}^{4} \int_{0}^{1}\,dx_{1}dx_{2}dx_{3}\int_{0}^{\infty}b_{1}db_{1}b_{2}db_{2}\,\phi_{B}(x_{1},b_{1})\nonumber\\ &&\times \left\{\left[r_{T}\phi_{P}^{A}(x_{2})(\phi_{T}^{s}(x_{3})-\phi_{T}^{t}(x_{3}))(x_{3}+1)-r_{0} \phi_{T}(x_{3})(\phi_{P}^{P}(x_{2})+\phi_{P}^{T}(x_{2}))\right.\right.\nonumber\\ &&\cdot\left.\left.(x_{2}-2)\right]h_{anf1}(x_{1},x_{2},x_{3},b_{1},b_{2})E_{anf}(t_{g})\right.\nonumber\\ &&\left.+\left[r_{0}\phi_{T}(x_{3})x_{2}(\phi_{P}^{P}(x_{2})+\phi_{P}^{T}(x_{2}))-r_{T}\phi_{P}^{A}(x_{2}) (\phi_{T}^{s}(x_{3})-\phi_{T}^{t}(x_{3}))(x_{3}-1)\right]\right.\nonumber\\ &&\cdot\left.h_{anf2}(x_{1},x_{2},x_{3},b_{1},b_{2})E_{anf}(t_{h})\right\}, \end{eqnarray} \begin{eqnarray} \mathcal {M}_{aT}^{SP}&=&\frac{32}{3}C_{F}\pi m_{B}^{4}\int_{0}^{1}\,dx_{1}dx_{2}dx_{3}\int_{0}^{\infty}\,b_{1}db_{1}b_{2}db_{2}\,\phi_{B}(x_{1},b_{1})\nonumber\\ &&\times\left\{\left[-r_{T}r_{0}\phi_{P}^{T}(x_{2})(\phi_{T}^{s}(x_{3})(x_{2}-1+x_{3})+\phi_{T}^{t}(x_{3})(x_{2}-1-x_{3}))\right.\right.\nonumber\\ &&\left.\left.+r_{0}r_{T}\phi_{P}^{P}(x_{2})(\phi_{T}^{s}(x_{3})(x_{2}-x_{3}+3)+\phi_{T}^{t}(x_{3})(x_{2}+x_{3}-1))\right.\right.\nonumber\\ &&\left.\left.+\phi_{P}^{A}(x_{2})\phi_{T}(x_{3})(x_{3}-1)\right] h_{anf1}(x_{1},x_{2},x_{3},b_{1},b_{2})E_{anf}(t_{g})\right.\nonumber\\ &&\left.+\left[-r_{0}r_{T}\phi_{P}^{P}(x_{2})(\phi_{T}^{s}(x_{3})(x_{2}+1-x_{3})+\phi_{T}^{t}(x_{3})(1-x_{2}-x_{3}))\right.\right.\nonumber\\ &&\left.\left.-r_{0}r_{T}\phi_{P}^{T}(x_{2})(\phi_{T}^{t}(x_{3})(-x_{2}+x_{3}-1)+\phi_{T}^{s}(x_{3})(x_{2}+x_{3}-1))\right.\right.\nonumber\\ &&\left.\left.+\phi_{P}^{A}(x_{2})\phi_{T}(x_{3})x_{2}\right]h_{anf2} (x_{1},x_{2},x_{3},b_{1},b_{2})E_{anf}(t_{h})\right\}.\label{eq30} \end{eqnarray} \begin{figure}[!htbh] \begin{center} \vspace{-1cm} \centerline{\epsfxsize=10 cm \epsffile{TPdiagram.ps}} \vspace{-3cm} \caption{Diagrams contributing to the $B\,\rightarrow\,PT$ decays, with a tensor meson emitted.} \label{fig:lodiagram2} \end{center} \end{figure} If we exchange the pseudoscalar meson and the tensor meson in Fig.1, the result will be different. 
Because a tensor meson cannot be produced through the (V $\pm$ A) or tensor currents, the factorizable emission diagrams do not contribute to the amplitudes of B decays with a tensor meson emitted \cite{zheng1,zheng2}. Therefore, only the six diagrams shown in Fig.2 remain. The individual decay amplitudes for these diagrams can be easily deduced from eqs.(\ref{eq22}-\ref{eq30}) by exchanging the wave functions of the pseudoscalar and the tensor meson, \begin{eqnarray} \phi_P^A(x) \rightarrow -\phi_T(x), & \phi_P^P(x) \rightarrow \phi_T^s(x), & \phi_P^T(x) \rightarrow \phi_T^t(x),\nonumber\\ \phi_T(x) \rightarrow -\phi_P^A(x), & \phi_T^s(x) \rightarrow -\phi_P^P(x), & \phi_T^t(x) \rightarrow -\phi_P^T(x),\\ r_T \rightarrow r_0, & r_0 \rightarrow r_T. \nonumber \end{eqnarray} In addition, we must add a minus sign to ${M}_{eT}^{SP}$ after applying the above replacement. For the 39 $B\rightarrow PT$ decay channels, not all the effective operators contribute to each decay mode. We list the effective operators contributing to the individual decay channels in Appendix B for reference. \section{NUMERICAL RESULTS AND DISCUSSIONS} For the numerical analysis, we need various input parameters, such as decay constants, CKM elements, and the wave functions, which are given in Appendix A. The CP-averaged branching ratios for those $B\rightarrow PT$ decays with $\Delta S =1$, together with the Isgur-Scora-Grinstein-Wise II (ISGW2) model \cite{prd67014002} and the QCDF results \cite{zheng2}, are shown in Table I. The experimental data are taken from Ref.\cite{jpg37075021} and Ref.\cite{zheng60}. Similarly, the branching ratios of $B\,\rightarrow\,PT$ decays with $\Delta S =0$ calculated in the PQCD approach are shown in Table II. For illustration, we classify these decays into categories according to their dominant topologies, indicated by the symbols T (color-allowed tree), C (color-suppressed tree), P (penguin emission) and PA (penguin annihilation). Although we also include the W annihilation and W exchange diagram contributions, none of these channels is dominated by these two topologies. We estimate three kinds of theoretical uncertainties in our calculation: the first errors are caused by the uncertainties of the decay constants of the tensor mesons; the second errors are from the decay constant ($f_{B}\,=\,(0.21\,\pm\,0.02)$ GeV) of the B meson and the shape parameter ($\omega_{B}\,=\,(0.5\,\pm\,0.05)$ GeV) in the B meson wave function \cite{zheng1,zheng2,wang7,wang13}; the third errors are estimated from the unknown next-to-leading-order QCD corrections and the power corrections, characterized by the choice of $\Lambda_{QCD}\,=\,(0.25\,\pm\,0.05)$ GeV and the variations of the factorization scales shown in Appendix B, respectively. The dominant errors for the branching ratio calculations come from the non-perturbative wave functions, so the individual decay modes carry large theoretical uncertainties. However, these uncertainties can be reduced by taking ratios of decay channels.
For example, simple relations among some decay channels are derived in the limit of SU(3) flavor symmetry \begin{eqnarray} &&\mathcal {B}(B^{0}\rightarrow K_{2}^{*0}\pi^{0})\,\sim\,\mathcal {B}(B^{+}\rightarrow K_{2}^{*+}\pi^{0})\,\sim \,\frac{1}{2}\,\mathcal {B}(B^{0}\rightarrow K_{2}^{*+}\pi^{-})\nonumber\\ &&\sim\,\frac{1}{2}\,\mathcal {B}(B^{+}\rightarrow K_{2}^{*0}\pi^{+}),\nonumber\\ &&\frac{\mathcal {B}(B^{0}\rightarrow a_{2}^{-}K^{+})}{\mathcal {B}(B^{+}\rightarrow a_{2}^{0}K^{+})}= \frac{\mathcal {B}(B^{+}\rightarrow a_{2}^{+}K^{0})}{\mathcal {B}(B^{0}\rightarrow a_{2}^{0}K^{0})}=2. \label{guanxi} \end{eqnarray} One can find that our results basically agree with the relation given above within the errors. \begin{table} \centering \caption{The PQCD predictions of CP-averaged branching ratios (in units of $10^{-6}$) for $B\rightarrow PT$ decays with $\Delta S =1$, together with Isgur-Scora-Grinstein-Wise II (ISGW2) model \cite{prd67014002} and QCDF results \cite{zheng2}. The experimental data are from Ref.\cite{jpg37075021} and Ref.\cite{zheng60}.} \vspace{0.1cm} \begin{tabular}[t]{l!{\;\;\;\;}c!{\;\;\;}c!{\;\;\;\;\;\;\;}c!{\;\;\;\;\;\;\;}c!{\;\;\;\;\;\;\;}r} \hline\hline \vspace{0.3cm} \multirow{1}{*}{Decay Modes} &\multirow{2}{*}{class}& \multirow{2}{*}{This Work} &\multirow{2}{*}{ISGW2 [24]}&\multirow{2}{*}{QCDF [4]} &\multirow{2}{*}{Expt.}\\ \hline \vspace{0.4cm} \multirow{1}{*}{$B^{+}\rightarrow K_{2}^{*0}\pi^{+}$}& \multirow{1}{*}{PA} & \multirow{1}{*}{$0.9^{\;+0.2\;+0.2\;+0.3}_{\;-0.2\;-0.2\;-0.2}$} &\multirow{1}{*}{...}&\multirow{1}{*}{$3.1_{-3.1}^{+8.3}$} & \multirow{1}{*}{$5.6_{-1.4}^{+2.2}$}\\ \vspace{0.1cm} $B^{+}\rightarrow K_{2}^{*+}\pi^{0}$ &PA& $0.4^{\;+0.1\;+0.1\;+0.1}_{\;-0.0\;-0.1\;-0.1}$&0.090 &$2.2_{-1.9}^{+4.7}$&...\\ \vspace{0.1cm} $B^{+}\rightarrow a_{2}^{0}K^{+}$&T,PA&$2.1^{\;+0.7\;+0.6\;+0.6}_{\;-0.6\;-0.5\;-0.5}$&0.31&$4.9_{-4.2}^{+8.4}$&$<45$\\ \vspace{0.1cm} $B^{+}\rightarrow a_{2}^{+}K^{0}$&PA&$3.1^{\;+0.9\;+0.9\;+1.1}_{\;-0.8\;-0.8\;-0.9}$&0.011&$8.4_{-7.2}^{+16.1}$&...\\ \vspace{0.1cm} $B^{+}\rightarrow f_{2}K^{+}$&T,PA,P&$11.8^{\;+2.7\;+3.2\;+3.0}_{\;-2.4\;-2.8\;-2.7}$&0.34&$3.8_{-3.0}^{+7.8}$&$1.06_{-0.29}^{+0.28}$\\ \vspace{0.1cm} $B^{+}\rightarrow f^{\prime}K^{+}$&P,PA&$3.8^{\;+0.4\;+0.9\;+1.0}_{\;-0.4\;-0.8\;-0.8}$&0.004&$4.0_{-3.6}^{+7.4}$&$<7.7$\\ \vspace{0.1cm} $B^{+}\rightarrow K_{2}^{*+}\eta$&PA,P&$0.8^{\;+0.2\;+0.3\;+0.3}_{\;-0.2\;-0.2\;-0.3}$&0.031&$6.8_{-8.7}^{+13.5}$&$9.1\pm3.0$\\ \vspace{0.1cm} $B^{+}\rightarrow K_{2}^{*+}\eta^{\prime}$&PA,P&$12.7^{\;+3.7\;+4.5\;+4.0}_{\;-3.2\;-3.5\;-3.5}$&1.41&$12.1_{-12.1}^{+20.7}$&$28.0_{-5.0}^{+5.3}$\\ \vspace{0.1cm} $B^{0}\rightarrow K_{2}^{*+}\pi^{-}$&PA&$1.0^{\;+0.2\;+0.2\;+0.3}_{\;-0.2\;-0.2\;-0.2}$&...&$3.3_{-3.2}^{+8.5}$&$<6.3$\\ \vspace{0.1cm} $B^{0}\rightarrow K_{2}^{*0}\pi^{0}$&PA&$0.6^{\;+0.2\;+0.1\;+0.2}_{\;-0.1\;-0.1\;-0.1}$&0.084&$1.2_{-1.3}^{+4.3}$&$<4.0$\\ \vspace{0.1cm} $B^{0}\rightarrow a_{2}^{-}K^{+}$&T,PA&$5.0^{\;+1.6\;+1.4\;+1.3}_{\;-1.4\;-1.1\;-1.0}$&0.58&$9.7_{-8.1}^{+17.2}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow a_{2}^{0}K^{0}$&PA&$2.0^{\;+0.5\;+0.4\;+0.6}_{\;-0.5\;-0.4\;-0.5}$&0.005&$4.2_{-3.5}^{+8.3}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow f_{2}K^{0}$&PA,P&$9.2^{\;+2.0\;+2.5\;+2.6}_{\;-1.8\;-2.1\;-2.2}$&0.005&$3.4_{-3.1}^{+8.5}$&$2.7_{-1.2}^{+1.3}$\\ \vspace{0.1cm} $B^{0}\rightarrow f_{2}^{\prime}K^{0}$&P,PA&$3.7^{\;+0.3\;+0.7\;+0.9}_{\;-0.4\;-0.8\;-0.9}$&$0.00007$&$3.8_{-3.5}^{+7.3}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow 
K_{2}^{*0}\eta$&PA,P&$1.0^{\;+0.2\;+0.3\;+0.3}_{\;-0.2\;-0.2\;-0.3}$&0.029&$6.6_{-8.7}^{+13.5}$&$9.6\pm2.1$\\ \vspace{0.1cm} $B^{0}\rightarrow K_{2}^{*0}\eta^{\prime}$&PA,P&$11.6^{\;+3.6\;+4.2\;+3.8}_{\;-2.9\;-3.1\;-3.1}$&1.30&$12.4_{-12.4}^{+21.3}$&$13.7_{-3.1}^{+3.2}$\\ \hline\hline \end{tabular}\label{S} \end{table} \begin{table}[!h] \centering \caption{The PQCD predictions of CP-averaged branching ratios (in units of $10^{-7}$) for $B\rightarrow PT$ decays with $\Delta S =0$, together with Isgur-Scora-Grinstein-Wise II (ISGW2) model \cite{prd67014002} and QCDF results \cite{zheng2}. The experimental data are from Ref.\cite{jpg37075021} and Ref.\cite{zheng60}. } \vspace{0.3cm} \begin{tabular}{l!{\;\;\;\;}c!{\;\;\;}c!{\;\;\;\;\;\;}c!{\;\;\;\;\;}c!{\;\;\;\;\;}r} \hline\hline \vspace{0.3cm} \multirow{1}{*}{Decay Modes} &\multirow{1}{*}{class}& \multirow{1}{*}{This Work} &\multirow{1}{*}{ISGW2 [24]}& \multirow{1}{*}{QCDF [4]} &\multirow{1}{*}{Expt.}\\ \hline \vspace{0.4cm} \multirow{1}{*}{$B^{+}\rightarrow a_{2}^{0}\pi^{+}$}&\multirow{1}{*}{T} & \multirow{1}{*}{$29.1^{\;+12.8\;+14.2\;+3.1}_{\;-10.6\;-10.4\;-2.8}$}&\multirow{1}{*}{26.02}& \multirow{1}{*}{$30_{-12}^{+14}$} & \multirow{1}{*}{...}\\ \vspace{0.1cm} $B^{+}\rightarrow a_{2}^{+}\pi^{0}$&T,C& $0.3^{\;+0.0\;+0.1\;+0.0}_{\;-0.0\;-0.1\;-0.0}$&0.01&$2.4_{-3.1}^{+4.9}$& ...\\ \vspace{0.1cm} $B^{+}\rightarrow a_{2}^{+}\eta$&C,PA,P &$1.0^{\;+0.3\;+0.4\;+0.4}_{\;-0.3\;-0.3\;-0.3}$&2.94&$1.1_{-1.1}^{+2.8}$&...\\ \vspace{0.1cm} $B^{+}\rightarrow a_{2}^{+}\eta^{\prime}$&C,PA,P&$3.5^{\;+1.4\;+1.6\;+1.1}_{\;-1.0\;-1.1\;-0.8}$&13.1&$1.1_{-1.2}^{+4.7}$&...\\ \vspace{0.1cm} $B^{+}\rightarrow f_{2}\pi^{+}$&T&$42.5^{\;+18.9\;+18.9\;+4.2}_{\;-15.4\;-13.9\;-3.9}$&28.74&$27_{-12}^{+14}$&$15.7_{-4.9}^{+6.9}$\\ \vspace{0.1cm} $B^{+}\rightarrow f_{2}^{\prime}\pi^{+}$&T&$1.2^{\;+0.3\;+0.4\;+0.1}_{\;-0.2\;-0.3\;-0.1}$&0.37&$0.09_{-0.09}^{+0.24}$&...\\ \vspace{0.1cm} $B^{+}\rightarrow K_{2}^{*+}\bar{K}^{0}$&PA,P&$1.2^{\;+0.2\;+0.2\;+0.3}_{\;-0.2\;-0.2\;-0.3}$&$4.0\times 10^{-4}$&$4.4_{-4.1}^{+7.4}$&...\\ \vspace{0.1cm} $B^{+}\rightarrow \bar{K}_{2}^{*0}K^{+}$&PA&$0.8^{\;+0.1\;+0.2\;+0.3}_{\;-0.1\;-0.2\;-0.2}$&...&$1.2_{-1.2}^{+5.2}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow a_{2}^{-}\pi^{+}$&T&$98.9^{\;+35.1\;+42.6\;+5.8}_{\;-29.9\;-32.0\;-9.7}$&48.82&$52_{-18}^{+18}$&$<3000$\\ \vspace{0.1cm} $B^{0}\rightarrow a_{2}^{+}\pi^{-}$&T,PA&$2.7^{\;+0.5\;+0.8\;+0.4}_{\;-0.3\;-0.5\;-0.3}$&...&$2.1_{-1.7}^{+4.3}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow a_{2}^{0}\pi^{0}$&C &$4.6^{\;+1.2\;+1.6\;+0.9}_{\;-1.0\;-1.2\;-0.7}$&0.003&$2.4_{-1.9}^{+4.2}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow a_{2}^{0}\eta$&C,PA,P&$0.6^{\;+0.1\;+0.2\;+0.1}_{\;-0.1\;-0.1\;-0.1}$&1.38&$0.6_{-0.5}^{+1.6}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow a_{2}^{0}\eta^{\prime}$&C,PA,P&$1.8^{\;+0.6\;+0.7\;+0.4}_{\;-0.5\;-0.6\;-0.4}$&6.15&$0.5_{-0.4}^{+2.2}$&..\\ \vspace{0.1cm} $B^{0}\rightarrow f_{2}\pi^{0}$&C&$2.8^{\;+0.7\;+0.7\;+0.6}_{\;-0.6\;-0.6\;-0.4}$&0.003&$1.5_{-1.4}^{+4.2}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow f_{2}^{\prime}\pi^{0}$&P&$0.2^{\;+0.0\;+0.1\;+0.0}_{\;-0.0\;-0.1\;-0.0}$&$4.0\times 10^{-5}$&$0.05_{-0.05}^{+0.12}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow f_{2}\eta$&C,P,PA&$2.6^{\;+0.7\;+0.8\;+0.7}_{\;-0.5\;-0.6\;-0.6}$&1.52&$1.7_{-1.2}^{+2.3}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow f_{2}\eta^{\prime}$&C,PA,P&$3.3^{\;+1.0\;+1.1\;+0.9}_{\;-0.8\;-0.9\;-0.9}$&6.8&$1.3_{-1.3}^{+2.2}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow 
f_{2}^{\prime}\eta$&PA,P&$0.08^{\;+0.03\;+0.03\;+0.01}_{\;-0.02\;-0.03\;-0.02}$&0.02&$0.02_{-0.03}^{+0.06}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow f_{2}^{\prime}\eta^{\prime}$&PA,P&$0.09^{\;+0.00\;+0.02\;+0.02}_{\;-0.00\;-0.02\;-0.03}$&0.09&$0.08_{-0.05}^{+0.08}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow K_{2}^{*+}K^{-}$&PA&$0.16^{\;+0.02\;+0.03\;+0.03}_{\;-0.03\;-0.04\;-0.03}$&...&$0.3_{-0.2}^{+0.7}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow K_{2}^{*-}K^{+}$&PA&$0.9^{\;+0.1\;+0.3\;+0.2}_{\;-0.1\;-0.1\;-0.2}$&...&$1.3_{-1.0}^{+1.6}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow K_{2}^{*0}\bar{K}^{0}$&P,PA&$1.5^{\;+0.3\;+0.3\;+0.5}_{\;-0.3\;-0.3\;-0.4}$&$3.0\times 10^{-4}$&$5.4_{-4.9}^{+8.8}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow \bar{K}_{2}^{*0}K^{0}$&P,PA&$0.8^{\;+0.1\;+0.2\;+0.3}_{\;-0.1\;-0.1\;-0.2}$&...&$2.2_{-2.2}^{+5.4}$&...\\ \hline\hline \end{tabular}\label{S2} \end{table} Among the considered $B\rightarrow PT$ decays, the PQCD predictions for the CP-averaged branching ratios vary in the range of $10^{-5}$ to $10^{-8}$. From the numerical results, we can see that the predicted branching ratios of penguin-dominated $B\rightarrow PT$ decays in PQCD are larger than those of naive factorization \cite{prd67014002,jpg36095004,arxiv1010.3077} by one or two orders of magnitude, but are close to the QCDF predictions \cite{zheng2}. For the leading tree-dominated modes such as $a_{2}^{-}\pi^{+}$ and $f_{2}\pi^{+}$, the predicted results in PQCD are larger than those obtained in QCDF \cite{zheng2} but smaller than those of Ref.\cite{arxiv1010.3077}. The reason is that the B to tensor form factor in this work is larger than that used in Ref.\cite{zheng2}. For $a_{2}^{0}\pi^{+}$, however, our result is not larger but essentially the same as that of Ref.\cite{zheng2}, owing to destructive interference from other topologies. It is worth remarking that $B^{0}\,\rightarrow \,K_{2}^{*+}K^{-}$ and $B^{0}\,\rightarrow \,K_{2}^{*-}K^{+}$ are pure annihilation modes, which can be calculated perturbatively in the PQCD approach. The decays with a tensor meson emitted are prohibited in the naive factorization approach because a tensor meson cannot be produced from the local (V$\,\pm\,$A) or tensor currents \cite{zheng1,zheng2}. In order to predict these decay channels, it is necessary to go beyond the naive factorization framework and estimate the contributions of the nonfactorizable and annihilation diagrams. Fortunately, in the PQCD approach, the contributions of the nonfactorizable diagrams with a tensor meson emitted (Fig.2, c and d) are sizable and larger than those of the nonfactorizable diagrams emitting a pseudoscalar meson (Fig.1, c and d). The reason is that the asymmetry of the light-cone distribution amplitudes of the tensor meson makes the contributions from Fig.2(c) and (d) reinforce each other, while the opposite holds for Fig.1(c) and (d). One can see from Table II that for $B\rightarrow a_{2}\pi$ decays, the $a_{2}^{+}\pi^{-}$ and $a_{2}^{+}\pi^{0}$ modes are highly suppressed relative to $a_{2}^{-}\pi^{+}$ and $a_{2}^{0}\pi^{+}$, respectively. This is a natural consequence of factorization, as the tensor meson cannot be created from the (V-A) current. For $B\rightarrow a_{2}^{0}\pi^{+}(a_{2}^{-}\pi^{+})$, the dominant contribution is from the color-allowed factorizable emission diagrams, while for $B\rightarrow a_{2}^{+}\pi^{0}(a_{2}^{+}\pi^{-})$, this large contribution is prohibited for the above reason.
Therefore, for $B^{+}\rightarrow a_{2}^{+}\pi^{0}$ the remaining factorizable emission diagrams are color-suppressed, and for $B^{0}\rightarrow a_{2}^{+}\pi^{-}$ the dominant contribution is from the nonfactorizable emission diagrams, which are suppressed by the Wilson coefficient $C_{1}$. From table~\ref{tab:suanfu1}, one can see that the factorizable emission contributions for the $B^{+}\rightarrow K_{2}^{*0}\pi^{+}$ and $B^{0}\rightarrow K_{2}^{*+}\pi^{-}$ decays vanish because the emitted meson in these diagrams is the tensor meson. The contributions from nonfactorizable diagrams are suppressed by the small Wilson coefficients $C_{3}$ and $C_{5}$. Therefore the dominant contribution comes from the penguin annihilation diagrams. From table~\ref{S}, one can see that our predictions for the $B^{+}\rightarrow K_{2}^{*0}\pi^{+}$ and $B^{0}\rightarrow K_{2}^{*+}\pi^{-}$ decays are much smaller than those from Ref.\cite{zheng2}. The reason is that in Ref.\cite{zheng2} there is an extremely large contribution from the quark loop diagrams. In the PQCD approach, the quark loop correction is of next-to-leading order and is not considered in this work. In the $B\rightarrow f_{2}K$ decays, we have tree diagram contributions as well as penguin emission diagram contributions, which makes the branching ratios much larger than those of the $B^{+}\rightarrow K_{2}^{*0}\pi^{+}$ and $B^{0}\rightarrow K_{2}^{*+}\pi^{-}$ decays. The current experimental measurements still have very large error bars. We expect future experiments to provide more information on these decays. For $B\rightarrow K_{2}^{*}\eta^{(\prime)}$ and $B\rightarrow a_{2}\eta^{(\prime)}$ decays, one finds that $\mathcal {B}(B\rightarrow K_{2}^{*}\eta^{\prime})\,\gg\,\mathcal {B}(B\rightarrow K_{2}^{*} \eta)$ and $\mathcal {B}(B\rightarrow a_{2}\eta)\,\ll\,\mathcal {B}(B\rightarrow a_{2}\eta^{\prime})$. For these modes, both $\eta_{q}$ and $\eta_{s}$ contribute, but the relative sign of the $\eta_{s}$ state with respect to the $\eta_{q}$ is negative for the $\eta$ and positive for the $\eta^{\prime}$, which leads to a destructive interference between $\eta_{q}$ and $\eta_{s}$ for $B\rightarrow K_{2}^{*}\eta$ and $B\rightarrow a_{2}\eta$, but a constructive interference for $B\rightarrow K_{2}^{*}\eta^{\prime}$ and $B\rightarrow a_{2}\eta^{\prime}$. This is very similar to the situation for $B\rightarrow K\eta^{(\prime)}$ and $B_{c}\rightarrow K^{+}\eta^{(\prime)}$ decays \cite{liuxin,liu57}. \begin{table}[!ht] \centering \caption{The PQCD predictions of direct CP asymmetries ($\%$) for $B\rightarrow PT$ decays with $\Delta S =1$, compared with the QCDF results \cite{zheng2}.
The experimental data are from Ref.\cite{jpg37075021}.} \vspace{0.3cm} \begin{tabular}{l!{\;\;\;\;\;\;\;\;\;}c!{\;\;\;\;\;\;\;\;\;}c!{\;\;\;\;\;\;\;\;\;}r} \hline\hline\vspace{0.3cm} \multirow{1}{*}{Decay Modes} & \multirow{1}{*}{This Work} & \multirow{1}{*}{QCDF [4]} &\multirow{1}{*}{Expt.}\\ \hline \vspace{0.4cm} \multirow{1}{*}{$B^{+}\rightarrow K_{2}^{*0}\pi^{+}$} & \multirow{1}{*}{$-5.5^{\;+0.3\;+2.6\;+1.6}_{\;-0.4\;-0.0\;-1.2}$} & \multirow{1}{*}{$1.6_{-1.8}^{+2.2}$} & \multirow{1}{*}{$5^{+29}_{-24}$}\\ \vspace{0.1cm} $B^{+}\rightarrow K_{2}^{*+}\pi^{0}$ & $-6.9^{\;+2.6\;+1.6\;+3.7}_{\;-2.9\;-1.1\;-3.6}$ &$0.2_{-14.8}^{+17.8}$&...\\ \vspace{0.1cm} $B^{+}\rightarrow a_{2}^{0}K^{+}$&$-52.9^{\;+2.0\;+2.1\;+8.6}_{\;-2.2\;-0.4\;-10.1}$&$27.1_{-35.0}^{+33.3}$&$...$\\ \vspace{0.1cm} $B^{+}\rightarrow a_{2}^{+}K^{0}$&$2.9^{\;+0.1\;+0.1\;+0.5}_{\;-0.1\;-0.2\;-0.8}$&$-0.6_{-0.8}^{+0.4}$&...\\ \vspace{0.1cm} $B^{+}\rightarrow f_{2}K^{+}$&$-24.6^{\;+1.5\;+2.4\;+4.6}_{\;-1.0\;-2.6\;-5.9}$&$-39.5_{-25.5}^{+49.4}$&$-68.0_{-17}^{+19}$\\ \vspace{0.1cm} $B^{+}\rightarrow f^{\prime}K^{+}$&$8.6^{\;+1.5\;+1.4\;+1.5}_{\;-1.6\;-1.0\;-1.8}$&$-0.6_{-6.0}^{+4.3}$&$...$\\ \vspace{0.1cm} $B^{+}\rightarrow K_{2}^{*+}\eta$&$-5.4^{\;+1.1\;+2.2\;+2.3}_{\;-0.6\;-2.0\;-1.3}$&$1.5_{-5.6}^{+7.4}$&$-45\pm30$\\ \vspace{0.1cm} $B^{+}\rightarrow K_{2}^{*+}\eta^{\prime}$&$2.0^{\;+0.1\;+0.1\;+0.9}_{\;-0.1\;-0.3\;-0.5}$&$-1.7_{-3.9}^{+3.2}$&$...$\\ \vspace{0.1cm} $B^{0}\rightarrow K_{2}^{*+}\pi^{-}$&$-17.5^{\;+1.4\;+1.6\;+2.7}_{\;-1.6\;-1.8\;-1.3}$&$1.7_{-5.2}^{+4.2}$&$...$\\ \vspace{0.1cm} $B^{0}\rightarrow K_{2}^{*0}\pi^{0}$&$-10.7^{\;+0.1\;+1.7\;+1.9}_{\;-0.0\;-1.8\;-1.8}$&$7.1_{-24.1}^{+23.5}$&$...$\\ \vspace{0.1cm} $B^{0}\rightarrow a_{2}^{-}K^{+}$&$-48.3^{\;+1.9\;+1.3\;+7.1}_{\;-2.4\;-0.3\;-9.9}$&$-21.5_{-35.0}^{+28.9}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow a_{2}^{0}K^{0}$&$1.9^{\;+0.5\;+0.4\;+0.6}_{\;-0.5\;-0.4\;-0.5}$&$6.7_{-6.9}^{+6.5}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow f_{2}K^{0}$&$1.2^{\;+0.3\;+0.5\;+0.2}_{\;-0.2\;-0.5\;-0.1}$&$-7.3_{-7.9}^{+8.4}$&$...$\\ \vspace{0.1cm} $B^{0}\rightarrow f_{2}^{\prime}K^{0}$&$-1.0^{\;+0.1\;+0.0\;+0.0}_{\;-0.3\;-0.1\;-0.1}$&$0.8_{-0.7}^{+1.2}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow K_{2}^{*0}\eta$&$-5.0^{\;+0.5\;+0.2\;+1.7}_{\;-0.4\;-0.1\;-1.7}$&$3.2_{-4.8}^{+16.5}$&$-7.0\pm19.0$\\ \vspace{0.1cm} $B^{0}\rightarrow K_{2}^{*0}\eta^{\prime}$&$0.7^{\;+0.1\;+0.1\;+0.3}_{\;-0.0\;-0.0\;-0.2}$&$-2.2_{-4.0}^{+3.3}$&$...$\\ \hline\hline \end{tabular}\label{san} \end{table} We also give the direct CP asymmetry parameters for those $B\rightarrow PT$ decays with $\Delta S =1$, together with the QCDF results \cite{zheng2} shown in table~\ref{san}. The experimental data are taken from Ref.\cite{jpg37075021}. Similarly, the direct CP asymmetry parameters of $B\,\rightarrow\,PT$ decays with $\Delta S =0$ calculated in the PQCD approach are shown in Table~\ref{S3}. The origin of theoretical uncertainties shown in these two tables are the same as those of the branching ratios in table~\ref{S} and \ref{S2}. However, the dominant uncertainty here is the third one from the unknown higher order QCD corrections, since the hadronic parameter uncertainty mostly cancels due to the fact that the CP asymmetry is defined as the ratio of branching ratios. It is easy to see that some channels have very large direct CP asymmetries. But many of them have small branching ratios to make them difficult for experiments. 
We recommend the experimenters to search for the direct CP asymmetry in the channels like $B^+\to f_2 K^+$, $B^0 \to a_2^- K^+$, $B^+\to a_2^+\eta'$ and $B^+ \to f_2 \pi^+$, for they have both large branching ratios and direct CP asymmetry parameters. In fact, there are already some experimental measurements for the CP asymmetries shown in table~\ref{san} and \ref{S3}. Although the error bars are still large, we are happy to see that all these measured entries have the same sign as our theoretical calculations. This may imply that our approach gives the dominant strong phase in these channels. The decays $B^{0}(\bar B^{0}) \rightarrow a_{2}^{-}\pi^{+}/ a_{2}^{+}\pi^{-}$, $B^{0}(\bar B^{0}) \rightarrow K_{2}^{*+}K^{-}/ K_{2}^{*-}K^{+}$ and $B^{0}(\bar B^{0}) \rightarrow K_{2}^{*0} \bar K^{0}/ \bar K_{2}^{*0}K^{0}$ have a very complicated CP pattern through the $B^0\bar B^0$ mixing. Four decay amplitudes are involved for each group of decays with 5 CP parameters to measure. We refer the readers to the similar situation for $B^{0}(\bar B^{0}) \rightarrow \rho^{-}\pi^{+}/ \rho^{+}\pi^{-}$ decays \cite{pirho}. \begin{table}[!ht] \centering \caption{The PQCD predictions of direct CP asymmetries($\%$) for $B\rightarrow PT$ decays with $\Delta S =0$, comparison with the QCDF results \cite{zheng2}. The experimental data are from Ref.\cite{jpg37075021}.} \vspace{0.3cm} \begin{tabular}{l!{\;\;\;\;\;\;\;\;\;}c!{\;\;\;\;\;\;\;\;\;}c!{\;\;\;\;\;\;\;\;\;}r} \hline\hline \vspace{0.3cm} \multirow{1}{*}{Decay Modes} & \multirow{1}{*}{This Work} & \multirow{1}{*}{QCDF [4]} &\multirow{1}{*}{Expt.}\\ \hline \vspace{0.4cm} \multirow{1}{*}{$B^{+}\rightarrow a_{2}^{0}\pi^{+}$} & \multirow{1}{*}{$-0.6^{\;+0.1\;+0.4\;+0.2}_{\;-0.1\;-0.5\;-0.6}$}& \multirow{1}{*}{$9.6_{-46.6}^{+47.9}$} & \multirow{1}{*}{...}\\ \vspace{0.1cm} $B^{+}\rightarrow a_{2}^{+}\pi^{0}$ & $-5.8^{\;+0.1\;+21.3\;+75.8}_{\;-0.1\;-12.4\;-44.7}$&$-24.3_{-75.7}^{+124.3}$& ...\\ \vspace{0.1cm} $B^{+}\rightarrow a_{2}^{+}\eta$ &$-90.9^{\;+8.4\;+9.6\;+12.3}_{\;-3.7\;-1.0\;-5.1}$&$27.6_{-127.6}^{+73.4}$&...\\ \vspace{0.1cm} $B^{+}\rightarrow a_{2}^{+}\eta^{\prime}$&$-44.5^{\;+0.8\;+1.3\;+6.8}_{\;-0.5\;-0.2\;-8.8}$&$31.3_{-131.3}^{+61.3}$&...\\ \vspace{0.1cm} $B^{+}\rightarrow f_{2}\pi^{+}$&$27.6^{\;+3.4\;+1.0\;+8.9}_{\;-2.5\;-1.4\;-7.1}$&$60.2_{-72.3}^{+27.1}$&$41\pm30$\\ \vspace{0.1cm} $B^{+}\rightarrow f_{2}^{\prime}\pi^{+}$&$0.03^{\;+0.1\;+9.6\;+13.8}_{\;-0.1\;-8.9\;-15.8}$&$0.0$&...\\ \vspace{0.1cm} $B^{+}\rightarrow K_{2}^{*+}\bar{K}^{0}$&$-43.7^{\;+1.3\;+1.8\;+16.4}_{\;-2.0\;-0.5\;-12.4}$&$30.3_{-33.7}^{+51.2}$&...\\ \vspace{0.1cm} $B^{+}\rightarrow \bar{K}_{2}^{*0}K^{+}$&$49.5^{\;+4.7\;+3.1\;+23.5}_{\;-4.2\;-4.8\;-13.1}$&$-0.26_{-0.27}^{+0.23}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow a_{2}^{0}\pi^{0}$ &$53.5^{\;+4.7\;+6.9\;+4.2}_{\;-3.8\;-6.9\;-3.5}$&$-86.2_{-26.4}^{+128.9}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow a_{2}^{0}\eta$ &$-17.7^{\;+17.7\;+11.2\;+21.8}_{\;-15.7\;-22.6\;-24.5}$&$-76.7_{-19.2}^{+100}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow a_{2}^{0}\eta^{\prime}$&$-59.9^{\;+0.6\;+10.0\;+7.2}_{\;-0.0\;-6.0\;-7.0}$&$-66.0_{-41.1}^{+154}$&..\\ \vspace{0.1cm} $B^{0}\rightarrow f_{2}\pi^{0}$&$-9.8^{\;+13.9\;+2.8\;+11.8}_{\;-13.2\;-7.5\;-10.8}$&$-37.2_{-85.5}^{+103.8}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow f_{2}^{\prime}\pi^{0}$&$-0.7^{\;+2.7\;+1.0\;+6.8}_{\;-2.5\;-1.8\;-6.4}$&$0.0$&...\\ \vspace{0.1cm} $B^{0}\rightarrow f_{2}\eta$&$-42.5^{\;+1.7\;+1.4\;+9.1}_{\;-1.1\;-1.8\;-9.8}$&$69.7_{-102.7}^{+25.7}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow 
f_{2}\eta^{\prime}$&$-0.05^{\;+0.1\;+5.0\;+5.3}_{\;-0.6\;-5.1\;-5.3}$&$82.3_{-94.8}^{+22.9}$&...\\ \vspace{0.1cm} $B^{0}\rightarrow f_{2}^{\prime}\eta$&$70.9^{\;+0.0\;+11.0\;+11.0}_{\;-2.7\;-15.2\;-12.3}$&$0.0$&...\\ \vspace{0.1cm} $B^{0}\rightarrow f_{2}^{\prime}\eta^{\prime}$&$45.5^{\;+3.2\;+13.5\;+18.5}_{\;-6.8\;-12.1\;-18.8}$&$0.0$&...\\ \hline\hline \end{tabular}\label{S3} \end{table} For the decays involving $f_{2}^{(\prime)}$ in the final states, we have taken the $f_{2}-f_{2}^{\prime}$ mixing (Eq.(\ref{ffpmix})) into account, while in Ref.\cite{zheng2} $f_{2}$ is considered as a $(u\bar{u}+d\bar{d})/\sqrt{2}$ state and $f_{2}^{\prime}$ as a pure $s\bar{s}$ state. Although the mixing angle is small, the interference between $f_{2}^q$ and $f_{2}^{s}$ can bring some remarkable changes. For example, the branching ratio of $B^{+}\rightarrow f_{2}^{\prime}\pi^{+}$ is larger than the prediction in Ref.\cite{zheng2}. This can be understood as follows: because of the contribution from the color-allowed factorizable emission diagrams, although suppressed by the mixing angle, the contribution of the $f_{2}^{q}$ term is at the same level as that of the $f_{2}^{s}$ term. Due to the enhancement from the $f_{2}^{q}$ term, the branching ratio becomes larger than the prediction without taking the mixing into account. The mixing can also bring a remarkable change to the direct CP asymmetry. For $B\rightarrow f_{2}^{\prime}\eta^{(\prime)}$, the direct CP asymmetries are zero \cite{zheng2} when $f_{2}^{\prime}$ is a pure $s\bar{s}$ state. Since the direct CP asymmetry is proportional to the interference between the tree and penguin contributions \cite{laoban}, it should indeed be zero because there are no contributions of penguin operators when $f_{2}^{\prime}$ is a pure $s\bar{s}$ state. When taking the mixing into account, the $f_{2}^{q}$ term can provide penguin contributions, so the direct CP asymmetries are no longer zero in this work. For $B\rightarrow f_{2}\,\eta^{(\prime)}$ and $f_{2}^{\prime}\,\eta^{(\prime)}$ decays, the relevant final state mesons contain the same components $\frac{1}{\sqrt{2}}(u\bar{u}+d\bar{d})$ and $s\bar{s}$, and therefore they have similar branching ratios. The small differences among their branching ratios mainly come from the different mixing coefficients, i.e., $\cos\phi$, $\sin\phi$, $\cos\theta$ and $\sin\theta$ (see Appendix A). \section{SUMMARY} We studied the charmless hadronic $B\rightarrow PT$ decays by employing the PQCD approach based on $k_{T}$ factorization. In addition to the usual factorizable contributions, we also calculated the non-factorizable and annihilation type diagrams. From our numerical calculation and phenomenological analysis, we found the following results: \begin{itemize} \item The factorizable amplitude with a tensor meson emitted vanishes because a tensor meson cannot be created from the $(V\pm A)$ or $(S\pm P)$ currents. For these decay modes, the contributions of the non-factorizable and annihilation diagrams are important. For example, $B^{+}\rightarrow K_{2}^{*0}\pi^{+}$ and $B^{0}\rightarrow K_{2}^{*+}\pi^{-}$ have sizable branching ratios because of the contributions of the penguin annihilation diagrams.
\item For penguin-dominated $B\rightarrow PT$ decays, because of the dynamical penguin enhancement, the predicted branching ratios are larger by one or two orders of magnitude than those predicted in the naive factorization approach but close to the QCD factorization predictions in Ref.\cite{zheng2}. \item For tree-dominated decay modes, the branching ratios predicted by PQCD are usually very small except for the $a_{2}^{0}\pi^{+}$, $a_{2}^{-}\pi^{+}$ and $f_{2}\pi^{+}$ modes, which have branching ratios of order $10^{-6}$ or even larger. This basically agrees with the situation in Ref.\cite{zheng2} and Ref.\cite{arxiv1010.3077}. \item For $B\rightarrow K_{2}^{*}\eta^{(\prime)}$ decays, we find $\mathcal {B}(B\rightarrow K_{2}^{*}\eta^{\prime})\,\gg\,\mathcal {B}(B\rightarrow K_{2}^{*}\eta)$. This large difference can be explained by the destructive and constructive interference between $\eta_{q}$ and $\eta_{s}$. \item From our calculation, we find that the interference between $f_{2}^{q}$ and $f_{2}^{s}$ can have remarkable effects on the branching ratios and direct CP asymmetries of some decays involving an $f_{2}^{\prime}$ meson. \item We predict large direct CP asymmetries for some of the $B\to PT$ decays that are accessible to near-future experiments. \end{itemize} \section*{Acknowledgment} We are very grateful to Xin Liu and Wei Wang for helpful discussions. This work is partially supported by the National Science Foundation of China under Grant No. 11075168. \begin{appendix} \section{Input Parameters And Distribution Amplitudes} The masses and decay constants of tensor mesons are summarized in Table V. \begin{table}[!htbh] \centering \caption{The masses and decay constants of light tensor mesons} \vspace{0.3cm} \begin{tabular}{c!{\;\;\;\;\;\;}c!{\;\;\;\;\;\;}c} \hline\hline Tensor (mass in MeV) & $f_{T}$(MeV) & $f_{T}^{\perp}$(MeV) \\ \hline $f_{2}(1270)$&$102\,\pm\,6$&$117\,\pm\,25$\\ $f_{2}^{\prime}(1525)$&$126\,\pm\,12$&$65\,\pm\,12$\\ $a_{2}(1320)$&$107\,\pm\,6$&$105\,\pm\,21$\\ $K_{2}^{*}(1430)$&$118\,\pm\,5$&$77\,\pm\,14$\\ \hline\hline \end{tabular}\label{S4} \end{table} Other input parameters (in units of GeV) are \begin{eqnarray} &&\Lambda_{\overline{MS}}^{f=4}=0.25,\;m_{b}=4.8,\;f_{\pi}=0.131,\;f_{K}=0.16,\nonumber\\ &&\;m_{0}^{\pi}=1.4,\;m_{0}^{K}=1.6,\;m_{0}^{\eta_{q}}=1.07,\;m_{0}^{\eta_{s}}=1.92. \end{eqnarray} We adopt the Wolfenstein parameterization for the CKM matrix, with $A=0.808$, $\lambda=0.2253$, $\bar{\rho}=0.132$ and $\bar{\eta}=0.341$ \cite{jpg37075021}. The twist-2 (twist-3) pseudoscalar meson distribution amplitudes $\phi_{P}^{A} \,(\phi_{P}^{P},\phi_{P}^{T})$ ($P=\pi,K$) can be parameterized as \cite{zpc48239,jhep01010}, \begin{eqnarray} &&\phi_{\pi}^{A}(x)\,=\,\frac{3f_{\pi}}{\sqrt{6}}x(1-x)\left[1\,+\,0.44C_{2}^{3/2}(t)\,+\,0.25C_{4}^{3/2}(t)\right],\\ &&\phi_{\pi}^{P}(x)\,=\,\frac{f_{\pi}}{2\sqrt{6}}\left[1\,+\,0.43C_{2}^{1/2}(t)\,+\,0.09C_{4}^{1/2}(t)\right],\\ &&\phi_{\pi}^{T}(x)\,=\,-\frac{f_{\pi}}{2\sqrt{6}}\left[C_{1}^{1/2}(t)\,+\,0.55C_{3}^{1/2}(t)\right],\\ &&\phi_{K}^{A}(x)\,=\,\frac{3f_{K}}{\sqrt{6}}x(1-x)\left[1\,+\,0.17C_{1}^{3/2}(t)\,+\,0.2C_{2}^{3/2}(t)\right],\\ &&\phi_{K}^{P}(x)\,=\,\frac{f_{K}}{2\sqrt{6}}\left[1\,+\,0.24C_{2}^{1/2}(t)\,-\,0.11C_{4}^{1/2}(t)\right],\\ &&\phi_{K}^{T}(x)\,=\,-\frac{f_{K}}{2\sqrt{6}}\left[C_{1}^{1/2}(t)\,+\,0.35C_{3}^{1/2}(t)\right].
\end{eqnarray} The Gegenbauer polynomials can be defined by \begin{eqnarray} &&C_{1}^{1/2}(t)\,=\,t,\;\;\;\;\;\;C_{1}^{3/2}(t)\,=\,3t,\nonumber\\ &&C_{2}^{1/2}(t)\,=\,\frac{1}{2}(3t^{2}-1),\;\;\;C_{2}^{3/2}(t)\,=\,\frac{3}{2} (5t^{2}-1),\nonumber\\ &&C_{3}^{1/2}(t)\,=\,\frac{1}{2}t(5t^{2}-3),\nonumber\\ &&C_{4}^{1/2}(t)\,=\,\frac{1}{8}(35t^{4}-30t^{2}+3),\,\;C_{4}^{3/2}(t)\,=\,\frac{15}{8}(21t^{4}-14t^{2}+1), \end{eqnarray} where $t\,=\,2x-1$. In the above distribution amplitudes for kaon, the momentum fraction $x$ is carried by the "s" quark. For the $\eta\,-\,\eta^{\prime}$ system, we use the quark-flavor basis \cite{liu48}, with $\eta_{q}$ and $\eta_{s}$ defined by \begin{eqnarray} \eta_{q}\,=\,\frac{1}{\sqrt{2}}(u\bar{u}+d\bar{d}),\;\;\;\eta_{s}\,=\,s\bar{s}. \end{eqnarray} The physical states $\eta$ and $\eta^{\prime}$ can be given by \begin{eqnarray} \left( \begin{array}{c}\vspace{0.1cm} \eta\\ \eta^{\prime} \end{array} \right) \,=\, \left(\begin{array}{cc}\vspace{0.1cm} \cos\phi & -\sin\phi\\ \sin\phi & \cos\phi \end{array} \right) \left(\begin{array}{c}\vspace{0.1cm} \eta_{q}\\ \eta_{s} \end{array} \right) \end{eqnarray} The decay constants are related to $f_{q}$ and $f_{s}$ via the same mixing matrix, \begin{eqnarray} \left( \begin{array}{cc} \vspace{0.1cm} f_{\eta}^{q}& f_{\eta}^{s}\\ f_{\eta^{\prime}}^{q}& f_{\eta^{\prime}}^{s} \end{array} \right) \,=\, \left(\begin{array}{cc} \vspace{0.1cm} \cos\phi & -\sin\phi\\ \sin\phi & \cos\phi \end{array} \right) \left(\begin{array}{cc} \vspace{0.1cm} f_{q}& 0\\ 0&f_{s} \end{array} \right). \end{eqnarray} The three input parameters $f_{q}$, $f_{s}$ and $\phi$ have been extracted from related experiments \cite{liu48,liu49}: \begin{eqnarray} f_{q}\,=\,(1.07\,\pm\,0.02)f_{\pi},\;\,f_{s}\,=\,(1.34\,\pm\,0.06)f_{\pi}, \;\,\phi\,=\,39.3^{\circ}\,\pm1.0^{\circ}. \end{eqnarray} Like the $\eta\,-\,\eta^{\prime}$ mixing, the isoscalar tensor states $f_{2}(1270)$ and $f_{2}^{\prime}(1525)$ also have a mixing and can be given by \begin{eqnarray} &&f_{2}\,=\,f_{2}^{q}\cos\theta\,+\, f_{2}^{s}\sin\theta,\nonumber\\ &&f_{2}^{\prime}\,=\,f_{2}^{q}\sin\theta\,-\,f_{2}^{s}\cos\theta, \label{ffpmix} \end{eqnarray} where $f_{2}^{q}\,=\,\frac{1}{\sqrt{2}}(u\bar{u}\,+\,d\bar{d})$, $f_{2}^{s}\,=\,s\bar{s}$ and the mixing angle $\theta\,=\,5.8^{\circ}$\cite{zheng3}, $7.8^{\circ}$\cite{jpg27807} or $(9\,\pm\,1)^{\circ}$ \cite{jpg37075021}. \section{Amplitude And Related Hard Functions} For each individual decay channel, various effective operators contribute to the decay amplitude. We summarize the number of effective operators contributing to every channel in Table~\ref{tab:suanfu1} and \ref{tab:suanfu2} for the $\Delta S=1$ and $\Delta S=0$, respectively, with \begin{eqnarray} a_{1}=\frac{C_{1}}{3}+C_{2},&&\;a_{2}=C_{1}+\frac{C_{2}}{3},\nonumber\\ a_{j}=C_{j}+\frac{C_{j+1}}{3}\,(j=3,5,7,9),&&\;a_{n}=\frac{C_{n-1}}{3}+C_{n}\,(n=4,6,8,10). 
\end{eqnarray} \begin{table}[!htbh] \caption{The effective operators contributing to each decay mode with $\Delta S=1$} \label{tab:suanfu1} \begin{tabular}{l|c|c|c|c} \toprule[2pt] channels &\multicolumn{2}{c|}{emission}& \multicolumn{2}{c}{annihilation} \\ \hline & factorizable & non-factorizable & factorizable & non-factorizable\\ \hline $B^{0}\rightarrow K_{2}^{*+}\pi^{-}$&--&$C_{1},C_{3},C_{5},C_{7},C_{9}$&$a_{4},a_{6},a_{8},a_{10}$&$C_{3},C_{5},C_{7},C_{9}$\\ $B^{0}\rightarrow a_{2}^{-}K^{+}$&$a_{1},a_{4},a_{6},a_{8},a_{10}$&$C_{1},C_{3},C_{5},C_{7},C_{9}$&$a_{4},a_{6},a_{8},a_{10}$&$C_{3},C_{5},C_{7},C_{9}$\\ \multirow{2}{*}{$B^{0}\rightarrow a_{2}^{0}K^{0}$}&\multirow{2}{*}{$a_{4},a_{6},a_{8},a_{10}$}&$C_{2},C_{3},C_{5},C_{7},C_{8},$& \multirow{2}{*}{$a_{4},a_{6},a_{8},a_{10}$}&\multirow{2}{*}{$C_{3},C_{5},C_{7},C_{9}$}\\ &&$C_{9},C_{10}$&& \\ \multirow{2}{*}{$B^{0}\rightarrow K_{2}^{*0}\pi^{0}$}&\multirow{2}{*}{$a_{2},a_{7},a_{9}$}&$C_{2},C_{3},C_{5},C_{7},C_{8},$& \multirow{2}{*}{$a_{4},a_{6},a_{8},a_{10}$}&\multirow{2}{*}{$C_{3},C_{5},C_{7},C_{9}$}\\ &&$C_{9},C_{10}$&&\\ \multirow{2}{*}{$B^{0}\rightarrow f_{2}^{q}K^{0}$}&\multirow{2}{*}{$a_{4},a_{6},a_{8},a_{10}$}&$C_{2},C_{3},C_{4},C_{5},C_{6},$& \multirow{2}{*}{$a_{4},a_{6},a_{8},a_{10}$}&\multirow{2}{*}{$C_{3},C_{5},C_{7},C_{9}$}\\ &&$C_{7},C_{8},C_{9},C_{10}$&&\\ \multirow{2}{*}{$B^{0}\rightarrow \eta^{q}K_{2}^{*0}$}&\multirow{2}{*}{$a_{2},a_{3},a_{5},a_{7},a_{9}$}&$C_{2},C_{3},C_{4},C_{5},C_{6}, $&\multirow{2}{*}{$a_{4},a_{6},a_{8},a_{10}$}&\multirow{2}{*}{$C_{3},C_{5},C_{7},C_{9}$}\\ &&$C_{7},C_{8},C_{9},C_{10}$&&\\ \multirow{2}{*}{$B^{0}\rightarrow f_{2}^{s}K^{0}$}&\multirow{2}{*}{--}&$C_{3},C_{4},C_{5},C_{6},C_{7},$&\multirow{2}{*}{$a_{4},a_{6},a_{8},a_{10}$}&\multirow{2}{*}{$C_{3},C_{5},C_{7},C_{9}$}\\ &&$C_{8},C_{9},C_{10}$&&\\ \multirow{2}{*}{$B^{0}\rightarrow \eta^{s}K_{2}^{*0}$}&$a_{3},a_{4},a_{5},a_{6},$&$C_{3},C_{4},C_{5},C_{6},C_{7},$ &\multirow{2}{*}{$a_{4},a_{6},a_{8},a_{10}$}&\multirow{2}{*}{$C_{3},C_{5},C_{7},C_{9}$}\\ &$a_{7},a_{8},a_{9},a_{10}$&$C_{8},C_{9},C_{10}$&&\\ $B^{+}\rightarrow K_{2}^{*0}\pi^{+}$&--&$C_{3},C_{5},C_{7},C_{9}$&$a_{1},a_{4},a_{6},a_{8},a_{10}$& $C_{1},C_{3},C_{5},C_{7},C_{9}$\\ $B^{+}\rightarrow K^{0}a_{2}^{+}$&$a_{4},a_{6},a_{8},a_{10}$&$C_{3},C_{5},C_{7},C_{9}$&$a_{1},a_{4},a_{6},a_{8},a_{10}$&$C_{1},C_{3},C_{5},C_{7},C_{9}$\\ \multirow{2}{*}{$B^{+}\rightarrow K_{2}^{*+}\pi^{0}$}&\multirow{2}{*}{$a_{2},a_{7},a_{9}$}&$C_{1},C_{2},C_{3},C_{5},C_{7},$&$a_{1},a_{4},a_{6},a_{8},a_{10}$&$C_{1},C_{3},C_{5},C_{7},C_{9}$\\ &&$C_{8},C_{9},C_{10}$&&\\ \multirow{2}{*}{$B^{+}\rightarrow K^{+}a_{2}^{0}$}&\multirow{2}{*}{$a_{1},a_{4},a_{6},a_{8},a_{10}$}&$C_{1},C_{2},C_{3},C_{5},C_{7},$&$a_{1},a_{4},a_{6},a_{8},a_{10}$&$C_{1},C_{3},C_{5},C_{7},C_{9}$\\ &&$C_{8},C_{9},C_{10}$&&\\ \multirow{2}{*}{$B^{+}\rightarrow K^{+}f_{2}^{q}$}&\multirow{2}{*}{$a_{1},a_{4},a_{6},a_{8},a_{10}$}&$C_{1},C_{2},C_{3},C_{4},C_{5},$&$a_{1},a_{4},a_{6},a_{8},a_{10}$&$C_{1},C_{3},C_{5},C_{7},C_{9}$\\ &&$C_{6},C_{7},C_{8},C_{9},C_{10}$&&\\ \multirow{2}{*}{$B^{+}\rightarrow K_{2}^{*+}\eta^{q}$}&\multirow{2}{*}{$a_{2},a_{3},a_{5},a_{7},a_{9}$}&$C_{1},C_{2},C_{3},C_{4},C_{5},$&$a_{1},a_{4},a_{6},a_{8},a_{10}$&$C_{1},C_{3},C_{5},C_{7},C_{9}$\\ &&$C_{6},C_{7},C_{8},C_{9},C_{10}$&&\\ \multirow{2}{*}{$B^{+}\rightarrow f_{2}^{s}K^{+}$}&\multirow{2}{*}{--}&$C_{3},C_{4},C_{5},C_{6},$&$a_{1},a_{4},a_{6},a_{8},a_{10}$&$C_{1},C_{3},C_{5},C_{7},C_{9}$\\ &&$C_{7},C_{8},C_{9},C_{10}$&&\\ \multirow{2}{*}{$B^{+}\rightarrow 
\eta^{s}K_{2}^{*+}$}&$a_{3},a_{4},a_{5},a_{6},$&$C_{3},C_{4},C_{5},C_{6},$&$a_{1},a_{4},a_{6},a_{8},a_{10}$&$C_{1},C_{3},C_{5},C_{7},C_{9}$\\ &$a_{7},a_{8},a_{9},a_{10}$&$C_{7},C_{8},C_{9},C_{10}$&&\\ \bottomrule[2pt] \end{tabular} \end{table} \begin{table}[!htbh] \caption{The effective operators contributing to each decay mode with $\Delta S=0$} \label{tab:suanfu2} \begin{tabular}{l|c|c|c|c} \toprule[2pt] &\multicolumn{2}{c|}{emission}& \multicolumn{2}{c}{annihilation} \\ \hline channels & factorizable & non-factorizable & factorizable & non-factorizable\\ \hline \multirow{2}{*}{$B^{0}\rightarrow f_{2}^{q}\pi^{0}$}&$a_{2},a_{4},a_{6},a_{7},$&$C_{2},C_{3},C_{4},C_{5},C_{6},$&$a_{2},a_{4},a_{6},a_{7},$&$C_{2},C_{3},C_{5},C_{7},$\\ &$a_{8},a_{9},a_{10}$&$C_{7},C_{8},C_{9},C_{10}$&$a_{8},a_{9},a_{10}$&$C_{8},C_{9},C_{10}$\\ \multirow{2}{*}{$B^{0}\rightarrow \eta^{q}a_{2}^{0}$}&$a_{2},a_{3},a_{4},a_{5},a_{6},$&$C_{2},C_{3},C_{4},C_{5},C_{6},$&$a_{2},a_{4},a_{6},a_{7},$&$C_{2},C_{3},C_{5},C_{7},$\\ &$a_{7},a_{8},a_{9},a_{10}$&$C_{7},C_{8},C_{9},C_{10}$&$a_{8},a_{9},a_{10}$&$C_{8},C_{9},C_{10}$\\ \multirow{2}{*}{$B^{0}\rightarrow a_{2}^{-}\pi^{+}$}&\multirow{2}{*}{$a_{1},a_{4},a_{6},a_{8},a_{10}$}&\multirow{2}{*}{$C_{1}, C_{3},C_{5},C_{7},C_{9}$}&$a_{2},a_{3},a_{4},a_{5},a_{6},$&$C_{2},C_{3},C_{4},C_{5},C_{6},$\\ &&&$a_{7},_{8},a_{9},a_{10}$&$C_{7},C_{8},C_{9},C_{10}$\\ \multirow{2}{*}{$B^{0}\rightarrow \pi^{-}a_{2}^{+}$}&\multirow{2}{*}{--}&\multirow{2}{*}{$C_{1},C_{3},C_{5},C_{7},C_{9}$}&$a_{2},a_{3},a_{4},a_{5},a_{6},$&$C_{2},C_{3},C_{4},C_{5},C_{6},$\\ &&&$a_{7},_{8},a_{9},a_{10}$&$C_{7},C_{8},C_{9},C_{10}$\\ \multirow{2}{*}{$B^{0}\rightarrow a_{2}^{0}\pi^{0}$}&$a_{2},a_{4},a_{6},a_{7},a_{8},$&$C_{2},C_{3},C_{5},C_{7},C_{8},$&$a_{2},a_{3},a_{4},a_{5},a_{6},$&$C_{2},C_{3},C_{4},C_{5},C_{6},$\\ &$a_{9},a_{10}$&$C_{9},C_{10}$&$ a_{7},_{8},a_{9},a_{10}$&$ C_{7},C_{8},C_{9},C_{10}$\\ $B^{0}\rightarrow f_{2}^{s}\pi^{0}$&--&$C_{4},C_{6},C_{8},C_{10}$&--&--\\ $B^{0}\rightarrow \eta^{s}a_{2}^{0}$&$a_{3},a_{5},a_{7},a_{9}$&$C_{4},C_{6},C_{8},C_{10}$&--&--\\ \multirow{2}{*}{$B^{0}\rightarrow f_{2}^{q}\eta^{q}$}&$a_{2},a_{3},a_{4},a_{5},a_{6},$&$C_{2},C_{3},C_{4},C_{5},C_{6},$&$a_{2},a_{3},a_{4},a_{5},a_{6},$&$C_{2},C_{3},C_{4},C_{5},C_{6},$\\ &$ a_{7},a_{8},a_{9},a_{10}$&$C_{7},C_{8},C_{9},C_{10}$&$ a_{7},a_{8},a_{9},a_{10}$&$ C_{7},C_{8},C_{9},C_{10}$\\ $B^{0}\rightarrow f_{2}^{s}\eta^{s}$&--&--&$a_{3},a_{5},a_{7},a_{9}$&$C_{4},C_{6},C_{8},C_{10}$\\ $B^{0}\rightarrow f_{2}^{q}\eta^{s}$&$a_{3},a_{5},a_{7},a_{9}$&$C_{4},C_{6},C_{8},C_{10}$&--&--\\ $B^{0}\rightarrow f_{2}^{s}\eta^{q}$&--&$C_{4},C_{6},C_{8},C_{10}$&--&--\\ $B^{0}\rightarrow K_{2}^{*+}K^{-}$&--&--&$a_{2},a_{3},a_{5},a_{7},a_{9}$&$C_{2},C_{4},C_{6},C_{8},C_{10}$\\ $B^{0}\rightarrow K_{2}^{*-}K^{+}$&--&--&$a_{2},a_{3},a_{5},a_{7},a_{9}$&$C_{2},C_{4},C_{6},C_{8},C_{10}$\\ \multirow{2}{*}{$B^{0}\rightarrow K_{2}^{*0}\bar{K}^{0}$}&\multirow{2}{*}{$a_{4},a_{6},a_{8},a_{10}$}&\multirow{2}{*}{$C_{3},C_{5},C_{7},C_{9}$}&$a_{3},a_{4},a_{5},a_{6},$&$C_{3},C_{4},C_{5},C_{6},$\\ &&&$ a_{7},a_{8},a_{9},a_{10}$&$C_{7},C_{8},C_{9},C_{10}$\\ {$B^{0}\rightarrow \bar{K}_{2}^{*0}K^{0}$}&\multirow{2}{*}{--}&\multirow{2}{*}{$C_{3},C_{5},C_{7},C_{9}$}&$a_{3},a_{4},a_{5},a_{6},$&$C_{3},C_{4},C_{5},C_{6},$\\ &&&$ a_{7},a_{8},a_{9},a_{10}$&$C_{7},C_{8},C_{9},C_{10}$\\ \multirow{2}{*}{$B^{+}\rightarrow a_{2}^{0}\pi^{+}$}&\multirow{2}{*}{$a_{1},a_{4},a_{6},a_{8},a_{10}$}&$C_{1},C_{2},C_{3},C_{5},C_{7},$&\multirow{2}{*}{$a_{1},a_{4}, 
a_{6},a_{8},a_{10}$}&\multirow{2}{*}{$C_{1},C_{3},C_{5},C_{7},C_{9}$}\\ &&$ C_{8},C_{9},C_{10}$&&\\ \multirow{2}{*}{$B^{+}\rightarrow a_{2}^{+}\pi^{0}$}&$a_{2},a_{4},a_{6},a_{7},a_{8},$&$C_{1},C_{2},C_{3},C_{5},C_{7},$&\multirow{2}{*} {$a_{1},a_{4},a_{6},a_{8},a_{10}$}&\multirow{2}{*}{$C_{1},C_{3},C_{5},C_{7},C_{9}$}\\ &$ a_{9},a_{10}$&$ C_{8},C_{9},C_{10}$&&\\ \multirow{2}{*}{$B^{+}\rightarrow f_{2}^{q}\pi^{+}$}&\multirow{2}{*}{$a_{1},a_{4},a_{6},a_{8},a_{10}$}&$C_{1},C_{2},C_{3},C_{4},C_{5},$& \multirow{2}{*}{$a_{1},a_{4},a_{6},a_{8},a_{10}$}&\multirow{2}{*}{$C_{1},C_{3},C_{5},C_{7},C_{9}$}\\ &&$ C_{6},C_{7},C_{8},C_{9},C_{10}$&&\\ \multirow{2}{*}{$B^{+}\rightarrow \eta^{q}a_{2}^{+}$}&$a_{2},a_{3},a_{4},a_{5},a_{6},$&$C_{1},C_{2},C_{3},C_{4},C_{5},$&\multirow{2}{*} {$a_{1},a_{4},a_{6},a_{8},a_{10}$}&\multirow{2}{*}{$C_{1},C_{3},C_{5},C_{7},C_{9}$}\\ &$ a_{7},a_{8},a_{9},a_{10}$&$ C_{6},C_{7},C_{8},C_{9},C_{10}$&&\\ $B^{+}\rightarrow a_{2}^{+}\eta^{s}$&$a_{3},a_{5},a_{7},a_{9}$&$C_{4},C_{6},C_{8},C_{10}$&--&--\\ $B^{+}\rightarrow \pi^{+}f_{2}^{s}$&--&$C_{4},C_{6},C_{8},C_{10}$&--&--\\ $B^{+}\rightarrow K^{+}\bar{K}_{2}^{*0}$&--&$C_{3},C_{5},C_{7},C_{9}$&$a_{1},a_{4},a_{6},a_{8},a_{10}$&$C_{1},C_{3},C_{5},C_{7},C_{9}$\\ $B^{+}\rightarrow K_{2}^{*+}\bar{K}^{0}$&$a_{4},a_{6},a_{8},a_{10}$&$C_{3},C_{5},C_{7},C_{9}$&$a_{1},a_{4},a_{6},a_{8},a_{10}$&$C_{1},C_{3},C_{5},C_{7},C_{9}$\\ \bottomrule[2pt] \end{tabular} \end{table} For factorizable emission diagrams Fig.1. (1a) and (1b), the h function is given by \begin{eqnarray} h_{ef}(x_{1},x_{3},b_{1},b_{3})\,&=&\,K_{0}(\sqrt{x_{1}x_{3}}m_{B}b_{1})\nonumber\\ &&\times\left\{\theta(b_{1}-b_{3})K_{0}\left(\sqrt{x_{3}}m_{B}b_{1}\right)I_{0}\left(\sqrt{x_{3}}m_{B}b_{3}\right)\right.\nonumber\\ &&\left.+\theta(b_{3}-b_{1})K_{0}\left(\sqrt{x_{3}}m_{B}b_{3}\right)I_{0}\left(\sqrt{x_{3}}m_{B}b_{1}\right)\right\}\nonumber\\ &&\times S_{t}(x_{3}). \end{eqnarray} The hard scales \begin{eqnarray} &&t_{a}\,=\,\max\{\sqrt{x_{3}}m_{B},\,1/b_{1},\,1/b_{3}\}\nonumber\\ &&t_{b}\,=\,\max\{\sqrt{x_{1}}m_{B},\,1/b_{1},\,1/b_{3}\}, \end{eqnarray} are the maximum energy scales in each diagrams to cancel the large logarithmic radiative corrections. The $S_{t}$ re-sums the threshold logarithms $\ln^{2}x$ in the hard kernels to all orders, which is given by \cite{prd66094010} \begin{eqnarray} S_{t}(x)\,=\,\frac{2^{1+2c}\Gamma(3/2+c)}{\sqrt{\pi}\Gamma(1+c)}[x(1-x)]^{c}, \end{eqnarray} with $c\,=\,0.3$ in this work. In the nonfactorizable contributions, the $S_{t}(x)$ provides a very small numerical effect to the amplitude \cite{plb555}. Therefore, we omit the $S_{t}(x)$ in those contributions. The evolution factors $E_{ef}(t_{a})$ and $E_{ef}(t_{b})$ in the matrix elements (see section III) are given by \begin{eqnarray} E_{ef}(t)\,=\,\alpha_{s}(t)\exp[-S_{B}(t)-S_{3}(t)]. 
\end{eqnarray} The Sudakov exponents are defined as \begin{eqnarray} S_{B}(t)\,=\,s\left(x_{1}\frac{m_{B}}{\sqrt{2}},b_{1}\right)\,+\,\frac{5}{3}\int_{1/b_{1}}^{t}\frac{d\bar{\mu}}{\bar{\mu}}\gamma_{q}(\alpha_{s}(\bar{\mu})), \end{eqnarray} \begin{eqnarray} S_{2}(t)\,=\,s\left(x_{2}\frac{m_{B}}{\sqrt{2}},b_{2}\right)\,+\,s\left((1-x_{2})\frac{m_{B}}{\sqrt{2}},b_{2}\right) \,+\,2\int_{1/b_{2}}^{t}\frac{d\bar{\mu}}{\bar{\mu}}\gamma_{q}(\alpha_{s}(\bar{\mu})), \end{eqnarray} \begin{eqnarray} S_{3}(t)\,=\,s\left(x_{3}\frac{m_{B}}{\sqrt{2}},b_{3}\right)\,+\,s\left((1-x_{3})\frac{m_{B}}{\sqrt{2}},b_{3}\right) \,+\,2\int_{1/b_{3}}^{t}\frac{d\bar{\mu}}{\bar{\mu}}\gamma_{q}(\alpha_{s}(\bar{\mu})), \end{eqnarray} where the $s(Q,b)$ can be found in the Appendix A in the Ref.\cite{prd63074009}. For the other diagrams, the related functions are summarized as follows: \begin{eqnarray} &&t_{c}\,=\,\max\{\sqrt{x_{1}x_{3}}m_{B},\sqrt{|1-x_{1}-x_{2}|x_{3}}m_{B},1/b_{1},1/b_{2}\},\nonumber\\ &&t_{d}\,=\,\max\{\sqrt{x_{1}x_{3}}m_{B},\sqrt{|x_{1}-x_{2}|x_{3}}m_{B},1/b_{1},1/b_{2}\},\\ &&E_{enf}(t)\,=\,\alpha_{s}(t)\cdot\exp[-S_{B}(t)-S_{2}(t)-S_{3}(t)]\mid\,_{b_{1}\,=\,b_{3}}, \end{eqnarray} \begin{eqnarray} h_{enf}(x_{1},x_{2},x_{3},b_{1},b_{2})\,&=&\,\left[\theta(b_{2}-b_{1})K_{0}(\sqrt{x_{1}x_{3}}m_{B}b_{2})I_{0}(\sqrt{x_{1}x_{3}}m_{B}b_{1})\right.\nonumber\\ &&\left.+\theta(b_{1}-b_{2})K_{0}(\sqrt{x_{1}x_{3}}m_{B}b_{1})I_{0}(\sqrt{x_{1}x_{3}}m_{B}b_{2})\right]\nonumber\\ &&\cdot \left\{\begin{array}{ll} \frac{i\pi}{2}H_{0}^{(1)}\left(\sqrt{(x_{2}-x_{1})x_{3}}m_{B}b_{2}\right),& x_{2}-x_{1}>0;\\ K_{0}\left(\sqrt{(x_{1}-x_{2})x_{3}}m_{B}b_{2}\right),&x_{1}-x_{2}>0. \end{array}\right. \end{eqnarray} \begin{eqnarray} &&t_{e}\,=\,\max\{\sqrt{1-x_{3}}m_{B},1/b_{2},1/b_{3}\},\nonumber\\ &&t_{f}\,=\,\max\{\sqrt{x_{2}}m_{B},1/b_{2},1/b_{3}\},\\ &&E_{af}(t)\,=\,\alpha_{s}(t)\cdot \exp[-S_{2}(t)-S_{3}(t)], \end{eqnarray} \begin{eqnarray} h_{af}(x_{2},x_{3},b_{2},b_{3})\,&=&\,(\frac{i\pi}{2})^{2}H_{0}^{(1)}\left(\sqrt{x_{2}x_{3}}m_{B}b_{2}\right)\nonumber\\ &&\left[\theta(b_{2}-b_{3})H_{0}^{(1)}\left(\sqrt{x_{3}}m_{B}b_{2}\right)J_{0}\left(\sqrt{x_{3}}m_{B}b_{3}\right)\right.\,+\nonumber\\ &&\left.\theta(b_{3}-b_{2})H_{0}^{(1)}\left(\sqrt{x_{3}}m_{B}b_{3}\right)J_{0}\left(\sqrt{x_{3}}m_{B}b_{2}\right)\right]\cdot S_{t}(x_{3}). 
\end{eqnarray} \begin{eqnarray} &&t_{g}\,=\,\max\{\sqrt{x_{2}(1-x_{3})}m_{B},\sqrt{1-(1-x_{1}-x_{2})}m_{B},1/b_{1},1/b_{2}\}\nonumber\\ &&t_{h}\,=\,\max\{\sqrt{x_{2}(1-x_{3})}m_{B},\sqrt{|x_{1}-x_{2}|(1-x_{3})}m_{B},1/b_{1},1/b_{2}\},\\ &&E_{anf}\,=\,\alpha_{s}(t)\cdot \exp[-S_{B}(t)-S_{2}(t)-S_{3}(t)]\mid\,_{b_{2}=b_{3}}, \end{eqnarray} \begin{eqnarray} h_{anf1}(x_{1},x_{2},x_{3},b_{1},b_{2})\,&=&\,\frac{i\pi}{2}\left[\theta(b_{1}-b_{2})H_{0}^{(1)}\left(\sqrt{x_{2} (1-x_{3})}m_{B}b_{1}\right)J_{0}\left(\sqrt{x_{2}(1-x_{3})}m_{B}b_{2}\right)\right.\nonumber\\ &&\left.+\theta(b_{2}-b_{1})H_{0}^{(1)}\left(\sqrt{x_{2}(1-x_{3})}m_{B}b_{2}\right)J_{0}\left(\sqrt{x_{2}(1-x_{3})}m_{B}b_{1}\right)\right]\nonumber\\ &&\times K_{0}\left(\sqrt{1-(1-x_{1}-x_{2})x_{3}}m_{B}b_{1}\right), \end{eqnarray} \begin{eqnarray} h_{anf2}(x_{1},x_{2},x_{3},b_{1},b_{2})\,&=&\,\frac{i\pi}{2}\left[\theta(b_{1}-b_{2})H_{0}^{(1)}\left(\sqrt{x_{2}(1-x_{3})}m_{B}b_{1}\right)J_{0} \left(\sqrt{x_{2}(1-x_{3})}m_{B}b_{2}\right)\right.\nonumber\\ &&\left.+\theta(b_{2}-b_{1})H_{0}^{(1)}\left(\sqrt{x_{2}(1-x_{3})}m_{B}b_{2}\right)J_{0}\left(\sqrt{x_{2}(1-x_{3})}m_{B}b_{1}\right)\right]\nonumber\\ &&\times \left\{\begin{array}{ll} \frac{i\pi}{2}H_{0}^{(1)}\left(\sqrt{(x_{2}-x_{1})(1-x_{3})}m_{B}b_{1}\right),&\;\;\;x_{1}-x_{2}<0,\\ K_{0}\left(\sqrt{(x_{1}-x_{2})(1-x_{3})}m_{B}b_{1}\right),&\;\;\;x_{1}-x_{2}>0, \end{array}\right. \end{eqnarray} where $H_{0}^{(1)}(z)\,=\,J_{0}(z)\,+\,iY_{0}(z)$. \end{appendix}
\section{Introduction \label{sec:intro}} The extended main sequence turnoff (eMSTO) phenomenon---i.e., the notion that the main sequence turnoff (MSTO) in the color--magnitude diagram (CMD) is much wider than the prediction from single stellar population modeling---first discovered in NGC 1846 by \citet{2007MNRAS.379..151M}, is a common feature found in a large fraction of young and intermediate-age ($\leqslant\unit[2]{Gyr}$) massive Large and Small Magellanic Cloud clusters \citep[e.g.][]{2008ApJ...681L..17M, 2009A&A...497..755M, 2011ApJ...737....3G, 2014ApJ...784..157L, 2017MNRAS.467.3628C, 2015MNRAS.450.3750M}. In the past few years, our understanding of the eMSTO phenomenon in these clusters has been enriched and enhanced significantly. Rather than owing to intrinsic age spreads, stellar rotation is believed to play an important role in shaping the morphology of the eMSTO \citep[e.g.,][]{2014Natur.516..367L, 2015MNRAS.453.2070N, 2009MNRAS.398L..11B}. This theory is further reinforced by multiple lines of photometric and spectroscopic evidence. Using narrow-band photometry, \citet{2017MNRAS.465.4795B} detected a large fraction ($\sim 30$--60\%) of Be stars in the MSTO regions of NGC 1850 ($\sim\unit[80]{Myr}$) and NGC 1856 ($\sim\unit[280]{Myr}$), favoring the interpretation that their split main sequences are caused by the effects of fast rotators. Similar mechanisms were later confirmed in NGC 1866 ($\sim\unit[200]{Myr}$) and NGC 1818 ($\sim\unit[40]{Myr}$) through high-resolution spectroscopic surveys, suggesting that these clusters host a blue main sequence composed of slow rotators and a red one composed of fast rotators \citep{2017ApJ...846L...1D, 2018AJ....156..116M}. Aided by \textit{Gaia} Data Release 2 (DR2), we can derive `clean' samples of open clusters in the Milky Way free of contamination by field stars \citep{2018A&A...618A..93C}. The discovery of eMSTOs in Galactic open clusters, similar to those observed in Magellanic Cloud clusters, has opened up a new chapter in our comprehension of the formation of eMSTOs and the evolution of open clusters. \citet{2018ApJ...869..139C} found the existence of eMSTOs in all 12 open clusters younger than $\sim\unit[1.5]{Gyr}$ they analyzed (including NGC 5822), suggesting that eMSTOs are a common feature of intermediate-age clusters in the Milky Way and that they are regulated by the same mechanism as that operating in Magellanic Cloud clusters. The understanding of open clusters, which used to be considered prototypes of a single stellar population, encountered a significant upheaval due to the discovery of eMSTOs in Galactic open clusters. It has been reported that stars exhibit a wide range of rotation rates both in the field \citep{2010ApJ...722..605H} and in open clusters \citep{2006ApJ...648..580H}. A number of studies have explored the effect of stellar rotation on the CMD around the MSTO region \citep{2015ApJ...807...58B, 2015ApJ...807...25B, 2015ApJ...807...24B}. However, the direct connection between the stellar rotation rates of MSTO stars and their loci in the CMD was only revealed recently. \citet{2018MNRAS.480.3739B} used Very Large Telescope/FLAMES spectroscopy of 60 cluster members in NGC 2818, an $\unit[800]{Myr}$ open cluster, to measure the stellar rotational velocities and found that stars exhibiting high rotational velocities are located on the red side of the eMSTO and those rotating slowly on the blue side, in agreement with the prediction of the stellar rotation scenario. 
\citet{2018ApJ...863L..33M} also reported that the multiple sequences they found in the young cluster NGC 6705 correspond to stellar populations with different rotation rates. In this paper, we present a spectroscopic survey of MSTO stars in the nearby ($\sim\unit[760]{pc}$) intermediate-age (\unit[0.9]{Gyr}) open cluster NGC 5822. We find the presence of an eMSTO in this cluster and verify that it is not an artifact caused by differential extinction. The loci of the main sequence stars in the eMSTO region show a clear correlation with the projected rotational velocities, with fast rotators lying on the red side of the eMSTO and slow rotators on the blue side. By comparison with a synthetic cluster within the framework of stellar rotation, we argue that the observed morphology of the eMSTO in the CMD can be properly explained by the model and that stellar rotation is likely the main contributor to the eMSTO morphology in NGC 5822. This article is organized as follows. In Section \ref{sec:data} we present the observations, data reduction procedures, and membership determination. Section \ref{sec:eMSTO} reports our main results, showing a strong correlation between the stellar rotation rates and their loci in the CMD region covered by the eMSTO. A discussion and our conclusions are summarized in Section \ref{sec:discussion}. \section{Data Reduction and Analysis \label{sec:data}} \subsection{Spectroscopic data} We selected spectroscopic candidates in NGC 5822 using the photometric survey in the $UBVI$ and $uvbyCa\mathrm{H}\beta$ systems undertaken by \citet{2011AJ....142..127C}. These broad-band observations were obtained with the Y4KCAM camera mounted on the Cerro Tololo Inter-American Observatory (CTIO) \unit[1]{m} telescope and the intermediate- and narrow-band imaging was carried out using the CTIO \unit[0.9]{m} telescope. Through a cross-correlation with the UCAC3 database \citep{2000AJ....120.2131Z}, these authors derived 136 probable photometric members and 322 probable non-members. We obtained spectroscopic observations with the Southern African Large Telescope \citep[SALT;][]{2006SPIE.6267E..0ZB} equipped with the Robert Stobie Spectrograph (RSS) using its multi-object spectroscopy (MOS) capability over 9 nights from 2018 February 8/9 to 2018 August 14/15 under programs 2017-2-SCI-038 and 2018-1-SCI-006. Six masks were designed to cover 88 stars (including repetitions) in NGC 5822 as part of program 2017-2-SCI-038, with three masks observed for the second time the following semester (see Table \ref{tab:obs}). The PG2300 grating was used with a \unit[1]{arcsec} wide short slit binned $2\times 2$, offering a nominal spectral resolution of $\sim 4000$ with a per-pixel resolution of \unit[0.33]{\AA} at a central wavelength of \unit[4884.4]{\AA}. Regular bias, argon arc lamp, and quartz lamp flat field calibration frames were taken as part of normal SALT operations. We used the PySALT package \citep{2010SPIE.7737E..25C} to perform the primary reduction and wavelength calibration. For all of our samples, we obtained spectra with a signal-to-noise ratio (SNR) per pixel in excess of 200. \begin{deluxetable*}{ccCCCCl}[b!] \tablecaption{Observation Log of the SALT Runs \label{tab:obs}} \tablewidth{0pt} \tablehead{ \colhead{Mask Name} & \colhead{Programme} & \colhead{$\alpha_\mathrm{J2000}$} &\colhead{$\delta_\mathrm{J2000}$} &\colhead{$N$\tablenotemark{a}} & \colhead{Exp. 
Time (s)} & \colhead{Date (UT)} \\ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)}& \colhead{(6)} &\colhead{(7)} } \startdata NGC5822p2 &2017-2-SCI-038& 15^\mathrm{h}04^\mathrm{m}33.43^\mathrm{s}&-54\arcdeg26\arcmin33.16\arcsec & 13 & 600 & 2018 Feb 8 \\ &2018-1-SCI-006 & 15^\mathrm{h}04^\mathrm{m}33.43^\mathrm{s}&-54\arcdeg26\arcmin33.16\arcsec & 13 & 764 & 2018 Aug 3 \\ NGC5822p3 &2017-2-SCI-038& 15^\mathrm{h}04^\mathrm{m}19.43^\mathrm{s}&-54\arcdeg16\arcmin44.52\arcsec & 10 & 600 & 2018 Apr 26 \\ NGC5822p4 &2017-2-SCI-038& 15^\mathrm{h}03^\mathrm{m}11.44^\mathrm{s}&-54\arcdeg16\arcmin22.93\arcsec & 11 & 600 & 2018 Feb 8 \\ &2018-1-SCI-006& 15^\mathrm{h}03^\mathrm{m}11.44^\mathrm{s}&-54\arcdeg16\arcmin22.93\arcsec & 11 & 764 & 2018 Jul 30 \\ NGC5822p6 &2017-2-SCI-038& 15^\mathrm{h}03^\mathrm{m}09.95^\mathrm{s}&-54\arcdeg31\arcmin46.39\arcsec & 9 & 600 & 2018 Feb 11 \\ &2018-1-SCI-006& 15^\mathrm{h}03^\mathrm{m}09.95^\mathrm{s}&-54\arcdeg31\arcmin46.39\arcsec & 9 & 764 & 2018 Aug 14 \\ NGC5822p8 &2017-2-SCI-038& 15^\mathrm{h}05^\mathrm{m}10.13^\mathrm{s}&-54\arcdeg19\arcmin41.91\arcsec & 7 & 600 & 2018 Feb 26 \\ NGC5822p9 &2017-2-SCI-038& 15^\mathrm{h}04^\mathrm{m}36.35^\mathrm{s}&-54\arcdeg34\arcmin51.48\arcsec & 5 & 600 & 2018 Apr 30 \\ \enddata \tablenotetext{a}{Number of science slits in each field.} \end{deluxetable*} \subsection{Membership determination} We exploited \textit{Gaia} DR2 \citep{2016A&A...595A...1G, 2018A&A...616A...1G} to analyze the stellar photometry, proper motions, and parallaxes, and to perform membership determination in the NGC 5822 field. First, we acquired the stellar catalog from the \textit{Gaia} database within 2.5 times the cluster radius \cite[35\arcmin;][]{2002A&A...389..871D}. In the vector-point diagram (VPD) of stellar proper motions, NGC 5822 showed a clear concentration centered at $(\mu_\alpha\cos\theta, \mu_\delta) \approx \unit[(-7.44, -5.52)]{mas\,yr^{-1}}$. The other overdensity located at $(\mu_\alpha\cos\theta, \mu_\delta) \approx \unit[(-3.67, -2.52)]{mas\,yr^{-1}}$ corresponds to a nearby cluster, NGC 5823. Then, we derived the quantity $\mu_\mathrm{R} = \sqrt{(\mu_\alpha\cos\theta-\langle\mu_\alpha\cos\theta\rangle)^2+(\mu_\delta-\langle\mu_\delta\rangle)^2}$ and applied a cut of $\mu_\mathrm{R} < \unit[0.4]{mas\,yr^{-1}}$ to conduct our primary membership selection. Next, we placed a further constraint on the parallaxes by estimating the mean parallax of the proper-motion-selected stars ($\langle\varpi\rangle=\unit[1.18]{mas}$) and adopted stars with parallaxes within $\unit[0.115]{mas}$ of this mean value as cluster members. Note that this approach is slightly different from that adopted by \citet{2018ApJ...869..139C}, in the sense that we adopted a straight cut in both $\mu_\mathrm{R}$ and $\varpi$ rather than applying different selection criteria for stars of different brightnesses. One reason for this approach is that NGC 5822 is sufficiently close that its member stars can be easily separated from field stars using parallaxes (see the top right-hand panel of Fig. \ref{fig:selection}). On the other hand, the limited number of stars in NGC 5822 makes it hard to reliably calculate the corresponding rms for each magnitude bin. Since we did not set out to compile a homogeneous database for multiple clusters, our approach is suitable for our analysis of this single cluster. We present the spatial distribution as well as the CMD of the member stars of NGC 5822, together with all stars in the field, in the bottom panels of Fig.
\ref{fig:selection}. We present the CMD of NGC 5822 color-coded by the stellar classifications based on their loci in Fig. \ref{fig:classification}. Member stars classified as MSTO, MS, and giant stars are marked as green squares, blue triangles, and red diamonds, respectively. Member stars with spectroscopic data are presented using solid markers and field stars with spectroscopic data are shown as gray circles. Following decontamination of the field stars, 24 member stars (21 MSTO and 3 MS stars) were left in our observational sample; 13 member stars were observed a second time. We estimate that the total number of member MSTO stars in this cluster is $\sim$107, suggesting that the completeness of our observed sample is around 20\% in the eMSTO region. \begin{figure}[ht!] \gridline{\fig{vpd.pdf}{0.5\textwidth}{} \fig{parallax.pdf}{0.5\textwidth}{} } \gridline{\fig{space.pdf}{0.5\textwidth}{} \fig{cmd.pdf}{0.5\textwidth}{} } \caption{{\bf (top left)} Vector-point diagram of the proper motions for stars brighter than $G = \unit[15]{mag}$ within 87.5\arcmin of the center of NGC 5822. The red circle shows the primary selection of cluster members. {\bf (top right)} $G$ band vs. parallaxes. The primary members selected from the proper motions are marked as solid dots and the parallax-selected members are marked as red dots. The vertical dashed lines represent the selection criteria applied to the parallaxes. {\bf (bottom left)} Spatial distribution of stars selected, with cluster members highlighted as red points. {\bf (bottom right)} CMD of all stars in the field (gray dots) and member stars of NGC 5822 (red solid dots). The eMSTO is visible around $G\sim\unit[11.5]{mag}$. \label{fig:selection}} \end{figure} \begin{figure}[ht!] \plotone{classification.pdf} \caption{CMD of NGC 5822 color-coded by the stellar classifications based on their loci. Member stars classified as MSTO, MS, and giant stars are marked as green squares, blue triangles, and red diamonds, respectively. Member stars with spectroscopic data are presented using solid markers and field stars with spectroscopic data are shown as gray circles. \label{fig:classification}} \end{figure} This cluster shows a clear eMSTO feature around $G\sim\unit[11.5]{mag}$. To further demonstrate that this is not an artifact owing to residual differential reddening, we estimated the degree of the spatial variation of the reddening and found that its influence is negligible compared with the extent of the eMSTO. Given its close distance and low Galactic latitude, we found that we could not use a 2D reddening map \citep[e.g.][]{2011ApJ...737..103S} to estimate the differential reddening. Instead, we adopted the method of \citet{2013ApJ...769...88N}, who assumed a two-component model for the distribution of the dust, including the mean density of dust along the plane $\rho_\mathrm{D}$ and a scale height $H_\mathrm{D}$. Therefore, the prediction for the reddening in a given direction is given by \begin{equation} E(B-V)=\rho_\mathrm{D}\int_0^d\exp(-r\sin(|b|)/H)\,\mathrm{d}r, \end{equation} where $b$ represents Galactic latitude and $d$ is the distance derived from the corresponding \textit{Gaia} parallax. The distance was derived from the parallax by implementing the formalism of \citet{2016ApJ...832..137A}. $H=\unit[164]{pc}$ is the dust scale height and $\rho_\mathrm{D}=\unit[0.427]{mag\,kpc^{-1}}$ \citep{2013ApJ...769...88N}. 
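For reference, a minimal Python sketch of this line-of-sight integral (which has a closed form) is given below. It is purely illustrative: the distance is approximated by simply inverting the \textit{Gaia} parallax rather than by the full \citet{2016ApJ...832..137A} formalism, and the function and variable names are ours.
\begin{verbatim}
import numpy as np

RHO_D = 0.427   # mag / kpc, reddening per unit distance in the plane
H     = 0.164   # kpc, dust scale height (164 pc)

def reddening(parallax_mas, gal_lat_deg):
    """E(B-V) from the two-component dust model, using the closed form
    of the integral: rho_D * H/sin|b| * (1 - exp(-d sin|b| / H))."""
    d = 1.0 / parallax_mas                   # crude distance in kpc (1/parallax)
    sinb = np.sin(np.abs(np.radians(gal_lat_deg)))
    if sinb < 1e-6:                          # in-plane limit: E(B-V) = rho_D * d
        return RHO_D * d
    return RHO_D * H / sinb * (1.0 - np.exp(-d * sinb / H))

# e.g. reddening(1.18, b) for a star with the mean cluster parallax
# and Galactic latitude b (in degrees)
\end{verbatim}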
The average extinction for cluster members is around $E(B-V)=\unit[0.126]{mag}$ with a standard deviation of $\sigma_{E(B-V)} = \unit[0.004]{mag}$. Using the \citet{1989ApJ...345..245C} and \citet{1994ApJ...422..158O} extinction curve with $R_V = 3.1$, we corrected the reddening to the average reddening value. In Fig. \ref{fig:cmd} we present the CMD of selected cluster member stars before (left) and after (right) our differential reddening correction. A visual inspection suggests that the morphology of the CMD remains unchanged and the eMSTO still exists after having applied this correction. We used the Padova group's PARSEC 1.2S isochrones \citep{2012MNRAS.427..127B} to perform our CMD fits based on visual matching. \citet{2018ApJ...869..139C} derived an age of $\sim\unit[1]{Gyr}$ and a solar-like metallicity ($Z_\odot = 0.0152$). Our best fit agrees with these results (Fig. \ref{fig:cmd}). The best-fitting isochrone has an age of \unit[0.9]{Gyr} for $Z = 0.017$ and a distance of $\unit[\sim 760]{pc}$. The binary sequence is clearly visible in the CMD and \citet{2018ApJ...869..139C} estimated the fraction of unresolved binaries with $q>0.7$ at 0.131. \begin{figure}[ht!] \plotone{red.pdf} \caption{Comparison of the CMD of selected cluster member stars before (left) and after (right) differential reddening correction. Different colors represent different extinctions $E(B-V)$. The best-fitting isochrone for the bulk stellar population is shown in the right-hand panel as a red solid line. The best-fitting isochrone (red solid line) has an age of \unit[0.9]{Gyr} with $Z = 0.017$ and a distance modulus of around 9.4 ($\unit[\sim 760]{pc}$). Isochrones for ages of $\log( t \mbox{ yr}^{-1})= 9.05$ and 9.15 are also overplotted (red dashed lines). \label{fig:cmd}} \end{figure} \subsection{Rotational velocities} The projected rotational velocities were measured by fitting the absorption line profiles of H$\beta$ and the Mg {\sc i} triplet. We compiled a library of high-resolution synthetic stellar spectra with effective temperatures, $T_\mathrm{eff}$, ranging from \unit[5000]{K} to \unit[8000]{K} (in steps of \unit[100]{K}), surface gravities from $\log g = 3.5$ to $\log g = 5.0$ (in steps of 0.1), and metallicities from [Fe/H] = $-1.0$ to [Fe/H] = $1.0$ dex (in steps of 0.5 dex) from the Pollux database \citep{2010A&A...516A..13P}. We applied the latest ATLAS12 model atmospheres \citep{2005MSAIS...8..189K} where blanketed model atmospheres handle line opacity in stellar atmospheres using the Opacity Sampling technique. The models assume a plane parallel geometry, hydrostatic and radiative equilibrium, as well as local thermodynamic equilibrium. The microturbulent velocity was fixed to $\unit[2]{km\,s^{-1}}$ for all models. Synthetic spectra were then generated using the SYNSPEC tool \citep{1992A&A...262..501H}. Each model spectrum was convolved with the rotational profile for a given rotational velocity and implemented with an instrumental broadening as well as a radial velocity shift. Given that the light enters through off-axis slits (in the dispersion direction) in the MOS, the actual resolution may vary from slit to slit and from mask to mask. Therefore we adopted the full width at half maximum (FWHM) of the corresponding arc lines as an indicator of the instrumental broadening effect. 
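To illustrate the broadening step, a simplified Python sketch is provided below. It assumes a uniformly sampled model spectrum and a standard analytic rotational profile with a fixed linear limb-darkening coefficient; the coefficient, the H$\beta$ reference wavelength, and the function names are illustrative assumptions rather than the exact ingredients of our fitting code, and the radial-velocity shift applied during fitting is omitted here.
\begin{verbatim}
import numpy as np

C_KMS = 299792.458

def rot_kernel(dl, dl_max, eps=0.6):
    # Analytic rotational broadening profile (linear limb darkening eps);
    # normalized numerically, so the analytic prefactor is omitted.
    x = dl / dl_max
    k = np.zeros_like(x)
    m = np.abs(x) < 1.0
    k[m] = 2.0*(1.0-eps)*np.sqrt(1.0-x[m]**2) + 0.5*np.pi*eps*(1.0-x[m]**2)
    return k / k.sum()

def broaden(wave, flux, vsini, fwhm_inst, lam0=4861.3):
    """Convolve a model spectrum with a rotational profile (vsini in km/s)
    and a Gaussian instrumental profile (FWHM in Angstrom from arc lines)."""
    step = np.median(np.diff(wave))          # assumes a uniform wavelength grid
    if vsini > 0:
        dl_max = lam0 * vsini / C_KMS
        n = max(int(np.ceil(dl_max / step)), 1)
        dl = np.arange(-n, n + 1) * step
        flux = np.convolve(flux, rot_kernel(dl, dl_max), mode="same")
    sigma = fwhm_inst / 2.3548               # FWHM -> Gaussian sigma
    n = max(int(np.ceil(4 * sigma / step)), 1)
    dl = np.arange(-n, n + 1) * step
    g = np.exp(-0.5 * (dl / sigma) ** 2)
    return np.convolve(flux, g / g.sum(), mode="same")
\end{verbatim}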
Then, we used the Markov chain Monte Carlo \citep[emcee;][]{2013PASP..125..306F} method to sample the five-dimensional parameter space ($v\sin i$, $v_\mathrm{r}$, $T_\mathrm{eff}$, $\log g$, [Fe/H]) to employ a $\chi^2$ minimization. For each of the 3000 runs of the MCMC procedures, $\chi^2$ values and their associated probabilities $e^{-\chi^2/2}$ were stored. Probability distributions were then generated by projecting the sum of the probabilities onto the dimension considered. A Gaussian fit to the distribution provides its width $\sigma$, which we adopt as the uncertainty. To estimate the influence of instrumental broadening on the determination of the rotational velocities, we generated a set of mock spectra by sampling the projected rotational velocities $v\sin i$ from $\unit[20]{km\,s^{-1}}$ to $\unit[200]{km\,s^{-1}}$, assuming a uniform $\mathrm{SNR} = 200$ and a reasonable uncertainty for the instrumental broadening ($\sigma_\mathrm{FWHM} = \unit[0.1]{\AA}$), and we measured the best-fitting parameters from those mock spectra. We repeated this procedure 100 times and estimated the median values and the 68th percentiles of the velocity distribution. In Fig. \ref{fig:error} we present a comparison of the rotational velocities of the mock data with those derived through profile fitting. The blue shadowed region corresponds to $\unit[1]{\sigma}$ and the one-to-one relation is indicated by an orange solid line. Given the intermediate spectral resolution, it is hard to differentiate the effect of rotational from instrumental broadening for slow rotators. Therefore, we defined the detection limit of the rotational velocity to be the velocity where its uncertainty is around half of the actual value and, for slow rotators with $v\sin i \leqslant \unit[55]{km\,s^{-1}}$, the uncertainties of the measurements are comparable to their actual values, while the uncertainty is less than 5\% and 3\% for the mock spectra with $v\sin i\geqslant \unit[100]{km\,s^{-1}}$ and $v\sin i\geqslant \unit[150]{km\,s^{-1}}$, respectively. \begin{figure}[ht!] \plotone{error.pdf} \caption{Rotational velocities of the mock data (horizontal axis) vs. rotational velocities derived from profile fitting. The blue shadowed region corresponds to $\unit[1]{\sigma}$ and the one-to-one relation is indicated by an orange solid line. The red dashed lines represent the lower limit of reliable measurements of the rotational velocity ($\unit[55]{km\,s^{-1}}$) where the uncertainty is around half of the actual value. \label{fig:error}} \end{figure} \section{Extended MSTOs and stellar rotation \label{sec:eMSTO}} The eMSTO of NGC 5822, if interpreted as an age difference, is around $\unit[300-350]{Myr}$. \citet{2018ApJ...869..139C} estimated the ages of the stars around the eMSTO region by linearly interpolating a grid of isochrones and calculated the FWHM of the cluster's age distribution, which gives a spread of $\unit[270\pm52]{Myr}$. They also showed that the FWHM of the NGC 5822 eMSTO follows the correlation between the width of the eMSTO and cluster age applicable within the framework of stellar rotation. \begin{figure}[ht!] \gridline{\fig{rot.pdf}{0.4\textwidth}{} \fig{spec.pdf}{0.6\textwidth}{} } \caption{{\bf (left)} CMD of NGC 5822 with the member stars color-coded by their rotational velocities. The best-fitting isochrone is shown as the red curve. 
A clear trend between stellar rotation and their loci in the CMD region is seen, in the sense that the rapid rotators (yellow) tend to lie on the red side of the eMSTO while the slow rotators (blue) are usually found on the blue side. {\bf (right)} {\bf Two sample spectra of a slow rotator (top) and a fast rotator (bottom). H$\beta$ and Mg {\sc i} triplets of the same object are shown in the left- and right-hand columns, respectively. For each spectrum, the best-fitting models are presented as orange curves.} \label{fig:rot}} \end{figure} In Fig. \ref{fig:rot}, we present the CMD of NGC 5822, with the member stars color-coded by their rotational velocities. We found that their loci in the CMD region covered by the eMSTO strongly depend on stellar rotation, in the sense that rapid rotators tend to lie on the red side of the eMSTO while slow rotators are usually found on the blue side. Similar results have also been discovered in young and intermediate-age clusters in the Magellanic Clouds \citep{2017ApJ...846L...1D, 2018MNRAS.480.1689K}, as well as in Galactic open clusters \citep{2018ApJ...863L..33M, 2018MNRAS.480.3739B}. The stellar structural parameters as well as the inferred projected rotational velocities are listed in Table \ref{tab:res}. \begin{deluxetable*}{lCCCRRRRR}[b!] \tablecaption{Properties of member stars with rotational velocity measurements. \label{tab:res}} \tablewidth{0pt} \tablehead{ \colhead{\text{Gaia} ID}&\colhead{$G$ (mag)}&\colhead{$G_\mathrm{bp}$ (mag)}&\colhead{$G_\mathrm{rp}$ (mag)} &\colhead{$T$ (K)\tablenotemark{a}} &\colhead{$\log g$\tablenotemark{a}} & \colhead{Fe/H\tablenotemark{a}} & \colhead{$v\sin i$ ($\unit{km\,s^{-1}}$)\tablenotemark{b}} & \colhead{$v_\mathrm{rv}$ ($\unit{km\,s^{-1}}$)\tablenotemark{c}} \\ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)}& \colhead{(6)} &\colhead{(7)}&\colhead{(8)}&\colhead{(9)} } \startdata 5887641469783582208 & 12.28 & 12.50 & 11.92& 6900\pm 100 & 3.5\pm 0.1 & 0.0\pm 0.5 & 82.0\pm12.4 & -17.8\pm5.8 \\ 5887666444964220416 & 11.21 & 11.38 & 10.93& 7400\pm 100 & 3.9\pm 0.1 & 0.0\pm 0.5 & 182.3\pm6.1 & -30.2\pm7.0 \\ 5887666444964220416\tablenotemark{*} & 11.21&11.38 & 10.93& 7200\pm100 & 3.6\pm0.1 & 0.0\pm0.5 & 176.9\pm6.1 & -25.2\pm5.1 \\ 5887669198096568960 & 12.03 & 12.27 & 11.64& 6800\pm 100 & 4.2\pm 0.1 & -0.5\pm 0.5 & 110.8\pm8.1 & -42.0\pm6.1 \\ 5887668648340678784 & 10.89 & 11.07 & 10.61& 7200\pm 100 & 4.4\pm 0.1 & 0.0\pm 0.5 & 16.0\pm38.8 & -30.3\pm6.6 \\ 5887642466216028032 & 11.82 & 12.02 & 11.50& 7100\pm 100 & 4.1\pm 0.1 & -0.5\pm 0.5 & 206.2\pm5.8 & -24.1\pm6.7 \\ 5887642466216028032\tablenotemark{*} & 11.82&12.02 & 11.50& 7000\pm100 & 4.2\pm0.1 & -0.5\pm0.5 & 211.3\pm5.6 & -23.1\pm14.1 \\ 5887668575267987840 & 11.92 & 12.10 & 11.61& 6900\pm 100 & 3.7\pm 0.1 & -0.5\pm 0.5 & 163.0\pm6.1 & -26.8\pm13.6 \\ 5887698644394381952 & 10.65 & 10.87 & 10.31& 7200\pm 100 & 3.6\pm 0.1 & 0.0\pm 0.5 & 42.7\pm24.7 & -40.1\pm6.9 \\ 5887698644394381952\tablenotemark{*} & 10.65&10.87 & 10.31& 7200\pm100 & 3.5\pm0.1 & 0.0\pm0.5 & 53.9\pm20.3 & -33.4\pm4.4 \\ 5887719054073235584 & 11.07 & 11.27 & 10.76& 7400\pm 100 & 4.0\pm 0.1 & 0.0\pm 0.5 & 177.6\pm6.1 & -31.6\pm5.2 \\ 5887719054073235584\tablenotemark{*} & 11.07&11.27 & 10.76& 7000\pm100 & 4.0\pm0.1 & -0.5\pm0.5 & 181.7\pm6.1 & -32.9\pm6.5 \\ 5887644935768013056 & 11.53 & 11.72 & 11.21& 7400\pm 100 & 4.1\pm 0.1 & 0.0\pm 0.5 & 194.0\pm6.0 & -42.3\pm5.2 \\ 5887644935768013056\tablenotemark{*} & 11.53&11.72 & 11.21& 7400\pm100 & 4.0\pm0.1 & 0.0\pm0.5 
& 194.2\pm6.0 & -46.4\pm9.5 \\ 5887671255327574144 & 12.08 & 12.29 & 11.75& 7000\pm 100 & 3.5\pm 0.1 & 0.0\pm 0.5 & 33.4\pm29.0 & -21.5\pm6.0 \\ 5887695414549752704 & 10.92 & 11.13 & 10.60& 7300\pm 100 & 3.8\pm 0.1 & 0.0\pm 0.5 & 120.9\pm7.3 & -36.6\pm6.6 \\ 5887695414549752704\tablenotemark{*} & 10.92&11.13 & 10.60& 7200\pm100 & 3.7\pm0.1 & 0.0\pm0.5 & 124.9\pm7.0 & -26.0\pm6.8 \\ 5887718740485998208 & 11.19 & 11.43 & 10.83& 7000\pm 100 & 3.6\pm 0.1 & 0.0\pm 0.5 & 246.9\pm2.4 & -27.7\pm8.6 \\ 5887718740485998208\tablenotemark{*} & 11.19&11.43 & 10.83& 7100\pm100 & 4.3\pm0.1 & -0.5\pm0.5 & 239.0\pm3.5 & -33.2\pm7.7 \\ 5887668197310871168 & 12.04 & 12.23 & 11.72& 7100\pm 100 & 3.7\pm 0.1 & 0.0\pm 0.5 & 89.0\pm11.0 & 15.9\pm7.1 \\ 5887642981611923200 & 11.66 & 11.88 & 11.30& 7000\pm 100 & 4.2\pm 0.1 & -0.5\pm 0.5 & 249.1\pm2.1 & -28.2\pm6.8 \\ 5887642981611923200\tablenotemark{*} & 11.66&11.88 & 11.30& 6900\pm100 & 3.5\pm0.1 & -0.5\pm0.5 & 249.6\pm2.0 & -26.5\pm9.7 \\ 5887722313898780672 & 12.02 & 12.22 & 11.69& 7400\pm 100 & 4.3\pm 0.1 & 0.0\pm 0.5 & 120.0\pm7.3 & -31.9\pm5.2 \\ 5887722313898780672\tablenotemark{*} & 12.02&12.22 & 11.69& 7300\pm100 & 4.1\pm0.1 & 0.0\pm0.5 & 143.0\pm6.3 & -26.9\pm8.1 \\ 5887671397119565312 & 12.74 & 13.04 & 12.26& 6300\pm 100 & 4.0\pm 0.1 & -0.5\pm 0.5 & 48.2\pm22.4 & -28.6\pm8.4 \\ 5887698296441666688 & 11.97 & 12.17 & 11.64& 7400\pm 100 & 3.9\pm 0.1 & 0.5\pm 0.5 & 108.2\pm8.3 & -31.9\pm5.0 \\ 5887698296441666688\tablenotemark{*} & 11.97&12.17 & 11.64& 7400\pm100 & 3.7\pm0.1 & 0.5\pm0.5 & 83.7\pm12.0 & -29.3\pm5.5 \\ 5887697987204027264 & 11.14 & 11.33 & 10.85& 7400\pm 100 & 3.8\pm 0.1 & 0.0\pm 0.5 & 178.9\pm6.1 & -31.6\pm8.9 \\ 5887697987204027264\tablenotemark{*} & 11.14&11.33 & 10.85& 7400\pm100 & 3.9\pm0.1 & 0.0\pm0.5 & 175.1\pm6.1 & -38.0\pm6.1 \\ 5887671534558688640 & 12.79 & 13.04 & 12.38& 6500\pm 100 & 3.7\pm 0.1 & -0.5\pm 0.5 & 138.4\pm6.4 & -21.8\pm7.8 \\ 5887670224535430272 & 12.26 & 12.47 & 11.91& 6600\pm 100 & 3.5\pm 0.1 & -0.5\pm 0.5 & 34.1\pm28.7 & -58.6\pm8.3 \\ 5887671431479631360 & 12.77 & 13.02 & 12.36& 6600\pm 100 & 3.7\pm 0.1 & -0.5\pm 0.5 & 144.0\pm6.3 & -30.3\pm7.0 \\ 5887665895208406272 & 12.01 & 12.21 & 11.66& 7000\pm 100 & 3.9\pm 0.1 & -0.5\pm 0.5 & 201.6\pm5.9 & -32.7\pm8.1 \\ 5887665895208406272\tablenotemark{*} & 12.01&12.21 & 11.66& 7000\pm100 & 4.4\pm0.1 & -0.5\pm0.5 & 202.4\pm5.9 & -42.0\pm6.7 \\ 5887667544475841152 & 12.14 & 12.36 & 11.77& 6800\pm 100 & 4.0\pm 0.1 & -0.5\pm 0.5 & 65.2\pm16.6 & -51.4\pm4.4 \\ 5887642805464232704 & 12.24 & 12.45 & 11.87& 7200\pm 100 & 4.3\pm 0.1 & 0.5\pm 0.5 & 86.0\pm11.6 & -14.4\pm9.7 \\ 5887642805464232704\tablenotemark{*} & 12.24&12.45 & 11.87& 7000\pm100 & 4.3\pm0.1 & 0.5\pm0.5 & 123.5\pm7.1 & -0.0\pm13.6 \\ \enddata \tablenotetext{a}{Uncertainty adopted from the step size of the model grid.} \tablenotetext{b}{Uncertainty estimated from the mock test.} \tablenotetext{c}{Uncertainty given by the MCMC procedure.} \tablenotetext{*}{Duplicate observation in Programme 2018-1-SCI-006} \tablecomments{(1) \textit{Gaia} DR2 ID; (2,3,4) Extinction-corrected \textit{Gaia} bands (5) Effective temperature; (6) Surface gravity; (7) Metallicity; (8) Projected rotational velocity; (9) Radial velocity.} \end{deluxetable*} We also compared the observed cluster data with a synthetic cluster data set that included the effects of stellar rotation. 
The synthetic cluster data were derived from the SYCLIST models \citep{2013A&A...553A..24G, 2014A&A...566A..21G}, assuming a metallicity of $Z=0.014$, an age of $\log( t \mbox{ yr}^{-1}) = 8.95$, and a binary fraction of 0.131, with a rotational distribution derived from \citet{2010ApJ...722..605H} and a random rotation axis distribution. The model also accounts for the limb-darkening effect \citep{2000A&A...359..289C} as well as for the gravity darkening law of \citet{2011A&A...533A..43E}. In the left-hand panel of Fig. \ref{fig:model}, the synthetic cluster is superposed onto the CMD of NGC 5822 and the eMSTO feature is well-reproduced and consistent with coeval stellar populations with different rotation rates. The projected rotational velocity of the synthetic cluster follows a similar trend to that of the real member stars, which become redder as the stellar rotation rates increase. In the middle panel we present a realistic synthetic cluster with a number of stars comparable to that in the observed CMD. To provide a better comparison with the simulation, we introduced the pseudo-color $\Delta{(G_\mathrm{bp}-G_\mathrm{rp})}$, defined as the normalized color difference with respect to the blue ridgeline, measured along the direction in which stellar rotation changes the locus of a star in the CMD (black arrow); it represents the deviation in color that may be caused by stellar rotation. We adopted the blue edge of the synthetic cluster, which represents the population of non-rotating stars, as the fiducial ridgeline. In the right-hand panel of Fig. \ref{fig:model}, the $\Delta{(G_\mathrm{bp}-G_\mathrm{rp})}$ vs. $v\sin{i}$ diagram for all stars with projected rotational velocity measurements is shown, and the gray dots represent the same distribution for the synthetic cluster. We found that most of our targets follow the trend predicted by the stellar rotational model, where the pseudo-color is close to zero for slow rotators and it increases significantly as the rotational velocity increases. Two outliers in the right-hand panel of Fig. \ref{fig:model} (\textit{Gaia} ID: 5887669198096568960 and 5887671397119565312), which have relatively large pseudo-colors compared with their rotational velocities, may result from contamination by binary stars. Since their locations in the CMD coincide with the equal-mass binary sequence, they are likely unresolved binaries, particularly the star with \textit{Gaia} ID 5887671397119565312, whose low mass ($\unit[1.2-1.3]{M_\odot}$) is close to the minimum mass for large stellar rotation. With such a low mass, stars brake efficiently early on the MS and evolve back to the non-rotating tracks \citep{2018arXiv181205544G}. Therefore, stellar rotation is unlikely to be the cause of such a large shift in color and binary stars might be a plausible explanation of these two outliers. \begin{figure}[ht!] \gridline{\fig{model.pdf}{0.66\textwidth}{} \fig{rot_model.pdf}{0.34\textwidth}{} } \caption{{\bf (left)} CMD of the observed cluster (gray dots), as well as the synthetic cluster, with colors representing the projected rotational velocities. Larger points with white borders reflect measurements of member stars. The adopted ridgeline is shown as a blue dashed line. The black arrow indicates how stellar rotation affects the locus of a star in the CMD. {\bf (middle)} {\bf Realistic synthetic cluster with a number of stars comparable to that in the observed CMD.} {\bf (right)} Pseudo-color vs. rotational velocity.
Stars with projected rotational velocity measurements are marked as orange dots and the synthetic cluster stars are marked as gray dots.\label{fig:model}} \end{figure} \section{Discussion and conclusions\label{sec:discussion}} NGC 5822 is an intermediate-age (\unit[0.9]{Gyr}) Galactic open cluster exhibiting an eMSTO. Through membership determination based on \textit{Gaia} proper motions and parallaxes, we investigated the CMDs of NGC 5822 and confirmed that the eMSTO is unlikely to be an artifact caused by differential extinction. By exploiting SALT/RSS data, we derived the projected rotational velocities of 24 member stars and found that stellar rotation is strongly correlated with the stellar loci in the CMD in the MSTO region. The red side of the eMSTO is occupied by fast rotators while the blue side is mainly composed of slow rotators. By comparison with a synthetic cluster, we have shown that the eMSTO of NGC 5822 can be properly reproduced and the rotational velocities of the eMSTO stars follow the same pattern as that predicted by the stellar rotation model. \begin{figure}[ht!] \plotone{prob.pdf} \caption{Distributions of projected rotational velocities $v\sin i$ (blue) and true rotational velocities $v_\mathrm{rot}$ (orange) for member stars in the MSTO region of NGC 5822. Both velocities were estimated by taking the average velocities of the nearest 50 stars in the synthetic CMD. \label{fig:prob}} \end{figure} Combined with NGC 6705 \citep[250 Myr;][]{2018ApJ...863L..33M} and NGC 2818 \citep[800 Myr;][]{2018MNRAS.480.3739B}, we have confirmed the correlation between stellar rotation and stellar positions in the eMSTO/split MS in three Galactic open clusters. \citet{2018ApJ...869..139C} found that Galactic open clusters also follow the trend between cluster age and the extent of the eMSTO seen in Magellanic Cloud clusters \citep{2013ApJ...776..112Y}, suggesting that they should be regulated by a similar mechanism. Split MSs are believed to be composed of two stellar populations characterized by different rotation rates: the blue MS is composed of slow rotators and the red MS is composed of rapid rotators \citep{2016MNRAS.458.4368M}. Spectroscopic surveys of the young clusters NGC 1818 \citep{2018AJ....156..116M} and NGC 6705 \citep{2018ApJ...863L..33M} confirmed the existence of slowly and rapidly rotating populations and found that these two subgroups are well separated in projected rotational velocity, with a difference in mean $v\sin i$ greater than $\unit[100]{km\,s^{-1}}$. Meanwhile, in the intermediate-age clusters NGC 2818 \citep{2018MNRAS.480.3739B} and NGC 5822, such a result is barely seen, which may be due to small-number statistics and selection effects. Therefore, we estimated the rotational velocities for all MSTO stars in NGC 5822 based on our synthetic cluster to check the distribution of the stellar rotation rates. On the basis of previous analyses, we argue that the synthetic cluster can properly reproduce the observed results and that it can be taken as a reasonable approximation to the real cluster. Thus, for each member star in the NGC 5822 MSTO region, we inferred the rotational velocities by taking the average velocities of the nearest 50 stars in the synthetic CMD. The distributions of projected rotational velocity $v\sin i$ and $v_\mathrm{rot}$ are presented in Fig. \ref{fig:prob}. We found that the projected rotational velocities show a dip around $\unit[150]{km\,s^{-1}}$, similar to the results for NGC 1818 and NGC 6705.
However, we suggest that this is an artifact caused by projection effects. In Fig. \ref{fig:prob}, the `slow' rotators have a peak at $\unit[100]{km\,s^{-1}}$ and they have a dearth of stars with $v\sin i \sim \unit[50]{km\,s^{-1}}$, which is different from the results for the young clusters where the slowly rotating populations have lower mean velocities and do not show a gap in slowly rotating stars. The distribution of the true rotational velocities is also shown in Fig. \ref{fig:prob} and the fact that it shows a single peak around $\unit[200]{km\,s^{-1}}$ further confirms that the equatorial velocities in NGC 5822 should follow a unimodal distribution. On the other hand, projection effects are unlikely to explain the large difference in projected rotational velocities found in young clusters and the true rotation rates of MSTO stars in young clusters should in all probability show a bimodal distribution given the fact that the split MSs can be separated into distinct sequences in the CMD. Since the typical masses of the split MSs in young clusters ($\geqslant\unit[2.5]{M_\odot}$) and eMSTOs in intermediate-age clusters ($\unit[1.4]{M_\odot}$--$\unit[2]{M_\odot}$) are different, if a split MS and eMSTO are present in the evolutionary sequence of star clusters, these two distributions of stellar rotation may coexist in the same cluster. This may hint that the stellar rotation distribution in clusters follows a similar pattern to that of the field population, in the sense that stars more massive than $\unit[2.5]{M_\odot}$ show a bimodal equatorial velocity distribution while less massive stars have a unimodal rotation distribution \citep{2012A&A...537A.120Z}. However, spin alignment in clusters may play an overlooked role. \citet{2017NatAs...1E..64C} found evidence of spin alignment among the red giant stars in the two old open clusters NGC 6791 and NGC 6819. \citet{2018NatAs.tmp..156L} inferred the $v_\mathrm{rot}$ and the inclination angles $i$ of MSTO stars in NGC 6705 from Monte Carlo simulations where $v_\mathrm{rot}$ has a linear distribution and $i$ a Gaussian distribution. They argued that cluster members have highly aligned spin axes, which implies a link between stellar rotation and rotational kinetic energy in the progenitor molecular cloud. We also estimated the stellar masses following procedures similar to those of \citet{2018ApJ...862..133S}. In essence, we generated a synthetic cluster of 10,000 stars with the initial masses generated through Monte Carlo sampling of a Kroupa stellar initial mass function \citep[IMF;][]{2001MNRAS.322..231K}. Then, we calculated the ratio of the number of member stars with $G$ magnitudes from \unit[12.5]{mag} to \unit[13.5]{mag} to that of the synthetic cluster for the same magnitude range. We multiplied the integrated mass obtained for the synthetic cluster by this ratio to estimate the total stellar mass in the cluster, $\unit[(1.7\pm0.3)\times10^3]{M_\odot}$. We confirmed that changing the magnitude range does not affect the estimate of the total mass significantly. \citet{2018MNRAS.480.3739B} reported a mass of $\unit[2800]{M_\odot}$ for NGC 2818. These results suggest that eMSTOs are not exclusive to massive clusters ($\unit[10^4-10^5]{M_\odot}$). One possible source that could also give rise to a broadened MSTO is stellar variability. \citet{2016ApJ...832L..14S} argued that the instability strip intersects with the MSTO region of a cluster with an age of $\unit[\sim 1-3]{Gyr}$ and can make a significant contribution to the observed eMSTOs.
Follow-up observations of NGC 1846 revealed the presence of a group of (mainly $\delta$ Scuti) variable stars around the eMSTO region. However, the number fraction of variable stars was not sufficient to produce the observed width of the eMSTO \citep{2018AJ....155..183S}. Certain types of variable or binary stars (e.g., EA-type eclipsing binaries) exhibit large changes in radial velocity over time and can be detected through multi-epoch observations. However, we did not find such candidates in our sample because of the limited spectral resolution and the small number of observations. Follow-up photometric and spectroscopic observations are required to further investigate the role of variability in shaping the morphology of the eMSTO region. \acknowledgments R. d. G. and L. D. acknowledge research support from the National Natural Science Foundation of China through grants 11633005, 11473037, and U1631102. R. d. G. is grateful for support from the National Key Research and Development Program of China through grant 2017YFA0402702 from the Chinese Ministry of Science and Technology (MOST). L. D. also acknowledges support from MOST through grant 2013CB834900. \vspace{5mm} \facilities{SALT(RSS)} \software{PySALT \citep{2010SPIE.7737E..25C}, PARSEC \citep[1.2S;][]{2012MNRAS.427..127B}, Astropy \citep{2013A&A...558A..33A}, Matplotlib \citep{2007CSE.....9...90H}, SYNSPEC \citep{1992A&A...262..501H}, emcee \citep{2013PASP..125..306F}}
\section{Introduction} \label{sec:intro} Recommendation technology has been widely adopted by many online services in recent years, to help relieve users of massive information overload~\cite{Adomavicius2005}. Collaborative filtering (CF) is one of the most popular and successful techniques for recommender systems~\cite{Ekstrand2011} and is deployed, for example, by Amazon and Netflix. The core idea behind CF is that users whose past interests were similar will also share common interests in the future. Users' interests are inferred from patterns of interaction, either explicit or implicit, of users with items. In a typical case involving explicit feedback, users are asked/allowed to explicitly rate items, using a pre-defined rating scale (graded relevance), e.g., 1-5 stars on the Netflix movie recommendation site. A higher grade indicates stronger preference for the item, reflecting a higher relevance of the item with respect to the user. In a typical case involving implicit feedback~\cite{Hu2008}, users interact with items by downloading or purchasing, and their preferences are deduced from the resulting patterns. In this paper we propose a new CF model for the case of graded relevance data. One way to measure the fit of a learned model for graded relevance data (e.g., ratings) is to use a metric such as Root-Mean-Square Error. This metric was adopted as the evaluation metric in the Netflix Prize contest\footnote{http://www.netflixprize.com/}. However, it is now widely recognized that recommendation approaches optimized to minimize the error rate usually achieve poor performance on the top-N recommendation task~\cite{Cremonesi2010,Gunawardana2009}. In practice, users focus their attention on only a small number of recommendations, effectively ignoring all but a short list of $N$ recommended items. For this reason, it is more useful to focus the recommendation model on making this short list of \emph{top-N} items as relevant as possible, rather than on accurately predicting ratings of non-relevant items. Despite the different nature of the underlying data (graded vs. binary), the ultimate objective of CF models in all cases is the same, i.e., \emph{to generate a top-N recommendation list of relevant items to individual users.} This task is essentially a ranking task, i.e., ranking items according to their relevance to the user. Consequently, CF models that optimize for a ranking metric are particularly well suited to address it. Various models use learning to rank~\cite{Liu2009b} techniques to optimize binary relevance ranking metrics. For example, several CF models~\cite{Rendle2009,Shi2012,Shi2012b} compute near-optimal ranked lists with respect to the Area Under the Curve (AUC), Average Precision (AP)~\cite{Manning2008} and Reciprocal Rank~\cite{Voorhees1999} metrics. However, metrics that are defined to handle binary relevance data are not directly suitable for graded relevance data. In order to apply binary metrics, and CF methods that optimize for these metrics, to graded relevance data, it is necessary to convert the data to binary relevance data. This conversion is generally accomplished by imposing a threshold (e.g., defining rating levels 1-3 as non-relevant and 4-5 as relevant). This process has two major drawbacks: 1) we {\it lose grading information} among the rated items, e.g., items rated with a 5 are more relevant than items rated with a 4. This information is crucial in building precise models.
2) the choice of the {\it relevance threshold is arbitrary} and will have an impact on the performance of different recommendation approaches. A well-known metric in the area of information retrieval (IR) is Normalized Discounted Cumulative Gain (NDCG)~\cite{Jarvelin2002}, which can be used to measure the performance of ranked results with graded relevance and is often used for evaluating recommender systems~\cite{Balakrishnan2012,Liu2008,Liu2009,Volkovs2012,Weimer2007}. NDCG is dependent on both the grades and the positions of the items in the ranked list. However, NDCG is a so-called ``non-cascade'' metric. Under ``cascade metrics'', such as Average Precision and Reciprocal Rank, the contribution of a given item has a dependence on the relevance of higher ranked items. Instead, NDCG assumes independence between the items in the ranked list, i.e., each item contributes to the quality of the ranked list solely based on its own grade and position, while ignoring the impact of items that are ranked above it. The ``non-cascade'' nature of NDCG has recently drawn criticism from authors who point out the advantages of cascade metrics~\cite{Chapelle2009,Robertson2010}. Graded Average Precision (GAP)~\cite{Robertson2010} has been proposed as a generalized version of Average Precision for the use scenario with graded relevance data. GAP, being similar to Average Precision, reflects the overall quality of the top-N ranked items. Moreover, it inherits all the desirable properties of AP: top-heavy bias, high informativeness, elegant probabilistic interpretation, and solid underlying theoretical basis~\cite{Robertson2010}. In this paper, we propose a new CF approach, i.e., a latent factor model for Graded Average Precision (\textbf{{\it GAPfm}}), that learns latent factors of users and items so as to directly optimize GAP of top-N recommendations. The contributions of this paper can be summarized as follows: \vspace{-2pt} \begin{packed_item} \item We propose a novel CF approach that directly optimizes GAP. We show that {\it GAPfm} outperforms state-of-the-art methods for various evaluation metrics including GAP, Precision and NDCG. \item We conduct a theoretical analysis of the smoothed approximation of GAP and support its validity. \item We provide a learning algorithm that scales linearly with the number of observed grades in the dataset. Moreover, we further exploit properties of GAP and provide a sub-linear complexity learning algorithm that is suitable for large data sets. \end{packed_item} \vspace{-2pt} The remainder of the paper is organized as follows: in Section~\ref{sec:rw} we discuss previous research contributions that are related to our approach proposed in this paper. Then, in Section~\ref{sec:prob}, we introduce the notation and terminology adopted throughout the paper, after which, in Section~\ref{sec:lfmgap}, we present the details of {\it GAPfm}. The experimental evaluation is presented in Section~\ref{sec:exp}. Finally, Section~\ref{sec:conclude} summarizes our contributions and discusses future work. \section{Related work} \label{sec:rw} The approach proposed in this paper is rooted in the research areas of collaborative filtering and learning to rank. In the following, we discuss related contributions in each of the two areas, and position our work with respect to them. \textbf{Collaborative Filtering.} A large portion of recent CF approaches tries to address the rating prediction problem, as defined in the Netflix Prize contest.
CF approaches can be categorized broadly into two categories: memory-based and model-based~\cite{Adomavicius2005}. Memory-based approaches rely on the similarities between users or items, and generate rating predictions by aggregating preference data over similar users (user-based)~\cite{Herlocker1999} or similar items (item-based)~\cite{Sarwar2001}. Model-based approaches learn a prediction model based on a set of training data, and then use the prediction model to generate recommendations for individual users~\cite{Agarwal2009,Hofmann2004}. Latent factor models (or more specifically, matrix factorization techniques) have attracted significant research attention, due to their superior performance on the rating prediction problem, as witnessed during the Netflix Prize contest~\cite{Koren2008,Koren2009a}. The methods developed to attack the Netflix Prize were highly effective for the rating prediction task, but have turned out to have relatively poor performance on the top-N recommendation task~\cite{Cremonesi2010}. A few contributions have been proposed specifically to address the ranking problem in CF. Bayesian personalized ranking (BPR)~\cite{Rendle2009} and Collaborative Less-is-More Filtering (CLiMF)~\cite{Shi2012b} seek to improve top-N recommendation by directly optimizing binary relevance measures, i.e., Area Under the Curve (AUC) in BPR and Reciprocal Rank in CLiMF. In a similar spirit, TFMAP~\cite{Shi2012} directly optimizes Average Precision for context-aware recommendations. All of these methods use binary implicit-feedback data. However, as discussed in Section~\ref{sec:intro}, these methods are not well-suited for graded relevance datasets, since they are not able to fully exploit the information encoded in the grade levels. Research that deals with the ranking problem for cases involving graded relevance data includes EigenRank~\cite{Liu2008} and probabilistic latent preference analysis~\cite{Liu2009}, which exploit pair-wise comparisons between the rated items. Collaborative competitive filtering~\cite{Yang2011b} has further advanced the performance of top-N recommendation by imposing local competition, i.e., constraining items that users have seen but not rated to be less preferred than items both seen and rated. However, none of these methods are designed to optimize for any specific ranking/evaluation measure. To our knowledge, the only existing CF approach that directly optimizes a graded evaluation measure is CofiRank~\cite{Weimer2007}, which minimizes a convex upper bound of the NDCG loss through matrix factorization. Some of the latest contributions aim at enhancing the performance of CofiRank and boosting the NDCG score of the ranking results~\cite{Balakrishnan2012,Volkovs2012}. These approaches are often referred to as \emph{collaborative ranking}, and are evaluated by their performance on ranking graded items. Note that these approaches solve a different problem. Instead of addressing the top-N recommendation task, they rank a list of rated items that has already been given, i.e., pre-specified. As mentioned in Section~\ref{sec:intro}, even results that are ranked optimally in terms of NDCG may still yield suboptimal top-N recommendations. The new approach introduced by this paper, GAPfm, directly optimizes the recently proposed cascade metric GAP. In our experimental evaluation, we demonstrate that GAPfm outperforms CofiRank with respect to a range of conventional top-N evaluation metrics.
\textbf{Learning to Rank.} The task of learning to rank is to learn a ranking function that is used to rank documents for given queries~\cite{Liu2009b}. Inspired by the analogy between query-document relations in IR and user-item relations in recommender systems, many CF methods were proposed recently~\cite{Balakrishnan2012,Hong2012a,Rendle2009,Shi2012,Shi2012b,Volkovs2012,Weimer2007}. Our work in this paper also falls into this category, and in particular, it is closely related to one sub-area of learning to rank, i.e., direct optimization of evaluation metrics. Note that the key challenge of directly optimizing evaluation metrics lies in the non-smoothness~\cite{Burges2006} of these measures. Specifically, these metrics are defined on the rankings and the grades (in the case of graded relevance data) of documents/items in a list, which are indirectly determined by the parameters of the ranking function. Research contributions have been made to directly optimize evaluation metrics by exploiting structured estimation techniques~\cite{TsoJoaHofAlt05,Xu2008} that minimize convex upper bounds of loss functions based on evaluation measures, e.g., SVM-MAP~\cite{Yue2007} and AdaRank~\cite{Xu2007}. In addition, SoftRank~\cite{Taylor2008} and its extensions~\cite{Chapelle2010} were proposed to use smoothed versions of evaluation measures, which can then be directly optimized. Our work can be considered part of this research direction, since our proposed approach optimizes a smoothed version of GAP. The difference with previous work is that we integrate GAP optimization with latent factor models for learning optimal top-N recommendation in the graded relevance domains. We also contribute a learning algorithm that guarantees that the proposed approach is highly scalable. \vspace{-4pt} \section{Notation and Terminology} \label{sec:prob} We denote the graded relevance data from $M$ users to $N$ items as a matrix $Y^{M\times N}$, in which the entry $y_{mi}$ denotes the grade given by user $m$ to item $i$. Note that for rated items we have $y_{mi}\in\{1,2,\ldots,y_{max}\}$, in which $y_{max}$ is the highest grade. Note also that $y_{mi}=0$ indicates that user $m$'s preference for item $i$ is unknown. $|Y|$ denotes the number of nonzero entries in $Y$. In addition, $I_{mi}$ serves as an indicator function that is equal to 1, if $y_{mi}>0$, and 0 otherwise. We use $U^{D\times M}$ to denote the latent factors of $M$ users, and in particular $U_m$ denotes a $D$-dimensional (column) vector that represents the latent factors for user $m$. Similarly, $V^{D\times N}$ denotes the latent factors of $N$ items and $V_i$ represents the latent factors of item $i$. Note that the latent factors in $U$ and $V$ are model parameters that need to be estimated from the data (i.e., a training set). The relevance between user $m$ and item $i$ is predicted by the latent factor model, i.e., using the inner product of $U_m$ and $V_i$, as below: \begin{align} \label{eqn:tensor} f_{mi}= \left \langle U_m,V_i\right \rangle =\sum_{d=1}^{D}U_{md}V_{id} \end{align} To produce a ranked list of items for a user $m$, all items are scored using Eq.~(\ref{eqn:tensor}) and ranked according to the scores. In the following, we use $R_{mi}$ to denote the rank position of item $i$ for user $m$, according to the descending order of predicted relevances of all items to the user. For example, if the predicted relevance of item $i$ is higher than that of all the other items for user $m$, i.e., if $f_{mi}>f_{mj}, j=1,2,\ldots,N$ and $j\neq i$, then $R_{mi}=1$.
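As a toy illustration of this scoring and ranking step (the names and conventions below are ours, not those of any released implementation), the scores $f_{mi}$ and ranks $R_{mi}$ for one user can be computed as follows:
\begin{verbatim}
import numpy as np

def rank_items(U, V, m):
    """Score all items for user m with f_mi = <U_m, V_i> and return both the
    scores and R_m, where R_m[i] is the 1-based rank of item i (descending)."""
    f = U[:, m] @ V                       # predicted relevance of all N items
    order = np.argsort(-f)                # item indices from best to worst
    R = np.empty(len(f), dtype=int)
    R[order] = np.arange(1, len(f) + 1)   # invert the permutation: rank per item
    return f, R
\end{verbatim}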
Taking into account both the original definition of GAP in~\cite{Robertson2010} and the notation introduced above, we can rewrite the formulation of GAP for a ranked item list recommended for user $m$ as follows: \begin{align} \label{eqn:gap} GAP_m=& \frac{1}{Z_{m}}\sum_{i=1}^N \frac{I_{mi}}{R_{mi}} \sum_{j=1}^{N}I_{mj}\mathbb{I}(R_{mj} \leq R_{mi}) \nonumber \\ & \Big( \mathbb{I}(y_{mi}<y_{mj}) \sum_{l=1}^{y_{mi}}\delta_l + \mathbb{I}(y_{mj} \leq y_{mi}) \sum_{l=1}^{y_{mj}}\delta_l \Big) \end{align} where $\mathbb{I}(\cdot)$ is an indicator function, which is equal to 1 if the condition is true, and 0 otherwise. $\delta_l$ denotes the thresholding probability~\cite{Robertson2010} that the user sets as a threshold of relevance at grade $l$, i.e., regarding items with grades equal to or larger than $l$ as relevant ones, and items with grades lower than $l$ as irrelevant ones. $Z_m$ is a constant normalizing coefficient for user $m$, as defined below: \begin{align} \label{eqn:zm} Z_m=\sum_{l=1}^{y_{max}}n_{ml}\sum_{k=1}^{l}\delta_k \end{align} where $n_{ml}$ denotes the number of items rated with grade $l$ by user $m$. For notational convenience in the rest of the paper, we substitute the last term of the parentheses in Eq.~(\ref{eqn:gap}), as shown below: \begin{align} \label{eqn:subs} \beta_{mij}:= \mathbb{I}(y_{mi}<y_{mj}) \sum_{l=1}^{y_{mi}}\delta_l + \mathbb{I}(y_{mj} \leq y_{mi}) \sum_{l=1}^{y_{mj}}\delta_l \end{align} We assume that each grade $l$ is an integer ranging from 1 to $y_{max}$, since usually a non-integer grade scale can be transformed to an integer grade scale by multiplying by a constant factor, e.g., the scale of 1 to 5 stars with half-star increments can be transformed to the scale of 1 to 10 stars by multiplying by a factor of 2. As suggested in~\cite{Robertson2010}, the value of $\delta_l$ for each grade needs to be empirically tuned according to the specific use cases. In this paper, we adopt an exponential mapping function that maps the grade $l$ to the thresholding probability $\delta_l$, as shown in Eq.~(\ref{eqn:gradeprop2}). Note that other expressions for the definition of $\delta_l$ can also be used, without influencing the main results on the optimization of GAP, as presented in the next section. \begin{align} \label{eqn:gradeprop2} \delta_l=\left\{\begin{matrix} \frac{2^l-1}{2^{y_{max}}}, & y_{max}>1\\ 1, & y_{max}=1 \end{matrix}\right. \end{align} Using the introduced terminology, the research problem investigated in this paper can be stated as: \emph{Given a top-N recommendation scenario involving graded relevance data $Y$, learn latent factors of users and items, $U$ and $V$, through the direct optimization of GAP as in Eq.~(\ref{eqn:gap}) across all the users and items.} \section{GAP\lowercase{fm}} \label{sec:lfmgap} In this section, we present the details of the proposed recommendation approach, {\it GAPfm}. We first introduce a smoothed version of GAP, which we then use to optimize a latent factor model. Further, we analyze the complexity of the learning algorithm, and propose an adaptive selection strategy that achieves constant complexity for {\it GAPfm}. \subsection{Smoothed Graded Average Precision} As shown in Eq.~(\ref{eqn:gap}), GAP depends on the rankings of the items in the recommendation lists. However, the rankings of the items are not smooth with respect to the predicted user-item relevance, and thus, GAP results in a non-smooth function with respect to the latent factors of users and items, i.e., $U$ and $V$.
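To make this concrete, the sketch below evaluates Eq.~(\ref{eqn:gap})--(\ref{eqn:gradeprop2}) for a single user (illustrative Python, not our Matlab implementation). Note that it depends on $U$ and $V$ only through the integer ranks $R_{mi}$, which is precisely the source of the non-smoothness.
\begin{verbatim}
import numpy as np

def gap_user(y, R, y_max):
    """Exact GAP for one user.  y[i]: grade of item i (0 = unrated);
    R[i]: 1-based rank of item i in the recommendation list."""
    y, R = np.asarray(y, dtype=int), np.asarray(R)
    # delta_l and its running sum cum[l] = sum_{k<=l} delta_k
    delta = (2.0 ** np.arange(y_max + 1) - 1.0) / 2.0 ** y_max
    if y_max == 1:
        delta[1] = 1.0
    cum = np.cumsum(delta)
    rated = np.nonzero(y > 0)[0]
    Z = cum[y[rated]].sum()               # normalizing coefficient Z_m
    total = 0.0
    for i in rated:
        # beta_mij = sum_{l <= min(y_i, y_j)} delta_l, over items ranked above i
        inner = sum(cum[min(y[i], y[j])] for j in rated if R[j] <= R[i])
        total += inner / R[i]
    return total / Z if Z > 0 else 0.0
\end{verbatim}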
Therefore, standard optimization methods cannot be used to maximize the objective function as in Eq.~(\ref{eqn:gap}). In this work, we exploit core ideas from the literature on learning to rank~\cite{Chapelle2010} and recent work that successfully used smoothed approximations of evaluation metrics for CF with implicit feedback data~\cite{Shi2012,Shi2012b}. We approximate the rank-based terms in the GAP metric with smoothed functions with respect to the model parameters (i.e., the latent factors of users and items). Specifically, the rank-based terms $1/R_{mi}$ and $\mathbb{I}(R_{mj}\leq R_{mi})$ in Eq.~(\ref{eqn:gap}) are approximated by smoothed functions with respect to the model parameters $U$ and $V$, as shown below: \begin{align} \label{eqn:approx1} \mathbb{I}(R_{mj} \leq R_{mi})\approx g(f_{mj}-f_{mi}) \end{align} \vspace{-2pt} \begin{align} \label{eqn:approx2} \frac{1}{R_{mi}}\approx g(f_{mi}) \end{align} \noindent where $g(x)$ is a logistic function, i.e., $g(x)=1/(1+e^{-x})$. The basic assumption of the approximation in Eq.~(\ref{eqn:approx1}) is validated in~\cite{Chapelle2010}, i.e., the condition of item $j$ being ranked higher than item $i$ is more likely to be satisfied, if item $j$ has relatively higher relevance score than item $i$. However, the approximation in Eq.~(\ref{eqn:approx2}) proposed in~\cite{Shi2012,Shi2012b} is heuristic, and the rationale of this approximation has not been well justified. In this paper, we present our theoretical analysis of the approximation in Eq.~(\ref{eqn:approx2}) and support its validity. The detail of the validation is given in Appendix~\ref{app:approx}. We attain a smoothed version of $GAP_m$ by substituting the approximations introduced in Eq.~(\ref{eqn:approx1}) and (\ref{eqn:approx2}) into Eq.~(\ref{eqn:gap}), as shown below: \begin{align} \label{eqn:smoothgap} GAP_m \approx & \sum_{i=1}^N I_{mi}g(f_{mi}) \sum_{j=1}^{N}I_{mj} \beta_{mij}g(f_{m(j-i)}) \end{align} Note that for notation convenience, we make use of the substitution $f_{m(j-i)}:=\left \langle U_m,V_j\right \rangle-\left \langle U_m,V_i\right \rangle$. In Eq.~(\ref{eqn:smoothgap}), we drop the coefficient $1/Z_m$, which is independent of latent factors, and thus, has no influence on the optimization of $GAP_m$. Taking into account GAP of all $M$ users (i.e., the average GAP across all the users) and the regularization for the latent factors, we obtain the objective function of {\it GAPfm} as below: \begin{align} \label{eqn:f} F(U,V)=& \frac{1}{M}\sum_{m=1}^M \sum_{i=1}^N I_{mi}g(f_{mi}) \sum_{j=1}^{N}I_{mj} \beta_{mij}g(f_{m(j-i)}) \nonumber \\ & -\frac{\lambda}{2}(\|U\|^2+\|V\|^2) \end{align} $\|U\|$ and $\|V\|$ are Frobenius norms of $U$ and $V$, and $\lambda$ is the parameter that controls the magnitude of regularization. Note that the constant coefficient $1/M$ is also dropped in the following, since it has no influence on the optimization of $F(U,V)$. \subsection{Optimization} \label{sec:opt} Since the objective function in Eq.~(\ref{eqn:f}) is smooth over the model parameters $U$ and $V$, we can optimize it using stochastic gradient ascent. In each iteration, we optimize $F(U_m,V)$ for user $m$ independently of all the other users. 
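For illustration, the smoothed objective of Eq.~(\ref{eqn:f}) can be evaluated as in the following sketch (NumPy; our reference implementation is in Matlab, cf.\ Section~\ref{sec:exp}, so the names and structure here are only indicative):
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def smoothed_objective(U, V, Y, y_max, lam):
    """Smoothed objective F(U, V); the constants 1/M and 1/Z_m are dropped,
    as in the text.  Y is the M x N grade matrix, with 0 marking unrated items."""
    delta = (2.0 ** np.arange(y_max + 1) - 1.0) / 2.0 ** y_max
    if y_max == 1:
        delta[1] = 1.0
    cum = np.cumsum(delta)                    # cum[l] = sum_{k<=l} delta_k
    F = 0.0
    for m in range(Y.shape[0]):
        rated = np.nonzero(Y[m] > 0)[0]
        if rated.size == 0:
            continue
        g = Y[m, rated].astype(int)
        beta = cum[np.minimum(g[:, None], g[None, :])]   # beta_mij
        f = U[:, m] @ V[:, rated]
        pair = sigmoid(f[None, :] - f[:, None])          # approximates I(R_mj <= R_mi)
        F += np.sum(sigmoid(f) * np.sum(beta * pair, axis=1))
    return F - 0.5 * lam * (np.sum(U ** 2) + np.sum(V ** 2))
\end{verbatim}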
The gradients of $F(U_m,V)$ with respect to user $m$ and item $i$ can be computed as follows: {\small \begin{align} \label{eqn:fdu} \frac{\partial F}{\partial U_m}=& \sum_{i=1}^{N}I_{mi} \big [ g'(f_{mi})\sum_{j=1}^N I_{mj}\beta_{mij}g(f_{m(j-i)})V_i \nonumber \\ &+ g(f_{mi}) \sum_{j=1}^N I_{mj}\beta_{mij}g'(f_{m(j-i)})(V_j-V_i) \big] -\lambda U_m \end{align} \begin{align} \label{eqn:fdv} \frac{\partial F}{\partial V_i}= & I_{mi}\big [ g'(f_{mi}) \sum_{j=1}^{N}I_{mj} \beta_{mij}g(f_{m(j-i)}) \nonumber \\ &+ \sum_{j=1}^N I_{mj} (\beta_{mji}g(f_{mj})-\beta_{mij}g(f_{mi}))g'(f_{m(j-i)}) \big]U_m \nonumber \\ & -\lambda V_i \end{align} } The derivation of the gradient in Eq.~(\ref{eqn:fdu}) is rather straightforward. However, the derivation of Eq.~(\ref{eqn:fdv}) is more sophisticated, since the latent factors of different items are coupled. For this reason, we leave its complete derivation in Appendix~\ref{app:fdv}. The complexity of computing the gradient in Eq.~(\ref{eqn:fdu}) is $O(DS_m^2+D)$, where $S_m$ denotes the number of items rated/graded by user $m$. Taking into account all the $M$ users, the complexity of computing gradients in Eq.~(\ref{eqn:fdu}) in one iteration can be denoted as $O(D\overline{S}^2M+DM)$, in which $\overline{S}$ is the average number of rated items across all the users. Similarly, for a given user $m$, the complexity of computing the gradient in Eq.~(\ref{eqn:fdv}) is $O(DS_m^2+DS_m)$, and the complexity in one iteration over all the users is $O(D\overline{S}^2M+D\overline{S}M)$. Note that since we have the conditions $|Y|=\overline{S}M$ and $|Y|>>M,\overline{S},D$, the overall complexity of {\it GAPfm} in one iteration can be regarded as $|Y|$, which is linear to the number of observed ratings in the given dataset. In addition, as can be observed, for a fixed set of latent factors $V$, updating the latent factors of user $m$ as in Eq.~(\ref{eqn:fdu}) can be conducted independently of updates of all the other users. Therefore, in each iteration, the optimization of $U$ for individual users can be done in parallel. As a result, the time complexity of updating $U$ in practice can even be a constant, which is determined by the computing facilities (i.e., number of processors), while not bounded by the scale of $|Y|$. In Section~\ref{sec:exp}, we will experimentally show the exploitation of parallel computing for {\it GAPfm} and the resulting benefit. However, as shown in Eq.~(\ref{eqn:fdv}), the latent factors of one item, e.g., item $i$, is coupled with the latent factors of some other items. Therefore, the advantages of using parallel computing for updating $U$ cannot be transferred to the update of $V$. Although the linear complexity with respect to the scale of the given dataset $|Y|$ is already a crucial advantage for the scalability of {\it GAPfm}, we shall still investigate possibilities to further reduce the computational complexity, especially for the cases when the data is of extremely large scale. \subsection{Adaptive Selection} \label{sec:adapt} \label{sec:fast-learning} The key idea we propose to reduce the complexity of updating $V$ is to adaptively select a fixed number ($K$) of items for each user in each iteration and only update their latent factors. The criterion of adaptive selection is to select the $K$ most ``misranked'' (or disordered) items for each user in each iteration. We present the theoretical support for this technique in Appendix~\ref{app:adapt}. 
For example, if user $m$ has three rated items with ratings 2, 4, 5 (i.e., the rank of the third item is 1), and the predicted relevance scores (i.e., $f_{mi}$) after an iteration are 0.3, 0.5, 0.1 (i.e., the rank of the third item is predicted to be 3), then the third item is the most ``misranked'' one, which (assuming $K$ is set to 1) will be selected for updating its latent factors in the next iteration. Essentially, the $K$ items are adaptively selected representing the worst offending items with respect to the loss function. The optimization of the latent factors of these selected items yields the highest benefit in terms of optimizing GAP. We summarize the adaptive selection strategy in Algorithm~\ref{alg:adapt}. Note that with a fixed $K$, the complexity of updating $V$ in one iteration is $O(DK^2M+DKM)$. As a result, using $C=KM$ (note that $C>>D,K$), the overall complexity of {\it GAPfm} is in the magnitude of $O(C)$, which is a constant being independent of the scale of $|Y|$. In Section~\ref{sec:exp}, we will experimentally validate the complexity of {\it GAPfm} and the usefulness of the adaptive selection strategy. Summarizing, the entire learning algorithm of {\it GAPfm} is illustrated in Algorithm~\ref{alg:lfmgap}. \begin{algorithm}[t] \SetAlgoNoLine \KwIn{Graded items by user $m$, i.e., $N_m=\{y_{mi}>0, i=1,2,\ldots,N\}$, latent factors $U_m$ and $V$, and the number of selected items $K$.} \KwOut{Selected item set for user $m$: $T_{m}$.} \If{$\sum_{i=1}^NI_{mi}\leq K$} {$T_m=N_m$\; break \;} Compute $r$ as a vector representing the ranks of items in $N_m$ according to the descending order of $y_{mi}$\; Compute $\hat{r}$ as a vector representing the ranks of items in $N_m$ according to the descending order of $\left \langle U_m,V_i\right \rangle$\; $dist=abs(r-\hat{r})$; \% a vector of absolute error between $r$ and $\hat{r}$ Sort elements of $dist$ in descending order\; Set $idx$ as the indexes of top $K$ elements in $dist$\; $T_m=N_m[idx]$\; \caption{Fast Learning: AdaptiveSelection} \label{alg:adapt} \end{algorithm} \begin{algorithm}[t] \SetAlgoVlined \KwIn{Training set $Y$, the number of items for adaptive selection $K$, regularization parameter $\lambda$, learning rate $\gamma$, and the maximal number of iterations $itermax$.} \KwOut{The learned latent factors $U$ and $V$.} \For{$m=1,2,\ldots,M$ }{ \% Index graded items from each user\; $N_m=\{i|y_{mi}>0,i=1,2,\ldots,N\}$\; } Initialize $U^{(0)}$ and $V^{(0)}$ with random values, and $t=0$\; \Repeat{$t \geq itermax$}{ \% Updating $U$ in parallel\; \For{$m=1,2,\ldots,M$ }{ $U_m^{(t+1)}=U_m^{(t)}+\gamma \frac{\partial F}{\partial U_m^{(t)}}$ based on Eq.~(\ref{eqn:fdu})\; } \% Updating $V$\; \For{$m=1,2,\ldots,M$ }{ $T_m=AdaptiveSelection(N_m,U_m^{(t)},V^{(t)},K)$\; \For{$i\in T_m$}{ $V_i^{(t+1)}=V_i^{(t)}+\gamma \frac{\partial F}{\partial V_i^{(t)}}$ based on Eq.~(\ref{eqn:fdv})\; } } $t=t+1$\;} $U=U^{(t)}$, $V=V^{(t)}$\; \caption{{\it GAPfm}} \label{alg:lfmgap} \end{algorithm} \subsection{Discussion} In addition to the aforementioned learning algorithm of {\it GAPfm} and its complexity analysis, we further discuss two characteristics of GAP that provide more insight into the usefulness of {\it GAPfm}. First, since GAP can be regarded as a generalization of AP to multi-grade data~\cite{Robertson2010}, {\it GAPfm} can be also seen as a generalization of~\cite{Shi2012}, a CF approach that directly optimizes AP in the implicit feedback domain. 
This can also be seen by looking at the smoothed version of GAP in Eq.~(\ref{eqn:smoothgap}), for the case of $y_{max}=1$. For better readability, we leave the detailed proof to Appendix~\ref{app:char}. This characteristic indicates that although {\it GAPfm} is specifically designed for the recommendation domains with graded relevance data, it can also be utilized for the optimization of AP in the domains with implicit feedback data, as it becomes equivalent (for $y_{max} =1$) to the approach proposed in~\cite{Shi2012}. Second, since GAP is an approximation to the area under the graded precision-recall curve as illustrated in~\cite{Robertson2010}, {\it GAPfm} can also be extended to the optimization of graded precision (GP) and graded recall (GR) at the top-N part of the recommendation list, as shown in Appendix~\ref{app:char}. Note that, similar to {\it GAPfm}, we can first approximate GP@n and GR@n by smoothed functions of latent factors, and then learn the latent factor models in the same fashion as in {\it GAPfm} (cf. Section~\ref{sec:opt} and \ref{sec:adapt}). This characteristic of {\it GAPfm} is important for real-world applications, since {\it GAPfm} can be easily adapted to real-world systems and tuned to achieve high recall or precision at a certain position in users' recommendation lists. We also note that, beyond the scope of the current paper, our approach could also be applied for optimizing another recently proposed evaluation metric, expected reciprocal rank~\cite{Chapelle2009}. Here, we keep our focus on the optimization of GAP, and on understanding the properties of {\it GAPfm} and evaluating its performance. \section{Experimental Evaluation} \label{sec:exp} In this section we present a series of experiments to evaluate the proposed {\it GAPfm} algorithm. We first give a detailed description of the dataset and setup that is used in the experiments. Then, we validate several properties of {\it GAPfm}, as mentioned in Section~\ref{sec:lfmgap}. Finally, we compare the recommendation performance between {\it GAPfm} and several state-of-the-art alternative approaches. We design the experiments in order to address the following research questions: 1) Is {\it GAPfm} effective for optimizing GAP? 2) Is {\it GAPfm} scalable for large-scale use cases? 3) Does {\it GAPfm} outperform state-of-the-art CF approaches for top-N recommendation? \subsection{Experimental Setup} \subsubsection{Dataset} \label{sec:data} The Netflix Prize dataset is one of the most widely used graded relevance datasets for CF\footnote{http://en.wikipedia.org/wiki/Netflix\_Prize}. Two parts of the dataset are used, i.e., the training set and the probe set. The training set consists of ca. 99M ratings (integers scaled from 1 to 5) from ca. 480K users to 17.7K movies. The probe set contains ca. 1.4M ratings disjoint from the training set. The training set is used for building recommendation models, and the probe set is used for the evaluation. In the experiments, we exclude the users with fewer than 50 ratings from the training set. This choice is made to guarantee that users have a sufficient number of ratings so as to facilitate our investigation of different cases in terms of the number of ratings per user. This also allows us to analyze the complexity of {\it GAPfm}, as discussed in Section~\ref{sec:fast-learning}. Note that this exclusion criterion removes one third of users, without dramatically reducing the size of the dataset, i.e., it only results in a reduction of 4\% in the number of ratings.
The statistics of the training set are summarized in Table~\ref{tab:data}. \begin{table}[t] \small \centering \caption{\small Statistics of the training set in the experiments.} \label{tab:data} \begin{tabular}{c c c c} \hline \# users & \# movies & \# ratings & Sparseness \\ \hline 319275 & 17770 & 95085018 & 98.32\% \\ \hline \end{tabular} \end{table} \subsubsection{Experimental Protocol} \label{exp:protocol} We randomly select a certain number of rated movies and their ratings for each user in the training set to form a training data fold. For example, under the condition of ``Given 10'', we randomly select 10 rated movies for each user in order to generate a training data fold, from which the ratings are then used as the input data to train the recommendation models. In the experiments we investigate a variety of ``Given'' conditions, i.e., 10, 20, 30, and 50. Based on the learned recommendation models, recommendation lists can be generated for each user, and the performance can be measured according to the ground truth in the probe set. Note that the probe set was originally designed for the purpose of measuring the accuracy of rating prediction in the Netflix Prize competition. However, it is not guaranteed that the performance of a recommendation list is measurable for every user. For example, it is infeasible to measure the performance of a ranked list if a user has only one rating in the probe set. For this reason, in our experiments we choose to measure the recommendation performance only based on the ground truth of the users who have at least 5 rated movies in the probe set (ca. 14\% of the users in the probe set). Note that the choice of 5 allows all the evaluation metrics (as introduced later in this section) to achieve the highest possible value of $1$ for the task of top-5 recommendation. In addition to GAP, two further evaluation metrics are used in the experiments to measure the recommendation performance, i.e., NDCG and Precision. As mentioned in Section~\ref{sec:intro}, NDCG is a well-known evaluation metric for measuring the performance of ranked results with graded relevance. Precision is a traditional evaluation measure that reflects the ratio of relevant items in the ranked list at a given truncation position. A relevance threshold for determining relevant items is necessary when precision is applied to graded relevance scenarios. We set the relevance threshold to be 5 (the highest rating value in the dataset) when measuring the precision of the recommendation list. This choice is also supported by the literature on recommender systems evaluation~\cite{Cremonesi2010}. Since in recommender systems the user's satisfaction is dominated by only a few items at the top of the recommendation list, our evaluation in the following experiments focuses on the performance of the top-5 recommended items, i.e., GAP@5, NDCG@5 and P@5 (averaged across all the test users) are used to measure the recommendation performance. Note that in the evaluation we cannot treat all the unrated items/movies as irrelevant to a given user. A widely used practical strategy~\cite{Cremonesi2010,Koren2008,Shi2012} is to first randomly select 1000 unrated items (which are assumed to be irrelevant to the user) in addition to the ground truth (i.e., rated items) for each user in the probe set, and then evaluate the performance of the recommendation list that consists of only the selected unrated items and the ground truth items. This evaluation strategy is also adopted here.
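The per-user evaluation can be summarized by the following minimal NumPy sketch. The data layout and variable names are illustrative assumptions (they are not the interface of our actual implementation), and predicted relevance is assumed to be the inner product of the latent factors; only P@5 with the relevance threshold of 5 is shown, but NDCG@5 and GAP@5 are computed on the same candidate list.
\begin{verbatim}
import numpy as np

def precision_at_5(U_m, V, probe_items, probe_ratings, unrated_pool, rng):
    """Rank the user's probe items together with 1000 randomly sampled unrated
    items and report P@5, counting probe items rated 5 as relevant."""
    sampled = rng.choice(unrated_pool, size=1000, replace=False)
    candidates = np.concatenate([probe_items, sampled])
    relevant = np.concatenate([probe_ratings == 5,                    # rated 5 -> relevant
                               np.zeros(sampled.size, dtype=bool)])   # sampled items assumed irrelevant
    scores = V[candidates] @ U_m                                      # predicted relevance <U_m, V_i>
    top5 = np.argsort(-scores)[:5]
    return relevant[top5].mean()

# Example usage: rng = np.random.default_rng(0); the score is then averaged over
# all test users with at least 5 rated movies in the probe set.
\end{verbatim}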
We randomly select 1.5\% (a similar size to the probe set) of the data in the training set to generate a validation set, which is used to determine the parameters involved in {\it GAPfm} and the baseline approaches, and also to investigate the properties of {\it GAPfm} as presented in the next section. \paragraph{Parameter setting} We set the dimensionality of the latent factors to 10 for both {\it GAPfm} and the other baseline approaches based on latent factor models. The remaining parameters are empirically tuned so as to yield the best performance on the validation set, i.e., for {\it GAPfm} we set the regularization parameter $\lambda$=0.001 and the learning rate $\gamma$=$10^{-5}$. \paragraph{Implementation} We implement the proposed {\it GAPfm} in Matlab using its parallel computing toolbox, and run the experiments on a single PC with an Intel Core-i7 (8 cores) 2.3GHz CPU and 8 GB of RAM. For the purpose of reproducibility, we make our implementation code of {\it GAPfm} publicly available (link blinded for review). \subsection{Validation: Effectiveness} In the first experiment we investigate the effectiveness of {\it GAPfm}, i.e., whether learning latent factors based on {\it GAPfm} contributes to the improvement of GAP. We use the training set under the conditions ``Given 10'', ``Given 20'' and ``Given 30'', respectively, to train the latent factors, $U$ and $V$, which are then used to generate recommendation lists for individual users. The performance in terms of GAP is measured according to the hold-out data in the validation set along the iterations of the learning algorithm as described in Section~\ref{sec:lfmgap}. The results are shown in Fig.~\ref{fig:effect}, which demonstrates that GAP gradually improves along the iterations and attains convergence after a certain number of iterations. For example, under the condition of ``Given 10'', it converges after 60 iterations, while it converges in fewer iterations as more data from the users is available for training. Based on the observations from this experiment, we can confirm a positive answer to our first research question. \begin{figure}[t] \centerline{ \includegraphics[width=7.5cm]{effect} } \caption{\small The Effectiveness of {\it GAPfm}.} \label{fig:effect} \end{figure} \subsection{Validation: Scalability} \label{sec:scale} We conduct three experiments to validate the scalability of {\it GAPfm}. In the first experiment we empirically investigate the benefits that we can draw from employing parallel computing for updating the latent user factors in {\it GAPfm}. We then validate the overall complexity of {\it GAPfm} as theoretically analyzed in Section~\ref{sec:opt}. The last experiment is conducted to investigate the impact of the proposed adaptive selection strategy, as discussed in Section~\ref{sec:adapt}. \subsubsection{Parallel Updating of Latent User Factors} As shown in Section~\ref{sec:opt}, the update of the latent user factors $U$ in {\it GAPfm} can be conducted in parallel. By utilizing multiple cores/processors of the machine used for the experiments, we empirically investigate the average runtime for updating $U$ per iteration against the number of processors employed. Note that updating $U$ in parallel has no influence on the quality of the resulting latent factors, and thus, the performance of the recommendation lists is not influenced.
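To illustrate why this step parallelizes trivially, the sketch below distributes the per-user gradient updates of Algorithm~\ref{alg:lfmgap} over worker processes. It is only a structural sketch under stated assumptions: the gradient routine is a stand-in for Eq.~(\ref{eqn:fdu}) (it returns zeros so the code runs end to end), and the process-pool layout is hypothetical; our own implementation relies on Matlab's parallel computing toolbox instead.
\begin{verbatim}
import numpy as np
from multiprocessing import Pool

GAMMA = 1e-5  # learning rate used in our experiments

def grad_F_Um(U_m, V_items, ratings):
    """Stand-in for the GAP gradient with respect to U_m; replace with the real expression."""
    return np.zeros_like(U_m)

def user_step(args):
    """One gradient-ascent step for a single user; it touches only that user's row of U."""
    U_m, V_items, ratings = args
    return U_m + GAMMA * grad_F_Um(U_m, V_items, ratings)

def update_U_parallel(U, V, item_lists, rating_lists, n_procs=8):
    """Rows of U are independent given V, so they can be refreshed concurrently."""
    args = [(U[m], V[item_lists[m]], rating_lists[m]) for m in range(U.shape[0])]
    with Pool(n_procs) as pool:
        return np.vstack(pool.map(user_step, args))
\end{verbatim}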
As shown in Fig.~\ref{fig:par}, for each of the ``Given'' conditions, the runtime of updating $U$ in the learning algorithm is reduced remarkably when increasing the number of processors used to parallelize the learning process. Note that the reduction of the runtime for updating $U$ is not exactly inversely proportional to the number of processors, due to system process maintenance overhead. However, the experiments are sufficient to demonstrate that parallelization can be used to speed up the optimization of the latent user factors in {\it GAPfm}, given adequate computing facilities. \begin{figure}[t] \centerline{ \includegraphics[width=7cm]{par} } \caption{\small The runtime of updating $U$ in parallel in the learning algorithm of {\it GAPfm}.} \label{fig:par} \end{figure} \subsubsection{Linear Complexity} Since the size of the data used for training the latent factors in {\it GAPfm} varies under different ``Given'' conditions, we can empirically measure the complexity of {\it GAPfm} by observing the computational time consumed under each condition. In Fig.~\ref{fig:linear}, the average iteration time is shown, which grows nearly linearly as the number of ratings used for training increases. This result validates the scalability of {\it GAPfm} under the basic setting, i.e., it scales linearly with the amount of observed data in the training dataset. \begin{figure} \centerline{ \includegraphics[width=7.5cm]{linear} } \caption{\small The average iteration time against the size of the data used for training {\it GAPfm}.} \label{fig:linear} \end{figure} \subsubsection{Impact of Adaptive Selection} In order to investigate the impact of the adaptive selection strategy for {\it GAPfm}, we design an experiment under the condition of ``Given 50'' to measure the computational cost for the cases where different numbers of items, i.e., different values of $K$ (varying from 5 to 50), are adaptively selected during the iterations for each user for training {\it GAPfm}. Meanwhile, we measure the performance of GAP based on the hold-out data in the validation set, corresponding to each value of $K$. The results are shown in Fig.~\ref{fig:adapt}. Note that increasing $K$ is equivalent to increasing the size of the data used for training {\it GAPfm}. Thus, as shown in the previous experiment, the average iteration time increases almost linearly with the growth of $K$. However, we do observe that {\it GAPfm} already achieves relatively high GAP even with a small value of $K$, compared to the GAP achieved when the latent factors of all the items (i.e., $K=50$) are updated. For instance, when only 20 items are adaptively selected for updating their latent factors in each iteration of the learning algorithm, i.e., $K=20$, nearly 75\% of the runtime can be saved, compared to the case of updating the latent factors of all the items. Meanwhile, the drop in GAP is only around 5\%, i.e., the GAP is 0.206 when $K=20$, and 0.216 when $K=50$. For large-scale datasets where computational cost is crucial, the small performance loss introduced by the adaptive selection process is outweighed by the large savings in computational cost. As analyzed in Section~\ref{sec:adapt}, we can practically maintain a constant complexity of {\it GAPfm} by using adaptive selection with a fixed $K$. In the next section, we will further demonstrate the performance of {\it GAPfm} with adaptive selection in the case of $K=20$.
Finally, we also conducted an experiment to validate the utility of adaptive selection compared to random selection, i.e., in the case of $K=20$ we randomly select 20 rated items for updating their latent factors in each iteration of {\it GAPfm}. Random selection with $K=20$ produced a value of GAP = 0.166, which is 19\% lower than that obtained using adaptive selection. Overall, the experimental results confirm the value of the proposed adaptive selection for {\it GAPfm} in terms of both recommendation performance and computational cost. \begin{figure} \centerline{ \includegraphics[width=7.5cm]{adapt} } \caption{\small The impact of the value of $K$ on both the computational cost and the performance of GAP.} \label{fig:adapt} \end{figure} Summarizing, the observations from the experiments presented above allow us to give a positive answer to our second research question, i.e., {\it GAPfm} is highly scalable (with constant complexity independent of the scale of the given dataset) and can be used for large-scale datasets. \begin{table*}[th] \small \centering \caption{\small Performance comparison of {\it GAPfm} and the baseline approaches on the Netflix dataset.} \label{tab:perfnf} \begin{tabular}{c|c c c|c c c|c c c} \hline & & Given 10 & & & Given 20 & & & Given 30 & \\ & P@5 & NDCG@5 & GAP@5 & P@5 & NDCG@5 & GAP@5 & P@5 & NDCG@5 & GAP@5\\ \hline PopRec & 0.023 & 0.080 & 0.165 & 0.024 & 0.084 & 0.171 & 0.026 & 0.085 & 0.176\\ SVD++ & 0.023 & 0.065 & 0.133 & 0.027 & 0.071 & 0.139 & 0.032 & 0.087 & 0.168\\ CofiRank & 0.027 & 0.093 & 0.181 & 0.027 & 0.088 & 0.182 & 0.025 & 0.088 & 0.186\\ {\it GAPfm} & {\bf 0.040} & {\bf 0.109} & {\bf 0.195} & {\bf 0.042} & {\bf 0.111} & {\bf 0.207} & {\bf 0.042} & {\bf 0.114} & {\bf 0.213}\\ \hline \end{tabular} \end{table*} \subsection{Performance on Top-N Recommendation} \label{sec:perf} We compare the performance of {\it GAPfm} with three baseline approaches. Each of the baseline approaches is listed and briefly introduced below: \begin{packed_item} \item \textbf{PopRec} is a naive and non-personalized baseline that recommends movies according to their popularity, i.e., the number of ratings from all the users. Note that another naive baseline based on the average rating of movies was also tested, but it achieved rather low performance compared to the other baselines. For this reason, we excluded it from the experimental results reported in this paper. Although a naive baseline, PopRec has been shown in the literature to have competitive performance for the top-N recommendation task on the Netflix dataset~\cite{Cremonesi2010}. \item \textbf{SVD++} is a state-of-the-art CF approach, which has been shown to be superior for the rating prediction task, as witnessed in the Netflix Prize contest~\cite{Koren2008}. We use the implementation of SVD++ available in GraphLab\footnote{http://graphlab.org/toolkits/collaborative-filtering/}~\cite{Low2012}. Note that the dimensionality of the latent factors used in SVD++ is set to 10, the same as the setting for the proposed {\it GAPfm}. Other parameters, such as the regularization parameter, are tuned according to the observations from the validation set. \item \textbf{CofiRank} is a state-of-the-art CF approach that specifically optimizes the ranking performance of recommendation results~\cite{Weimer2007}. In addition, to the best of our knowledge, CofiRank is also the only well-established CF approach that directly optimizes the NDCG measure for graded relevance datasets.
Note that the implementation of CofiRank is based on the publicly available software package from the authors\footnote{http://www.cofirank.org/}. The dimensionality of the latent factors is also set to 10, and the remaining parameters are tuned on the validation set. \end{packed_item} \vspace{-2.5mm} In Table~\ref{tab:perfnf}, we present the performance of {\it GAPfm} and the baseline approaches for ``Given'' 10 to 30 items per user in the training set. Under all three conditions, {\it GAPfm} largely outperforms all the baselines, i.e., by over 30\% in P@5, 15\% in NDCG@5 and 10\% in GAP@5. All the improvements are statistically significant according to a Wilcoxon signed-rank test with $p<0.01$. The results indicate that the proposed {\it GAPfm} is highly competitive for the top-N recommendation task. We also demonstrate that the optimization of GAP leads to improvements in terms of precision and NDCG. We also notice that SVD++ is only slightly better than PopRec in P@5 (which was also observed in the related work of~\cite{Cremonesi2010}), but worse than PopRec in both NDCG@5 and GAP@5. This result again indicates that optimizing rating predictions does not necessarily lead to good performance for top-N recommendations. In Table~\ref{tab:perfadapt}, we show the performance of {\it GAPfm} with adaptive selection for ``Given 50'' items per user in the training set, which simulates the case of relatively large user profiles. As indicated in Section~\ref{sec:scale}, we adopt $K=20$ for the adaptive selection in {\it GAPfm}. The full {\it GAPfm} still achieves a large improvement over all of the baselines across all the metrics, and also slightly outperforms {\it GAPfm} with adaptive selection, i.e., by ca. 6\% in NDCG@5 and ca. 8\% in GAP@5. However, we observe that {\it GAPfm} with adaptive selection still improves over PopRec, SVD++ and CofiRank to a significant extent. Moreover, as mentioned before, under the condition of ``Given 50'', training {\it GAPfm} with adaptive selection at $K=20$ saves around 75\% of the computation time. Therefore, the drop in recommendation performance can be considered acceptable for real-world applications. \begin{table}[t] \small \centering \caption{\small Performance of {\it GAPfm} with adaptive selection ($K=20$) on the Netflix dataset under the condition ``Given 50''} \label{tab:perfadapt} \begin{tabular}{c c c c} \hline & P@5 & NDCG@5 & GAP@5\\ \hline PopRec & 0.025 & 0.085 & 0.172\\ SVD++ & 0.031 & 0.080 & 0.150\\ CofiRank & 0.023 & 0.084 & 0.188\\ {\it GAPfm} & 0.044 & 0.122 & 0.219\\ {\it GAPfm}+Adaptive Selection & 0.044 & 0.115 & 0.201\\ \hline \end{tabular} \end{table} \begin{table*}[ht!] \small \centering \caption{\small NDCG performance of {\it GAPfm} compared to WLT on the MovieLens 100K dataset} \label{tab:perfrated} \begin{tabular}{c|c c c|c c c|c c c|c c c} \hline & & Given 10 & & & Given 20 & & & Given 30 & & & Given 40 & \\ & @1 & @3 & @5 & @1 & @3 & @5 & @1 & @3 & @5 & @1 & @3 & @5\\ \hline WLT & 0.710 & 0.683 & 0.680 & 0.703 & 0.695 & 0.692 & 0.714 & 0.712 & 0.710 & 0.741 & 0.719 & 0.715\\ {\it GAPfm} & 0.709 & \textbf{0.692} & \textbf{0.683} & \textbf{0.717} & 0.691 & \textbf{0.695} & \textbf{0.722} & 0.709 & 0.708 & 0.736 & 0.712 & 0.704\\ \hline \end{tabular} \end{table*} \subsection{Performance on Ranking Graded Items} \label{sec:perfrank} The last experiment is conducted to examine the performance of {\it GAPfm} on the task of ranking a given list of rated/graded items.
We compare {\it GAPfm} with other collaborative ranking approaches proposed in the literature. Note that our focus in this paper is on top-N recommendation, which differs essentially from the ranking of rated items. In this setting we do not sample unrated items but only focus on the correct ranking of the rated items. However, this experiment serves only to verify that {\it GAPfm} would still be competitive even in the case of evaluating the ranking of rated items. For this reason, we extend our experiment to a different dataset, i.e., the MovieLens 100K dataset\footnote{http://www.grouplens.org/node/73}~\cite{Herlocker1999}, and compare the performance of {\it GAPfm} with a state-of-the-art approach, Win-Loss-Tie (WLT) feature-based collaborative ranking~\cite{Volkovs2012}, which is, to our knowledge, the latest contribution to collaborative ranking. We follow exactly the same experimental protocol as used in the work of~\cite{Volkovs2012}, to allow us to make a straightforward comparison with the results reported in their work. The results are shown in Table~\ref{tab:perfrated}. We observe that {\it GAPfm} achieves competitive performance for ranking rated items, across all the conditions and NDCG at all the truncation levels. These results indicate that the optimization of GAP for top-N recommendation naturally leads to improvements in terms of ranking graded items. In summary, according to our observations from Sections~\ref{sec:perf} and~\ref{sec:perfrank}, we can give a positive answer to our last research question. \section{Conclusions and future work} \label{sec:conclude} We have presented {\it GAPfm}, a new CF approach for top-N recommendation that learns a latent factor model which directly optimizes GAP. We have proposed an adaptive selection strategy for {\it GAPfm} so that it attains a constant computational complexity, which guarantees its usefulness for large-scale use scenarios. Our experiments also empirically validate the scalability of {\it GAPfm}. {\it GAPfm} is demonstrated to substantially outperform the baseline approaches for the top-N recommendation task, while also being competitive in the performance of ranking graded items, compared to the state of the art. There are a few directions for future work. First, inspired by statistical analysis of evaluation metrics~\cite{Wang2010b}, we would like to analyze the relations and differences between learning methods that optimize different evaluation metrics. Second, we are also interested in developing a distributed version of the proposed {\it GAPfm}, taking into account recent contributions on distributed latent factor models~\cite{Gemulla2011}. Third, considering the multi-faceted relevance judgments in recommender systems, such as accuracy, diversity, and serendipity, we would also like to investigate the possibilities of optimizing top-N recommendation with multiple cohesive or competing objectives~\cite{Agarwal2012a,Jambor2010,Svore2011}. \bibliographystyle{abbrv}
\section{Introduction} Recent decades have seen improvements in the handling of extreme ultraviolet (XUV) and X-ray light in terms of coherence, \cite{Yu2019, Halavanau2020} intensity, \cite{Halavanau2020} and time control. \cite{KumarMaroju2005, Duris2019, Halavanau2020} As a result, scientists have been able to observe phenomena in chemistry, \cite{Chang2020, Loh2020, Lin2021} materials science, \cite{Sette1998, Attar2020} and physics \cite{Mazza2020, Haynes2021} that were previously inaccessible. Furthermore, the increasing availability of table-top equipment \cite{Popmintchev2012, Zimmermann2020, Barreau2020, Scutelnic2021} capable of generating the light required for core spectroscopies has extended the use of these techniques to a variety of new studies. \cite{Epshtein2020} Efficiently and accurately modeling core excited states presents several challenges that a useful methodology should address, chief among them the large charge rearrangement associated with the creation of the core hole. Within the independent particle model, this charge rearrangement results in a strong contraction of the orbitals due to the decreased nuclear screening - this is referred to as orbital relaxation in the literature. The most widely used method for calculating valence excited states, time-dependent density functional theory (TD-DFT), struggles to describe core excited states (and charge transfer states in general) because the linear-response formalism fails to account for the charge rearrangement when standard exchange-correlation functionals are used.\cite{BEsley2010, Lyon2017, Maitra2017, Maitra2021} Functionals that are custom-built for core excitations have, nonetheless, shown promise in bringing the computational cost advantages of DFT over to the realm of core spectroscopies.\cite{BEsley2010} To circumvent the uncertainty associated with the choice of functionals, established wavefunction theories that are well-regarded for their accuracy in describing valence excitations, such as equation-of-motion coupled-cluster (EOM-CC) theory and algebraic diagrammatic construction (ADC), have been extended to core excitations by specifically targeting the very high-energy core excitation roots of their effective Hamiltonians. \cite{Coriani2012, Wenzel2015, Vidal2019} An alternative approach followed by state-specific methods such as $\Delta$SCF \cite{Triguero1999, Hait2020} and its correlated relatives, \cite{Besley2009, Duflot2010, Shim2011, Ljubic2014, Zheng2019, Zheng2020, Lee2019, Huang2021} the closely-related Transition Potential (TP)-SCF approaches,\cite{Hu1996, Triguero1998, Triguero1999, Michelitsch2019} a number of multi-reference (MR) wavefunction models, \cite{Brabec2012, Sen2013, Dutta2014, Maganas2019} excited state mean field theory,\cite{Garner2020-2} and Monte-Carlo-based approaches,\cite{Garner2020-1} is to account for relaxation in some way by optimizing for a target state. The $\Delta$SCF approach, for example, converges a set of orbitals in a configuration that resembles the one-electron picture of the core excitation in question. These are non-Aufbau solutions to the self-consistent field (SCF) equations and are often saddle points in orbital space. Similarly, TP-SCF employs configurations optimized for a fractional core occupancy in the hopes of providing a reference of similar quality for both the ground and the core excited states. A difficulty of orbital-optimized excited state approaches is the possibility of landing on an undesired SCF solution of lower energy.
In the context of mean-field approaches, such as Hartree-Fock (HF) and density functional theory (DFT), this issue has been addressed by algorithms specialized for excited state optimization, such as the maximum overlap method (MOM)\cite{Gilbert2008}, and, more recently, the initial MOM (IMOM)\cite{barca2018simple}, square-gradient minimization (SGM)\cite{Hait2020} and state-targeted energy projection (STEP)\cite{Carter-Fenk2020} methods. $\Delta$SCF has been used for decades to calculate core ionizations with success.\cite{Triguero1999, Besley2009, Ljubic2014} In the cases where there are symmetry-equivalent atoms present in the system, an orbital localization procedure (such as that of Boys\cite{Boys1960}) must be carried out on the core orbitals prior to SCF re-optimization to allow for proper orbital relaxation.\cite{Zheng2019} The spatial symmetry breaking technically renders these situations multi-reference (MR) since multiple configurations must be re-combined via non-orthogonal configuration interaction (NOCI) to yield states of the proper spatial symmetry. In practice, the splitting between the symmetry-adapted configurations is small,\cite{Liu2019, Oosterbaan2020} so that the MR character associated with the core hole localization can be disregarded without serious error. The $\Delta$SCF ionization energies, as calculated with the spatially symmetry-broken configurations are often good estimates of what would be observed in an experiment. Studies on core excitations with $\Delta$SCF have been more sparse until quite recently.\cite{Besley2009} In some measure this is due to the fact that MR character now factors in because of the need for two configurations for a spin-pure description of the excited state. The approximate spin-projection scheme (AP) established a way to estimate the excitation energy of the pure singlet, provided that the energies of a spin-contaminated singlet and the pure triplet are known.\cite{yamaguchi1988spin,Kitagawa2009} An attractive alternative to AP for $\Delta$SCF calculations is the use of restricted open-shell Kohn-Sham orbitals (ROKS), which optimizes the spin-pure singlet energy as computed via the AP scheme for a mixed and a triplet (M$_s$ = 1) configuration sharing the same set of restricted open-shell (RO) orbitals.\cite{frank1998molecular,Filatov1999,kowalczyk2013excitation,hait2021orbital} Recently, this technique (and a generalized version for radicals\cite{hait2020accurate}) has been used to study core excited states with the best-performing functional (SCAN) achieving an impressive 0.2 eV root-mean-squared-deviation (RMSD) from experimental results for a representative set of small organic molecules.\cite{Hait2020} With an appropriate treatment of scalar relativistic effects, ROKS has also been employed to tackle the K-edge of third-group elements.\cite{cunha2021relativistic} Excited SCF solutions are often a better reference than the ground state for finding alternative solutions to the CC equations, which in turn are reasonable approximations to the true excited states.\cite{Meissner1993} Explicit SCF re-optimization takes care of the strong orbital relaxation and allows single-reference (SR) post-HF methods such as second order M{\o}ller-Plesset perturbation theory (MP2) and CC to focus on addressing the remaining dynamic correlation of a system. 
Core ionized states of closed-shell systems are perfect cases to be treated by these models and they have been studied via $\Delta$MP2\cite{Triguero1999, Besley2009, Duflot2010, Shim2011, Ljubic2014} and, more recently, $\Delta$CCSD(T) \cite{Zheng2019, Lee2019, Zheng2020}. The last decade has seen an effort to also employ explicitly-relaxed orbitals in (wavefunction-based) correlated calculations of singlet excited states \cite{Besley2009, Lee2019, Matthews2020, Huang2021, Brabec2012, Sen2013, Dutta2014, Maganas2019}. Among these, the wavefunction theories employing an explicit MR construction are often constrained to studying only a few molecules in small basis sets, which means they can only be compared to other computational methods in the same small basis sets. \cite{Brabec2012, Sen2013, Dutta2014, Maganas2019} Simons and Matthews have recently proposed a theory, TP CC, that employs a TP SCF reference for an EOM-CC calculation of the core excited states.\cite{Simons2021} This model inherits one of the advantages of state-specific methods - orbital relaxation - while retaining the advantages of EOM-CC: inherent spin-adaptation of the excited states, a full spectrum with a single calculation, and straightforward transition properties. The price to pay comes from relying on a deteriorated description of the ground state relative to standard CC, controlled by tuning the fractional occupation number of the core orbital. Even though this renders the model arbitrary to some extent, Simons and Matthews have carried out a study to find an optimal core occupancy parameter transferable across edges of the same element, making this a promising method for reliable and affordable high-accuracy wavefunction X-ray calculations.\cite{simons2022transitionpotential} Owing to the simple nature of the MR character of singly core excited states of closed-shell systems (namely, a two-determinant CSF), the objective of this paper is to assess the use of the SR CC formalism with orbital-optimized references, limited to the level of single and double substitutions (i.e., CCSD, for computational tractability), for core excitations. In contrast to TP-CC, the protocols presented here are well-defined in that only the molecule and the transition of interest need to be specified - the ground state CC wavefunction and energies are used as is, and no compromise is made in the excited state wave function either. The major issue to be addressed in order to permit such calculations is how to treat the electron correlation effects that couple core orbitals with either other core levels or valence levels. De-excitation into the core hole can lead either to numerical instabilities or to variational collapse towards the ground state. Therefore, a suitable adaptation of SR CCSD for state-specific optimization of core excited states must treat core correlation, as well as remove potentially ill-behaved amplitudes. Additionally, a correction to produce approximately spin-pure eigenstates is required. This paper is organized as follows. After a review of the appropriate theory, we describe three candidate approaches that we deem potentially promising. Two of them employ Yamaguchi's AP approach\cite{yamaguchi1988spin}, while the third one instead enforces correct spin symmetry at the ROHF level by constraining the amplitude of the double substitution that flips the spins of the two half-occupied orbitals to $-1$ for singlet and $+1$ for triplet states.
A comparison of these approaches is then made against successful core-excited state theories, ROKS(SCF) and FC-CVS-EOM-CC, with the ultimate judge being the experimental results. The energetic differences between the singlet and triplet core excited states, presumed to be accurate enough to make a statement about them, are presented. An effort is made to reach basis set convergence for all methods in order to exclude this factor from the discussion as much as possible and focus on their inherent performance. Despite the computational demands of approaching the basis set limit (BSL) for CC methods constraining us to molecules with at most two heavy atoms, the data set is diverse in terms of the elements targeted (Be, C, N, O, F, Ne) and in terms of the excited state character ($\sigma^*$, $\pi^*$, Rydberg). In total, a set of 21 excitations and 18 ionizations on 18 small closed-shell organic molecules is used. \subsection{Background} Following convention, we will reserve the indexes $i, j, k \dots$ for any occupied orbital, $a, b, c \dots$ for any virtual orbital, and $p, q, r \dots$ for an arbitrary orbital. For the CCSD amplitudes, we will use the symbols $t_i^a$ and $t_{ij}^{ab}$, collected in $T_1$ and $T_2$. For a set of orbitals that are not necessarily canonical, the CCSD amplitude equations take the following form: \begin{align} D_{i}^{a} t_{i}^{a} &= F_{ia} + w_{i}^{a}(T_1,T_2) \label{eq:t1}\\ D_{ij}^{ab} t_{ij}^{ab} &= \bra{ij}\ket{ab} + w_{ij}^{ab}(T_1,T_2) \label{eq:t2} \end{align} The terms $w_{i}^{a}(T_1,T_2)$ and $w_{ij}^{ab}(T_1,T_2)$ in Eqs. \ref{eq:t1} and \ref{eq:t2} collect the contributions that are linear and higher order in $T_1$ and $T_2$, separate from the orbital energy differences $D_{i}^{a}$ and $D_{ij}^{ab}$ defined below\cite{Stanton1991}. \begin{align} D_{i}^{a} &= \varepsilon_i - \varepsilon_a \\ D_{ij}^{ab} &= \varepsilon_i + \varepsilon_j - \varepsilon_a - \varepsilon_b \end{align} Here $\varepsilon_p$ are the orbital energies themselves. $D_{i}^{a}$ and $D_{ij}^{ab}$ will always be negative when employing a ground state reference and, in the absence of strong correlation, are large enough in magnitude to make the $T$ amplitudes well behaved (i.e. $\mathrm{max}\left[|t_i^a|,|t_{ij}^{ab}| \right] \ll 1$). State-specific optimization of a core excited state, on the other hand, correlates a non-Aufbau SCF reference. Here, we make use of three different kinds of such (beta) core-excited references: (i) open-shell, symmetry-broken $M_S=0$ references for the calculation of the singlet core-excited states; (ii) open-shell, spin-pure triplet $M_S=1$ references for the AP approach, when needed; and (iii) open-shell, spin-pure $M_S=\frac{1}{2}$ doublet references for the calculation of core ionized states. In the case of the spin-pure triplet and pure doublet references, standard ROHF is used in conjunction with MOM. The use of unrestricted orbitals for the symmetry-broken reference was found to be detrimental to some of our $\Delta$CC schemes, so ROKS(HF) orbitals, followed by a Fock build for the broken-symmetry singlet state and further pseudocanonicalization, were employed instead. With these choices of reference, and specific to the case of core excitations, the presence of a virtual orbital with a large negative energy representing the core hole (we reserve the indexes $h$ and $\bar{h}$ for the occupied alpha core orbital and the virtual beta core orbital) allows the denominators $D_i^a$ and $D_{ij}^{ab}$ to be positive when $a = \bar{h}$.
In the case of single excitations, $a^\dag_{\bar{h}}a_i$, this occurs when the occupied orbital has a higher orbital energy than the core virtual \begin{equation}\label{eq:single} \varepsilon_i > \varepsilon_{\bar{h}} \end{equation} The condition in Eq.~\ref{eq:single} holds unless there are other core orbitals of lower orbital energy. In the case of double excitations, $a^\dag_{\bar{h}}a_i a^\dag_b a_j$, $D_{ij}^{\bar{h}b}$ will be positive when \begin{equation}\label{eq:double} \varepsilon_i + \varepsilon_j - \varepsilon_b > \varepsilon_{\bar{h}} \end{equation} One scenario where this happens is when the excitation $a^\dag_{\bar{h}} a_i$ involves a valence occupied orbital and the excitation $a^\dag_b a_j$ involves only valence orbitals. The denominator $D_{ij}^{\bar{h}b}$ can still be negative if the other virtual has an orbital energy $\varepsilon_b$ positive and large enough to break Eq.~\ref{eq:double}. Furthermore, the orbital energies can conspire to make $\varepsilon_i + \varepsilon_j - \varepsilon_b \approx \varepsilon_{\bar{h}}$, rendering $D_{ij}^{ab} \approx 0$. Depending on the ability of the basis set to describe the high-lying virtual orbitals associated with the continuum, the denominator associated with double excitations can get arbitrarily close to zero, leading to numerical difficulties in solving for the $T$ amplitudes (and, of course, divergence of perturbation theory methods such as MP2). Close-to-zero denominators also yield numerical instabilities in the context of EOM-CC. In their study of EOM-CC-IP for K-edge ionization energies, Liu et al. found that spurious high-lying valence excited states that are quasi-degenerate with the core excited state result in erratically-converging correlation energies with respect to basis set.\cite{Liu2019} The core-valence separation (CVS) scheme is a proposed solution to this numerical problem; in this approach, core excitations are excluded from the ground state amplitudes, and all-valence excitations are excluded from the EOM amplitudes.\cite{Vidal2019} The spurious couplings with the high-lying continuum excited states are then removed by design. In a spirit similar to the CVS scheme, Zheng \emph{et al.} proposed to exclude the virtual core orbital from the correlation treatment to address the divergence problem in the $\Delta$CC calculations of core ionizations.\cite{Zheng2019, Zheng2020} Some of us adopted a similar strategy, freezing the doubly-vacant core orbital altogether when studying double-core excitations.\cite{Lee2019} Zheng \emph{et al.} found the missing correlation to be relevant for accurate core ionizations and use estimates from fully-correlated CC calculations with decreasing denominator thresholds to account for it. \section{Computational details} A development version of Q-Chem 5.4\cite{Epifanovsky2021} was used for all calculations. Experimental geometries available in the NIST computational database \cite{FIPS1402} were used throughout this work. An atomic relativistic correction calculated via the Douglas-Kroll-Hess method, found to be nearly independent of basis set and molecule for the main group elements, is added to all calculations (0.012, 0.09, 0.18, 0.34, 0.57, and 0.91 eV for Be, C, N, O, F, and Ne).\cite{Takahashi2017} For two of the three schemes of $\Delta$CC we employ, the calculated singlet excited states are spin contaminated; the AP method is used to estimate the spin-pure excitation energies.
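For the reader's convenience, the AP estimate as we apply it follows Yamaguchi's prescription: given the energies and $\braket{S^2}$ expectation values of the spin-contaminated (mixed) and triplet solutions, the spin-pure singlet energy is approximated as
\begin{equation}
E_{\mathrm{S}} \approx \frac{\braket{S^2}_{\mathrm{T}}\, E_{\mathrm{mix}} - \braket{S^2}_{\mathrm{mix}}\, E_{\mathrm{T}}}{\braket{S^2}_{\mathrm{T}} - \braket{S^2}_{\mathrm{mix}}},
\end{equation}
which reduces to the familiar $E_{\mathrm{S}} \approx 2E_{\mathrm{mix}} - E_{\mathrm{T}}$ in the idealized limit $\braket{S^2}_{\mathrm{mix}} = 1$ and $\braket{S^2}_{\mathrm{T}} = 2$.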
We made our best attempt to compare the excitation and ionization energies near their BSL values. To that end, different procedures involving specialized basis sets were employed to obtain an approximate BSL for the different methods. The aug-pcX-3 (heavy) / aug-pcseg-2 (hydrogen) basis was used to approximate the BSL for the ROKS(SCF) calculations.\cite{Ambroise2019} A (99, 590) Euler-Maclaurin-Lebedev grid was used for the computation of the exchange-correlation integrals for the ROKS(SCAN) calculations. The aug-ccX-nZ (heavy) / aug-cc-pVTZ (hydrogen) bases,\cite{Ambroise2021} extrapolated using the two-point X$^{-3}$ scheme\cite{Helgaker1997, halkier1998basis} with n = T, Q, were used to approximate the BSL for the EOM-CC calculations. As noted in a recent study, such an extrapolation scheme is appropriate for core excitations via EOM-CC.\cite{Carbone2019} All ROKS(SCF) and EOM-CC calculations were also run with the standard Dunning aug-cc-pCVXZ (X = D, T, Q) family of bases\cite{woon1995gaussian,peterson2002accurate} and a slower convergence towards a similar BSL value was observed (SI). Of the basis sets available, none were designed with both explicit orbital relaxation via SCF and correlation with wavefunction methods in mind. We used the TQ-extrapolated aug-cc-pCVXZ (heavy) / aug-cc-pVDZ (hydrogen) numbers as the best BSL estimate for the correlated $\Delta$ calculations. The only exception to these choices of basis set was for the calculated Rydberg excitations in \ce{Ne}. As expected for a full-fledged Rydberg excitation, significant differences between the aug-cc-pCVXZ and its doubly-augmented counterparts were observed in this case. The BSL core excited states for this molecule are given by d-aug-cc-pCV5Z for ROKS(SCF), Q5-extrapolated d-aug-cc-pCVXZ for EOM-CC, and TQ-extrapolated d-aug-cc-pCVXZ for the correlated $\Delta$ methods. No severe difference of a similar sort was found in any other molecule studied in this data set, including the rest of the isoelectronic ten-electron series (SI). \section{Approaches to inclusion of core-valence correlation} \subsection{Scheme 0: Using the full set of amplitudes} To motivate the need for the schemes presented in the following subsections, we begin by exploring the behavior of the correlated methods with no modifications. The Fock matrix and MO coefficients of the optimized excited reference are passed to the correlated calculation and all amplitudes (e.g. all singles and doubles in CCSD) are included; we refer to this as Scheme 0 (S0). Scheme S0 would not be of use for real applications because of the possibility of variational collapse and the limitations of today's standard iterative CC solvers. Nevertheless, it provides useful insight in the few cases where the coupled cluster equations do converge. Such systems are few-atom molecules in a small basis, where there are no orbitals of the right energy to make the denominators problematically small. Figure \ref{fgr:CH4_ion} shows the basis set convergence of the \ce{CH4} core ionization energies, as calculated with the $\Delta$-based methods, with respect to increasing cardinality of the aug-cc-pCVXZ basis set. The $\Delta$SCF values converge quickly, with the 5Z result decreasing the calculated ionization energy by only 0.014 eV from the QZ numbers. The results for all the correlated $\Delta$ methods are within 0.1 eV of each other up until the QZ level, where they begin to diverge. At the 5Z level, the CCSD equations fail to converge and the $\Delta$MP2 results break monotonicity.
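The origin of this behavior can be traced back to the orbital energy denominators. A simple screening of the kind sketched below (a NumPy illustration with assumed spin-orbital energy arrays and an illustrative tolerance, not our production implementation) is enough to flag the doubles amplitudes whose denominators become positive or dangerously close to zero once one electron is placed in the core virtual.
\begin{verbatim}
import numpy as np

def flag_denominators(eps_occ, eps_vir, hbar, tol=0.05):
    """Return D_{ij}^{hbar,b} = eps_i + eps_j - eps_hbar - eps_b (hartree) for all
    doubles that place one electron in the core virtual hbar, together with the
    index sets of positive and near-zero (|D| < tol) denominators."""
    i, j = np.meshgrid(eps_occ, eps_occ, indexing="ij")
    b = eps_vir[np.newaxis, np.newaxis, :]
    D = i[:, :, np.newaxis] + j[:, :, np.newaxis] - b - eps_vir[hbar]
    return D, np.argwhere(D > 0.0), np.argwhere(np.abs(D) < tol)
\end{verbatim}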
An analysis of the denominators associated with excitations into the core virtual (Figure \ref{fgr:denominators}) reveals that, for all basis sets, there are positive denominators and, furthermore, that a close-to-zero denominator appears at the QZ level. Once the complexity of the molecule increases, the virtual space will begin to populate the problematic orbital energy range associated with near-zero denominators even when using small basis sets. Yet the CCSD(S0) results, at the very least, suggest that accurate results via $\Delta$-based methods could be obtained if the irregularities caused by small denominators were addressed. \begin{figure} \includegraphics{convergence.pdf} \caption{Basis set convergence of the \ce{CH4} core ionization energy at the Franck-Condon geometry.} \label{fgr:CH4_ion} \end{figure} \begin{figure} \includegraphics{Denominators.pdf} \caption{Some values of the denominators associated with excitation into the core virtual for the \ce{CH4} core-ionized reference.} \label{fgr:denominators} \end{figure} \subsection{Scheme 1: Deleting all amplitudes involving the core virtual} We make use of three additional schemes to address the numerical instabilities discussed previously. The first, which we refer to as Scheme 1 (S1), is that proposed by Zheng et al.\cite{Zheng2019}, and employed by Lee and Head-Gordon.\cite{Lee2019} This scheme simply excludes any amplitude involving the core virtual. Additionally, we chose to exclude singles amplitudes that excite the occupied core electron. \begin{align*} \text{if(}a &= \bar{h}\;\text{or}\;i = h)\;\;\; a_i^a,\; t_i^a = 0 \\ \text{if(}a &= \bar{h}\;\text{or}\;b = \bar{h})\;\;\; a_{ij}^{ab},\; t_{ij}^{ab} = 0 \\ \text{if(}a &= \bar{h}\;\text{or}\;b = \bar{h}\;\text{or}\;c = \bar{h})\;\;\; t_{ijk}^{abc}(c) = 0 \end{align*} Under these conditions, the ill-behaved amplitudes are removed by design. However, by excluding amplitudes that involve the core virtual, we are also excluding part of the correlation between the remaining core electron and the valence electrons, as will become more clear below. The de-excitation amplitudes in the Lambda equations, solved to obtain CC properties like $\braket{S^2}$, are treated in a completely analogous way. Under these constraints, the Lambda equations converged to yield $\braket{S^2}$ values similar to those obtained without them, but at a much accelerated pace. \subsection{Scheme 2: Half-occupied core with zero spin-complement amplitude} To incorporate some of the correlation missing in S1, Scheme 2 (S2) allows for the double substitutions involving the core virtual, $\bar{h}$, that also promote the occupied electron in the same core orbital, $h$ - these were found to be the leading amplitudes for some of the larger well-behaved S0 calculations. S2 is pleasing in that, even though core substitutions are involved, they are all associated with configurations that retain a core occupancy of 1. \begin{align*} \text{if(}a = \bar{h}\;\text{or}\;i &= h)\;\;\; a_i^a,\; t_i^a = 0 \\ \text{if(}a = \bar{h}\;\text{or}\;b &= \bar{h})\\ \text{if(}i &\ne h\;\text{and}\;j \ne h)\;\;\; a_{ij}^{ab},\; t_{ij}^{ab} = 0 \\ \text{if(}a = \bar{h}\;\text{or}\;b &= \bar{h}\;\text{or}\;c = \bar{h})\;\;\; \\ \text{if(}i &\ne h\;\text{and}\;j \ne h\;\text{and}\;k \ne h)\;\;\; t_{ijk}^{abc}(c) = 0 \end{align*} As for S1, the CC de-excitation amplitudes are treated in a completely analogous way.
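A schematic NumPy illustration of how these constraints can be imposed on spin-orbital $T_1$ and $T_2$ blocks is given below. It follows the definitions of S1 and S2 above (amplitudes touching the core virtual $\bar{h}$ are dropped except, in S2, those that simultaneously empty the occupied core orbital $h$), and the optional pinning of the spin-complement amplitude anticipates the S3 variant described in the next subsection. The index conventions and array layout are illustrative assumptions, not those of our production code.
\begin{verbatim}
import numpy as np

def apply_core_scheme(t1, t2, h, hbar, scheme="S1", pin=None, tbar=None, tvir=None):
    """Zero (or pin) amplitudes per schemes S1/S2 (sketch).
    t1: (nocc, nvir) singles; t2: (nocc, nocc, nvir, nvir) doubles (spin orbitals).
    h: occupied alpha core orbital; hbar: empty beta core level.
    tbar / tvir: occupied beta / virtual alpha target orbitals (only used for pinning)."""
    t1, t2 = t1.copy(), t2.copy()
    t1[h, :] = 0.0                       # no singles out of the remaining core electron
    t1[:, hbar] = 0.0                    # no singles into the core virtual
    touches_hbar = np.zeros(t2.shape, dtype=bool)
    touches_hbar[:, :, hbar, :] = True
    touches_hbar[:, :, :, hbar] = True
    if scheme == "S1":
        t2[touches_hbar] = 0.0           # drop every double involving hbar
    elif scheme == "S2":
        keeps_core_occ = np.zeros(t2.shape, dtype=bool)
        keeps_core_occ[h, :, :, :] = True     # doubles that also excite out of h,
        keeps_core_occ[:, h, :, :] = True     # so the core occupancy stays 1
        t2[touches_hbar & ~keeps_core_occ] = 0.0
        if pin is not None:              # S3 variant: pin the spin complement (h, tbar) -> (tvir, hbar)
            t2[h, tbar, tvir, hbar] = pin     # -1.0 for the singlet, +1.0 for the Ms = 0 triplet
            # (antisymmetry-related elements would be set consistently in a real code)
    return t1, t2
\end{verbatim}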
We found that, in the case of the mixed singlets, allowing for the double substitution that generates the spin complement of the reference, $a^\dag_{\bar{h}}a_{\bar{t}} a^\dag_t a_h$ with $t$ being the target orbital, leads the CC iterations to converge towards the (lower energy) triplet excited state, resulting in $\braket{S^2}$ values that deviate significantly from 1. Therefore, an additional constraint was placed on calculations for the mixed singlet: the amplitude associated with said excitation is also set to zero. This helped ensure that the $\braket{S^2}$ value of the CCSD wave function remained close to 1, signifying that it is a mixed spin configuration. Therefore, as with S0, the spin contamination is removed by evaluating the singlet energy via Yamaguchi's AP expression. \subsection{Scheme 3: Half-occupied core with unit spin-complement amplitude} As a final scheme, and exclusively for the calculations on the mixed singlet state, we propose to incorporate all of the conditions of S2 but, instead of neglecting the double substitution amplitude, $a^\dag_{\bar{h}}a_{\bar{t}} a^\dag_t a_h$, associated with the spin complement of the reference, we set it to -1.0; we refer to this as Scheme 3 (S3). These conditions force the CC iterations to look for the pure singlet starting from the mixed reference. As previously, the exact same S3 conditions are imposed on the de-excitation amplitudes for the left eigenvectors of the similarity-transformed Hamiltonian. We found that the lambda equations were able to converge even when the de-excitation amplitude associated with the spin complement is not forced to be -1.0. Enforcing said condition accelerated the convergence and resulted in the same value of $\braket{S^2}$. An attractive feature of S3, as will be elaborated on in the Results section, is that it bypasses the need for AP altogether because the resulting states have $\braket{S^2}$ values relatively close to 0. S3 is, in fact, similar in spirit to the bi-configurational MR-CC model proposed by Oliphant and Adamowicz in 1991, \cite{Oliphant1991} (see also the two-determinant Hilbert-space MR-CC,\cite{Kucharski1991, Balkova1992} recently employed by Matthews \cite{Matthews2020} in conjunction with an orbital-optimized CSF for core excited states). However, S3 is dramatically simpler because the additional triple and quadruple excitations that are necessary in MR-CC (in order to account for the single and double excitations on top of the ``secondary reference'') are omitted here. The amplitude of the spin complement can also be set to +1.0 to access the M$_s$ = 0 triplet. This allows us to assess the reliability of S3 by comparing its calculated triplet, M$_S$ = 0 numbers against the M$_s$ = $\pm$1 triplet numbers obtained via S2. In the absence of spin-orbit coupling or external magnetic fields, the M$_s$ = 1 and M$_s$ = 0 triplet states should be degenerate, so any differences reflect the failures of S3 with respect to S2. Naturally, one source of error will be the fact that, in S3, the correlation methods treat each individual configuration of the CSF unequally. \section{Results and discussion} Before discussing the correlated methods, it is worth revisiting the spin-pure open-shell singlet HF results (labeled as ROKS(HF), as this can be viewed as a special case of OO-DFT\cite{hait2021orbital}). For the excitations considered, ROKS(HF) achieves a mean absolute error (MAE) and RMSE of 0.43 and 0.52 eV.
All of the excitations involving carbon and nitrogen, and the O 1s - $\sigma^*$/Rydberg transitions, are overestimated. All of the fluorine and neon excitations, and the O 1s - $\pi^*$ transitions, are underestimated. This element-dependent error distribution with respect to experiment leads to a relatively small mean signed error (MSE) of 0.18 eV. Using ROKS with the standard SCAN functional, \cite{sun2015strongly} the best-performing functional according to a recent study, reduces the MAE to 0.16 eV, the RMSE to an impressive 0.19 eV,\cite{Hait2020} and the MSE to only -0.08 eV. How well can CC methods limited to double or perturbative triple substitutions compete with these results? As a further preliminary, we note that the standard FC-CVS-EOM-CCSD approach cannot match ROKS(SCAN), and in fact scarcely outperforms the simple ROKS(HF) approach: EOM-CCSD achieves an MAE and RMSE of 0.34 and 0.41 eV. EOM-CCSD tends to underestimate the excitations out of carbon, with an overestimation of 0.34 eV for the \ce{\textbf{C}H3OH} 1s $\longrightarrow$ 3s transition being the only serious exception. All other excitations are overestimated by EOM-CCSD, except for the \ce{N2} 1s $\longrightarrow$ $\pi^*$ and \ce{Be} 1s $\longrightarrow$ 2p excitations, which are underestimated by 0.25 and 0.68 eV, respectively. The latter might be a failure of the FC-CVS model. \begin{figure} \includegraphics{excitations.pdf} \caption{Statistical summary of the accuracy of calculated K-shell core excitations relative to experimental values for the 21 transitions shown in Table \ref{excitations}, as evaluated by ROKS(HF), ROKS(SCAN), the correlated $\Delta$ methods (Schemes S1, S2 and S3), and FC-CVS-EOM-CCSD-EE. For the S1 and S2 approaches, in addition to CCSD itself, the corresponding MP2 and CCSD(T) values are also shown. The specific values corresponding to these statistics are given in Table \ref{excitations} and the Supplementary Information.} \label{fgr:excitations} \end{figure} In regards to the correlated $\Delta$ methods, addressing the offending denominators, either by eliminating all excitations into the core virtual (S1) or by including only those that retain a core occupancy of 1 (S2 and S3), resulted in well-behaved, monotonically convergent CC calculations in all cases. Furthermore, for Schemes S1 and S2, the MP2, CCSD, and CCSD(T) correlation energies of the excited states, and the calculated excitation energies, seem to converge monotonically towards a well-defined BSL. As observed in Figure \ref{fgr:excitations} and the SI, correlated calculations via Scheme S1 always overestimate the excitation energy. $\Delta$MP2(S1), $\Delta$CCSD(S1), and $\Delta$CCSD(T)(S1) achieve MAEs of 0.82, 0.58, 0.63 eV, and RMSEs of 0.88, 0.60, 0.65 eV. $\Delta$CCSD(S1) attenuates the most severe failures of $\Delta$MP2(S1) - where it overestimates experiment by more than 1 eV: \ce{H2\textbf{C}O} 1s $\longrightarrow$ $\pi^*$, \ce{H\textbf{C}N} 1s $\longrightarrow$ $\pi^*$, \ce{HC\textbf{N}} 1s $\longrightarrow$ $\pi^*$, \ce{N2} 1s $\longrightarrow$ $\pi^*$, and \ce{F2} 1s $\longrightarrow$ $\sigma^*$. It is worth noticing that these are all cases where $\Delta$MP2(S1) changes the ROKS(HF) results the most - in all cases for the worse - with \ce{F2} having the largest change of 2.3 eV. $\Delta$CCSD(T)(S1), more often than not, seems to very slightly increase the error against experiment when compared to $\Delta$CCSD(S1).
Including correlation via S1, either via MP2, CCSD, or CCSD(T), only decreases the calculated values relative to $\Delta$HF in roughly half the cases. The MSEs for all the correlated methods under S1 are identical to their MAEs, which is consistent with a systematic overestimation of the excitation energies or, conversely, an under-correlation of the excited states. Since the results are expected to be well near the BSL, and the perturbative triples correction changes the CCSD results by a small amount, we attribute this to the configurations excluded from the correlation treatment for the sake of proper convergence. Figure \ref{fgr:excitations} and the SI show that including some of the missing configurations via model S2 indeed reduces the error relative to S1. $\Delta$MP2(S2), $\Delta$CCSD(S2), and $\Delta$CCSD(T)(S2) achieve MAEs of 0.62, 0.18, and 0.20 eV, and RMSEs of 0.69, 0.22 and 0.25 eV. A small systematic overestimation remains, as suggested by MSEs of 0.61, 0.16, and 0.20 eV. Two relevant statistical observations are that $\Delta$MP2(S2) still fails to offer an improvement over ROKS(HF), and that the (T) correction slightly worsens the $\Delta$CCSD results. We note that the well-behaved excitations involving the core account for roughly 0.4 eV of the calculated excitation energy, as measured by the statistical differences between $\Delta$CCSD(S1) and $\Delta$CCSD(S2). This is in agreement with the findings of Zheng \emph{et al.}\cite{Zheng2019} and emphasizes that, if quantitative agreement is desired, a CVS-like scheme such as S1 is inadequate. Before discussing the performance of S3 in predicting excitation energies, we make some other relevant remarks on the scheme. The de-excitation amplitudes were usually converged without any modifications to yield a CCSD $\braket{S^2}$ close to 0 (or 2, if the triplet state was being targeted). Naturally, it often takes many iterations for these amplitudes to respond to the large excitation amplitude in $T_2$. Imposing the condition analogous to S3 for the de-excitation amplitudes accelerated the convergence, never taking more than 35 iterations without DIIS for the cases that we studied. As noted in the SI, a residual deviation from an $\braket{S^2}$ value of 0 remained for all calculations. The largest of these deviations was for the \ce{C2H2} 1s $\longrightarrow$ $\pi^*$ state, with an $\braket{S^2}$ of 0.069, the average being 0.033. We suspect that this might be due to the missing excitations described in the discussion of S3. The spin-forbidden excitations into the triplet M$_s$ = 0 manifold were calculated with $\Delta$CCSD(S3) by forcing the amplitude of the spin complement of the reference to be 1.0; they are listed in the SI. We compared these against the triplet M$_s$ = 1 excitation energies as calculated by $\Delta$CCSD(S2). The largest deviation was 0.09 eV, for the \ce{H2\textbf{C}O} 1s $\longrightarrow$ $\pi^*$ state, the average being 0.04 eV. The M$_s$ = 0 triplet excitations were higher than the M$_s$ = 1 results for all but one case, \ce{Be} 1s $\longrightarrow$ 2p, where the difference is -0.01 eV. This is also consistent with the idea that for the M$_s$ = 0 triplets, as for the singlets, we are undercorrelating the excited state due to missing excitations. Such undercorrelation is not present for the M$_s$ = 1 triplet because, aside from any spatial symmetry breaking, this is purely an SR situation that S2 should be able to address.
The triplet numbers, as calculated by $\Delta$CCSD(S2), match fairly well with the two experimental numbers that we found for these spin-forbidden transitions: 114.3 eV for \ce{Be} 1s $\longrightarrow$ 2p and 400.12 eV for \ce{N2} 1s $\longrightarrow$ $\pi^*$ \cite{Shaw1982}. $\Delta$CCSD(S2) predicts them to be 114.37 eV and 400.24 eV, respectively. The average energy difference between the singlet and triplet excited states for the set of molecules studied here, as calculated by $\Delta$CCSD(S3), is 0.44 eV. Some cases worthy of notice are \ce{Be} 1s $\longrightarrow$ 2p, where the splitting is 1.16 eV, and \ce{\textbf{C}O} 1s $\longrightarrow$ $\pi^*$, with the largest splitting of all: 1.42 eV. Interestingly, the splitting for \ce{C\textbf{O}} 1s $\longrightarrow$ $\pi^*$ is only 0.34 eV. Other cases of relevance are the two Rydberg excitations \ce{Ne} 1s $\longrightarrow$ 3s and \ce{Ne} 1s $\longrightarrow$ 3p, which have the smallest splittings: 0.06 eV and 0.05 eV, respectively. In Table \ref{excitations}, we present the calculated excitation energies of the singlet excited states for the most successful scheme, $\Delta$CCSD(S3), against ROKS(HF), ROKS(SCAN), and FC-CVS-EOM-CCSD-EE\cite{Vidal2019}. All the statistics provided are computed against the measured experimental values. The per-molecule results for the remaining schemes are listed in the SI. Overall, $\Delta$CCSD(S3) achieves an MAE and RMSE of 0.14 and 0.18 eV. The most challenging excitation for this method is \ce{H2\textbf{C}O} 1s $\longrightarrow \pi^*$, with an overestimation of 0.37 eV relative to the experimental value of 285.59 eV by Remmers \emph{et al.}\cite{Remmers1992}. A small systematic overestimation remains, as suggested by an MSE of 0.12 eV. The only excitation that $\Delta$CCSD(S3) significantly underestimates is \ce{C\textbf{O}} 1s $\longrightarrow$ $\pi^*$, which is below Sodhi and Brion's result of 534.21 $\pm$ 0.09 eV by 0.21 eV.
\cite{Sodhi1984} The closely-related two-determinant CCSD results of Matthews suggest a comparable accuracy, where an MAE of 0.10 eV and RMSE of 0.11 eV were found against EOM-CCSDT-EE numbers for the three lowest-lying core excitations of \ce{HCN}, \ce{CO}, \ce{NH3}, and \ce{H2O}.\cite{Matthews2020} \begin{table} \caption{Core excitation energies (in eV); experimental uncertainties, where reported, are given in parentheses.} \label{excitations} \begin{tabular}{l|ccccc} \hline Transition & ROKS(HF) & ROKS(SCAN) & $\Delta$CCSD(S3) & EOM-CCSD & Experiment \\ \hline \ce{Be} 1s - 2p &115.37 &115.34 &115.53 &114.79 &115.47 \cite{???} \\ \ce{C2H4} 1s - $\pi^*$ &285.27 &284.70 &284.77 &284.68 &284.68 (0.1)\cite{Hitchcock1977} \\ \ce{H2\textbf{C}O} 1s - $\pi^*$ &286.42 &285.74 &285.96 &285.62 &285.59\cite{Remmers1992} \\ \ce{C2H2} 1s - $\pi^*$ &286.40 &285.67 &285.84 &285.55 &285.9 (0.1)\cite{Hitchcock1977} \\ \ce{H\textbf{C}N} 1s - $\pi^*$ &286.98 &286.35 &286.51 &286.07 &286.37\cite{Hitchcock1979} \\ \ce{\textbf{C}O} 1s - $\pi^*$ &288.05 &286.99 &287.46 &286.71 &287.40(0.02)\cite{Sodhi1984} \\ \ce{\textbf{C}H3OH} C 1s - 3s &288.91 &288.18 &288.34 &288.26 &287.98\cite{Prince2003} \\ \ce{CH4} 1s - 3p($t_2$) &288.38 &287.96 &288.02 &287.9 &288.00 (0.2)\cite{Schirmer1993} \\ \ce{HC\textbf{N}} 1s - $\pi^*$ &400.00 &399.60 &399.80 &399.74 &399.7\cite{Hitchcock1979} \\ \ce{NH3} 1s - 3s &400.97 &400.42 &400.63 &400.82 &400.66 (0.2)\cite{Schirmer1993} \\ \ce{N2} 1s - $\pi^*$ &401.18 &400.80 &401.02 &400.63 &400.88 (0.02)\cite{Sodhi1984} \\ \ce{NH3} 1s - 3p(e) &402.62 &402.18 &402.41 &402.46 &402.33 (0.2)\cite{Schirmer1993} \\ \ce{H2C\textbf{O}} 1s - $\pi^*$ &530.67 &530.83 &530.86 &531.26 &530.82\cite{Remmers1992} \\ \ce{H2O} 1s - 3s &534.15 &533.84 &534.14 &534.44 &534.0 (0.2)\cite{Schirmer1993} \\ \ce{CH3\textbf{O}H} 1s - 3s &534.16 &533.98 &534.24 &534.64 &534.12\cite{Prince2003} \\ \ce{C\textbf{O}} 1s - $\pi^*$ &533.68 &533.97 &534.00 &534.50 &534.21 (0.09)\cite{Sodhi1984} \\ \ce{H2O} 1s - 3p (b$_2$) &536.03 &535.65 &536.08 &536.21 &535.9 (0.2)\cite{Schirmer1993} \\ \ce{F2} 1s - $\sigma^*$ &681.19 &682.43 &682.41 &683.07 &682.2 (0.1)\cite{Hitchcock1981} \\ \ce{HF} 1s - $\sigma^*$ &687.31 &687.44 &687.76 &688.05 &687.4 (0.2)\cite{Hitchcock1981} \\ \ce{Ne} 1s - 3s &864.75 &865.18 &865.37 &865.54 &865.1 (0.1)\cite{Hitchcock1981} \\ \ce{Ne} 1s - 3p &866.58 &866.96 &867.30 &867.40 &867.29\cite{Muller2017} \\ \hline MSE &0.15 &-0.09 &0.12 &0.11 & \\ MAE &0.43 &0.15 &0.14 &0.34 & \\ RMSE &0.52 &0.19 &0.18 &0.41 & \\ MAX &1.01 &0.41 &0.37 &0.87 & \\ \end{tabular} \end{table} Table \ref{tbl:ionizations} compares the $\Delta$CCSD(S2) core ionizations against those calculated by $\Delta$SCF(HF), $\Delta$SCF(SCAN) and FC-CVS-EOM-CCSD-IP. Figure \ref{fgr:ionizations} shows box-whisker plots for both the S1 and S2 methods applied to MP2, CCSD, and CCSD(T) relative to the same existing methods. The experimental values used as a reference are the ones given by Jolly et al.\cite{Jolly1984}, unless a more recent study was found. $\Delta$SCF(HF) has an MSE, MAE, and RMSE of -0.15, 0.45, and 0.58 eV, respectively. The two most challenging cases for $\Delta$SCF in the ionization data set, \ce{CO} and \ce{F2}, are the only cases with an error greater than 1 eV. $\Delta$SCF(SCAN) reduces the $\Delta$SCF(HF) errors by more than a factor of two, with an MAE and RMSE of 0.21 and 0.25 eV. In contrast to excitations, all ionizations except two, \ce{F2} and \ce{Ne}, are overestimated with $\Delta$SCF(SCAN), resulting in an MSE similar to its MAE: 0.18 eV.
The most challenging case for $\Delta$SCF(SCAN) is Be, overestimated by 0.57 eV (cf.\ the MAX entry in Table \ref{tbl:ionizations}). Somewhat surprisingly, $\Delta$SCF(HF) predicts the Be experimental ionization perfectly. The performance of $\Delta$SCF(HF) against the much more sophisticated FC-CVS-EOM-CCSD-IP is once again remarkable, with the MAE and RMSE of the latter being 0.35 and 0.45 eV. These FC-CVS-EOM-CCSD-IP errors are roughly five times smaller than those reported by Liu et al.\cite{Liu2019} We wonder whether a situation similar to that reported by Vidal et al.\cite{Vidal2019} in the context of EOM-CCSD-EE for core excitations is taking place, where the specifics of the CVS implementation resulted in differences of over 1 eV.
\begin{figure}
\includegraphics{ionizations.pdf}
\caption{Statistical summary of the accuracy of calculated K-shell core ionizations relative to experimental values for the 18 ionizations shown in Table \ref{tbl:ionizations}, as evaluated by $\Delta$SCF(HF), $\Delta$SCF(SCAN), the correlated $\Delta$ methods (schemes S1 and S2), and FC-CVS-EOM-CCSD-IP. For the S1 and S2 approaches, in addition to CCSD itself, the corresponding MP2 and CCSD(T) values are also shown. The specific values corresponding to these statistics are given in Table \ref{tbl:ionizations} and the Supplementary Information.}
\label{fgr:ionizations}
\end{figure}
In contrast to excitations, the correlated $\Delta$ methods using the S1 model manage to slightly improve upon $\Delta$HF for ionizations. $\Delta$MP2(S1) increases the HF ionization energy in almost all cases, and by over 1 eV in several of them: \ce{H2C\textbf{O}}, \ce{CH3\textbf{O}H}, \ce{C\textbf{O}}, \ce{HF}, \ce{F2}, and \ce{Ne}. The only case where $\Delta$MP2(S1) decreases the ionization predicted by $\Delta$HF is \ce{\textbf{C}O}, which is also the second most challenging case for $\Delta$HF, right after \ce{F2}. The problematic Be is overestimated by 0.81 eV by $\Delta$MP2(S1). Once again, $\Delta$CCSD(S1) alleviates the worst cases in $\Delta$MP2(S1). \ce{\textbf{C}O} is anomalous in that this is the only case where $\Delta$CCSD(S1) significantly worsens the $\Delta$MP2(S1) result, and also the only one where the (T) seems to significantly improve the result, correcting the $\Delta$CCSD(S1) result by 0.17 eV. Overall, the S1 methods result in MAEs and RMSEs of 0.42, 0.37, 0.38 eV and 0.49, 0.39, 0.41 eV for MP2, CCSD, and CCSD(T). As Ljubi\'{c}\cite{Ljubic2014} noted in their study, $\Delta$MP2(S1) seldom warrants the additional cost over $\Delta$SCF, and neither does extending to CCSD or CCSD(T) improve the results to an extent that justifies their cost. As for excitations, a consistent overestimation of the core ionization energies, as evidenced by the MSEs being equal to the MAEs for all the S1 correlated methods, hints at the configurations neglected by the S1 scheme being important. Indeed, the improvement in calculated core ionization energies provided by the correlated methods under model S2, relative to S1, is even more dramatic than it is for the excitations. In contrast with $\Delta$MP2(S1), $\Delta$MP2(S2) manages to somewhat improve the statistics from $\Delta$HF, bringing down the MAE and RMSE to 0.33 and 0.44 eV. S2 improves the S1 results for MP2 in almost all cases, the only significant exception being Be, where $\Delta$MP2(S2) performs the worst: an overestimation of 1.1 eV. As with S1, $\Delta$CCSD(S2) alleviates the failures of $\Delta$MP2(S2) (significantly for Be) and brings the MAE and RMSE down to 0.12 and 0.15 eV.
$\Delta$CCSD(T) slightly worsens the statistics by bringing the MAE and RMSE to 0.13 and 0.17 eV. The RMSE for $\Delta$CCSD(S2) is more than 2.5 times smaller than for FC-CVS-EOM-CCSD-IP. The results presented here are comparable to those in Table 5 of Zheng \textit{et al.} \cite{Zheng2019} The differences can be associated with the different basis sets used and the way we are treating the correlation associated with the core virtual. Whereas in their study they estimate the correlation missing due to freezing the core orbital completely (S1) by carrying out unconstrained (S0) calculations with denominator thresholds, S2 recovers it by a well-defined protocol.
\begin{table}
\caption{Core ionization energies (in eV)}
\label{tbl:ionizations}
\begin{tabular}{l|ccccc}
\hline
Transition & $\Delta$SCF(HF) & $\Delta$SCF(SCAN) & $\Delta$CCSD(S2) & EOM-CCSD & Experiment \\ \hline
\ce{Be} 1s - ion &123.35 &123.92 &123.65 &123.49 &123.35 \cite{???} \\
\ce{C2H4} 1s - ion &290.71 &290.92 &290.72 &290.95 &290.88\cite{Jolly1984} \\
\ce{CH4} 1s - ion &290.86 &290.92 &290.69 &290.68 &290.83 \cite{Jolly1984} \\
\ce{C2H2} 1s - ion &291.39 &291.47 &291.21 &291.26 &291.14 \cite{Jolly1984} \\
\ce{\textbf{C}H3OH} 1s - ion &292.63 &292.63 &292.44 &292.52 &292.3 (0.2)\cite{Hempelmann1999} \\
\ce{H\textbf{C}N} 1s - ion &293.76 &293.68 &293.43 &293.34 &293.50\cite{Jolly1984} \\
\ce{H2\textbf{C}O} 1s - ion &294.91 &294.75 &294.50 &294.70 &294.35\cite{Remmers1992} \\
\ce{\textbf{C}O} 1s - ion &297.23 &296.58 &296.47 &296.43 &296.24\cite{Jolly1984} \\
\ce{NH3} 1s - ion &405.48 &405.70 &405.51 &405.77 &405.52\cite{Jolly1984} \\
\ce{HC\textbf{N}} 1s - ion &406.74 &406.96 &406.78 &407.10 &406.8\cite{Jolly1984} \\
\ce{N2} 1s - ion &410.21 &410.15 &409.99 &409.89 &409.9\cite{Jolly1984} \\
\ce{CH3\textbf{O}H} 1s - ion &538.43 &539.08 &538.90 &539.64 &539.06 (0.2)\cite{Hempelmann1999} \\
\ce{H2C\textbf{O}} 1s - ion &538.51 &539.47 &539.29 &540.28 &539.30\cite{Remmers1992} \\
\ce{H2O} 1s - ion &539.49 &539.96 &539.82 &540.29 &539.92\cite{Jolly1984} \\
\ce{C\textbf{O}} 1s - ion &541.79 &542.65 &542.43 &543.10 &542.57 \cite{Jolly1984} \\
\ce{HF} 1s - ion &693.62 &694.30 &694.25 &694.80 &694.0\cite{Jolly1984} \\
\ce{F2} 1s - ion &695.36 &696.54 &696.58 &697.58 &696.71\cite{Jolly1984} \\
\ce{Ne} 1s - ion &869.54 &870.21 &870.31 &870.49 &870.33\cite{Muller2017} \\
\hline
MSE &-0.15 &0.18 &0.02 &0.31 & \\
MAE &0.45 &0.21 &0.13 &0.35 & \\
RMSE &0.58 &0.25 &0.17 &0.45 & \\
MAX &1.35 &0.57 &0.33 &0.98 & \\
\end{tabular}
\end{table}
\section{Conclusions}
We have studied the use of core-hole orbital-optimized references in SR correlated methods to describe core excited and core ionized states of 18 small closed-shell molecules and atoms, and compared them against two of the most successful approaches so far: ROKS(SCF) and EOM-CC. Three different schemes (S1, S2, S3) were employed to address the convergence problems of the CC equations and the spin contamination of the excited states. S1 excludes all amplitudes involving the half-occupied core orbital associated with the excitation or ionization. S2 allows for the ones that retain a core occupancy of 1. S3, exclusively for CCSD core excitations, fixes the $T_2$ amplitude associated with the spin complement of a spin symmetry-broken core-excited reference to $\pm 1.0$, thereby ensuring the proper reference CSF is present in the cluster expansion.
As evidenced by the energetic difference between the singlet and the triplet core excited states, addressing the spin contamination associated with using a symmetry-broken reference is essential for quantitative studies using the correlated $\Delta$ methods unless Rydberg states are being targeted. To compare with experimental core excitations and ionizations requires careful attention to basis set convergence, which we have addressed by using the aug-cc-pCVXZ basis set for heavy atoms (X = T, Q, with extrapolation), and aug-cc-pVDZ for hydrogen. With this protocol, $\Delta$CCSD(S3) performs the best among the correlated $\Delta$ methods for core excitations, reaching an MAE and RMSE of 0.15 and 0.19 eV for CCSD. These statistics are on par with the most successful orbital-optimized DFT approach, ROKS(SCAN). $\Delta$CCSD(S2) follows closely behind, with an MAE and RMSE of 0.18 and 0.22 eV. As such, $\Delta$CCSD with either S2 or S3 roughly halves the errors of FC-CVS-EOM-CCSD-EE. A similar situation takes place for ionizations, where S2 in conjunction with CCSD performs the best, by achieving an MAE and RMSE of 0.13 and 0.17 eV, respectively. $\Delta$CCSD(S2) reduces the FC-CVS-EOM-CCSD-IP error by more than a factor of 2.5, and outperforms $\Delta$SCF(SCAN), which has an MAE and RMSE of 0.21 and 0.25 eV. The use of a CVS scheme like S1 for the correlated $\Delta$ methods is discouraged if quantitative agreement is sought. Furthermore, as has previously been concluded by others,\cite{Ljubic2014} we cannot recommend the use of $\Delta$MP2 for the prediction of core excitations or ionizations. In the future, it may be interesting to explore whether regularization\cite{shee2021regularized, Lee2019} can address the limitations of $\Delta$MP2. Finally, we note that the use of the perturbative (T) triples correction with the best scheme that allows for it, S2, does not seem to offer a significant improvement over CCSD. Perhaps this is because the effect of triples is small (based on the excellent results obtained with $\Delta$CCSD(S2) and $\Delta$CCSD(S3)) or perhaps a full triples treatment is needed to obtain further significant improvement. Considering the tractability of our $\Delta$CCSD schemes S2 and S3, as well as the challenge of approaching the complete basis set limit, it is indeed encouraging that our results with $\Delta$CCSD(S2) and $\Delta$CCSD(S3) are attaining errors that approach the given experimental uncertainties (typically on the order of 0.1 eV). Apart from residual basis set incompleteness, the use of atomic relativistic corrections, and the truncation of correlation at the CCSD and CCSD(T) level, there are two (presumably small) additional energetic effects that we have neglected. The first is the coupling of the resonance with the Auger continuum\cite{Carravetta1991} and the second is vibronic effects \cite{Coreno1999, Prince1999, DeSimone2002, Duflot2003}. \begin{acknowledgement} The authors thank Diptarka Hait for fruitful discussions. J. L. thanks David Reichman for support. This work was supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy, under Contract No. DE-AC02-05CH11231. \end{acknowledgement} \begin{suppinfo} All the Supporting Information is provided as .xlsx spreadsheets. For both excitations and ionizations via the three $\Delta$CC schemes, the following data is provided: SCF energies, correlation energies, SCF and CCSD $\braket{S^2}$ values.
For the ROKS(SCAN), FC-CVS-EOM-CCSD-EE, and FC-CVS-EOM-CCSD-IP calculations, the computed excitation and ionization energies are provided. \end{suppinfo}
1,108,101,563,087
arxiv
\section{Introduction} An important problem in 4-manifold topology is to understand which manifolds carry symplectic structures (i.e., closed non-degenerate 2-forms), and to develop invariants that can distinguish symplectic manifolds. Additionally, one would like to understand to what extent the category of symplectic manifolds is richer than that of K\"ahler (or complex projective) manifolds. Similar questions may be asked about singular curves inside, e.g., the complex projective plane. The two types of questions are related to each other via symplectic branched covers. A branched cover of a symplectic 4-manifold with a (possibly singular) symplectic branch curve carries a natural symplectic structure. Conversely, using approximately holomorphic techniques it can be shown that every compact symplectic 4-manifold is a branched cover of the complex projective plane, with a branch curve presenting nodes (of both orientations) and complex cusps as its only singularities (cf.\ \S \ref{sec:covers}). The topology of the 4-manifold and that of the branch curve are closely related to each other; for example, using braid monodromy techniques to study the branch curve, one can reduce the classification of symplectic 4-manifolds to a (hard) question about factorizations in the braid group (cf.\ \S \ref{sec:bmf}). Conversely, in some examples the topology of the branch curve complement (in particular its fundamental group) admits a simple description in terms of the total space of the covering (cf.\ \S \ref{sec:pi1}). In the language of branch curves, the failure of most symplectic manifolds to admit integrable complex structures translates into the failure of most symplectic branch curves to be isotopic to complex curves. While the symplectic isotopy problem has a negative answer for plane curves with cusp and node singularities, it is interesting to investigate this failure more precisely. Various partial results have been obtained recently about situations where isotopy holds (for smooth curves; for curves of low degree), and about isotopy up to stabilization or regular homotopy (cf.\ \S \ref{sec:isotopy}). On the other hand, many known examples of non-isotopic curves can be understood in terms of twisting along Lagrangian annuli (or equivalently, Luttinger surgery of the branched covers), leading to some intriguing open questions about the topology of symplectic 4-manifolds versus that of K\"ahler surfaces. \section{Background} In this section we review various classical facts about symplectic manifolds; the reader unfamiliar with the subject is referred to the book \cite{McS} for a systematic treatment of the material. Recall that a {\it symplectic form} on a smooth manifold is a 2-form $\omega$ such that $d\omega=0$ and $\omega\wedge\dots\wedge \omega$ is a volume form. The prototype of a symplectic form is the 2-form $\omega_0=\sum dx_i\wedge dy_i$ on $\mathbb{R}^{2n}$. In fact, one of the most classical results in symplectic topology, Darboux's theorem, asserts that every symplectic manifold is locally symplectomorphic to $(\mathbb{R}^{2n},\omega_0)$: hence, unlike Riemannian metrics, symplectic structures have no local invariants. Since we are interested primarily in compact examples, let us mention compact oriented surfaces (taking $\omega$ to be an arbitrary area form), and the complex projective space $\mathbb{CP}^n$ (equipped with the Fubini-Study K\"ahler form). 
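For the flat model one can check the definition directly (a standard verification, spelled out here only for convenience): $\omega_0=\sum dx_i\wedge dy_i$ has constant coefficients, so $d\omega_0=0$, and its top exterior power is $$\omega_0^{\wedge n}=\Bigl(\sum_{i=1}^n dx_i\wedge dy_i\Bigr)^{\wedge n}=n!\;dx_1\wedge dy_1\wedge\dots\wedge dx_n\wedge dy_n,$$ a volume form: in the multinomial expansion every term containing a repeated index vanishes, and the $n!$ surviving terms all agree because 2-forms commute with each other.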
More generally, since any submanifold to which $\omega$ restricts non-degenerately inherits a symplectic structure, all complex projective manifolds are symplectic. However, the symplectic category is strictly larger than the complex projective category, as first evidenced by Thurston in 1976 \cite{Th}. In 1994 Gompf obtained the following spectacular result using the {\it symplectic sum} construction \cite{Go1}: \begin{theorem}[Gompf] Given any finitely presented group $G$, there exists a compact symplectic 4-manifold $(X,\omega)$ such that $\pi_1(X)\simeq G$. \end{theorem} Hence, a general symplectic manifold cannot be expected to carry a complex structure; however, we can equip it with a compatible {\it almost-complex} structure, i.e.\ there exists $J\in\mathrm{End}(TX)$ such that $J^2=-\mathrm{Id}$ and $g(\cdot,\cdot):=\omega(\cdot,J\cdot)$ is a Riemannian metric. Hence, at any given point $x\in X$ the tangent space $(T_xX,\omega,J)$ can be identified with $(\mathbb{C}^n,\omega_0,i)$, but there is no control over the manner in which $J$ varies from one point to another ($J$ is not {\it integrable}). In particular, the $\bar\partial$ operator associated to $J$ does not satisfy $\bar\partial^2=0$, and hence there are no local holomorphic coordinates. \medskip An important problem in 4-manifold topology is to understand the hierarchy formed by the three main classes of compact oriented 4-manifolds: (1) complex projective, (2) symplectic, and (3) smooth. Each class is a proper subset of the next one, and many obstructions and examples are known, but we are still very far from understanding what exactly causes a smooth 4-manifold to admit a symplectic structure, or a symplectic 4-manifold to admit an integrable complex structure. One of the main motivations to study symplectic 4-manifolds is that they retain some (but not all) features of complex projective manifolds: for example the structure of their Seiberg-Witten invariants, which in both cases are non-zero and count certain embedded curves \cite{Ta1,Ta2}. At the same time, every compact oriented smooth 4-manifold with $b_2^+\ge 1$ admits a ``near-symplectic'' structure, i.e.\ a closed 2-form which vanishes along a union of circles and is symplectic over the complement of its zero set \cite{GK,Ho1}; and it appears that some structural properties of symplectic manifolds carry over to the world of smooth 4-manifolds (see e.g.\ \cite{Ta3,Asinglp}). Many new developments have contributed to improve our understanding of symplectic 4-manifolds over the past ten years (while results are much scarcer in higher dimensions). Perhaps the most important source of new results has been the study of pseudo-holomorphic curves in their various incarnations: Gromov-Witten invariants, Floer homology, \dots (for an overview of the subject see \cite{McS2}). At the same time, gauge theory (mostly Seiberg-Witten theory, but also more recently Ozsvath-Szabo theory) has made it possible to identify various {\it obstructions} to the existence of symplectic structures in dimension 4 (cf.\ e.g.\ \cite{Ta1,Ta2}). On the other hand, various new constructions, such as link surgery \cite{FS1}, symplectic sum \cite{Go1}, and symplectic rational blowdown \cite{Sy} have made it possible to exhibit interesting families of non-K\"ahler symplectic 4-manifolds. 
In a slightly different direction, approximately holomorphic geometry (first introduced by Donaldson in \cite{Do1}) has made it possible to obtain various structure results, showing that symplectic 4-manifolds can be realized as symplectic Lefschetz pencils \cite{Do2} or as branched covers of $\mathbb{CP}^2$ \cite{Au2}. In the rest of this paper we will focus on this latter approach, and discuss the topology of {\it symplectic branched covers} in dimension 4. \section{Symplectic branched covers}\label{sec:covers} Let $X$ and $Y$ be compact oriented 4-manifolds, and assume that $Y$ carries a symplectic form $\omega_Y$. \begin{definition} A smooth map $f:X\to Y$ is a {\em symplectic branched covering} if given any point $p\in X$ there exist neighborhoods $U\ni p$, $V\ni f(p)$, and local coordinate charts $\phi:U\to\mathbb{C}^2$ $($orientation-preserving$)$ and $\psi:V\to\mathbb{C}^2$ $($adapted to $\omega_Y$, i.e.\ such that $\omega_Y$ restricts positively to any complex line in $\mathbb{C}^2)$, in which $f$ is given by one of: \smallskip $(i)$ $(x,y)\mapsto (x,y)$ $($local diffeomorphism$)$, $(ii)$ $(x,y)\mapsto (x^2,y)$ $($simple branching$)$, $(iii)$ $(x,y)\mapsto (x^3-xy,y)$ $($ordinary cusp$)$. \end{definition} These local models are the same as for the singularities of a generic holomorphic map from $\mathbb{C}^2$ to itself, except that the requirements on the local coordinate charts have been substantially weakened. The {\it ramification curve} $R=\{p\in X,\ \det(df)=0\}$ is a smooth submanifold of $X$, and its image $D=f(R)$ is the {\it branch curve}, described in the local models by the equations $z_1=0$ for $(x,y)\mapsto (x^2,y)$ and $27z_1^2=4z_2^3$ for $(x,y)\mapsto (x^3-xy,y)$. The conditions imposed on the local coordinate charts imply that $D$ is a {\it symplectic curve} in $Y$ (i.e., $\omega_{Y|TD}>0$ at every point of $D$). Moreover the restriction of $f$ to $R$ is an immersion everywhere except at the cusps. Hence, besides the ordinary complex cusps imposed by the local model, the only generic singularities of $D$ are transverse double points (``nodes''), which may occur with either the complex orientation or the anti-complex orientation. We have the following result \cite{Au2}: \begin{proposition}\label{prop:au2} Given a symplectic branched covering $f:X\to Y$, the manifold $X$ inherits a natural symplectic structure $\omega_X$, canonical up to isotopy, in the cohomology class $[\omega_X]=f^*[\omega_Y]$. \end{proposition} The symplectic form $\omega_X$ is constructed by adding to $f^*\omega_Y$ a small multiple of an exact form $\alpha$ with the property that, at every point of $R$, the restriction of $\alpha$ to $\mathrm{Ker}(df)$ is positive. Uniqueness up to isotopy follows from the convexity of the space of such exact 2-forms and Moser's theorem. Conversely, we can realize every compact symplectic 4-manifold as a symplectic branched cover of $\mathbb{CP}^2$ \cite{Au2}, at least if we assume {\it integrality}, i.e.\ if we require that $[\omega]\in H^2(X,\mathbb{Z})$, which does not place any additional restrictions on the diffeomorphism type of $X$: \begin{theorem}\label{thm:au2} Given an integral compact symplectic 4-manifold $(X^4,\omega)$ and an integer $k\gg 0$, there exists a symplectic branched covering $f_k:X\to\mathbb{CP}^2$, canonical up to isotopy if $k$ is sufficiently large. 
\end{theorem} Moreover, the natural symplectic structure induced on $X$ by the Fubini-Study K\"ahler form and $f_k$ (as given by Proposition \ref{prop:au2}) agrees with $\omega$ up to isotopy and scaling (multiplication by~$k$). The main tool in the construction of the maps $f_k$ is {\it approximately holomorphic geometry} \cite{Do1,Do2,Au2}. Equip $X$ with a compatible almost-complex structure, and consider a complex line bundle $L\to X$ such that $c_1(L)=[\omega]$: then for $k\gg 0$ the line bundle $L^{\otimes k}$ admits many approximately holomorphic sections, i.e.\ sections such that $\sup |\bar\partial s|\ll\sup |\partial s|$. Generically, a triple of such sections $(s_0,s_1,s_2)$ has no common zeroes, and determines a projective map $f:p\mapsto [s_0(p)\!:\!s_1(p)\!:\!s_2(p)]$. Theorem \ref{thm:au2} is then proved by constructing triples of sections which satisfy suitable transversality estimates, ensuring that the structure of $f$ near its critical locus is the expected one \cite{Au2}. (In the complex case it would be enough to pick three generic holomorphic sections, but in the approximately holomorphic context one needs to work harder and obtain uniform transversality estimates on the derivatives of $f$.) Because for large $k$ the maps $f_k$ are canonical up to isotopy through symplectic branched covers, the topology of $f_k$ and of its branch curve $D_k$ can be used to define invariants of the symplectic manifold $(X,\omega)$. The only generic singularities of the plane curve $D_k$ are nodes (transverse double points) of either orientation and complex cusps, but in a generic one-parameter family of branched covers pairs of nodes with opposite orientations may be cancelled or created. However, recalling that a node of $D_k$ corresponds to the occurrence of two simple branch points in the same fiber of $f_k$, the creation of a pair of nodes can only occur in a manner compatible with the branched covering structure, i.e.\ involving disjoint sheets of the covering. Hence, for large $k$ the sequence of branch curves $D_k$ is, up to isotopy (equisingular deformation among symplectic curves), cancellations and admissible creations of pairs of nodes, an invariant of $(X,\omega)$. The ramification curve of $f_k$ is just a smooth connected symplectic curve representing the homology class Poincar\'e dual to $3k[\omega]-c_1(TX)$, but the branch curve $D_k$ becomes more and more complicated as $k$ increases: in terms of the symplectic volume and Chern numbers of $X$, its degree (or homology class) $d_k$, genus $g_k$, and number of cusps $\kappa_k$ are given by $$d_k=3k^2\,[\omega]^2-k\,c_1\cdot [\omega],\qquad 2g_k-2=9 k^2\,[\omega]^2-9 k\,c_1\cdot [\omega]+2c_1^2,$$ $$\kappa_k=12k^2\,[\omega]^2-9k\,c_1\cdot [\omega]+2c_1^2-c_2.$$ It is also worth mentioning that, to date, there is no evidence suggesting that negative nodes actually do occur in these high degree branch curves; our inability to rule out their presence might well be a shortcoming of the approximately holomorphic techniques, rather than an intrinsic feature of symplectic 4-manifolds. So in the following sections we will occasionally consider the more conventional problem of understanding isotopy classes of curves presenting only positive nodes and cusps, although most of the discussion applies equally well to curves with negative nodes. \medskip Assuming that the topology of the branch curve is understood (we will discuss how to achieve this in the next section), one still needs to consider the branched covering $f$ itself.
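As an aside, here is a quick numerical illustration of these formulas (added for concreteness; it only uses the standard characteristic numbers of the projective plane). For $X=\mathbb{CP}^2$ with $[\omega]$ the hyperplane class, one has $[\omega]^2=1$, $c_1\cdot[\omega]=3$, $c_1^2=9$ and $c_2=3$, so that $$d_k=3k(k-1),\qquad 2g_k-2=9k^2-27k+18,\qquad \kappa_k=12k^2-27k+15.$$ For $k=2$ this gives a degree 6 curve of genus 1 with 9 cusps, i.e.\ the classical nine-cuspidal sextic arising as the branch curve of a generic projection of the Veronese surface; for $k=3$ one already gets a degree 18 curve with 42 cusps.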
The structure of $f$ is determined by its {\it monodromy morphism} $\theta:\pi_1(\mathbb{CP}^2-D)\to S_N$, where $N$ is the degree of the covering $f$. Fixing a base point $p_0\in \mathbb{CP}^2-D$, the image by $\theta$ of a loop $\gamma$ in the complement of $D$ is the permutation of the fiber $f^{-1}(p_0)$ induced by the monodromy of $f$ along $\gamma$. (Since viewing this permutation as an element of $S_N$ depends on the choice of an identification between $f^{-1}(p_0)$ and $\{1,\dots,N\}$, the morphism $\theta$ is only well-defined up to conjugation by an element of $S_N$.) By Proposition \ref{prop:au2}, the isotopy class of the branch curve $D$ and the monodromy morphism $\theta$ determine completely the symplectic 4-manifold $(X,\omega)$ up to symplectomorphism. Consider a loop $\gamma$ which bounds a small topological disc intersecting $D$ transversely once: such a loop plays a role similar to the meridian of a knot, and is called a {\it geometric generator} of $\pi_1(\mathbb{CP}^2-D)$. Then $\theta(\gamma)$ is a transposition (because of the local model near a simple branch point). Since the image of $\theta$ is generated by transpositions and acts transitively on the fiber (assuming $X$ to be connected), $\theta$ is a surjective group homomorphism. Moreover, the smoothness of $X$ above the singular points of $D$ imposes certain compatibility conditions on $\theta$. Therefore, not every singular plane curve can be the branch curve of a smooth covering; moreover, the morphism $\theta$, if it exists, is often unique (up to conjugation in $S_N$). In the case of algebraic curves, this uniqueness property, which holds except for a finite list of well-known counterexamples, is known as Chisini's conjecture, and was essentially proved by Kulikov a few years ago \cite{Ku}. The upshot of the above discussion is that, in order to understand symplectic 4-manifolds, it is in principle enough to understand singular plane curves. Moreover, if the branch curve of a symplectic covering $f:X\to \mathbb{CP}^2$ happens to be a complex curve, then the integrable complex structure of $\mathbb{CP}^2$ can be lifted to an integrable complex structure on $X$, compatible with the symplectic structure; this implies that $X$ is a complex projective surface. So, considering the branched coverings constructed in Theorem \ref{thm:au2}, we have: \begin{corollary}\label{cor:au2} For $k\gg 0$ the branch curve $D_k\subset\mathbb{CP}^2$ is isotopic to a complex curve (up to node cancellations) if and only if $X$ is a complex projective surface. \end{corollary} This motivates the study of the {\it symplectic isotopy problem}, which we will discuss in \S \ref{sec:isotopy}. For now we focus on the use of braid monodromy invariants to study the topology of singular plane curves. In the present context, the goal of this approach is to reduce the classification of symplectic 4-manifolds to a purely algebraic problem, in a manner vaguely reminiscent of the role played by Kirby calculus in the classification of smooth 4-manifolds; as we shall see below, representing symplectic 4-manifolds as branched covers of $\mathbb{CP}^2$ naturally leads one to study the calculus of factorizations in braid groups. \section{The topology of singular plane curves}\label{sec:bmf} The topology of singular algebraic plane curves has been studied extensively since Zariski. 
One of the main tools is the notion of {\it braid monodromy} of a plane curve, which has been used in particular by Moishezon and Teicher in many papers since the early 1980s in order to study branch curves of generic projections of complex projective surfaces (see \cite{Te1} for a detailed overview). Braid monodromy techniques can be applied to the more general case of {\it Hurwitz curves} in ruled surfaces, i.e.\ curves which behave in a generic manner with respect to the ruling. In the case of $\mathbb{CP}^2$, we consider the projection $\pi:\mathbb{CP}^2-\{(0:0:1)\}\to \mathbb{CP}^1$ given by $(x:y:z)\mapsto (x:y)$. \begin{definition}\label{def:hurwitz} A curve $D\subset\mathbb{CP}^2$ $($not passing through $(0\!:\!0\!:\!1))$ is a Hurwitz curve (or braided curve) if $D$ is positively transverse to the fibers of $\pi$ everywhere except at finitely many points where $D$ is smooth and non-degenerately tangent to the fibers. \end{definition} \begin{figure}[t] \begin{center} \setlength{\unitlength}{0.7mm} \begin{picture}(80,52)(-40,-12) \put(0,-2){\vector(0,-1){8}} \put(2,-6){$\pi:(x:y:z)\mapsto (x:y)$} \put(-40,-15){\line(1,0){80}} \put(-38,-12){$\mathbb{CP}^1$} \put(-40,0){\line(1,0){80}} \put(-40,40){\line(1,0){80}} \put(-40,0){\line(0,1){40}} \put(40,0){\line(0,1){40}} \put(-38,33){$\mathbb{CP}^2-\{0\!:\!0\!:\!1\}$} \put(27,31){$D$} \multiput(-20,20)(0,-2){18}{\line(0,-1){1}} \multiput(-5,20)(0,-2){18}{\line(0,-1){1}} \multiput(15,15)(0,-2){9}{\line(0,-1){1}} \multiput(15,-9)(0,-2){3}{\line(0,-1){1}} \put(-20,-15){\circle*{1}} \put(-5,-15){\circle*{1}} \put(15,-15){\circle*{1}} \qbezier[140](25,35)(5,30)(-5,20) \qbezier[60](-5,20)(-10,15)(-15,15) \qbezier[60](-15,15)(-20,15)(-20,20) \qbezier[60](-20,20)(-20,25)(-15,25) \qbezier[60](-15,25)(-10,25)(-5,20) \qbezier[100](-5,20)(0,15)(15,15) \qbezier[250](15,15)(5,15)(-30,5) \put(-20,20){\circle*{1}} \put(-5,20){\circle*{1}} \put(15,15){\circle*{1}} \end{picture} \end{center} \caption{A Hurwitz curve in $\mathbb{CP}^2$} \end{figure} The projection $\pi$ makes $D$ a singular branched cover of $\mathbb{CP}^1$, of degree $d=\deg D=[D]\cdot[\mathbb{CP}^1]$. Each fiber of $\pi$ is a complex line $\ell\simeq \mathbb{C}\subset\mathbb{CP}^2$, and if $\ell$ does not pass through any of the singular points of $D$ nor any of its vertical tangencies, then $\ell\cap D$ consists of $d$ distinct points. We can trivialize the fibration $\pi$ over an affine subset $\mathbb{C}\subset\mathbb{CP}^1$, and define the {\it braid monodromy morphism} $$\rho:\pi_1(\mathbb{C}-\mathrm{crit}(\pi_{|D}))\to B_d.$$ Here $B_d$ is the Artin braid group on $d$ strings (the fundamental group of the configuration space $\mathrm{Conf}_d(\mathbb{C})$ of $d$ distinct points in $\mathbb{C}$), and for any loop $\gamma$ the braid $\rho(\gamma)$ describes the motion of the $d$ points of $\ell\cap D$ inside the fibers of $\pi$ as one moves along the loop $\gamma$. Equivalently, choosing an ordered system of arcs generating the free group $\pi_1(\mathbb{C}-\mathrm{crit}(\pi_{|D}))$, one can express the braid monodromy of $D$ by a {\it factorization} $$\Delta^2=\prod_{i} \rho_i$$ of the central element $\Delta^2$ (representing a full rotation by $2\pi$) in $B_d$, where each factor $\rho_i$ is the monodromy around one of the special points (cusps, nodes, tangencies) of $D$. 
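Although it is not needed in what follows, it may help to recall the standard local models for these factors (this dictionary is classical and not specific to the present text): each $\rho_i$ is a power $Q_i^{r_i}$ of a half-twist $Q_i\in B_d$ along a suitable path joining two of the $d$ points, with $r_i=1$ for a vertical tangency, $r_i=2$ for a positive node, $r_i=3$ for a cusp, and $r_i=-2$ for a negative node. In particular, for a smooth curve of degree $d$ the factorization expresses $\Delta^2$ as a product of $d(d-1)$ half-twists, one for each vertical tangency.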
One and the same Hurwitz curve can be described by different factorizations of $\Delta^2$ in $B_d$: switching to a different ordered system of generators of $\pi_1(\mathbb{C}-\mathrm{crit}(\pi_{|D}))$ affects the collection of factors $\langle \rho_1,\dots,\rho_r\rangle $ by a sequence of {\it Hurwitz moves}, i.e.\ operations of the form $$\langle \rho_1,\,\cdots,\rho_i,\rho_{i+1},\,\cdots,\rho_r\rangle \, \longleftrightarrow\, \langle \rho_1,\,\cdots,(\rho_i\rho_{i+1}\rho_i^{-1}), \rho_i,\,\cdots,\rho_r\rangle$$ (note that such a move leaves the product of the factors unchanged, since $(\rho_i\rho_{i+1}\rho_i^{-1})\,\rho_i=\rho_i\rho_{i+1}$); and changing the identification between the reference fiber $(\ell,\ell\cap D)$ of $\pi$ and the base point in $\mathrm{Conf}_d(\mathbb{C})$ affects braid monodromy by a {\it global conjugation} $$\langle\rho_1,\,\cdots,\rho_r\rangle \,\longleftrightarrow\, \langle b^{-1}\rho_1 b,\,\cdots,b^{-1}\rho_r b\rangle. $$ For Hurwitz curves whose only singularities are cusps and nodes (of either orientation), or more generally curves with $A_n$ (and $\overline{A}_n$) singularities, the braid monodromy factorization determines the isotopy type completely (see for example \cite{KK}). Hence, determining whether two given Hurwitz curves are isotopic among Hurwitz curves is equivalent to determining whether two given factorizations of $\Delta^2$ coincide up to Hurwitz moves and global conjugation. \medskip It is easy to see that any Hurwitz curve in $\mathbb{CP}^2$ can be made symplectic by an isotopy through Hurwitz curves: namely, the image of any Hurwitz curve by the rescaling map $(x:y:z)\mapsto (x:y:\lambda z)$ is a Hurwitz curve, and symplectic for $|\lambda|\ll 1$. On the other hand, a refinement of Theorem \ref{thm:au2} makes it possible to assume without loss of generality that the branch curves $D_k\subset\mathbb{CP}^2$ are Hurwitz curves \cite{AK}. So, from now on we can specifically consider symplectic coverings with Hurwitz branch curves. In this setting, braid monodromy gives a purely combinatorial description of the topology of compact (integral) symplectic 4-manifolds. The braid monodromy of the branch curves $D_k$ given by Theorem \ref{thm:au2} can be computed explicitly for various families of complex projective surfaces (non-K\"ahler examples are currently beyond reach). In fact, in the complex case the branched coverings $f_k$ are isotopic to generic projections of projective embeddings. Accordingly, most of these computations rely purely on methods from algebraic geometry, using the degeneration techniques extensively developed by Moishezon and Teicher (see \cite{AGTV,Mo1,Mo2,MRT,Ro,Te1,Te2} and references therein); but approximately holomorphic methods can be used to simplify the calculations and bring a whole new range of examples within reach \cite{ADKY}. This includes some complex surfaces of general type which are mutually homeomorphic and have identical Seiberg-Witten invariants but for which it is unknown whether they are symplectomorphic or even diffeomorphic (the {\it Horikawa surfaces}). However, the main obstacle standing in the way of this approach to the topology of symplectic 4-manifolds is the intractability of the so-called ``Hurwitz problem'' for braid monodromy factorizations: namely, there is no algorithm to decide whether two given braid monodromy factorizations are identical up to Hurwitz moves. Therefore, since we are unable to compare braid monodromy factorizations, we have to extract the information contained in them by indirect means, via the introduction of more manageable (but less powerful) invariants.
\section{Fundamental groups of branch curve complements}\label{sec:pi1} The idea of studying algebraic plane curves by determining the fundamental groups of their complements is a very classical one, which goes back to Zariski and Van Kampen. More recently, Moishezon and Teicher have shown that fundamental groups of branch curve complements can be used as a major tool to further our understanding of complex projective surfaces (cf.\ e.g.\ \cite{Mo1,MT,Te1}). By analogy with the situation for knots in $S^3$, one expects the topology of the complement to carry a lot of information about the curve; however, in this case the fundamental group does not determine the isotopy type. For an algebraic curve in $\mathbb{CP}^2$, or more generally for a Hurwitz curve, the fundamental group of the complement is determined in an explicit manner by the braid monodromy factorization, via the Zariski-Van Kampen theorem. Hence, calculations of fundamental groups of complements usually rely on braid monodromy techniques. A close examination of the available data suggests that, contrary to what has often been claimed, in the specific case of generic projections of complex surfaces projectively embedded by sections of a sufficiently ample linear system (i.e.\ taking $k\gg 0$ in Theorem \ref{thm:au2}), the fundamental group of the branch curve complement may be determined in an elementary manner by the topology of the surface (see below). In the symplectic setting, the fundamental group of the complement of the branch curve $D$ of a covering $f:X\to\mathbb{CP}^2$ is affected by node creation or cancellation operations. Indeed, adding pairs of nodes (in a manner compatible with the monodromy morphism $\theta:\pi_1(\mathbb{CP}^2-D)\to S_N$) introduces additional commutation relations between geometric generators of the fundamental group. Hence, it is necessary to consider a suitable ``symplectic stabilization'' of $\pi_1(\mathbb{CP}^2-D)$ \cite{ADKY}: \begin{definition}\label{def:stabgp} Let $K$ be the normal subgroup of $\pi_1(\mathbb{CP}^2-D)$ generated by the commutators $[\gamma,\gamma']$ for all pairs $\gamma,\gamma'$ of geometric generators such that $\theta(\gamma)$ and $\theta(\gamma')$ are disjoint commuting transpositions. Then the symplectic stabilization of $\pi_1(\mathbb{CP}^2-D)$ is the quotient $\bar{G}=\pi_1(\mathbb{CP}^2-D)/K$. \end{definition} Considering the branch curves $D_k$ of the coverings given by Theorem \ref{thm:au2}, we have the following result \cite{ADKY}: \begin{theorem}[A.-Donaldson-Katzarkov-Yotov] For $k\gg 0$, the stabilized group $\bar{G}_k(X,\omega)= \pi_1(\mathbb{CP}^2-D_k)/K_k$ is an invariant of the symplectic manifold $(X^4,\omega)$. \end{theorem} The fundamental group of the complement of a plane branch curve $D\subset\mathbb{CP}^2$ comes naturally equipped with two morphisms: the symmetric-group-valued monodromy homomorphism $\theta$ discussed above, and the abelianization map $\delta:\pi_1(\mathbb{CP}^2\!-\!D)\to H_1(\mathbb{CP}^2\!-\!D,\mathbb{Z})$. Since we only consider irreducible branch curves, we have $H_1(\mathbb{CP}^2\!-\!D,\mathbb{Z})\simeq \mathbb{Z}_d$, where $d=\deg D$, and $\delta$ counts the linking number (mod $d$) with the curve $D$. The morphisms $\theta$ and $\delta$ are surjective, but the image of $(\theta,\delta):\pi_1(\mathbb{CP}^2-D)\to S_N\times \mathbb{Z}_d$ is the index 2 subgroup consisting of all pairs $(\sigma,p)$ such that the permutation $\sigma$ and the integer $p$ have the same parity (note that $d$ is always even: the product of the images under $\theta$ of the $d$ geometric generators coming from a generic line is trivial, and a product of $d$ transpositions can only equal the identity if $d$ is even).
The subgroup $K$ introduced in Definition \ref{def:stabgp} lies in the kernel of $(\theta,\delta)$; therefore, setting $G^0=\mathrm{Ker}(\theta,\delta)/K$, we have an exact sequence $$1\longrightarrow G^0\longrightarrow \bar{G}\stackrel{(\theta,\delta)} {\longrightarrow}S_N\times \mathbb{Z}_d\longrightarrow \mathbb{Z}_2\longrightarrow 1.$$ Moreover, assume that the symplectic 4-manifold $X$ is simply connected, and denote by $L=f^*[\mathbb{CP}^1]$ the pullback of the hyperplane class and by $K_X=-c_1(TX)$ the canonical class. Then we have the following result \cite{ADKY}: \begin{theorem}[A.-Donaldson-Katzarkov-Yotov]\label{thm:adky} If $\pi_1(X)=1$ then there is a natural surjective homomorphism $\phi:\mathrm{Ab}(G^0)\twoheadrightarrow (\mathbb{Z}^2/\Lambda)^{N-1}$, where $\Lambda=\{(L\cdot C, K_X\cdot C),\ C\in H_2(X,\mathbb{Z})\}\subset\mathbb{Z}^2$. \end{theorem} The fundamental groups of the branch curve complements have been computed for generic polynomial maps to $\mathbb{CP}^2$ on various algebraic surfaces, using braid monodromy techniques (cf.\ \S \ref{sec:bmf}) and the Zariski-Van Kampen theorem. Since in the symplectic setting Theorem \ref{thm:au2} gives uniqueness up to isotopy only for $k\gg 0$, we restrict ourselves to those examples for which the fundamental groups have been computed for $\mathbb{CP}^2$-valued maps of arbitrarily large degree. The first such calculations were carried out by Moishezon and Teicher, for $\mathbb{CP}^2$, $\mathbb{CP}^1\times\mathbb{CP}^1$ \cite{Mo2}, and Hirzebruch surfaces (\cite{MRT}, see also \cite{ADKY}); the answer is also known for some specific linear systems on rational surfaces and K3 surfaces realized as complete intersections (by work of Robb \cite{Ro}, see also related papers by Teicher et al). Additionally, the symplectic stabilizations of the fundamental groups have been computed for all double covers of $\mathbb{CP}^1\times\mathbb{CP}^1$ branched along connected smooth algebraic curves \cite{ADKY}, which includes an infinite family of surfaces of general type. In all these examples it turns out that, if one considers projections of sufficiently large degree (i.e., assuming $k\ge 3$ for $\mathbb{CP}^2$ and $k\ge 2$ for the other examples), the structure of $G^0$ is very simple, and obeys the following conjecture: \begin{conj} Assume that $X$ is a simply connected algebraic surface and $k\gg 0$. Then: $(1)$ the symplectic stabilization operation is trivial, i.e.\ $K=\{1\}$ and $\bar{G}=\pi_1(\mathbb{CP}^2-D)$; $(2)$ the homomorphism $\phi:\mathrm{Ab}(G^0)\to (\mathbb{Z}^2/\Lambda)^{N-1}$ is an isomorphism; and $(3)$ the commutator subgroup $[G^0,G^0]$ is a quotient of $\,\mathbb{Z}_2\times\mathbb{Z}_2$. \end{conj} \section{The symplectic isotopy problem} \label{sec:isotopy} The symplectic isotopy problem asks under which conditions (assumptions on degree, genus, types and numbers of singular points) it is true that any symplectic curve in $\mathbb{CP}^2$ (or more generally in a complex surface) is symplectically isotopic to a complex curve (by isotopy, we mean a continuous family of symplectic curves with the same singularities). The first result in this direction is due to Gromov, who proved that every smooth symplectic curve of degree 1 or 2 in $\mathbb{CP}^2$ is isotopic to a complex curve \cite{Gr}. 
The argument relies on a careful study of the deformation problem for pseudo-holomorphic curves: starting from an almost-complex structure $J$ for which the given curve $C$ is pseudo-holomorphic, and considering a family of almost-complex structures $(J_t)_{t\in [0,1]}$ interpolating between $J$ and the standard complex structure, one can prove the existence of smooth $J_t$-holomorphic curves $C_t$ realizing an isotopy between $C$ and a complex curve. The isotopy property is expected to hold for smooth and nodal curves in all degrees, and also for curves with sufficiently few cusps. For smooth curves, successive improvements of Gromov's result have been obtained by Sikorav (for degree $3$), Shevchishin (for degree $\le 6$), and more recently Siebert and Tian \cite{ST}: \begin{theorem}[Siebert-Tian] Every smooth symplectic curve of degree $\le 17$ in $\mathbb{CP}^2$ is symplectically isotopic to a complex curve. \end{theorem} Some results have been obtained by Barraud and Shevchishin for nodal curves of low genus. For example, the following result holds \cite{Sh}: \begin{theorem}[Shevchishin] Every irreducible nodal symplectic curve of genus $g\le 4$ in $\mathbb{CP}^2$ is symplectically isotopic to a complex curve. \end{theorem} Moreover, work in progress by S.\ Francisco is expected to lead to an isotopy result for curves of low degree with node and cusp singularities (subject to specific constraints on the number of cusps). If one aims to classify symplectic 4-manifolds by enumerating all branched covers of $\mathbb{CP}^2$ according to the degree and number of singularities of the branch curve, then the above cases are those for which the classification is the simplest and does not include any non-K\"ahler examples. On the other hand, Corollary \ref{cor:au2} implies that the isotopy property cannot hold for all curves with node and cusp singularities; in fact, explicit counterexamples have been constructed by Moishezon \cite{Mo3} (see below). \medskip Even when the isotopy property fails, the classification of singular plane curves becomes much simpler if one considers an equivalence relation weaker than isotopy, such as {\it regular homotopy}, or {\it stable isotopy}. Namely, let $D_1,D_2$ be two Hurwitz curves (see Definition \ref{def:hurwitz}) in $\mathbb{CP}^2$ (or more generally in a rational ruled surface), with node and cusp singularities (or more generally singularities of type $A_n$). Assume that $D_1$ and $D_2$ represent the same homology class, and that they have the same numbers of singular points of each type. Then we have the following results \cite{AKS,KK}: \begin{theorem}[A.-Kulikov-Shevchishin]\label{thm:aks} Under the above assumptions, $D_1$ and $D_2$ are {\em regular homotopic} among Hurwitz curves, i.e.\ they are isotopic up to creations and cancellations of pairs of nodes. \end{theorem} \begin{theorem}[Kharlamov-Kulikov]\label{thm:kk} Under the above assumptions, let $D'_i$ $(i\in\{1,2\})$ be the curve obtained by adding to $D_i$ a union of $n$ generic lines (or fibers of the ruling) intersecting $D_i$ transversely at smooth points, and smoothing out all the resulting intersections. Then for all large enough values of $n$ the Hurwitz curves $D'_1$ and $D'_2$ are isotopic. 
\end{theorem} Unfortunately, Theorem \ref{thm:aks} does not seem to have any implications for the topology of symplectic 4-manifolds, because the node creation operations appearing in the regular homotopy need not be admissible: even if both $D_1$ and $D_2$ are branch curves of symplectic coverings, the homotopy may involve plane curves for which the branched cover is not smooth. For similar reasons, the applicability of Theorem \ref{thm:kk} to branch curves is limited to the case of double covers, i.e.\ symplectic 4-manifolds which admit {\it hyperelliptic} Lefschetz fibrations. In particular, for genus 2 Lefschetz fibrations we have the following result \cite{AuGo}: \begin{theorem} If the symplectic 4-manifold $X$ admits a genus $2$ Lefschetz fibration, then $X$ becomes complex projective after stabilization by fiber sums with rational surfaces along genus $2$ curves. \end{theorem} It follows from Theorem \ref{thm:kk} that this result extends to all Lefschetz fibrations with monodromy contained in the hyperelliptic mapping class group. However, few symplectic 4-manifolds admit such fibrations, and in general the following question remains open: \begin{question} Let $X_1,X_2$ be two integral compact symplectic 4-manifolds with the same $(c_1^2,\,c_2,\,c_1\!\cdot\![\omega],\,[\omega]^2)$. Do $X_1$ and $X_2$ become symplectomorphic after sufficiently many fiber sums with the same complex projective surfaces (chosen among a finite collection of model holomorphic fibrations)? \end{question} This question can be thought of as the symplectic analogue of the classical result of Wall which asserts that any two simply connected smooth 4-manifolds with the same intersection form become diffeomorphic after repeatedly performing connected sums with $S^2\times S^2$ \cite{Wall}. \medskip A closer look at the known examples of non-isotopic singular plane curves suggests that an even stronger statement might hold. It was first observed in 1999 by Fintushel and Stern \cite{FS2} that many symplectic 4-manifolds contain infinite families of non-isotopic smooth connected symplectic curves representing the same homology class (see also \cite{Sm}). The simplest examples are obtained by ``braiding'' parallel copies of the fiber in an elliptic surface, and are distinguished by comparing the Seiberg-Witten invariants of the corresponding double branched covers. Other examples have been constructed by Smith, Etg\"u and Park, and Vidussi. However, for singular plane curves the first examples were obtained by Moishezon more than ten years ago \cite{Mo3}: \begin{theorem}[Moishezon] For all $p\ge 2$, there exist infinitely many pairwise non-isotopic singular symplectic curves of degree $9p(p-1)$ in $\mathbb{CP}^2$ with $27(p-1)(4p-5)$ cusps and $\frac{27}{2}(p-1)(p-2)(3p^2+3p-8)$ nodes, not isotopic to any complex curve. \end{theorem} Moishezon's approach is purely algebraic (using braid monodromy factorizations), and very technical; the curves that he constructs are distinguished by the fundamental groups of their complements \cite{Mo3}. However a much simpler geometric description of this construction can be given in terms of braiding operations, which makes it possible to distinguish the curves just by comparing the canonical classes of the associated branched covers \cite{ADK}. 
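For concreteness, evaluating the formulas in the statement at the smallest value $p=2$ (a simple arithmetic check, not discussed further in the text): the curves have degree $9p(p-1)=18$ and $27(p-1)(4p-5)=81$ cusps, while the node count $\frac{27}{2}(p-1)(p-2)(3p^2+3p-8)$ vanishes, so already in this case one obtains infinitely many pairwise non-isotopic symplectic curves of degree 18 with 81 cusps and no nodes.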
Given a symplectic covering $f:X\to Y$ with branch curve $D$, and given a Lagrangian annulus $A$ with interior in $Y\setminus D$ and boundary contained in $D$, we can {\it braid} the curve $D$ along the annulus $A$ by performing the local operation depicted on Figure \ref{fig:braiding}. Namely, we cut out a neighborhood $U$ of $A$, and glue it back via a non-trivial diffeomorphism which interchanges two of the connected components of $D\cap \partial U$, in such a way that the product of $S^1$ with the trivial braid is replaced by the product of $S^1$ with a half-twist (see \cite{ADK} for details). \begin{figure}[t] \centering \epsfig{file=braidingbw.eps,height=2.8cm}\\ $D$\hskip3cm$\tilde{D}$\vskip-2mm \caption{The braiding construction}\label{fig:braiding} \end{figure} Braiding the curve $D$ along the Lagrangian annulus $A$ affects the branched cover $X$ by a {\it Luttinger surgery} along a smooth embedded Lagrangian torus $T$ which is one of the connected components of $f^{-1}(A)$ \cite{ADK}. This operation consists of cutting out from $X$ a tubular neighborhood of $T$, foliated by parallel Lagrangian tori, and gluing it back via a symplectomorphism wrapping the meridian around the torus (in the direction of the preimage of an arc joining the two boundaries of $A$), while the longitudes are not affected. The starting point of Moishezon's construction is the complex curve $D_0$ obtained by considering $3p(p-1)$ smooth cubics in a pencil, removing balls around the 9 points where these cubics intersect, and inserting into each location the branch curve of a generic degree $p$ polynomial map from $\mathbb{CP}^2$ to itself. By repeatedly braiding $D_0$ along a well-chosen Lagrangian annulus, one obtains symplectic curves $D_j$, $j\in\mathbb{Z}$. Moishezon's calculations show that, whereas for the initial curve the fundamental group of the complement $\pi_1(\mathbb{CP}^2-D_0)$ is infinite, the groups $\pi_1(\mathbb{CP}^2-D_j)$ are finite for all $j\ne 0$, and of different orders \cite{Mo3}. On the other hand, it is fairly easy to check that, as expected from Theorem \ref{thm:adky}, this change in fundamental groups can be detected by considering the canonical class of the $p^2$-fold covering $X_j$ of $\mathbb{CP}^2$ branched along $D_j$. Namely, the canonical class of $X_0$ is proportional to the cohomology class of the symplectic form induced by the branched covering: $c_1(K_{X_0})=\lambda[\omega_{X_0}]$, where $\lambda=\frac{6p-9}{p}$. On the other hand, $c_1(K_{X_j})=\lambda[\omega_{X_j}]+\mu\,j\,[T]^{PD}$, where $\mu=\frac{2p-3}{p}\neq 0$, and the homology class $[T]$ of the Lagrangian torus $T$ is not a torsion element in $H_2(X_j,\mathbb{Z})$~\cite{ADK}. \medskip Many constructions of non-K\"ahler symplectic 4-manifolds can be thought of in terms of twisted fiber sum operations, or Fintushel-Stern surgery along fibered links. However the key component in each of these constructions can be understood as a particular instance of Luttinger surgery; so it makes sense to ask to what extent Luttinger surgery may be responsible for the greater variety of symplectic 4-manifolds compared to complex surfaces. More precisely, we may ask the following questions: \begin{question} Let $D_1,D_2$ be two symplectic curves with nodes and cusps in $\mathbb{CP}^2$, of the same degree and with the same numbers of nodes and cusps. Is it always possible to obtain $D_2$ from $D_1$ by a sequence of braiding operations along Lagrangian annuli? 
\end{question} \begin{question} Let $X_1,X_2$ be two integral compact symplectic 4-manifolds with the same $(c_1^2,\,c_2,\,c_1\!\cdot\![\omega],\,[\omega]^2)$. Is it always possible to obtain $X_2$ from $X_1$ by a sequence of Luttinger surgeries? \end{question} This question is the symplectic analogue of a question asked by Ron Stern about smooth 4-manifolds, namely whether any two simply connected smooth 4-manifolds with the same Euler characteristic and signature differ from each other by a sequence of logarithmic transformations. However, here we do not require the manifolds to be simply connected; in fact, we do not even require them to have the same fundamental group.
1,108,101,563,088
arxiv
\section{Conclusion} \label{sec:conclusion} We have proven that RPNs are a strict generalisation of both Petri nets and context-free grammars without increasing the complexity of the coverability, termination, boundedness and finiteness problems. Several open problems remain about the languages of RPNs and the decidability/complexity of checking their properties. Here is a partial list:
\begin{itemize}
\item How to decide whether a word belongs to a coverability or reachability language of an RPN?
\item Since the quasi-order possesses an infinite antichain, but there exist short witnesses for coverability, does there exist an effective finite representation of the downward closure of the reachability set?
\item Does there exist a relevant fragment of LTL decidable for RPNs?
\end{itemize}
{\noindent \bf Acknowledgment.} We thank the reviewers very much for their deep, detailed and insightful reviews, which helped us a lot to simplify and clarify this paper.
\section{Coverability is {\sf EXPSPACE}-Complete} \label{sec:coverability} Let $\mathcal N$ be an RPN and $s_{ini},s_{tar}$ be two states of $\mathcal N$. The \emph{coverability problem} asks whether there exists a firing sequence $s_{ini}\xrightarrow{\sigma} s\succeq s_{tar}$. Such a sequence $\sigma$, together with the initial and target states, is called a \emph{covering sequence}. This section is devoted to establishing that this problem is ${\sf EXPSPACE}$-complete. The ${\sf EXPSPACE}$-hardness follows immediately from the ${\sf EXPSPACE}$-hardness of the coverability problem for Petri nets~\cite{Lipton76}. In~\cite{Rac78}, Rackoff showed that the coverability problem for Petri nets belongs to ${\sf EXPSPACE}$. More precisely, he proved that if there exists a covering sequence, then there exists a `short' one: \begin{theorem}[Rackoff \cite{Rac78}] \label{thm:Rackoff covering path}Let $\mathcal N$ be a Petri net, $m_{ini}$, $m_{tar}$ be markings and $\sigma$ be a firing sequence such that $m_{ini}\xrightarrow{\sigma}m\geq m_{tar}$. Then there exists a sequence $\sigma'$ such that $m_{ini}\xrightarrow{\sigma'}m'\geq m_{tar}$ with $\left|\sigma'\right|\leq 2^{2^{cn\log n}}$ for some constant $c$ and $n$ being the size of $(\mathcal N,m_{tar})$. \end{theorem} A surprising consequence of Rackoff's proof is that the length of the minimal coverability sequence does not depend on the initial marking of the net. So to solve the coverability problem on Petri nets, one guesses a sequence of length at most $2^{2^{cn\log n}}$, checking at the same time, in exponential space, whether it is a covering sequence; this shows that the coverability problem belongs to ${\sf NEXPSPACE}={\sf EXPSPACE}$ by Savitch's theorem. Similarly, we will show that if there exists some $\sigma$ such that $ s_{ini}\xrightarrow{\sigma}s\succeq s_{tar} $ in an RPN $\mathcal N$, then there exists a `short' covering sequence $\sigma'$. First, we establish that the final state of a covering sequence can be chosen with a limited number of threads (Proposition~\ref{prop:Bound to the possible threads}). Then we enlarge the RPN $\mathcal N$ with new elementary transitions, obtaining $\widehat{\mathcal N}$ and leaving the coverability problem unchanged. The interest of $\widehat{\mathcal N}$ is that a covering sequence (when it exists) can be chosen to have a particular form that we call \emph{{well-sequenced}}, without increasing its length (Proposition~\ref{prop: translating from RPN to sim}).
In order to come back to $\mathcal N$, we establish that the firing of an additional transition of $\widehat{\mathcal N}$ can be simulated by a short sequence in $\mathcal N$ (Proposition~\ref{prop:Length of an abstract transition in RPN}). Proposition~\ref{prop: bound on length of cover path} combines these intermediate results to get an upper bound for a short covering sequence. Let $\sigma$ be a firing sequence. A thread is \emph{extremal} w.r.t. $\sigma$ if it is an initial or final thread. We show that we can bound the number of extremal threads in a covering sequence.\\
{\bf Notation:} In the sequel, the size of the input of the coverability problem is denoted by $\eta$, i.e.\ the accumulated size of the RPN and of the initial and target states. Recall that $Asc_s(v)$ is the set of ancestors of $v$ in $s$.
\begin{proposition} \label{prop:Bound to the possible threads} Let $\mathcal N$ be an RPN and $s_{ini}\xrightarrow{\sigma}s\succeq s_{tar}$ be a covering sequence. Then there exists a sequence $s_{ini}\xrightarrow{\sigma'}s'\succeq s_{tar}$ such that $|V_{s'}|\leq 3\eta$. \end{proposition}
\begin{proof} If $s_{tar}=\emptyset$ then $\sigma'=\varepsilon$ is the appropriate sequence. Otherwise, denote by $f:V_{s_{tar}}\rightarrow V_{s}$ the injective mapping associated with $s_{tar}\preceq s$. Let $U=Asc_{s}(f(r_{s_{tar}})) \setminus \{f(r_{s_{tar}})\}$ be the branch in $s$ leading to the vertex corresponding to the root of $s_{tar}$. Consider the set $V=V_{s}\setminus (U\cup f(V_{s_{tar}}))$. Then one can delete in $\sigma$ all transitions fired from threads in $Des_{\sigma}(V)$ and those that created the threads of $V$ and still get a covering sequence.
\input{Uslesss_threads1}
\noindent Now assume that on the branch $U$, two edges $(u_1,v_1)$ and $(u_2,v_2)$ are labelled by the same transition, where $u_2$ is a descendant of $u_1$ and $v_1\notin V_{s_{ini}}$. Then one can delete all transitions fired in the subbranch from $v_1$ to $u_2$, replace the transitions $(v_2,t)$ by transitions $(v_1,t)$, and still get a covering sequence. So $|U\setminus V_{s_{ini}}|\leq|T_{ab}|$.
\input{Uslesss_threads2}
\noindent Thus: $|V_{s}|\leq |V_{s_{\it{ini}}}|+|U\setminus V_{s_{ini}}|+ |V_{s_{\it{tar}}}| \leq |V_{s_{\it{ini}}}|+|T_{ab}|+ |V_{s_{\it{tar}}}|\leq 3\eta$.
\end{proof}
Let $T_{ret}\subseteq T_{ab}$, the set of \emph{returning transitions}, be defined by: $t\in T_{ret}$ if there exists a firing sequence (called a \emph{return sequence}) $ s_{t}\xrightarrow{\sigma_{t}}\bot $, where $V_{s_t}=\left\{ r\right\} $, $M_{s_t}\left(r\right):=\Omega\left(t\right)$, and $E_{s_t}=\Lambda_{s_t}=\emptyset$. Intuitively, a returning transition is an abstract transition whose child thread admits a firing sequence that ends by cutting this thread, so that the overall effect on the rest of the state is as if the abstract transition were an elementary one. For any $t\in T_{ret}$, we define $\sigma_{t}$ to be some arbitrary shortest return sequence.
As mentioned before, we get $\widehat{\mathcal N}$ from $\mathcal N$ by adding elementary transitions as follows.
\begin{definition}
Let $\mathcal N$ be an RPN. Then $\widehat{\mathcal N}=\left<P,\widehat{T},\widehat{W}^+, \widehat{W}^-,\Omega\right>$ is defined by:
\begin{itemize}[nosep]
\item $\widehat{T}_{ab}=T_{ab}$, $\widehat{T}_{\tau} = T_{\tau} $, $\widehat{T}_{el}=T_{el}\uplus \{t^r\mid t\in T_{ret} \}$;
\item for all $t\in T$, $\widehat{W}^-(t)=W^-(t)$ and $\widehat{W}^+(t)=W^+(t)$;
\item for all $t^r$, $\widehat{W}^-(t^r)=W^-(t)$ and $\widehat{W}^+(t^r)=W^+(t)$.
\end{itemize}
\end{definition}
Note that any transition of $\mathcal N$ has a copy in $\widehat{\mathcal N}$, and any transition of $\widehat{\mathcal N}$ that does not exist in $\mathcal N$ can be replaced by a firing sequence of $\mathcal N$ (a return sequence) with the same overall effect. Hence we get:
\begin{lemma}
Let $(\mathcal N,s_{ini})$ be a marked RPN. A state $s$ is reachable from $s_{ini}$ in $\mathcal N$ if and only if it is reachable from $s_{ini}$ in $\widehat{\mathcal N}$.
\end{lemma}
The key ingredient for the existence of a short sequence is that in $\widehat{\mathcal N}$ every sequence can be turned into a \emph{{well-sequenced}}\ sequence reaching the same state. Along such a sequence, (1) only extremal threads occur, (2) each thread performs its firings in one shot, and (3) only initial threads disappear and only final threads fire abstract transitions.
\begin{definition}
Let $ \mathcal N $ be an RPN and $\sigma$ be a firing sequence. Then $\sigma$ is \emph{{well-sequenced}}\ if $\sigma=\sigma_{1}(v_1,t_{\tau_1})\sigma_{2}(v_2,t_{\tau_2})\ldots \sigma_{\ell}(v_\ell,t_{\tau_\ell})\sigma_{\ell+1}\sigma^{ab}_{\ell+1} \ldots\sigma_{k}\sigma^{ab}_{k}$ where:\smallskip

\noindent$\bullet$ $t_{\tau_i}\in T_\tau$ for $1\leq i \leq \ell$;\\
$\bullet$ The threads $v_i$ are initial for $1\leq i \leq \ell$;\\
$\bullet$ The threads $v_i$ are final for $\ell+1\leq i \leq k $;\\
$\bullet$ The firing sequence $\sigma_i\in (\{v_i\}\times T_{el})^*$ for $1\leq i \leq k$;\\
$\bullet$ The firing sequence $\sigma_i^{ab}\in (\{v_i\}\times T_{ab})^*$ for $\ell+1\leq i \leq k $.
\end{definition}
\begin{restatable}{proposition}{fromRPNtosim}
\label{prop: translating from RPN to sim}
Let $\mathcal N$ be an RPN and $ s\xrightarrow{\sigma} s' $ be a firing sequence. There exists a {well-sequenced}\ firing sequence $s\xrightarrow{\widehat{\sigma}}s'$ in $\widehat{\mathcal N}$, with $ |\widehat{\sigma}|\leq|\sigma| $.
\end{restatable}
\begin{proof}
By construction of $\widehat{\mathcal N}$, $\sigma$ is fireable in $\widehat{\mathcal N}$.
\noindent First assume that there is an extremal thread $u$ which fires $ t\in T_{ab} $, creating a non-final thread $ v $ that disappears by a matching cut transition $ (v,t_\tau)\in \sigma $ for $t_\tau\in T_\tau$. One builds $\sigma'$ by (1) deleting from $\sigma $ the step $ (u,t) $, (2) deleting all the firings from $ Des_{\sigma}(v) $ in $\sigma$ and (3) replacing the step $ (v,t_\tau) $ by $(u,t^r)$. We claim that $s\xrightarrow{\sigma'}s' $. Indeed, the step $(u,t^r)$ has the same incidence on $u$ as the step $(u,t)$ followed by $(v,t_\tau)$ (thus `anticipating' $(v,t_\tau)$ only adds tokens in the intermediate states), and the other deleted firings are performed by threads in $Des_{\sigma}(v) $ which do not exist anymore.
Denote by $\sigma^*$ the sequence obtained by iterating this process for all such cases.
\noindent Let us establish that we can transform $\sigma^*$ into $\widehat{\sigma}$ fulfilling the requirements for being {well-sequenced}, by induction on $|V_s\cup V_{s'}|$, i.e.\ the number of extremal threads. If $ |V_s\cup V_{s'}|=0 $ then $\sigma^*=\varepsilon$, which is {well-sequenced}. Assume that we have shown the proposition for any $\sigma^*$ with $|V_s\cup V_{s'}|< n$, and consider some $\sigma^*$ with $|V_s\cup V_{s'}|=n$. We consider three cases according to a leaf thread $u$ (in $s$ or $s'$) with maximal depth:
\begin{enumerate}[nosep]
\item $ u\in V_{s'} \cap V_s$.\, Denote by $\sigma^*_u\subset \sigma^*$ the subsequence consisting of the firings performed by $u$ in $\sigma^*$ and by $\sigma^*_r$ the subsequence of $\sigma^*$ obtained by removing $\sigma^*_u$. Due to the first transformation, $\sigma^*_u$ consists of elementary transitions. Since $u$ is both an initial and a final thread in $\sigma^*$, $\sigma^*_u$ can be fired at the end (or indifferently at the beginning) of $\sigma^*$. Observe that $\sigma^*_r$ is a firing sequence from $s$ `without' $u$ to $s'$ `without' $u$. So the induction applies: let $\widehat{\sigma}_r$ be the {well-sequenced}\ sequence corresponding to $\sigma^*_r$. Then $\widehat{\sigma}_r\sigma^*_u$ is the sequence we are looking for.
\item $u\in V_{s'} \setminus V_{s}$.\, Let $(v,t) $ be the step which creates the thread $u$. Denote by $\sigma^*_u\subset \sigma^*$ the subsequence consisting of the firings performed by $u$ in $\sigma^*$ and by $\sigma^*_r$ the subsequence of $\sigma^*$ obtained by removing $(v,t)$ and $\sigma^*_u$. $\sigma^*_u$ consists of elementary transitions. Since $u$ is a final thread in $\sigma^*$, $\sigma^*_u$ can be fired at the end of $\sigma^*$. Observe that $\sigma^*_r$ is a firing sequence from $s$ to $s'$ `without' $u$ (say $s''$) where $M_{s''}(v)=M_{s'}(v)+W^-(t)$. By induction there exists a {well-sequenced}\ sequence $\sigma^+_r$ corresponding to $\sigma^*_r$; let $\widehat{\sigma}_r$ be $\sigma^+_r$ with the firing $(v,t)$ inserted at the end of the subsequence of firings performed by $v$ in $\sigma^+_r$. Then $\widehat{\sigma}_r\sigma^*_u$ is the sequence we are looking for.
\item $u\in V_s \setminus V_{s'}$.\, Denote by $\sigma^*_u\subset \sigma^*$ the subsequence consisting of the firings performed by $u$ in $\sigma^*$ and by $\sigma^*_r$ the subsequence of $\sigma^*$ obtained by removing $\sigma^*_u$.
\begin{itemize}[nosep]
\item If $\sigma^*_u$ does not end with a cut transition then $\sigma^*_r$ is a firing sequence from $s$ `without' $u$ to $s'$. Let $\widehat{\sigma}_r$ be the {well-sequenced}\ sequence corresponding to $\sigma^*_r$. But $\widehat{\sigma}_r$ is also a sequence from $s$ to $s'$.
\item If $\sigma^*_u$ ends with a cut transition and $u$ is the root then $\sigma^*_u$ is the sequence we are looking for.
\item If $\sigma^*_u$ ends with a cut step, $v$ is the parent of $u$ and $\Lambda(v,u)=t$, then $\sigma^*_r$ is a firing sequence from $s$ `without' $u$ (say $s''$) where $M_{s''}(v)=M_{s}(v)+W^-(t)$. So the induction applies: let $\widehat{\sigma}_r$ be the {well-sequenced}\ sequence corresponding to $\sigma^*_r$. Then $\sigma^*_u\widehat{\sigma}_r$ is the sequence we are looking for.
\end{itemize}
\end{enumerate}
Since we did not add any new step, $ |\widehat{\sigma}|\leq |\sigma|$.
\end{proof}
In order to recover a sequence of $\mathcal N$ from a sequence of $\widehat{\mathcal N}$, for every $ t\in T_{ret}$ one has to simulate the firings of the transition $t^r$ by the sequence $\sigma_t$. Therefore bounding the length of $\sigma_t$ is critical.
\begin{proposition} \label{prop:Length of an abstract transition in RPN}
Let $\mathcal N$ be an RPN and $t\in T_{ret}$. Then the return sequence $\sigma_t$ fulfills $|\sigma_t|\leq 2^{2^{dn\log n}}$ for some constant $d$ and $n=size(\mathcal N)$.
\end{proposition}
\begin{proof}
Let us enumerate $T_{ret}=\{t_1,\ldots,t_K\}$ in such a way that $i<j$ implies $|\sigma_{t_i}|\leq|\sigma_{t_j}|$. Observe first that a shortest return sequence does not include the firing of an abstract transition that is not followed by a matching cut transition, since such a firing could be omitted as it only removes tokens from the thread. We argue by induction on $k\leq K$ that:
\[
|\sigma_{t_{k}}|<2^{k\cdot2^{cn\log n}} \qquad \mbox{where } c \mbox{ is the Rackoff constant}
\]
For $k=1$, we know that $\sigma_{t_{1}}$ has minimal length over all return sequences. Hence there are no cuts in $\sigma_{t_{1}}$ except the last one. Due to the above observation, apart from this final cut $\sigma_{t_{1}}$ only includes firings of elementary transitions. Thus the Rackoff bound of Theorem~\ref{thm:Rackoff covering path} applies for the covering of some final marking.
\noindent Assume that the result holds for all $i<k$. Due to the requirement on lengths, $\sigma_{t_{k}}$ only includes cuts of threads created by transitions $t_{i}\in T_{ret}$ with $i<k$. Thus by Proposition~\ref{prop: translating from RPN to sim} we get a sequence $\widehat{\sigma}_{t_{k}}\cdot(r,t_\tau)$ in $\widehat{\mathcal N}$ (where $r$ is the root and $ t_\tau\in T_\tau $). The sequence $ \widehat{\sigma}_{t_{k}} $ consists only of elementary steps and does not contain any transition $t_i^r$ with $i\geq k$. The marking of $r$ reached by $ \widehat{\sigma}_{t_{k}} $ covers some final marking, hence by Theorem~\ref{thm:Rackoff covering path} there exists a covering sequence $\widehat{\sigma}_{t_{k}}'$ such that $|\widehat{\sigma}_{t_{k}}'|\leq2^{2^{cn\log n}}$. Since $\widehat{\sigma}_{t_{k}}$ does not contain any firing of $t_i^r$ with $i\geq k$, this also holds for $\widehat{\sigma}_{t_{k}}'$. Substituting any firing of $t_i^r$ by $\sigma_{t_i}$, one gets a corresponding sequence $\sigma_{t_k}'$ in $\mathcal N$. Using the induction hypothesis, one gets that the length of $\sigma_{t_k}'$ fulfills:
\[
|\sigma_{t_k}'|\leq|\widehat{\sigma}_{t_{k}}'|\cdot 2^{(k-1)\cdot2^{cn\log n}}\leq2^{2^{cn\log n}}\cdot 2^{(k-1)\cdot2^{cn\log n}}\leq2^{k\cdot2^{cn\log n}}
\]
From the minimality of $\sigma_{t_{k}}$, one gets $|\sigma_{t_{k}}|\leq|\sigma_{t_k}'|\leq2^{k\cdot2^{cn\log n}}$, which concludes the proof since
\[
\max_{t\in T_{ret}}\{ |\sigma_{t}|\}\leq2^{|T_{ret}| \cdot2^{cn\log n}}\leq2^{n2^{cn\log n}}\leq2^{2^{(2c)n\log n}}.
\]
\end{proof}
Combining all previous results, we can now bound the length of a shortest covering sequence:
\begin{proposition}\label{prop: bound on length of cover path}
Let $ \mathcal N $ be an RPN, and $s_{ini}\xrightarrow{\sigma}s\succeq s_{tar}$. Then there exists a covering sequence of length shorter than $2^{2^{e\eta\log \eta}}$, where $e$ is some constant and $\eta=size(\mathcal N,s_{ini},s_{tar})$.
\end{proposition}
\begin{proof}
Using Proposition~\ref{prop:Bound to the possible threads}, we can assume that $ |V_{s_{ini}}\cup V_{s}|\leq |V_{s_{ini}}| + |V_{s}|\leq 4\eta$.
Using Proposition~\ref{prop: translating from RPN to sim} one gets a {well-sequenced}\ sequence $ s_{ini}\xrightarrow{\widehat{\sigma}}s$ in $ \widehat{\mathcal N} $, such that:
\[
\widehat{\sigma}=\sigma_{1}(v_1,t_{\tau_1})\sigma_{2}(v_2,t_{\tau_2})\ldots \sigma_{\ell}(v_\ell,t_{\tau_\ell})\sigma_{\ell+1}\sigma^{ab}_{\ell+1} \ldots\sigma_{k}\sigma^{ab}_{k},
\]
where $\sigma^{ab}_{i}=(v_i,t_{i,1})\ldots (v_i,t_{i,n_i})$. Observe that $k\leq |V_{s_{ini}}\cup V_{s}|$.
\noindent We now show that there is a short covering sequence in $ \widehat{\mathcal N} $. Let $ f:V_{s_{tar}}\rightarrow V_{s} $ be the function associated with $ s\succeq s_{tar} $. Each $ \sigma_i $ is a sequence after which the marking of $v_i$ covers a certain marking, given as follows:
\begin{enumerate}[nosep]
\item For $ i\leq \ell $,\,\, a final marking of the net;
\item For $ i> \ell $ and $ v_i\notin f(V_{s_{tar}})$,\,\, $\sum_{j}W^-(t_{i,j})$;
\item For $ i> \ell $ and $ v_i\in f(V_{s_{tar}})$,\,\, $\sum_{j}W^-(t_{i,j})+M_{s_{tar}}(f^{-1}(v_i))$.
\end{enumerate}
Since all $ \sigma_i $ contain only elementary steps, using Theorem~\ref{thm:Rackoff covering path}, one gets a sequence $ \sigma_i' $ with $ |\sigma_i'|\leq 2^{2^{c\eta\log \eta}}$ covering the marking specified by the three cases above. Define the sequence $ s_{ini}\xrightarrow{\widehat{\sigma}'}s' $ where each $\sigma_i$ is replaced by $\sigma_i'$. Using case 3, for all $v\in V_{s_{tar}} $, $ M_{s'}(f(v))\geq M_{s_{tar}}(v) $. Therefore $s'\succeq s_{tar} $, and the length of $ \widehat{\sigma}' $ is at most:
\[
|\widehat{\sigma}'|= \sum_{i=1}^{k}|\sigma_{i}'|+\sum_{i=1}^{\ell}|(v_i,t_{\tau_i})| +\sum_{i=\ell+1}^k|\sigma^{ab}_{i}| \leq 4\eta\cdot 2^{2^{c\eta\log \eta}} + 4\eta +4\eta \leq 2^{2^{2c\eta\log \eta}}.
\]
\noindent Substituting any firing of $t_i^r$ by $\sigma_{t_i}$ in $ \widehat{\sigma}'$, we get a covering sequence $ \sigma'$ in $ \mathcal N $. Using Proposition~\ref{prop:Length of an abstract transition in RPN}, its length fulfills:
\[
|\sigma'|\leq |\widehat{\sigma}'|\cdot 2^{2^{d\eta\log \eta}} \leq 2^{2^{e\eta\log \eta}}
\]
for some constant $ e $.
\end{proof}
Using Proposition~\ref{prop: bound on length of cover path} we establish the complexity of the coverability problem.
\begin{theorem} \label{thm:Coverabilty for RPN in EXPSPACE}
The coverability problem for RPNs is ${\sf EXPSPACE}$-complete.
\end{theorem}
\begin{proof}
According to Proposition~\ref{prop: bound on length of cover path}, if there is a covering sequence then there is one of length at most $ 2^{2^{e\eta\log \eta}} $. Hence one guesses a sequence of at most this length and checks simultaneously, in exponential space, whether it is a covering sequence. This shows that the coverability problem belongs to ${\sf NEXPSPACE}={\sf EXPSPACE}$ by Savitch's theorem.
\end{proof}
Another relevant problem is the \emph{cut problem}: given a marked RPN $(\mathcal N,s)$, does there exist $ \sigma $ such that $s\xrightarrow{\sigma}\emptyset$?
\begin{theorem}\label{thm:cut problem}
The cut problem is {\sf EXPSPACE}-complete.
\end{theorem}
\begin{proof}
Let $(\mathcal N',s')$ be a marked RPN which is a copy of $(\mathcal N,s)$ enlarged by a place $p$ and such that $M_{s'}(r) = M_{s}(r)+p$. Denote by $S = \{s[m_e + p] \mid m_e\in \mathcal{F} \}$.
Since in $\mathcal N'$ there is no transition updating the marking of $p$, there exist $s_e\in S$ and a covering sequence $s'\xrightarrow{\sigma'}\widehat{s}\succeq s_e$ if and only if there exists a reachable state $\widehat{s}$ with $M_{\widehat{s}}(r) \geq m_e $ for some $m_e\in \mathcal{F}$. Therefore checking whether some $\sigma$ with $s\xrightarrow{\sigma}\emptyset$ exists in $\mathcal N$ amounts to solving $|\mathcal{F}|$ coverability problems in $\mathcal N'$, each of which can be solved in {\sf EXPSPACE}\ by Theorem~\ref{thm:Coverabilty for RPN in EXPSPACE}.

Conversely, let $(\mathcal N,m)$ be a marked Petri net and $m'$ be a marking. Then the coverability problem is equivalent to the cut problem for $(\mathcal N',s[m])$ where $\mathcal N'$ is a copy of $\mathcal N$ enlarged with $ \mathcal{F} = \{m'\}$. Thus the {\sf EXPSPACE}-hardness follows from the complexity of the coverability problem for Petri nets.
\end{proof}

\section{Coverability is {\sf EXPSPACE}-Complete}
\label{sec:coverability}
This section is devoted to establishing that the coverability problem is ${\sf EXPSPACE}$-complete. The ${\sf EXPSPACE}$-hardness follows immediately from the ${\sf EXPSPACE}$-hardness of the coverability problem for Petri nets~\cite{Lipton76}. Observe that the coverability problem is equivalent to the emptiness problem of the coverability language of an RPN. In Section~\ref{sec:expressiveness} we have shown that the families of coverability languages and cut languages for RPNs are equal and that the transformation from one to the other can be performed in polynomial time (Propositions~\ref{prop:reach-inc-cov} and~\ref{prop:cov-inc-reach}). Therefore we establish the complexity result for the cut problem, getting as a corollary the same result for the coverability problem.
\begin{theorem}\label{thm:cut_problem}
The cut problem is {\sf EXPSPACE}-complete.
\end{theorem}
\begin{proof}
\noindent Let $(\mathcal N,s_0)$ be a marked RPN and $\eta$ be the accumulated size of the RPN and the initial state. By Corollary~\ref{col:rooted} we can assume that $V_{s_0}$ is a singleton $\{r\}$.
\noindent Assume there exists a firing sequence $s_{0}\xrightarrow{\sigma}_{\mathcal N}\emptyset$.
Using Proposition~\ref{prop:omniciant} one gets an omniscient sequence $ s_0\xrightarrow{\widehat{\sigma}}_{\widehat{\mathcal N}}\emptyset$ such that $\widehat{\sigma}=(r,\sigma_{1})(r,t)$ for some $t\in T_{\tau}$.
\noindent The (omniscient) sequence $(r, \sigma_1)$ contains only elementary transitions. Thus $m_0 \xrightarrow{\sigma_1}_{\widehat{\mathcal N}_{el}} m$ with $m\geq W^-(t)$. By Theorem~\ref{thm:Rackoff covering path}, there exists $ \sigma_1' $ with $ |\sigma_1'|\leq 2^{2^{c\eta\log \eta}}$ covering $W^-(t)$. Using Corollary~\ref{col:bound_on_translation_of_N-hat_to_N}, there exists a sequence $s_0\xrightarrow{\sigma'}_{\mathcal N}\emptyset$ with $|\sigma'|\leq 2^{2^{e\eta\log \eta}} $ for some constant $ e $.
\noindent Therefore if there is a cut sequence then there is one of length at most $2^{2^{e\eta\log \eta}}$. Hence one guesses a sequence of at most this length and simultaneously checks, in exponential space, whether it is a cut sequence. This shows that the cut problem belongs to ${\sf NEXPSPACE}$, which is equivalent to ${\sf EXPSPACE}$ by Savitch's theorem.
\noindent The {\sf EXPSPACE}-hardness of the coverability problem for Petri nets entails the {\sf EXPSPACE}-hardness of the coverability problem for RPNs, which in turn entails the {\sf EXPSPACE}-hardness of the cut problem for RPNs.
\end{proof}
The next theorem is an immediate corollary of the previous one.
\begin{theorem}\label{thm:Coverabilty for RPN in EXPSPACE}
The coverability problem for RPNs is ${\sf EXPSPACE}$-complete.
\end{theorem}
\section{Expressiveness}
\label{sec:expressiveness}
The expressiveness of a formalism may be defined by the family of languages that it can generate. In~\cite{HP-icatpn99}, the expressiveness of RPNs was studied using reachability languages. However, using reachability languages as specification languages has a drawback: the emptiness problem for these languages is already non-elementary for Petri nets \cite{DBLP:conf/stoc/CzerwinskiLLLM19}, hence it is also non-elementary, at least, for RPNs. We propose to characterize the expressive power of RPNs by studying the family of coverability languages, which is sufficient to express most of the usual reachability properties since many of them reduce to checking that no reachable state covers a bad marking in some thread. The characterization of the expressive power by means of covering languages has been done for Petri nets (studied in the book of Peterson \cite{nla.cat-vn2956435}), and more recently, for Well Structured Transition Systems (WSTS) \cite{DBLP:journals/acta/GeeraertsRB07} and for monotonic extensions of Petri nets like reset-transfer Petri nets, $\nu$-Petri nets and unordered Petri nets \cite{BFHR-icomp13,DBLP:journals/tcs/DelzannoR13}. More properties are decidable for VASS covering languages than for VASS reachability languages. For instance, universality of reachability languages is undecidable for 1-VASS \cite{DBLP:journals/jcss/ValkV81}, and thus co-finiteness is also undecidable, whereas these two properties are both decidable for VASS covering languages \cite{figueira:hal-02193089}; moreover, for 1-VASS it is Ackermann-complete \cite{DBLP:conf/rp/HofmanT14}. More generally, the universality of both reachability and coverability languages of WSTS is undecidable \cite{DBLP:journals/acta/GeeraertsRB07}. So we equip any transition $t$ with a \emph{label} $\lambda(t)\in \Sigma \cup \{\varepsilon\}$ where $\Sigma$ is a finite alphabet and $\varepsilon$ is the empty word. The labelling is extended to transition sequences in the usual way.
Thus given a labelled marked RPN $(\mathcal N,s_0)$ and a finite subset of states $S_f$, the (coverability) language $\mathcal L_C(\mathcal N,s_0,S_f)$ is defined by: $$\mathcal L_C(\mathcal N,s_0,S_f)=\{\lambda(\sigma) \mid \exists\ s_0 \xrightarrow{\sigma} s\succeq s_f\wedge s_f \in S_f\}$$ i.e. the set of labellings for sequences covering some state of $S_f$ in $\mathcal N$. We now study the family of RPN coverability languages both from the point of view of expressiveness and closure under multiple operations. \begin{restatable}{proposition}{closedbyunion} \label{prop:closedunion} The family of coverability languages of RPNs is closed under union. \end{restatable} \begin{proof} We closely follow the classic proof that the family of Petri net languages is closed under union, i.e. adding a place and two extra transitions that have to be fired in the beginning of the firing sequence in order to decide in which of the Petri net one fires. Due to the correspondence between firing sequences of $(\mathcal N,s_0)$ and those of $(\rooted{\mathcal N},{\mathring{s_0}})$, established in the previous section, one can assume w.l.o.g. that the initial markings of the RPNs have a single vertex. Consider two labelled marked RPNs with final states $(\mathcal N,s[r,m_0],S_f)$ and $(\mathcal N',s[r',m'_0],S'_f)$. Let us define $\widetilde{\mathcal N}$ as follows. Its set of places is the disjoint union of $P$ and $P'$ with three additional places $p_0$, $p$ and $p'$. Its set of transitions is the disjoint union of $T$ and $T'$ with four additional elementary transitions $t_b$, $t_c$, $t'_b$ and $t'_c$. \smallskip\noindent $\bullet$ For all $t\in T$, $\widetilde{W}^-(t)=W^-(t)+p$ and when $t\notin T_{\tau}$ $\widetilde{W}^+(t)=W^+(t)$\\ $\bullet$ For all $t\in T'$, $\widetilde{W}^-(t)=W'^-(t)+ p'$ and when $t\notin T'_{\tau}$ $\widetilde{W}^+(t)=W'^+(t)$\\ $\bullet$ For all $t\in T_{ab}$, $\widetilde{\Omega}(t)=\Omega(t)+ p$\\ $\bullet$ For all $t\in T'_{ab}$, $\widetilde{\Omega}(t)=\Omega'(t)+ p'$\\ $\bullet$ $\widetilde{W}^-(t_b)=\widetilde{W}^-(t'_b)= p_0$, $\widetilde{W}^+(t_b)=m_0+ p$, $\widetilde{W}^+(t'_b)=m_0'+ p'$\\ $\bullet$ $\widetilde{W}^-(t_c)=p$, $\widetilde{W}^+(t_c)=2 p$, $\widetilde{W}^-(t'_c)= p'$, $\widetilde{W}^+(t'_c)=2 p'$\\ $\bullet$ $\widetilde{S}_f$ is obtained from the union $S_f\cup S'_f$ by adding a token in place $p$ (resp. $p'$)\\ \hspace*{0.3cm}of all markings of states of $S_f$ (respectively $S'_f$).\\ $\bullet$ For all $t\in T$, $\widetilde{\lambda}(t)=\lambda(t)$ and for all $t\in T'$, $\widetilde{\lambda}(t)=\lambda'(t)$\\ $\bullet$ For all $t\in \{t_b,t_c,t'_b,t'_c\}$, $\widetilde{\lambda}(t)=\varepsilon$.\\ $\bullet$ The initial state of $\widetilde{\mathcal N}$ is $s[\tilde{r},p_0]$. \smallskip\noindent Let us prove that $\mathcal L(\mathcal N,s[r,m_0],S_f) \cup \mathcal L(\mathcal N',s[r',m'_0],S'_f) \subseteq \mathcal L(\widetilde{\mathcal N},s[\tilde{r},p_0],\widetilde{S}_f)$. Let $\sigma$ be a coverability sequence of $(\mathcal N,s[r,m_0],S_f)$. The corresponding coverability sequence $\widetilde{\sigma}$ of $L(\widetilde{\mathcal N},s[\tilde{r},p_0],\widetilde{S}_f)$ is built as follows. Initially, one fires $(\tilde{r},t_b)(\tilde{r},t_c)^{\ell_r}$ where $\ell_r$ is the number of abstract transition firings occurring in $\sigma$ triggered by $r$. Then after the creation of a thread $v$, one inserts $(v,t_c)^{\ell_v}$ firings where $\ell_v$ is the number of abstract transition firings occurring in $\sigma$ triggered by $v$. 
It is routine to check that $\widetilde{\sigma}$ is a coverability sequence. The proof for $\mathcal L(\mathcal N',s[r',m'_0],S'_f)$ is similar.
\smallskip\noindent Let us prove that $\mathcal L(\widetilde{\mathcal N},s[\tilde{r},p_0],\widetilde{S}_f) \subseteq \mathcal L(\mathcal N,s[r,m_0],S_f) \cup \mathcal L(\mathcal N',s[r',m'_0],S'_f)$. Observe that any firing sequence must start with a firing of $t_b$ or $t'_b$. Let $t_b\widetilde{\sigma}$ be a coverability sequence of $(\widetilde{\mathcal N},s[\tilde{r},p_0],\widetilde{S}_f)$. Consider the sequence $\sigma$ obtained by deleting all the firings of $t_c$ in $\widetilde{\sigma}$. It is routine to check that $\sigma$ is a coverability sequence for $(\mathcal N,s[r,m_0],S_f)$. The case of a coverability sequence starting with $t'_b$ is similar.
\end{proof}
The next theorem has two interesting consequences: the family of RPN coverability languages is not closed under intersection with the family of regular languages, but the family obtained by such intersections is \emph{quite close} to the family of recursively enumerable languages. The result was already stated in Proposition~9 of~\cite{HaddadP99} for the family of RPN reachability languages, but the proof was only sketched.
\begin{theorem} \label{theo:re}
Let $\mathcal L$ be a recursively enumerable language. Then there exist an RPN language $\mathcal L'$, a regular language $\mathcal R$ and a homomorphism $h$ such that $\mathcal L=h(\mathcal L'\cap\mathcal R)$.
\end{theorem}
\begin{proof}
Let $\mathcal M=(\Sigma,L,\delta)$ be a Turing machine with its set of states $L$, including the initial state $\ell_0$ and the final state $\ell_f$, and its transition function $\delta$ from $L\times (\Sigma \cup \{\flat\})$ to $L\times \Sigma \times \{\leftarrow,\rightarrow\}$, where $\flat$ is the blank character. Let us define a labeled marked RPN $\mathcal N$ and an automaton $\mathcal A$. Their common alphabet is the set of transitions of $\mathcal N$ and the labeling of the transitions of the RPN is the identity mapping. The intersection of their languages is thus the language of the synchronized product of the two devices. The single final state of $\mathcal N$ (to be covered) is the empty tree. The automaton $\mathcal A$ is depicted below (with $\Sigma=\{a,b\}$). In $q_0$ it allows $\mathcal N$ to generate the representation of any word $w\in \Sigma^*$, the input of $\mathcal M$. However, this intermediate representation is not suitable for mimicking $\mathcal M$. Thus in $q_1$, the intermediate representation is translated into an appropriate one. Once this representation is obtained, the automaton mimics any transition of $\mathcal M$ by triggering the firing of several transitions of $\mathcal N$. We will detail this simulation after the specification of $\mathcal N$.
\begin{center} \begin{tikzpicture}[xscale=0.65,yscale=0.65] \path (10,0) node[draw,rectangle,minimum height=1.5cm,minimum width=4cm] () {}; \path (0,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q0) {\small{$q_0$}}; \path (4,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q1) {\small{$q_1$}}; \path (4,2) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q2) {}; \path (4,-2) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q3) {}; \path (8,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q4) {\small{$\ell_0$}}; \path (10,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q5) {\small{$\ell$}}; \path (12,0) node[draw,circle,accepting,inner sep=2pt,minimum size=0.8cm] (q6) {\small{$\ell_f$}}; \path (10,2) node[] {\small{The simulation part of $\mathcal A$}}; \path (9,0) node[] {\small{$\cdots$}}; \path (11,0) node[] {\small{$\cdots$}}; \draw[arrows=-latex'] (-1,0) -- (q0) ; \draw[arrows=-latex'] (q0) -- (q1) node[pos=0.5,above] {\tiny{$next$}} ; \draw[arrows=-latex'] (q1) -- (3.5,1)--(q2) node[pos=0,left] {\tiny{$from_a$}} ; \draw[arrows=-latex'] (q2) -- (4.5,1)--(q1) node[pos=0,right] {\tiny{$to_a$}} ; \draw[arrows=-latex'] (q1) -- (3.5,-1)--(q3) node[pos=0,left] {\tiny{$from_b$}} ; \draw[arrows=-latex'] (q3) -- (4.5,-1)--(q1) node[pos=0,right] {\tiny{$to_b$}} ; \draw[arrows=-latex'] (q1) -- (q4) node[pos=0.5,above] {\tiny{$run$}} ; \draw[-latex'] (q0) .. controls +(55:60pt) and +(125:60pt) .. (q0) node[pos=.5,above] {\tiny{$t_a,t_b$}}; \end{tikzpicture} \end{center} \begin{center} \begin{tikzpicture}[xscale=0.65,yscale=0.65] \path (4,1.2) node[] {\small{$\delta(\ell,a)=(\ell',b,\rightarrow)$}}; \path (0,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q0) {\small{$\ell$}}; \path (4,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q1) {}; \path (8,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q2) {\small{$\ell'$}}; \draw[arrows=-latex'] (q0) -- (q1) node[pos=0.5,above] {\tiny{$right_{a}^{\rightarrow}$}} ; \draw[arrows=-latex'] (q1) -- (q2) node[pos=0.5,above] {\tiny{$left_{b}^{\rightarrow}$}} ; \path (12,1.2) node[] {\small{$\delta(\ell,a)=(\ell',b,\leftarrow)$}}; \path (10,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q10) {\small{$\ell$}}; \path (14,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q11) {}; \path (18,1) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q12) {\small{$\ell_{a,a}$}}; \path (18,-1) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (r12) {\small{$\ell_{a,b}$}}; \path (22,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q13) {\small{$\ell'$}}; \draw[arrows=-latex'] (q10) -- (q11) node[pos=0.5,above] {\tiny{$upd_{a,b}^{\leftarrow}$}} ; \draw[arrows=-latex'] (q11) -- (q12) node[pos=0.5,above] {\tiny{$left_{a}^{\leftarrow}$}} ; \draw[arrows=-latex'] (q12) -- (q13) node[pos=0.5,above] {\tiny{$right_{a}^{\leftarrow}$}} ; \draw[arrows=-latex'] (q11) -- (r12) node[pos=0.5,above] {\tiny{$left_{b}^{\leftarrow}$}} ; \draw[arrows=-latex'] (r12) -- (q13) node[pos=0.5,above] {\tiny{$right_{b}^{\leftarrow}$}} ; \path (4,-1.8) node[] {\small{$\delta(\ell,\flat)=(\ell',b,\rightarrow)$}}; \path (0,-3) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q20) {\small{$\ell$}}; \path (4,-3) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q21) {}; \path (8,-3) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q22) {\small{$\ell'$}}; \draw[arrows=-latex'] (q20) -- (q21) node[pos=0.5,above] {\tiny{$check_{\flat}$}} ; \draw[arrows=-latex'] (q21) -- (q22) node[pos=0.5,above] {\tiny{$left_{b}^{\rightarrow}$}} 
; \path (12,1.2) node[] {\small{$\delta(\ell,a)=(\ell',b,\leftarrow)$}}; \path (10,-3) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q30) {\small{$\ell$}}; \path (13,-3) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q30b) {}; \path (16,-3) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q31) {}; \path (19,-2) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q32) {\small{$\ell_{\flat,a}$}}; \path (19,-4) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (r32) {\small{$\ell_{\flat,b}$}}; \path (22,-3) node[draw,circle,inner sep=2pt,minimum size=0.8cm] (q33) {\small{$\ell'$}}; \draw[arrows=-latex'] (q30) -- (q30b) node[pos=0.5,above] {\tiny{$check_{\flat}$}} ; \draw[arrows=-latex'] (q30b) -- (q31) node[pos=0.5,above] {\tiny{$upd_{\flat,b}^{\leftarrow}$}} ; \draw[arrows=-latex'] (q31) -- (q32) node[pos=0.5,above] {\tiny{$left_{a}^{\leftarrow}$}} ; \draw[arrows=-latex'] (q32) -- (q33) node[pos=0.5,above] {\tiny{$right_{a}^{\leftarrow}$}} ; \draw[arrows=-latex'] (q31) -- (r32) node[pos=0.5,above] {\tiny{$left_{b}^{\leftarrow}$}} ; \draw[arrows=-latex'] (r32) -- (q33) node[pos=0.5,above] {\tiny{$right_{b}^{\leftarrow}$}} ; \path (12,-1.8) node[] {\small{$\delta(\ell,\flat)=(\ell',b,\leftarrow)$}}; \end{tikzpicture} \end{center} $\mathcal N$ is defined as follows. Its set of places is $P=\{p_a \mid a \in \Sigma\}\cup \{root,right,left,start,ret\}$. We now define the set of transitions $T$. The first subset corresponds to the generation of a representation of the input word of $\mathcal M$. \begin{itemize}[nosep] \item For all $a\in \Sigma$, $t_a\in T_{ab}$ with $W^-(t_a)=start$, $W^+(t_a)=ret$ and $\Omega(t_a)=start+p_a$; \item $next\in T_{el}$ with $W^-(next)=start$ and $W^+(next)=ret$; \item For all $a\in \Sigma$, $from_a\in T_{\tau}$ $W^-(from_a)=ret+p_a$; \item For all $a\in \Sigma$, $to_a\in T_{ab}$ with $W^-(to_a)=right$, $W^+(to_a)=right$ and $\Omega(to_a)=right+p_a$; \item $run\in T_{el}$ with $W^-(run)=root+ret$ and $W^+(run)=root$ \end{itemize} The second subset corresponds to the simulation of $\mathcal M$. \begin{itemize}[nosep] \item For all $a\in \Sigma$, $right_{a}^{\rightarrow}\in T_{\tau}$ with $W^-(right_{a}^{\rightarrow})=right+p_a$; \item For all $a\in \Sigma$, $left_{a}^{\rightarrow}\in T_{ab}$ with $W^-(left_{a}^{\rightarrow})=W^+(left_{a}^{\rightarrow})=left$\\ and $\Omega(left_{a}^{\rightarrow})=left+p_a$; \item For all $a,b \in \Sigma$, $upd_{a,b}^{\leftarrow}\in T_{el}$ with $W^-(upd_{a,b}^{\leftarrow})=right+p_a$ and $W^+(upd_{a,b}^{\leftarrow})=right+p_{b}$ \item For all $a\in \Sigma$, $left_{a}^{\leftarrow}\in T_{\tau}$ with $W^-(left_{a}^{\leftarrow})=left+p_a$ \item For all $a\in \Sigma$, $right_{a}^{\leftarrow}\in T_{ab}$ with $W^-(right_{a}^{\leftarrow})=W^+(right_{a}^{\leftarrow})=right$\\ and $\Omega(right_{a}^{\leftarrow})=right+p_a$ \item $check_{\flat}\in T_{el}$ with $W^-(check_{\flat})=W^+(check_{\flat})=right+root$; \item For all $b \in \Sigma$, $upd_{\flat,b}^{\leftarrow}\in T_{ab}$ with $W^-(upd_{\flat,b}^{\leftarrow})=right$, $W^+(upd_{\flat,b}^{\leftarrow})=right$\\ and $\Omega(upd_{\flat,b}^{\leftarrow})=right+p_b$. \end{itemize} The initial state is $s[r,root+start+left+right]$. Let us explain how the simulation works. Let $abc$ be the word on the tape of $\mathcal M$. 
Then firing $(r,t_a)(v_1,t_b)(v_2,t_c)$ one gets: \begin{center} \begin{tikzpicture}[xscale=0.65,yscale=0.65] \path (0,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$root+left+right$}}] (q0) {\small{$r$}}; \path (4,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$p_a$}}] (q1) {\small{$v_1$}}; \path (8,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$p_b$}}] (q2) {\small{$v_2$}}; \path (12,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$p_c+start$}}] (q3) {\small{$v_3$}}; \draw[arrows=-latex'] (q0) -- (q1) node[pos=0.5,above] {\tiny{$ret$}} ; \draw[arrows=-latex'] (q1) -- (q2) node[pos=0.5,above] {\tiny{$ret$}} ; \draw[arrows=-latex'] (q2) -- (q3) node[pos=0.5,above] {\tiny{$ret$}} ; \end{tikzpicture} \end{center} After firing $(v_3,next)(v_3,from_c)(r,to_c)(v_2,from_b)(u_1,to_b)(v_1,from_a)(u_2,to_a)(r,run)$ one gets: \begin{center} \begin{tikzpicture}[xscale=0.65,yscale=0.65] \path (0,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$root+left$}}] (q0) {\small{$r$}}; \path (4,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$p_c$}}] (q1) {\small{$u_1$}}; \path (8,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$p_b$}}] (q2) {\small{$u_2$}}; \path (12,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$p_a+right$}}] (q3) {\small{$u_3$}}; \draw[arrows=-latex'] (q0) -- (q1) node[pos=0.5,above] {\tiny{$right$}} ; \draw[arrows=-latex'] (q1) -- (q2) node[pos=0.5,above] {\tiny{$right$}} ; \draw[arrows=-latex'] (q2) -- (q3) node[pos=0.5,above] {\tiny{$right$}} ; \end{tikzpicture} \end{center} Let us describe the two cases of tape simulation. Assume that the content of the tape is $abcd\flat^\omega$ and that the head of $\mathcal M$ is over $c$ then the corresponding state is the following one. The ``left'' branch contains the content of the tape on the left of the head while descending to the leaf and the ``right'' branch contains the relevant content of the tape on the right of the head (including the cell under the head) while ascending from the leaf. Thus the token in place $right$ points to the thread corresponding to the cell under the head while the token in place $left$ points to the thread corresponding to the cell immediately on the left of the head. The state of $\mathcal M$ is the state of $\mathcal A$. 
\begin{center} \begin{tikzpicture}[xscale=0.65,yscale=0.65] \path (0,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$root$}}] (q0) {\small{$r$}}; \path (4,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$p_d$}}] (q1) {}; \path (8,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$p_c+right$}}] (q2) {\small{$v$}}; \path (-4,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$p_a$}}] (q3) {}; \path (-8,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$p_b+left$}}] (q4) {\small{$u$}}; \draw[arrows=-latex'] (q0) -- (q1) node[pos=0.5,above] {\tiny{$right$}} ; \draw[arrows=-latex'] (q1) -- (q2) node[pos=0.5,above] {\tiny{$right$}} ; \draw[arrows=-latex'] (q0) -- (q3) node[pos=0.5,above] {\tiny{$left$}} ; \draw[arrows=-latex'] (q3) -- (q4) node[pos=0.5,above] {\tiny{$left$}} ; \end{tikzpicture} \end{center} Assume that the content of the tape is $abcd\flat^\omega$ and that the head of $\mathcal M$ is over the first $\flat$ then the corresponding state is the following one. \begin{center} \begin{tikzpicture}[xscale=0.65,yscale=0.65] \path (0,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$root+right$}}] (q0) {\small{$r$}}; \path (-4,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$p_a$}}] (q1) {}; \path (-8,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$p_b$}}] (q2) {}; \path (-12,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$p_c$}}] (q3) {}; \path (-16,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$p_d+left$}}] (q4) {}; \draw[arrows=-latex'] (q0) -- (q1) node[pos=0.5,above] {\tiny{$left$}} ; \draw[arrows=-latex'] (q1) -- (q2) node[pos=0.5,above] {\tiny{$left$}} ; \draw[arrows=-latex'] (q2) -- (q3) node[pos=0.5,above] {\tiny{$left$}} ; \draw[arrows=-latex'] (q3) -- (q4) node[pos=0.5,above] {\tiny{$left$}} ; \end{tikzpicture} \end{center} It is routine to check that the simulation works. Let us illustrate it with one example. Assume that the content of the tape is $abcd\flat^\omega$, the head of $\mathcal M$ is over $c$ and the current state is $\ell$. Let $\delta(\ell,c)=(\ell',e,\leftarrow)$. 
Then after firing $(v,upd_{c,e}^{\leftarrow})(u,left_{b}^{\leftarrow})(v,right_{b}^{\leftarrow})$, one gets:
\begin{center}
\begin{tikzpicture}[xscale=0.65,yscale=0.65]
\path (0,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$root$}}] (q0) {\small{$r$}};
\path (4,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$p_d$}}] (q1) {};
\path (8,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$p_e$}}] (q2) {\small{$v$}};
\path (-4,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$p_a+left$}}] (q3) {};
\path (12,0) node[draw,circle,inner sep=2pt,minimum size=0.8cm, label={[xshift=0cm, yshift=0cm]\tiny{$p_b+right$}}] (q4) {};
\draw[arrows=-latex'] (q0) -- (q1) node[pos=0.5,above] {\tiny{$right$}} ;
\draw[arrows=-latex'] (q1) -- (q2) node[pos=0.5,above] {\tiny{$right$}} ;
\draw[arrows=-latex'] (q0) -- (q3) node[pos=0.5,above] {\tiny{$left$}} ;
\draw[arrows=-latex'] (q2) -- (q4) node[pos=0.5,above] {\tiny{$right$}} ;
\end{tikzpicture}
\end{center}
For all $a\in \Sigma$, the homomorphism $h$ maps $t_a$ to $a$ and for all $t \notin \{t_a\}_{a\in \Sigma}$, $h$ maps $t$ to $\varepsilon$.
\end{proof}
Obviously, the family of RPN coverability languages includes the family of PN coverability languages. In~\cite{HP-icatpn99}, Proposition~1 establishes that the family of context-free languages is included in the family of reachability languages of RPNs. The proof relies on simulating the leftmost derivations of a context-free grammar using two particular places $b_X$ and $e_X$ per nonterminal symbol $X$, where a token in $b_X$ means that $X$ must be derived and a token in $e_X$ means that the derivation of $X$ into a word has been achieved. In order to adapt this result to the family of coverability languages of RPNs, it is enough to consider w.l.o.g. that the initial symbol $I$ never appears on the right hand side of a rule and to specify $s[r,e_I]$ as final state. We refer the reader to~\cite{HP-icatpn99} for more details.
\begin{restatable}{proposition}{CFinRPN}
\label{prop:CFinRPN}
The family of context-free languages is included in the family of coverability languages of RPNs.
\end{restatable}
Since universality is undecidable for the family of context-free languages, we deduce that universality of the family of RPN coverability languages is undecidable. Let $\mathcal L_1=\{a^mb^nc^p \mid m\geq n \geq p\}$. Denote by $\mathcal L_2 = \{w\tilde{w}\mid w\in \{d,e\}^*\}$ where $\tilde{w}$ is the mirror of $w$. Let $\mathcal L_3=\{a^nb^nc^n \mid n \in \mathbb{N}\}$. Observe that given the final marking $p_f$, the net in Figure~\ref{fig:PN_for_L1_and_L3} has $\mathcal L_1$ as its coverability language and $\mathcal L_3$ as its reachability language. Indeed, the two $\varepsilon$-transitions split any firing sequence into three phases during which only $a$, only $b$ and only $c$ can be fired, respectively; each $b$ consumes a token produced by some $a$ and each $c$ consumes a token produced by some $b$, so covering $p_f$ characterises the words $a^mb^nc^p$ with $m\geq n\geq p$, while reaching exactly the marking $p_f$ moreover forces $m=n=p$.
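The observation on the coverability side can be tested with the following small script, which is not part of the formal development: it encodes the net of Figure~\ref{fig:PN_for_L1_and_L3} (with hypothetical place names \texttt{p1},\dots,\texttt{p5}) and decides, by an exhaustive search that terminates because the reachable markings are bounded by the length of the word, whether a given word labels a firing sequence whose final marking covers $p_f$.
\begin{verbatim}
from collections import deque

# The net of the figure: (precondition, postcondition, label); "" is epsilon.
TRANS = [
    ({"p1": 1}, {"p2": 1}, ""),                     # first epsilon transition
    ({"p2": 1}, {"p3": 1}, ""),                     # second epsilon transition
    ({"p1": 1}, {"p1": 1, "p4": 1}, "a"),           # a: needs the token in p1
    ({"p2": 1, "p4": 1}, {"p2": 1, "p5": 1}, "b"),  # b: consumes a token of a
    ({"p3": 1, "p5": 1}, {"p3": 1}, "c"),           # c: consumes a token of b
]
PLACES = ["p1", "p2", "p3", "p4", "p5"]

def step(m, pre, post):
    # fire one transition if enabled, otherwise return None
    if any(m[p] < n for p, n in pre.items()):
        return None
    m = dict(m)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] += n
    return m

def covers_pf(word):
    # breadth-first search over pairs (marking, consumed prefix of the word)
    start = ({"p1": 1, "p2": 0, "p3": 0, "p4": 0, "p5": 0}, 0)
    seen, queue = set(), deque([start])
    while queue:
        m, i = queue.popleft()
        if i == len(word) and m["p3"] >= 1:         # word consumed, p_f covered
            return True
        for pre, post, lab in TRANS:
            m2 = step(m, pre, post)
            if m2 is None:
                continue
            if lab == "":
                j = i
            elif i < len(word) and word[i] == lab:
                j = i + 1
            else:
                continue
            key = (tuple(m2[p] for p in PLACES), j)
            if key not in seen:
                seen.add(key)
                queue.append((m2, j))
    return False

for w in ["aab", "aabbc", "abc", "abbc", "cba"]:
    print(w, covers_pf(w))   # expected: True, True, True, False, False
\end{verbatim}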
\begin{figure}[h] \begin{center} \begin{tikzpicture}[xscale=1,yscale=1] \path (0,0) node[] {$\bullet$}; \path (0,0) node[draw,circle,inner sep=2pt,minimum size=0.4cm] (p1) {}; \path (1,0.5) node[] {$\varepsilon$}; \path (1,0) node[draw,rectangle,inner sep=2pt,minimum width=0.2cm,minimum height=0.4cm] (t1) {}; \path (2,0) node[draw,circle,inner sep=2pt,minimum size=0.4cm] (p2) {}; \path (3,0.5) node[] {$\varepsilon$}; \path (3,0) node[draw,rectangle,inner sep=2pt,minimum width=0.2cm,minimum height=0.4cm] (t2) {}; \path (4,0) node[draw,circle,inner sep=2pt,minimum size=0.4cm] (p3) {}; \path (4,0.5) node[] {$p_f$}; \path (0,-1.5) node[] {$a$}; \path (0,-1) node[draw,rectangle,inner sep=2pt,minimum width=0.2cm,minimum height=0.4cm] (t3) {}; \path (1,-1) node[draw,circle,inner sep=2pt,minimum size=0.4cm] (p4) {}; \path (2,-1.5) node[] {$b$}; \path (2,-1) node[draw,rectangle,inner sep=2pt,minimum width=0.2cm,minimum height=0.4cm] (t4) {}; \path (3,-1) node[draw,circle,inner sep=2pt,minimum size=0.4cm] (p5) {}; \path (4,-1.5) node[] {$c$}; \path (4,-1) node[draw,rectangle,inner sep=2pt,minimum width=0.2cm,minimum height=0.4cm] (t5) {}; \draw[arrows=-latex] (p1) -- (t1) ; \draw[arrows=-latex] (t1) -- (p2) ; \draw[arrows=-latex] (p2) -- (t2) ; \draw[arrows=-latex] (t2) -- (p3) ; \draw[arrows=-latex] (t3) -- (p4) ; \draw[arrows=-latex] (p4) -- (t4) ; \draw[arrows=-latex] (t4) -- (p5) ; \draw[arrows=-latex] (p5) -- (t5) ; \draw[arrows=-latex] (p1) -- (-0.2,-0.5)-- (t3) ; \draw[arrows=-latex] (t3) -- (0.2,-0.5)-- (p1) ; \draw[arrows=-latex] (p2) -- (1.8,-0.5)-- (t4) ; \draw[arrows=-latex] (t4) -- (2.2,-0.5)-- (p2) ; \draw[arrows=-latex] (p3) -- (3.8,-0.5)-- (t5) ; \draw[arrows=-latex] (t5) -- (4.2,-0.5)-- (p3) ; \end{tikzpicture} \end{center} \caption{A Petri net for the languages $\mathcal L_1 $ and $ \mathcal L_3 $} \label{fig:PN_for_L1_and_L3} \end{figure} \smallskip The next proposition witnesses a Petri net language interesting from an expressiveness point of view. A similar result can be found page~179 in Peterson's book~\cite{nla.cat-vn2956435}. \begin{restatable}{proposition}{PNnotCF} \label{prop:PNnotCF} $\mathcal L_1$ is the coverability language of some Petri net but it is not a context-free language. \end{restatable} \begin{proof} Let us recall (a weak version of) Ogden lemma~\cite{ogden1968helpful}. For any context-free language $\mathcal L$ there exists an integer $N$ such for any word $w \in \mathcal L$ with $N$ marked positions, there exists a decomposition $w=w_1w_2w_3w_4w_5$ such that $w_2w_4$ contains at least a marked position and for all $n\geq 0$, $w_1w_2^nw_3w_4^nw_5 \in \mathcal L$. \noindent The proof that $\mathcal L_1$ is not a context-free language is similar to the proof of the folk result that $\mathcal L_3$ is not a context-free language. Assume that $\mathcal L_1$ is a context-free language and consider the word $w=a^Nb^Nc^N$ with all $c$ positions marked. So let $w=w_1w_2w_3w_4w_5$ with the decomposition fulfilling the requirements of Ogden lemma. Since $w'=w_1w_2^2w_3w_4^2w_5\in \mathcal L_1$, $w_2$ and $w_4$ are mono-letter words. Furthermore one of these words is equal to $c^q$ for some $q>0$. If $w_2=c^q$ then $w_4=c^{q'}$ and thus $w'$ contains too much $c$'s to belong to $\mathcal L_1$. If $w_4=c^q$ then either $w_2=a^{q'}$, $w_2=b^{q'}$ or $w_2=c^{q'}$. Whatever the case, $w'$ misses either $a$'s or $b$'s to belong to $\mathcal L_1$. 
\noindent As mentioned before, the coverability language of the net in Figure~\ref{fig:PN_for_L1_and_L3} with final marking $p_f$ is $\mathcal L_1$.
\end{proof}
Using the previous results, the next theorem emphasises the expressive power of coverability languages of RPNs.
\begin{theorem}\label{context-free}
The family of coverability languages of RPNs strictly includes the union of the family of coverability languages of PNs and the family of context-free languages.
\end{theorem}
\begin{proof}
The inclusion is an immediate consequence of Proposition~\ref{prop:CFinRPN}. Consider the language $\mathcal L=\mathcal L_1 \cup \mathcal L_2$.
\noindent Since (1) by Proposition~\ref{prop:closedunion}, the family of coverability languages of RPNs is closed under union, (2) by Proposition~\ref{prop:PNnotCF}, $\mathcal L_1$ is the coverability language of some Petri net, and (3) $\mathcal L_2$ is a context-free language, we deduce that $\mathcal L$ is an RPN coverability language.
\noindent PN and context-free languages are closed under homomorphism. Since the projection of $\mathcal L$ on $\{a,b,c\}$ is the language of Proposition~\ref{prop:PNnotCF}, $\mathcal L$ is not a context-free language. The projection of $\mathcal L$ on $\{d,e\}$ is the language of palindromes. Since it was seen in~\cite{Lambert92} that the language of palindromes (over two letters) is not a coverability language of any PN, we are done.
\end{proof}
\input{reach-cov}
The transformation presented in the above proof can be performed in polynomial time; this will be used in the next section. The next proposition establishes that, as for Petri nets, coverability does not ensure the power of ``exact counting''. The proof is interesting in itself since it combines an argument based on WSTS (Case 1) and an argument \emph{\`a la} Ogden (Case 2).
\begin{proposition} \label{prop:ReachPNnotCovRPN}
$\mathcal L_3$ is the reachability language of the Petri net of Figure~\ref{fig:PN_for_L1_and_L3} but it is not the coverability language of any RPN.
\end{proposition}
\begin{proof}
\noindent Due to Proposition~\ref{prop:cov-inc-reach}, it is enough to prove that there does not exist $(\mathcal N,s[r,m_0])$ such that $\mathcal L_3=\mathcal L_R(\mathcal N,s[r,m_0],\{\emptyset\})$. Assume by contradiction that there exists such an $(\mathcal N,s[r,m_0])$. For all $n$, let $\sigma_n$ be a firing sequence reaching $\emptyset$ such that $\lambda(\sigma_n)=a^nb^nc^n$ and let $\sigma'_n$ be the prefix of $\sigma_n$ whose last transition corresponds to the last occurrence of $a$. Denote by $s_n$ the state reached by $\sigma'_n$ and write the decomposition as $\sigma_n=\sigma'_n\sigma''_n$. Among the possible $\sigma_n$, we select one such that $s_n$ has a minimal number of threads. Let $Post$ be the finite subset of $\mathbb{N}^P$ defined by $Post=\{W^+(t)\}_{t\in T_{ab}}$.
\begin{center} \begin{tikzpicture}[triangle/.style = {regular polygon, regular polygon sides=3 },xscale=0.45,yscale=0.45] \path (0,3.4) node[label={[xshift=0.0cm, yshift=-0.2cm]\small{$s[r,m_0]$}}] (s0) {$\bullet$}; \path (8,0) node[draw,minimum size=3cm,triangle,inner sep=0pt, label={[xshift=0.0cm, yshift=0cm]$s_n$}] (sn) {}; \path (16,3.4) node[] (fn) {$\emptyset$}; \draw[arrows=-latex'] (s0) -- (8,3.4) node[pos=0.5,above] {$\sigma'_n$} ; \draw[arrows=-latex'] (8,3.4) -- (fn) node[pos=0.5,above] {$\sigma''_n$} ; \path (4,2.5) node[] {\scriptsize{$\sigma'_n=\rho t$}}; \path (4,1.5) node[] {\scriptsize{$\lambda(\rho)=a^{n-1}$}}; \path (4,0.5) node[] {\scriptsize{$\lambda(t)=a$}}; \path (12,1.5) node[] {\scriptsize{$\lambda(\sigma''_n)=b^{n}c^n$}}; \path (12,1.5) node[] {\scriptsize{$\lambda(\sigma''_n)=b^{n}c^n$}}; \path (8,-2) node[] {\tiny{minimal number of threads of $s_n$}}; \end{tikzpicture} \end{center} \noindent {\bf Case 1.} There exists a bound $B$ of the depths of the trees corresponding to $\{s_n\}_{n\in \mathbb{N}}$. Let $S_B$ be the set of abstract states of depth at most $B$ and different from $\emptyset$. Observe that $S_0$ can be identified to $\mathbb{N}^P$ and $S_B$ can be identified to $\mathbb{N}^P \times {\sf Multiset}(Post \times S_{B-1})$. Furthermore the (component) order on $\mathbb{N}^P$ and the equality on $Post$ are well quasi-orders. Since well quasi-order is preserved by the multiset operation and the cartesian product, $S_B$ is well quasi-ordered by a quasi-order denoted $<$. By construction, $s\leq s'$ implies $s\preceq_r s'$. Thus there exist $n<n'$ such that $s_{n}\preceq_r s_{n'}$ which entails that $\sigma'_{n'}\sigma''_{n}$ is a firing sequence with trace $a^{n'}b^{n}c^{n}$ reaching $\emptyset$ yielding a contradiction. \noindent {\bf Case 2.} The depths of the trees corresponding to $\{s_n\}_{n\in \mathbb{N}}$ are unbounded. There exists $n$ such that the depth of $s_n$ is greater than $(2|Post|+1)$. Thus in $s_n$ for $1\leq j\leq 3$, there are edges $u_j \xrightarrow{m}_{s_n}v_j$ and denoting $i_j$ the depth of $v_j$, one has $0<i_1<i_2<i_3$. % % % % % % % % % % % % % \noindent For $k\in \{1,2,3\}$, consider of the sequence $\rho_k$ performed in the subtree rooted in $v_k$ by the firings of $\sigma_n$. Among these three firing sequences two of them either (1) both finish by a cut transition in $v_k$ or (2) both do not finish by a cut transition in $v_k$. Let us call $i,j$ with $i<j$ the indices of these sequences and $w_i$ and $w_j$ their traces. We have illustrated the situation below. 
\begin{center} \begin{tikzpicture}[triangle/.style = {regular polygon, regular polygon sides=3 },xscale=0.45,yscale=0.45] \path (0,3.4) node[label={[xshift=0.0cm, yshift=-0.2cm]\small{$s[r,m_0]$}}] (s0) {$\bullet$}; \path (6,1.7) node[draw,minimum size=1.5cm,triangle,inner sep=0pt, label={[xshift=-0.2cm, yshift=-0.7cm]{}}] (s1) {}; \path (12,0.6) node[draw,minimum size=2.5cm,triangle,inner sep=0pt, label={[xshift=0.0cm, yshift=-0.7cm]{}}] (s2) {}; \path (18,0) node[draw,minimum size=3cm,triangle,inner sep=0pt, label={[xshift=0.0cm, yshift=-1.5cm]{}}] (sn) {}; \path (30,3.4) node[] (fn) {$\emptyset$}; \path (30,0.8) node[] (fn1) {$\:\:$}; \path (30,-0.8) node[] (fn2) {$\:\:$}; \path (12,-0.2) node[draw,color=blue,minimum size=1cm,triangle,inner sep=0pt, label={[xshift=0.0cm, yshift=-0.7cm]{}}] (st2) {}; \path (18,-0.8) node[draw,color=blue,minimum size=1.6cm,triangle,inner sep=0pt, label={[xshift=0.0cm, yshift=-0.7cm]{}}] (st3) {}; \path (18,-1.4) node[draw,color=red,minimum size=0.5cm,triangle,inner sep=0pt, label={[xshift=0.0cm, yshift=-0.7cm]{}}] (su3) {}; % % \draw[arrows=-latex'] (s0) -- (18,3.4) node[pos=0.5,above] {$\sigma'_n$} ; \draw[arrows=-latex'] (18,3.4) -- (fn) node[pos=0.5,above] {$\sigma''_n$} ; \draw[color=blue,arrows=-latex'] (6,0.8) -- (fn1) node[pos=0.8,above] {$\rho_i$} ; \draw[color=red,arrows=-latex'] (12,-0.8) -- (fn2) node[pos=0.7,above] {$\rho_{j}$} ; \draw (6,3.4) -- (6,0.8); \path (6,0.8) node[label={[xshift=-0.3cm, yshift=-0.7cm]$v_i$}] {$\bullet$}; \draw (12,3.4) -- (12,0.8); \path (12,0.8) node[] {$\bullet$}; \path (12,-0.8) node[label={[xshift=-0.3cm, yshift=-0.7cm]$v_{j}$}] {$\bullet$}; \draw (12,0.8) -- (12,-0.8); \path (18,0.8) node[] {$\bullet$}; \path (18,-0.8) node[] {$\bullet$}; \draw (18,0.8) -- (18,-0.8); \end{tikzpicture} \end{center} \noindent One can build two firing sequences that still reach $\emptyset$ and thus whose labels belong to the language. The first one consists of mimicking the ``behavior'' of the subtree rooted in $v_{j}$ starting from $v_i$, which is possible due to the choice of $i$ and $j$, as illustrated below. \begin{center} \begin{tikzpicture}[triangle/.style = {regular polygon, regular polygon sides=3 },xscale=0.45,yscale=0.45] \path (0,3.4) node[label={[xshift=0.0cm, yshift=-0.2cm]\small{$s[r,m_0]$}}] (s0) {$\bullet$}; \path (6,1.7) node[draw,minimum size=1.5cm,triangle,inner sep=0pt, label={[xshift=-0.2cm, yshift=-0.7cm]{}}] (s1) {}; \path (12,1.1) node[draw,minimum size=2.1cm,triangle,inner sep=0pt, label={[xshift=0.0cm, yshift=-0.7cm]{}}] (s2) {}; \path (24,3.4) node[] (fn) {$\emptyset$}; \path (24,0.8) node[] (fn1) {$\:\:$}; \path (12,0.2) node[draw,color=red,minimum size=0.5cm,triangle,inner sep=0pt, label={[xshift=0.0cm, yshift=-0.7cm]{}}] (su3) {}; % \draw[arrows=-latex'] (s0) -- (fn) node[pos=0.5,above] {} ; \draw[color=red,arrows=-latex'] (6,0.8) -- (fn1) node[pos=0.7,above] {$\rho_{j}$} ; \draw (6,3.4) -- (6,0.8); \path (6,0.8) node[label={[xshift=-0.3cm, yshift=-0.7cm]$v_i$}] {$\bullet$}; \draw (12,3.4) -- (12,0.8); \path (12,0.8) node[] {$\bullet$}; \end{tikzpicture} \end{center} \noindent The second one consists of mimicking the ``behavior'' of the subtree rooted in $v_{i}$ starting from $v_{j}$ as illustrated below. 
\begin{center} \begin{tikzpicture}[triangle/.style = {regular polygon, regular polygon sides=3 },xscale=0.35,yscale=0.35] \path (0,3.4) node[label={[xshift=0.0cm, yshift=-0.2cm]\small{$s[r,m_0]$}}] (s0) {$\bullet$}; \path (6,1.3) node[draw,minimum size=1.5cm,triangle,inner sep=0pt, label={[xshift=-0.2cm, yshift=-0.7cm]{}}] (s1) {}; \path (12,0.2) node[draw,minimum size=2.2cm,triangle,inner sep=0pt, label={[xshift=0.0cm, yshift=-0.7cm]{}}] (s2) {}; \path (18,-0.7) node[draw,minimum size=2.8cm,triangle,inner sep=0pt, label={[xshift=0.0cm, yshift=-1.5cm]{}}] (sn) {}; \path (26,-1.4) node[draw,minimum size=3.3cm,triangle,inner sep=0pt, label={[xshift=0.0cm, yshift=-1.5cm]{}}] (ssn) {}; \path (36,3.4) node[] (fn) {$\emptyset$}; \path (36,-1.4) node[] (fn1) {$\:\:$}; \path (36,-2.7) node[] (fn2) {$\:\:$}; \path (12,-0.9) node[draw,color=blue,minimum size=0.7cm,triangle,inner sep=0pt, label={[xshift=0.0cm, yshift=-0.7cm]{}}] (st2) {}; \path (18,-2.2) node[draw,color=blue,minimum size=0.7cm,triangle,inner sep=0pt, label={[xshift=0.0cm, yshift=-0.7cm]{}}] (st2bis) {}; \path (26,-3) node[draw,color=blue,minimum size=1cm,triangle,inner sep=0pt, label={[xshift=0.0cm, yshift=-0.7cm]{}}] (st3) {}; \path (26,-3.3) node[draw,color=red,minimum size=0.5cm,triangle,inner sep=0pt, label={[xshift=0.0cm, yshift=-0.7cm]{}}] (su3) {}; % % \draw[arrows=-latex'] (s0) -- (fn) node[pos=0.5,above] {} ; \draw[color=blue,arrows=-latex'] (6,0.2) -- (12,0.2) node[pos=0.8,above] {} ; \draw[color=blue,arrows=-latex'] (12,-1.4) -- (fn1) node[pos=0.8,above] {$\rho_i$} ; \draw[color=red,arrows=-latex'] (18,-2.7) -- (fn2) node[pos=0.7,above] {$\rho_{j}$} ; \draw (6,3.4) -- (6,0.2); \path (6,0.2) node[label={[xshift=-0.3cm, yshift=-0.7cm]$v_i$}] {$\bullet$}; \draw (12,3.4) -- (12,0.2); \path (12,0.2) node[] {$\bullet$}; \path (12,-1.4) node[label={[xshift=-0.4cm, yshift=-0.7cm]$v_{j}$}] {$\bullet$}; \draw (12,0.8) -- (12,-1.4); \path (18,0.2) node[] {$\bullet$}; \path (18,-1.4) node[] {$\bullet$}; \path (18,-2.7) node[] {$\bullet$}; \draw (18,0.2) -- (18,-2.7); \path (26,0.2) node[] {$\bullet$}; \path (26,-1.4) node[] {$\bullet$}; \path (26,-2.7) node[] {$\bullet$}; \draw (26,0.2) -- (26,-2.7); \end{tikzpicture} \end{center} \noindent {\bf Case $w_i=w_{j}$.} Then the firing sequence reaching $\emptyset$ obtained by mimicking in $v_i$ the behaviour of $v_{j}$ has trace $a^nb^nc^n$ and leads to another state $s_n$ with less threads yielding a contradiction, since $s_n$ was supposed to have a minimal number of threads. \noindent {\bf Case $w_i\neq w_{j}$.} Let $w\neq \varepsilon$ be the trace of the sequence performed in the subtree rooted in $v_i$ without the trace of the sequence performed in the subtree rooted in $v_{j}$. Let us consider the firing sequence $\sigma$ reaching $\emptyset$ obtained by mimicking in $v_{j}$ the behaviour of $v_{i}$. The trace of $\sigma$ is an interleaving of $a^nb^nc^n$ and $w$ and it belongs to $\mathcal L_3$ which implies that $w=a^qb^qc^q$ for some $q>0$. Furthermore $\sigma$ can be chosen in such a way that the firing subsequences in the subtrees rooted at $v_i$ and $v_{j}$ are performed in one shot which implies that its trace is $\ldots a^qa^qw_{j}b^qc^q b^qc^q\ldots$ yielding a contradiction. \end{proof} The following corollary shows that extending the family of coverability languages of PNs by substituting either (1) coverability by reachability or (2) PNs by RPNs is somewhat ``orthogonal''. 
\begin{corollary} \label{cor:ReachPNuncompCovRPN} The family of reachability languages of Petri nets and the family of coverability languages of RPNs are incomparable. \end{corollary} \begin{proof} One direction is a consequence of Proposition~\ref{prop:ReachPNnotCovRPN} while the other direction is a consequence of Proposition~\ref{prop:CFinRPN}, observing that the language of palindromes is not the reachability language of any Petri net. \end{proof} The next corollary exhibits a particular feature of RPN languages (e.g. Petri net languages or context-free languages are closed under intersection with a regular language). \begin{corollary}\label{col:intersection_rpn_and_regular} The family of coverability languages of RPNs is closed neither under intersection with a regular language nor under complementation. \end{corollary} \begin{proof} Due to Proposition~\ref{prop:ReachPNnotCovRPN}, the family of coverability languages of RPNs is strictly included in the family of recursively enumerable languages. Since the former family is closed under homomorphism, Theorem~\ref{theo:re} implies that it is not closed under intersection with a regular language and a fortiori with another coverability language. Since intersection can be obtained by union and complementation and since the family of RPN coverability languages is closed under union, it is not closed under complementation. \end{proof} Combining Propositions~\ref{prop:reach-inc-cov},~\ref{prop:cov-inc-reach} and~\ref{prop:ReachPNnotCovRPN}, one gets the following theorem. \begin{theorem} The family of coverability languages of RPNs is strictly included in the family of reachability languages of RPNs. \end{theorem} Figure~\ref{fig:lang_compare} illustrates the hierarchy of the languages presented in this work. \begin{figure}[h] \input{lan_compare.tex} \caption{$\mathcal L_1 =\{a^mb^nc^p\mid m\geq n\geq p \};\mathcal L_2= \{w\in\{d,e\}^* \mid w=\widetilde{w}\};\mathcal L_3 = \{a^nb^nc^n\mid n\in\mathbb{N}\} $} \label{fig:lang_compare} \end{figure} \section{Finiteness and boundedness are {\sf EXPSPACE}-complete} \label{sec:finiteness} In this section we show that the finiteness and boundedness problems for RPNs are ${\sf EXPSPACE}$-complete w.r.t. $\eta=size(\mathcal N,s_{0})$, i.e. the accumulated size of the RPN and the initial state. For Petri nets the finiteness problem, which is equivalent to the boundedness problem, has been shown to be ${\sf EXPSPACE}$-complete: \begin{theorem}[\cite{Lipton76,Rac78}] \label{thm:Finiteness Bound For PN }The finiteness problem for Petri nets is ${\sf EXPSPACE}$-complete. \end{theorem} ${\sf EXPSPACE}$-hardness follows immediately from ${\sf EXPSPACE}$-hardness of the finiteness problem for Petri nets~\cite{Lipton76}. Moreover, by applying Proposition~\ref{col:rooted} as in the previous sections, we assume that $s_0=s[r,m_0]$. Given two vertices $u,v$ in a graph $\mathcal G$, the \emph{distance} between them, denoted $dist_{\mathcal G}(u,v)$, is the length of a shortest path going from one to the other. \begin{lemma} \label{lem:Form a path in ab to a path in RPN} Let $(\mathcal N,s_0)$ be a marked RPN and $G_{\mathcal N,s_{0}}=(V_a,E_a,M_a)$ be its abstract graph. Then for all $v\in V_a$, there exists $s\in Reach(\mathcal N, s_{0})$ and $u\in V_s $ such that $ M_s(u)=M_a(v)$. \end{lemma} \begin{proof} We show the lemma by induction on $dist_{G_{\mathcal N,s_{0}}}(r,v)$. If $dist_{G_{\mathcal N,s_{0}}}(r,v) =0$ then $v=r$ and $M_a(r)=m_0$.
Assume that we have shown the lemma for any $v$ such that $dist_{G_{\mathcal N,s_{0}}}(r,v)<n$, and pick $v\in V_a$ such that $dist_{G_{\mathcal N,s_{0}}}(r,v)=n$. Since $dist_{G_{\mathcal N,s_{0}}}(r,v)>0$, $v=v_t$ for some $t\in T_{ab}$. Moreover there is some $(u,v_t)\in E_a$ such that $dist_{G_{\mathcal N,s_{0}}}(r,u)=n-1$ and by the induction hypothesis there is a sequence $s_{0}\xrightarrow{\sigma_u}s_u$ and some $w\in V_{s_u}$ such that $M_{s_u}(w)=M_a(u)$. From the definition of $G_{\mathcal N,s_{0}}$ there is a fireable sequence $ s[w,M_a(u)]\xrightarrow{\sigma'(w,t)}$. Combining these sequences, we get $s_{0}\xrightarrow{\sigma_u}s_u\xrightarrow{\sigma'(w,t)}s_{v_t}$, where the newly created thread $w'$ fulfills $M_{s_{v_t}}(w')=\Omega(t)=M_a(v_t)$. \end{proof} The following lemma shows that the behaviour of every thread can be simulated by a Petri net. \begin{lemma} \label{lem:every marking is covered by ab graph} Let $(\mathcal N,s_{0})$ be a marked RPN and $G_{\mathcal N,s_{0}}=(V_a,E_a,M_a)$ be its abstract graph. Then: $$ \bigcup_{s\in Reach(\mathcal N,s_{0})}\{M_s(v) \}_{v\in V_s}= \bigcup_{u\in V_a} Reach(\widehat{\mathcal N}_{el}, M_a(u)). $$ \end{lemma} \begin{proof} \noindent $\bullet$ Let $m\in\bigcup_{s\in Reach(\mathcal N,s_{0})}\{M_s(u) \}_{u\in V_s} $. There exists $s_0\xrightarrow{\sigma}s$ with some $v\in V_s $ such that $M_s(v)=m$. By Proposition~\ref{prop:omniciant} there is an omniscient sequence $s_0\xrightarrow{\widehat{\sigma}}_{\widehat{\mathcal N}}s$. We split $\widehat\sigma$ into $s_0\xrightarrow{\widehat{\sigma}_1}_{\widehat{\mathcal N}}s_v\xrightarrow{\widehat{\sigma}_2}_{\widehat{\mathcal N}}s$ where $s_v$ is the state where the thread $v$ first appears. Note that there is $u\in V_a$ for which $M_{s_v}(v) = M_a(u)$. Let $(v,\widehat{\sigma}_2')$ be the subsequence consisting of all firings of $v$ in $\widehat{\sigma}_2$. $(v,\widehat{\sigma}_2')$ is fireable from $s_v$ since $\widehat{\sigma}_2$ is omniscient, implying that no cut transition is fired by a child of $v$. By construction of $\widehat{\mathcal N}_{el}$, the sequence $\widehat{\sigma}_2'$ is a firing sequence of $(\widehat{\mathcal N}_{el},M_a(u))$, thus $ m\in Reach(\widehat{\mathcal N}_{el}, M_a(u))$. \noindent $\bullet$ Let $u\in V_a $ and $m\in Reach(\widehat{\mathcal N}_{el}, M_a(u))$, i.e. $M_a(u)\xrightarrow{\sigma}_{\widehat{\mathcal N}_{el}}m$ for some firing sequence $\sigma$. First by Lemma~\ref{lem:Form a path in ab to a path in RPN} there exists $s_0\xrightarrow{\sigma_u}_\mathcal N s_u$ where for some $v\in V_{s_u}$ we have $M_{s_u}(v) = M_a(u)$. By construction of $\widehat{\mathcal N}$ we also have $s_0\xrightarrow{\sigma_u}_{\widehat{\mathcal N}} s_u$. By construction of $\widehat{\mathcal N}_{el}$ we get that $s_u\xrightarrow{(v,\sigma)}_{\widehat{\mathcal N}}s$ where $M_s(v) = m$. By Proposition~\ref{prop:equivreach}, $s\in Reach(\mathcal N,s_{0})$, which concludes the proof. \end{proof} Using the previous lemma and Rackoff's theorem, we establish the complexity of the boundedness problem: \begin{proposition} The boundedness problem of RPN is {\sf EXPSPACE}-complete. \end{proposition} \begin{proof} Hardness follows from the hardness of the boundedness problem for Petri nets. Let $(\mathcal N,s_0)$ be a marked RPN. First, by Proposition~\ref{col:rooted}, we can assume that $s_0 = s[r,m_0]$.
By Lemma~\ref{lem:every marking is covered by ab graph}, checking whether $(\mathcal N,s_0)$ is bounded is equivalent to checking whether, for all $v \in V_a$, $(\widehat{\mathcal N}_{el}, M_a(v))$ is bounded, which, due to Rackoff, can be performed in exponential space. \end{proof} Let $(\mathcal N,s_{0})$ be a marked RPN. If $s_{0}=\emptyset$ then the number of reachable states is finite (one), hence from now on we assume that $s_{0}\neq\emptyset$. Next, if there exists $t\in T_{ab}$ with $W^-(t)=0$ then there are infinitely many reachable states since one can fire $t$ repeatedly, which provides us with a sequence of states with an unbounded number of threads. Therefore from now on we assume that for all $t\in T_{ab}$, $W^-(t)>0$. We now establish a connection between the boundedness of $\widehat{\mathcal N}_{el}$ and the maximal number of children of the root in $\mathcal N$: \begin{lemma}\label{lem:bounded num of children} Let $\mathcal N$ be an RPN such that $(\widehat{\mathcal N}_{el},m_0)$ is bounded. Then: \[ \sup_{s'\in Reach(\mathcal N,s[r,m_0])}|\{v\in V_{s'}\mid r_{s'}\rightarrow_{s'} v\}|< \infty \] \end{lemma} \begin{proof} Assume that there exists a family of sequences $ \{\sigma_n\}_{n\in\mathbb{N}} $ such that $s[r,m_0]\xrightarrow{\sigma_n}_{\mathcal N}s_n$ and the number of children of $r$ in $s_n$ is greater than $n$. By Proposition~\ref{prop:omniciant}, for all $\sigma_n$ there exists an omniscient sequence $\widehat{\sigma}_n$ in $\widehat{\mathcal N}$ from $s[r,m_0]$ reaching $s_n$. We remove from $\widehat{\sigma}_n$ all the transitions not fired from the root, getting $(r,\widehat{\sigma}_n')$ which is also fireable from $s[r,m_0]$ and which leads to a state where the root has a number of children greater than $n$. Since an abstract transition consumes tokens from the root (for all $t\in T_{ab}$, $W^-(t)>0$), one can remove the abstract transitions from $(r,\widehat{\sigma}_n')$ and get $(r,\widehat{\sigma}_n'')$ for which $s[r,m_0]\xrightarrow{(r,\widehat{\sigma}_n'')}_{\widehat{\mathcal N}}s_n''$ and $\sum_{p\in P}M_{s_n''}(r)(p)>n$. Since $ \widehat{\sigma}_n''$ is fireable from $m_0$ in $\widehat{\mathcal N}_{el}$, this contradicts the hypothesis of the lemma. \end{proof} Combining the above results, we get a characterization of the finiteness problem: \begin{proposition}\label{prop:const finitness is equvalent to} Let $(\mathcal N,s_{0})$ be a marked RPN. Then $Reach(\mathcal N, s_{0})$ is finite if and only if both of the following assertions hold:\\ 1. There is no cycle in $G_{\mathcal N,s_{0}}=(V_a,E_a,M_a)$;\\ 2. For all $v\in V_a$, $(\widehat{\mathcal N}_{el},M_a(v))$ is bounded. \end{proposition} \begin{proof} $\bullet$ Assume that Assertions~1 and~2 hold. Due to Assertion~1 and Lemma~\ref{lem:Finidng unbounded path using at}, any reachable state has its depth bounded by some constant. Due to Assertion~2 and Lemmas~\ref{lem:every marking is covered by ab graph} and~\ref{lem:bounded num of children}, each thread in any reachable state has a bounded number of children and a bounded number of different reachable markings. Therefore $Reach(\mathcal N,s_0)$ is finite.\smallskip \noindent $\bullet$ Assume that Assertion~1 does not hold. By Lemma~\ref{lem:Finidng unbounded path using at} there is a deep infinite sequence. Hence one can reach states of arbitrarily large depth. Therefore $Reach(\mathcal N,s_0)$ is not finite. \noindent $\bullet$ Assume that Assertion~2 does not hold for some vertex $v$.
By Lemma~\ref{lem:Form a path in ab to a path in RPN} there exists a state $s\in Reach(\mathcal N, s_{0})$ and a vertex $u\in V_s $ such that $ M_s(u)=M_a(v)$. By the definition of $\widehat{\mathcal N}_{el}$, for any $m\in Reach(\widehat{\mathcal N}_{el},M_s(u))$, there exists a firing sequence $(u,\sigma')$ in $\widehat{\mathcal N}$ such that $s\xrightarrow{(u,\sigma')}_{\widehat{\mathcal N}}s'$ with $M_{s'}(u)=m$. Since $(\widehat{\mathcal N}_{el},M_a(v))$ is unbounded, there are infinitely many such markings $m$, hence infinitely many distinct states in $Reach(\widehat{\mathcal N},s)\subseteq Reach(\widehat{\mathcal N},s_0)$. Due to Proposition~\ref{prop:equivreach}, $Reach(\widehat{\mathcal N},s_0)= Reach(\mathcal N,s_0)$, and thus $Reach(\mathcal N,s_0)$ is not finite. \end{proof} \begin{theorem} \label{prop:constraind finitness in EXPSPACE} The finiteness problem of RPN is {\sf EXPSPACE}-complete. \end{theorem} \begin{proof} The algorithm proceeds by checking Assertions~1 and~2 of Proposition~\ref{prop:const finitness is equvalent to}. It builds in exponential space (by Lemma~\ref{lem:abstract graph in expspace}) the abstract\ graph and checks whether there is a cycle in $G_{\mathcal N,s_{0}}$. If there is one, the reachability set is infinite. Otherwise, it checks in exponential space, for every vertex $v\in V_a$, whether the marked Petri net $(\widehat{\mathcal N}_{el},M_{a}(v))$ is bounded. \end{proof} \section{Introduction} \label{sec:introduction} {\bf Verification problems for Petri nets.} Petri nets are a useful formalism for the analysis of concurrent programs for several reasons. From a modeling point of view, (1) due to the locality of the firing rule, one easily models concurrent activities and (2) the (a priori) unbounded marking of places makes it possible to represent a dynamic number of activities. From a verification point of view, most of the usual properties are decidable. However, Petri nets suffer from two main limitations: they cannot model recursive features and the computational cost of verification may be very high. More precisely, all the known algorithms solving reachability are nonprimitive recursive (see for instance~\cite{Mayr84}) and it has been proved recently that the reachability problem is nonelementary~\cite{abs-1809-07115} but primitive recursive when the dimension is fixed~\cite{DBLP:conf/lics/LerouxS19}. Fortunately, some interesting properties like coverability, termination, finiteness, and boundedness are {\sf EXPSPACE}-complete~\cite{Rac78} and thus still manageable by a tool. So an important research direction consists of extending Petri nets to support new modeling features while still preserving decidability of property checking and, if possible, with a ``reasonable'' complexity. \noindent {\bf Extended Petri nets.} Such extensions may be partitioned into those whose states are still markings and the other ones. The simplest extension consists of adding inhibitor arcs, which yields undecidability of most of the verification problems. However, adding a single inhibitor arc preserves the decidability of the reachability, coverability, and boundedness problems~\cite{Reinhardt08,BFLZ-lmcs12,Bonnet11}. When adding reset arcs, the coverability problem becomes Ackermann-complete~\cite{PhS-mfcs10} and boundedness becomes undecidable~\cite{dufourd98}. In $\nu$-Petri nets, the tokens are colored, with colors picked from an infinite domain: their coverability problem is double-Ackermann time complete~\cite{lazic:hal-01265302}. In Petri nets with a stack, the reachability problem may be reduced to the coverability problem and both are nonelementary~\cite{abs-1809-07115,Lazic13}, while their decidability status is still unknown~\cite{Lazic13}.
In branching vector addition systems with states (BVASS) a state is a set of threads with associated markings. A thread either fires a transition as in Petri nets or forks, transferring a part of its marking to the new thread. For BVASS, the reachability problem is also {\sf TOWER}-hard~\cite{LazicS14} and its decidability is still an open problem while the coverability and the boundedness problems are 2-{\sf EXPTIME}-complete~\cite{jcss12-DJLL}. The analysis of subclasses of Petri nets with a stack is an active field of research~\cite{AtigG11,MavlankulovOTSZ18,DassowT09,Zetzsche15}. However, for none of the above extensions, the coverability and termination problems belong to {\sf EXPSPACE}. \noindent {\bf Recursive Petri nets (RPN).} This formalism has been introduced to model distributed planning of multi-agent systems for which counters and recursivity were necessary for specifying resources and delegation of subtasks~\cite{EFH-icmas96}. Roughly speaking, a state of an RPN consists of a tree of \emph{threads} where the local state of each thread is a marking. Any thread fires an \emph{elementary}, \emph{abstract} or \emph{cut} transition. When the transition is elementary, the firing updates its marking as in Petri nets; when it is abstract, this only consumes the tokens specified by the input arcs of the transition and creates a child thread initialized with the \emph{initial marking} of the transition. When a cut transition is fired, the thread and its subtree are pruned, producing in its parent the tokens specified by the output arcs of the abstract transition that created it. In RPN, reachability, boundedness and termination are decidable~\cite{HP-icatpn99,haddad:hal-01573071} by reducing these properties to reachability problems of Petri nets. So the corresponding algorithms are nonelementary. LTL model checking is undecidable for RPN but becomes decidable for the subclass of sequential RPN~\cite{HaddadP01}. In~\cite{HaddadP07}, several modeling features are proposed while preserving the decidability of the verification problems. \noindent {\bf Our contribution.} We first study the expressive power of RPN from the point of view of coverability languages (reachability languages were studied in \cite{HP-icatpn99}). We first introduce a quasi-order on states of RPN compatible with the firing rule and establish that it is not a well quasi-order. Moreover, we show that there cannot exist a transition-preserving compatible well quasi-order, preventing us to use the framework of Well Structured Transition Systems to prove that coverability is decidable. We show that the RPN languages are \emph{quite close} to recursively enumerable languages since the closure under homomorphism and intersection with a regular language is the family of recursively enumerable languages. More precisely, we show that RPN coverability (as reachability) languages strictly include the union of context-free languages and Petri net coverability languages. Moreover, we prove that RPN coverability languages and reachability languages of Petri nets are incomparable. We prove that RPN coverability languages are a strict subclass of RPN reachability languages. In addition, we establish that the family of RPN languages is closed under union, homomorphism but neither under intersection with a regular language nor under complementation. From an algorithmic point of view, we show that, as for Petri nets, coverability, termination, boundedness, and finiteness are {\sf EXPSPACE}-complete. 
Thus the increase of expressive power does not entail a corresponding increase in complexity. In order to solve the coverability problem, we show that if there exists a covering sequence then there exists a `short' one (i.e. with a length at most doubly exponential w.r.t. the size of the input). In order to solve the termination problem, we consider two cases for an infinite sequence depending (informally speaking) on whether the depth of the trees corresponding to the visited states is bounded along the sequence. For the unbounded case, we introduce the abstract graph that expresses the ability to create threads from some initial state. The decision procedures for the finiteness and boundedness problems are also mainly based on this abstract graph. Let us mention that this paper is an extended version of \cite{FHK-atpn19}: it contains new results about expressiveness, such as the characterization of the RPN coverability languages and the decidability and complexity of the finiteness and boundedness problems, and we have greatly simplified the proofs of coverability, termination, and finiteness. We have also provided a more elegant definition of the (now inductive) syntax and semantics of RPNs. \noindent {\bf Outline.} In section~\ref{sec:recursive}, we introduce RPNs and a state ordering and establish basic results related to these notions. In section~\ref{sec:reductions}, we introduce decision problems and some reductions between them. In section~\ref{sec:expressiveness}, we study the expressiveness of coverability languages. Then in sections~\ref{sec:coverability},~\ref{sec:termination}, and~\ref{sec:finiteness} we show that the coverability, termination, boundedness, and finiteness problems are {\sf EXPSPACE}-complete. In section~\ref{sec:conclusion}, we conclude and give some perspectives on this work. \subsection{Bibliography} \bibliographystyle{fundam} \section{Recursive Petri Nets} \label{sec:recursive} \subsection{Presentation} The state of an RPN has a structure akin to a `directed rooted tree' of Petri nets. Each vertex of the tree, hereafter called a \emph{thread}, is an instance of the RPN and possesses its own marking. Each of these threads can fire \emph{three} types of transitions. An \emph{elementary} transition updates its own marking according to the usual Petri net firing rule. An \emph{abstract} transition consumes tokens from the thread firing it and creates a new child (thread) for it. The marking of the new thread is determined according to the fired abstract transition. A \emph{cut} transition can be fired by a thread if its marking is greater than or equal to some given marking. By firing a cut transition, the thread erases itself and all of its descendants. Moreover, it creates tokens in its parent, which are specified by the abstract transition that created it.
\begin{definition}[Recursive Petri Net] \noindent A \emph{Recursive Petri Net} is a 5-tuple $\mathcal N=\langle P,T,W^{+},W^{-},\Omega\rangle$ where: \begin{itemize} % \item $P$ is a finite set of places; % \item $T=T_{el}\uplus T_{ab} \uplus T_{\tau}$ is a finite set of transitions with $P\cap T=\emptyset$, and $T_{el}$ (respectively $T_{ab}, T_{\tau}$) is the subset of elementary (respectively abstract, cut) transitions; % \item $W^{-}$ is the $\mathbb{N}^{P\times T}$ backward incidence matrix; % \item $W^{+}$ is the $\mathbb{N}^{P\times (T_{el}\uplus T_{ab})}$ forward incidence matrix; % \item $\Omega : T_{ab} \rightarrow \mathbb{N}^P$ is a function that labels every abstract transition with an initial marking; % \end{itemize} \end{definition} Figure~\ref{fig:rpn_example} graphically describes an example of an RPN with: \begin{align*} &P=\{p_{ini},p_{fin},p_{beg},p_{end}\}\cup\{p_{b_i},p_{a_i}:i\leq 2 \}; \\ &T_{el}=\{t_{b_1},t_{b_3},t_{a_1},t_{a_3},t_{sa},t_{sb}\}\,;\,T_{ab}=\{t_{beg},t_{b_2},t_{a_2}\};\\ &T_{\tau}=\{t_{\tau_1},t_{\tau_2}\}. \end{align*} and for instance $ W^-(p_{ini},t_{beg})=1 $ and $ \Omega(t_{b_2})= p_{beg} $ (where $ p_{beg} $ denotes the marking with one token in place $ p_{beg} $ and zero elsewhere). For brevity, we denote by $W^+(t)$ a vector in $\mathbb{N}^P$, where for all $p\in P$, $W^+(t)(p)=W^+(p,t)$, and we do the same for $W^-(t)$. \begin{figure}[h] \begin{center} \input{rpn_example} \end{center} \caption{An example of a marked RPN.} \label{fig:rpn_example} \end{figure} A \emph{concrete state} $s$ of an RPN is a labeled tree representing the relations between threads and their associated markings. Every vertex of $s$ is a thread and edges are labeled by markings of the form $W^+(t)$ with $t$ an abstract transition. We introduce a countable set $\mathcal V$ of vertices in order to pick new vertices when necessary. \begin{definition}[State of an RPN] A \emph{concrete state (in short, a state)} $s$ of an RPN is a tree over the finite set of vertices $V_s\subseteq \mathcal V$, inductively defined as follows: \begin{itemize} % \item either $V_s=\emptyset$ and thus $s=\emptyset$ is the empty tree; % \item or $V_s=\{r_s\}\uplus V_1\uplus \ldots \uplus V_k$ with $0\leq k$ and $s=(r_s,m_0,\{(m_i,s_i)\}_{1\leq i\leq k})$ is defined as follows: \begin{itemize} % \item $r_s$ is the root of $s$ labelled by a marking $m_{0} \in \mathbb{N}^P$; % \item For all $i\leq k$, $s_i$ is a state over $V_i\neq \emptyset$ and there is an edge $r_s\xrightarrow{m_i}_s r_{s_i}$ with $m_i\in \{W^+(t)\}_{t\in T_{ab}}$. \end{itemize} \end{itemize} \end{definition} For all $u,v\in V_s$, one denotes by $M_s(u)$ the marking labelling $u$ and, when $u\xrightarrow{m}_s v$, one writes $\Lambda(u,v):=m$. State $s_v$ is the (maximal) subtree of $s$ rooted in $v$. While the set of vertices $V_s$ will be important for analyzing the behavior of a firing sequence in an RPN, one can omit it and get a more abstract representation of the state. Note that contrary to the previous definition where $ \{(m_i,s_i)\}_{1\leq i\leq k} $ was a set, in the following definition we need a multiset $Child_s$. \begin{definition}[Abstract state of an RPN] An \emph{abstract state} $s$ of an RPN is inductively defined as follows: \begin{itemize} % \item either $s=\emptyset$ is the empty state; % \item or $s=(m_s,Child_s)$ where $m_s \in \mathbb{N}^P$ and $Child_s$ is a finite multiset of pairs $(m',s')$ where $m'\in \{W^+(t)\}_{t\in T_{ab}}$ and $s'$ is an abstract state different from $\emptyset$.
% \end{itemize} \end{definition} Given a concrete state $s$, we denote by $\abst{s}$ its abstract state. Unless explicitly stated otherwise, a state is a concrete state. In the other direction, given an abstract state $s$, one recovers its set of concrete states by picking an arbitrary set of vertices $V_s\subseteq \mathcal V$ of appropriate cardinality and, inductively, arbitrarily splitting $V_s$ between the root and the pairs $(m,s')$. For example, on the right side of Figure~\ref{fig:rpn_example}, there is a (concrete) state of the RPN $ \mathcal N$. This state consists of three threads with markings $ \bf 0,0,$ and $p_{end}$ (where $\bf 0 $ is the \emph{null marking}) and two edges with the labels $ W^+(t_{beg})$ and $W^+(t_{b_2}) $. Let $ s $ be a state of some RPN. Every thread $u$ different from the root has a unique \emph{parent}, denoted by $prd(u)$. The \emph{descendants} of a thread $u$ consist of the threads in the subtree rooted in $u$, including $u$ itself. We denote this set by $Des_{s}(u)$. For $m\in \mathbb{N}^P$, denote by $s[r,m]:=(r,m,\emptyset)$, the state consisting of a single vertex $r$ whose marking is $m$. As usual, two markings $m,m'\in \mathbb{N}^P$, over a set of places $P$, are partially ordered as follows: $m\leq m'$ if for all places $p\in P$, $m(p) \leq m'(p)$. \begin{definition}[Operational semantics] Let $s=(r,m_0,\{(m_i,s_i)\}_{1\leq i\leq k})$ be a state. Then the firing rule $s\xrightarrow{(v,t)}s'$ where $v\in V_s $ and $t\in T$ is inductively defined as follows: \begin{itemize} % \item Let $t\in T_{el}$ such that $W^-(t)\leq m_0$, then one has $s\xrightarrow{(r,t)} (r,m_0-W^-(t)+W^+(t), \{(m_i,s_i)\}_{i\leq k})$ % \item Let $t\in T_{ab}$ such that $W^-(t)\leq m_0$, then one has $s\xrightarrow{(r,t)} (r,m_0-W^-(t), \{(m_i,s_i)\}_{i\leq k+1})$ where $m_{k+1}=W^+(t)$, $s_{k+1}=s[v,\Omega(t)]$ with $v\in \mathcal V \setminus V_s$ % \item Let $t\in T_{\tau}$ such that $W^-(t)\leq m_0$, then one has $s\xrightarrow{(r,t)} \emptyset$ % \item Let $i\leq k$ such that $s_i \xrightarrow{(v,t)} s'_i$ \\ if $s'_i=\emptyset$ then $s \xrightarrow{(v,t)} (r,m_0+m_i,\{(m_j,s_j)\}_{1\leq j\neq i\leq k})$ else $s \xrightarrow{(v,t)} (r,m_0,\{(m_j,s_j)\}_{1\leq j\neq i\leq k}\cup \{(m_i,s'_i)\})$ \end{itemize} \end{definition} Figure~\ref{fig:firing_example} illustrates a sequence of transition firings in the RPN described by Figure~\ref{fig:rpn_example}. The first transition $t_{beg}\in T_{ab}$ is fired by the root. Its firing results in a state for which the root has a new child (denoted by $ v $) and a new outgoing edge with label $ p_{fin} $. The marking of the root is decreased to ${\bf 0}$ and $v$ is initially marked by $ \Omega(t_{beg})=p_{beg} $. The second firing is due to an elementary transition $t_{b_1}\in T_{el}$ which is fired by $v$. Its firing results in a state for which the marking of $v$ is changed to $M_{s'}(v)=M_s(v)+W^+(t_{b_1})-W^-(t_{b_1})=p_{b_1}$. The fifth transition to be fired is the cut transition $t_{\tau_2}$, fired by the thread with the marking $p_{end}$ (denoted by $w$). Its firing results in a state where the thread $ w $ is erased, and the marking of its parent is increased by $ W^+(t_{b_2})=p_{b_2}$.
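To make these firing rules concrete, here is a small Python sketch (ours, purely illustrative and not part of the formal development; all class and function names are hypothetical). It encodes a state as a tree of threads carrying multiset markings and implements the three kinds of firings.
\begin{verbatim}
# Illustrative sketch only: a toy encoding of RPN states as trees of
# threads, following the operational semantics above.
from collections import Counter
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Thread:
    marking: Counter                 # current marking of the thread
    return_tokens: Counter           # W^+(t) of the abstract transition creating it
    children: list = field(default_factory=list)
    parent: Optional["Thread"] = None

def geq(m, w):
    """Firability test: W^-(t) <= m, componentwise."""
    return all(m[p] >= n for p, n in w.items())

def fire_elementary(v, w_minus, w_plus):
    assert geq(v.marking, w_minus)
    v.marking = v.marking - w_minus + w_plus

def fire_abstract(v, w_minus, w_plus, omega):
    """Consume W^-(t) in v and create a child initialised with Omega(t)."""
    assert geq(v.marking, w_minus)
    v.marking = v.marking - w_minus
    child = Thread(Counter(omega), Counter(w_plus), parent=v)
    v.children.append(child)
    return child

def fire_cut(v, w_minus):
    """Erase v and its subtree; return W^+(t) to its parent.
    Cutting the root yields the empty state (encoded as None)."""
    assert geq(v.marking, w_minus)
    if v.parent is None:
        return None
    v.parent.children.remove(v)
    v.parent.marking = v.parent.marking + v.return_tokens
    return v.parent
\end{verbatim}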
\begin{figure}[h] \begin{center} \input{firing_sequance} \end{center} \caption{Firing sequence for the RPN in Figure \ref{fig:rpn_example}} \label{fig:firing_example} \end{figure} \smallskip A \emph{firing sequence} is a sequence of transition firings, written in a detailed way: $s_0\xrightarrow{(v_1,t_1)}s_1\xrightarrow{(v_2,t_2)}\cdots\xrightarrow{(v_n,t_n)}s_n$, or when the context allows it, in a more concise way like $s_0\xrightarrow{\sigma} s_n$ for $\sigma=(v_1,t_1)(v_2,t_2)\dots(v_n,t_n)$. Let $\sigma \in T^*$ with $\sigma=t_1\ldots t_n$ and $v$ be a vertex, $(v,\sigma)$ is an abbreviation for $(v,t_1)\ldots(v,t_n)$. When we deal with several nets, we indicate by a subscript in which net, say $\mathcal N$, the firing sequence takes place: $s_0\xrightarrow{\sigma}_{\mathcal N} s_n$. Infinite firing sequences are similarly defined. In a firing sequence, a thread $v$ that has been deleted is \emph{never reused} (which is possible since $\mathcal V$ is countable). A thread is \emph{final} (respectively \emph{initial}) w.r.t. $\sigma$ if it occurs in the final (respectively initial) state of $\sigma$. We say that $v\in Des_{\sigma}(u) $ if there exists $ i\leq n $ such that $ v\in Des_{s_i}(u) $. We call $\sigma'$ a \emph{subsequence} of $\sigma$, denoted by $\sigma'\sqsubseteq \sigma$, if there exists $k$ indexes $ i_1, i_2 \dots i_k$ such that $ 1 \leq i_1<i_2<\dots i_k\leq n $ and $\sigma'=(v_{i_1},t_{i_1})(v_{i_2},t_{i_2})\dots (v_{i_k},t_{i_k})$. \begin{remark} In the sequel, when we write ``RPN $\mathcal N$'', we mean $\mathcal N=\left<P,T,W^+,W^-,\Omega\right>$, unless we explicitly write differently. An RPN $\mathcal N$ equipped with an initial state $s$ is a \emph{marked~RPN} and denoted $(\mathcal N,s)$. Similarly a \emph{marked Petri net} $(\mathcal N,m)$ is a Petri net $\mathcal N$ equipped with an initial marking $m$. \end{remark} For a marked RPN $(\mathcal N,s_0)$, let $Reach(\mathcal N,s_0)=\{\abst{s}\mid \exists\sigma\in T^* \text{ s.t. } s_0\xrightarrow{\sigma} s\}$ be its \emph{reachability set}, i.e. the set of all the reachable \emph{abstract} states. \subsection{An order for Recursive Petri Nets} We now define a quasi-order $\preceq $ on the states of an RPN. Given two states $ s,s' $ of an RPN $\mathcal N$, we say that $ s $ is \emph{smaller or equal} than $ s' $, denoted by $s\preceq s'$, if there exists a subtree in $ s' $, which is isomorphic to $ s $, where markings are greater or equal on all vertices and edges. \begin{definition} Let $s \neq \emptyset$ and $s'$ be states of an RPN $\mathcal N$. Then $ s\preceq s'$ if there exists an injective mapping $f$ from $V_s$ to $V_{s'}$ such that for all $v\in V_s$: \begin{enumerate}[nosep] % \item $M_{s}(v) \leq M_{s'}(f(v))$, and, % \item for all $v\xrightarrow{m}_sw$, there exists an edge $f(v) \xrightarrow{m'}_{s'}f(w)$ with $m\leq m'$. \end{enumerate} In addition, $\emptyset \preceq s$ for all states $s$. \noindent When $f(r_s)$ is required to be $r_{s'}$, one denotes this relation $s\preceq_r s'$ with $\emptyset \preceq_r s$ if and only if $s=\emptyset$. \end{definition} Figure~\ref{fig:order example} illustrates these quasi-orders. 
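As an illustration (again ours, not part of the paper), the definition above can be turned into a direct recursive check. The brute-force sketch below represents abstract states as pairs (marking, list of (edge label, substate)) with None for the empty state; it is exponential in the worst case, unlike the polynomial-time algorithm mentioned below.
\begin{verbatim}
# Illustrative sketch: brute-force test of s <= s' (and s <=_r s') for the
# quasi-orders defined above.  Markings and edge labels are dicts.
def leq_marking(m1, m2):
    return all(m2.get(p, 0) >= n for p, n in m1.items())

def embeds(s, t):
    """s <= t: s maps injectively into t, preserving edges, with larger markings."""
    if s is None:
        return True
    if t is None:
        return False
    return embeds_rooted(s, t) or any(embeds(s, sub) for _, sub in t[1])

def embeds_rooted(s, t):
    """s <=_r t: same requirement, but the root of s is mapped to the root of t."""
    if s is None:
        return t is None
    if t is None:
        return False
    (ms, cs), (mt, ct) = s, t
    return leq_marking(ms, mt) and match(cs, list(ct))

def match(children_s, children_t):
    """Injectively match every child of s to a distinct child of t."""
    if not children_s:
        return True
    (lab, sub), rest = children_s[0], children_s[1:]
    for i, (lab2, sub2) in enumerate(children_t):
        if leq_marking(lab, lab2) and embeds_rooted(sub, sub2):
            if match(rest, children_t[:i] + children_t[i + 1:]):
                return True
    return False
\end{verbatim}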
\begin{figure}[h] \begin{center} \input{comparing_states} \end{center} \caption{We have that $s\preceq s'$, but $s\not\preceq_r s'$ because the marking of the root of $s'$ is too small.} \label{fig:order example} \end{figure} While this is irrelevant for the results presented here, let us mention that checking whether $s\preceq s'$ can be done in polynomial time by adapting a standard algorithm for the subtree problem (see for instance~\cite{Stadel78}). \begin{restatable}{lemma}{partialorder} The relations $\preceq $ and $\preceq_r$ are quasi-orders. \end{restatable} \begin{proof} Let $ s,s',s''$ be states of an RPN $ \mathcal N $ with $s=(r,m_0,\{(m_i,s_i)\}_{1\leq i\leq k})$, $s'=(r',m'_0,\{(m'_i,s'_i)\}_{1\leq i\leq k'})$ and $s''=(r'',m''_0,\{(m''_i,s''_i)\}_{1\leq i\leq k''})$. Let us show that the relation $\preceq $ is a quasi-order. \begin{enumerate} % \item Reflexivity: the identity function $ Id$ on $V_s$ ensures that $s\preceq s$. % \item Transitivity: Given $ s\preceq s'\preceq s'' $, there exist two injective functions $f:V_s \rightarrow V_{s'}$ and $f':V_{s'} \rightarrow V_{s''}$. Let $ g: V_{s} \rightarrow V_{s''}$ be defined by $ g=f'\circ f $. Then $g$ is injective. For any edge $v\xrightarrow{m}_sw$, there exists an edge $f(v) \xrightarrow{m'}_{s'}f(w)$ with $m\leq m'$ and there exists an edge $f'(f(v)) \xrightarrow{m''}_{s''}f'(f(w))$ with $m\leq m'\leq m''$. For all $v\in V_s$, one has $M_s(v) \leq M_{s'}(f(v)) \leq M_{s''}(f'(f(v)))=M_{s''}(g(v)).$ Therefore $s \preceq s''$. \end{enumerate} The proof for the relation $\preceq_r$ is similar. \end{proof} Consider the equivalence relation $\simeq:=\preceq \cap \preceq^{-1}$. Given a set of states $A$, one denotes by $\bigslant{A}{\simeq}$ the quotient set by the equivalence relation $\simeq$. Observe that $s \simeq s'$ if and only if their abstract representations are equal and that $\simeq=\preceq_r \cap \preceq^{-1}_r$. A quasi-order $\leq$ on the states of an RPN is \emph{strongly compatible} (as in \cite{Alain01}) if for all states $s,s'$ such that $s\leq s'$ and for all transition firings $s\xrightarrow{(v,t)}s_1$, there exist a state $s'_1$ and a transition firing $s'\xrightarrow{(v',t')}s'_1$ with $s_1 \leq s'_1$. \begin{restatable}{lemma}{strongcompatibility} \label{lem:strongcompatibility} The quasi-orders $\preceq $ and $\preceq_r$ are strongly compatible. \end{restatable} \begin{proof} Let $s\preceq s'$, let $f$ be the mapping associated with the relation $\preceq$, and let $s\xrightarrow{(v,t)}s_1$.\\ Thus $s_v\xrightarrow{(v,t)}s_2$ for some $s_2$.\\ We will exhibit some $s'_1$ such that $s_1\preceq s'_1$ with some $f'$ as associated mapping.\\ Since $M_s(v) \leq M_{s'}(f(v))$, one has $s'_{f(v)}\xrightarrow{(f(v),t)}s'_2$ for some $s'_2$ and by induction $s'\xrightarrow{(f(v),t)}s'_1$ for some $s'_1$.\\ It remains to define $f'$. \begin{itemize}[nosep] % \item If $t \in T_{el}$ then $f'=f$; % \item If $t \in T_{ab}$ then for all threads $u$ of $s$, $f'(u)=f(u)$ and if $v^*$ (resp. $w^*$) is the thread created by the firing $(v,t)$ (resp. $(f(v),t)$) then $f'(v^*)=w^*$; % \item If $t \in T_{\tau}$ then $f'$ is equal to $f$ restricted to the remaining vertices. \end{itemize} It is routine to check that the inequalities between corresponding markings of $s_1$ and $s'_1$ are fulfilled. The proof for $\preceq_r$ is similar. \end{proof} These quasi-orders may contain an infinite set of incomparable states (i.e. an infinite \emph{antichain}).
For example, see Figure~\ref{fig:antichain} where any two states $s_i$ and $s_j$ are incomparable. \begin{figure}[h] \begin{center} \input{anitchain} \end{center} \caption{An RPN with an antichain of states} \label{fig:antichain} \end{figure} Indeed, for any $ i<j $: (1) $ s_j\not\preceq s_i$ since, as $ |V_{s_j}|>|V_{s_{i}}| $, there cannot be any injective function from $ V_{s_j}$ to $ V_{s_{i}} $, and (2) $ s_i\not\preceq s_j$ since for any injective function from $ V_{s_i}$ to $ V_{s_{j}} $, at least one of the edges with the marking $ p_r $ would be mapped to an edge with a marking $p_\ell$. Since $s\preceq_r s'$ implies $s\preceq s'$, this is also an antichain for $\preceq_r$. Observe also that these quasi-orders are not only strongly compatible. They are \emph{transition-preserving compatible}, meaning that for all states $s,s'$ such that $s\leq s'$ and for all transition firings $s\xrightarrow{(v,t)}s_1$, there exist $s'_1$ and a transition firing $s'\xrightarrow{(v',t)}s'_1$ with $s_1 \leq s'_1$. In Petri nets, the standard order on $\mathbb{N}^P$ is a well quasi-order which is transition-preserving compatible. The next proposition establishes that such a quasi-order does not exist for RPNs. \begin{proposition} There does not exist a well quasi-order on states of RPN which is transition-preserving compatible. \end{proposition} \begin{proof} Consider the net of Figure~\ref{fig:antichain} and the family of states $\{s_n\}_{n\geq 1}$. By a simple examination one gets that for all $n\geq 1$, $s_n\xrightarrow{(v_{n+1},\tau_{\ell})\ldots (v_{1},\tau_{\ell})(v_{0},\tau_{r})} \emptyset$. Moreover for all $n'\neq n$, there does not exist a firing sequence from $s_{n'}$ labelled by $\tau_{\ell}^{n+1}\tau_{r}$. Thus for any transition-preserving compatible quasi-order $\leq$, these states are pairwise incomparable, establishing that $\leq$ is not a well quasi-order. \end{proof} Since $\preceq$ is not a well quasi-order, RPNs equipped with the relation $\preceq$ are not well-structured transition systems (WSTS)~\cite{Alain01}, for which coverability is decidable. Therefore, to solve coverability, one needs to find another way. \section{Decision problems and reductions} \label{sec:reductions} In this section, we introduce the decision problems that we are going to solve and establish reductions to simpler problems in order to shorten the proofs of subsequent sections. Let $(\mathcal N,s_0)$ be a marked RPN and $s_f$ be a state of $\mathcal N$. \begin{itemize} % \item The \emph{cut problem} asks whether there exists a firing sequence $\sigma$ such that $s_0\xrightarrow{\sigma}\emptyset$. % \item The \emph{coverability problem} asks whether there exists a firing sequence $\sigma$ such that $s_0\xrightarrow{\sigma}s\succeq s_f$. % \item The \emph{termination problem} asks whether there exists an infinite firing sequence. % \item The \emph{finiteness problem} asks whether $Reach(\mathcal N,s_0)$ is finite. % \item The \emph{boundedness problem} asks whether there exists $B\in \mathbb{N}$ such that for all $s\in Reach(\mathcal N,s_0)$ and for all $v\in V_s$, one has $\max_{p\in P}M_s(v)(p) \leq B$. % \end{itemize} Observe that, contrary to Petri nets, the finiteness and boundedness problems are different and not equivalent. Indeed, an RPN can be bounded while its reachability set is infinite, due to the unbounded number of threads; a toy example is sketched below. We introduce the ``rooted'' version of the above problems: for these versions, $s_0$ is required to be some $s[r,m_0]$.
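To illustrate this difference, here is a small example of ours (not taken from the paper): an RPN with a single place $p$ and a single abstract transition $t$ with $W^-(t)=p$, $W^+(t)={\bf 0}$ and $\Omega(t)=p$, started in $s[r,p]$. Every reachable state is a chain of threads each carrying at most one token, so the net is bounded (with $B=1$), yet chains of every length are reachable and the reachability set is infinite. In the hypothetical toy firing sketch of the previous section:
\begin{verbatim}
# Our toy example: a bounded RPN whose reachability set is infinite.
from collections import Counter

root = Thread(Counter({"p": 1}), Counter())
w_minus, w_plus, omega = Counter({"p": 1}), Counter(), Counter({"p": 1})

v = root
for _ in range(5):              # each firing extends the chain by one thread
    v = fire_abstract(v, w_minus, w_plus, omega)
# Every marking stays below one token per place (boundedness), while each
# iteration produces a new, strictly deeper reachable state (infiniteness).
\end{verbatim}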
In order to establish a reduction from the general problems to their rooted versions, given a marked RPN $(\mathcal N,s_0)$, we build a marked RPN $(\rooted{\mathcal N},s[r,\rooted{m_0}])$ that simulates the former marked RPN. We do this by adding a place $p_v$ for every vertex $v\neq r$ of $s_0$ and an abstract transition $t_v$ that consumes a token from this place and creates a new vertex with initial marking $M_{s_0}(v)+\sum_{v\xrightarrow{m_{v'}}_{s_0} v'} p_{v'}$. This allows one to create the children of $v$ in $s_0$ (see Figure~\ref{fig:RPN_to_rooted_rpn}). In order to proceed similarly at the root, we let $\rooted{m_0}=M_{s_0}(r)+\sum_{r\xrightarrow{m_{v'}}_{s_0} v'} p_{v'}$. \begin{definition} Let $(\mathcal N,s_0)$ be a marked RPN. Then $(\rooted{\mathcal N},{\mathring{s_0}})$ is defined by: \begin{itemize}[nosep] % \item $\rooted{P} = P \cup P_V$ with $P_V:=\{p_v \mid v\in V_{s_0}\setminus\{r_{s_0}\}\}$ ; % \item $\rooted{T}_{ab}=T_{ab}\cup T_V$, $\rooted{T}_{\tau} = T_{\tau}$, $\rooted{T}_{el}=T_{el}$ with $ T_V=\{t_v \mid v\in V_{s_0}\setminus\{r_{s_0}\}\}$ ; % \item for all $t\in T$, one has $\rooted{W}^-(t)=W^-(t)$ and for all $t\in T_{ab}\cup T_{el}$, $\rooted{W}^+(t)=W^+(t)$ ; % \item for all $t_v\in T_V$ and $u\xrightarrow{m_v}_{s_0} v$, $\rooted{W}^-(t_v)=p_v$ and $\rooted{W}^+(t_v)=m_v$ ; % \item for all $t\in T_{ab}$, $\rooted{\Omega}(t) = \Omega(t)$ ; % \item for all $t_v\in T_V$, $\rooted{\Omega}(t_v)=M_{s_0}(v)+\sum_{v\xrightarrow{m_{v'}}_{s_0} v'} p_{v'}$ ; % \item ${\mathring{s_0}} = s[r,M_{s_0}(r_{s_0})+\sum_{r_{s_0}\xrightarrow{m_v}_{s_0} v} p_{v}]$. % \end{itemize} \end{definition} \begin{figure}[h] \input{rooted_rpn} \caption{From a marked RPN to a rooted one}\label{fig:RPN_to_rooted_rpn} \end{figure} Let $m\in \mathbb{N}^{\rooted{P}}$, we denote by $m_{|_{P}}\in\mathbb{N}^{P}$ the projection of $m$ on $P$. Let $s$ be a state of $\rooted{\mathcal N}$, we denote by $s_{|_{P}}$ the state of $\mathcal N$ obtained by projecting every marking of $s$ on $P$. \smallskip \noindent{\bf Observations.} \begin{enumerate}[nosep] % \item The encoding size of $(\rooted{\mathcal N},\rooted{s_0})$ is linear w.r.t. the encoding size of $(\mathcal N,s_0)$. % \item Let $e:=(v_i)_{0\leq i\leq k}$ be an enumeration of $V_{s_0}$ such that $v_0=r_{s_0}$ and for all $0< i\leq k$, $prd(v_i)\in \{v_j\}_{j<i}$. Consider $\sigma^e_{s_0} = (prd(v_i),t_{v_i})_{i=1}^{k}$. Such an enumeration is called \emph{consistent}. By construction of $\rooted{\mathcal N}$, ${\mathring{s_0}} \xrightarrow{\sigma^e_{s_0}}_{\rooted{\mathcal N}} s'_0$ with $s'_{0|P}=s_0$ and all places of $P_V$ unmarked in $s'_0$. % \item Let ${\mathring{s_0}}\xrightarrow{\sigma}_{\rooted{\mathcal N}}s$. Then by construction, for all $v\in V_{s_0} \setminus \{r_{s_0}\}$, there is at most one occurrence of $t_v$ in $\sigma$, which furthermore is fired in $prd(v)$. Moreover, since these firings consume tokens of $P_V$ that are not used by the firings of $T$, they can be pushed to the beginning of $\sigma$ (this prefix is denoted by $\sigma_1$) and completed by the firings of $T_V$ missing in $\sigma$ (denoted by $\sigma_2$), yielding a consistent enumeration $e$ with $\sigma^e_{s_0}=\sigma_1\sigma_2$. Summarizing, denoting by $\sigma_{| \mathcal N}$ the sequence $\sigma$ without the firings of $T_V$, one gets that: \\ (1) ${\mathring{s_0}}\xrightarrow{\sigma_1\sigma_{| \mathcal N}}_{\rooted{\mathcal N}}s$,\\ (2) ${\mathring{s_0}}\xrightarrow{\sigma\sigma_2}_{\rooted{\mathcal N}}s'$ and \\ (3) $s_0 \xrightarrow{\sigma_{| \mathcal N}}_{\mathcal N}s''$ with $s'_{|P}=s''$ and all places of $P_V$ are unmarked in $s'$.
% \end{enumerate} Due to observation~2, we immediately get that: \begin{lemma} \label{lemma:fromNtoroot} Let $(\mathcal N,s_0)$ be a marked RPN and $s_0\xrightarrow{\sigma}_{\mathcal N}s$. Then for every consistent enumeration $e$, there exists a firing sequence ${\mathring{s_0}}\xrightarrow{\sigma^e_{s_0}{\sigma}}_{\rooted{\mathcal N}}s'$ with ${s'}_{|_P} = s$ and all places of $P_V$ are unmarked in $s'$. \end{lemma} Due to observation~3, we immediately get that: \begin{lemma} \label{lemma:fromroottoN} Let $(\mathcal N,s_0)$ be a marked RPN and ${\mathring{s_0}}\xrightarrow{\sigma}_{\rooted{\mathcal N}}s$. Then there exist a consistent enumeration $e$ and a decomposition $\sigma^e_{s_0}=\sigma_1\sigma_2$ such that $\rooted{s_0}\xrightarrow{\sigma_1\sigma_{| \mathcal N}}_{\rooted{\mathcal N}}s$, ${\mathring{s_0}}\xrightarrow{\sigma\sigma_2}_{\rooted{\mathcal N}}s'$ and $s_0 \xrightarrow{\sigma_{| \mathcal N}}_{\mathcal N}s''$ with $s'_{|P}=s''$ and all places of $P_V$ are unmarked in $s'$. \end{lemma} Due to the previous lemmas, we get that: \begin{proposition}\label{col:rooted} The cut (resp. coverability, termination, finiteness, boundedness) problem is polynomially reducible to the rooted cut (resp. coverability, termination, finiteness, boundedness) problem. \end{proposition} \begin{proof} \noindent Let $(\mathcal N,s_0)$ be a marked RPN and $s_f$ be a state of $\mathcal N$. Define $\rooted{s}_f$ a state of $\rooted{\mathcal N}$ be as $s_f$ with in all markings of $s_f$, all places of $\rooted{P} \setminus P$ unmarked. \noindent $\bullet$ Assume that there exists $s_0\xrightarrow{\sigma}_{\mathcal N}\emptyset$. Then by Lemma~\ref{lemma:fromNtoroot}, ${\mathring{s_0}}\xrightarrow{\sigma^e_{s_0}{\sigma}}_{\rooted{\mathcal N}}\emptyset$. Assume that there exists ${\mathring{s_0}}\xrightarrow{\sigma}_{\rooted{\mathcal N}}\emptyset$ which means that the last transition is fired in the root and is a cut transition. Then by Lemma~\ref{lemma:fromroottoN}, $s_0 \xrightarrow{\sigma_{| \mathcal N}}_{\mathcal N}s''$ for some $s''$. Since the last firing of of $\sigma_{| \mathcal N}$ is the cut transition fired in the root $s''=\emptyset$. \noindent $\bullet$ Assume that there exists $s_0\xrightarrow{\sigma}_{\mathcal N}s\succeq s_f$. Then by Lemma~\ref{lemma:fromNtoroot}, ${\mathring{s_0}}\xrightarrow{\sigma^e_{s_0}{\sigma}}_{\rooted{\mathcal N}}\rooted{s}$ with $\rooted{s}_{|P}=s$. Thus $\rooted{s}\succeq \rooted{s}_f$. Assume that there exists ${\mathring{s_0}}\xrightarrow{\sigma}_{\rooted{\mathcal N}}s\succeq \rooted{s}_f$. Then by Lemma~\ref{lemma:fromroottoN}, there exists $\sigma_2$ a firing sequence of $T_V$ with ${\mathring{s_0}}\xrightarrow{\sigma\sigma_2}_{\rooted{\mathcal N}}s'$, $s_0 \xrightarrow{\sigma_{| \mathcal N}}_{\mathcal N}s''$ and $s'_{|P}=s''$. Since $\sigma_2$ only creates vertices and deletes tokens from $P_V$, $s'\succeq \rooted{s}_f$. Thus $s'' \succeq s_f$. \noindent $\bullet$ Assume that there exists $s_0\xrightarrow{\sigma}_{\mathcal N}$ with $\sigma$ infinite. Then by Lemma~\ref{lemma:fromNtoroot}, ${\mathring{s_0}}\xrightarrow{\sigma^e_{s_0}{\sigma}}_{\rooted{\mathcal N}}$. Assume that there exists ${\mathring{s_0}}\xrightarrow{\sigma}_{\rooted{\mathcal N}}$ with $\sigma$ infinite. Then by Lemma~\ref{lemma:fromroottoN}, $s_0 \xrightarrow{\sigma_{| \mathcal N}}_{\mathcal N}$ with $\sigma_{| \mathcal N}$ infinite since there are only a finite number of firings of $T_V$. \noindent $\bullet$ Assume that $Reach(\mathcal N,s_0)$ is infinite. 
For all $s \in Reach(\mathcal N,s_0)$, define $\rooted{s}$, a state of $\rooted{\mathcal N}$, as $s$ in which all places of $P_V$ are unmarked. Due to Lemma~\ref{lemma:fromNtoroot}, $\rooted{s}\in Reach(\rooted{\mathcal N},\rooted{s_0})$. Since this mapping is injective, $Reach(\rooted{\mathcal N},\rooted{s_0})$ is infinite. Assume that $Reach(\rooted{\mathcal N},\rooted{s_0})$ is infinite. Let $s \in Reach(\rooted{\mathcal N},\rooted{s_0})$. Due to Lemma~\ref{lemma:fromroottoN}, consider $s \xrightarrow{\sigma_2}_{\rooted{\mathcal N}}s'$ and $s_0 \xrightarrow{\sigma_{| \mathcal N}}_{\mathcal N}s''$ with $s'_{|P}=s''$ and all places of $P_V$ unmarked in $s'$. Thus $s''\in Reach(\mathcal N,s_0)$. The mapping from $s$ to $s''$ is not injective. However, the inverse image of $s''$ by this mapping is finite since there are a finite number of consistent enumerations and prefixes of such enumerations. Thus $Reach(\mathcal N,s_0)$ is infinite. \noindent $\bullet$ Assume that $(\mathcal N,s_0)$ is unbounded. For all $s \in Reach(\mathcal N,s_0)$, define $\rooted{s}$, a state of $\rooted{\mathcal N}$, as $s$ in which all places of $P_V$ are unmarked. Due to Lemma~\ref{lemma:fromNtoroot}, $\rooted{s}\in Reach(\rooted{\mathcal N},\rooted{s_0})$. Thus $(\rooted{\mathcal N},\rooted{s_0})$ is unbounded. Assume that $(\rooted{\mathcal N},\rooted{s_0})$ is unbounded. By construction, the marking of places in $P_V$ is bounded. Let $s \in Reach(\rooted{\mathcal N},\rooted{s_0})$. Due to Lemma~\ref{lemma:fromroottoN}, consider $s \xrightarrow{\sigma_2}_{\rooted{\mathcal N}}s'$ and $s_0 \xrightarrow{\sigma_{| \mathcal N}}_{\mathcal N}s''$ with $s'_{|P}=s''$ and all places of $P_V$ unmarked in $s'$. Thus $s''\in Reach(\mathcal N,s_0)$. Since every vertex $v$ of $s$ is also present in $s''$ with $M_s(v)(p)=M_{s''}(v)(p)$ for all $p\in P$, $(\mathcal N,s_0)$ is unbounded. \end{proof} Let $\sigma$ be a firing sequence. A thread is \emph{extremal} w.r.t. $\sigma$ if it is an initial or final thread. \begin{definition} Let $\mathcal N$ be an RPN. Then $T_{ret}\subseteq T_{ab}$, the set of \emph{returning transitions}, is defined by: $$T_{ret}=\{t\in T_{ab}\mid \exists \sigma\ s[r,\Omega(t)] \xrightarrow{\sigma}\emptyset\}$$ \end{definition} For all $t\in T_{ret}$, we define $\sigma_{t}$ to be some arbitrary shortest \emph{returning sequence} (i.e. $s[r,\Omega(t)] \xrightarrow{\sigma_t}\emptyset$). We now introduce $\widehat{\mathcal N}$, obtained from $\mathcal N$ by adding elementary transitions that mimic the behaviour of a returning sequence. Observe that the size of $\widehat{\mathcal N}$ is linear w.r.t. the size of $\mathcal N$. \begin{definition} Let $\mathcal N$ be an RPN. Then $\widehat{\mathcal N}=\left<P,\widehat{T},\widehat{W}^+, \widehat{W}^-,\widehat{\Omega}\right>$ is defined by: \begin{itemize}[nosep] % \item $\widehat{T}_{ab}=T_{ab}$, $\widehat{T}_{\tau} = T_{\tau} $ , $\widehat{T}_{el}=T_{el}\uplus \{t^r\mid t\in T_{ret} \}$; % \item for all $t\in T$, $\widehat{W}^-(t)=W^-(t)$ and for all $t\in T_{ab}\cup T_{el}$, $\widehat{W}^+(t)=W^+(t)$; % \item for all $t\in T_{ab}$, $\widehat{\Omega}(t)=\Omega(t)$; % \item for all $t\in T_{ret}$, $\widehat{W}^-(t^r)=W^-(t)$ and $\widehat{W}^+(t^r)=W^+(t)$. % \end{itemize} Figure~\ref{fig:RPN_hat} has an example of an RPN $\mathcal N$ and its $\widehat{\mathcal N}$.
\end{definition} \begin{figure}[h] \input{rpn_hat} \caption{From $\mathcal N$ to $\widehat{\mathcal N}$ and $\widehat{\mathcal N}_{el}$}\label{fig:RPN_hat} \end{figure} Note that since $\widehat{\mathcal N}$ enlarges $\mathcal N$ by adding transitions, and since any firing of $t^r$ in $\widehat{\mathcal N}$ can be replaced in $\mathcal N$ by the firing of $t$ followed by the returning sequence $\sigma_t$ in the newly created thread, we get: \begin{proposition} \label{prop:equivreach} Let $(\mathcal N,s_0)$ be a marked RPN. Then $Reach(\mathcal N,s_0)=Reach(\widehat{\mathcal N},s_0)$. \end{proposition} We call a firing sequence $\sigma$ \emph{omniscient} if any thread created during its firing is a final thread. \begin{proposition}\label{prop:omniciant} Let $(\mathcal N,s_0)$ be a marked RPN and $s_0\xrightarrow{\sigma}_{\mathcal N}s$. Then there exists a firing sequence $s_0\xrightarrow{\widehat{\sigma}}_{\widehat{\mathcal N}} s$ such that $\widehat{\sigma}$ is omniscient. \end{proposition} \begin{proof} Assume that we have an extremal thread $u$ which fires $ t\in T_{ab} $ creating a non final thread $ v $ that disappears by a matching cut transition $ (v,t_\tau)\in \sigma $ for $t_\tau\in T_\tau$. One builds $\sigma'$ by (1) deleting from $\sigma $ the firing $ (u,t) $, (2) deleting all the firings of the threads of $ Des_{\sigma}(v) $ in $\sigma$ except $(v,t_\tau)$, and (3) replacing the firing $ (v,t_\tau) $ by $(u,t^r)$. We claim that $s_0\xrightarrow{\sigma'}_{\widehat{\mathcal N}}s $. Indeed, the firing $(u,t^r)$ has the same incidence in $u$ as the firing $(u,t)$ followed by $(v,t_\tau)$ (`anticipating' $(v,t_\tau)$ only adds tokens in intermediate states) and the other deleted firings are performed by threads in $Des_{\sigma}(v) $ which do not exist anymore. Taking for $\widehat\sigma$ the sequence obtained by iterating this process, we get an omniscient sequence. \end{proof} In order to recover from a sequence in $\widehat{\mathcal N}$ a sequence in $\mathcal N$, for every $ t\in T_{ret}$ one has to simulate each firing of the transition $t^r$ by the firing of $t$ followed by the returning sequence $\sigma_t$. Therefore bounding the length of $\sigma_t$ is a critical issue. Recall that in~\cite{Rac78}, Rackoff showed that the coverability problem for Petri nets belongs to ${\sf EXPSPACE}$. More precisely, he proved that if there exists a covering sequence, then there exists a `short' one: \begin{theorem}[Rackoff \cite{Rac78}] \label{thm:Rackoff covering path}Let $\mathcal N$ be a Petri net, $m_{ini}$, $m_{tar}$ be markings and $\sigma$ be a firing sequence such that $m_{ini}\xrightarrow{\sigma}m\geq m_{tar}$. Then there exists a sequence $\sigma'$ such that $m_{ini}\xrightarrow{\sigma'}m'\geq m_{tar}$ with $\left|\sigma'\right|\leq 2^{2^{cn\log n}}$ for some constant $c$ and $n$ being the size of $(\mathcal N,m_{tar})$. \end{theorem} A surprising consequence of Rackoff's proof is that the length of the minimal coverability sequence does not depend on the initial marking of the net. \begin{proposition} \label{prop:Length of an abstract transition in RPN} Let $\mathcal N$ be an RPN and $t\in T_{ret}$. Then the returning sequence $\sigma_t$ fulfills $|\sigma_t|\leq 2^{2^{dn\log n}}$ for some constant $d$ and $n=size(\mathcal N)$. \end{proposition} \begin{proof} Let us enumerate $T_{ret}=\{t_1,\ldots,t_K\}$ in such a way that $i<j$ implies $|\sigma_{t_i}|\leq|\sigma_{t_j}|$. Observe first that a shortest returning sequence does not include the firing of an abstract transition that is not followed by a matching cut transition, since such a firing could be omitted: it only removes tokens from the thread.
We argue by induction on $k\leq K$ that: \[ |\sigma_{t_{k}}|<2^{k\cdot2^{cn\log n}} \qquad \mbox{where } c \mbox{ is the Rackoff constant} \] For $k=1$, we know that $\sigma_{t_{1}}$ has a minimal length over all returning sequences. Hence there are no cuts in $\sigma_{t_{1}}$ except the last one. Due to the above observation, $\sigma_{t_{1}}$ only includes firings of elementary transitions (apart from this final cut). Thus the Rackoff bound of Theorem~\ref{thm:Rackoff covering path} applies for a covering of some final marking. \noindent Assume that the result holds for all $i<k$. Due to the requirement on lengths, $\sigma_{t_{k}}$ only includes cuts from threads created by $t_{i}\in T_{ret}$ with $i<k$. Thus by Proposition~\ref{prop:omniciant} we get a sequence $\widehat{\sigma}_{t_{k}}\cdot(r,t_\tau)$ in $\widehat{\mathcal N}$ (where $r$ is the root and $ t_\tau\in T_\tau $). The sequence $ \widehat{\sigma}_{t_{k}} $ consists of only elementary transitions and does not contain any transition $t_i^r$ with $i\geq k$. The marking of $r$ reached by $ \widehat{\sigma}_{t_{k}} $ covers some final marking, hence by Theorem~\ref{thm:Rackoff covering path} there exists a covering sequence $\widehat{\sigma}_{t_{k}}'$ such that $|\widehat{\sigma}_{t_{k}}'|\leq2^{2^{cn\log n}}$. Since $\widehat{\sigma}_{t_{k}}$ does not contain any firing of $t_i^r$ with $i\geq k$, this also holds for $\widehat{\sigma}_{t_{k}}'$. Substituting every firing of $t_i^r$ by the firing of $t_i$ followed by $\sigma_{t_i}$, one gets a corresponding sequence $\sigma_{t_k}'$ in $\mathcal N$. Using the induction hypothesis, one gets that the length of $\sigma_{t_k}'$ fulfills: \[ |\sigma_{t_k}'|\leq|\widehat{\sigma}_{t_{k}}'|2^{(k-1)\cdot2^{cn\log n}}\leq2^{2^{cn\log n}}\cdot 2^{(k-1)\cdot2^{cn\log n}}\leq2^{k\cdot2^{cn\log n}} \] From the minimality of $\sigma_{t_{k}}$, one gets $|\sigma_{t_{k}}|\leq|\sigma_{t_k}'|\leq2^{k\cdot2^{cn\log n}}$ which concludes the proof since \[ \max_{t\in T_{ret}}\{ |\sigma_{t}|\}\leq2^{|T_{ret}| \cdot2^{cn\log n}}\leq2^{n2^{cn\log n}}\leq2^{2^{2cn\log n}}. \] \end{proof} Using the previous proposition, one can compute $T_{ret}$ in exponential space by enumerating, for every abstract transition $t$, all firing sequences from $s[r,\Omega(t)]$ of length at most the above bound and checking whether one of them leads to the empty tree. Below are immediate corollaries of the previous propositions: \begin{corollary}\label{col:bound_on_translation_of_N-hat_to_N} Let $\mathcal N$ be an RPN. Then for all $s\xrightarrow{\widehat{\sigma}}_{\widehat{\mathcal N}}s'$, there exists $s\xrightarrow{\sigma}_{\mathcal N}s'$ such that $|\sigma|\leq 2^{2^{dn\log n}} |\widehat{\sigma}|$ for some constant $d$ and $n=size(\mathcal N)$. \end{corollary} \begin{corollary}\label{col:build_N_hat} Given an RPN $\mathcal N$ one can build $\widehat{\mathcal N}$ in exponential space. \end{corollary} In order to mimic the behavior of a specific thread in a firing sequence (which will be useful later on), we introduce the Petri net $\widehat{\mathcal N}_{el}$. The size of $\widehat{\mathcal N}_{el}$ is also linear w.r.t. the size of $\mathcal N$. \begin{definition}\label{def:N_el} Let $\mathcal N$ be an RPN.
Then the Petri net $\widehat{\mathcal N}_{el}=\left<P,\widehat{T}_{el},\widehat{W}^+_{el},\widehat{W}^-_{el}\right>$ is defined by: \begin{itemize} % \item $\widehat{T}_{el}=\widehat{T}\setminus T_\tau$; % \item For all $t\in \widehat{T}_{el}\setminus T_{ab}$, $\widehat{W}^{-}_{el}(t)=\widehat{W}^{-}(t)$ and $\widehat{W}^{+}_{el}(t)=\widehat{W}^{+}(t)$; % \item For all $t\in T_{ab}$, $\widehat{W}^{-}_{el}(t)=\widehat{W}^{-}(t)$ and $\widehat{W}^+_{el}(t)=0$. \end{itemize} Figure~\ref{fig:RPN_hat} has an example of an RPN $\mathcal N$ and its $\widehat{\mathcal N}_{el}$. \end{definition} As for $\widehat{\mathcal N}$, one can build $\widehat{\mathcal N}_{el}$ in exponential space. \noindent{\bf Observation.} The main (straightforward) property of $\widehat{\mathcal N}_{el}$ is the following one. Let $\sigma\in \widehat{T}_{el}^*$ with $n_t$ the number of occurrences of $t$ in $\sigma$. Then $m_0\xrightarrow{\sigma}_{\widehat{\mathcal N}_{el}} m$ if and only if $s[r,m_0]\xrightarrow{(r,\sigma)}_{\widehat{\mathcal N}} s$ with $V_s=\{r\}\cup \bigcup_{t\in T_{ab}} \{v_{t,1},\ldots,v_{t,n_t}\}$, $M_s(r)=m$ and for all $v_{t,i}$, $r \xrightarrow{W^+(t)}_s v_{t,i}$ and $M_s(v_{t,i})=\Omega(t)$. \section{Termination is {\sf EXPSPACE}-Complete} \label{sec:termination} In this section we tackle the termination problem for RPN. Let $(\mathcal N,s_{0})$ be a marked RPN. We denote the size of the input of the termination problem by $\eta$. In~\cite{Rac78} Rackoff showed that the termination problem for Petri nets is solvable in exponential space: \begin{theorem}[Rackoff~\cite{Lipton76,Rac78}] \label{thm:Termination Bound For PN }The termination problem for Petri nets is ${\sf EXPSPACE}$-complete. \end{theorem} We aim to show that the termination problem for RPN is ${\sf EXPSPACE}$-complete. ${\sf EXPSPACE}$-hardness follows immediately from ${\sf EXPSPACE}$-hardness of the termination problem for Petri nets~\cite{Lipton76}. By Proposition~\ref{col:rooted} we can assume that $V_{s_0} = \{r\}$. Hence for the rest of the section, we will assume that $s_0 = s[r,m_0]$ for some marking $m_0$. \noindent A main ingredient of the proof is the construction of an \emph{abstract\ graph} related to the firing of abstract transitions. \begin{definition}[abstract\ graph] Let $(\mathcal N,s_0)$ be a marked RPN. Let $ G_{\mathcal N,s_{0}}=(V_{a},E_{a},M_{a}) $ be a labeled directed graph defined inductively as follows:\smallskip \begin{enumerate} % \item $r\in V_a$ and $M_{a}(r)=m_0$; % \item For any $v\in V_a$ and $t\in T_{ab}$, if there exists a firing sequence $s[v,M_{a}(v)] \xrightarrow{\sigma(v,t)}$ then $v_{t} \in V_a$, $(v,v_t)\in E_a$ and $M_{a}(v_t) = \Omega(t)$. \end{enumerate} \end{definition} Observe that an edge $(v,v_{t})$ means that from the state $s[v,M_a(v)]$, the thread $v$ can eventually fire $t$, and by induction that $v_t\in V_a$ if and only if $t$ can be fired along some firing sequence of the marked RPN. Observe also that the size of $ G_{\mathcal N,s_{0}}$ is linear w.r.t. the size of $(\mathcal N,s_0)$. \begin{lemma}\label{lem:abstract graph in expspace} Let $(\mathcal N,s_{0})$ be a marked RPN. Then one can build its abstract graph in exponential space. \end{lemma} \begin{proof} First note that $|V_a|\leq|T_{ab}|+1$.
Then for any vertex $v$ already in $V_a$ and any $t\in T_{ab}$ checking whether $s[v,M_{a}(v)] \xrightarrow{\sigma(v,t)}$ is fireable is equivalent to solving the covering problem $M_{a}(v)\xrightarrow{\sigma} m\succeq W^-(t)$ in $\widehat{\mathcal N}_{el}$ (recall Definition~\ref{def:N_el}) which can be done in exponential space due to Rackoff's coverability theorem for Petri nets. \end{proof} While we will not prove it, using a reduction from the Petri net coverability problem, one can show that we cannot use less than an exponential space to build the abstract graph. Let us illustrate the abstract\ graph in Figure~\ref{fig:at_graph} corresponding to the RPN of Figure~\ref{fig:rpn_example}. Here the initial state is $s[r,p_{ini}]$. For clarity, we have renamed the abstract transitions as follows: $ t:={t_{beg}}$, $ta:={t_{a_2}}$, $tb:={t_{b_2}}. $ For instance, the existence of the edge from $v_t$ to $v_{ta}$ is justified by the firing sequence $(v_t,t_{a_1})(v_t,ta)$. \begin{figure}[h] \begin{center} \input{abst_tran_graph} \end{center} \caption{An abstract graph for the RPN in Figure~\ref{fig:rpn_example} } \label{fig:at_graph} \end{figure} Let $\sigma$ be an infinite firing sequence. We say that $\sigma$ is \emph{deep}\ if it visits a state $s$ whose depth is strictly greater than $|T_{ab}|$. Otherwise, we say that $\sigma$ is \emph{shallow}. To solve the termination problem it suffices to show whether the RPN has such an infinite sequence, either shallow\ or deep. The next lemma establishes that \emph{lassos} of the abstract\ graph are witnesses of deep\ infinite sequences in an RPN: \begin{restatable}{lemma}{unboundedpath} \label{lem:Finidng unbounded path using at} Let $(\mathcal N,s_{0})$ be a marked RPN. Then there is a deep\ infinite sequence starting from $s_{0}$ if and only if there is a cycle in $G_{\mathcal N,s_{0}}$. \end{restatable} \begin{proof} $\bullet$ Assume that $ \sigma $ is a deep\ sequence. Hence, it reaches a state $\tilde{s}$ whose tree has a path $ \gamma $ starting from the root, with $|\gamma|> |T_{ab}|$. Let us denote it by $\gamma=(v_{i})_{i=1}^{m}$. For all $ i\leq m $ denote by $ t_i $ the abstract transition that creates $ v_i $. Using $ \gamma$, one builds a path $ \gamma_{a}=v_{1}v_{2}\ldots v_{m} $ in $G_{\mathcal N,s_{0}}$ as follows. First $ v_{1}=r $ and $ m_r= M_{a}(r) $. Since along $ \sigma $ the thread $ r $ fires $ t_1 $ to create $v_2$, there is an edge between $ r $ to $ v_{t_2} $ in $ G_{\mathcal N,s_{0}} $. For any $ 1< i\leq m $ the thread $ v_{i} $ is created with the marking $ \Omega(t_i)=M_{a}(v_{t_i}) $. Since $ v_{i+1} $ is a child of $ v_i $, somewhere on the sequence $ \sigma $ the thread $ v_i $ fires $ t_{i+1} $. Therefore there is an edge from $ v_{t_{i}} $ to $ v_{t_{i+1}} $ in $ G_{\mathcal N,s_{0}}$. The length of the path $ \gamma_a $ strictly greater then $ |T_{ab}|$, and since $V_{a}\leq|T_{ab}|+1$ there is a cycle in $\gamma_a$. \noindent $\bullet$ Conversely assume that there is a cycle in $G_{\mathcal N,s_{0}}$. Then there is an infinite path $\gamma_a=\{ v_{i}\} _{i=0}^{\infty}$ in $G_{\mathcal N,s}$ starting from $r$, where for any $i\geq1$ denote by $t_{i}$ the abstract transition associated the vertex $v_i$. We now translate this infinite path to an deep\ sequence on $\mathcal N$ with initial state $s_{0}$. Note that $v_{0}=r$ and that $m_r=M_{a}(r)$. By definition of $E_{a}$ there is a sequence $s \xrightarrow{\sigma_{1}}s_0'$ where the abstract transition $ t_1 $ is fireable from $v_{0}$ in $ s_0' $. 
We get $ s\xrightarrow{\sigma_{1}}s_0'\xrightarrow{(v_0,t_1)}s_{2}$. Denote by $ v_1 $ the thread created by $t_1$. The threads marking has $M_{s_1}(v_1)=M_{a}(v_1)$, therefore one continues translating the path $ \gamma_a $ in the same way as the first edge. Since for any $ (v_i,v_{i+1}) $ in $ \gamma_{a} $ we create a new thread from $ v_i $ one gets an deep\ sequence. \end{proof} We now show that for any shallow\ $\sigma$ there is a thread $v$ which fires infinitely many times in $\sigma$. \begin{restatable}{lemma}{infinitelymanytimes} \label{lem: thread repets infinitly many times} Let $(\mathcal N,s_0)$ be a marked RPN and $\sigma$ be a shallow\ sequence. Then there is a thread $v$ that fires infinitely many times in $\sigma$. \end{restatable} \begin{proof} If the root $r$ fires infinitely often then we are done. Otherwise, $r$ has finitely many children, and the firing subsequence of $\sigma$ of the subtree of (at least) one child, say $v$, must be infinite. If $v$ fires infinitely often then we are done. Otherwise, we proceed inductively up to $|T_{ab}|$ where some thread must fire infinitely often. \end{proof} We now show that given some state $s[r,m_0]$ one can check in exponential space the existence of a shallow\ sequence in which $r$ fires infinitely many times. \begin{restatable}{lemma}{sseqFromu} \label{lem:sseq from a given u} Let $(\mathcal N,s_0)$ be a marked RPN. Then one can check in exponential space, whether there exists an infinite sequence starting with $r$ firing infinitely many times. \end{restatable} \begin{proof} We first show that there is a sequence where $r$ fires infinitely many times if and only if there is a infinite firing sequence in the marked Petri net $(\widehat{\mathcal N}_{el},m_0)$. \noindent $\bullet$ Assume there exists such $\sigma$ in $(\mathcal N,s[r,m_0])$. Then the sequence $\sigma$ is also fireable in $(\widehat{\mathcal N},s[r,m_0])$. In $\widehat{\mathcal N}$, one eliminates in $\sigma$ the cut transitions by increasing occurrence order as follows. Let $(v,t)$ be a cut transition and $(v',t')$ be the firing that creates $v$. Then one deletes all the firings performed by the descendants of $v$ and replaces $(v',t')$ by $(v',t'^r)$. Let $\sigma'$ be the sequence obtained after this transformation. In $\sigma'$, the root still fires infinitely often since no firing performed by the root has been deleted (but sometimes substituted by an elementary firing). Moreover, $\sigma'$ has no more cut transitions. Consider the still infinite firing sequence $(r,\sigma'')$ where in $\sigma'$ all firings in other vertices than $r$ have been deleted. Observe now that by definition, $\sigma''$ is also an infinite sequence of $\widehat{\mathcal N}_{el}$. \noindent $\bullet$ Conversely, assume there exists an infinite firing sequence $\sigma$ of $(\widehat{\mathcal N}_{el},m_0)$. Then $(r,\sigma)$ is an infinite firing sequence of $(\widehat{\mathcal N},s[r,m_0])$ (with only root firings) entailing the existence of an infinite firing sequence of $(\mathcal N,s[r,m_0])$. \noindent By Theorem~\ref{thm:Termination Bound For PN }, one can check in exponential space whether there exists an infinite sequence of $(\widehat{\mathcal N}_{el},m_0)$. \end{proof} Summing up the results for shallow\ and deep\ sequences we get: \begin{theorem} \label{prop:Bounded path in EXPSPACE} The termination problem of RPN is in {\sf EXPSPACE}-complete. \end{theorem} \begin{proof} The algorithm proceeds as follows. 
It builds the abstract\ graph in {\sf EXPSPACE}\ (by Lemma~\ref{lem:abstract graph in expspace}) and checks whether there is a deep\ infinite sequence using the characterization of Lemma~\ref{lem:Finidng unbounded path using at}. In the negative case, it looks for a shallow\ infinite sequence. To this aim, it checks in exponential space, for every vertex $v$ reachable from $r$ in $G_{\mathcal N,s_{0}}$, whether there exists an infinite sequence starting from $s[v,M_{a}(v)]$ in which the root fires infinitely many times. The complexity follows from Lemma~\ref{lem:sseq from a given u}, while the correctness follows from Lemma~\ref{lem: thread repets infinitly many times}. \end{proof}
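To summarise the procedure operationally, the following Python-style sketch spells out the three steps of the algorithm above. The two oracles it takes as parameters, namely eventual fireability of an abstract transition from a marking (a coverability question in $\widehat{\mathcal N}_{el}$) and non-termination of a marked Petri net, stand for the exponential-space Rackoff-style procedures invoked in the proofs; they are assumed as black boxes, and all names below are ours, not part of the formal development.
\begin{verbatim}
# Schematic sketch (names ours) of the EXPSPACE termination test for a marked
# RPN with initial state s[r, m0].  The oracles can_fire_abstract(m, t) and
# petri_net_has_infinite_run(m) stand for the exponential-space Petri-net
# procedures on N_hat_el (coverability, termination); they are not implemented.

def build_abstract_graph(m0, abstract_transitions, Omega, can_fire_abstract):
    # Inductive construction: vertices are the root "r" (marking m0) and one
    # vertex per abstract transition t (marking Omega(t)); edge (v, t) iff t
    # is eventually fireable from the marking attached to v.
    markings, edges = {"r": m0}, {"r": set()}
    changed = True
    while changed:
        changed = False
        for v in list(markings):
            for t in abstract_transitions:
                if can_fire_abstract(markings[v], t):
                    if t not in markings:
                        markings[t], edges[t] = Omega[t], set()
                        changed = True
                    if t not in edges[v]:
                        edges[v].add(t)
                        changed = True
    return markings, edges

def has_cycle(edges, root="r"):
    # Depth-first search for a cycle reachable from the root vertex.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in edges}
    def dfs(v):
        color[v] = GRAY
        for w in edges[v]:
            if color[w] == GRAY or (color[w] == WHITE and dfs(w)):
                return True
        color[v] = BLACK
        return False
    return dfs(root)

def rpn_does_not_terminate(m0, abstract_transitions, Omega,
                           can_fire_abstract, petri_net_has_infinite_run):
    markings, edges = build_abstract_graph(m0, abstract_transitions, Omega,
                                           can_fire_abstract)
    # Deep infinite sequences exist iff the abstract graph has a cycle.
    if has_cycle(edges):
        return True
    # Otherwise, a shallow infinite sequence requires some vertex v from which
    # the elementary net N_hat_el admits an infinite run starting at M_a(v).
    return any(petri_net_has_infinite_run(markings[v]) for v in edges)
\end{verbatim}
The point of the lemmas above is precisely that the graph construction and both oracle calls fit in exponential space.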
\section{Introduction} \label{sec:introduction} Topological insulator (TI) materials have attracted a lot of attention over the recent years \cite{annurev-conmatphys-062910-140432,RevModPhys.83.1057,RevModPhys.82.3045}. Their unusual metallic surface electronic structure on an inverted bulk band gap and the time reversal (TR) topological protection of these states, which forbids the backscattering, make TIs very fascinating materials \cite{annurev-conmatphys-062910-140432,RevModPhys.83.1057, RevModPhys.82.3045,PhysRevLett.105.166803,Zhang2009,PhysRevB.94.041302}. Due to the advances in synthesis techniques\cite{Eremeev2012} and their simple mathematical \cite{PhysRevB.82.045122} and computational modeling\cite{Yang2012}, Bi$_2$Se$_3$-like materials have been referred as the ``hydrogen atom'' of the 3DTI\cite{Xia2009}. These systems have been proposed as platforms for spintronic devices based on the control of induced magnetic moment direction \cite{PhysRevLett.109.076801}, surface barriers\cite{PhysRevB.90.205431}, and single-atom magnetoresistance\cite{Awadhesh2015}. In addition to the metallic surface topological protected states in a insulating bulk, experiments find that Bi$_2$Se$_3$-like materials exhibit electronic scattering channels, attributed to the presence of bulk states near in energy to the Dirac point\cite{annurev-conmatphys-062910-140432,Zhang2009, PhysRevLett.107.056803}. These ubiquitous bulk states are believed to prevent the observation of the expected unusual electronic and transport properties governed by surface states in 3DTIs\cite{PhysRevLett.107.056803, Brahlek201554,deVries2017}. First principles $GW$ calculations for surface states~\cite{Louie2012,PhysRevB.92.201404,PhysRevB.93.205442} show that bulk states of Bi$_{2}$Se$_{3}$ thin films are shifted below the Dirac point, while this is not the case for Bi$_{2}$Te$_{3}$. In contrast, other bulk band structure calculations show that there is barely any energy separation between the Dirac point and the bulk valence band maximum~\cite{PhysRevB.93.205442,Nechaev2013a,Aguilera2013a}. This is at odds with recent experimental results \cite{deVries2017} that, by investigating Shubnikov-de Haas oscillations in this material, showed the coexistence of surface states and bulk channels with high mobility. In order to obtain insight on this problem and understand the experimentally observed magnetotransport properties of thin films of rhombohedral TI materials, one needs an effective model capable of describing both the topological surface states as well as the bulk ones over the whole Brillouin zone. In addition, the effective Hamiltonian has to account for the presence of external magnetic fields and be amenable to model disorder effects, which is beyond the scope of first principle methods. The main purpose of this paper is to put forward a tight-binding model that fulfills these characteristics. Based on symmetry properties and $\boldsymbol{k}\cdot\boldsymbol{p}$ perturbation theory, Zhang and collaborators \cite{PhysRevB.82.045122} derived a Dirac-like Hamiltonian model describing the low energy band structure around the $\Gamma$-point of Bi$_2$Se$_3$-like 3DTIs. Subsequently\cite{PhysRevB.84.115413}, a tight-binding effective model has been proposed to describe the Brillouin of these systems, realizing both strong and weak TIs. However, the basis set used in such works fails to account for bulk states in the energy vicinity of the Dirac point and, hence, their effect on the electronic properties. 
Here, we propose an effective tight-binding model that provides insight on the above mentioned bulk states close to the Fermi energy that potentially spoil the bulk-boundary duality. In the presence of disorder these states can mix with the surface ones, quenching the topological properties of the material. We also use our model to discuss some known mechanisms to cause an energy shift of the bulk states, such as, stacking faults~\cite{Seixas2013} and applying an external electric field ~\cite{PhysRevLett.105.266806}. This paper is organized as follows. In Sec.~\ref{sec:tight-binding}, we derive a tight-binding model for Bi$_2$Se$_3$-type 3DTI materials, that is based on their crystal structure symmetries and reproduces the bulk {\it ab initio} band structure calculations, thus describing the continuous bulk states near the Fermi level. In Sec.~\ref{sec:thin_films} we calculate the surface modes and discuss the microscopic origin of the bulk states in these materials. In Sec.~\ref{sec:shift} we study mechanisms to eliminate and/or shift the bulk states below the Fermi surface. Finally, we present our conclusions in Sec. \ref{sec:conclusions}. The paper also contains one Appendix containing a detailed technical description of the effective model and the tight-binding parameters for both Bi$_{2}$Se$_{3}$ andBi$_{2}$Te$_{3}$ compounds. \section{TIGHT BINDING EFFECTIVE MODEL} \label{sec:tight-binding} \begin{figure*} \includegraphics[width = 0.95\linewidth]{fig1.pdf} \caption{(Color online) (a) Bulk band structure of Bi$_{2}$Se$_{3}$. The color code stands for the projections of the $p_{z}$ Bi orbitals (red), $p_{x}p_{y}$ Se orbitals (blue), and $p_{z}$ Se orbitals (green) in the wave function. The maximum and local maxima of the valence band are denoted by VBM and VBM', respectively. Panels (b) and (c) give the $p_{z}$ and $p_{x}p_{y}$ contributions of the $J=1/2$ and $J=3/2$ bands, respectively.} \label{Int1} \end{figure*} We begin this section by reviewing the key symmetry arguments that allow one to obtain a simple effective tight-binding model for Bi$_{2}$Se$_{3}$-like 3DTIs. Next, we present the \textit{ab initio} electronic structure calculations on which our effective tight-binding model is based. The crystalline structure of Bi$_2$Se$_3$-like 3DTIs is formed by Quintuple-Layers (QL) characterized by $D_{3d}^{5}(R{\overline {3}}m)$ point group symmetries \cite{Zhang2009}. The Bi$_2$Se$_3$ QL unit cell is composed by two bismuth and three selenium atoms \cite{Zhang2009}. The QL-QL interaction is weak, mainly ruled by the Van der Waals-like interaction \cite{Zhang2009,PhysRevB.82.045122,Seixas2013}. This allows one to model each QL unit cell by a triangular lattice site. Following the approach presented in Ref.~\onlinecite{PhysRevB.82.045122}, the Bi$_2$Se$_3$ hexagonal unit cell is conveniently described by three triangular lattice layers stacked in the $z$ direction, instead of considering three QL unit cells. This simple model preserves the symmetries of the $D_{3d}^{5}(R\overline{3}m)$ point group, namely: \textit{i}) threefold rotation symmetry $R_3$ along the $z$ axis, \textit{ii}) twofold rotation symmetry $R_2$ along the $x$ axis, \textit{iii}) inversion symmetry $\mathcal{P}$, and \textit{iv}) time-reversal symmetry $\mathcal{T}$. It is well established \cite{PhysRevB.82.045122, Zhang2009} that the bulk wave function at the $\Gamma$ point can be accurately described by a set of few effective states $\{|\Lambda^{\tau}_{J},j_{z}\rangle\}$. 
Here, $\tau$ is the state parity, $J$ is the total angular momentum with projection $j_{z}$ on the $z$ axes, and $\Lambda$ labels the Bi and Se orbital contributions. We use these states to obtain an effective Hamiltonian that reproduces the bulk states of rhombohedral TIs calculated using {\it ab initio} methods. The first-principle calculations are performed within the Density Functional Theory (DFT) framework\cite{Capelle2006bird}, as implemented in the SIESTA code\cite{soler2002siesta}, considering the on-site approximation for the spin-orbit coupling\cite{fernandez2006site,PhysRevB.89.155438}. The Local Density Approximation (LDA)\cite{perdew1981self} is used for the exchange-correlation functional. Figure~\ref{Int1} summarizes our \textit{ab initio} results for Bi$_{2}$Se$_{3}$. The color code represents the contribution of the Bi and Se $p_{z}$ orbitals and the Se $p_{x}p_{y}$ atomic orbitals to the electronic structure. The main orbital contributions are associated with $p$ orbitals corresponding to $J=3/2, 1/2$ and $j_{z}=\pm3/2, \pm1/2$ states (Fig.~\ref{Int1}a). To conserve the total angular momentum the $|\Lambda^{\pm}_{3/2},\pm 3/2\rangle$ effective states must be a linear combination of $p_x$ and $p_y$ orbitals, whereas the $|\Lambda^{\pm}_{J},\pm 1/2\rangle$ states correspond to a linear combination of all $p$ orbitals (Fig. \ref{Int1}b and Fig. \ref{Int1}c). The symmetry properties of the $|\Lambda^{\tau}_{J},j_{z}\rangle$ states are discussed in Appendix A. The bulk Valence Band Maximum (VBM) is located along the $Z\rightarrow F$ symmetry path, as shown in Fig.~\ref{Int1}a. In addition, one finds two local maxima, denoted by VBM', along the $F\rightarrow\Gamma$ and $\Gamma\rightarrow L$ lines, both close to the $\Gamma$-point. In line with previous results\cite{PhysRevB.84.115413}, we observe that both VBM and VBM' have a strong $p_{z}$ Se orbital character. However, we find that the so far neglected $p_{x}p_{y}$ orbitals play a key role for an accurate description of the orbital composition of the valence band maxima, as we discuss below. Along the $\Gamma\rightarrow Z$ symmetry line, the $R_{3}$ symmetry is preserved. Thus, the $|\Lambda_{1/2},\pm1/2\rangle$ and $|\Lambda_{3/2},\pm3/2\rangle$ effective states do not mix. In contrast, in the $\Gamma\rightarrow L$ and $\Gamma\rightarrow F$ paths the $R_{3}$ symmetry is broken. This allows for the hybridization of $p_z$ atomic orbitals with $p_{x}$ and $p_{y}$ ones. We find that this hybridization can be rather large, as clearly shown by Figs.~\ref{Int1}b and \ref{Int1}c, where we present the Se orbital composition of the $J=1/2$ and $J=3/2$ bands along the Brillouin zone. Since the valence band maxima do not belong to the $\Gamma\rightarrow Z$ symmetry line, their orbital composition is a superposition of all $p$ Se-atomic orbitals. As a consequence, a minimal Hamiltonian aiming to effectively describe VBM and VBM' needs to take into account the states associated with the $p_{x}$ and $p_{y}$ orbitals, instead of including just the states with $p_z$ character \cite{PhysRevB.82.045122,PhysRevB.84.115413}. To calculate the surface electronic structure in the presence of surface projected bulk states, we consider a tight-binding model with eight states, namely, the $|\text{Se}^{-}_{1/2},\pm 1/2\rangle$ and $|\text{Bi}^{+}_{1/2},\pm 1/2\rangle$ states responsible for the band inversion, and $|\text{Se}^{-}_{3/2},\pm 3/2\rangle$ and $|\text{Se}^{+}_{3/2},\pm 3/2\rangle$ that dominate the most energetic $J=3/2$ band. 
Using this basis, we write the 8$\times$8 Hamiltonian: \begin{equation} \mathcal{H}(\boldsymbol{k}) = \left(\begin{array}{cc} \mathcal{H}_{1/2}(\boldsymbol{k}) & \mathcal{H}_{\rm int}(\boldsymbol{k})\\ \mathcal{H}_{\rm int}^{\dagger}(\boldsymbol{k}) & \mathcal{H}_{3/2}(\boldsymbol{k})\\ \end{array}\right), \label{eq1} \end{equation} where $\mathcal{H}_{1/2}(\boldsymbol{k})$ is the standard 4$\times$4 Hamiltonian discussed in the literature \cite{PhysRevB.82.045122,PhysRevB.84.115413}, that considers only $|{\rm Bi}^{+}_{1/2},\pm 1/2\rangle$ and $|{\rm Se}^{-}_{1/2},\pm 1/2\rangle$ states \footnote{We note that Ref.~\onlinecite{PhysRevB.82.045122} presents an $8\times 8$ Hamiltonian, which is slightly different from ours, but does not explore its consequences of the additional bands. The focus of this seminal paper is the study of $\mathcal{H}_{1/2}({\bm k})$. } Our model introduces $\mathcal{H}_{3/2}(\boldsymbol{k})$, a 4$\times$4 Hamiltonian associated with the $|{\rm Se}^{-}_{3/2},\pm 3/2\rangle$ and $|{\rm Se}^{+}_{3/2},\pm 3/2\rangle$ states, and $\mathcal{H}_{\rm int}(\boldsymbol{k})$ the corresponding coupling term. For a given total angular momentum $J$ the matrix elements in $\mathcal{H}(\boldsymbol{k})$ read \begin{equation} [\mathcal{H}(\boldsymbol{k})]_{ii^\prime}=\varepsilon_{ii^\prime}(\boldsymbol{k})\delta_{ii^\prime}+ \sum_{\nu}\left(t^{ii^\prime}_{\boldsymbol{a}_{\nu}}e^{i\boldsymbol{k}\cdot\boldsymbol{a}_{\nu}} +t^{ii^\prime}_{\boldsymbol{b}_{\nu}}e^{i\boldsymbol{k}\cdot\boldsymbol{b}_{\nu}}\right), \label{matrixelements} \end{equation} where the states are labeled by $i=(\Lambda,J,\tau,j_{z})$, $\varepsilon_{ii}(\boldsymbol{k})$ are on-site energy terms, and $t^{ii^\prime}_{\bm c}=\langle{\bm n},\Lambda^{\tau}_{J},j_{z}|H| \boldsymbol{n}+{\bm c},\Lambda^{'\tau'}_{J'},j_{z}'\rangle$ are the corresponding nearest neighbor QL hopping terms, with $\boldsymbol{n}_\nu$ and $\tau$ indicating lattice site and orbital parity, respectively. Here ${\bm c} = {\bm a}_\nu$ or ${\bm b}_\nu$, where $\pm\boldsymbol{a}_{\nu}$ stands for the 6 intra-layer nearest neighbor vectors of each triangular lattice, namely, ${\bm a}_{1}=(a,0,0), \boldsymbol{a}_{2}=(-a/2,\sqrt{3}a/2,0), \boldsymbol{a}_{3}=(-a/2,-\sqrt{3}a/2,0)$, while $\pm\boldsymbol{b}_{\nu}$ denotes the 6 inter-layer nearest neighbors vectors, ${\bm b}_{1}=(0,\sqrt{3}a/3,c/3), {\bm b}_{2}=(-a/2,-\sqrt{3}a/6,c/3), {\bm b}_{3}=(a/2,-\sqrt{3}a/6,c/3)$ with $a = 4.14$ \AA~and $c = 28.70 $ \AA~\cite{Zhang2009}. \begin{figure} \includegraphics[width = 0.95\columnwidth]{fig2.pdf} \caption{(Color online) Comparison between the DFT (gray solid lines) and the tight-binding model (red dotted lines) bulk band structure of Bi$_{2}$Se$_{3}$. } \label{fig:TB_bulk} \end{figure} Exploring the system symmetries, we find constraints relating the nearest neighbors QL hopping terms $t^{ij}_{\bm c}$, thereby reducing the total number of possible hopping terms from 432 to 30 independent ones (see Appendix \ref{sec:appA}). The corresponding 30 tight-binding parameters are determined by fitting the tight-binding model bulk band structure to the one calculated with DFT, shown in Fig~\ref{fig:TB_bulk}. We present the complete Hamiltonian and provide more details on the fitting procedure in Appendix \ref{sec:appA}. 
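To make the structure of Eq.~\eqref{matrixelements} concrete, the short Python sketch below assembles the Bloch Hamiltonian from nearest-neighbour hopping blocks using the lattice vectors quoted above. The on-site and hopping matrices are random placeholders, not the 30 fitted parameters of Appendix~\ref{sec:appA}, and we take the hoppings along $-\boldsymbol{a}_{\nu}$, $-\boldsymbol{b}_{\nu}$ to be the Hermitian conjugates of those along $+\boldsymbol{a}_{\nu}$, $+\boldsymbol{b}_{\nu}$, so that $\mathcal H(\boldsymbol{k})$ is Hermitian by construction; only the geometry and the phase factors are taken from the text.
\begin{verbatim}
import numpy as np

# Lattice geometry from the text (in angstrom)
a, c = 4.14, 28.70
a_vecs = np.array([[ a,    0.0,              0.0],
                   [-a/2,  np.sqrt(3)*a/2,   0.0],
                   [-a/2, -np.sqrt(3)*a/2,   0.0]])     # intra-layer neighbours
b_vecs = np.array([[ 0.0,  np.sqrt(3)*a/3,   c/3],
                   [-a/2, -np.sqrt(3)*a/6,   c/3],
                   [ a/2, -np.sqrt(3)*a/6,   c/3]])     # inter-layer neighbours

dim = 8                          # size of the effective |Lambda_J, jz> basis
rng = np.random.default_rng(0)
eps0 = np.diag(rng.normal(size=dim))                     # placeholder on-site term
t_a  = [rng.normal(size=(dim, dim)) for _ in range(3)]   # placeholder t_{a_nu}
t_b  = [rng.normal(size=(dim, dim)) for _ in range(3)]   # placeholder t_{b_nu}

def bloch_hamiltonian(k):
    """H(k) = eps + sum_nu [t_{a_nu} e^{i k.a_nu} + t_{b_nu} e^{i k.b_nu} + h.c.]."""
    H = eps0.astype(complex).copy()
    for vec, t in list(zip(a_vecs, t_a)) + list(zip(b_vecs, t_b)):
        phase = np.exp(1j * np.dot(k, vec))
        H += t * phase + t.conj().T * np.conj(phase)     # +vec and -vec neighbours
    return H

k = np.array([0.1, 0.0, 0.05])                  # some k-point (1/angstrom)
H = bloch_hamiltonian(k)
assert np.allclose(H, H.conj().T)               # Hermitian by construction
bands = np.linalg.eigvalsh(H)                   # eight bands at this k-point
\end{verbatim}
With the fitted parameters of Appendix~\ref{sec:appA} in place of the placeholders, sampling $\mathcal H(\boldsymbol{k})$ along the symmetry lines of Fig.~\ref{fig:TB_bulk} gives the tight-binding bands shown there.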
The proposed Hamiltonian captures the low-energy {\it ab initio} band dispersion, even for $k$-points far from $\Gamma$, overcoming an intrinsic limitation of the ${\bm k}\cdot {\bm p}$ models proposed in the literature to describe the band inversion at the $\Gamma$ point. We show in the Appendix \ref{sec:appA} how to reduce our model to a ${\bm k}\cdot {\bm p}$ Hamiltonian by taking the approximation $k\rightarrow\Gamma$ and relating, for instance, the hopping terms $t^{ii^\prime}_{\boldsymbol{a}_{\nu}}$ and $t^{ii^\prime}_{\boldsymbol{b}_{\nu}}$ to the perturbation theory parameters of Ref.~\onlinecite{PhysRevB.82.045122}. The inclusion of additional bands does not affect the band inversion, for instance, the $J=3/2$ bands have much lower energies than the $J=1/2$ bands. \section{Thin films} \label{sec:thin_films} In this section we calculate the electronic band structure of rhombohedral TI thin films. We take the QLs parallel to the $xy$-plane and define the $z$-axis as the stacking direction. The thickness of the films is given in terms of $N_{\rm QL}$, the number of stacked QLs. The surface corresponds to the outermost QLs. The surface states correspond to the ones spatially localized in these QLs. We modify the bulk tight-binding Hamiltonian defined in Eq.~(\ref{eq1}) to account for a finite number of layers. The slab Hamiltonian consists of intra- and inter-layer terms, namely \cite{Ebihara2012885} \begin{equation} \mathcal{H}_{\rm slab}= \sum_{n=1}^{N_{\rm QL}}{c}^{\dagger}_{n}\mathcal{H}^{}_{0}{c}_{n}^{}+ \sum_{n=1}^{N_{{\rm QL}}-1}\left({c}^{\dagger}_{n}\mathcal{H}^{}_{z}{c}^{}_{n+1}+ \rm {H.c.}\right). \label{Slab} \end{equation} The basis is given by $|n, k_{x},k_{y},\Lambda^{\tau}_{J},j_{z}\rangle$ with corresponding creation (annihilation) operators given in compact notation by $c^{\dagger}_{n}$ ($c_{n}^{}$). The intra-layer matrix elements read \begin{equation} [\mathcal{H}_{0}({\bm k})]_{ii^\prime}=\varepsilon_i({\bm k})\delta_{ii^\prime}+ \sum_{\nu=1}^{6}t^{ii^\prime}_{\boldsymbol{a}_{\nu}}e^{i\boldsymbol{k}\cdot\boldsymbol{a}_{\nu}}. \label{eq:interlayer} \end{equation} The latter are similar to those of Eq.~\eqref{matrixelements}, but restricted to two-dimensions, namely, ${\bm k}= (k_x, k_y)$. In turn, the inter-layer term, \begin{equation} [\mathcal{H}_{z}]_{ii^\prime}=\sum_{\nu}t^{ii^\prime}_{\boldsymbol{b}_{\nu}}, \label{eq:intralayer} \end{equation} provides the coupling between nearest neighbor QLs planes. It is well established that a bulk band inversion occurs between states dominated by $p_{z}$ Se and Bi atomic orbitals with different parities \cite{Zhang2009}. The four states $|\text{Se}^{-}_{1/2},\pm 1/2\rangle$ and $|\text{Bi}^{+}_{1/2},\pm 1/2\rangle$ form a good basis to describe the surface states at the $k$-points near the $\Gamma$ point \cite{PhysRevB.82.045122,PhysRevB.84.115413}. However, similarly to bulk systems, this reduced basis also fails to correctly describe the bulk states close in energy to the Dirac point in thin Bi$_2$Se$_3$ films. \begin{figure}[h!] \includegraphics[width = 0.9\linewidth]{fig3.pdf} \caption{Band structure along the $\Gamma\rightarrow M$ symmetry line, {without considering $J=3/2$-states}, for different film thicknesses of Bi$_{2}$Se$_{3}$.} \label{fig:films_reducedH} \end{figure} To better understand the importance of the $J=3/2$ states, let us first consider a thin film described by the Hamiltonian $\mathcal{H}_{1/2}({\boldsymbol{k}})$ projected out from $\mathcal{H}_{\rm slab}$. 
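In practice, the thin-film spectra discussed next follow from diagonalising the block-tridiagonal matrix \eqref{Slab} at each in-plane momentum. A minimal sketch of this assembly, with placeholder $8\times8$ blocks standing in for Eqs.~\eqref{eq:interlayer} and \eqref{eq:intralayer}, reads:
\begin{verbatim}
import numpy as np

def slab_hamiltonian(H0, Hz, n_ql):
    """Block-tridiagonal slab Hamiltonian: H0 on the diagonal (intra-QL),
    Hz / Hz^dagger on the off-diagonals (nearest-neighbour QLs)."""
    d = H0.shape[0]
    H = np.zeros((d * n_ql, d * n_ql), dtype=complex)
    for n in range(n_ql):
        H[n*d:(n+1)*d, n*d:(n+1)*d] = H0
        if n < n_ql - 1:
            H[n*d:(n+1)*d, (n+1)*d:(n+2)*d] = Hz
            H[(n+1)*d:(n+2)*d, n*d:(n+1)*d] = Hz.conj().T
    return H

# Placeholder blocks at one fixed (kx, ky); H0 is made Hermitian by hand.
rng = np.random.default_rng(1)
M   = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
H0  = M + M.conj().T                 # intra-layer block
Hz  = rng.normal(size=(8, 8))        # inter-layer block
bands_6ql = np.linalg.eigvalsh(slab_hamiltonian(H0, Hz, n_ql=6))
\end{verbatim}
Within the same sketch, the electric-field and stacking-fault modifications of Sec.~\ref{sec:shift} amount to shifting individual diagonal blocks (and, for stacking faults, the affected $\mathcal H_{z}$ blocks).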
Figure \ref{fig:films_reducedH} shows the finite size effects and how the band structure is modified by increasing $N_{\rm QL}$\cite{Ebihara2012885}. For $N_{\rm QL}\geqslant 3$ one clearly observes the appearance of surface states and the formation of a Dirac cone. For $N_{\rm QL}\gg 1$ the bulk band gap is recovered. We stress that without the $J=3/2$ states, the model does not show VBM bulk states close to the Fermi level, as expected from the analysis of bulk band structure (see, for instance, Fig.~\ref{Int1}). Moreover, within this simple model the band structure close to the Dirac point along the $\Gamma\rightarrow K$ and $\Gamma\rightarrow M$ paths are identical, which is a rather unrealistic symmetry feature. \begin{figure}[h!] \includegraphics[width= 0.95\linewidth]{fig4.pdf} \caption{(Color online) Band structure of a Bi$_{2}$Se$_{3}$ thin film for different thickness values, $N_{\rm QL} = 1,2,3,6,9,$ and 32, using our 8$\times$8 tight-binding effective model.} \label{fig:bs-varyingQLs} \end{figure} The $J=3/2$ states modify significantly the electronic band structure. Figure \ref{fig:bs-varyingQLs} summarizes the results we obtain for the 8$\times$8 total effective Hamiltonian, Eq. \eqref{eq1}. Even for a few QLs, the shape of the surface band structure reproduces the qualitative behavior observed in the bulk LDA-DFT calculations. Figure \ref{fig:bs-varyingQLs} shows that as $N_{\rm QL}$ is increased, the Dirac cone is formed and bulk states appear in the vicinity of the Fermi level turning the system into a metal. \section{Application: Bulk states engineering} \label{sec:shift} Several strategies have been proposed and used to suppress the scattering channels associated with the continuous bulk states, like for instance, alloy stoichiometry \cite{ZhangJinsong2011,Arakane2012,PhysRevB.84.165311,Abdalla2015}, application of an external electric field\cite{PhysRevLett.105.266806}, stacking faults\cite{Seixas2013}, and strain\cite{LiuY2014,nanolett5b00553}. Let us now use the effective model put forward in the previous section to discuss some of these known strategies to shift the bulk band states away from the Dirac point energy, defined as $\varepsilon = 0$. Our analysis is based on the observation that the energy of the bulk states along the $\Gamma\rightarrow M$ symmetry path depends very strongly on the in-plane interaction between $|{\rm Se}^{-}_{1/2},\pm 1/2\rangle$ and $|{\rm Se}^{+}_{3/2},\pm 3/2\rangle$ states. We find that by increasing the matrix elements associated with the mixture of the above states the bulk states are shifted up in energy, as shown by Fig.~\ref{fig:Se-Se_hopping}. \begin{figure}[h!] \includegraphics[width= 0.9\linewidth]{fig5.pdf} \caption{(Color online) Surface band structure for $N_{\rm QL} =20$ calculated using the eight bands effective model with hopping term (repulsion parameter) (a) $t^{ij}_{b}= 1.2$ eV, (b) $t^{ij}_{b}= 1.5$ eV, (c) $t^{ij}_{b}= 1.8$ eV, and (d) $t^{ij}_{b}= 2.1$ eV, where $i=(\mbox{Se}_{1/2}^{-},\pm1/2)$, and $j=(\mbox{Se}_{3/2}^{+},\pm3/2)$. The color code stands for the magnitude of the projection of the orbitals at the outermost (surface) QLs. 
Pure surface states are indicated by blue, whereas bulk states are depicted in red.} \label{fig:Se-Se_hopping} \end{figure} Hence, as previously proposed \cite{Abdalla2015}, one way to engineer the VBM and VBM$^\prime$ states is by substituting the Se atoms by chemical elements that do not spoil the topological properties of the material and reduce the interaction between $|{\rm Se}^{-}_{1/2},\pm 1/2\rangle$ and $|{\rm Se}^{+}_{3/2},\pm 3/2\rangle$ states. This effect can be described by a simple model in terms of the direct modification of the matrix element $t^{ij}_{b}$ that mixes the $|{\rm Se}^{-}_{1/2},\pm 1/2\rangle$ and $|{\rm Se}^{+}_{3/2},\pm 3/2\rangle$ states. In fact, the band structures obtained for several values of $t^{ij}_{b}$ shown in Fig.~\ref{fig:Se-Se_hopping} qualitatively describe the first-principles calculations for Bi$_2$(Se$_{1-x}$S$_{x}$)$_{3}$ alloys \cite{Abdalla2015}. Alternatively, the double degenerate surface-state bands due to the presence of two [111] cleavage surfaces in a slab geometry can be removed by applying a perpendicular electric field $E_{0}\hat{z}$ \cite{PhysRevLett.105.266806}. The Dirac cone associated with the surface at the highest potential energy can be shifted above the VBM, leading to a suppression of the scattering channels between the topologically protected metallic surface states and the bulk states. We describe this effect using our tight-binding effective model by modifying the on-site term $\varepsilon(\boldsymbol{k})\delta_{ij}$ in the inter-layer matrix elements associated with each QL. As a result, Eq.~\eqref{eq:interlayer} becomes \begin{equation} [\mathcal{H}_{0}(k_{x}, k_{y})]_{n,ij}=\tilde{\varepsilon}_{n}(\boldsymbol{k})\delta_{ij}+ \sum_{\nu}t^{ij}_{\boldsymbol{a}_{\nu}}e^{i\boldsymbol{k}\cdot\boldsymbol{a}_{\nu}}, \end{equation} where $\tilde{\varepsilon}_{n}(\boldsymbol{k})=\varepsilon(\boldsymbol{k})+ nceE_{0}/N_{QL}$, $n$ is the layer index, and $e$ is the electron charge. This simple approach captures the shift of the Dirac cone located at the surfaces corresponding to the QL with $n=N_{QL}$ and $n=0$. Figure \ref{Fig6}a show the effect of an electric field of $E = 5\times 10^{-3}$ V/\AA~ on a thin film of $N_{\rm QL} = 9$. Another band engineering strategy has been suggested by \textit{ab-initio} atomistic investigations on the role played by extended defects, like stacking faults, on the structural and electronic properties of 3D topological insulators~\cite{Seixas2013}. In $R\overline{3}m$ structures the typical stacking is a ABCABC configuration, that is, each QL is rotated with respect to its adjacent QL by 120$^{o}$. When a QL is ``removed" leading to a ACABCA, ABABCA, or ABCBCA stacking configuration, the defects is called an intrinsic stacking fault. The inter-QLs distance decreases as a consequence of these stacking faults, making the Van der Waals inter-QLs interaction weaker and changing the on-site potential of the QLs in which the structural defect is located~\cite{Seixas2013}. Thus, it is relatively easy to account for this effect within our model, namely, we rewrite the on-site energy and the inter band interaction as $\varepsilon_{n}(\boldsymbol{k})-\delta\varepsilon_{0}$ and $t^{ij}_{\boldsymbol{b}_{\nu}} -\delta t$. Stacking faults nearby the surface layers of Bi$_2$Se$_3$ give rise to a positive energy shift of the bulk states with respect to their energy in a pristine system\cite{Seixas2013}. This shift is typically about 75 meV. 
Thus, we obtain a qualitative description of the stacking faults effect by fitting $\delta\varepsilon_{0}$ and $\delta t$ to the DFT results only for the QLs with this structural defect, see Fig.~\ref{Fig6}. Our simplified model and description allows for the study of thin films with a large number of QLs. \begin{figure} \includegraphics[width=8.6cm]{fig6.pdf} \caption{(Color online) Electric field (left) and stacking faults (right) effect of the band structure of a Bi$_2$Se$_3$ thin film of 9QLs. The splitting between the Dirac cones associated with different surfaces is represented by the arrow. The color code quantifies the surface/bulk character of the electronic states, see caption of Fig. \ref{fig:Se-Se_hopping}.} \label{Fig6} \end{figure} \section{Conclusions} \label{sec:conclusions} We have revisited the band structure calculations of rhombohedral topological insulators, both bulk and thin films, and investigated the occurrence of bulk states at the Fermi level. Based on \textit{ab initio} calculations, we construct a simplified tight-binding model considering the states with angular momentum $J=1/2$ and $J=3/2$ and therefore, taking explicitly into account the $p_{x}p_{y}$ Se orbitals contributions. Our model shows that the energy of bulk states near the Dirac-point is associated with a band mixing, which is mainly ruled by the hopping term between $p_{z}$ and $p_{x}p_{y}$ states. The valence band maximum appears in the symmetry path in which the $R_{3}$ symmetry is broken. In this situation, the $J=3/2$ states can mix with the $J=1/2$ ones. We illustrate the versatility of our tight-binding model by studying some strategies to eliminate and/or shift the bulk states away from the Fermi surface. We show that the band structures obtained using our simple model reproduce qualitatively very well computationally costly {\it ab initio} calculations found in the literature. In summary, we show that our simple effective model captures the main surface band structure features, allowing to explore strategies to perform a continuous bulk states engineering and opening the possibility to model disorder, which is ubiquitous in rhombohedral TIs and beyond the scope of {\it ab initio} calculations. \acknowledgements This work was supported by FAPESP (grant 2014/12357-3), CNPq (grant 308801/2015-6), and FAPERJ (grant E-26/202.917/2015).
\section{Introduction} The Galactic VHE sky contains objects of a variety of classes, including pulsar wind nebulae (PWNe), binary systems, supernova remnants (SNRs), and pulsars. Many Galactic gamma-ray sources are resolved as extended in the VHE band, which enables morphological studies and derivation of spatially-dependent spectra. Studying the characteristics of the emission from these objects allows to determine the locations and mechanisms of the particle acceleration responsible for the observed radiation. This proceeding provides an overview of recent Galactic science results of the VERITAS collaboration. \section{VERITAS} VERITAS is an array of four 12\,m diameter IACTs and is located at the Fred Lawrence Whipple Observatory in southern Arizona (31$^{\circ}$ 40' N, 110$^{\circ}$ 57' W, 1.3\,km a.s.l.). VERITAS started full-array operations in 2007. The telescope reflectors consist of 345 hexagonal mirror facets, and the cameras comprise 499 photomultiplier tubes giving a field of view (FoV) of $\sim$3.5$^{\circ}$. VERITAS is sensitive to VHE gamma-ray photons in the energy range $0.85$ to $>$30\,TeV with a sensitivity to detect a 1\% Crab Nebula source in $\sim$25 hr. It has an angular resolution of $0.1^{\circ}$ at 68\% containment and a pointing-accuracy error of less than 50 arcseconds~\cite{2008AIPC.1085..657H}. The VERITAS data analysis results presented here follow the methodology outlined in~\cite{2008ApJ...679.1427A}. \section{The Binary System PSR J2032+4127/MT91 213} \label{sec:j2032} PSR J2032+4127/MT91 213 is a binary system comprising the young gamma-ray pulsar J2032+4127 and a 15\,M$_{\odot}$ Be star with an orbital period of $\sim$50\,yr~\cite{2015MNRAS.451..581L}. The binary nature of the system PSR J2032+4127/MT91 213 (hereafter referred to as J2032) was only recently established via radio observations of dramatic changes to the pulsar spin-down rate~\cite{2015MNRAS.451..581L}. The system experienced periastron passage in 2017 November, where the orbital separation fell to just $\sim$0.5\,AU~\cite{2017ApJ...836..241T}. Co-located with J2032 is the extended VHE gamma-ray source TeV J2032+4130, which was long-classified as an unidentified object, despite thirteen years of observations since its discovery. It is now thought likely to be a pulsar wind nebula, given that it is co-located with the young, energetic \textit{Fermi}-LAT-detected pulsar J2032+4127. The X-ray flux from J2032 was seen to rise for some time, with an increase by a factor of 70 between 2002 and 2016 reported in~\cite{2017MNRAS.464.1211H}. This brightening has been interpreted as a result of increasing wind energy in the shock region formed by the pulsar and stellar winds~\cite{2017MNRAS.464.1211H}. More recently, \cite{2018ApJ...857..123L} reported a continuing X-ray flux increase up to 2017 May in \textit{Swift}-XRT data. Evidence for variability in the X-ray light curve is also present. In contrast to the increasing X-ray flux from J2032, the high-energy gamma ray flux seen in \textit{Fermi}-LAT data appeared steady leading up to and during the 2017 periastron passage, likely due to masking by the strong magnetospheric emission~\cite{2018ApJ...857..123L}. Recent observations by VERITAS and MAGIC revealed a rising VHE gamma-ray flux beginning in 2017 September associated with the emergence of a new point source in the TeV J2032+4127 region~\cite{2018ApJ...867L..19A}. 
The emergence of this new point source confirmed that the system is a gamma-ray binary and resulted in confirmation of the second-known gamma-ray binary with a known compact companion\footnote{The other is PSR B1259--63/LS 2883.}. Long-term and near-periastron VHE and X-ray light curves of J2032 are shown in Figure~\ref{fig:2032_lightcurves}. The VHE flux displays a clear rise and then a dip as the system nears and passes periastron, and the VHE flux is poorly correlated during this time with the overlaid model prediction. That the time-dependent behavior of the VHE flux is not well modeled at present underscores the poorly understood geometry of the system---significant revisions on this front will be required moving forward~\cite{2018ApJ...867L..19A}. \begin{figure}[t] \centering \subfloat[]{\includegraphics[scale=0.355]{./lightCurve_extended}} \subfloat[]{\includegraphics[scale=0.355]{./lightCurve_periastron}} \\ \caption{Light curves of PSR J2032+4127/MT91 213 for the full data set (left) and near periastron (right). Upper panels show the 0.3--10.0 keV {\it Swift-XRT} light curves of PSR J2032+4127/MT91 213. Lower panels show the $>200$\,GeV light curves from VERITAS (green) and MAGIC (blue). The average fluxes seen by VERITAS and MAGIC prior to 2017 are indicated by horizontal solid lines. The solid gray lines (right axes) are the energy-flux light curve predictions from~\cite{} for X-rays and updated predictions from~\cite{2018ApJ...857..123L} using parameters from~\cite{2017ApJ...836..241T} (J. Takata 2018, priv. comm.) for VHE gamma-rays, assuming an inclination angle of $60^{\circ}$. The vertical gray dashed line indicates the time of periastron passage.} \label{fig:2032_lightcurves} \end{figure} \section{The Supernova Remnant Cassiopeia A} \label{sec:casA} Observations of cosmic rays have long shown a contribution from within the Galaxy up to energies of $>$10$^{15}$\,eV (1\,PeV) where the spectrum shows a break at the ``knee'', above which the cosmic-ray contribution is thought to be primarily extragalactic. Determining the Galactic source population (the so-called ``PeVatrons'') responsible for these PeV-scale cosmic rays has been one of the primary historical motivations for VHE gamma-ray astronomy, though these efforts have been fruitless with perhaps the exception of the Galactic Center~\cite{2016Natur.531..476H}. One of the most intensively studied classes of Galactic VHE gamma-ray sources are the supernova remnants (SNRs), which have been historically proposed as potential PeV-scale accelerators of hadrons. Diffusive shock acceleration remains the accepted explanation for cosmic-ray acceleration in SNRs and can in principle provide the required PeV-scale cosmic-ray energies~\cite{2001RPPh...64..429M}. However, the observed VHE gamma-ray spectra of SNRs display cutoffs at sub-PeV energies~\cite{2013APh....43...71A} and thus do not currently allow the conclusion that the known population of VHE-emitting SNRs constitutes the missing PeVatrons. The supernova remnant Cassiopeia A (henceforth Cas A) is one of the best-studied young SNRs in the Galaxy. Non-thermal X-ray emission has been observed that indicates a population of multi-TeV electrons in the forward shock~\cite{2001ApJ...552L..39G}, while the observed gamma-ray emission from MeV--TeV energies is of more ambiguous origin~\cite{2014A&A...563A..88S}. 
Gamma-ray observations in the VHE band by MAGIC revealed a spectrum that is best described by a power law with an exponential cutoff at $3.5$\,TeV, indicating that Cas A is not currently operating as a PeVatron. A combined \textit{Fermi}-LAT and VERITAS spectrum also strongly favors a cutoff at a few TeV (see Figure~\ref{fig:casAspectrum}), and recent modeling tends to favor hadronic scenarios for the observed gamma-ray spectra (Abeysekara et al., {\it in prep}). In short, though a hadronic origin of the cosmic rays in Cas A is currently preferred, VHE observations by MAGIC and VERITAS have not established Cas A as a potential source of cosmic rays up to the knee. \begin{figure}[h] \centering \includegraphics[scale=0.85]{./casA_spectrum_v1} \caption{Combined {\it Fermi}-LAT and VERITAS SED of Cas A. The measured \textit{Fermi}-LAT spectral points are shown in red, while the VERITAS points are shown in blue. The best-fit model (a power law with exponential cutoff) is shown with a dotted blue line. The blue shaded region represents the $1\sigma$ statistical error band on the fitted spectral model.} \label{fig:casAspectrum} \end{figure} \section{Follow-up of VHE Sources Discovered by HAWC} \label{sec:hawc} The High-Altitude Water Cherenkov (HAWC) telescope provides a wide-field survey of the northern sky at TeV energies. The second HAWC source catalog (2HWC)~\cite{2017ApJ...843...40A} contains 16 newly detected VHE gamma-ray sources that are over $1^{\circ}$ away from known sources. Of these 16, the locations of 11 appear in archival VERITAS data collected between 2007 and 2015, while VERITAS has additionally observed the locations of other 2HWC sources with dedicated observations, bringing the total observed locations to 13. The VERITAS analysis resulted in the detection of one of the new VHE sources: 2HWC J1953+294 (VER J1952+294)~\cite{2018ApJ...866...24A}. A \textit{Fermi}-LAT analysis was also conducted in the energy range 10--900\,GeV, which resulted upper limits for both a point-source and extended-source search~\cite{2018ApJ...866...24A}. One region observed by VERITAS contains two of the new VHE sources in the 2HWC: 2HWC J1953+294 and 2HWC J1955+285~\cite{2018ApJ...866...24A}. VERITAS has accumulated a total of 64\,hr of observations of this region, which resulted in a 5.2$\sigma$ detection of a source (VER J1952+294) coincident with 2HWC J1953+294 and a non-detection of emission from the 2HWC J1955+285 location. The likely association of 2HWC J1953+294 and VER J1952+294 is the radio pulsar wind nebula (PWN) DA 495~\cite{2018ApJ...866...24A}. The VERITAS counts map for the DA 495 region is shown in Figure~\ref{fig:da495}. \begin{figure}[t] \centering \subfloat{\includegraphics[scale=1.00]{./da495_image}} \caption{VERITAS VHE gamma-ray map of the DA 495 region. The blue circles indicate the 1$\sigma$ locations of the two 2HWC sources in this region, with the blue x indicating the centroids. Sources from the \textit{Fermi}-LAT third source catalog are shown in green. Radio contours for DA 495 from~\cite{2003AJ....125.3145T} are drawn in pink, while the solid white curves indicate the 5$\sigma$ HAWC source locations. The dashed white circle shows the size of the angular cut used in the VERITAS extended source search, and the dashed black circle indicates the extent of the radio emission of SNR G65.1+0.6. 
More details are provided in~\cite{2018ApJ...866...24A}.} \label{fig:da495} \end{figure} The possibility of this type of multiwavelength study in gamma rays across nearly seven decades in energy underscores the value of combined \textit{Fermi}-LAT, VERITAS, and HAWC studies. While the \textit{Fermi}-LAT and HAWC continually survey the sky, the high angular resolution of an IACT such as VERITAS allows detailed morphological studies in follow-up observations. Furthermore, spectra derived from data collected by these three instruments can be combined to produce a more complete picture of the gamma-ray radiation from a variety of sources. \section{Searches for VHE Gamma-Ray Pulsars } \label{sec:pulsars} Since the unexpected detection of the Crab pulsar in VHE gamma rays by VERITAS~\cite{2011Sci...334...69V} and MAGIC~\cite{2008Sci...322.1221A, 2016A&A...585A.133A}, one question of great interest in VHE astrophysics has been whether or not the Crab pulsar is the sole VHE-emitting pulsar. The VHE spectrum of the Crab pulsar has been measured to be consistent with a pure power law up to 1.5\,TeV~\cite{2016A&A...585A.133A} by MAGIC, which has allowed stringent constraints to be made on the mechanism and location of the particle acceleration responsible for the emission. In the time since the detection of the Crab pulsar, the Vela pulsar was recently detected up to $\sim$100\,GeV by H.E.S.S. II~\cite{2018A&A...620A..66H} and at energies above a few TeV by H.E.S.S.~\cite{arache_vela}, which indicated that pulsed VHE emission may be a more ubiquitous feature of energetic pulsars and not unique to the Crab. Detecting more pulsars in the VHE band in general will of course help to further elucidate the nature of pulsed VHE emission, therefore pulsar observations continue to be of interest in gamma-ray science. As members of the young gamma-ray pulsar population, both the Crab and Vela are very highly ranked according to a so-called ``observability metric'' $\dot E/d^2$,\footnote{This metric give the total power output of a pulsar, $\dot E$, weighted by the inverse square of its distance.} taking the number one and two spots of all known gamma-ray pulsars. Given that the Crab and Vela are the only pulsars known to emit at TeV energies, one natural starting point for a search for more VHE pulsars is to sort observable pulsars according to $\dot E/d^2$. VERITAS has incidentally observed the locations of many of the top northern-hemisphere pulsars according to $\dot E/d^2$ rank~\cite{2019ApJ...876...95A}. Such pulsars that VERITAS has observed (primarily while targeting a PWN or supernova remnant) are listed in Table~\ref{tab:pulsar_properties}, along with some properties and the VERITAS exposure time for each. There are 13 total pulsars, and this list contains eight of the top twelve pulsars located in the northern sky when ranked in $\dot E/d^2$. Two of the top twelve are the Crab and Geminga pulsars, which have already been observed by VERITAS~\cite{2011Sci...334...69V, 2015ApJ...800...61A}. Although none of the 13 pulsars probed for pulsed emission in this study were detected, the derived upper limits constrain a flux that is in many cases below the flux level of the Crab pulsar, so the general statement can be made that VHE pulsed emission from each pulsar, if present, must be more faint than that observed from the Crab pulsar ($\sim$1\% Crab Nebula level)~\cite{2019ApJ...876...95A}. 
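As a simple consistency check on the values collected in Table~\ref{tab:pulsar_properties} below, the spin-down luminosity follows from $P$ and $\dot P$ through the standard rotating-dipole expression $\dot E = 4\pi^{2} I \dot P / P^{3}$, assuming the canonical neutron-star moment of inertia $I = 10^{45}$\,g\,cm$^{2}$. The short Python snippet below is purely illustrative; the function name and the choice of test pulsar are ours.
\begin{verbatim}
import math

I_NS = 1e45  # canonical neutron-star moment of inertia [g cm^2] (assumed)

def spin_down_luminosity(P_ms, Pdot_1e15):
    """Edot = 4 pi^2 I Pdot / P^3 [erg/s], with P in ms and Pdot in 1e-15 s/s."""
    P = P_ms * 1e-3           # period in s
    Pdot = Pdot_1e15 * 1e-15  # dimensionless spin-down rate
    return 4.0 * math.pi**2 * I_NS * Pdot / P**3

# PSR J2229+6114 (from the table below): P = 51.6 ms, Pdot = 77.9e-15
print(spin_down_luminosity(51.6, 77.9))  # ~2.2e37 erg/s, i.e. ~2200 x 10^34 erg/s
# The observability metric used above is then Edot / d^2, with d the distance.
\end{verbatim}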
\begin{table} \centering
\caption{Properties of the 13 pulsars searched for pulsed emission by VERITAS as reported in~\cite{2019ApJ...876...95A}. Columns 2 and 3 give the pulsar period, $P$, and time derivative of the period, $\dot P$. The spin-down luminosities ($\dot E$) are given in column 4. Column 5 lists the ranking in $\dot E / d^2$ for the Northern Hemisphere pulsar population. Column 6 gives the possible PWN counterparts of the pulsars, and columns 7 and 8 give the VERITAS exposure times and average zenith angles of observations. Values for $P$, $\dot P$, and $\dot E$ have been taken from the second {\it Fermi}-LAT pulsar catalog~\cite{2013ApJS..208...17A} unless otherwise noted. The possible PWN counterparts are taken from SIMBAD or TeVCat. }
\vspace{3mm} \scriptsize
\begin{tabularx}{1.0\textwidth}{cccccccc} \hline Pulsar & $P$ (ms) & $\dot P$ ($10^{-15}$) & $\dot E$ ($10^{34}$\,erg\,s$^{-1}$) & $\dot E / d^2$ Rank & Counterpart?
& VTS Exposure (hr) & $\bar \theta_{\textrm{zenith}}$ ($^{\circ}$) \\ \hline J0007+7303 & 315.9 & 357 & 44.8 & 9 & CTA 1 & 32.4 & 42 \\ J0205+6449 & 65.7 & 190 & 2644 & 3 & 3C 58 & 22.2 & 35 \\ J0248+6021 & 217.1 & 55.0 & 21.2 & 12 & - & 45.9 & 32 \\ J0357+3205 & 444.1 & 13.1 & 0.6 & 14 & - & 7.92 & 14 \\ J0631+1036 & 287.8 & 104 & 17.3 & 10 & - & 2.79 & 26 \\ J0633+0632 & 297.4 & 79.6 & 11.9 & - & - & 108 & 29 \\ J1907+0602 & 106.6 & 86.7 & 282 & 8 & MGRO J1908+06 & 39.1 & 28 \\ J1954+2836 & 92.7 & 21.2 & 105 & - & - & 5.18 & 16 \\ J1958+2846 & 290.4 & 212 & 34.2 & - & - & 13.9 & 10 \\ J2021+3651 & 103.7 & 95.6 & 338 & 4 & Dragonfly Nebula & 58.2 & 18 \\ J2021+4026 & 265.3 & 54.2 & 11.4 & 13 & $\gamma$ Cygni & 20.6 & 21 \\ J2032+4127 & 143.2 & 20.4 & 15~\cite{2017MNRAS.464.1211H} & 11 & TeV J2032+4130 & 47.9 & 21 \\ J2229+6114 & 51.6 & 77.9 & 2231 & 2 & Boomerang & 47.2 & 33 \\ \hline \end{tabularx} \label{tab:pulsar_properties} \end{table} \section{Acknowledgements} This research is supported by grants from the U.S. Department of Energy Office of Science, the U.S. National Science Foundation and the Smithsonian Institution, and by NSERC in Canada. This research used resources provided by the Open Science Grid, which is supported by the National Science Foundation and the U.S. Department of Energy's Office of Science, and resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231. We acknowledge the excellent work of the technical support staff at the Fred Lawrence Whipple Observatory and at the collaborating institutions in the construction and operation of the instrument. \bibliographystyle{JHEP}
\section{Introduction} Enormous work has been devoted over the last years to the study of mathematical models for Aggregation-Diffusion that are formulated in terms of semilinear parabolic equations combining linear or non-linear diffusion with a Fokker-Planck convection term coming either from a given potential or from an interaction potential, see \cite{AMTU,DP,MV,CJMTU,CMV03,Fornaro2012,CCY19} and the references therein, and the books \cite{AGS,Villani03}. In this paper we consider the aggregation-diffusion equation \begin{equation} \label{eq:main} \tag{P} \frac{\partial \rho}{\partial t} = \Delta \rho^m + \nabla \cdot ( \rho \nabla V ), \qquad \text{in } (0,\infty) \times {\mathbb R^n} \end{equation} where the potential $V(x)$ is given and $0<m<1$, the fast-diffusion range \cite{Vazquez2007}. We take as initial data a probability measure, i.\,e., \begin{equation} \label{eq:rho probability} \rho_0\ge 0 , \qquad \int_{{\mathbb R^n} } \rho_0 \diff x= 1. \end{equation} We will find conditions on the radial initial data $\rho_0$ and the radial potential $V$ so that \begin{enumerate}[label=\roman*)] \item we provide a suitable notion of solution of the Cauchy problem defined globally in time passing through the mass (or distribution) function, \item as $t\to\infty$, the solution undergoes one-point blow-up of the split form \begin{equation*} \rho(t)\to \mu_\infty = \rho_\infty + (1 - \| \rho_\infty \|_{L^1 ({\mathbb R^n})}) \delta_0 , \end{equation*} where $\rho_\infty(x)>0$ is an explicit stationary solution of \eqref{eq:main}. The presence of the concentrated point measure is a striking fact that needs detailed understanding and is the main motivation of this work. Here and after we identify an $L^1$ function with the absolutely continuous measure it generates. \end{enumerate} It is known that Dirac measures are invariant by the semigroup generated by the fast-diffusion equation $u_t = \Delta u^m$ for $0 < m < \frac{n-2}n$ (see \cite{Brezis1983}), but they are never produced from $L^1$ initial data. Here we show that the aggregation caused by the potential term might be strong enough to overcome the fast-diffusion term and produce a Dirac-delta concentration at $0$ as $t \to \infty$, in other words, infinite-time concentration. The case of \eqref{eq:main} with slow diffusion $m > 1$ was studied in \cite{CJMTU,Kim2010}, where the authors show that the steady state does not contain a Dirac delta (i.e. $\| \rho_\infty \|_{L^1} = 1$). The linear diffusion case was extensively studied in \cite{AMTU,MV,OV}. The fast diffusion range $1>m > \frac{n-2}n$ with quadratic confinement potential is also well-known and its long-time asymptotics, even for Dirac initial data, is given by integrable stationary solutions, see for instance \cite{BBDGV} and its references. See also \cite{VazWin2011} for the evolution of point singularities in bounded domains. We will take advantage of the formal interpretation of \eqref{eq:main} as the $2$-Wasserstein flow \cite{CJMTU,CMV03,AGS} associated to the free-energy \begin{equation}\label{eqn.freeen} \mathcal F [\rho] = \tfrac{1}{m-1} \int_{\mathbb R^n} \rho(x)^{m} \diff x + \int_{\mathbb R^n} V(x) \rho(x) \diff x, \end{equation} in order to obtain properties of this functional in terms of the Calculus of Variations. We also take advantage of this structure to obtain a priori estimates on the solution $\rho$ of \eqref{eq:main} due to the dissipation of the energy. 
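For smooth positive solutions this dissipation can be made explicit by a formal computation. Writing \eqref{eq:main} as a continuity equation $\partial_t \rho = \nabla\cdot\big(\rho\,\nabla \xi\big)$ with $\xi := \frac{\delta \mathcal F}{\delta \rho} = \tfrac{m}{m-1}\rho^{m-1} + V$ (which uses the identity $\rho\,\nabla\big(\tfrac{m}{m-1}\rho^{m-1}\big)=\nabla \rho^{m}$), we obtain
\begin{equation*}
\frac{\diff}{\diff t}\,\mathcal F[\rho(t)]
= \int_{\mathbb R^n} \xi\,\partial_t \rho \diff x
= \int_{\mathbb R^n} \xi\,\nabla\cdot(\rho\nabla \xi) \diff x
= -\int_{\mathbb R^n} \rho\,|\nabla \xi|^{2} \diff x \;\le\; 0 .
\end{equation*}
For the weak solutions constructed below this identity has to be justified with more care, but it is the guiding principle behind the a priori estimates just mentioned.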
\paragraph {Main assumptions and discussion of the main results.} We introduce the specific context in which point-mass concentration arises. We first examine the special stationary solutions that play a role in the asymptotics:
\begin{equation}\label{Vh}
\rho_{V+h} (x) = ( \tfrac{1-m}{m} (V (x) + h) )^{- \frac{1}{1-m}} \quad \text{for } \ x \in {\mathbb R^n},
\end{equation}
for $h\ge 0$. It is easy to check that they are solutions of \eqref{eq:main}, and they are bounded if $h>0$; a one-line verification is recalled below. We now consider the class of suitable potentials. We first assume that $V$ has a minimum at $x=0$ and is smooth: $V \in W^{2,\infty}_{loc} ({\mathbb R^n})$, $V \ge 0$, $V(0) = 0$. We are interested in radial aggregating potentials; in fact, we assume that $V$ is radially symmetric and non-decreasing. An essential assumption in the proof of formation of a point-mass concentration is the following small-mass condition for the admissible steady states:
\begin{equation}
a_V = \int_{\mathbb R^n} \rho_V(x) \diff x< 1.
\end{equation}
As a simplifying assumption we will assume that
\begin{equation} \label{eq:rhoV in L1ee loc}
\int_{B_1} \rho_V^{1+\varepsilon}(x)\diff x < +\infty, \qquad \text{for some } \varepsilon > 0 .
\end{equation}
The bounded case in which $\rho_{V+h_1} \le \rho_0 \le \rho_{V+h_2}$ with $h_1, h_2 > 0$ was studied in \cite{Cao2020} and leads to no concentration. On the contrary, we will show that there exists a class of radial initial data $\rho_0(x)\ge \rho_V(x)$ such that the corresponding solution converges as $t\to\infty$ to the split measure
\begin{equation}\label{eqn.split}
\mu_\infty = (1- a_V) \delta_0 + \rho_V(x),
\end{equation}
in the sense of mass (which will be made precise below). Moreover, under further assumptions on $V$, we show that $\mu_\infty$ is the global minimizer in the space of measures of the relaxation of $\mathcal F$.

An important motivation for our paper is the current interest in the following model of aggregation-diffusion with interaction potential
\begin{equation} \label{eq:ADE general}
\frac{\partial \rho}{\partial t} = \Delta \rho^m + \nabla \cdot ( \rho \nabla W*\rho )
\end{equation}
that has led to the discovery of some highly interesting features that have consequences for the parabolic theory and the Calculus of Variations. Recent results \cite{Carrillo2019} show that, under some conditions on $W$, the energy minimizer of the corresponding energy functional is likewise split as
\begin{equation*}
\mu_\infty = (1 - \|\rho_\infty\|_{L^1 ({\mathbb R^n})}) \delta_0 + \rho_\infty.
\end{equation*}
The presence of the concentrated point measure is known for specific choices of $W$, see \cite{carrillo2020fast}. To the best of our knowledge, there exist no results in the literature showing that solutions of the parabolic problem actually converge to these minimisers with a Dirac delta. In this paper, we treat as a first step the case where $V$ is known. We expect similar results to hold when $V$ is replaced by $W * \rho$, but the techniques will be more difficult. For example, the non-local nature of \eqref{eq:ADE general} suggests that there might not be a comparison principle.

It was shown in \cite{Benilan1981} that for very fast diffusion, $m < \frac{n-2} n$, the solutions of the Fast Diffusion Equation $u_t = \Delta u^m$ with $u_0 \in L^1 (\mathbb R^n) \cap L^\infty (\mathbb R^n)$ vanish in finite time, i.e.\ $u(t, x) = 0$ for $t \ge T^*$. When $a_V < 1$, by contrast, we construct explicit initial data for which the total mass is preserved, and this holds for any $m \in (0,1)$.
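As announced, the one-line verification that the profiles \eqref{Vh} are stationary goes as follows: away from the origin (and on all of ${\mathbb R^n}$ when $h>0$) one has $\tfrac{m}{m-1}\,\rho_{V+h}^{\,m-1} = -(V+h)$, so that
\begin{equation*}
\nabla \rho_{V+h}^{m}
= \rho_{V+h}\,\nabla\Big( \tfrac{m}{m-1}\,\rho_{V+h}^{\,m-1} \Big)
= -\,\rho_{V+h}\,\nabla (V+h)
= -\,\rho_{V+h}\,\nabla V ,
\end{equation*}
i.e., the flux $\nabla \rho_{V+h}^{m} + \rho_{V+h}\nabla V$ vanishes identically and each $\rho_{V+h}$ is a time-independent solution of \eqref{eq:main}.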
\subparagraph{The case of a ball of radius $R$} We have a more complete overall picture when we focus on the problem posed in a ball ${B_R}$, adding a no-flux condition on the boundary: \begin{equation} \label{eq:main bounded domain} \tag{P$_R$} \begin{dcases} \frac{\partial \rho}{\partial t} = \Delta \rho^m + \nabla \cdot ( \rho \nabla V_R ) & \text{in }(0,\infty) \times {B_R}, \\ (\nabla \rho^m + \rho \nabla V_R) \cdot x = 0, & \text{on } (0,\infty) \times \partial {B_R}, \\ \rho (0,x) = \rho_0 (x). \end{dcases} \end{equation} As a convenient assumption, we require that $V_R$ does not produce flux across the boundary \begin{equation} \label{eq:V no aggregation on boundary} \nabla V_R (x) \cdot x = 0 , \qquad \text{on } \partial {B_R}. \end{equation} We discuss this assumption in \Cref{sec:condition VR no flux}. This problem is the $2$-Wasserstein flow of the free energy \begin{equation} \mathcal F_R [\rho] = \tfrac{1}{m-1} \int_{B_R} \rho(x)^{m} \diff x + \int_{B_R} V_R(x) \rho(x) \diff x . \end{equation} For \eqref{eq:main bounded domain}, we show that $\mathcal F_R$ is bounded below and that minimising sequences of non-negative functions with fixed mass $\| \rho \|_{L^1 ({B_R})} = \mathfrak m$ converge weakly in the sense of measures to \begin{equation*} \mu_{\infty,\mathfrak m,R} = \begin{dcases} \rho_{V_R + h} & \text{ if there exists } h \ge 0 \text{ such that }\| \rho_{V_R + h} \|_{L^1 ({B_R})} = \mathfrak m , \\ \rho_{V_R} + (\mathfrak{m} - \| \rho_{V_R} \|_{L^1 ({B_R})} ) \delta_0 & \text{ if } \| \rho_{V_R} \|_{L^1 ({B_R})} < \mathfrak m. \end{dcases} \end{equation*} This means that, if the mass $\mathfrak m$ cannot be reached in the class $\rho_{V_R+h}$, the remaining mass is completed with a Dirac delta at $0$. Notice that the mass of $\rho_{V_R+h}$ is decreasing with $h$, so the largest mass is that of $\rho_{V_R}$. We construct an $L^1$-contraction semigroup of solutions $S_R$ of \eqref{eq:main bounded domain} such that, if $ \rho_{V_R} \le \rho_0 \in L^1({B_R})$ is radially symmetric, then \begin{equation*} \mathcal F_R[S_R (t) \rho_0] \searrow \widetilde {\mathcal F}_R[\mu_{\infty,\mathfrak m,R}] = \mathcal F_R[\rho_{V_R}], \end{equation*} where $\mathfrak m = \| \rho_0 \|_{L^1 ({B_R})}$ and $\widetilde {\mathcal F}_R$ is the relaxation of $\mathcal F_R$ to the space of measures presented below (see \cite{Demengel1986}). The semigroup $S_R$ is constructed as the limit of the semigroup of the regularised problems written below as \eqref{eq:main regularised bounded}. Then, we recover our results by passing to the limit in $\Phi$ and $R$. \paragraph {The mass function.} One of the main tools in this paper will be the study of the so-called mass variable, which can be applied under the assumption of radial solutions. It works as follows. First, we introduce the spatial volume variable $v = |x|^n |B_1|$ and consider the mass function \begin{equation} M_\rho(t,v) = \int_{ \widetilde B_v } \rho(t,x) \diff x, \qquad \widetilde B_v = \left( \tfrac{v}{|B_1|} \right)^{\frac 1 n} B_1 . \end{equation} Notice that $|\widetilde B_v| = v$. For convenience we define $ R_v = R^n |B_1|. $ We will prove that $M$ satisfies the following nonlinear diffusion-convection equation in the viscosity sense \begin{align} \tag{M} \label{eq:mass} \frac{\partial M}{\partial t} &= (n \omega_n^{\frac 1 n } v ^{\frac{n-1} n})^2 \left\{ \frac{\partial }{\partial v} \left[ \left( \frac{\partial M }{\partial v} \right)^m \right] + \frac{\partial M }{\partial v} \frac{\partial V}{\partial v} \right\}, \end{align} where $\omega_n = |B_1|$.
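Let us sketch, at a purely formal level, how \eqref{eq:mass} is obtained from the density equation for smooth radial solutions (the rigorous statement, in the viscosity sense, is the object of \Cref{sec:mass}). Since $v = |B_1| r^n$, we have $\frac{\partial M}{\partial v} = \rho$ and $\frac{\partial}{\partial r} = \kappa(v) \frac{\partial}{\partial v}$, where $\kappa(v) = n \omega_n^{\frac 1 n} v^{\frac{n-1}{n}} = |\partial \widetilde B_v|$. Integrating the equation over $\widetilde B_v$ and using the divergence theorem,
\begin{equation*}
\frac{\partial M}{\partial t}(t,v) = \int_{\partial \widetilde B_v} \left( \nabla \rho^m + \rho \nabla V \right) \cdot \frac{x}{|x|} \diff S = \kappa(v) \left( \frac{\partial \rho^m}{\partial r} + \rho \frac{\partial V}{\partial r} \right) = \kappa(v)^2 \left\{ \frac{\partial }{\partial v} \left[ \left( \frac{\partial M}{\partial v} \right)^m \right] + \frac{\partial M}{\partial v} \frac{\partial V}{\partial v} \right\},
\end{equation*}
which is precisely \eqref{eq:mass}.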
The diffusion term of this equation is of $p$-Laplacian type, where $p = m+1$. The weight will not be problematic when $ v > 0$, as we show in \Cref{sec:classical regularity} using the parabolic theory in DiBenedetto's book \cite{DiBenedetto1993}. Notice that the formation of a Dirac delta at $0$ is equivalent to the loss of the Dirichlet boundary condition $M(t,0) = 0$. Few results on the loss of the Dirichlet boundary condition are known in the literature on parabolic equations. For equations of the type $u_t=u_{xx}+|u_x|^p$, it is known (see, e.g., \cite{Arrieta2004}) that $u_x$ may blow up on the boundary in finite or infinite time, depending on the choice of boundary conditions. The case of infinite time blow-up was revisited in \cite{SoupletVazq}. The question of boundary discontinuity in finite time, loss of boundary condition, for the so-called viscous Hamilton-Jacobi equations is studied in \cite{BarlesDaLio2004,PS17,PS20,mizoguchi2021singularity} and does not bear a direct relation to our results. A general reference for boundary blow-up can be found in the book \cite{QuittSoupBook}. \paragraph {Precise statement of results.} In order to approximate the problem in ${\mathbb R^n}$, our choice of $V_R$ will be of the form \begin{equation*} V_R (x) \begin{dcases} = V(x) & |x| \le R- \varepsilon, \\ \le V(x) & R - \varepsilon < |x| \le R \end{dcases} \end{equation*} and with the condition $\nabla V_R \cdot x = 0$ on $\partial B_R$. We also define \begin{equation*} a_{V,R} = \int_{{B_R}} \rho_{V_R} \diff x \qquad \text{ and } \qquad a_{0,R} = \int_{{B_R}} \rho_0 \diff x. \end{equation*} We will denote $V = V_R$ until \Cref{sec:Rn}. \begin{theorem}[Infinite-time concentration of solutions of \eqref{eq:main bounded domain}] \label{thm:bounded concentrating intro} Assume $V \in W^{2,\infty} ({B_R})$ is radially symmetric, strictly increasing, $V\ge 0$, $V (0) = 0$, $\nabla V \cdot x = 0$ on $\partial {B_R}$ and the technical assumption \eqref{eq:rhoV in L1ee loc}. Assume also that $a_{0,R} > a_{V,R}$, that $\rho_0$ is radially symmetric, $\rho_0 \ge \rho_V$, and $\rho_0 \in L^\infty ({B_R} \setminus B_{r_1})$ for some $r_1 < R$. Then, the solution $\rho$ of \eqref{eq:main bounded domain} constructed in \Cref{thm:existence bounded L1} satisfies \begin{equation*} \liminf_{t \to \infty} \int_{B_r} \rho(t,x) \diff x \ge (a_{0,R} - a_{V,R}) + \int_{B_r} \rho_V (x) \diff x , \qquad \forall r \in [0,R]. \end{equation*} (i.e., there is concentration in infinite time). Moreover, if \begin{equation} \label{eq:rho0 below limit} \int_{B_r} \rho_0 (x) \diff x \le (a_{0,R} - a_{V,R}) + \int_{B_r} \rho_V (x) \diff x \qquad \forall r \in [0,R], \end{equation} then for $\mu_{\infty,R} = (a_{0,R} - a_{V,R}) \delta_0 + \rho_V$ we have that \begin{equation*} \lim_{t \to \infty} d_{1} ( \rho (t) , \mu_{\infty,R} ) = 0 , \end{equation*} where $d_1$ denotes the $1$-Wasserstein distance. \end{theorem} \begin{remark} \label{rem:bounded concentration non-radial} If we take a non-radial datum $\rho_0 \ge \rho_{0,r}$ with $\rho_{0,r}$ radially symmetric satisfying the hypotheses of \Cref{thm:bounded concentrating intro}, then the corresponding solution $\rho(t,x) $ of \eqref{eq:main bounded domain} constructed in \Cref{thm:existence bounded L1} concentrates in infinite time as well, due to the comparison principle.
\end{remark} Through approximation as $R \to \infty$, we will also show: \begin{corollary}[At least infinite-time concentration of solutions of \eqref{eq:main}] \label{cor:Rd concentration} Under the hypotheses of \Cref{thm:bounded concentrating intro} and suitable hypotheses on the initial data (specified in \Cref{sec:Rd concentration}), we can show the existence of viscosity solutions of \eqref{eq:mass} in $(0,\infty) \times (0,\infty)$ (obtained as a limit of the problems in ${B_R}$), such that \begin{equation*} \lim_{t \to \infty} M(t,v) = (1-a_V) + M_{\rho_V} (v) \end{equation*} for all $v > 0$ and, furthermore, locally uniformly in $(0,\infty)$. We also have that \begin{equation*} \lim_{t \to \infty} d_{1} ( \rho(t) , (1-a_V)\delta_0 + \rho_V ) = 0. \end{equation*} \end{corollary} With our construction of $M$ we cannot guarantee, in general, that $M(t,0) = 0$ for finite $t$. With additional a priori estimates we can ensure this in some cases. \begin{theorem}[Infinite-time concentration for $V$ quadratic at $0$] \label{prop:Rd no concentration in finite time} Let $\rho_0 \in L^1_+ (\mathbb R^n)$ be non-increasing and assume \begin{equation} \label{eq:hypothesis superquadratic} \frac{\partial V}{\partial r} (r) \le C_{V} r , \qquad \text{ in } B_{R_V} \text{ for some } C_V > 0. \end{equation} Then, the viscosity mass solution constructed in \Cref{prop:Rd existence mass} does not concentrate in finite time, i.e. $M(t,0) = 0$ for all $t > 0$. \end{theorem} \paragraph{The picture for power-like $V$.} Let us discuss the case where $V$ is of the form \begin{equation*} V(x) \sim \begin{dcases} |x|^{\lambda_0} & |x| \ll 1, \\ |x|^{\lambda_\infty} & |x| \gg 1. \end{dcases} \end{equation*} The condition $V \in W^{2,\infty}_{loc} ({\mathbb R^n})$ means $\lambda_0 \ge 2$. In this setting, \eqref{eq:hypothesis superquadratic} is satisfied, so concentration does not happen in finite time. The condition $\rho_V \in L^1({\mathbb R^n})$ (i.e. $a_V < \infty$) holds if and only if \begin{equation}\label{Vgrowth} \frac{n-\lambda_\infty}{n} < m < \frac{n-\lambda_0}{n}. \end{equation} In fact, under this condition, $\rho_V \in L^{1+\varepsilon} ({\mathbb R^n})$. In addition to the behaviour at $0$ and $\infty$, the restriction $a_V < 1$ is a condition on the intermediate profile of $V$. This is sufficient to construct initial data $\rho_0$ (of the shape of $\rho_D$ presented below) so that solutions converge to $\mu_\infty$ as $t \to \infty$, in the sense of mass. Due to \eqref{eq:hypothesis superquadratic}, the concentration is precisely at infinite time. But we do not know that $\mu_\infty$ is a global minimiser of $\mathcal F$. In \Cref{rem:minimisation for powers at infinity} we prove that the energy functional is bounded below whenever $m > \frac{n}{n+\lambda_\infty}$. Notice that $ \frac{n-\lambda_\infty}{n} < \frac{n}{n+\lambda_\infty}. $ Therefore, if \begin{equation*} \frac{n}{n+\lambda_\infty} < m < \frac{n-\lambda_0}{n}, \qquad \text{ and } \qquad a_V < 1, \end{equation*} then $\mu_\infty$ is the global minimiser in $\mathcal P ({\mathbb R^n})$ of the relaxation of $\mathcal F$ and it is an attractor for some initial data. \paragraph{Structure of the paper.} In \Cref{sec:2} we write the theory in ${B_R}$ for a regularised problem where the fast-diffusion is replaced by a smooth elliptic non-linearity $\Phi$. In \Cref{sec:agg-diff equation} we construct solutions of \eqref{eq:main bounded domain}, by passing to the limit $\Phi(s) \to s^m$ in the solutions of \Cref{sec:2}.
In \Cref{sec:mass} we show that mass functions $M$ of the solutions of \Cref{sec:2,sec:agg-diff equation} are solutions in a suitable sense of Problem \eqref{eq:mass}, and we prove regularity and a priori estimates. In \Cref{sec:concentration} we construct initial data $\rho_0$ so that the mass $M$ is non-decreasing in time as well as in space. We show that these mass functions $M$ concentrate in the limit, which is a main goal of the paper; we recall that this means the formation of a jump of $M$ at $v=0$ as $t\to\infty$. \Cref{sec:FR} is dedicated to the minimisation of $\mathcal F_R$ for functions defined in ${B_R}$. We prove that the minimisers are precisely of the form $\mu_{\infty, \mathfrak m , R}$ described above. In \Cref{sec:Rn}, we pass to the limit as $R \to \infty$ in terms of the mass. We show that the mass functions for suitable initial data still concentrate. We discuss minimisation of the functional $\mathcal F$. We show that the class of potentials $V$ that make $\mathcal F$ bounded below is more restrictive than the corresponding class for $\mathcal F_R$, and provide suitable assumptions so that $\mu_\infty$ is a minimiser. We list some comments and open problems in \Cref{sec:final comments}. We conclude the paper with two appendices. The first, \Cref{sec:classical regularity}, recalls results from \cite{DiBenedetto1993} and condenses them into the form we use for $M$. \Cref{sec:appendix b} is devoted to combining separate space and time regularity estimates into Hölder regularity jointly in space and time. \paragraph{A comment on the notation.} Throughout the document, we deal with the spatial variable $x$, the radial variable $r = |x|$, and the volume variable $v = |B_1| |x|^n$. Since this will not lead to confusion and it simplifies the notation, radial functions of $x$ will sometimes be evaluated or differentiated in $r$ or $v$; this must be understood through the corresponding substitution. \section{The regularised equation in ${B_R}$} \label{sec:2} Following the theory of non-linear diffusion, we consider in general \begin{equation} \label{eq:main regularised bounded} \tag{P$_{\Phi,R}$} \begin{dcases} \frac{\partial u}{\partial t} = \Delta \Phi (u) + \nabla \cdot ( u E ) & \text{in }(0,\infty) \times {B_R}, \\ (\nabla \Phi(u) + u E) \cdot x = 0, & \text{on } (0,\infty) \times \partial {B_R}, \\ u (0,x) = u_0 (x). \end{dcases} \end{equation} We assume that $\Phi \in C^1$ is elliptic, and we think of the problem as \begin{equation*} \frac{\partial u}{\partial t} = \Delta \Phi (u) + \nabla u \cdot E + u \nabla \cdot E. \end{equation*} Furthermore, we assume \begin{equation} \label{eq:E point outwards} E(x) \cdot x = 0 , \qquad \text{on } \partial {B_R}. \end{equation} \begin{remark} Our results work in a general bounded domain $\Omega$, where the assumption on $E$ is that $E \cdot n(x) = 0$ on $\partial \Omega$. However, we write them in a ball of radius $R$ since our main objective is to study the long-time asymptotics of radially symmetric solutions. \end{remark} The diffusion corresponds to the flux $ a(u, \nabla u) = \Phi'(u) \nabla u. $ When $\Phi, E$ are smooth and $\Phi$ is uniformly elliptic, in the sense that there exist constants such that \begin{equation} \label{eq:Phi elliptic} 0 < c_1 \le \Phi'(u) \le c_2 < \infty, \end{equation} then existence, uniqueness, and the maximum principle follow from the classical theory.
The literature is extensive: in ${\mathbb R^n}$ this issue was solved at the beginning of the twentieth century (see \cite{ladyzhenskaia1968parabolic}), in a bounded domain with Dirichlet boundary condition the result can be found in \cite{Gilbarg+Trudinger2001}, and the case of Neumann boundary conditions was studied by the end of the twentieth century (for example \cite{Amann1990}), where the assumptions on the lower order term were later generalised (see, e.g. \cite{Yin2005}). Following \cite{Amann1990}, we have that, if $u_0 \in C^2 (\overline {B_R})$, then the solution $u$ of \eqref{eq:main regularised bounded} is such that \begin{equation} \label{eq:bounded regularised regularity of $u$} u \in C^1\Big((0,T); C(\overline {B_R}) \Big) \cap C \Big((0,\infty) ; C^2 (\overline {B_R})\Big) \cap C\Big([0,\infty)\times \overline {B_R}\Big). \end{equation} Let us obtain further properties of the solution of \eqref{eq:main regularised bounded}. \begin{theorem}[$L^p$ estimates] \label{thm:Lp estimates} Assume $E(x) \cdot n(x) \ge 0$. For classical solutions we have that \begin{align} \label{eq:regularisation Lp estimates} \| u(t)_\pm \|_{L^p } &\le e^{ \frac{p-1}{p} \| \nabla \cdot E \|_{L^\infty} t }\| (u_0)_\pm \|_{L^p}. \end{align} \end{theorem} \begin{proof} Let $j$ be convex. We compute \begin{align*} \frac{\diff }{\diff t} \int_{B_R} j(u) &= \int_{B_R} j'(u) \nabla \cdot ( \nabla \Phi(u) + u E ) =- \int_{B_R} j''(u) \nabla u \cdot ( \nabla \Phi(u) + u E ) \\ &=- \int_{B_R} j''(u) \Phi'(u) |\nabla u|^2 - \int_{B_R} j''(u) u \nabla u \cdot E \le - \int_{B_R} \nabla F(u) \cdot E \end{align*} where $F'(u) = j''(u)\, u$ and we can pick $F(0) = 0$; since $j$ is convex, $F$ has a minimum at $0$ and hence $F \ge 0$. Since $E \cdot x \ge 0$, \begin{align*} \int_{B_R} \nabla F(u) \cdot E &= \int_{B_R} \nabla \cdot (F(u) E) - \int_{{B_R}} F(u) \nabla \cdot E = \int_{\partial {B_R}} F(u) E \cdot \frac{x}{|x|} - \int_{{B_R}} F(u) \nabla \cdot E \\ &\ge - \int_{{B_R}} F(u) \nabla \cdot E. \end{align*} Finally, we recover \begin{align*} \int_{B_R} j(u (t)) &\le \int_{B_R} j(u_0) + \int_0^t \int_{B_R} F(u) \nabla \cdot E . \end{align*} When $j (s) = s_\pm^p$, $ j''(s) = p (p - 1) s_\pm^{p - 2}$ we have $ F(s) = (p-1) s_\pm^{p }$. Plugging this into the estimate above, we obtain \begin{align*} \int_{B_R} u(t)_\pm^p &\le \int_{B_R} (u_0)_\pm^p + (p-1) \| \nabla \cdot E \|_{L^\infty} \int_0^t \int_{B_R} u_\pm^p. \end{align*} By Gronwall's inequality we have that \begin{align*} \int_{B_R} u(t)_\pm^p &\le e^{(p-1)\| \nabla \cdot E \|_{L^\infty} t }\int_{B_R} (u_0)_\pm^p . \end{align*} Taking the power $1/p$ we have \eqref{eq:regularisation Lp estimates} for $p < \infty$ and letting $p \to \infty$ we also obtain the $L^\infty$ estimate. \end{proof} \begin{theorem}[Estimates on $\nabla \Phi(u)$] Let $\Psi(s) = \int_0^s \Phi(\sigma) \diff \sigma$. Then, we have that \begin{equation} \label{eq:smooth estimate nabla Phi u} \int_{0}^{T} \int_{B_R} |\nabla \Phi(u)|^2 \le \int_{{B_R}} \Psi(u_0) + \frac {\| E \|_{L^\infty}^2} 2 \int_0^T \int_{B_R} u(t,x)^2 \diff x \diff t. \end{equation} \end{theorem} \begin{proof} Multiplying by $\Phi(u)$ and integrating \begin{equation*} \int_{B_R} u_t \Phi(u) = - \int_{B_R} \nabla \Phi(u) \cdot (\nabla \Phi(u) + u E) = - \int_{B_R} |\nabla \Phi(u)|^2 - \int_{B_R} u \nabla \Phi(u) \cdot E.
\end{equation*} We have that \begin{equation*} \frac{\diff }{\diff t} \int_{B_R} \Psi(u) + \int_{B_R} |\nabla \Phi(u)|^2 \le \| u (t) \|_{L^2} \| \nabla \Phi(u(t)) \|_{L^2} \| E \|_{L^\infty}. \end{equation*} Applying Young's inequality we obtain \begin{equation*} \frac{\diff }{\diff t} \int_{B_R} \Psi(u) + \frac 1 2 \int_{B_R} |\nabla \Phi(u)|^2 \le \frac 1 2 \| u (t) \|_{L^2}^2 \| E \|_{L^\infty}^2. \end{equation*} Notice that, since $\Phi' \ge 0$, we have $\Psi \ge 0$. Hence, we deduce the result. \end{proof} If $\Psi(u_0) \in L^1$ and $u_0 \in L^2$ then the right-hand side is finite due to \eqref{eq:regularisation Lp estimates}. \begin{remark} When $\Phi(s) = s^m$ then $\Psi(s) = \frac{1}{m+1}s^{m+1}$. \end{remark} In order to get point-wise convergence, we follow the approach for the Fast Diffusion equation proposed in \cite[Lemma 5.9]{Vazquez2007}. Define \begin{equation*} Z(s) = \int_0^s \min \{ 1, \Phi'(\sigma) \} \diff \sigma, \qquad z (t,x) = Z(u(t,x)). \end{equation*} \begin{corollary} We have that \begin{equation} \label{eq:smooth estimate nabla z} \int_{0}^{T} \int_{B_R} |\nabla z|^2 \le \int_{{B_R}} \Psi(u_0) + \frac{\| E \|_{L^\infty}^2} 2 \int_0^T \int_{B_R} u(t,x)^2 \diff x \diff t. \end{equation} \end{corollary} \begin{proof} Notice $ |\nabla z| \le |\Phi'(u)||\nabla u| = |\nabla \Phi(u)|. $ \end{proof} \begin{lemma}[Estimates on $u_t$ and $\nabla \Phi (u)$] \label{lem:bounded regularised estimates ut and nabla Phi u} Assume $E \cdot x = 0$ on $\partial B_R$, $u \in L^\infty(0,T; L^2 (B_R))$, $\Phi(u) \in L^2(0,T; H^1 (B_R))$, $\Phi (u_0) \in H^1 (B_R)$. Then \begin{equation*} \Phi(u) \in L^\infty(0,T; H^1 (B_R)) \qquad \text{ and } \qquad u_t \in L^2 ( (0,T) \times {B_R}). \end{equation*} We also have, for $z(t,x) = Z(u(t,x))$ that \begin{equation} \label{eq:estimate Phi'uu_t} \begin{aligned} \int_0^T \int_{B_R} |z_t|^2 &\le C \Bigg( \int_{B_R} |\nabla \Phi(u_0)|^2 + \frac{1} 2\int_0^T \int_{B_R} \Phi'(u) |\nabla u|^2| E|^2 \\ & \qquad \qquad + \int_{B_R} u(0) \nabla \Phi(u_0) \cdot E + \int_{B_R} |u(T)|^2 |E|^2 \Bigg). \end{aligned} \end{equation} \end{lemma} \begin{proof} Again, we will use the notation $w = \Phi(u)$. When $u$ is smooth, we can take $w_t$ as a test function and integrate in ${B_R}$. Notice that $w_t = \Phi'(u) u_t$, so \begin{equation*} \int_{B_R} \Phi'(u) |u_t|^2 = \int_{B_R} w_t \Delta w + \int_{B_R} w_t \nabla \cdot (u E) \end{equation*} Since $\nabla w \cdot x = 0$ on $\partial {B_R}$ (by the no-flux condition and \eqref{eq:E point outwards}), the same holds for $\nabla w_t$. We can integrate by parts to recover \begin{equation*} \int_{B_R} \Phi'(u) |u_t|^2 = - \frac{\diff }{\diff t}\int_{B_R} |\nabla w|^2 + \int_{\partial B_R} w_t u E \cdot \frac{x}{|x|} - \int_{B_R} u \nabla w_t \cdot E . \end{equation*} Using assumption \eqref{eq:E point outwards} the second term on the right-hand side vanishes. Integrating in $[0,T]$ we have \begin{equation*} \int_0^T \int_{B_R} \Phi'(u) |u_t|^2 + \int_{B_R} |\nabla w (T)|^2 = \int_{B_R} |\nabla w(0)|^2 - \int_0^T \int_{B_R} u \nabla w_t \cdot E \end{equation*} Integrating by parts in time the last integral \begin{align*} \int_0^T \int_{B_R} \Phi'(u) |u_t|^2 + \int_{B_R} |\nabla w (T)|^2 &= \int_{B_R} |\nabla w(0)|^2 + \int_0^T \int_{B_R} u_t \nabla w \cdot E \\ &\qquad + \int_{B_R} u(0) \nabla w(0) \cdot E - \int_{B_R} u(T) \nabla w(T) \cdot E. \end{align*} Notice that $u_t \nabla w = \Phi'(u)^{\frac 1 2} u_t \Phi'(u)^{\frac 1 2} \nabla u$.
Applying Young's inequality, we deduce \begin{equation} \begin{aligned} \frac 1 2 \int_0^T \int_{B_R} \Phi'(u) |u_t|^2 + \frac 1 2 \int_{B_R} |\nabla w (T)|^2 &\le \int_{B_R} |\nabla w(0)|^2 + \frac{1} 2\int_0^T \int_{B_R} \Phi'(u) |\nabla u|^2| E|^2 \\ &\qquad + \int_{B_R} u(0) \nabla w(0) \cdot E + \int_{B_R} |u(T)|^2 |E|^2. \end{aligned} \end{equation} From the estimates above, we know that $c_1 |\nabla u| \le \Phi'(u) |\nabla u| = |\nabla \Phi (u)| \in L^2$. Combined with the estimate above, the result follows. Finally, since $Z' = \min \{ 1, \Phi' \}$, we have \begin{equation*} |z_t|^2 \le |Z'(u)|^2 | u_t| ^2 \le |\Phi'(u)| |u _t|^2 . \end{equation*} \end{proof} \subsection{Free energy and its dissipation when $E = \nabla V$} When $E = \nabla V$ we have, again, a variational interpretation of the equation that leads to additional a priori estimates. We can rewrite equation \eqref{eq:main regularised bounded} as \begin{equation} \label{eq:bounded regularised Wassertein} \frac{\partial u}{\partial t} = \nabla \cdot \left( \Phi'(u) \nabla u + u \nabla V \right) = \nabla \cdot \left( u \left\{ \frac{ \Phi'(u) }{u } \nabla u + \nabla V \right\} \right) = \nabla \cdot \left( u \nabla \left\{ \Theta (u) + V \right\} \right) \end{equation} where \begin{equation} \label{eq:Theta} \Theta(s) = \int_1^s \frac{\Phi'(\sigma)}{\sigma} \diff \sigma. \end{equation} \begin{remark} Since $c_1 \le \Phi'(\rho) \le c_2$, we have $ \Theta(\rho) \sim \alpha \ln \rho$ so $\Theta^{-1} (\rho) \sim e^{\alpha^{-1} \rho}$. In particular $\Theta (0^+) = -\infty$. This is why we have to integrate from $1$ in this setting. However, when $\Phi (s) = s^m$ then $\Theta (s) = \frac{m}{m-1} ( s^{ {m-1}} - 1)$. So $\Theta^{-1} (s) = ( 1 - \frac{1-m}{m} s )^{ \frac 1 {m - 1} }$. For $\Phi$ elliptic, $\Theta^{-1} : \mathbb R \to [0,+\infty)$. However, for the FDE passing to the limit we are restricted to $s \le \frac m {1-m}$. \end{remark} Formulation \eqref{eq:bounded regularised Wassertein} shows that this equation is the 2-Wasserstein gradient flow of the free energy \begin{equation*} \mathcal F_\Phi [u] = \int_{B_R} \left( \int_1^{u(x)}\Theta(s)\diff s + V(x) u(x) \right) \diff x. \end{equation*} Along the solutions of \eqref{eq:main regularised bounded} it is easy to check that \begin{equation} \label{eq:bounded regularised decay of free energy} \frac{\diff}{\diff t} \mathcal F_\Phi [u (t)] = - \int_{B_R} u \left| \nabla (\Theta(u) + V ) \right|^2 \diff x \le 0. \end{equation} Also, by integrating in time we have that \begin{equation} \label{eq:bounded regularised estimate of free energy disipation} 0 \le \int_0^T \int_{B_R} u \left| \nabla (\Theta(u) + V ) \right|^2 \diff x \diff t= \mathcal F_\Phi [u_0] - \mathcal F_\Phi[u(T)]. \end{equation} Finally, let us take a look at the stationary states. For any $H \in \mathbb R$, the solution of $\Theta(u) + V = - H$ is a stationary state. Since $\Theta: [0,+\infty) \to \mathbb R$ is non-decreasing, we have that $H = - \Theta (u(0))$, where $u(0)$ denotes the value at $x = 0$ (recall that $V(0) = 0$). We finally define \begin{equation*} u_{V+H} \, {\coloneqq} \, \Theta^{-1} \Big( -( H + V ) \Big). \end{equation*} \begin{remark} When $\Phi$ is elliptic $u_{V+H} \le \Theta^{-1} (-H)$. In the case of the FDE we have \begin{equation*} u_{V+H} = \left( 1 + \tfrac{1-m}{m} (H + V) \right)^{ \frac 1 {m - 1} } = \rho_{V+h}. \end{equation*} where $h = H + \frac{m}{1-m}$. When $h > 0$, $\rho _{V+h}$ is bounded, but $\rho_V$ is not.
\end{remark} \subsection{Comparison principle and $L^1$ contraction} Let us present a class of solutions which have a comparison principle, and are therefore unique. \begin{definition} We define strong $L^1$ solutions of \eqref{eq:main regularised bounded} as distributional solutions such that \begin{enumerate} \item $u \in C( [0,T] ; L^1 ({B_R}) )$. \item $\Phi(u) \in L^1(0,T; W^{1,1} ({B_R}))$, $\Delta \Phi(u) \in L^1 ((0,T) \times {B_R})$. \item $u_t \in L^2 (0,T; L^1({B_R}))$. \end{enumerate} \end{definition} \begin{theorem} \label{thm:ball comparison Phi smooth} Assume $E \cdot n(x) = 0$. Let $u, \overline u$ be two strong $L^1$ solutions of \eqref{eq:main regularised bounded}. Then, we have that \begin{equation*} \int_{B_R} [u (t) - \overline u(t)]_+ \le \int_{B_R} [u (0) - \overline u (0)]_+ . \end{equation*} In particular $ \| u(t) - \overline u (t) \|_{L^1 ({B_R})} \le \| u(0) - \overline u(0)\|_{L^1 ({B_R})} $ and, for each $u_0 \in L^1 ({B_R})$, there exists at most one strong $L^1$ solution. \end{theorem} \begin{proof} We set $w = \Phi (u) - \Phi (\overline u)$. Let $j$ be convex and denote $p = j'$. We have, using the no flux condition \begin{align*} \int_0^T \int_{B_R} (u - \overline u)_t p(w) &= \int_0^T\int_{B_R} p(w) \nabla \cdot \{ \nabla w + (u - \overline u)E \} \\ &=- \int_0^T\int_{B_R} p'(w) | \nabla w|^2 + \int_0^T\int_{B_R} p(w) \nabla \cdot \left\{ (u - \overline u) E \right\} . \end{align*} Notice that $\nabla u = \frac{1}{\Phi'(u)} \nabla \Phi(u) \in L^1((0,T) \times {B_R})$ due to \eqref{eq:Phi elliptic}. Expanding the divergence, we have \begin{equation*} \int_0^T\int_{B_R} (u - \overline u)_t p(w) \le \int_0^T\int_{B_R} p(w) \left( \nabla (u - \overline u) \cdot E + (u - \overline u) \nabla \cdot E \right) . \end{equation*} Then, as $p \to \sign_+$, we have $p(w) \to \sign_0^+ ( w ) = \sign_0^+ (u - \overline u)$ and \begin{equation*} \int_0^T\int_{B_R} ( [u - \overline u]_+ )_t \le \int_0^T\int_{B_R} \left( \nabla [u - \overline u]_+ \cdot E + [u - \overline u]_+ \nabla \cdot E \right) = \int_0^T\int_{B_R} \nabla \cdot ( [u - \overline u]_+ E ). \end{equation*} Using again that $E \cdot n(x) = 0$ on $\partial {B_R}$, the right-hand side vanishes. This completes the proof. \end{proof} \begin{remark}[Uniform continuity in time] Because of the $L^1$ contraction and the semigroup property, we have $ \| u(t + h) - u(t) \|_{L^1} \le \| u(h) - u(0) \|_{L^1}. $ Hence, if $u(h) \to u(0)$ in $L^1$, we have uniform continuity in time with modulus $\omega(h) = \| u(h) - u(0) \|_{L^1}$. \end{remark} \begin{remark}[On the assumption $E \cdot x = 0$ on $\partial {B_R}$] \label{sec:condition VR no flux} Notice that to recover the $L^p$ estimates in \Cref{thm:Lp estimates} (which depend on $\| \nabla \cdot E \|_{L^\infty}$) we assume only that $E \cdot x \ge 0$ on $\partial {B_R}$. However, later (as in \Cref{lem:bounded regularised estimates ut and nabla Phi u} and \Cref{thm:ball comparison Phi smooth}) we require $E \cdot x = 0$ on $\partial {B_R}$. The estimates in these results do not include $\nabla \cdot E$, and so it seems possible to extend the results to this setting by approximation.
\end{remark} \section{The Aggregation-Fast Diffusion Equation} \label{sec:agg-diff equation} We start this section by providing a weaker notion of solution. \begin{definition} We say that $\rho \in L^1((0,T) \times {B_R})$ is a weak $L^1$ solution of \eqref{eq:main bounded domain} if $\rho^m \in L^1 ( 0,T; W^{1,1} ({B_R}) )$ and, for every $\varphi \in L^\infty (0,T; W^{2,\infty} ({B_R}) \cap W^{1,\infty}_0 ({B_R})) \cap C^1 ([0,T]; L^1({B_R}))$ we have that \begin{equation*} \int_{B_R} \rho(t) \varphi(t) - \int_0^t \int_{B_R} \rho(s) \varphi_t (s) \diff s = - \int_0^t \int_{B_R} \left( \nabla \rho^m \cdot \nabla \varphi + \rho \nabla V \cdot \nabla \varphi \right) + \int_{B_R} \rho_0 \varphi(0), \end{equation*} for a.e. $t \in (0,T)$. \end{definition} If $\nabla V \cdot n(x) = 0$ we then have $\nabla \rho^m \cdot n = 0$ and we can write the notion of very weak $L^1$ solution by integrating the diffusion term by parts once more in space: \begin{equation*} \int_{B_R} \rho(t) \varphi(t) - \int_0^t \int_{B_R} \rho(s) \varphi_t (s) \diff s = \int_0^t \int_{B_R} \left( \rho^m \Delta \varphi - \rho \nabla V \cdot \nabla \varphi \right) + \int_{B_R} \rho_0 \varphi(0). \end{equation*} \begin{theorem}[$L^1$ contraction for $H^1$ solutions bounded below] Assume that $\rho, \overline \rho$ are weak $L^1$ solutions of \eqref{eq:main bounded domain} with initial data $\rho_0 $ and $\overline \rho_0$, $\rho, \overline \rho \in H^1 ((0,T) \times {B_R})$, and $\rho, \overline \rho \ge c_0 > 0$. Then \begin{equation*} \int_{B_R} (\rho(t) - \overline \rho(t))_+ \le \int_{B_R} (\rho_0 - \overline \rho_0)_+. \end{equation*} \end{theorem} \begin{proof} Since the solutions are in $H^1$ and are bounded below, then $\rho^m, \overline \rho^m \in H^1 ((0,T) \times B_R)$. Let $p$ be non-decreasing and smooth. Setting $w = \rho^m - \overline \rho^m$, by approximation we may take $\varphi = p(w)$ as a test function. Thus we deduce \begin{align*} \int_0^t \int_{B_R} ( \rho(s) - \overline \rho(s))_t p(w) &= -\int_0^t \int_{B_R} \left( p'(w) |\nabla w|^2 + ( \rho - \overline \rho) \nabla p(w) \cdot \nabla V \right) . \end{align*} Proceeding as in \Cref{thm:ball comparison Phi smooth} for $\Phi$ smooth and using \eqref{eq:V no aggregation on boundary} we have that \begin{equation*} \int_0^t \int_{B_R} ( [ \rho(s) - \overline \rho(s) ]_+)_t \diff s \le 0, \end{equation*} and this proves the result. \end{proof} We can now construct a semigroup of solutions. We begin by constructing solutions for regular data, by passing to the limit in regularised problems with a sequence of smooth non-linearities $\Phi_k(s) \to \Phi(s) = s^m$. We consider the sequence $\Phi_k$ of functions given by $\Phi_k(0) = 0$ and \begin{equation} \label{eq:Phik} \Phi_k' (s) \sim \begin{dcases} m k^{m-1} & s > k, \\ m s^{m-1} & s \in [k^{-1}, k], \\ m k^{1-m} & s < k^{-1} \end{dcases} \end{equation} up to a smoothing at the interfaces. We define \begin{equation*} Z(s) = \int_0^s \min \{ 1 , \Phi' (\sigma) \} \diff \sigma = \int_0^s \min \{ 1 , m \sigma^{m-1} \} \diff \sigma = \begin{dcases} s & s < m^{\frac 1 {1-m}} \\ C_m + s^m & s \ge m^{\frac 1 {1-m}} \end{dcases} . \end{equation*} \begin{theorem}[Existence of solutions for regular initial data] \label{thm:existence bounded domain regular data} Assume $V \in W^{2,\infty} ({B_R})$, $V \ge 0$, $V (0) = 0$, $\nabla V \cdot x = 0$ on $\partial {B_R}$ and the technical assumption \eqref{eq:rhoV in L1ee loc}.
Let $\rho_0 $ be such that \begin{equation*} 0 < \varepsilon \le \rho_0 \le \varepsilon^{-1}, \qquad \rho_0 \in H^1 ({B_R}). \end{equation*} Then, the sequence $u_k$ of solutions for \eqref{eq:main regularised bounded} where $\Phi = \Phi_k$ given by \eqref{eq:Phik} is such that \begin{alignat*}{2} u_k &\rightharpoonup \rho \qquad && \text{weakly in } H^1 ((0,T) \times {B_R}) \\ u_k &\to \rho \qquad && \text{a.e. in }(0,T) \times {B_R} \\ \Phi_k (u_k) &\rightharpoonup \rho^m \qquad &&\text{weakly in } L^2 (0,T, H^1 ({B_R})) \end{alignat*} and $\rho$ is a weak $L^1$ solution of the problem. Moreover, we have that $\rho \ge \omega(\varepsilon) > 0$, \begin{equation*} \| \rho(t) \|_{L^1} = \|\rho_0 \|_{L^1}, \qquad \| \rho(t) \|_{L^q} \le e^{ t \| \Delta V \|_{L^\infty (B_R)} } \| \rho_0 \|_{L^q}. \end{equation*} In fact, $\rho$ is the unique weak $L^1$ solution which is $H^1$ and bounded below. \end{theorem} \begin{proof} First, we point out that $\Phi_k(\rho_0) \in W^{1,\infty} (B_R)$. Hence, for the approximation, by \eqref{eq:regularisation Lp estimates}, $u_k \in L^\infty ((0,T) \times B_R)$ and, due to \eqref{eq:smooth estimate nabla Phi u}, $\Phi_k (u_k) \in L^2( 0,T; H^1 (B_R))$ with uniform norm bounds. This ensures (up to a subsequence) \begin{alignat*}{2} u_k &\rightharpoonup \rho \qquad && \text{weak-$\star$ in } L^\infty((0,T) \times B_R),\\ \Phi_k (u_k) & \rightharpoonup \phi \qquad && \text{weakly in }L^2 ( (0,T) \times B_R ), \\ \nabla \Phi_k (u_k) &\rightharpoonup \nabla \phi \qquad && \text{weakly in }L^2 ( (0,T) \times B_R ), \\ Z_k (u_k (t,x)) &\rightharpoonup Z^* \qquad && \text{weakly in }H^1 ((0,T) \times B_R), \\ Z_k (u_k (t,x)) &\to Z^* \qquad && \text{a.e. in } (0,T) \times B_R. \end{alignat*} Let us characterise $\phi$ as $\Phi (\rho)$. For $k > m^{-\frac 1 {1-m}}$ we can compute $\min \{1, \Phi_k'\}$ explicitly from \eqref{eq:Phik}, and hence we have \begin{equation*} Z_k'(s) - Z'(s) = \begin{dcases} 0 & s \in [0,k], \\ m(k^{m-1} - s^{m-1}) & s > k . \end{dcases} \end{equation*} Since $u_k$ are uniformly bounded in $L^\infty$, taking $k$ large enough we have that \begin{equation*} Z_k(u_k) = Z(u_k). \end{equation*} Thus $Z(u_k)$ converges pointwise to $Z^*$. But $Z$ is continuous and strictly increasing, so it is invertible. Thus $u_k \to Z^{-1} (Z^*)$ a.e. in $(0,T) \times B_R$. Since, if both exist, the weak $L^2$ and a.e. limits must coincide (apply Banach-Saks theorem and Cesàro mean arguments), then $u_k \to \rho$ a.e. in $(0,T) \times B_R$. Finally, due to the locally uniform convergence of $\Phi_k \to \Phi$, $\Phi_k (u_k) \to \Phi (\rho)$ a.e. and hence $\phi = \Phi (\rho)$. We can now upgrade to strong convergence, using the uniform $L^\infty$ bound $|u_k| \le C$. Hence, together with the point-wise convergence, we can apply the Dominated Convergence Theorem to show that our chosen subsequence also satisfies $ u_k \to \rho $ in $ L^q ( (0,T) \times B_R ), \ \forall q \in [1,\infty). $ Let us show that we maintain an upper and positive lower bound. The upper bound $e^{t \| \Delta V \|_{L^\infty ({B_R})}} \| \rho_0 \|_{L^\infty ({B_R})}$ is uniform in $k$. Since, as $H \to \infty$, the stationary states $\Theta_k^{-1} (-(V + H))$ tend to zero uniformly, we can choose $H_k$ so that \begin{equation*} \rho_0 \ge \Theta_k^{-1} (-(V + H_k)) \ge \omega(\varepsilon). \end{equation*} Thus, $u_k \ge \omega(\varepsilon)$ and, therefore, $\rho \ge \omega(\varepsilon)$ as well.
In fact, due to this lower bound \begin{equation*} |\nabla u_k| \le \frac {1}{Z'(\omega(\varepsilon))} |\nabla Z(u_k)| , \qquad |( u_k)_t| \le \frac {1}{Z'(\omega(\varepsilon))} |\partial_t Z(u_k)| \end{equation*} and so the convergence $u_k \rightharpoonup \rho$ is also weak in $H^1$ (up to a subsequence). But then $\rho$ is the unique weak $L^1$ solution with this property. Since the limit is unique, the whole sequence $u_k$ converges to $\rho$ in all the senses above. \end{proof} \begin{corollary}[Approximation of the free energy] \label{cor:bounded domain and data properties of free energy} Under the hypotheses of \Cref{thm:existence bounded domain regular data} we have that \begin{equation*} \mathcal F_{\Phi_k} [u_k (t)] \to \mathcal F_R[\rho(t)] , \qquad \text{for a.e. } t > 0 , \end{equation*} and \begin{equation*} \int_0^T \int_{B_R} \rho \left| \nabla \left( \tfrac{m}{m-1} \rho^{m-1} + V \right)\right|^2 \le \mathcal F_R [\rho_0] - \mathcal F_R[\rho(T)]. \end{equation*} In particular, $\mathcal F_R [\rho(t)]$ is a non-increasing function of $t$. \end{corollary} \begin{proof} Since $u_k \to \rho$ a.e. in $(0,T) \times {B_R}$, then for a.e. $t > 0$ we have that $u_k (t) \to \rho(t)$. Since $u_k$ is uniformly bounded, then the Dominated Convergence Theorem ensures the convergence of $\mathcal F_{\Phi_k} [u_k]$. Taking into account \eqref{eq:bounded regularised estimate of free energy disipation}, the sequence $ u_k^{\frac 1 2} \nabla (\Theta(u_k) + V) $ is uniformly bounded in $L^2 ((0,T) \times {B_R})$. Therefore, up to a subsequence, it has a weak $L^2$ limit $\xi$. We can write \begin{equation*} u_k^{\frac 1 2} \nabla (\Theta(u_k) + V) = \frac{\Phi_k'(u_k)}{u_k^{\frac 1 2}} \nabla u_k + u_k^{\frac 1 2} \nabla V = u_k^{-\frac 1 2} \left( \nabla \Phi_k(u_k) + u_k \nabla V \right). \end{equation*} We know that $ \nabla \Phi_k(u_k) + u_k \nabla V \rightharpoonup \nabla \rho^m + \rho \nabla V $ weakly in $L^2$. On the other hand, since we know $u_k, \rho \ge \omega(\varepsilon)$ we can apply the mean value theorem to show that, up to a further subsequence, \begin{equation*} \int_0^T \int_{B_R} |u_k^{-\frac 1 2} - \rho^{-\frac 1 2}|^2 \diff x = \int_0^T\frac 1 4 \int_{B_R} |\eta_k (x)|^{-3} |u_k - \rho|^2 \diff x \le C \int_0^T \int_{B_R} |u_k - \rho|^2 \diff x \to 0, \end{equation*} where the strong $L^2$ convergence follows, up to a further subsequence, from the weak $H^1$ convergence. Using the product of strong and weak convergence, \begin{equation*} u_k^{\frac 1 2} \nabla (\Theta(u_k) + V) \rightharpoonup \rho^{\frac 1 2} \nabla \left( \tfrac m{m-1} \rho^{m-1} + V \right), \qquad \text{weakly in } L^1 ((0,T) \times {B_R}). \end{equation*} But this limit must coincide with $\xi$, so the limit holds also weakly in $L^2$. The weak lower semicontinuity of the $L^2$ norm yields the result. \end{proof} We are also able to deduce from these energy estimates an $L^1$ bound of $\nabla \rho^m$. Unlike \eqref{eq:smooth estimate nabla Phi u}, this bound requires only local boundedness of $\nabla V$. \begin{corollary} Under the hypotheses of \Cref{thm:existence bounded domain regular data} we have that \begin{align} \label{eq:bounded estimate nabla rhom in L1} \int_0^T \int_K |\nabla \rho^m| &\le \| \rho_0 \|_{L^1({B_R})}^{\frac 1 2} \left(\mathcal F_R[\rho_0] - \mathcal F_R[\rho(T)] + \int_0^T \int_K \rho |\nabla V|^2 \right)^{\frac 1 2}, \qquad \forall K \subset \overline{{B_R}}.
\end{align} \end{corollary} \begin{proof} We have that \begin{align*} \int_0^T \int_K |\nabla \rho^m| &= \int_0^T\int_K \rho |\tfrac{m}{m-1} \nabla \rho^{m-1}| \le \int_0^T \| \rho(t) \|_{L^1({B_R})}^{\frac 1 2} \left( \int_K \rho \left| \nabla \tfrac{m}{m-1} \rho^{m-1} \right|^2 \right)^{\frac 1 2} \diff t \end{align*} Hence, we conclude the result using \Cref{cor:bounded domain and data properties of free energy}, Jensen's inequality and the conservation of the $L^1$ norm. \end{proof} Now we move to $L^1$ data. We first point out that $L^1({B_R}) \subset L^m({B_R})$, so any $\rho \in L^1$ has finite $\mathcal F_R [\rho]$. To be precise, by applying Hölder's inequality with $p = \frac 1 m > 1$ we have the estimate \begin{equation} \label{eq:L1 controls Lm over compacts} \int_K \rho^m \le |K|^{{1-m}} \|\rho\|_{L^1(K)}^{m}. \end{equation} Now we exploit the density in $L^1$ of ``good'' initial data, via the comparison principle. \begin{theorem}[Existence of solution for $L^{1}$ initial data] \label{thm:existence bounded L1} Under the assumptions of \Cref{thm:existence bounded domain regular data}, there exists a semigroup $S(t) : L_+^{1} ({B_R}) \to L^{1} ({B_R})$ with the following properties \begin{enumerate} \item For $ 0 < \varepsilon \le \rho_0 \le \varepsilon^{-1}$ and $\rho_0 \in H^1 ({B_R})$, $S(t) \rho_0$ is the unique weak $L^1$ solution constructed in \Cref{thm:existence bounded domain regular data}. \item We have $\| S(t) \rho_0 \|_{L^1({B_R})} = \| \rho_0 \|_{L^1 ({B_R})}$. \item We have $L^1$ comparison principle and contraction \begin{equation*} \int_{B_R} [S(t) \rho_0 - S(t) \overline \rho_0 ]_+ \le \int_{B_R} [ \rho_0 - \overline \rho_0 ]_+, \qquad \int_{B_R} |S(t) \rho_0 - S(t) \overline \rho_0 | \le \int_{B_R} | \rho_0 - \overline \rho_0 |. \end{equation*} \item \label{it:existence bounded L1 item 3} If $\rho_0 \in L^{1+\varepsilon}_+({B_R})$, then $\rho = S(t)\rho_0$ is the limit of the solutions $u_k$ of \eqref{eq:main regularised bounded} with \eqref{eq:Phik}, and \begin{equation*} \| \rho (t) \|_{L^{1+\varepsilon}} \le Ce^{\frac{\varepsilon}{1+\varepsilon} t \| \Delta V \|_{L^\infty}} \| \rho_0 \|_{L^{1+\varepsilon}}. \end{equation*} \item If $\rho_0 \in L^1_+ ({B_R})$ and \eqref{eq:V no aggregation on boundary} holds, then $S(t)\rho_0$ is a very weak $L^1$ solution. \item If $\rho_0 \in L^1_+ ({B_R})$, then $\mathcal F_R[\rho(t)]$ is non-increasing and we have \eqref{eq:bounded estimate nabla rhom in L1}. Hence, it is a weak $L^1$ solution. \end{enumerate} \end{theorem} \begin{remark} Notice that there is no concentration in finite time. This is due to the combination of the $L^1$ contraction with the uniform $L^{1+\varepsilon}$ estimate \eqref{eq:regularisation Lp estimates}. By the $L^1$ contraction, the sequence $S(t) \min\{ \rho_0 , k \}$ is Cauchy in $L^1$ and hence it has a limit in $L^1$. No Dirac mass may appear in finite time. In $\mathbb R^n$ we do not have an equivalent guarantee that the limit of $S(t) \rho_{0,k}$ belongs to $L^1 (\mathbb R^n)$ for an approximating sequence $\rho_{0,k}$. We will, however, have this information in the space $\mathcal M ({\mathbb R^n})$. \end{remark} \begin{remark} Notice that the construction of $S(t)$ is unique, since for dense data it produces the unique $H^1$ solution bounded below (which also comes as the limit of the approximations), and then it is extended into $L^1$ by uniform continuity.
\end{remark} \begin{proof}[Proof of \Cref{thm:existence bounded L1}] We start by defining $S(t) \rho_{0} = \rho$ for the solutions constructed in \Cref{thm:existence bounded domain regular data}. Let us deal with the remaining cases. \textbf{Step 1. $0 < \varepsilon \le \rho_0 \le \varepsilon^{-1}$ but not necessarily in $H^1$.} We regularise $\rho_0$ by any procedure such that $H^1 ({B_R}) \ni \rho_{0,\ell} \to \rho_0$ in $L^{1+\varepsilon}$ and a.e. Hence $0 < \varepsilon \le \rho_{0,\ell} \le \varepsilon^{-1}$ for $\ell$ large enough. By using stationary solutions and \eqref{eq:regularisation Lp estimates} we have that $0 < \omega(\varepsilon) \le \rho_{\ell} \le C(t)$. By the $L^1$ contraction, for all $t > 0$, $S(t) \rho_{0,\ell}$ is a Cauchy sequence, and hence it has a unique $L^1$ limit. We define \begin{equation*} S(t) \rho_0 = L^1-\lim_\ell S(t) \rho_{0,\ell}. \end{equation*} We have \begin{equation*} \int_{B_R} |S(t) \rho_{0,\ell} - S(t) \rho_0| \le \int_{B_R} |\rho_{0,\ell} - \rho_0|, \qquad \ell > \ell_0. \end{equation*} Up to a subsequence, $\rho_\ell^m$ converges to $\rho^m$ a.e. and in $L^\infty$-weak-$\star$, and hence $S(t) \rho_0$ is a weak $L^1$ solution. Taking a different $\overline \rho_0$ with the same properties, and $\overline \rho_{0,\ell}$ its corresponding approximation, again for $\ell$ large, $0 < \omega(\varepsilon) \le \overline \rho \le C(t)$. Then we have that \begin{equation*} \int_{B_R} |S(t) \rho_{0,\ell} - S(t) \overline \rho_{0,\ell} | \le \int_{B_R} |\rho_{0,\ell} - \overline \rho_{0,\ell}|, \qquad \ell > \ell_0. \end{equation*} Letting $\ell \to +\infty$ we recover the $L^1$ contraction. The comparison principle follows similarly. \textbf{Step 2. $\rho_0 \in L^{1}$. Approximation by solutions of \Cref{thm:existence bounded domain regular data}.} We define \begin{equation*} \rho_{0,K} =\min \{ \rho_0, K \}, \qquad \rho_{0, K, \varepsilon} = \min \{ \rho_0, K \} + \varepsilon. \end{equation*} For the solutions constructed in Step 1 we have that $\rho_{K, \varepsilon} \searrow \rho_K$ as $\varepsilon \searrow 0$ and as $K \nearrow +\infty$ we have $\rho_K \nearrow \rho$. By the $L^1$ contraction, we have as above that the sequences are Cauchy and hence we have $L^1$ convergence at each stage. The contraction and comparison are proven as in Step 1. \textbf{Step 3. \Cref{it:existence bounded L1 item 3}.} Due to the $L^{1+\varepsilon}$ bound, we know that $u_k \rightharpoonup \rho^*$ weakly in $L^{1+\varepsilon} ( (0,T) \times {B_R})$. On the other hand, we can select adequate regularisations of the initial datum $\rho_{0,\ell}\in H^1 $ such that $\varepsilon_\ell \le \rho_{0,\ell} \le \varepsilon_\ell^{-1}$ for some $\varepsilon_\ell > 0$, and the corresponding solutions $u_{k, \ell}$ of \eqref{eq:main regularised bounded} with $\Phi = \Phi_k$ given by \eqref{eq:Phik} satisfy the $L^1$ contraction. Integrating in $(0,T)$ we have that \begin{equation*} \int_0^T \int_{B_R} |u_k - u_{k,\ell}| \le T \int_{B_R} |\rho_0 - \rho_{0,\ell}|. \end{equation*} As $k \to \infty$, by the lower semi-continuity of the norm \begin{equation*} \int_0^T \int_{B_R} |\rho^* - S(t) \rho_{0,\ell}| \le T \int_{B_R} |\rho_0 - \rho_{0,\ell}|. \end{equation*} As $\ell \to \infty$ we recover $\rho^* = S(t) \rho_{0}$. \textbf{Step 4. $\rho_0 \in L^{1}$. Solutions in the very weak sense.} Finally, let us show that the solutions satisfy the equation in the very weak sense.
Since we can integrate by parts, $\rho_{K,\ell}$ satisfies the very weak formulation, and we can pass to the limit to show that so does $\rho_K$. We have shown that $\rho_K \nearrow \rho$ in $L^1$. With the same philosophy, we prove that $\rho_K (t) \nearrow \rho(t)$ for every $t > 0$, so $\rho(t) \in L^1 ({B_R})$ for a.e. $t$, and we can pass to the limit in the weak formulation. We only need to deal with the diffusion term. We also have that $\rho_K^m \nearrow \rho^m$. Due to \eqref{eq:L1 controls Lm over compacts} and the Monotone Convergence Theorem, we deduce that $\rho^m \in L^1 ((0,T) \times {B_R})$. \textbf{Step 5. Conservation of mass.} Since all the limits above hold in $L^1$, then preservation of the $L^1$ mass follows from the properties proved in \Cref{thm:existence bounded domain regular data}. \textbf{Step 6. Decay of the free energy.} Since all the limits above are taken monotonically and a.e., we can pass to the limit in \begin{equation*} \int_{B_R} \rho^m, \qquad \int_{B_R} V \rho \end{equation*} by the Monotone Convergence Theorem. Hence, the decay of the free energy proven in \Cref{cor:bounded domain and data properties of free energy} extends to $L^1$ solutions. We can also pass to the limit in \eqref{eq:bounded estimate nabla rhom in L1}. \end{proof} \section{An equation for the mass} \label{sec:mass} The aim of this section is to develop a well-posedness theory for the mass equation \eqref{eq:mass}. We will show that the natural notion of solution in this setting is the notion of viscosity solution. We will take advantage of the construction of the solution $\rho$ of \eqref{eq:main bounded domain} as the limit of the regularised problems \eqref{eq:main regularised bounded}. \subsection{Mass equation for the regularised problem} If $E$ is radially symmetric and $u$ is the solution of \eqref{eq:main regularised bounded}, its mass function $M$ satisfies \begin{align*} \frac{\partial M}{\partial t} &= \kappa (v)^2 \frac{\partial }{\partial v} \Phi \left( \frac{\partial M }{\partial v} \right) + \kappa(v) \frac{\partial M }{\partial v} E (v), \qquad \kappa(v) = n \omega_n^{\frac 1 n } v ^{\frac{n-1} n}, \end{align*} by integrating the equation for $u = \frac{\partial M}{\partial v}$. Notice that when $E = \nabla V$ then $E(v) = \kappa(v) \frac{\partial V}{\partial v}$. This change of variables guarantees that \begin{align*} \int_{B_R} f(t,x) \diff x = |\partial B_1| \int_0^{R} f(t,r) r^{n-1} \diff r = \frac{|\partial B_1|}{n |B_1|} \int_0^{R_v} f(t,v) \diff v = \int_0^{R_v} f(t,v) \diff v, \end{align*} for radially symmetric functions. \begin{theorem}[Comparison principle for masses] \label{thm:comparison classical solutions} Let $M_1$ and $M_2$ be two classical solutions of the mass problem such that $M_1(0,v) \le M_2 (0,v)$. Then $ M_1 \le M_2. $ \end{theorem} \begin{proof} For any $\lambda >0$, let us consider the continuous function \begin{equation*} w (t,v) = e^{-\lambda t} (M_1 (t,v)- M_2(t,v)). \end{equation*} Notice that $w \to 0$ as either $t \to +\infty$ or $v \to 0, R_v$. Assume, towards a contradiction, that $w$ reaches positive values. Hence, it reaches a positive global maximum at some point $t_0 > 0$ and $v_0 \in (0,R_v)$.
At this maximum \begin{align*} 0 &= \frac{\partial w}{\partial t} (t_0, v_0) = e^{-\lambda t} \frac{\partial}{\partial t} (M_1 - M_2) - \lambda e^{-\lambda t} (M_1 - M_2) \\ 0& =\frac{\partial w}{\partial v} (t_0, v_0)= e^{-\lambda t} \frac{\partial}{\partial v} (M_1 - M_2) \\ 0& \ge \frac{\partial^2 w}{\partial v^2}(t_0, v_0) = e^{-\lambda t} \frac{\partial^2}{\partial v^2} (M_1 - M_2) . \end{align*} At $(t_0, v_0)$, we simply write the contradictory result \begin{align*} 0 &< \lambda e^{\lambda t} w(t_0, v_0) = \lambda (M_1 - M_2 ) = \frac{\partial}{\partial t} (M_1 - M_2)\\ &= (n \omega_n^{\frac 1 n} v_0^{\frac{n-1} n })^2 \left\{ \Phi'\left( \frac{\partial M_1}{\partial v}\right) \frac{\partial^2 M_1}{\partial v^2} + \frac{\partial M_1}{\partial v} E \right\} \\ &\qquad \qquad -(n \omega_n^{\frac 1 n} v_0^{\frac{n-1} n })^2 \left\{ \Phi' \left( \frac{\partial M_2}{\partial v}\right) \frac{\partial^2 M_2}{\partial v^2} + \frac{\partial M_2}{\partial v} E \right\} \\ &=(n \omega_n^{\frac 1 n} v_0^{\frac{n-1} n })^2 \left\{ \Phi'\left( \frac{\partial M_1}{\partial v}\right)\left( \frac{\partial^2 M_1}{\partial v^2} - \frac{\partial^2 M_2}{\partial v^2} \right) \right\} \le 0. \qedhere \end{align*} \end{proof} Let us define the Hölder semi-norm for $\alpha \in (0,1)$ \begin{equation*} [f ]_{C^\alpha([a,b])} = \sup_{ \substack{ x,y \in [a,b] \\ x \ne y } } \frac{|f(x)-f(y)|}{|x-y|^\alpha}. \end{equation*} We have the following estimate. \begin{lemma}[Spatial regularity of the mass] \label{lem:M Calpha in space} If $u(t, \cdot) \in L^q (B_R)$ for some $q \in [1,\infty)$ then \begin{equation} \label{eq:bounded regularised M Calpha space} [M (t, \cdot) ]_{C^{\frac {q-1}q} ([0,R_v])} \le \| u \|_{L^q({B_R})}. \end{equation} If $q = \infty$ the same holds in $W^{1,\infty} (0,R_v)$. \end{lemma} \begin{proof} For $v_1 \ge v_2$ we have \begin{equation*} |M(t,v_1) - M(t, v_2)| = \int_{\widetilde B_{v_1} \setminus \widetilde B_{{v_2}}} u(t,x) \diff x \le \| u \|_{L^q} |\widetilde B_{v_1} \setminus \widetilde B_{v_2}|^{\frac {q-1}q} = \| u \|_{L^q} ( v_1 - v_2 )^{\frac{q-1}{q}}.\qedhere \end{equation*} \end{proof} \begin{lemma}[Temporal regularity of the mass] There exists a constant $C > 0$, independent of $u$ or $\Phi$, such that \begin{equation} \label{eq:bound Mt in L2} \int_0^T \int_{0}^{R_v} | M_t |^2 \diff v \diff t \le C \left( \int_{{B_R}} \Psi(u_0) + \| E \|_{L^\infty}^2 \int_0^T \int_{B_R} u(t,x)^2 \diff x \diff t \right) . \end{equation} In particular, if $u_0 \in L^2$ and $\Psi(u_0) \in L^1$ then $M \in C^{\frac 1 2} (0,T; L^1 (0,R_v))$. \end{lemma} \begin{proof} Let us first prove an estimate for $\| M_t (t, \cdot) \|_{L^2 (0,R_v)}$. Since $M(t,v) = \int_{\widetilde B_v} u(t,x) \diff x$, we can differentiate in time under the integral sign. Applying Jensen's inequality \begin{align*} \int_{0}^{R_v} \left| \frac{\partial M}{\partial t} (t,v) \right|^2 \diff v &= \int_0^{R_v} \left( \int_{\widetilde B_{v}} \frac{\partial u}{\partial t} \diff x \right)^2 \diff v \\ &= \int_0^{R_v} \left( \int_{\widetilde B_{v}} \nabla \cdot ( \nabla \Phi(u) + u E ) \diff x \right)^2 \diff v \\ &= \int_0^{R_v} \left( \int_{\partial \widetilde B_{v}} (\nabla \Phi(u) + u E ) \cdot \frac x {|x|} \diff S_x \right)^2 \diff v \\ &\le \int_0^{R_v} \int_{\partial \widetilde B_{v}} \left| \nabla \Phi(u) + u E \right|^2 \diff S_x \diff v .
\end{align*} Making the change of variables $v = |B_1| r^n$ we have $|\widetilde B_v| = v = |B_1| r^n = |B_r|$ and \begin{equation*} \int_{0}^{R_v} \left| \frac{\partial M}{\partial t} (t,v) \right|^2 \diff v \le \int_0^{R} \int_{\partial B_r} \left| \nabla \Phi(u) + u E \right|^2 \diff S_x |B_1| n r^{n-1} \diff r = \| \nabla \Phi(u) + u E \|_{L^2 ({B_R})} ^2. \end{equation*} Due to \eqref{eq:smooth estimate nabla Phi u} we recover \eqref{eq:bound Mt in L2}. Finally \begin{align*} \| M(t_1) - M(t_2) \|_{L^1 (0,R_v)} & =\int_0^{R_v} |M(t_2, v) - M(t_1, v)| \diff v= \int_0^{R_v} \left |\int_{t_1}^{t_2} \frac{\partial M} { \partial t } (s,v) \diff s \right| \diff v \\ &\le \int_0^{R_v} \int_{t_1}^{t_2} \left | \frac{\partial M} { \partial t } (s,v)\right| \diff s \diff v \le |t_2-t_1|^{\frac 1 2} \left \| \frac{\partial M} { \partial t } \right\|_{L^2 ((0,T) \times (0,R_v))}. \qedhere \end{align*} \end{proof} \subsection{Aggregation-Fast Diffusion} We recall the definition of viscosity solution for the $p$-Laplace problem, which covers the singular ($p \in (1,2)$) and degenerate ($p > 2$) cases. We use the definition found in many texts (see, e.g., \cite{Juutinen2003,Medina2019} and the references therein). \begin{definition} For $p > 1$ a function $u$ is a viscosity supersolution of $-\Delta_{p} u = f (x, u, \nabla u)$ if $u \not \equiv \infty$ and, for every $x_0 \in \Omega$ and every $\varphi \in C^2 (\Omega)$ such that $u \ge \varphi$, $u (x_0) = \varphi (x_0)$, and $\nabla \varphi(x) \ne 0$ for all $x \ne x_0$, it holds that \begin{equation*} \lim_{r \to 0} \sup_{x \in B_r (x_0) \setminus \{x_0\}} (-\Delta_p \varphi(x)) \ge f(x_0, u(x_0), \nabla \varphi (x_0)). \end{equation*} \end{definition} Similarly, for our problem we define \begin{definition} For $m \in (0,1)$ a function $M$ is a viscosity supersolution of \eqref{eq:mass} if, for every $t_0 > 0, v_0 \in (0,R_v)$ and for every $\varphi \in C^2 ((t_0 - \varepsilon, t_0 + \varepsilon) \times (v_0 - \varepsilon, v_0 + \varepsilon))$ such that $M \ge \varphi$, $M (t_0,v_0) = \varphi (t_0,v_0)$, and $\frac{ \partial \varphi}{\partial v} (t,v) \ne 0$ for all $v \ne v_0$, it holds that \begin{align} \frac{\partial \varphi}{\partial t} (t_0, v_0 ) - (n \omega_n^{\frac 1 n } v_0 ^{\frac{n-1} n})^2 \left[ \lim_{r \to 0} \sup_{0 < |v-v_0| < r} \frac{\partial }{\partial v} \left[ \left( \frac{\partial \varphi }{\partial v} \right)^m \right] + \frac{\partial \varphi }{\partial v}(t_0,v_0) \frac{\partial V}{\partial v} (v_0) \right] \ge 0. \end{align} The corresponding definition of subsolution is made by inverting the inequalities. A viscosity solution is a function that is both a viscosity sub- and supersolution. \end{definition} \begin{remark} Since we have a one dimensional problem, we can write the viscosity formulation equivalently by multiplying by $( \frac{\partial \varphi }{\partial v} )^{1-m}$ everywhere, to write the problem in degenerate rather than singular form. \end{remark} \begin{remark} Our functions $M$ will be increasing in $v$. This allows for a simplification of the condition in some cases. For example, if we also have a lower bound on $\rho$, in the sense that \begin{equation*} M(t, v_2) - M(t, v_1) \ge c (v_2 - v_1), \qquad \forall v_0 + \varepsilon \ge v_2 \ge v_1 \ge v_0 - \varepsilon \text{ where } c > 0, \end{equation*} then it suffices to take viscosity test functions $\varphi$ such that $\frac{\partial \varphi}{\partial v} \ge \frac c 2$. In particular, we can simplify the definition of sub and super-solution by removing the limit and the supremum.
\end{remark} \begin{remark} We can define the upper jet as \begin{align*} \mathcal J^{2,+} M(t_0,v_0) = \Big\{ & (D\varphi(t_0,v_0), D^2\varphi(t_0,v_0)) \\ &\qquad : \varphi \in C^2 ((t_0 - \varepsilon,t_0 + \varepsilon)\times(v_0 - \varepsilon, v_0 + \varepsilon)), \\ &\qquad \qquad M(t,v) - \varphi(t,v) \le 0 = M(t_0,v_0) - \varphi(t_0,v_0) \Big\}. \end{align*} The elements of the upper jet are usually denoted by $(p,X)$. The lower jet $\mathcal J^{2,-}$ is constructed by changing the inequality above. The definition of viscosity subsolution (resp. super-) can be written in terms of the upper jet (resp. lower). \end{remark} \begin{theorem}[Existence from the semigroup theory for $\rho$] \label{thm:comparison principle for mass} Let $\rho_0 \in L^1 ({B_R})$. Then \begin{equation*} M(t,v) = \int_{\widetilde B_v} S(t)[\rho_0] (x) \diff x \end{equation*} is a viscosity solution of \eqref{eq:mass} with $M(t,0) = 0$ and $M(t,R_v) = \| \rho_0 \|_{L^1 ({B_R})}$. Furthermore, for any $v_1,v_2,T > 0$, $M \in C([0,T] \times [v_1,v_2])$ with a modulus of continuity that depends only on $n, m, v_1, v_2$, $T, \| \frac{\partial V}{\partial v} \|_{L^\infty(v_1,v_2)}$ and the modulus of continuity of $M_{\rho_0}$ in $[v_1,v_2]$. Moreover, we have the following interior regularity estimate: for any $T_1 > 0$ and $0 < v_1 < v_2 < R_v$ there exist $\gamma > 0$ and $\alpha \in (0,1)$ depending only on $n,m,\| \frac{\partial V}{\partial v} \|_{L^\infty(v_1,v_2)},v_1,v_2, T_1$, such that \begin{equation} \label{eq:local Calpha regularity} |M(t_1,v_1) - M(t_2,v_2) | \le \gamma \left( \frac{|v_1-v_2| + \| \rho_0 \|_{L^1 ({B_R})}^{\frac{m-1}{m+1}} |t_1 - t_2|^{\frac 1 {m+1}} }{ \min\{v_1, R_v-v_2 \} + \| \rho_0 \|_{L^1 ({B_R})}^{\frac{m-1}{m+1}} T_1^{\frac 1 {m+1}}} \right)^\alpha, \end{equation} for all $(t_i, v_i) \in [T_1, +\infty) \times [v_1,v_2]$. \end{theorem} \begin{proof} \textbf{Step 1. $\varepsilon \le \rho_0 \le \varepsilon^{-1}$ and $\rho_0 \in H^1 ({B_R})$.} Let us show that $$ M_{u_k} \to M_\rho \qquad \text{locally uniformly in } [0,T] \times (0,R_v), $$ that $M_\rho$ is a viscosity solution of \eqref{eq:mass}, and that $M_\rho$ is a weak local solution in the sense of \Cref{sec:classical regularity}. By our construction of $\rho$ by regularised problems in \Cref{thm:existence bounded domain regular data}, the strong $L^q$ convergence of $u_k$ to $\rho$ ensures that \begin{equation*} \int_0^T \sup_{v \in [0,R_v]} |M_{u_k}(t,v) - M_\rho(t, v)| \diff t \le \int_0^T \int_{{B_R}} |u_k (t,x) - \rho(t,x)|\diff x \diff t \to 0. \end{equation*} So we know $M_{u_k} \to M_\rho$ in $L^1 (0,T; L^\infty (0,R_v))$, and hence (up to a subsequence) a.e. Through estimates \eqref{eq:bounded regularised M Calpha space}, \eqref{eq:bound Mt in L2}, and \Cref{thm:relation between space and time regularities} we have \begin{equation} \label{eq:bound Mk Calpha} |M_{u_k}(t_1,v_1) - M_{u_k}(t_2,v_2)| \le C (\varepsilon) (|v_1-v_2|^\alpha + |t_1-t_2|^\gamma), \qquad t_1,t_2 \in [0,T],\ v_1,v_2 \in [\varepsilon, R_v - \varepsilon]. \end{equation} To check that $M_\rho$ is a viscosity solution, we select $v_0 \in (0,R_v)$. Taking a suitable interval $(\varepsilon , R_v - \varepsilon ) \ni v_0$, by the Ascoli--Arzelà theorem, a further subsequence is uniformly convergent. Since we have characterised the a.e. limit we have \begin{equation*} \| M_{u_k} - M_\rho \|_{L^\infty ([0,T] \times [\varepsilon, R_v - \varepsilon])} \to 0.
\end{equation*} Due to the uniform convergence, we can pass to the limit in the sense of viscosity solutions and $M_\rho$ is a viscosity solution at $x_0$. The argument is classical and goes as follows (see \cite{crandall+ishii+lions1992users-guide-viscosity}). Take a viscosity test function $\varphi$ touching $M_\rho$ from above at $x_0$. Then, due to the uniform convergence $M_{u_k}$ to $M_\rho$ in a neighbourhood of $x_0$, there exists points $x_k$ where $\varphi$ touches $M_{u_k}$ from above. We apply the definition of viscosity solution for $M_{u_k}$ at $x_k$, and pass to the limit. Due to the pointwise convergence, $M_\rho$ also satisfies \eqref{eq:bound Mk Calpha}. \textbf{Step 2. $\rho_0 \in L^{1}$.} We pick the approximating sequence \begin{equation*} \rho_{0,K,\varepsilon} = \max \{ \rho_0 ,K \} + \varepsilon. \end{equation*} As we did in \Cref{thm:existence bounded L1} the $L^1$ limit of the corresponding solutions is $S(t) \rho_0$. Furthermore, the limits $\varepsilon \searrow 0$ and $K \nearrow +\infty$ are taking monotonically in $\rho$, so also monotically in $M$. This guarantees monotone convergence in $M$. With the universal upper bound $1$ we have $L^1$ convergence. Since the $C^\alpha$ bound is uniform away from $0$, we know that $M$ maintains it and is continuous. Due to Dini's theorem the convergence is uniform over $[0,T] \times [\varepsilon, R_v - \varepsilon]$, and $M_{\rho}$ is a viscosity solution of the problem. The value $M(t,0) = 0$ is given by $S(t) \rho_0 \in L^1({B_R})$ and the value at $M(t,R_v) = a_{0,R}$ by the fact that $\| S(t) \rho_0 \|_{L^1({B_R})} = \| \rho_0 \|_{L^1 ({B_R})} = a_{0,R}$. The uniform continuity is a direct application of \Cref{cor:uniform regularity interior in x up to 0 in t}. We point out that, since $\rho_0 \in L^1 ({B_R})$, then $M_{\rho_0}$ is point-wise continuous, and therefore uniformly continuous over compact sets. Estimate \eqref{eq:local Calpha regularity} follows from \Cref{thm:regularity dibenedetto}. \end{proof} Let us now state a comparison principle, under simplifying hypothesis. \begin{theorem}[Comparison principle of viscosity solutions if $\rho$ is bounded below] \label{thm:comparison principle m} Let $\underline M$ and $\overline M$ be uniformly continuous sub and supersolution. Assume, furthermore, that there exists $C_0 > 0$ such that \begin{equation*} \underline M(t,v_2) - \underline M(t, v_1) \ge C_0 (v_2 - v_1), \qquad \forall v_2 \ge v_1. \end{equation*} Then, the solutions are ordered, i.e. $\underline M \le \overline M$. \end{theorem} \begin{proof} Assume, towards a contradiction that \begin{equation*} \sup_{t > 0, v \in [0,R_v]} (\underline M(t,v) - \overline M(t,v)) = \sigma > 0. \end{equation*} Since both functions are continuous, there exists $(t_1,v_1)$ such that $\underline M(t_1,v_1) - \overline M(t_1, v_1) > \frac {3\sigma}4$. Clearly, $t_1 , v_1 > 0$. Let us take $\lambda$ positive such that \begin{equation*} \lambda < \frac{\sigma}{16(t_1 + 1)}. \end{equation*} With this choice, we have that \begin{equation*} 2 \lambda t_1 < \frac{\sigma}{4}. \end{equation*} For this $\varepsilon$ and $\lambda$ fixed, let us construct the variable-doubling function defined as \begin{equation*} \Phi(t,s,v,\xi) = \underline M(t,v) - \overline M(s,\xi) - \frac{|v-\xi|^2 + |s-t|^2}{\varepsilon^2} - \lambda (s + t). \end{equation*} This function is continuous and bounded above, so it achieves a maximum at some point. 
Let us name this maximum depending on $\varepsilon$, but not on $\lambda$ by \begin{equation*} \Phi (t_\varepsilon, s_\varepsilon, v_\varepsilon, \xi_\varepsilon) \ge \Phi (t_1,t_1,v_1,v_1) > \frac{3\sigma}{4} - 2 \lambda t_1 > \frac{\sigma}{2}. \end{equation*} In particular, it holds that \begin{equation} \label{eq:estimate m - M at max of Phi ee} \underline M(t_\varepsilon, v_\varepsilon) - \overline M(s_\varepsilon, \xi_\varepsilon) \ge \Phi(t_\varepsilon,s_\varepsilon, v_\varepsilon, \xi_\varepsilon) > \frac{\sigma}{2} . \end{equation} \noindent \textbf{Step 1. Variables collapse.} As $\Phi(t_\varepsilon, s_\varepsilon, v_\varepsilon, \xi_\varepsilon) \ge \Phi(0,0,0,0)$, we have \begin{equation*} \frac{|v_\varepsilon-\xi_\varepsilon|^2 + |s_\varepsilon-t_\varepsilon|^2}{\varepsilon^2} + \lambda (s_\varepsilon + t_\varepsilon) \le \underline M(t_\varepsilon, v_\varepsilon) - \overline M(s_\varepsilon, \xi_\varepsilon) - \Phi(0,0,0,0) \le C. \end{equation*} Therefore, we obtain \begin{equation*} |v_\varepsilon - \xi_\varepsilon| + |t_\varepsilon - s_\varepsilon| \le C \varepsilon. \end{equation*} This implies that, as $\varepsilon \to 0$, the variable doubling collapses to a single point. \\ We can improve the first estimate using that $\Phi(t_\varepsilon, s_\varepsilon, v_\varepsilon, \xi_\varepsilon) \ge \Phi(t_\varepsilon, t_\varepsilon, v_\varepsilon, v_\varepsilon)$. This gives us \begin{align*} \frac{|v_\varepsilon-\xi_\varepsilon|^2 + |s_\varepsilon-t_\varepsilon|^2}{\varepsilon^2} &\le \overline M(t_\varepsilon, v_\varepsilon) - \overline M(s_\varepsilon, \xi_\varepsilon) + \lambda(t_\varepsilon - s_\varepsilon) \\ &\le \overline M(t_\varepsilon, v_\varepsilon) - \overline M(s_\varepsilon, \xi_\varepsilon) + C \varepsilon. \end{align*} Since $\overline M$ is uniformly continuous, we have that \begin{equation} \label{eq:comparison principle mass collapse rate} \lim_{\varepsilon \to 0}\frac{|v_\varepsilon-\xi_\varepsilon|^2 + |s_\varepsilon-t_\varepsilon|^2}{\varepsilon^2} = 0. \end{equation} \noindent \textbf{Step 2. For $\varepsilon > 0$ sufficiently small, the points are interior.} We show that there exists $\mu$ such that $t_\varepsilon, s_\varepsilon \ge \mu > 0$ for $\varepsilon>0$ small enough. For this, since $\underline M$ and $\overline M$ are uniformly continuous we can estimate as \begin{align*} \frac{\sigma}{2} &< \underline M(t_\varepsilon, v_\varepsilon) - \overline M(s_\varepsilon, \xi_\varepsilon) \\ &=\underline M(t_\varepsilon, v_\varepsilon) - \underline M(0, v_\varepsilon) + \underline M(0, v_\varepsilon) - \overline M(0,v_\varepsilon) + \overline M(0,v_\varepsilon) - \overline M(t_\varepsilon, v_\varepsilon) + \overline M(t_\varepsilon, v_\varepsilon) - \overline M(s_\varepsilon, \xi_\varepsilon) \\ &\le \omega(t_\varepsilon) + \omega( |v_\varepsilon - \xi_\varepsilon| + |t_\varepsilon -s_\varepsilon| ), \end{align*} where $\omega \ge 0$ is a modulus of continuity (the minimum of the moduli of continuity of $\underline M$ and $\overline M$), i.e.\ a continuous non-decreasing function such that $\lim_{r \to 0} \omega(r) = 0$. For $\varepsilon > 0$ such that \begin{equation*} \omega( |v_\varepsilon - \xi_\varepsilon| + |t_\varepsilon -s_\varepsilon| ) < \frac{\sigma}{4}, \end{equation*} we have $\omega(t_\varepsilon) > \frac{\sigma}{4}$. The reasoning is analogous for $s_\varepsilon$. 
For $v_\varepsilon$ we can proceed much in the same manner \begin{align*} \frac{\sigma}{2} & < \underline M(t_\varepsilon, v_\varepsilon) - \overline M(s_\varepsilon, \xi_\varepsilon) \\ &=\underline M(t_\varepsilon, v_\varepsilon) - \underline M(t_\varepsilon, 0) + \underline M(t_\varepsilon,0) - \overline M(t_\varepsilon, v_\varepsilon) + \overline M(t_\varepsilon, v_\varepsilon) - \overline M(s_\varepsilon,\xi_\varepsilon) \\ &\le \omega(v_\varepsilon) + \omega( |v_\varepsilon - \xi_\varepsilon| + |t_\varepsilon -s_\varepsilon| ). \end{align*} And analogously for $\xi_\varepsilon$. A similar argument holds for $R_v - v_\varepsilon$ and $R_v - \xi_\varepsilon$. \noindent \textbf{Step 3. Choosing viscosity test functions.} Unlike in the case of first order equations, there is no simple choice of $\varphi$ that works in the viscosity formula. We have to take a detailed look at the jet sets. Due to \cite[Theorem 3.2]{crandall+ishii+lions1992users-guide-viscosity} applied to $u_1 = \underline M$, $u_2 = - \overline M$ and \begin{equation*} \varphi_\varepsilon (t,s,v,\xi) = \frac{|v-\xi|^2 + |s-t|^2}{\varepsilon^2} + \lambda (s + t) \end{equation*} for any $\delta > 0$, there exists $\underline X$ and $\overline X$ in the corresponding jets such that \begin{equation*} \left(\frac{\partial \varphi_\varepsilon}{\partial (t,v)} (z_\varepsilon) , \underline X \right) \in \mathcal J^{2,+} \underline M (t_\varepsilon, v_\varepsilon), \qquad \left( - \frac{\partial \varphi_\varepsilon}{\partial (s,\xi)} (z_\varepsilon) , - \overline X \right) \in \mathcal J^{2,-} \overline M (s_\varepsilon, \xi_\varepsilon), \end{equation*} where $z_\varepsilon = (t_\varepsilon, s_\varepsilon, v_\varepsilon, \xi_\varepsilon)$ and we have \begin{equation*} -( \delta^{-1} + \| A \| ) I \le \begin{pmatrix} \underline X \\ & -\overline X \end{pmatrix} \le A + \delta A^2 \end{equation*} where $A = D^2 \varphi_\varepsilon (z_\varepsilon)$. In particular, this implies that the term of second spatial derivatives satisfies $\underline X_{22} \le \overline X_{22}$ (see \cite{crandall+ishii+lions1992users-guide-viscosity}). Notice that \begin{equation*} \frac{\partial \varphi}{\partial t} (z_\varepsilon) = \frac{2(t_\varepsilon - s_\varepsilon)}{\varepsilon^2} + \lambda, \qquad - \frac{\partial \varphi}{\partial s} (z_\varepsilon) = \frac{2(t_\varepsilon - s_\varepsilon)}{\varepsilon^2} - \lambda \end{equation*} and \begin{equation*} \frac{\partial \varphi}{\partial v} (z_\varepsilon) = \frac{2(v_\varepsilon - \xi_\varepsilon)}{\varepsilon^2} = - \frac{\partial \varphi}{\partial \xi} (z_\varepsilon). \end{equation*} Since $\underline M (t_\varepsilon, v) - \Phi(t_\varepsilon, s_\varepsilon, v, \xi_\varepsilon)$ as a maximum at $ v = \xi_\varepsilon$ we have that, for $v > v_\varepsilon$ \begin{equation*} \frac{|v - \xi_\varepsilon|^2 - |v_\varepsilon - \xi_\varepsilon|^2 }{\varepsilon^2} \ge \underline M (t_\varepsilon, v) - \underline M(t_\varepsilon, v_\varepsilon) \ge C_0 (v - v_\varepsilon). \end{equation*} Therefore, we conclude \begin{equation*} \frac{2(v_\varepsilon - \xi_\varepsilon)}{\varepsilon^2} \ge C_0. 
\end{equation*} Plugging everything back into the notion of viscosity sub and super-solution \begin{gather*} \frac{2(t_\varepsilon - s_\varepsilon)}{\varepsilon^2} + \lambda + H \left( v_\varepsilon, \frac{2(v_\varepsilon - \xi_\varepsilon)}{\varepsilon^2} , \underline X \right)\le 0 \\ \frac{2(t_\varepsilon - s_\varepsilon)}{\varepsilon^2} - \lambda + H \left( \xi_\varepsilon, \frac{2(v_\varepsilon - \xi_\varepsilon)}{\varepsilon^2} , \overline X \right)\ge 0 \end{gather*} where \begin{equation*} H(v,p,X) = -(n \omega_n^{\frac 1n} v^{\frac{n-1}{n}})^2 \left\{ m p^{m-1} X_{22} + p \frac{\partial V}{\partial v}(v) \right\} . \end{equation*} \noindent \textbf{Step 4. A contradiction.} Substracting these two equations \begin{align*} 0 < 2 \lambda &\le H \left( \xi_\varepsilon, \frac{2(v_\varepsilon - \xi_\varepsilon)}{\varepsilon^2} , \overline X \right) - H \left( v_\varepsilon, \frac{2(v_\varepsilon - \xi_\varepsilon)}{\varepsilon^2} , \underline X \right) \\ &= H \left( \xi_\varepsilon, \frac{2(v_\varepsilon - \xi_\varepsilon)}{\varepsilon^2} , \overline X \right) - H \left( \xi_\varepsilon, \frac{2(v_\varepsilon - \xi_\varepsilon)}{\varepsilon^2} , \underline X \right) \\ &\qquad + H \left( \xi_\varepsilon, \frac{2(v_\varepsilon - \xi_\varepsilon)}{\varepsilon^2} , \underline X \right) - H \left( v_\varepsilon, \frac{2(v_\varepsilon - \xi_\varepsilon)}{\varepsilon^2} , \underline X \right) \\ &\le H \left( \xi_\varepsilon, \frac{2(v_\varepsilon - \xi_\varepsilon)}{\varepsilon^2} , \underline X \right) - H \left( v_\varepsilon, \frac{2(v_\varepsilon - \xi_\varepsilon)}{\varepsilon^2} , \underline X \right)\\ &=(n \omega_n^{\frac 1n})^2 \frac{2(v_\varepsilon - \xi_\varepsilon)}{\varepsilon^2} \left(v_\varepsilon^{2\frac{n-1}{n}}\frac{\partial V}{\partial v}(v_\varepsilon) - \xi_\varepsilon^{2\frac{n-1}{n}}\frac{\partial V}{\partial v}(\xi_\varepsilon) \right) \to 0, \end{align*} since $v^{2\frac{n-1}{n}}\frac{\partial V}{\partial v}(v) = r^{n-1} \frac{\partial V}{\partial r}$ is Lipschitz continuous and \eqref{eq:comparison principle mass collapse rate}. \end{proof} \section{Existence of concentrating solutions} \label{sec:concentration} When we now take $F: (0,\infty) \to (0,\infty)$ \begin{equation} \label{eq:initial data with concentration} \rho_F (x) = \Big( \tfrac{1-m}{m} F( V (x) ) \Big )^{- \frac 1 {1-m}} , \qquad F' \le 1, \qquad F(0) = 0 \qquad F(s) > 0 \text{ for all } s > 0, \end{equation} we have that $\rho_F \ge \rho_V$, so the corresponding solutions with $\rho(0,x) = \rho_F(x)$ satisfies \[ \rho (t,x) \ge \rho_V (x) , \qquad \forall t \ge 0 , x \in {B_R}. \] We will prove that with this initial data we have $ U = \frac{\partial M}{\partial t} \ge 0 $ by showing it satisfies a PDE with a comparison principle and $U(0,\cdot) \ge 0$. First, we prove an auxiliary result for the regularised problem. \begin{theorem}[Solutions of \eqref{eq:main regularised bounded} with increasing mass] \label{thm:solutions with increasing mass} Let $h \in \mathbb R$, $F$ be such that $F' \le 1$, $F \ge 0$, $F(0) = 0, $ % \begin{equation} \label{eq:increasing solutions Phi smooth} u_0 = \Theta^{-1} \Big ( h - F (V(x)) \Big ), \end{equation} $u$ be the solution of \eqref{eq:main regularised bounded} and $M$ be its mass. 
Then, we have that \begin{equation} \label{eq:mass increasing in time} M(t + h, x) \ge M(t, x) , \qquad \forall h \ge 0 \end{equation} and \begin{equation} \label{eq:lower bound on dv M} M(t, v_2) - M(t, v_1) \ge \int_{\widetilde B_{v_2} \setminus \widetilde B_{v_1}} \Theta^{-1} (h - V(x)) \diff x , \qquad \forall v_1 \le v_2. \end{equation} \end{theorem} \begin{proof} Notice also that $F(s) \le s$ so $ u_0 (x) \ge \Theta^{-1} ( h - V (x) ), $ and this is a stationary solution. Hence this inequality holds for $u(t)$ as well, due to \Cref{thm:ball comparison Phi smooth}. Thus \eqref{eq:lower bound on dv M} holds. Since $u \in C^1 ((0,T) ; C(\overline B_R))$, we can consider \begin{equation*} U(t,v) = \int_{\widetilde B_v} \frac{\partial u}{\partial t} (t,x ) \diff x = \frac{\partial M}{\partial t}. \end{equation*} Due to \eqref{eq:bounded regularised regularity of $u$}, $ U \in C((0,T) \times [0,R_v] ) . $ Since we have $ M(t,0) = 0, M(t,R_v) = 1 $ the boundary conditions are $ U(t,0) = U(t, R_v) = 0. $ Using the equation for the mass we have that \begin{equation*} U(0,r) = (n \omega_n v^{ \frac{n-1}n })^2 {u_0} \frac{\partial }{\partial v} \left( \Theta(u_0) + V \right) \ge 0, \end{equation*} since $\partial V / \partial v \ge 0$, by hypothesis. Taking formally a time derivative in the equation of the mass, we obtain that \begin{align*} \frac{\partial U}{\partial t} &= (n \omega_n v^{ \frac{n-1}n })^2 \left( \frac{\partial}{\partial v} \left( \Phi'(u) \frac{\partial M}{\partial v \partial t} \right) + \frac{\partial M}{\partial t \partial v } \frac{\partial V}{\partial v} \right )= (n \omega_n v^{ \frac{n-1}n })^2 \left( \frac{\partial}{\partial v} \left( \Phi'(u) \frac{\partial U}{\partial v} \right) + \frac{\partial U}{ \partial v } \frac{\partial V}{\partial v} \right ) \\ &= A(v) \frac{\partial^2 U}{\partial v^2} + B(v) \frac{\partial U}{\partial v} \end{align*} where $A(v) = (n \omega_n v^{ \frac{n-1}n })^2 \Phi'(u) \ge 0$ and $B(v) = (n \omega_n v^{ \frac{n-1}n })^2 ( \frac{\partial}{\partial v} [\Phi'(u)] + \frac{\partial V}{\partial v}).$ This can be justified in the weak local sense. For $\varphi \in C_c^\infty ((0,T) \times (0,R_v))$ we can write \begin{equation*} -\int_0^T\int_0^{R_v} M \frac{\partial \varphi}{\partial t} = - \int_0^T\int_0^{R_v} \Phi \left( u \right) \frac{\partial }{\partial v} \left( (n \omega_n v^{ \frac{n-1}n })^2 \varphi \right) + \int_0^T\int_0^{R_v} (n \omega_n v^{ \frac{n-1}n })^2 \frac{\partial M}{ \partial v } \frac{\partial V}{\partial v} \varphi , \end{equation*} we can simply take $\varphi = \frac{\partial \psi}{\partial t}$ and integrating by parts in time to recover \begin{equation*} \int_0^T\int_0^{R_v} \frac{\partial M}{\partial t} \frac{\partial \psi}{\partial t} = \int_0^T\int_0^{R_v} \frac{\partial}{\partial t} \left( \Phi \left( u \right) \right) \frac{\partial }{\partial v} \left( (n \omega_n v^{ \frac{n-1}n })^2 \psi \right) - \int_0^T\int_0^{R_v} (n \omega_n v^{ \frac{n-1}n })^2 \frac{\partial M}{\partial t \partial v } \frac{\partial V}{\partial v} \psi . \end{equation*} Since $u$ is $C^1$ then $ \frac{\partial}{\partial t} \left( \Phi \left( u \right) \right) = \Phi'(u) \frac{\partial U}{\partial t} $ is a continuous function. 
Operating with the derivatives of $\psi$, we recover that \begin{equation*} \int_0^T \int_0^{R_v} U \left\{ \frac{\partial \psi}{\partial t} +\frac{\partial}{\partial v} \left(A(t,v) \frac{\partial \psi}{\partial v} \right) + \frac{\partial}{\partial v} \left(B(v) \psi \right) \right\} = 0, \qquad \forall \psi \in C^\infty_c((0,T) \times (0,R_v)) . \end{equation*} We now show that $U$ is a solution in the weak sense, incorporating the boundary conditions. Since $U$ is continuous and $U(t,0) = U(t,R_v) = 0$, for any $\psi$ suitably regular we can use an approximating sequence $\psi_k \in C^\infty_c((0,T) \times (0,R_v))$ to show that \begin{equation*} \int_0^{R_v} U(T) \psi (T) \diff v + \int_0^T \int_0^{R_v} U \left\{ \frac{\partial \psi}{\partial t} +\frac{\partial}{\partial v} \left(A(t,v) \frac{\partial \psi}{\partial v} \right) + \frac{\partial}{\partial v} \left(B(v) \psi \right) \right\} = \int_0^{R_v} U(0) \psi (0) \diff v. \end{equation*} Fix $\Psi_0$ smooth and let $\Psi$ the solution of \begin{equation} \label{eq:bounded smooth dual U} \begin{dcases} \frac{\partial \Psi}{\partial t} = \frac{\partial}{\partial v} \left(A(T-t,v) \frac{\partial \Psi}{\partial v} \right) + \frac{\partial}{\partial v} \left(B(v) \Psi \right) & \text{in }(0,T) \times (0,R_v) \\ \Psi(t,0) = \Psi(0,R_v) = 0, \\ \Psi(0,v) = \Psi_0. \end{dcases} \end{equation} If $\Psi$ is a classical interior solution, then taking as a test function $\psi(t,x) = \Psi (T-t,x)$ we have that \begin{equation*} \int_0^{R_v} U(T) \Psi_0 \diff v = \int_0^{R_v} U(0) \Psi (T) \diff v. \end{equation*} Notice that $A(0) = 0$. Substituting $A$ by the uniformly elliptic diffusion $A(T-t,v) + \delta$, $\delta > 0$, and letting $\delta \searrow 0$, for any $\Psi_0 \ge 0$, we can construct a non-negative solution of \eqref{eq:bounded smooth dual U}. Therefore, since $U(0) \ge 0$ we have that $ U \ge 0 $ in $(0,T) \times (0,R_v)$, and the proof is complete. \end{proof} Before we continue, we point out that $M_{\rho_F}$, lies between $M_{\rho_V}$ and one of its upward translations \begin{remark} Notice that $F(0) = 0$ and $F' \le 1$ then $F(s) \le s$. Hence $\rho_F \ge \rho_V$. Integrating forward from $0$ we have \begin{equation*} M_{\rho_F} (v) = \int_{\widetilde B_v} \rho_F \diff x \ge \int_{\widetilde B_v} \rho_V \diff x = M_{\rho_V} (v). \end{equation*} On the other hand, integrating backwards from $R_v$ we have \begin{equation} \label{eq:bounded rho0 with concentration comparison masses} \begin{aligned} M_{\rho_F} (v) &= M_{\rho_F} (R_v) - \int_{B_R \setminus \widetilde B_v} \rho_F \diff x \le M_{\rho_F} (R_v) - \int_{B_R \setminus \widetilde B_v} \rho_V \diff x \\ &= \Big( M_{\rho_F} (R_v) - a_{V,R} \Big) + M_{\rho_V} (v). \end{aligned} \end{equation} \end{remark} Now we move to considering suitable initial data for \eqref{eq:main bounded domain}. We make the following construction \begin{lemma} \label{lem:rho_D} Let $b_1 \in (0,V(R))$. Assume that $a_{V,R} < a_{V,0}$. There exists $\overline b_{2}(b_1) < V(R)$ such that, for all $b_{2} \in (b_1, \overline b_{2}(b_1))$ there exists $D(b_1,b_2) < b_1$ such that the $\rho_D$ given by \begin{equation} \label{eq:bounded rho0 with concentration} F_D(s) = \begin{cases} s & \text{if } s \in [0,b_1 ], \\ % % D & \text{if } s \in [b_1 , b_2 ] , \\ % % D + s - b_2 & \text{if } s \ge b_2. % \end{cases} \qquad \rho_D (x) = \Big( \tfrac{1-m}{m} F_D( V (x) ) \Big )^{- \frac 1 {1-m}} \end{equation} satisfies $ \int_{B_R} \rho_D = a_{0,R}. 
$ \end{lemma} The function $F_D$ can be taken as the limit of functions $F_\varepsilon \in C^1$ in the assumptions of \Cref{thm:solutions with increasing mass}. Take $F_\varepsilon(0) = 0$ and \begin{equation*} F_\varepsilon'(s) = \begin{cases} 1 & \text{if } s \in [0,b_1 ], \\ 1 - \frac{1+\alpha}{\varepsilon / 4} (s-b_1) & \text{if } s \in [b_1, b_1 + \tfrac \varepsilon 4] \\ -\alpha & \text{if } s \in [b_1 + \tfrac \varepsilon 4 ,b_1 + \tfrac \varepsilon 2 ] \\ -\alpha + \frac{\alpha}{\varepsilon / 2} (s - b_1 - \tfrac \varepsilon 2) & \text{if } s \in [b_1 + \tfrac \varepsilon 2 ,b_1 + \varepsilon ] \\ 0 & \text{if } s \in [b_1 + \varepsilon , b_2 - \varepsilon ] , \\ \frac 1 \varepsilon (s-b_2+\varepsilon) & \text{if } s \in [b_2 - \varepsilon , b_2 ] , \\ 1 & \text{if } s \ge b_2. \end{cases} \end{equation*} where given $0 < b_1 < b_2 < V(R)$ and $\varepsilon < \frac{b_2-b_1}{4}$ and $0 < D \le b_1$, we can always select \begin{equation*} \alpha(b_1,b_2,D,\varepsilon) > 0 \text{ such that } \qquad F_\varepsilon(b_1 + \varepsilon) = \int_0^{b_1 + \varepsilon}F_\varepsilon'(s) \diff s = D. \end{equation*} Notice that $F_\varepsilon(s) > 0$ for $s > 0$ and $F_\varepsilon' \le 1$ and $F_\varepsilon \in C^1$. This form is rather elaborate, so we pick the limit as $\varepsilon \searrow 0$. Notice that $ \inf F_\varepsilon' \to - \infty$ as $\varepsilon \searrow 0$. \begin{proof}[Proof of \Cref{lem:rho_D}] We start by pointing out that $\int_{B_R} \rho_F$ is continuous in all parameters. Taking $D = b_1$ we have that as $b_2 \searrow b_1$ we have that $ \int_{B_R} \rho_D \searrow \int_{B_R} \rho_V = a_{V,R} < a_{0,R}. $ Hence, when $D = b_1$, there exists $\overline b_2(b_1)$ such that for $b_2 < \overline b_2$, and $D = b_1$, $\int_{B_R} \rho_D < a_{0,R}$. Fixed $b_1$ and $b_2$, as $D \searrow 0$ we have $\int_{B_R} \rho_D \nearrow \infty$. So there exists a choice of $D$ such that $\int_{B_R} \rho_F = a_{0,R}$. \end{proof} Notice that $\rho_D \in L^{1+\varepsilon}(B_R)$ due to the assumption \eqref{eq:rhoV in L1ee loc}. We sketch the profile in \Cref{fig:rhoD}. \begin{figure}[h] \centering \includegraphics[width=.5\textwidth]{figures/rhoD.pdf} \caption{Example of $\rho_D$ for some parameters $b_1$ and $b_2$. In this example $a_{0,R} = 1$.} \label{fig:rhoD} \end{figure} \begin{theorem}[Solutions of \eqref{eq:main bounded domain} with increasing mass] \label{thm:R finite concentration mass} Under the hypothesis of \Cref{thm:existence bounded domain regular data}, let $\rho_D$ be given by \eqref{eq:bounded rho0 with concentration}. Then, the mass $M$ of $\rho(t) = S(t) \rho_D$ constructed in \Cref{thm:existence bounded L1} is such that \begin{equation*} M(t, v) \nearrow (a_{0,R} - a_{V,R}) + M_{\rho_V} (v) \qquad \text{ uniformly in } [\varepsilon,R_v]. \end{equation*} In particular, $\rho(t, \cdot) \rightharpoonup (a_{0,R} - a_{V,R}) \delta_0 + \rho_V$ weak-$\star$ in the sense of measures. \end{theorem} \begin{proof}[Proof of \Cref{thm:R finite concentration mass}] \textbf{Step 1. Properties by approximation.} Since $\rho_D \in L^{1+\varepsilon}$, looking at how we constructed $S(t) \rho_0$ in \Cref{thm:existence bounded domain regular data,thm:existence bounded L1}, it can approximated by $S_k (t) \rho_0$ where $S_k$ is the semigroup of \eqref{eq:main regularised bounded} with $\Phi_k$ given by \eqref{eq:Phik}. 
Notice that, the associated $\Theta_k$ given by \eqref{eq:Theta} is \begin{equation*} \Theta_k (s) = \tfrac{m}{1-m} \left( 1 - s^{ {m-1} } \right), \quad \text{for }s \in [k^{-1}, k] . \end{equation*} Hence, we recover \begin{equation*} \Theta_k^{-1} (s) = ( 1 - \tfrac{1-m}{m} s )^{-\frac 1 {1-m}}, \qquad \text{for } s \in [\Theta_k(k^{-1}), \Theta_k (k)]. \end{equation*} Taking $h = \frac{m}{1-m}$ in \eqref{eq:increasing solutions Phi smooth} we have initial data $u_{0,k}$ such that \begin{equation*} u_{0,k} = (\tfrac{1}{1-m} F(V))^{-\frac 1 {1-m}} , \qquad \text{whenever }F(V(x)) \in [\Theta_k (k^{-1}), \Theta_k (k)], \end{equation*} and $M_{u_k}$ non-decreasing in $t$. This corresponds to an interval of the form $v \in [\varepsilon_k, R_v - \delta_k]$. Let us denote $u_k = S_k (t) u_{0,k}$. Due to the $L^1$ contraction we have that \begin{equation*} \int_{B_R} |u_k (t) - S_k (t) \rho_D | \diff x \le \int_{{B_R}} |u_{0,k} - \rho_D| \diff x. \end{equation*} Hence, by \Cref{thm:existence bounded L1} we infer that $ S_k (t) u_{0,k} \to S(t) \rho_D $ in $ L^1 ({B_R}) $ for a.e. $t > 0. $ This guarantees the a.e. convergence of the masses. Hence, the mass function $M$, which is already a viscosity solution of \eqref{eq:mass} and $C^\alpha$ regular, also inherits the point-wise estimate from $M_{u_k}$ in \eqref{eq:lower bound on dv M}. $M$ is also non-decreasing in $t$ and $v$. Moreover, due to \eqref{eq:bounded rho0 with concentration comparison masses} and \Cref{thm:comparison principle for mass} due to \Cref{eq:local Calpha regularity}, we conclude that \begin{equation} \label{eq:R finite concentration mass aux 1} M_{\rho_D} (v) \le M(t,v) \le (a_{0,R} - a_{V,R}) + M_{\rho_V} (v). \end{equation} \noindent \textbf{Step 2. Uniform convergence of $M(t,\cdot)$ as $t \to +\infty$.} Since $M$ is point-wise non-decreasing in $t$ and bounded above by $a_{0,R}$, we know there exists a function $M_\infty$ such that \begin{equation} \label{eq:R finite concentration mass aux 2} M(t,x) \nearrow M_\infty (x) , \qquad t \nearrow \infty. \end{equation} By the estimate \eqref{eq:local Calpha regularity} we know that $M_\infty$ belongs to $C^\alpha_{loc} ((0,R_v))$ and hence continuous in interior points. On the other hand, \eqref{eq:R finite concentration mass aux 1} implies \begin{equation*} M_{\rho_D} (v) \le M_\infty(v) \le (a_{0,R} - a_{V,R}) + M_{\rho_V} (v). \end{equation*} Hence, by the sandwich theorem, $M_\infty (R_v) = a_{0,R}$ and it is continuous at $R_v$ (due to the explicit formulas we can actually show rates). Since $M_\infty$ is non-decreasing and $M_\infty \ge 0$, due to \eqref{eq:R finite concentration mass aux 2}, there exists a limit \begin{equation*} \lim_{v \to 0} M_\infty(v) \le a_{0,R} - a_{V,R}. \end{equation*} Defining $M_\infty (0) = \lim_{v \to 0} M_\infty(v)$, the function is obviously continuous in $[0,R_v]$. Hence, applying Dini's theorem, we know that \begin{equation*} \sup_{v \in [\varepsilon, R_v]} | M (t, v) - M_\infty (v) | \to 0. \end{equation*} Due to \eqref{eq:lower bound on dv M} and our choice of $h$, we have that \begin{equation} \label{eq:lower bound on dv M infty} M_\infty(v_2) - M_\infty(v_1) \ge ( v_2 - v_1 ) \inf_{\widetilde B_{v_2} \setminus \widetilde B_{v_1}} \rho_V , \qquad \forall v_1 \le v_2. \end{equation} \noindent \textbf{Step 3. 
Characterisation of $M_\infty$ as a viscosity solution.} Let us check that $M_\infty$ is a viscosity solution of \begin{equation} \label{eq:problem M infty} \frac{\partial^2 M_\infty}{\partial v^2} + \frac 1 m \left( \frac{\partial M_\infty}{\partial v} \right)^{2-m} \frac{\partial V}{\partial v} = 0. \end{equation} Due to our lower bound \eqref{eq:lower bound on dv M infty}, $\frac{\partial M_\infty}{\partial v}$ is bounded below. We define the sequence of masses $M_n:[0,1] \times [0,R_v] \to \mathbb R$ given by $ M_n (t,v) = M(t - n, v). $ These are viscosity solutions for \eqref{eq:mass} due to \Cref{thm:comparison principle for mass}. We also know that \begin{equation*} \sup_{(t,v) \in [0,1] \times [\varepsilon, R_v]} |M_n(t, v) - M_\infty(v) | \to 0. \end{equation*} By standard arguments of stability of viscosity solutions, $M_\infty$ is also a solution of \eqref{eq:mass}. Since it does not depend on $t$, we can select spatial viscosity test functions, and hence it is a solution of \eqref{eq:problem M infty}. Since we have removed the time dependency, we dropped also the spatial weight $(n \omega_n v^{\frac{n-1}n})^2$. \noindent \textbf{Step 4. $M_\infty$ is $C^2 ((0, R_v))$.} \textbf{Step 4a. Lipschitz regularity} Since $M_\infty$ is non-decreasing, at the point of contact of a viscosity test function touching from below, we deduce \begin{equation*} - \frac{\partial^2 \varphi}{\partial v^2} (v_0) \ge \frac 1 m \left( \frac{\partial \varphi}{\partial v} (v_0) \right)^{2-m} \frac{\partial V}{\partial v} (v_0) \ge 0. \end{equation*} Hence, $M_\infty$ is a viscosity super-solution of $- \Delta M = 0$. Due to \cite{Ishii1995}, we have that $M$ is also a distributional super-solution of $-\Delta M = 0$. Distributional super-solutions are concave. Since $M_\infty$ is concave, it is $W^{1,\infty}([\varepsilon,R_v - \varepsilon])$ of all $\varepsilon > 0$. \textbf{Step 4b. Higher regularity by bootstrap.} Now we can treat the right-hand side as a datum $$f = \frac 1 m \left( \frac{\partial M_\infty}{\partial v} (v_0) \right)^{2-m}\frac{\partial V}{\partial v} \in L^\infty (\varepsilon, R_v-\varepsilon).$$ Applying the regularisation results in \cite{Caffarelli1995} we recover that $M_\infty \in C^{1,\alpha}(2 \varepsilon, R_v - 2\varepsilon)$. Since $V \in W^{2,\infty} = C^{0,1}$, then $f \in C^{0,\beta}(2 \varepsilon, R_v - 2\varepsilon)$ for some $\beta > 0$, so $M_\infty \in C^{2,\beta}(4 \varepsilon, R_v - 4\varepsilon)$. \noindent \textbf{Step 5. Explicit formula of $M_\infty$.} Since $M_\infty \in C^{2} ((0,R_v)) \cap C ([0,R_v])$, we can integrate \eqref{eq:problem M infty} to show that \begin{equation*} M_\infty (v) = M_\infty(0) + M_{\rho_{V+h}} \end{equation*} for some $h \ge 0$. Since $M_\infty(R_v) - M_\infty (0) = a_{0,R} - M_\infty (0) \le a_V$ then, for some $h \ge 0$ we have that $M_\infty(R_v) - M_\infty (0) = a_{V + h,R}$. By the comparison principle, which holds due to \eqref{eq:lower bound on dv M infty}, we conclude the equality $M_\infty (v) - M_\infty (0) = M_{ \rho_{V + h}} (v)$ for $v \in [0,R_v]$. Due to \eqref{eq:lower bound on dv M infty}, the singularity at $0$ is incompatible with $h > 0$. Thus $h = 0$. \end{proof} \begin{remark} Notice that the aggregation does not occur in finite time, since we assume \eqref{eq:rhoV in L1ee loc}. \end{remark} \begin{proof}[Proof of \Cref{thm:bounded concentrating intro}] To compute the $\liminf$, it suffices to pick a $\rho_D$ % such that $M_{\rho_0} \ge M_{\rho_D}$. 
This can be done by selecting $b_1$ sufficiently close to $V(R)$. % If we assume \eqref{eq:rho0 below limit}, by the comparison of masses we have \begin{equation*} \int_{B_r} \rho (t,x) \diff x \le (a_{0,R} - a_{V,R}) + M_{\rho_V}(v), \qquad \forall r \in [0,R]. \end{equation*} Then, the $\liminf$ and $\limsup$ coincide with this upper bound, i.e. \begin{equation*} \lim_{t \to +\infty} \int_{B_r} \rho (t,x) \diff x = (a_{0,R} - a_{V,R}) + M_{\rho_V}(v), \qquad \forall r \in [0,R]. \end{equation*} To check the convergence in Wasserstein distance, we must write the convergence of the masses in $L^1$ in radial coordinates. Let $\mu_{\infty,R} = (a_{0,R} - a_{V,R}) \delta_0 + \rho_V $, then we have that \begin{equation*} d_{1} (\rho(t) , \mu_{\infty,R}) = n \omega_n \int_{0}^R\left| \int_{B_r} \rho(t,x) \diff x - \mu_{\infty,R} (B_r) \right| r^{n-1} \diff r. \end{equation*} due to the fact that the optimal transport between radial densities is radial and the characterisation of $d_1$ in one dimension (see \cite{Villani03}). Since we have shown in the proof above that $ \int_{B_r} \rho(t) \diff x \le \mu_\infty (B_r) $ \begin{equation*} d_{1} (\rho(t) , \mu_\infty) = n \omega_n \int_{0}^R\left(\mu_{\infty,R} (B_r) - \int_{B_r} \rho(t,x) \diff x \right) r^{n-1} \diff r. \end{equation*} Due to the monotone convergence $\int_{B_r} \rho(t,x) \nearrow \mu_{\infty,R} (B_r)$ for $r \in (0,R]$, the right-hand goes to $0$ as $t \to +\infty$. \end{proof} \section{Minimisation of $\mathcal F_R$} \label{sec:FR} It is very easy to see that the free energy $\mathcal F_R$ is bounded below, in particular \begin{equation} {\mathcal F_R}[\rho] \ge - \tfrac 1 {1-m} |{B_R}|^{\frac 1 {1-m}} \|\rho\|_{L^1({B_R})}^{m}, \end{equation} due to \eqref{eq:L1 controls Lm over compacts} and that $V \ge 0$. Therefore, there exists a minimising sequence. The problem is that the functional setting does not offer sufficient compactness to guarantee its minimiser is in $L^1({B_R})$. However, we can define its extension to the set of measures as \begin{equation*} \widetilde{\mathcal F_R}[\mu] = - \tfrac 1 {1-m} \int_{{B_R}} \mu_{ac}^m + \int_{{B_R}} V d\mu \end{equation*} This is the unique extension of $\mathcal F_R$ to $\mathcal M_+ ({B_R})$ that is lower-semicontinuous in the weak-$\star$ topology (see \cite{Demengel1986} and related results in \cite{Buttazzo1989}). Since we work on a bounded domain, tightness of measures is not a limitation. For convenience, let us define for $\rho \in L^1 ({B_R})$, \begin{equation*} \mathcal E_{m,R} [\rho] = \tfrac{1}{m-1}\int_{B_R} \rho(x)^m \diff x. \end{equation*} Let us denote the set of non-negative measures of fixed total mass $\mathfrak m$ in ${B_R}$ as \begin{equation*} \mathcal P_{\mathfrak{m}} (\overline {B_R}) = \{ \mu \in \mathcal M_+ (\overline {B_R}) : \mu ({B_R}) = \mathfrak{m} \}. \end{equation*} We have the following result \begin{theorem}[Characterisation of the unique minimiser of $\mathcal F_R$] \label{thm:bounded minimising FR} Let us fix $ \mathfrak{m} > 0$, $V \in W^{2,\infty} ({B_R})$, $V (0) = 0$ and $V$ is radially increasing. Then, any sequence $\rho_j$ minimising $\mathcal F_R$ over $\mathcal P_{\mathfrak{m}} (\overline {B_R}) \cap L^1 ({B_R})$ % converges weakly-$\star$ in the sense of measures to \begin{equation*} \mu_{\infty,\mathfrak{m}} = \begin{dcases} \rho_{V+h} & \text{for } h \text{ such that }a_{V+h} = \mathfrak{m}, \\ (\mathfrak{m}-a_{V,R})\delta_0 +\rho_{V} & \text{if }a_{V,R} < \mathfrak{m}. 
\\ \end{dcases} \end{equation*} Furthermore, \begin{equation} \label{eq:bounded mu infinity minimising} \widetilde {\mathcal F_R}[\mu_{\infty,\mathfrak{m}}] = \inf_{\mu \in \mathcal P_{\mathfrak{m}} (\overline {B_R}) } \widetilde {\mathcal F}_R[\mu] = \inf_{\rho \in \mathcal P_{\mathfrak{m}} (\overline {B_R}) \cap L^1 ({B_R})} \mathcal F_R[\rho] . \end{equation} \end{theorem} \begin{remark}[Lieb's trick] \label{rem:Liebs trick} Given a radially decreasing $\rho \ge 0$, $\rho^q \in L^1 ({B_R})$ for some $ q > 0$ (for any $R \le \infty$), using and old trick of Lieb's (see \cite{Lieb1977,Lieb1983}) we get, for $|x| \le R$, \begin{equation*} \int_{B_R} \rho^q \diff x = n \omega_n \int_0^R \rho(r)^q r^{n-1} \diff r \ge n \omega_n \int_0^{|x|} \rho(r)^q r^{n-1} \diff r \ge n \omega_n \rho ({x})^q \int_0^{|x|} r^{n-1} \diff r . \end{equation*} Hence, we deduce the point-wise estimate \begin{equation} \label{eq:Lieb's trick estimate for rhom} \rho(x) \le \left( \frac{\int_{{B_R}} \rho^q }{n \omega_n {|x|}^{n}} \right)^{\frac 1 q}. \end{equation} It is easy to see that \eqref{eq:Lieb's trick estimate for rhom} is not sharp. However, it is useful to prove tightness for sets of probability measures. Similarly, if additionally $V \rho \in L^1 ({B_R})$, and $V \ge 0$ we can estimate \begin{align*} \int_{B_R} V \rho \diff x = n \omega_n \int_0^{|x|} V(r) \rho(r) r^{n-1} \diff r \ge n \omega_n \int_0^{|x|} V(r) \rho(r) r^{n-1} \diff r \ge n \omega_n \rho (x) \int_0^{|x|} V(r) r^{n-1} \diff r, \end{align*} so we recover the point-wise estimate \begin{equation} \label{eq:Lieb's trick estimate for Vrho} \rho (x) \le \frac{\int_{B_R} V \rho }{\int_{B_{|x|}} V}. \end{equation} \end{remark} \begin{proof}[Proof of \Cref{thm:bounded minimising FR}] The second equality in \eqref{eq:bounded mu infinity minimising} is due to the weak-$\star$ density of $L^1_+ ({B_R})$ in the space of non-negative measures, and the construction of $\widetilde{\mathcal F}_R$ (see \cite{Demengel1986}). Let us consider a minimising sequence. Let us show that we can replace it by a radially-decreasing minimising sequence. Let $\rho_j \in L^1_+ ({B_R}) $ with $\| \rho_j \|_{L^1} = \mathfrak{m}$. By standard rearrangement results \begin{equation*} \mathcal E_{m,R} [\rho_j^\star] = \mathcal E_{m,R} [\rho_j]. \end{equation*} Since $V \ge 0$ and radially symmetric and non-decreasing then \begin{equation*} \int_{{B_R}} V(x) \rho_j^\star (x) \diff x \le \int_{{B_R}} V(x) \rho_j (x) \diff x. \end{equation*} Hence, there exists minimising sequence $\rho_j \in L^1 ({B_R})$ that we can assume radially non-increasing. Since $\rho_j \in \mathcal P_{\mathfrak m} (\overline B_R)$, by Prokhorov's theorem, this minimising sequence must have a weak-$\star$ limit in the sense of measures, denoted by $\mu_{\infty, \mathfrak m}$. We use the following upper and lower bounds that follow from \eqref{eq:L1 controls Lm over compacts} \begin{equation*} \int_{B_R} V \rho_j % = % \mathcal F_R[\rho_j] + \tfrac{1}{1-m} \int_{{B_R}} \rho_j^m \le \mathcal F_R[\rho_j] + \tfrac{1}{1-m} |{B_R}|^{1-m} \| \rho_j \|_{L^1}. \end{equation*} Due to \eqref{eq:Lieb's trick estimate for Vrho} we have a uniform bound in $L^\infty ({B_R} \setminus B_\varepsilon)$ for any $\varepsilon > 0$. Thus, there exists $\rho_\infty \in L^1_+ ({B_R}) \cap L^\infty ({B_R} \setminus B_\varepsilon)$ for any $\varepsilon \ge 0$ such that \begin{equation*} \mu_{\infty, \mathfrak m} = \Big(\mathfrak{m} - \| \rho_\infty \|_{L^1 ({B_R})} \Big) \delta_0 + \rho_\infty. 
\end{equation*} Let us now characterise this measure. For $\varphi \in C^\infty_c (\mathbb R^n)$ we take \begin{equation} \psi (x) = \left( \varphi (x) % \int_{B_R} \rho_\infty (y) \diff y % - \int_{{B_R}} \varphi (y) \rho_\infty (y) \diff y \right) \rho_\infty (x) . \end{equation} For $\varphi$ fixed, there is $\varepsilon_0>0$ such that for $\varepsilon < \varepsilon_0$, $\mu_{\infty,\mathfrak{m}} + \varepsilon \psi \in \mathcal P_{\mathfrak{m}} (\mathbb R^n)$ and, hence, \begin{equation*} {\mathcal F_R} [\rho_\infty] = \widetilde {\mathcal F_R} [\mu_{\infty,\mathfrak{m}} ] \le \widetilde {\mathcal F_R} \left [ \mu_{\infty,\mathfrak{m}} + \varepsilon \psi \right]. \end{equation*} Hence, we get the expression \begin{equation*} \mathcal E_{m,R}\left [ \rho_\infty + \varepsilon \psi \right] - \mathcal E_{m,R} [ \rho_\infty ] + \varepsilon \int_{{B_R}} V(x) \psi (x) \diff x \ge 0. \end{equation*} We write \begin{align*} \frac{ \mathcal E_{m,R}\left [ \rho_\infty + \varepsilon \psi \right] - \mathcal E_{m,R} [ \rho_\infty ] } \varepsilon &= \tfrac m {m-1} \int_0^1 \left( \int_{B_R} |\rho(x) + t \varepsilon \psi (x)|^{m-2} (\rho(x) + t \varepsilon \psi(x)) \psi (x) \diff x \right) \diff t . \end{align*} Since we have the estimate \begin{equation*} \left| \int_\Omega |\rho(x) + t \varepsilon \psi (x)|^{m-2} (\rho(x) + t \varepsilon \psi(x)) \psi (x) \diff x \right| \le (\| \rho_\infty \|_{L^m} + \varepsilon_0 \| \psi \|_{L^m})^{m-1} \| \psi \|_{L^m}, \end{equation*} we recover by the Dominated Convergence Theorem \begin{equation*} \lim_{\varepsilon \to 0} \frac{ \mathcal E_{m,R}\left [ \rho_\infty + \varepsilon \psi \right] - \mathcal E_{m,R} [ \rho_\infty ] } \varepsilon = \frac m {m-1} \int_{B_R} \rho_\infty ^{m-1} \psi . \end{equation*} Thus, as $\varepsilon \to 0$ the following inequality holds \begin{equation*} \int_{{B_R}} I[\rho_\infty] \psi \ge 0, \qquad \text{with } I[\rho] {\coloneqq} \frac m {m-1} \rho ^{m-1} + V. \end{equation*} Applying the same reasoning for $-\psi$ (which corresponds to taking $-\varphi$ instead of $\varphi$), we deduce the reversed inequality, and hence the equality to $0$. This means that \begin{align*} 0 &= \int_{{B_R}} I[\rho_\infty] (x) \varphi (x) \rho_\infty(x) % \left( \int_{B_R} \rho_\infty (y) \diff y \right) % \diff x - \int_{{B_R}} \left( \int_{{B_R}} \varphi (y) \rho_\infty(y) \diff y \right) I[\rho_\infty] (x) \rho_\infty (x) \diff x \\ &= \int_{{B_R}} \varphi (x) \rho_\infty(x) \left( I[\rho_\infty] (x) % \left( \int_{B_R} \rho_\infty (y) \diff y \right) % - \int_{{B_R}} I[\rho_\infty] (y) \rho_\infty (y) \diff y \right) \diff x \end{align*} Since $\mathfrak m \delta_0$ is not a minimiser (see \Cref{rem:energy formal analysis on Diracs}) then $\rho_\infty \not \equiv 0$. As $\varphi$ concentrates to a point, we recover for a.e. $x$ either \begin{align*} \rho_\infty (x) = 0 \qquad \text{or} \qquad I[\rho_\infty] (x) = \frac{ \int_{{B_R}} I[\rho_\infty] (y) \rho_\infty (y) \diff y}{% \int_{B_R} \rho_\infty (y) \diff y } =: \mathcal C[\rho_\infty]. \end{align*} Notice that the right hand of the second term is a constant. Since $\rho_\infty$ is radially decreasing then there exists $R_\infty > 0$ such that \begin{equation*} \rho_\infty(x) = \begin{dcases} \left ( \tfrac{1-m}{m} (V - \mathcal C[\rho_\infty] ) \right )^{- \frac 1 {1-m} } & |x| \le R_\infty, \\ 0 & R_\infty < |x| < R \end{dcases} = \rho_{V+h} (x) \chi_{B_{R_\infty}} (x) \end{equation*} where, by evaluating close to $0$ we deduce that $h = - \mathcal C[\rho_\infty] \ge 0$. 
% Notice that $\rho_\infty$ is the minimiser of the two variable function \begin{align*} f(\tau,h) = \widetilde{\mathcal F_R} & \left[\rho_{V+h} \chi_{B_{\tau}} + \left( \mathfrak m - \int_{B_{\tau}} \rho_{V+h}\right) \delta_0 \right] = \mathcal F_R [\rho_{V+h} \chi_{B_{\tau}}] \\ &= |\partial B_1| \int_0^{\tau} \left( -\tfrac {1}{1-m} \rho_{V+h}^{m} (r) + V (r) \rho_{V+h} (r) \right) r^{n-1} \diff r \end{align*} under the total mass constraint that \begin{equation*} |\partial B_1| \int_{0}^{\tau} \rho_{V+h} (r) r^{n-1} \diff r \le \mathfrak m. \end{equation*} It is not a difficult exercise to check that the minimum is achieved with $R_\infty = R$ and $h$ as small as possible. When $ \mathfrak m > a_{V,R}$ (which corresponds to $h = 0$) we have to add a Dirac Delta at the origin, with the difference of the masses $\mathfrak m - a_{V,R}$. To check this, first we point out that \begin{align*} -\tfrac {1}{1-m} \rho_{V+h}^{m} + V \rho_{V+h} & = \rho_{V+h} \left( -\tfrac {1}{1-m} \rho_{V+h}^{m-1} + V \right) = - \rho_{V+h} \left( \tfrac{1-m}{m} V + \tfrac 1 m h \right)< 0 \qquad \text{for all } r > 0. \end{align*} We deduce that $f \le 0$ and increasing the integration domain decreases $f$, i.e $\frac{\partial f}{\partial \tau} < 0$ for all $\tau, h > 0$. On the other hand, since $\frac{\partial \rho_{V+h}}{\partial h } < 0$ for $r, h > 0$ we have that \begin{equation*} \frac{\partial f}{\partial h} = |\partial B_1|\int_0^\tau \left( \frac{m}{m-1} \rho^{m-1} + V \right) \frac{\partial \rho_{V+h}}{\partial h} r^{n-1}\diff r = - h |\partial B_1|\int_0^\tau \frac{\partial \rho_{V+h}}{\partial h} r^{n-1}\diff r > 0 . \end{equation*} Hence, the derivative is not achieved at interior points. We look at the boundaries of the domain: \begin{enumerate} \item The segment $(\tau, h) \in \{0\} \times [0,+\infty)$, where $f = 0$. These are all maximisers. \item The segment $(\tau,h) \in \{R\} \times [h_0, +\infty)$, where $h_0$ is the minimum value so that $\int_{{B_R}} \rho_{V+h} \le \mathfrak m$. If $\mathfrak m > a_{V,R}$, then $h_0 = 0$. Using the derivative respect to $h$, the minimum in this segment is achieved at $(R,h_0)$. \item A segment $(\tau,h) \in [0,R_0] \times \{0\}$, where $R_0$ is such that $\int_{B_{R_0}} \rho_V = \mathfrak m$. If $\mathfrak m > a_{V,R}$, then $R_0 = R$. Using the derivative respect to $\tau$, the minimum in this segment is achieved at $(R_0,0)$. \end{enumerate} Hence, if $\mathfrak m \ge a_{V,R}$, the minimum is achieved at $\tau = R$ and $h = 0$. The remaining mass is completed with a Dirac. Lastly, if $\mathfrak m < a_{V,R}$ there is an extra part of the boundary, where the mass condition is achieved with equality \begin{enumerate}[resume] \item The curve $(\tau, \overline h(\tau))$ such that \begin{equation*} |\partial B_1| \int_{0}^{\tau} \rho_{V+\overline h} (r) r^{n-1} \diff r = \mathfrak m. \end{equation*} Notice that this segment contains the minima of the other segments. Taking a derivative respect to $\tau$ we deduce that \begin{equation*} \frac{\diff \overline h}{\diff \tau} = - \frac{ \rho_{V+\overline h} (\tau) \tau^{n-1} } { \displaystyle  \int_{0}^{\tau} \frac{ \partial \rho_{V+\overline h} }{\partial h} (r) r^{n-1} \diff r} \ge 0. 
\end{equation*} Therefore, using Leibniz's rule again we recover \begin{align*} \frac{\diff }{\diff \tau} \left ( f(\tau, \overline h (\tau)) \right ) &= - |\partial B_1| \frac{\diff }{\diff \tau} \int_0^\tau \rho_{V+\overline h} \left( \tfrac{1-m}{m} V + \tfrac 1 m \overline h \right) r^{n-1} \diff r \\ &= - |\partial B_1| \rho_{V+\overline h} (\tau) \left( \tfrac{1-m}{m} V (\tau) + \tfrac 1 m \overline h \right) \tau^{n-1} \\ &\qquad - |\partial B_1| \frac{\diff \overline h }{\diff \tau } \int_{0}^{\tau} \left( \frac{ \partial \rho_{V+\overline h} }{\partial h} \left( \tfrac{1-m}{m} V + \tfrac 1 m \overline h \right) + \rho_{V+ \overline h} \right) r^{n-1} \diff r \le 0. \end{align*} Finally, the minimum is achieved for the lasted $\tau$, so again the minimum is $(R,h_0)$. \qedhere \end{enumerate} \end{proof} \begin{remark}[$\mathfrak{m} \delta_0$ is not a minimiser] \label{rem:energy formal analysis on Diracs} Let $\rho \in L^1_+({B_R})$ smooth be fixed and let us consider the dilations $ \rho_s (x) = s^n \rho(s x) $ for $s \ge 1$. Notice that $\rho_s \to \delta_0$ as $s \to +\infty$ in the weak-$\star$ of $\mathcal M (\overline {B_R})$. As $s \to \infty$ we can compute \begin{equation*} \mathcal F_R[\rho_s] = \frac{s^{n(m-1)}}{m-1} \int_{B_R} \rho(x)^m \diff x + \int_{B_R} V(s^{-1}x) \rho(x) \diff x \longrightarrow 0+V(0)\int_{B_R} \rho(x) \diff x = 0. \end{equation*} It is not difficult see that $\mathcal F_R$ takes negative values, so this is not a minimiser. \end{remark} In \cite{Cao2020} the authors prove that in ${\mathbb R^n}$ if $ \rho_{V+h_1} \le \rho_0 \le \rho_{V+h_2} $ then $\rho(t) \to \rho_{V+h}$ of the same initial mass. This shows that $\mu_{\infty,\mathfrak{m}} = \rho_{V+h}$ is attractive in the cases without Dirac Delta concentration at the origin. We have constructed initial data $\rho_0 > \rho_V$ such that $\rho(t) \to \mu_{\infty,\mathfrak{m}}$ in the sense of their mass functions. Furthermore, we show that \begin{lemma}[Minimisation of $\mathcal F_R$ through solution of \eqref{eq:main bounded domain}] Assume $ \rho_V\le \rho_0 $, \eqref{eq:rho0 below limit}, $a_{V,R} < a_{0,R} = \|\rho_0 \|_{L^1 ({B_R})}$ and let $\rho$ be constructed in \Cref{thm:bounded concentrating intro}. Then $$ \mathcal F_R[\rho(t)] \searrow \widetilde{\mathcal F_R}[\mu_{\infty, a_{0,R}}]=\mathcal F_R[\rho_V]. $$ \end{lemma} \begin{proof} From the gradient flow structure we know $\mathcal F[\rho(t)]$ is non-increasing. First, we prove $\rho(t) \to \rho_V$ in $L^1({B_R} \setminus B_\varepsilon)$ for some $\varepsilon$ small. We know that $\rho(t) \ge \rho_V$ so \begin{align*} \int_{{B_R} \setminus B_\varepsilon} |\rho(t) - \rho_V | &= \int_{{B_R} \setminus B_\varepsilon} (\rho(t) - \rho_V ) = M(t,R)-M(t,\varepsilon) - (M_V (R) - M_V(\varepsilon)) ) \\ &= (a_{0,R} - a_{V,R}) + M_V(\varepsilon) - M(t,\varepsilon) \to 0 \end{align*} as $t \to \infty$ due to \Cref{thm:bounded concentrating intro}. 
Now we can explicitly compute \begin{align*} |\mathcal F[\rho(t)] - \mathcal F[\rho_V]| &\le \left| \frac{1}{1-m}\int_{B_\varepsilon} (\rho(t)^m - \rho_V^m) \right| + \left| \frac{1}{1-m}\int_{{B_R} \setminus B_\varepsilon} (\rho(t)^m - \rho_V^m) \right| \\ & \qquad + \left| \int_{B_\varepsilon} V (\rho(t) - \rho_V) \right| + \left| \int_{{B_R} \setminus B_\varepsilon} V (\rho(t) - \rho_V) \right|\\ & \le \frac{|B_\varepsilon|^{ {1-m}}}{1-m}( \|\rho(t)\|_{L^1}^m + \|\rho_V\|_{L^1}^m ) + \left| \frac{1}{1-m}\int_{{B_R} \setminus B_\varepsilon} (\rho(t)^m - \rho_V^m) \right|\\ & \quad + \left( \sup_{x \in B_\varepsilon} V(x) \right) ( \|\rho(t)\|_{L^1} + \|\rho_V\|_{L^1} ) + \left( \sup_{x \in B_R} V(x) \right) \int_{{B_R} \setminus B_\varepsilon} |\rho(t) - \rho_V | . \end{align*} Due to the $L^1$ convergence, we can extract a sequence $t_k \to \infty$ such that $\rho(t_k) \to \rho_V$ a.e. in ${B_R}\setminus B_\varepsilon$. For this subsequence, due to Fatou's lemma and $\rho(t)\ge\rho_V$ we have \begin{equation*} \int_{{B_R}\setminus B_\varepsilon} \rho(t_k)^m \to \int_{{B_R}\setminus B_\varepsilon} \rho_V^m. \end{equation*} Collecting the above estimates, we conclude that \begin{equation*} \limsup_{k \to \infty} |\mathcal F[\rho(t_k)] - \mathcal F[\rho_V]|\le \tfrac{1}{1-m}|B_\varepsilon|^{{1-m}}(\|\rho(t)\|^m_{L^1} + \|\rho_V\|_{L^1}^m) + \left( \sup_{x \in B_\varepsilon} V(x) \right)( \|\rho(t)\|_{L^1} + \|\rho_V\|_{L^1} ) \end{equation*} for any $\varepsilon > 0$. Letting $\varepsilon \to 0$ we recover that $\limsup_k$ is actually a $\lim_k$, and it is equal to $0$. Since $\mathcal F[\rho(t)]$ is non-increasing, we recover the limit as $t \to \infty$. \end{proof} \section{The problem in $\mathbb R^n$} \label{sec:Rn} We start by showing the existence of a viscosity solution of the mass equation \eqref{eq:mass}, by letting $R \to +\infty$. As $R \to \infty$ we can modify $V_R$ only on $(R-1) < |x| < R$ to have $\nabla V_R (x) \cdot x = 0$ for $|x|=R$. Fix $\rho_0 \in L^1 ({\mathbb R^n})$ radially symmetric. Let $M_R$ be the solution of the mass equation with this data. Consider the extension \begin{equation*} \widetilde M_R (t,v) = \begin{dcases} M_R (t,v) & v \le R_v, \\ \| \rho_0 \|_{L^1 ({B_R})} & v > R_v, \end{dcases} \end{equation*} where, as above, we denote $R_v = R^n |B_1|$. Since $\| \widetilde M_R \|_{L^\infty ( (0,\infty) \times (0,\infty))}$ we have that, up to a subsequence \begin{equation*} \widetilde M_{R_k} \rightharpoonup M \text{ weak-}\star \text{ in } L^\infty ((0,\infty)^2). \end{equation*} We can carry the estimate in $C^\alpha ( [T_1,T_2] \times [v_1,v_2] )$ given in \eqref{eq:local Calpha regularity}, which is uniform in $R$ since $\| M_R \| _{L^\infty} \le 1$, for any $0 < T_1 < T_2 < \infty$ and $0 < v_1 < v_2 < R_v$. Now we show $M$ is a viscosity solution. Due to the uniform continuity provided by \Cref{thm:comparison principle for mass} and the Ascoli-Arzelá theorem, for any $K = [0,T] \times [v_1,v_2]$ with $v_1, v_2 , T > 0$, we have a further subsequence that converges in $C(K)$ to some function $\widetilde M$ the uniform continuity. It is easy to characterise $\widetilde M = M$ almost everywhere. Due to the uniform convergence, we preserve the value of $M(0,v) = M_R(0,v)$ for $v \le R_v$. Applying the same stability arguments for viscosity solutions as in \Cref{thm:comparison principle for mass}, $M$ is a viscosity solution of the mass equation \eqref{eq:mass}. 
\begin{proposition} \label{prop:Rd existence mass} Assume $V \in W^{2,\infty}_{loc} ({\mathbb R^n})$ is radially symmetric, strictly increasing, $V\ge 0$, $V (0) = 0$ and the technical assumption \eqref{eq:rhoV in L1ee loc}. Let $\rho_0 \in L^1 ({\mathbb R^n})$ be radially symmetric such that $\| \rho_0 \|_{L^1} = 1$. Then, there exists $M \in C_{loc} ([0,+\infty] \times (0,+\infty))$ a viscosity solution of \eqref{eq:mass} in $(0,\infty) \times (0,\infty)$ that satisfies the initial condition $$M(0,v) = \int_{\widetilde B_v} \rho_0 (x) \diff x.$$ We also have the $C^\alpha_{loc}$ interior regularity estimate \eqref{eq:local Calpha regularity} with $R_v = \infty$. \end{proposition} Notice that, at this point, we do not check that $M(t,0) = 0$, and hence concentration in finite time may, in principle, happen in ${\mathbb R^n}$. We also do not show, at this point, that $M(t,\infty) = 1$. There could, in principle, be loss of mass at infinity. \begin{remark}[Conservation of total mass if $m \in (\frac{n-2}n, 1)$] For this we use the following comparison. We consider $\underline u_k$ the solution of the pure-diffusion equation \begin{equation*} \begin{dcases} \underline u_t = \Delta \Phi_k (\underline u) & t > 0 , x \in B_{R}, \\ \partial_n \underline u = 0 & t> 0, x \in \partial B_{R} \\ \underline u (0,x) = u_{0}(x). \end{dcases} \end{equation*} Then the associated mass satisfies the equation \begin{equation*} \begin{dcases} \frac{\partial \underline M}{\partial t} = (n \omega_n^{\frac 1 n} v^{\frac{n-1}{n}})^2 \frac{\partial }{\partial v} \Phi_k \left(\frac{\partial \underline M}{\partial v} \right) & t > 0, v \in (0,R_v),\\ \underline M (t,0) = 0 , & t> 0\\ \underline M (t,R_v) = \|u_{0}\|_{L^1({B_R})} & t > 0. \end{dcases} \end{equation*} If $u_0 \ge 0$ is radially decreasing, then so is $\frac{\partial M_k}{\partial v} = \underline u$. Therefore, in the viscosity sense \begin{equation*} \frac{\partial \underline M}{\partial t} \le (n \omega_n^{\frac 1 n} v^{\frac{n-1}{n}})^2 \left\{ \frac{\partial }{\partial v} \Phi_k \left(\frac{\partial \underline M}{\partial v} \right) + \frac{\partial \underline M}{\partial v}\frac{\partial V}{\partial v} \right\}. \end{equation*} Let $u$ be the solution of \eqref{eq:main regularised bounded}. Due to \Cref{thm:comparison principle for mass} we have that \begin{equation*} \underline M (t,v) \le \int_{\widetilde B_v} u(t,x) \diff x. \end{equation*} Recalling the limit through $\Phi_k$ given by \eqref{eq:Phik} and the limit $R \to \infty$, the mass constructed in \Cref{prop:Rd existence mass} we have the estimate \begin{equation*} \int_{\widetilde B_v} \underline u (t,x) \diff x \le M(t,v) \le 1. \end{equation*} where $\underline u$ is the solution of $ \underline u_t = \Delta \underline u^m $ in ${\mathbb R^n}$. When $m \in (\frac{n-2}n, 1)$ we know that $\int_{\mathbb R^n} \underline u(t,x) \diff x = \int_{\mathbb R^n} u_0 (x) \diff x$ and, hence $M(t,\infty) = 1$. \end{remark} \subsection{At least infinite-time concentration of the mass} \label{sec:Rd concentration} Let assume $a_V < 1$ and that $\rho_0$ is such that there exists $F$ with the following properties \begin{equation} \label{eq:Rd hypothesis concentration} \| \rho_F \|_{L^1 ({\mathbb R^n})} = 1 \qquad \text{ and } \qquad M_{\rho_F} \le M_{\rho_0} \le (1 - a_V) + M_{\rho_V}. 
\end{equation} \begin{remark} For example, this covers the class of initial data that satisfy the following three assumptions: \begin{itemize} \item $M_{\rho_V} \le M_{\rho_0} \le (1 - a_V) + M_{\rho_V}$ \item $\int_{\widetilde B_v}\rho_0 (x)\diff x = (1 - a_V) + \int_{\widetilde B_v} \rho_V (x) \diff x$ for $v \ge v_0$ \item $M_{\rho_0}$ is Lipschitz in $(v_0 - \varepsilon, v_0 + \varepsilon)$. \end{itemize} In this setting, we can take a suitable initial datum $\rho_D$ as in the case of balls, and we are reduced to a problem in $[0,v_0]$, since the upper and lower bound guarantee that $M(t,v) = (1 - a_V) + \int_{\widetilde B_v} \rho_V (x) \diff x$ for all $v \ge v_0$. This is a Dirichlet boundary condition for the mass. \end{remark} When $\rho _0 = \rho_F$ then the associated mass $M$ obtained in \Cref{prop:Rd existence mass} satisfies \begin{enumerate} \item $M$ is a viscosity solution of the mass equation and locally $C^\alpha$ \item $M (0,v) = \int_{\widetilde B_v} \rho_F (x) \diff x$ \item $M $ is non-decreasing in $t$ and $x$ (due to the properties of the approximations). \item We have the comparison \begin{equation*} M_{\rho_V} (v) \le M_{\rho_D} (v) \le M (t,v) \le (1 - a_V) + M_{\rho_V} (v). \end{equation*} In particular $M (t, \infty) = 1$ for all $t$ finite. \end{enumerate} Again, there exists a point-wise limit \begin{equation*} M_\infty(v) = \lim_{t \to \infty} M(t,v). \end{equation*} As in \Cref{thm:R finite concentration mass}, $M_\infty$ preserves the $C^\alpha_{loc}$ estimates, using Dini's theorem we can prove uniform convergence in intervals $[\varepsilon, \varepsilon^{-1}]$. Thus $M_\infty$ is a viscosity solution of \eqref{eq:problem M infty}. Due to the sandwich theorem and monotonicity \begin{equation*} M_\infty(0^+) \le 1 -a_V, \qquad M_\infty(+\infty) = 1. \end{equation*} It is easy to characterise $M_\infty$ as we have done in the case of balls. This proves \Cref{cor:Rd concentration} under hypothesis \eqref{eq:Rd hypothesis concentration}. \begin{remark}[Convergence of $\rho_R$ as $R \to \infty$] Since we do not have any $L^q$ bound for $\rho$ for $q > 1$, we do not have any suitable compactness. We can extend $\rho_R(t)$ by $0$ outside $B_R$ and we do know that $ \| \widetilde \rho_R(t) \|_{\mathcal M ({\mathbb R^n})} \le 1. $ If we assume that \eqref{eq:sufficient condition existence of minimisers} and that $V(x) \ge c |x|^\alpha$ for $c, \alpha > 0$. The properties can be inhereted to $\rho_R$ so \begin{equation*} \int_{B_R} |x|^\alpha \rho_R \le C (1 + \mathcal F [\rho_0]). \end{equation*} For $\rho_0$ in a suitable integrability class, we have tightness, and hence a weakly convergent subsequence such that \begin{equation*} \widetilde \rho_R \rightharpoonup \mu \qquad \text{ weak}-\star \text{ in } L^\infty( 0,\infty; \mathcal M({\mathbb R^n}) ) \end{equation*} We also know that $\rho_R^m$ is uniformly integrable. However, since we cannot assure $\rho_R^m \rightharpoonup (\mu_{ac})^m$, we cannot characterise $\mu$ as a solution of \eqref{eq:main}. This remark is still valid for radial initial data. \end{remark} \subsection{Minimisation of the free energy} Following the arguments in \cite{Balague2013,Calvez2017,Carrillo2019}, we have an existence and characterisation result for the minimiser. In ${\mathbb R^n}$ the free-energy of the FDE $u_t = \Delta u^m$ with $0 < m <1$, is not bounded below, and $u(t) \to 0$ as $t \to \infty$. In fact, the mass of solutions escapes through $\infty$ in finite time if $m < \frac{n-2}n$. 
We need to impose further assumptions on $V$ so that the formal critical points $\rho_{V+h}$ are in fact minimisers. We show below that it suffices that $V$ is not critical in the sense of constants, i.e. \begin{equation} \label{eq:sufficient condition existence of minimisers} \inf_{\rho \in \mathcal P_{ac} (\mathbb R^n)} \left( \frac 1 {m-1} \int_{\mathbb R^n} \rho^m + (1-\varepsilon) \int_{\mathbb R^n} V (x) \rho(x) \diff x \right) > -\infty, \qquad \text{for some } \varepsilon > 0. \end{equation} We provide an example of $V$ where this property holds below. As in ${B_R}$, we define an extension of $\mathcal F$ to the space of measures as \begin{equation*} \widetilde{ \mathcal F} [ \mu ] = \mathcal E_m [ \mu_{ac} ] + \int_{\mathbb R^n} V(x) \diff \mu (x) \end{equation*} where $ \mu_{ac} $ is the absolutely continuous part of the measure $ \mu $. Notice that, since we choose $V(0) = 0$, we have that $ \widetilde{ \mathcal F} [ \mathfrak m \delta_0 + \rho] = \mathcal F [\rho]. $ \begin{proposition} \label{lem:minimizer} Assume $V \ge 0$, $V(0) = 0$, and \eqref{eq:sufficient condition existence of minimisers}. Then, we have the following: \begin{enumerate} \item \label{it:minimizer 1} There exists a constant $C>0$ such that \begin{equation*} \int_{\mathbb R^n} \rho^m + \int_{\mathbb R^n} V \rho \le C ( 1+ \mathcal F[ \rho] ). \end{equation*} \end{enumerate} If, furthermore, $V$ is radially symmetric and non-decreasing, then \begin{enumerate}[resume] \item \label{it:minimizer 2} There exists $\mu_\infty \in \mathcal P (\mathbb R^n)$ such that \begin{equation} \label{eq:mu infinity minimises} \widetilde {\mathcal F}[\mu_\infty] = \inf_{\mu \in \mathcal P ({\mathbb R^n}) } \widetilde{ \mathcal F}[\mu] = \inf_{\rho \in \mathcal P(\mathbb R^n) \cap L^1 ({\mathbb R^n})} \mathcal F [\rho] . \end{equation} \item \label{it:minimizer 3} We have that \begin{equation*} \mu_\infty = \begin{dcases} \rho_{V+h} & \text{if } a_{V+h} = 1, \\ (1-a_V) \delta_0 + \rho_V & \text{if } a_{V} < 1. \end{dcases} \end{equation*} \end{enumerate} \end{proposition} \begin{proof}[Proof of \Cref{lem:minimizer}] Due to the lower bound \eqref{eq:sufficient condition existence of minimisers}, we have that \begin{equation*} \tfrac{1}{1-m} \int_{\mathbb R^n} \rho^m \le C + (1- \varepsilon) \int_{\mathbb R^n} V \rho . \end{equation*} On the other hand, we get \begin{equation*} \int_{\mathbb R^n} V \rho = \mathcal F[\rho] + \tfrac{1}{1-m} \int_{\mathbb R^n} \rho^m \le \mathcal F[\rho] + C + (1- \varepsilon) \int_{\mathbb R^n} V \rho . \end{equation*} Thus \begin{equation*} \varepsilon \int_{\mathbb R^n} V \rho \le \mathcal F[\rho] + C . \end{equation*} Finally, we recover \begin{equation*} \tfrac{1}{1-m} \int_{\mathbb R^n} \rho^m \le C + \frac{(1- \varepsilon)}\varepsilon (\mathcal F[\rho] + C) . \end{equation*} This completes the proof of \Cref{it:minimizer 1}. Clearly, we have that \[ \mathcal F[\rho] \ge \mathcal E_m [\rho] + (1-\varepsilon) \int_{\mathbb R^n} V (x) \rho(x) \diff x. \] Hence, the infimum of $\mathcal F$ is finite. As in the proof of \Cref{thm:bounded minimising FR}, we can consider a minimising sequence $\rho_j$. As in \Cref{thm:bounded minimising FR} we may assume that $\rho_j$ are radially symmetric and non-increasing. Let us prove \Cref{it:minimizer 2}. As in \Cref{thm:bounded minimising FR}, the second equality of \eqref{eq:mu infinity minimises} is due to the weak-$\star$ density of $L^1({\mathbb R^n})$ in the set of measures and the construction of $\widetilde{\mathcal F}$.
Hence, for our minimising sequence we know that \begin{equation*} \int_{\mathbb R^n} \rho_j = 1 , \qquad \int_{\mathbb R^n} \rho_j^m \le C ( 1 + \mathcal F[\rho_j]) \le C. \end{equation*} Using Lieb's trick in \Cref{rem:Liebs trick}, we obtain that $ \rho_j \le C \min \{ |x|^{-n} , |x|^{-n/m} \}. $ Integrating outside of any ball $B_R$, we can estimate \begin{equation*} \int_{\mathbb R^n \setminus B_R} \rho_j \le C \left( \int_R^\infty r^{-\frac n m + n - 1} \diff r \right) \le C R^{n \left( 1 - \frac 1 m \right)}. \end{equation*} Since $m < 1$, this is a tight sequence of measures. By Prokhorov's theorem, there exists a weakly-$\star$ convergent subsequence in the sense of measures. Let its limit be $\mu_\infty$. For the proof of \Cref{it:minimizer 3}, we proceed as in \Cref{thm:bounded minimising FR}. Notice that we still have the estimate \begin{equation*} \rho_j(x) \le \frac{\int_{{\mathbb R^n}} V \rho }{\int_{B_{|x|}} V}. \end{equation*} Since $V$ is strictly increasing, this gives an $L^\infty ({\mathbb R^n}\setminus B_\kappa)$ bound for any $\kappa > 0$, and we can repeat the argument in ${B_R}$. \end{proof} Let us illustrate the previous result by giving sufficient conditions on $V$ that guarantee the main assumption of \Cref{lem:minimizer}. We extend the argument in \cite{Carrillo+Delgadino2018} to exhibit a family of potentials $V$ for which \eqref{eq:sufficient condition existence of minimisers} holds. \begin{theorem} \label{thm:Rd V example} Assume that, for some $\alpha \in (0,m)$, we have that \begin{equation} \label{eq:Rd V sufficient concentration} \chi_V = \sum_{j=1}^\infty 2^{jn} V(2^{j})^{-\frac \alpha {1-m}} < \infty. \end{equation} Then, \eqref{eq:sufficient condition existence of minimisers} holds for any $\varepsilon \in (0,1)$. \end{theorem} \begin{remark} If the function $r \mapsto V(r)^{-\frac \alpha {1-m}} r^n$ is non-increasing, then the integral criterion for series and the change of variables $r = 2^y$ show that the condition becomes \begin{equation*} \int_1^\infty 2^{ny} V(2^y)^{-\frac \alpha {1-m}} \diff y = \frac{1}{\ln 2} \int_2^\infty V(r)^{-\frac \alpha {1-m}} r^{n-1} \diff r \sim \int_{|x| \ge 2} \rho_V^{\alpha} \diff x < \infty. \end{equation*} We are requiring that $\rho_V^{m-\delta} \in L^1$ for some $\delta \in (0,m)$. This is only slightly more restrictive than simply that $\rho_V$ gives a finite quantity in either term of $\mathcal F$. \end{remark} \begin{proof}[Proof of \Cref{thm:Rd V example}] We look first at the integral on $B_1$. Due to Hölder's inequality, we have that \begin{equation*} \frac{1}{m-1}\int_{B_1} \rho^m \ge \frac{|B_1|^{1-m}}{m-1} \left( \int_{B_1} \rho \right)^{m} . \end{equation*} On the other hand, since $V, \rho \ge 0$ we know that $ \int_{B_1} V \rho \diff x \ge 0. $ Hence, we only need to control the integral over $\mathbb R^n \setminus B_1$. We define, for $j \ge 1$, \begin{equation*} \rho_j = \int_{B_{2^j} \setminus B_{2^{j-1}}} \rho (x) \diff x. \end{equation*} First, we point out that \begin{equation*} \int_{{\mathbb R^n} \setminus B_1} V(x) \rho(x) \diff x \ge \sum_{j=1}^\infty V(2^{j-1}) \rho_j. \end{equation*} Due to Jensen's inequality, \begin{align*} \int_{B_{2^j} \setminus B_{2^{j-1}}} \rho^m &\le |B_{2^j} \setminus B_{2^{j-1}}| \left( \frac{1}{|B_{2^j} \setminus B_{2^{j-1}}|} \int_{B_{2^j} \setminus B_{2^{j-1}}} \rho(x) \diff x \right)^m = |B_{2^j} \setminus B_{2^{j-1}}|^{1-m} \rho_j^m \\ &= C_n 2^{jn(1-m)} \rho_j^m . \end{align*} Notice that $B_{2^j} \setminus B_{2^{j-1}} = 2^j (B_{1} \setminus B_{\frac 1 2})$.
Hence \begin{equation*} \int_{{\mathbb R^n} \setminus B_1} {\rho^m} \le \sum_{j=1}^\infty \frac{C_n 2^{j n (1-m)}}{V(2^{j-1})^\alpha}V(2^{j-1})^\alpha \rho_j^\alpha \rho_j^{m-\alpha}. \end{equation*} Applying the triple Hölder inequality with exponents $p = (1-m)^{-1}, q = \alpha^{-1}, r = (m - \alpha)^{-1} $ we recover \begin{align*} \int_{{\mathbb R^n} \setminus B_1} {\rho^m} &\le \left(\sum_{j=1}^\infty \frac{C_n 2^{j n}}{V(2^{j-1})^\frac{\alpha}{1-m}}\right)^{1-m} \left( \sum_{j=1}^\infty V(2^{j-1}) \rho_j \right)^\alpha \left( \sum_{j=1}^\infty \rho_j \right)^{m-\alpha} \\ &\le \chi_V^{1-m} \| \rho \|_{L^1}^{m-\alpha} \left( \int_{\mathbb R^n \setminus B_1} V (x) \rho(x) \diff x \right)^{\alpha} . \end{align*} Lastly, using Young's inequality we have, for any $\varepsilon > 0$, \begin{equation*} \int_{{\mathbb R^n} \setminus B_1} {\rho^m} \le \varepsilon (1-m) \int_{\mathbb R^n \setminus B_1} V (x) \rho(x) \diff x + C(\varepsilon, \alpha,m)\chi_V^{\frac{1-m}{1-\alpha}} \| \rho \|_{L^1}^{\frac{m-\alpha}{1-\alpha}} . \end{equation*} Therefore \begin{equation*} \frac{1}{m-1} \int_{{\mathbb R^n}} \rho^m + (1-\varepsilon) \int_{\mathbb R^n} V \rho \ge \frac{|B_1|^{1-m}}{m-1} \left( \int_{B_1} \rho \right)^{m} - \frac{C(\varepsilon, \alpha,m)}{1-m}\chi_V^{\frac{1-m}{1-\alpha}} \| \rho \|_{L^1}^{\frac{m-\alpha}{1-\alpha}} . \end{equation*} This completes the proof. \end{proof} \begin{remark}[The power-type case $V(x) = C |x|^\lambda$ for $|x| \ge R_0$] \label{rem:minimisation for powers at infinity} In this setting, \eqref{eq:Rd V sufficient concentration} becomes $m > \frac{n}{n+\lambda}$ (equivalently $\frac{n(1-m)}{m} < \lambda$), and in this case we can take any $\alpha$ such that $ \frac{n(1-m)}{\lambda} < \alpha <m$. This condition is sharp. Let us see that, otherwise, $\mathcal F$ is not bounded below. We recall the following computation, which can be found in \cite[Theorem 15]{Carrillo+Delgadino2018} following the reasoning in \cite[Theorem 4.3]{Carrillo+Delgadino+Patacchini2019}. Assume $m < \frac{n}{n+\lambda}$. We can construct densities $\rho$ for which the energy is $-\infty$. Let \begin{equation*} \rho_\beta = \sum_{j=j_0}^\infty \frac{\rho_j}{|B_{2^{j+1}} \setminus B_{2^j}|} \chi_{B_{2^{j+1}} \setminus B_{2^j}}, \qquad \text{where } \rho_j = \frac{2^{-j\beta}}{\sum_{j=j_0}^\infty 2^{-j\beta}}. \end{equation*} Here $\beta > 0$ is a constant we will choose later, and $j_0$ is such that $2^{j_0} > R_0$. We can explicitly compute \begin{equation*} \int_{\mathbb R^n} |x|^\lambda \rho_\beta(x) \diff x = \frac{2^{n+\lambda}-1}{n + \lambda} \frac{\sum_{j=j_0}^\infty 2^{-j(\beta-\lambda)}}{\sum_{j=j_0}^\infty 2^{-j\beta}} . \end{equation*} This is a finite number whenever $\beta > \lambda$. On the other hand \begin{equation*} \int_{\mathbb R^n} \rho_\beta(x)^m \diff x= C(n, \lambda) \frac{\sum_{j=j_0}^\infty 2^{-j(m \beta - n(1-m))}}{\left(\sum_{j=j_0}^\infty 2^{-j\beta}\right)^m} . \end{equation*} This number is infinite if $m \beta < n(1-m)$. Hence, \begin{equation*} -\tfrac{1}{1-m}\int_{\mathbb R^n} \rho_\beta(x)^m \diff x + \int_{\mathbb R^n} C |x|^\lambda \rho_\beta (x) \diff x = -\infty , \qquad \forall C \in \mathbb R \text{ and }\lambda < \beta < \frac{n(1-m)}{m}. \end{equation*} The case of the equality $m = \frac{n}{n+\lambda}$ is, as usual, more delicate due to the scaling.
However, we still prove that \begin{equation*} \inf_{\rho \in \mathcal P \cap L^1} \left(- \int_{\mathbb R^n} \rho^{\frac{n}{n+\lambda}} + C \int_{\mathbb R^n} |x|^\lambda \rho \right)= -\infty, \qquad \forall C \in \mathbb R. \end{equation*} As in the proof of \cite[Proposition 4]{Carrillo2019}, we can take the following functions: \begin{equation*} \rho_k (x) = D_k |x|^{-(n+\lambda)} \chi_{B_k \setminus B_{R_0}}, \qquad \text{ where } D_k= \left(\int_{B_k \setminus B_{R_0}} |x|^{-(n+\lambda)} \diff x \right)^{-1}. \end{equation*} It is a direct computation that \begin{equation*} \tfrac 1 {D_k}\int_{\mathbb R^n} |x|^\lambda \rho_k = \int_{B_k \setminus B_{R_0}} |x|^{-n}\diff x = \tfrac 1 {D_k^{\frac{n}{n+\lambda}}}\int_{\mathbb R^n} \rho_k^{ \frac{n}{n+\lambda} }. \end{equation*} For any $\alpha_k > 0$, we have that the rescaling $ \widetilde \rho_{k} (x) = \alpha_k^n \rho_k(\alpha_k x) $ is such that \begin{equation*} a_k \coloneqq \frac{ \int_{\mathbb R^n} |x|^\lambda \widetilde\rho_k} { \left( \int_{\mathbb R^n} \widetilde\rho_k^{ \frac{n}{n+\lambda} } \right)^{\frac{n+\lambda}{n}} } = \frac{ \int_{\mathbb R^n} |x|^\lambda \rho_k} { \left( \int_{\mathbb R^n} \rho_k^{ \frac{n}{n+\lambda} } \right)^{\frac{n+\lambda}{n}} } = \left( \int_{B_k \setminus B_{R_0}} |x|^{-n}\diff x \right)^{1-\frac{n+\lambda}{n}} = \left(|\partial B_1| \log \frac{ k}{R_0}\right)^{1-\frac{n+\lambda}{n}} \to 0. \end{equation*} For any sequence $b_k$ which is yet to be determined, we can pick $\alpha_k$ so that $ \int_{{\mathbb R^n}} \widetilde\rho_k^{\frac{n}{n+\lambda}} = b_k$ by taking $$ \alpha_k = \left( \frac{ b_k }{ \int_{\mathbb R^n} \rho_k^{\frac{n}{n+\lambda}}} \right) ^{-\frac {n + \lambda} {\lambda n}}. $$ Then, passing to the notation $m = \frac{n}{n+\lambda}$, we recover that \begin{align*} -\int_{\mathbb R^n} \widetilde\rho_k^{m} + C \int_{\mathbb R^n} |x|^\lambda \widetilde\rho_k &= -b_k + C a_k b_k^{\frac 1 m} = - b_k^{\frac 1 m} \left( b_k^{-\frac{1-m}{m}} - Ca_k \right) = - b_k^{\frac 1 m - \varepsilon} , \end{align*} if we pick the sequence $b_k$ so that $b_k^{-\frac{1-m}{m}} - Ca_k = b_k^{-\varepsilon}$. Notice that the function $g_{a,b} (s) = s^a - s^b$ is strictly increasing near $0$ if $a < b$. Hence, for $k$ large enough and $\varepsilon > \frac{1-m} m$, we can solve $Ca_k = b_k^{-\frac{1-m}{m}} - b_k^{-\varepsilon}$, and we recover $b_k \to +\infty$ as $k \to \infty$. Hence, taking $\varepsilon \in ( \frac{1-m} m, \frac 1 m)$, and $k \to \infty$, we prove the result. \end{remark} \begin{remark} With the sequence $\rho_k$ above, we can also prove that \begin{equation} \label{eq:Carlson critical} \inf_{\rho \in \mathcal P \cap L^1} \frac{ \int_{\mathbb R^n} |x|^\lambda \rho}{ \left( \int_{\mathbb R^n} \rho^{ \frac{n}{n+\lambda} } \right)^{\frac{n+\lambda}{n}} } = 0. \end{equation} This corresponds to the borderline case of the Carlson type inequalities \begin{equation*} \left( \int_{\mathbb R^n} \rho \right)^{ 1 - \frac{n (1-m)}{\lambda m} } \left( \int_{\mathbb R^n} |x|^\lambda \rho \right)^{\frac{n (1-m)}{\lambda m}} \ge c_{n,\lambda,m} \left( \int_{\mathbb R^n} \rho^m \right)^{\frac 1 m} , \qquad \forall \tfrac{n}{n+\lambda} < m <1 \text{ and } \rho \ge 0, \end{equation*} which are known with the explicit constant (see, e.g., \cite[Lemma 5]{Carrillo2019}).
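To verify \eqref{eq:Carlson critical}, it is enough to note that the quotient there is invariant under the rescalings $\rho \mapsto \alpha^n \rho(\alpha\,\cdot)$ and that, by the computation of $a_k$ above, \begin{equation*} \frac{ \int_{\mathbb R^n} |x|^\lambda \rho_k}{ \left( \int_{\mathbb R^n} \rho_k^{ \frac{n}{n+\lambda} } \right)^{\frac{n+\lambda}{n}} } = a_k = \left(|\partial B_1| \log \frac{k}{R_0}\right)^{-\frac \lambda n} \longrightarrow 0 \qquad \text{ as } k \to \infty, \end{equation*} so no positive constant can remain in the inequality at the borderline exponent $m = \frac{n}{n+\lambda}$.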
\end{remark} \subsection{Infinite-time concentration if $V$ is quadratic at $0$} \label{sec:pure aggregation} Our aim in this section is to compare the solutions of \eqref{eq:main} with the solutions of the pure-aggregation problem \begin{equation} \label{eq:aggregation} \frac{\partial \rho }{\partial t} = \nabla \cdot (\rho \nabla \widetilde V), \end{equation} where $\widetilde V$ is a different potential. The equation for the mass can be written in radial coordinates as \begin{equation} \label{eq:aggregation mass} \frac{\partial M}{\partial t} = \frac{\partial M}{\partial r} \frac{\partial \widetilde V}{\partial r}. \end{equation} We will show that infinite-time aggregation happens for \eqref{eq:aggregation} if and only if \begin{equation} \label{eq:hypothesis for concentration in infinite time} \int_{0^+} \left( \tfrac{\partial \widetilde V}{\partial r} (s) \right)^{-1} \diff s = +\infty. \end{equation} Clearly, a sufficient condition is that $\frac{\partial \widetilde V}{\partial r} \le C r$ near $0$. This is the so-called Osgood condition used to distinguish infinite from finite time blow-up in aggregation equations \cite{BCL}. \begin{proposition} \label{prop:solutions of pure aggregation} Assume that $\widetilde V \in C^2 ({\mathbb R^n})$ is radially symmetric, $\widetilde V(0) = 0$, $\frac{\partial \widetilde V}{\partial r} (r) > 0 $ for $r > 0$, \eqref{eq:hypothesis for concentration in infinite time} holds, and let $ M_0$ be a continuous, non-decreasing and bounded function. Then \begin{enumerate} \item There exists a unique classical solution by characteristics $M(t,r)$ of \eqref{eq:aggregation mass} defined for all $t, r > 0$. \item We have $M(t,0) = 0$ for all $t > 0$, i.e. there is no concentration in finite time. \end{enumerate} \end{proposition} \begin{proof}[Proof of \Cref{prop:solutions of pure aggregation}] \Cref{eq:aggregation mass} is a first order linear PDE that we can solve by characteristics. We can look at the characteristic curves of constant mass $ \overline M(t, r_c(t, r_0)) = \overline M(0,r_0). $ Taking a derivative we recover $ \frac{\diff r_c}{\diff t} (t) = - \frac{\partial \widetilde V}{\partial r} (r_c(t)). $ These are the same characteristics obtained when applying the method directly to \eqref{eq:aggregation}. Clearly $ r_c (t, r_0) \le r_0.$ Since $\widetilde V \in C^2 ({\mathbb R^n})$, these characteristics exist for some time $t(r_0) > 0$, and are unique up to that time. Hence, we have \begin{equation} \label{eq:pure aggregation condition characteristics} t = \int_{r_c(t, r_0)}^{r_0} \left( \frac{\partial \widetilde V}{\partial r} (s) \right)^{-1} \diff s. \end{equation} Concentration will occur if $r_c(t, r_0) = 0$ for some $r_0 > 0$ and $t < \infty$, which is incompatible with \eqref{eq:hypothesis for concentration in infinite time}. Notice that since $0 < r_c(t,r_0) \le r_0$, these functions are defined for all $t > 0$. Let us check that the curves $r_c(t,r_0)$ do not cross, and hence can be used as characteristics. If two of them cross at time $t$, we have that \begin{equation*} \int_{r_0 }^{r_c(t,r_0)} \left( \frac{\partial \widetilde V}{\partial r} (s) \right)^{-1} \diff s =- t = \int_{r_1}^{r_c(t,r_1)} \left( \frac{\partial \widetilde V}{\partial r} (s) \right)^{-1} \diff s . \end{equation*} Since $r_c(t,r_0) = r_c(t,r_1)$, we get \begin{equation*} \int_{r_0}^{r_1} \left( \frac{\partial \widetilde V}{\partial r} (s) \right)^{-1} \diff s = 0. \end{equation*} As $ \frac{\partial \widetilde V}{\partial r} > 0$ outside $0$, then $r_0 = r_1$ and the characteristics are the same.
Due to the regularity of $\widetilde V$, there is continuous dependence and, since the characteristics point inwards and do not cross, they fill the space $[0,+\infty) \times [ 0, +\infty)$. Finally, notice also that, since $\frac{\partial \widetilde V}{\partial r} (0) = 0 $ and $\frac{\partial \widetilde V}{\partial r}$ is positive otherwise, for any $r_0 > 0$ we have that $ \lim_{t \to +\infty} r_c (t, r_0) = 0 . $ Since $\widetilde V$ is $C^2$, we have $\partial \widetilde V / \partial r (0) = 0$, so $r_c(t,0) = 0$, i.e. $M(t,0) = 0$. \end{proof} \begin{proposition} \label{prop:monotone solutions of aggregation} Let $\rho$ be a solution by characteristics of the aggregation equation \eqref{eq:aggregation}, and let $r_0(t,r)$ be the foot of the characteristic through $(t,r)$. Then \begin{equation} \label{eq:aggregation derivative of solution by characteristics} \frac{\partial \rho }{\partial r} (t,r) = \left( \frac { r_0} r \right)^{n-1} \rho_0 (r_0) \frac{\frac{\partial \widetilde V }{\partial r} (r_0) } { \left( \frac{\partial \widetilde V }{\partial r} (r) \right)^2 } \Bigg ( - \Delta \widetilde V (r) + \Delta \widetilde V (r_0) + \rho_0(r_0) ^{-1} \frac {\diff \rho_0 }{\diff r} (r_0) { \frac{\partial \widetilde V}{\partial r} (r_0) } \Bigg ). \end{equation} In particular, if $\rho$ is a decreasing solution and $ \widetilde V \in C^2 (\mathbb R^n)$ with $\Delta \widetilde V (0) = 0$, then \begin{equation} \label{eq:aggregation derivative of solution by characteristics decreasing solution C2 potential} \Delta \widetilde V + \rho_0^{-1} \frac{\diff \rho_0}{\diff r} \frac{\partial \widetilde V}{\partial r} \le 0 , \qquad \text{ in } \supp \rho_0. \end{equation} \end{proposition} \begin{remark} For $\widetilde V(r)= r^2$, $\Delta \widetilde V$ is constant, and we only have the last term, so all solutions with decreasing initial datum are decreasing. If $\Delta \widetilde V$ is non-increasing, then in \eqref{eq:aggregation derivative of solution by characteristics} we have $-\Delta \widetilde V(r) + \Delta \widetilde V(r_0) \le 0$ and all solutions are decreasing. This is the case for $\widetilde V(r) = \gamma r^\lambda$ with $\lambda \in (0,2]$. When $\widetilde V (r) = \gamma r^\lambda$ with $\lambda > 2$, let us show that decreasing solutions of \eqref{eq:aggregation} are not $L^1 (\mathbb R^n)$. Hence, any decreasing integrable initial data produces a solution that loses monotonicity. Indeed, if $\widetilde V(r) = r^\lambda$ then $\Delta \widetilde V= \lambda (n + \lambda - 2) r^{\lambda - 2}$ and integrating in \eqref{eq:aggregation derivative of solution by characteristics decreasing solution C2 potential} we recover $\rho_0 \ge Cr ^{-(n + \lambda - 2)}$ which is not integrable for $\lambda > 2$. \end{remark} \begin{proof}[Proof of \Cref{prop:monotone solutions of aggregation}] Taking the derivative directly in $\frac{\partial M}{\partial r} (t,r) = n \omega_n r^{n-1} \rho (t,r)$, we recover that \begin{align*} \allowdisplaybreaks \frac{\partial \rho }{\partial r} (t,r) &= (n \omega_n)^{-1} \frac{\partial}{\partial r} \left( r^{1-n} \frac{\partial }{\partial r} (M (t,r))\right) = (n \omega_n)^{-1} \frac{\partial}{\partial r} \left( r^{1-n} \frac{\partial }{\partial r} (M_0 (r_0(t,r) ))\right) \\ &= r^{1-n} r_0^{n-1} \frac{\partial r_0}{\partial r} (t,r) \rho_0 (r_0) \Bigg ( - (n-1) r^{-1} + (n-1) r_0^{-1} \frac{\partial r_0}{\partial r} + \rho_0(r_0) ^{-1} \frac {\diff \rho_0 }{\diff r} (r_0) \frac{\partial r_0}{\partial r} + \frac{ \frac{\partial^2 r_0}{\partial r^2} }{ \frac {\partial r_0}{\partial r}} \Bigg ).
\end{align*} Going back to \eqref{eq:pure aggregation condition characteristics} and taking a derivative in $r$, we deduce \begin{equation*} \frac{\partial r_0}{\partial r} (t,r) = \frac{ \frac{\partial \widetilde V}{\partial r} (r_0 (t,r) ) }{ \frac{\partial \widetilde V}{\partial r} (r) } \ge 0. \end{equation*} Taking another derivative we have that \begin{equation*} \frac{\partial^2 r_0}{\partial r^2} (t,r) = \frac{ \frac{\partial \widetilde V}{\partial r} (r_0 (t,r) ) }{ \frac{\partial \widetilde V}{\partial r} (r)^2 } \left( \frac{\partial^2 \widetilde V}{\partial r^2} (r_0) - \frac{\partial^2 \widetilde V}{\partial r^2} (r) \right) . \end{equation*} Joining this information and collecting terms we recover \eqref{eq:aggregation derivative of solution by characteristics}. Clearly, \eqref{eq:aggregation derivative of solution by characteristics decreasing solution C2 potential} and the fact that $\Delta \widetilde V \ge 0$ guarantee that $\rho (t, \cdot)$ is decreasing. Let us now show that, conversely, the condition holds whenever $\rho$ is decreasing. If $\rho$ is decreasing, then this derivative is non-positive. For $r_0 \in \supp \rho_0$ we therefore have \begin{equation*} - \Delta \widetilde V (r) + \Delta \widetilde V (r_0) + \rho_0(r_0) ^{-1} \frac {\diff \rho_0 }{\diff r} (r_0) { \frac{\partial \widetilde V}{\partial r} (r_0 ) } \le 0. \end{equation*} The support of $\rho_0$ is a ball. Fixing a value of $r \in \supp \rho_0$, we have that \begin{equation*} - \Delta \widetilde V (r_c(t,r)) + \Delta \widetilde V (r) + \rho_0(r) ^{-1} \frac {\diff \rho_0 }{\diff r} (r) { \frac{\partial \widetilde V}{\partial r} (r) } \le 0. \end{equation*} Letting $t \to +\infty$, since $r_c (t,r) \to 0$, $\Delta \widetilde V$ is continuous and $\Delta \widetilde V (0) = 0$, we recover \eqref{eq:aggregation derivative of solution by characteristics decreasing solution C2 potential}. This completes the proof. \end{proof} Now we have the tools to show that concentration does not happen in finite time if $\frac{\partial V}{\partial r} \le C_V r$ close to $0$. We construct a super-solution using the pure-aggregation equation. \begin{proof}[Proof of \Cref{prop:Rd no concentration in finite time}] Take \begin{equation*} \overline \rho_0 (x) = { \rho_0 (x) } \frac{\int_{\mathbb R^n} \rho_0 (x) \diff x} {\int_{B_{R_V}} \rho_0 (x) \diff x} \chi_{B_{R_V}} \end{equation*} and \begin{equation} \label{eq:pure aggregation replacement V} \widetilde V (r) = \frac{ C_V } 2 r^2. \end{equation} Obtain $\overline M$ as the solution by characteristics of \eqref{eq:aggregation mass} constructed in \Cref{prop:solutions of pure aggregation}. Due to the definition of $\widetilde V $, we know that it satisfies the hypothesis of \Cref{prop:monotone solutions of aggregation} and we have $\Delta \widetilde V = n C_V \ge 0$. Thus, \eqref{eq:aggregation derivative of solution by characteristics} shows that $\overline \rho (t, \cdot)$ is decreasing, and non-negative. Therefore, it holds that, in the viscosity sense, $ \frac{\partial \overline M}{\partial v} \ge 0$ and $\frac{\partial^2 \overline M}{\partial v^2} \le 0$.
Hence, still in the viscosity sense, \begin{align*} \frac{\partial \overline M}{\partial t} - (n \omega_n^{\frac 1 n} v^{\frac{n-1} n })^2 \left\{ m \left( \frac{\partial \overline M}{\partial v}\right)^{m-1} \frac{\partial^2 \overline M}{\partial v^2} + \frac{\partial \overline M}{\partial v} \frac{\partial V}{\partial v} \right\} &\ge \frac{\partial \overline M}{\partial t} - (n \omega_n^{\frac 1 n} v^{\frac{n-1} n })^2 \left\{ \frac{\partial \overline M}{\partial v} \frac{\partial V}{\partial v} \right\} \\ &= \frac{\partial \overline M}{\partial t} - \frac{\partial \overline M}{\partial r} \frac{\partial V}{\partial r} = \frac{\partial \overline M}{\partial r} \left(C_V r - \frac{\partial V}{\partial r} \right). \end{align*} Since characteristics retract, $\supp \frac{\partial \overline M}{\partial v} \subset B_{R_V}$, so the last term is non-negative by the assumption, because either $\frac{\partial \overline M}{\partial v} = 0$ or $C_V r - \frac{\partial V}{\partial r} \ge 0$. Thus, using the comparison principle in ${B_R}$ for $R \ge R_V$ given in \Cref{thm:comparison principle m}, we have that $ M_R \le \overline M $ for all $t \ge 0$, $v \in [0,R_v]$. Since $M$ is constructed by letting $R \to \infty$, we conclude $M \le \overline M$ for $t,v \ge 0$. \end{proof} \section{Final comments} \label{sec:final comments} \begin{enumerate} \item Blow-up is usually associated in the literature with superlinear nonlinearities, both in reaction-diffusion and in Hamilton-Jacobi equations, cf. for instance \cite{QuittSoupBook, GalVaz99} and their many references. Here it is associated with sublinear diffusion; notice that \eqref{Vgrowth} implies, at least, $0<m<1$. This might seem surprising but it is not, due to two facts. First, recall that $0 < m <1$ means that the diffusion coefficient $m u^{m-1}$ is large when $u$ is small, and small when $u$ is large. This translates into fast diffusion of the support but slow diffusion of level sets with high values (see e.g. \cite{ChassVaz2002} for a thorough discussion). This explains why $\delta_0$ may not be diffused for $m$ small (see \cite{Brezis1983}). Secondly, the confinement potential $V$ needs to be strong enough at the origin to compensate for the diffusion and produce a concentration. In ${B_R}$, this translates into the assumption $\int_{{B_R}} \rho_V < 1$ (recall that, for $V(x) = |x|^{\lambda_0}$, this implies $0 <m < \frac{n-\lambda_0}n < 1$). In ${\mathbb R^n}$ we need to deal with the behaviour at infinity, as mentioned in the introduction. \item Formation of a concentrated singularity in finite time is a clear possibility in this kind of problem. In this paper, we do not consider the case $V \notin W^{2,\infty}_{loc} ({\mathbb R^n})$ (e.g. $V(x) = |x|^\lambda$ with $\lambda < 2$). So long as $\frac{\partial V}{\partial v}$ is continuous (e.g. $\lambda \ge 1$), it makes sense to use the theory of viscosity solutions of the mass equation \eqref{eq:mass}. In principle, there could be concentration in finite time, even in \eqref{eq:main bounded domain}. Notice that, in our results, the estimate for $\rho(t) \in L^q({B_R})$ depends on $\| \Delta V \|_{L^\infty ({B_R})}$. For more general $V$, better estimates for $\rho$ are needed in order to pass the limits $\Phi_k(s) \to s^m$ and $R \to \infty$. Some of these issues will be studied elsewhere. \item For $\rho_0 \in L^{1}_+({B_R})$, $S_R(t) \rho_0$ is constructed by extending the semigroup through a density argument.
We do not know whether it is the limit of the solutions $u_k$ of \eqref{eq:main regularised bounded} with \eqref{eq:Phik}. Furthermore, this question can be extended to initial data so that $\mathcal F_R[\rho_0] < \infty$. \item Non-radial data. We provide a well-posedness theory in ${B_R}$ when $\rho_0 > 0$, but not in ${\mathbb R^n}$. In ${B_R}$, as mentioned in \Cref{rem:bounded concentration non-radial}, we can show concentration in some non-radial cases, but the exact splitting of mass in the asymptotic distribution is still unknown. The asymptotic behaviour in the non-radial case is completely open. \end{enumerate}
\section{Introduction} Major progress has been made in kaon physics in the past 50 years. The number of observed $K_L \to \pi^+\pi^-$ events has increased by 6 orders of magnitude, and the observed $CP$\ violation was experimentally proven to be caused by a complex phase in the CKM matrix. This mechanism is now a fundamental piece of the standard model. Recent kaon experiments are now even searching for new physics \emph{beyond} the standard model\ with $K \to \pi \nu \overline{\nu}$ decays. The branching ratios of $K \to \pi \nu \overline{\nu}$ decays are 7--8 orders of magnitude smaller than the branching ratio of $K_L \to \pi^+\pi^-$, and $CP$-violating $K_L \to \pi^0\pi^0$ decay is now a major background for $K_L \to \pi^0 \nu \overline{\nu}$ experiments. This paper reviews the progress of kaon experiments in the US and Japan as requested by the conference organizer, and how the 6--7 orders of magnitude improvements were possible in the past 50 years.\footnote{% Unless noted, the years are given in published years.} \section{Quest for $\epsilon'/\epsilon$} Soon after the discovery of $CP$\ violation~\cite{Christenson:1964fg}, the $K_L \to \pi^+\pi^-$ decay was explained to be caused by an admixture of a $CP$-even component in the $K_L$~\cite{lee_annrev_1966}: \begin{equation} |K_L\rangle \sim |K_{odd}\rangle + \epsilon |K_{even}\rangle , \end{equation} where this $CP$-even component was decaying to a $CP$-even $\pi^+ \pi^-$ state. This admixture is introduced by a complex phase in the $K^0 - \overline{K^0}$ mixing. The next question was whether the $CP$-odd $K_{odd}$ can directly decay to a $CP$-even $\pi\pi$ state. Such process is called \textit{direct} $CP$\ violation. If the direct $CP$\ violation exists, the ratios between decay amplitudes: \begin{eqnarray} \eta_\pm \equiv &A(K_L \to \pi^+ \pi^-)/A(K_S \to \pi^+\pi^-) & = \epsilon + \epsilon' \textrm{~and}\\ \eta_{00} \equiv &A(K_L \to \pi^0 \pi^0)/A(K_S \to \pi^0\pi^0) & = \epsilon - 2\epsilon' \end{eqnarray} can be different due to isospin. The existence of the direct $CP$\ violation can thus be tested by checking whether the double ratio: \begin{eqnarray} R &\equiv &\frac{BR(K_L \to \pi^+ \pi^-)/BR(K_S \to \pi^+ \pi^-)} {BR(K_L \to \pi^0 \pi^0)/BR(K_S \to \pi^0 \pi^0)}\\ & = & \left| \frac{\eta_\pm}{\eta_{00}} \right|^2\\ & \simeq &1 + 6Re(\epsilon'/\epsilon) \end{eqnarray} deviates from 1 or not. The superweak model~\cite{sw} explained that a very weak unknown interaction that changes the strangeness by $\pm$2 brings in the phase. However, the superweak model cannot violate $CP$\ in the \(K_L \to \pi\pi\) decay process because it cannot contribute to such a $\Delta S = \pm 1$ transition. \subsection{Advancement in Experimental Technologies} To measure the double ratio $R$, high statistics is required. This means that both a higher kaon flux and detectors capable to collect data with higher rates are needed. \subsubsection{Production Target} To get a higher kaon flux, the advancement of accelerators was essential, but there was also a change in production targets. In 1964, the experiment that first discovered $CP$\ violation with 35 $K_L \to \pi^+\pi^-$ events used an ``internal target'' in the accelerator ring as shown in Fig.~\ref{fig:christensen_prb140_1965_Fig1}. The target was a Be wire with 0.5 mm in diameter~\cite{christensen_pr140_1965}. In 1969, at the CERN PS, the proton beam was extracted from the accelerator to bombard a 72-mm-thick tungsten target. 
About 400 $K_L \to \pi^+\pi^-$ events were collected~\cite{bohm_npb9_1969}, ten times more than in the first experiment. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.66\columnwidth]{Yamanaka/PR140_B74_1965_Fig1.pdf} \caption{Plan view of the neutral kaon experiment at BNL AGS. For the experiment that discovered $CP$\ violation, the regenerator was removed, and a He bag was installed between the collimator and spectrometers ~\cite{christensen_pr140_1965}. {\footnotesize \textcircled{c} 2014 The American Physical Society.} } \label{fig:christensen_prb140_1965_Fig1} \end{center} \end{figure} \subsubsection{Charged Spectrometers} There was also a change in detector technologies. The standard tracking devices in the 1960s were spark chambers. They had an advantage that only the tracks of interest can be made visible by applying HV pulses for triggered events. Mirrors were arranged to capture sparks in multiple spark chambers of each event in a single photograph. For the first $CP$\ violation experiment, tracks in the photographs were scanned by human ``scanners'' with digitized angular encoders~\cite{christensen_pr140_1965}. \begin{wrapfigure}{r}{0.45\columnwidth} \includegraphics[width=0.45\columnwidth]{Yamanaka/IEEE_TransNuclSci_13_34_1966_Fig5.pdf} \caption{A spot scanner using CRT as a moving light source. The light spot was controlled by a PDP-1 computer, and was focused onto the film, of which the local transparency was measured by a photomultiplier (DET. PM). {\footnotesize @2014 IEEE. Reprinted, with permission, from} \cite{wenzel_ieee_13_1966}.} \label{fig:wenzel_ieee_13_34_1966_fig5} \end{wrapfigure} In 1969, the experiment at the CERN PS~\cite{bohm_npb9_1969} used an automatic ``Luciole Flying Spot Digitizer'' which could scan 1000 frames per hour. This used a CRT instead of a mechanical system to move a bright light spot quickly across a film and record the intensity of light passing through the film with a phototube~\cite{wenzel_ieee_13_1966}. Figure~\ref{fig:wenzel_ieee_13_34_1966_fig5} shows a similar spot scanner with a CRT, made by the Univ. of Michigan. In the late 1960s, experiments started to move away from using films. Experiments at CERN and BNL used ferrite-core readout systems to read out spark positions and recorded them on tape directly~\cite{faissner_pl30b_1969, jensen_prl23_1969, fryberger_ieee15_1968}. This readout system allowed the BNL experiment to collect 9400 $K_{L, S} \to \pi^+\pi^-$ events, 300 times the first $CP$\ violation experiment. In the early 1970s, experiments started to use multi-wire proportional chambers (MWPC). For example, the experiment that observed the first $K_L \to \mu^+\mu^-$ decays~\cite{carithers_prl30_1336_1973} used MWPCs with 5000 wires that had a 2-mm wire spacing. With the same detector, the experiment collected 2 M \(K_{L, S} \to \pi^+\pi^-\) events~\cite{carithers_prl34_1244_1975}. The geometry of spectrometers has also changed. The $CP$\ violation experiment in 1964 used a double-arm spectrometer with two sets of quadrupole magnets and spark chambers located at \(\pm22^\circ\) from the neutral beam line as shown in Fig.~\ref{fig:christensen_prb140_1965_Fig1}. This geometry was optimized for the 1.1-GeV/c average $K_L$ momentum. A later experiment at CERN~\cite{bouard_pl15_1965} in 1965 used a forward spectrometer with one dipole magnet sandwiched between four spark chambers as shown in Fig.~\ref{fig:DeBouard_pl15_58_1965_1965_fig1} to have a high acceptance for pions from $K_L$'s with the average momentum 11 GeV/c. 
It also had a \v{C}erenkov counter between the downstream spark chambers for a particle identification. The forward spectrometer became the standard for the experiments that followed. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.8\columnwidth]{Yamanaka/PL15_58_1965_scanned_Fig1.pdf} \caption{Plan view of an experimental apparatus at CERN which used a forward spectrometer. SC$i$ are spark chambers, N$i$ and P$i$ are scintillators, B is a bending magnet, and \v{C} is a \v{C}erenkov counter. {\footnotesize Reprinted from}~\cite{bouard_pl15_1965}. {\footnotesize Copyright 2014, with permission from Elsevier.} } \label{fig:DeBouard_pl15_58_1965_1965_fig1} \end{center} \end{figure} \subsubsection{Photon Detectors} Let us now switch to the neutral $K_L \to \pi^0\pi^0$ mode. One of the questions after the discovery of $CP$\ violation was whether there was also the neutral counterpart of the $K_L \to \pi^+\pi^-$. The $K_L \to \pi^0\pi^0$ decay mode was far more difficult than the charged mode, because photons were not readily visible, and there was a large background from the $K_L \to \pi^0\pi^0\pi^0$ decay mode. In 1968, an experiment at CERN~\cite{bugadov_plb28_1968} used a bubble chamber, 1.2 m in diameter and 1 m deep, filled with heavy liquid freon to collect events with 4 photons converted in the chamber. The experiment observed 24 events with 7.4 background events, and gave \(BR(K_L \to \pi^0\pi^0) = (0.94 \pm 0.37) \times 10^{-3}\). Also in 1968, an experiment at Princeton-Pennsylvania Accelerator~\cite{banner_prl21_1107_1968, banner_pr188_1969} used a spectrometer and ``$\gamma$ chambers'' surrounding the four sides of a 30 cm $\times$ 30 cm $K_L$ beam as shown in Fig.~\ref{fig:banner_prl21_1103_1968_Fig1a}. A photon was converted in a 0.1 $X_0$ lead sheet placed on one side, and the resulting electron pair was momentum analyzed in a magnetic spectrometer. A wide gap spark chamber was used before the magnet to minimize multiple scatterings, and thin gap spark chambers were used to track the $e^\pm$ pairs after leaving the magnet. The other three sides were covered with ``$\gamma$ chambers'' consisting of layers of steel plates and spark chambers to measure the hit positions of the other three photons. The hit timings were also recorded to measure the time-of-flight of $K_L$s with a peak momentum of 0.25 GeV/c bunched in a 1 ns width. The decay vertex was assumed to be along the photon momentum axis measured with the spectrometer to reconstruct the decay. The experiment observed 57 $\pm$ 9 events, and gave \(BR(K_L \to \pi^0\pi^0) = (0.97 \pm 0.23) \times 10^{-3}\). \begin{figure}[!ht] \centering \begin{minipage}[t]{0.48\columnwidth} \includegraphics[width=1.0\columnwidth]{Yamanaka/PRL21_1103_1968_scanned_Fig1a.pdf} \caption{Plan view of the experimental apparatus that observed $K_L \to \pi^0\pi^0$ events. The $K_L$ beam entered from the right, and photon detectors surrounded the \emph{sides} of the beam~\cite{banner_prl21_1103_1968}. {\footnotesize \textcircled{c} 2014 The American Physical Society.} } \label{fig:banner_prl21_1103_1968_Fig1a} \end{minipage} \hspace{0.03\columnwidth} \begin{minipage}[t]{0.45\columnwidth} \includegraphics[width=\columnwidth]{Yamanaka/PRL28_1597_1972_Fig1.pdf} \caption{Schematic view of the experimental apparatus at BNL that was used to detect $K_L \to \pi^0\pi^0$ decays. Photon detectors were placed in the beam direction ~\cite{banner_prl28_1972}. 
{\footnotesize \textcircled{c} 2014 The American Physical Society.} } \label{fig:banner_prl28_1597_1972_Fig1} \end{minipage} \end{figure} The same group then moved to BNL to measure \(|\eta_{00}|/|\eta_\pm|\). The detector geometry was changed to have a higher acceptance for a higher mean $K_L$ momentum (6 GeV/c), and to detect both $\pi^+\pi^-$ and $\pi^0\pi^0$ modes. As shown in Fig.~\ref{fig:banner_prl28_1597_1972_Fig1}, a spectrometer and ``$\gamma$ counters'' were located in the beam-forward direction. Still it required one of the four photons to be converted in a 0.1-$X_0$-thick converter, and the momenta of converted electron pairs to be measured. The $\pi^0\pi^0$ events were reconstructed basically with the same technique as the previous experiment, with the momentum of one photon, along with conversion positions, but not the energies, of other three photons. The experiment observed 124 $\pm$ 11 $K_L \to \pi^0\pi^0$ events with 3 $\pm$ 3 $K_L \to \pi^0\pi^0\pi^0$ background events~\cite{banner_prl28_1972}. \subsubsection{Calorimeters} There was also an attempt to measure the energies and directions of \emph{all} four photons with limited accuracies. An experiment at CERN~\cite{gaillard_nuovo_59a_1969} used two spark chamber systems with Al and brass plates, with a total thickness of 11.6 $X_0$, as shown in Fig.~\ref{fig:gaillard_nuovo_59a_453_1969_Fig2}. The number of sparks gave an energy resolution of 25\% for 500 MeV electrons. The experiment observed about 200 $K_L \to \pi^0\pi^0$ events, and gave \(BR(K_L \to \pi^0\pi^0) = (2.5 \pm 0.8) \times 10^{-3}\). \begin{figure}[!ht] \centering \begin{minipage}[t]{0.42\columnwidth} \includegraphics[width=\columnwidth]{Yamanaka/NuovoCimentoA_59_453_1969_scanned_Fig2.pdf} \caption{Plan view of the experiment apparatus at CERN (taken from~\cite{gaillard_nuovo_59a_1969}). Two spark chambers systems with radiators were used to measure conversion points, initial direction, and shower energy of each photon from $K_L \to \pi^0\pi^0$. {\footnotesize With kind permission of Springer Science+Business Media.} } \label{fig:gaillard_nuovo_59a_453_1969_Fig2} \end{minipage} \hspace{0.01\columnwidth} \begin{minipage}[t]{0.5\columnwidth} \includegraphics[width=\columnwidth]{Yamanaka/PLB40_141_1972_scanned_Fig1_right.pdf} \caption{Plan view of the experiment apparatus at CERN using a lead glass array to measure the energy of photons from $K_L \to \pi^0\pi^0$. Spark chambers interleaved with converter foils were used to measure the conversion points and directions of two photons from the decay. {\footnotesize Reprinted from}~\cite{holder_plb40_1972}. {\footnotesize Copyright 2014, with permission from Elsevier.} } \label{fig:holder_plb40_141_1972_Fig1} \end{minipage} \end{figure} Another experiment at CERN~\cite{holder_plb40_1972} introduced a calorimeter consisting of 61 hexagonal lead-glass modules (13 $X_0$ long) to measure the energies of \emph{all four} photons, as shown in Fig.~\ref{fig:holder_plb40_141_1972_Fig1}. The energy resolution was 3.3\%/$\sqrt{E_\gamma \mathrm{(GeV)}}$. The experiment also had spark chambers with lead foils to measure the directions of at least two photons to reconstruct the decay vertex. The pulse heights from the lead-glass and spark coordinates read out via ferrite cores were written on tape. The experiment collected 167 $K_L \to \pi^0\pi^0$ events, and gave \(|\eta_{00}/\eta_\pm| = 1.00 \pm 0.06\). \subsection{Hard Wall} By 1973, the number of $K_L \to \pi^+\pi^-$ events reached 4200~\cite{messner_prl30_1973}. 
However, the number of $K_L \to \pi^0\pi^0$ events was still about 200. The small statistics of $K_L \to \pi^0\pi^0$ events limited the accuracy of $Re(\epoe)$; the best results were \(Re(\epoe) = -0.016 \pm 0.040\)~\cite{banner_prl28_1972}, and \(Re(\epoe) = 0.00 \pm 0.01\)~\cite{holder_plb40_1972}, which were both consistent with 0. Experiments hit a hard wall. In 1976, in his beautiful review, Kleinknecht wrote~\cite{kleinknecht_1976} \begin{quote} It is not easy to improve substantially the experimental precision. A decision between superweak and milliweak models of $CP$\ violation will therefore probably have to come from other experimental information outside the $K^0$ system. \end{quote} Considering the difficulties that they had in the mid-1970s, such a view is not surprising. What did kaon experiments do then? There were two major streams. One was to measure the charge asymmetry of semi-leptonic $K_L \to \pi e \nu$ and $K_L \to \pi \mu \nu$ decays. The charge asymmetry, $\delta = (N^+ - N^-) / (N^+ + N^-)$, gives $2Re(\epsilon)$, where $N^\pm$ is the number of observed $K_{\ell 3}$ decays with $\ell^\pm$. These measurements required high statistics and good control of systematic errors. For example, an experiment at CERN~\cite{geweniger_plb48_1974} collected 34M $K_{e3}$ and 15M $K_{\mu 3}$ events, and measured \(\delta_e = (3.65 \pm 0.17)\times 10^{-3}\) and \(\delta_\mu = (3.23 \pm 0.26) \times 10^{-3}\). The other stream was to measure the amplitudes of the regeneration of $K_S$ from $K_L$ interacting with materials. Although it looked irrelevant, this stream led to the measurements of $Re(\epoe)$ later. \subsection{Standard Model Prediction on the $\epsilon'/\epsilon$} In the 1970s, there were important developments on the theoretical side. First, Kobayashi and Maskawa pointed out~\cite{km_1974} that the phase which naturally arises in the mixing between 3 generations of quarks can explain the $CP$\ violation in the \(K^0 - \overline{K^0}\) transition via a box diagram shown in Fig.~\ref{fig:k2pidiagrams}(a). Second, in addition to a standard tree diagram shown in Fig.~\ref{fig:k2pidiagrams}(b), a so-called penguin diagram for the $K_L \to \pi^+\pi^-$ decay shown in Fig.~\ref{fig:k2pidiagrams}(c) was introduced by Ellis {\it et al.}~\cite{ellis} and Shifman {\it et al.}~\cite{vainshtein_jetplett_22_1975, shifman_npb_120_1977}. The complex phase in the penguin diagram can violate $CP$\ in the decay process. Gilman and Wise predicted the value of $Re(\epoe)$ to be \((3 - 5) \times 10^{-3}\) with the Kobayashi-Maskawa model~\cite{gilman_prd20_1979}, whereas the superweak model predicted it to be 0. To measure the $Re(\epoe)$ with an accuracy of \(1 \times 10^{-3}\), 30k \(K_L \to \pi^0\pi^0\) events are needed, and systematic errors should be controlled to a 0.1\% level. Although the predicted value was 10 times smaller than the values predicted by some other models, it gave a clear goal for experiments. \begin{figure}[!ht] \centering \subfigure[]{ \includegraphics[width=0.3\linewidth]{Yamanaka/box.pdf} } \hspace{5mm} \subfigure[]{ \includegraphics[width=0.24\linewidth]{Yamanaka/k2pi_tree.pdf} } \hspace{5mm} \subfigure[]{ \includegraphics[width=0.27\linewidth]{Yamanaka/k2pi_penguin.pdf} } \caption{(a): Box diagram that introduces indirect $CP$\ violation in $K^0 - \overline{K}^0$ mixing. (b): Tree diagram for \(K \to \pi\pi\).
(c): Penguin diagram that can generate direct $CP$\ violation.} \label{fig:k2pidiagrams} \end{figure} \subsection{Precision Experiments} \subsubsection{Early 1980s} In the early 1980s, a new generation of experiments started at BNL~\cite{black_prl54_1985} and Fermilab (FNAL E617)~\cite{bernstein_prl54_1985}. Both experiments used dipole magnets and chambers (MWPCs for BNL, and drift chambers for Fermilab) to analyze the momentum of $\pi^+\pi^-$ tracks, as shown in Fig.~\ref{fig:black_bernstein}. A thin lead sheet was placed upstream of the first chamber to convert one of the photons from the $\pi^0\pi^0$ decays, and to track the produced electron pairs. They both used lead glass arrays to measure the hit positions and energies of photons and electrons. The \(K_L \to \pi e \nu\) events were rejected by comparing the energy deposit in the lead glass array and the track momentum. The \(K_L \to \pi \mu \nu\) events were rejected by detecting muons passing through a steel wall located downstream of the calorimeter. Both experiments inserted carbon blocks in the $K_L$ beam to regenerate $K_S$, but the techniques were different. In the BNL experiment, the regenerator was moved in and out to alternate between $K_S$ and $K_L$ runs. The Fermilab experiment had two $K_L$ beams, and moved the regenerator between the two beams to observe $K_L$ and $K_S$ decays simultaneously, which cancels various systematical effects. Another difference was the proton and kaon momentum; the BNL experiment used 28-GeV protons to produce 7-14 GeV/c kaons, while the Fermilab experiment used 400-GeV protons to produce kaons with the mean momentum around 70--90 GeV/c. The BNL experiment collected 1120 $K_L \to \pi^0\pi^0$ events and gave \(Re(\epoe) = 0.0017 \pm 0.0082\). The Fermilab experiment collected 3150 $K_L \to \pi^0\pi^0$ events and gave \(Re(\epoe) = -0.0046 \pm 0.0058\). Although both results were consistent with 0, it became clear that using higher energy kaons was more advantageous because of the higher acceptance and the better energy resolution for photons. \begin{figure}[!ht] \centering \subfigure[]{ \includegraphics[width=0.35\linewidth]{Yamanaka/PRL54_1628_1985_Fig1.pdf} } \hspace{5mm} \subfigure[]{ \includegraphics[width=0.55\linewidth]{Yamanaka/PRL54_1631_1985_Fig1.pdf} } \caption{(a): Schematics of the $Re(\epoe)$ experimental apparatuses at BNL (taken from \cite{black_prl54_1985}, and (b): Fermilab E617 (taken from \cite{bernstein_prl54_1985}). Fermilab E617 used two beams to observe $K_L$ and $K_S$ decays simultaneously. {\footnotesize \textcircled{c} 2014 The American Physical Society.} } \label{fig:black_bernstein} \end{figure} It is worth noting that the Fermilab E617 was the cornerstone for the Fermilab $\epsilon'/\epsilon$ experiments that followed. The experiment introduced a double beam technique, using a regenerator to observe $K_L$ and $K_S$ decays simultaneously to suppress systematic errors. This double-beam technique was actually inherited from Fermilab E226 and E486 which studied regeneration on electrons~\cite{molzon_prl41_1978} and coherent regeneration amplitudes on various nuclei~\cite{gsponer_prl42_13_1979}, respectively. \subsubsection{Fermilab E731} In the mid-1980s, Fermilab E731 was built. It used 800 GeV protons to produce two $K_L$ beams with the average energy of 70 GeV. An improved \ce{B4C} regenerator alternated between the two beams every spill. 
Four drift chambers were built for a better position resolution, and a lead glass array was used as an electromagnetic calorimeter. Initially, for the $\pi^0\pi^0$ run, a 0.1-$X_0$-thick lead sheet was inserted at the end of a decay volume to convert one of the photons for tracking, just as the past experiments. A dipole magnet located just downstream of the lead sheet split the electron pairs. Its field was tuned such that the downstream spectrometer would focus the pair on to the lead glass calorimeter, as shown in Fig.~\ref{fig:e731_conversion}. However, requiring a photon conversion imposed a limit on the statistics; only 30\% of the $\pi^0\pi^0$ decays had converted electron pairs, and the $\pi^+\pi^-$ and $\pi^0\pi^0$ events had to be collected in separate runs because the lead sheet had to be removed for the $\pi^+\pi^-$ run to minimize multiple scatterings. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.7\columnwidth]{Yamanaka/E731_conversion.pdf} \caption{Plan view of the Fermilab E731 apparatus. Two beams were aligned vertically. Initially, for the $K_L \to \pi^0\pi^0$ decay mode, one of the photons were required to be converted. } \label{fig:e731_conversion} \end{center} \end{figure} After some studies, the experiment decided to break the 30 years of tradition of converting photons. For the last part of the experiment, it ran without the lead sheet, and collected 4 photons without converting them,% \footnote{CERN NA31 experiment triggered on $2\pi^0$ events with a hodoscope in a liquid Ar calorimeter.} and also took $\pi^+\pi^-$ and $\pi^0\pi^0$ modes simultaneously for the first time. This big step was made possible by two technological developments. One was a second-level trigger to count the number of clusters in the calorimeter~\cite{hcf} which reduced the trigger rate by a factor 10 by collecting only 4 and 6 clusters. The other was a FASTBUS-based data acquisition system which reduced the dead time by a factor 10, from 5 ms to 0.5 ms. At the end, Fermilab E731 collected 410 k $K_L \to \pi^0\pi^0$ events and 327 k $K_L \to \pi^+\pi^-$ events, and gave \(Re(\epoe) = (7.4 \pm 5.2 \mathrm{(stat)} \pm 2.9 \mathrm{(syst)}) \times 10^{-4}\)~\cite{e731final}. Compared to E617, it reduced the overall error by a factor 14 with 130 times more $K_L \to \pi^0\pi^0$ events. The Fermilab result was only 1.2$\sigma$ away from 0, and still consistent with 0, whereas its competitor, CERN NA31, gave \(Re(\epoe) = (20 \pm 7) \times 10^{-4}\)~\cite{barr_plb_317_1993} which was 3$\sigma$ away from 0. There were intense arguments between the two experiments. CERN NA31 used a moving production target for $K_S$ runs to make the $K_L$ and $K_S$ decay vertex distributions similar. The experiment claimed that this method made the analysis less dependent on Monte Carlo simulations. Fermilab E731 argued that even with different decay vertex distributions between $K_L$ and regenerated $K_S$, the geometrical acceptances obtained with Monte Carlo simulations can be checked with high-statistics decay modes such as $K_L \to \pi e \nu$ and \(K_L \to \pi^0\pi^0\pi^0\). Also, it argued that it is more important to collect $K_L$ and $K_S$ data simultaneously with the same detector, rather than to take $\pi^+\pi^-$ and $\pi^0\pi^0$ modes simultaneously, because efficiencies of charged and neutral mode detectors have different rate dependences. These arguments, however, could not solve the discrepancy between the two experimental results. 
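The quoted statistical errors follow almost directly from the sizes of the collected samples. Treating the four $\pi\pi$ samples as independent Poisson counts gives the rough estimate (the labels $N^{L,S}_{+-}$ and $N^{L,S}_{00}$ for the numbers of observed decays are introduced here only for this back-of-the-envelope estimate, which ignores backgrounds and acceptance corrections) \begin{equation*} \frac{\sigma_R}{R} \simeq \sqrt{\frac{1}{N^{L}_{+-}} + \frac{1}{N^{S}_{+-}} + \frac{1}{N^{L}_{00}} + \frac{1}{N^{S}_{00}}}, \qquad \sigma\big(Re(\epoe)\big) \simeq \frac{1}{6}\,\frac{\sigma_R}{R}. \end{equation*} Evaluating it with the E731 $K_L$ samples of 410 k $\pi^0\pi^0$ and 327 k $\pi^+\pi^-$ events alone gives $\sigma(Re(\epoe)) \approx 4 \times 10^{-4}$, which is indeed the order of the quoted statistical error.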
At the end, both groups decided to build new experiments, CERN NA48, and Fermilab KTeV-E832, to improve the accuracy and precision by another order of magnitude. In this paper, I will concentrate on KTeV-E832, because CERN NA48 will be covered in detail by M.~Sozzi~\cite{sozzi}. \subsubsection{Fermilab KTeV-E832} Fermilab KTeV built a new beam line with a better collimation scheme to make a cleaner beam with less halo neutrons even with a higher proton intensity. Figure~\ref{fig:ktev_det_csi}(a) shows the plan view of the KTeV detectors. The new regenerator was made of fully active scintillators to suppress backgrounds from non-coherent regenerations. A new large spectrometer magnet with a uniform field was made for the $\pi^+\pi^-$ decays. A new electromagnetic calorimeter shown in Fig.~\ref{fig:ktev_det_csi}(b), covering 1.9 m $\times$ 1.9 m, was built with pure CsI crystals 27 $X_0$ long, to have a better energy resolution. To improve the position resolution, 2.5-cm-square crystals were used in the central 1.2 m $\times$ 1.2 m region, and 5.0-cm-square crystals in the outer region. The scintillation light from the crystals were read out by phototubes and digitized right behind the phototubes every 19 ns to minimize electric noise, record the pulse shape, and to supply hit information for counting the number of clusters. The energy resolution was less than 1\% for most of the energy range, reaching about 0.45\%, which was the best resolution for the calorimeter of this size. A new data acquisition system used a matrix of memories to buffer events during spill% \footnote{% The dead time of the trigger and data acquisition system was only 10 $\mu$s, 1/50 of that of E731. It could accept 8 kbyte events coming at 20 kHz for 20 s spill with a 60 s cycle.} and to distribute the events evenly to 36 120-MIPS CPUs (fast in those days) to reconstruct all the events for online filtering. \begin{figure}[!ht] \centering \subfigure[]{\includegraphics[width=0.6\columnwidth]{Yamanaka/ktev_plan.pdf}} \hspace{5mm} \subfigure[]{\includegraphics[width=0.275\linewidth]{Yamanaka/ktev_csi_96-1126.jpg}} \caption{(a) Plan view of the Fermilab KTeV apparatus (taken from \cite{e832_final_2011}). {\footnotesize \textcircled{c} 2014 The American Physical Society.}\ (b) KTeV CsI electromagnetic calorimeter. } \label{fig:ktev_det_csi} \end{figure} Figure \ref{fig:e832_vtxz} shows the decay vertex distributions for the $\pi^+\pi^-$, $\pi^0\pi^0$, and other high statistics decay modes. The number of $K_L \to \pi^0\pi^0$ events was 6M. The data and Monte Carlo \begin{wrapfigure}{r}{0.52\columnwidth} \includegraphics[width=0.52\columnwidth]{Yamanaka/ktev_vtxz.pdf} \caption{(a) Decay vertex distributions of \(K_L \to \pi^+\pi^-\), \(K_L \to \pi e\nu\), \(K_L \to \pi^0\pi^0\), and \(K_L \to \pi^0\pi^0\pi^0\) decays for the data (dots) and MC (histogram). (b) The data-to-MC ratios are fit to a line (taken from \cite{e832_final_2011}). {\footnotesize \textcircled{c} 2014 The American Physical Society.} } \label{fig:e832_vtxz} \end{wrapfigure} distributions agreed well, and the systematic errors on the $Re(\epoe)$ due to acceptance correction were \(0.57 \times 10^{-4}\) for charged modes and \(0.48 \times 10^{-4}\) for neutral modes. 
The first result based on 20\% of data taken in 1996-1997 runs was \(Re(\epoe) = (28.0 \pm 4.1) \times 10^{-4}\)~\cite{e832_prl83_1999}, 7 $\sigma$ away from 0, and the final result with the full data was \(Re(\epoe) = (19.2 \pm 1.1 \mathrm{(stat)} \pm 1.8 \mathrm{(syst)}) \times 10^{-4} = (19.2 \pm 2.1) \times 10^{-4}\)~\cite{e832_final_2011}. \subsection{Conclusion on the $\epsilon'/\epsilon$} CERN NA48 gave \(Re(\epoe) = (14.7 \pm 2.2) \times 10^{-4}\) based on all data~\cite{cern_final_2002}. The result averaged by PDG is \((16.6 \pm 2.3) \times 10^{-4}\)~\cite{pdg_epoe}, 7.2$\sigma$ away from 0. CERN NA48 and Fermilab KTeV-E832 have both clearly established that the $Re(\epoe)$ is not 0, thereby rejecting the Superweak model, and supported the Kobayashi-Maskawa model. \clearpage \subsection{Looking Back} \begin{figure}[!ht] \centering \includegraphics[width=\columnwidth]{Yamanaka/epoe_history.pdf} \caption{The results of $Re(\epoe)$ vs. published year. The horizontal band shows the PDG average \cite{pdg_epoe}.} \label{fig:epoe_history} \end{figure} \begin{wrapfigure}{r}{0.5\columnwidth} \includegraphics[width=0.5\columnwidth]{Yamanaka/kl2pi0VsError} \caption{The errors on $Re(\epoe)$ are shown as a function of the number of $K_L \to \pi^0\pi^0$ events for the past experiments.} \label{fig:kl2pi0VsError} \end{wrapfigure} Figure \ref{fig:epoe_history} shows the history of results on the $Re(\epoe)$. The plot showing the final results is minuscule in the scale of early experiments. Figure~\ref{fig:kl2pi0VsError} shows the error on the $Re(\epoe)$ as a function of the number $K_L \to \pi^0\pi^0$ events ($N$). The error is clearly proportional to \(1/\sqrt{N}\). This means that the systematic errors were also reduced accordingly with statistical errors. All these improvements in statistics and precision were made not only by the beam power. Beam line design, chamber technology, photon measurement technology, trigger and data acquisition systems, and even magnetic tape technology had to be improved along with it, and behind them were people's innovative ideas, deep thinking, and many years of hard work. \section{Quest for $K \to \pi \nu \overline{\nu}$} With the $Re(\epoe)$ results and the observation of $CP$\ violation in B decays, the Kobayashi-Maskawa model was determined to be the source of $CP$\ violations that had been observed in \emph{laboratories}, and became a solid piece in the standard model. However, the effect of this $CP$\ violation mechanism is still too small to explain the baryon -- antibaryon asymmetry in \emph{the universe}. There should be new physics beyond the standard model\ that violates $CP$. After the establishment of $Re(\epoe) \neq 0$, kaon experiments changed their focus to search for new physics beyond the standard model. To search for a small sign of new physics, there are two important points. First, the background has to be small, and second, it has to be known with a small error. Here, in addition to backgrounds caused by experimental techniques such as misidentifying particles or mis-measurements, background decays caused by the standard model\ itself should also be well known. \subsection{Physics of $K \to \pi \nu \overline{\nu}$} \begin{wrapfigure}{r}{0.3\columnwidth} \includegraphics[width=0.3\columnwidth]{Yamanaka/pinn_penguin.pdf} \caption{Penguin diagram of the $K \to \pi \nu \overline{\nu}$ decay. In standard model, top quark is dominant in the loop. 
New physics particles can enter the loop and give an additional contribution.} \label{fig:pinn_penguin} \end{wrapfigure} The decay modes that have small and well-known branching ratios are $K_L \to \pi^0 \nu \overline{\nu}$ and $K^+ \to \pi^+ \nu \overline{\nu}$. These decay modes proceed through a penguin diagram as shown in Fig.~\ref{fig:pinn_penguin}. In the standard model, the major contribution comes from a diagram with a top quark in the loop. The branching ratios predicted by the standard model\ are small: \(BR(K_L \to \pi^0 \nu \overline{\nu}) = 2.4 \times 10^{-11}\), and \(BR(K^+ \to \pi^+ \nu \overline{\nu}) = 7.8 \times 10^{-11}\)~\cite{brod_prd83_2011}. Also, theoretical uncertainties are only about $2-4$\%. The current errors are dominated by the errors of the known CKM parameters, and those will be reduced by the upcoming B factory experiments. New physics can contribute to these decays by having new physics particles in the loop. It can then change the branching ratios from the values predicted by the standard model. The $K_L \to \pi^0 \nu \overline{\nu}$ decay is sensitive to new physics that breaks $CP$\ symmetry, because $K_L$ is mostly $CP$\ odd, and the $\pi^0\nu\overline{\nu}$ state is $CP$\ even. \subsection{History of $K^+ \to \pi^+ \nu \overline{\nu}$ Experiments} Let us first start with the charged $K^+ \to \pi^+ \nu \overline{\nu}$ decay. The signature of the decay is a single $\pi^+$ coming from a $K^+$ decay. Major backgrounds are the \(K^+ \to \mu^+ \nu\) decay, where the $\mu^+$ is misidentified as a $\pi^+$, and the \(K^+ \to \pi^+ \pi^0\) decay, where the two photons from the $\pi^0$ are missed. The search for $K^+ \to \pi^+ \nu \overline{\nu}$ also has a long history, and dates back to 1970~\cite{klems_prl24_1970}. The $K^+ \to \pi^+ \nu \overline{\nu}$ events were first observed by BNL E787, and its branching ratio was measured by the E787 and E949 experiments. Figure \ref{fig:bnl949}(a) shows the detector of BNL E949. The experiment stopped the $K^+$ beam in a target, and looked for a single $\pi^+$ coming out from the target. Stopping the $K^+$ simplifies the kinematics in reconstruction because the lab frame and the center of mass frame are the same. The momentum of the $\pi^+$ was measured with a solenoid magnet and a central drift chamber. The energy deposit and the range of the $\pi^+$ were measured in a stack of scintillators (range counter) to identify pions. In addition, the \(\pi^+ \to \mu^+ \to e^+\) decay chain was traced by recording waveforms in the range counter to further identify pions.\footnote{This technique was first used in the KEK E10 experiment to search for the $K^+ \to \pi^+ \nu \overline{\nu}$ decay~\cite{asano_plb1107_1981}.} The detector was surrounded by photon veto counters to suppress the background from \(K^+ \to \pi^+ \pi^0\) decays. Figure \ref{fig:bnl949}(b) shows the scatter plot of the energy deposit and the range of $\pi^+$ events observed by BNL E787 and E949. The experiments found 7 events in total, and gave \(BR(K^+ \to \pi^+ \nu \overline{\nu}) = (1.73^{+1.15}_{-1.05}) \times 10^{-10}\)~\cite{bnl949_prd79_2009}. \begin{figure}[!ht] \centering \subfigure[]{ \includegraphics[height=0.32\linewidth]{Yamanaka/PRD79_092004_2009_Fig2.pdf} } \hspace{5mm} \subfigure[]{ \includegraphics[height=0.32\linewidth]{Yamanaka/PRD79_092004_2009_Fig10.pdf} } \caption{(a) Schematic side view of the upper half of the BNL E949 detector.
(b) Range vs kinetic energy of all events passing all other cuts observed by the BNL E787 and E949 experiments~\cite{bnl949_prd79_2009}. {\footnotesize \textcircled{c} 2014 The American Physical Society.} } \label{fig:bnl949} \end{figure} Currently, CERN NA62 is preparing a new experiment to collect 45 $K^+ \to \pi^+ \nu \overline{\nu}$ standard model\ events per year. It uses high energy decay-in-flight $K^+$s to minimize hadron interactions in the beam line, and uses \v{C}erenkov counters to identify pions and kaons. More details of NA62 are covered by M.~Sozzi~\cite{sozzi}. \subsection{History of $K_L \to \pi^0 \nu \overline{\nu}$ Experiments} Let us next move to the neutral $K_L \to \pi^0 \nu \overline{\nu}$ decay. Although the $K_L \to \pi^0 \nu \overline{\nu}$ decay mode is theoretically clean, it is challenging experimentally. One cannot trigger on the incoming $K_L$ because it is neutral, and the only observable particles are the two photons from the $\pi^0$ decay. The initial $K_L$ momentum and its decay vertex are unknown, making the signal selection difficult. In addition, there is a major background from the $K_L \to \pi^0\pi^0$ decay mode if two of the four photons from the decay are missed. The first experimental upper limit on the branching ratio was \(BR(K_L \to \pi^0 \nu \overline{\nu}) < 7.6 \times 10^{-3}\) (90\% CL)~\cite{littenberg}, based on Cronin's old result on $K_L \to \pi^0\pi^0$~\cite{cronin_prl_18_1967}. The first dedicated data was taken by KTeV E799-II, a rare-kaon experiment at Fermilab. It had a one-day special run to look for events with only two photons coming from the $\pi^0$ in the $K_L \to \pi^0 \nu \overline{\nu}$ decay, and gave an upper limit, \(BR(K_L \to \pi^0 \nu \overline{\nu}) < 1.6 \times 10^{-6}\) (90\% CL)~\cite{nakaya_plb_447_1999}. The KTeV E799-II also searched for the decay using the $\pi^0$ Dalitz decay, $\pi^0 \to e^+ e^- \gamma$. Using the $e^+e^-$ tracks from the Dalitz decay, the decay vertex was reconstructed, the $m_{ee\gamma}$ was required to be consistent with the $\pi^0$ mass, and the $\pi^0$ was required to have a transverse momentum $P_T > 0.16$ GeV/c. Based on no observed events, the experiment gave \(BR(K_L \to \pi^0 \nu \overline{\nu}) < 5.9\times 10^{-7}\) (90\% CL)~\cite{kazu_prd61_2000}. Although the Dalitz decay can provide tight kinematical constraints, it has less sensitivity than the experiments with $\pi^0 \to \gamma \gamma$ decays, because the \(BR(\pi^0 \to e^+ e^- \gamma)\) is only 1.2\%, and the acceptance for the $e^+e^-$ pair is small due to its small opening angle. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.77\columnwidth]{Yamanaka/PRD81_072004_2010_Fig5.pdf} \caption{Schematic view of the KEK E391a detector (taken from \cite{e391a}). {\footnotesize \textcircled{c} 2014 The American Physical Society.} } \label{fig:e391a_detector} \end{center} \end{figure} The first dedicated experiment for the $K_L \to \pi^0 \nu \overline{\nu}$ decay was the KEK E391a experiment. A $K_L$ beam with an average momentum of 2 GeV/c was made by bombarding a target with 12 GeV protons. \begin{wrapfigure}{r}{0.45\columnwidth} \includegraphics[width=0.45\columnwidth]{Yamanaka/PRD81_072004_2010_Fig27.pdf} \caption{Scatter plot of the reconstructed $P_T$ vs the $Z$ position of the events that passed all other cuts observed by the KEK E391a experiment (taken from \cite{e391a}).
The solid rectangle indicates the signal region.} \label{fig:e391a_z_pt} \end{wrapfigure} As shown in Fig.~\ref{fig:e391a_detector}, the detector consisted of an electromagnetic calorimeter to detect the two photons from the $\pi^0$, and a hermetic photon veto system surrounding a decay volume to suppress the $K_L \to \pi^0\pi^0$ background. The calorimeter was made of 7-cm-square and 30-cm-long pure CsI crystals stacked inside a cylinder with a 1 m radius. The surfaces of these detectors were covered by plastic scintillators to veto charged particles. All of these detectors were housed inside a vacuum tank. The neutral beam had to be in vacuum to suppress $\pi^0$s produced by neutrons interacting with residual gas, and thus required a beam pipe. If the detectors were located outside the beam pipe, low energy photons could be absorbed in the beam pipe, and the $K_L \to \pi^0\pi^0$ background would increase. The solution was to minimize such dead material by placing most of the detectors inside what was effectively a large ``beam pipe''. The calorimeter had a hole at the center for the beam to pass through. Downstream, another photon veto counter was placed in the beam to veto photons escaping through the hole. Figure \ref{fig:e391a_z_pt} shows the scatter plot of the decay vertex and the transverse momentum ($P_T$) of the $\pi^0$'s. The decay vertex and $P_T$ were reconstructed by assuming that the two photons from a $\pi^0$ originated at the center of the beam area. Based on no events in the signal region, the experiment lowered the upper limit to \(BR(K_L \to \pi^0 \nu \overline{\nu}) < 2.6 \times 10^{-8}\) (90\% CL)~\cite{e391a}. \subsection{J-PARC KOTO Experiment} A new $K_L \to \pi^0 \nu \overline{\nu}$ experiment, called KOTO, is starting up at the J-PARC laboratory in Japan. It utilizes a high intensity 30-GeV proton beam to achieve a sensitivity close to the branching ratio predicted by the standard model~\cite{koto_proposal}. The protons extracted from the Main Ring hit a common target shared by multiple experiments. A neutral beam line is extracted at 16$^\circ$ from the proton beam line. A new pair of collimators were designed to suppress neutrons in the beam halo. The vacuum tank and photon veto detectors used at KEK E391a were moved to J-PARC. The electromagnetic calorimeter was replaced with the pure CsI crystals used for the Fermilab KTeV experiments. With their small cross-sections (2.5-cm-square crystals in the 1.2 m $\times$ 1.2 m region, and 5-cm-square crystals in the outer region), it has a better $\gamma$/neutron identification, and it can suppress backgrounds caused by two photons hitting the calorimeter close together, which is another mechanism of missing a photon. The energy resolution is also improved by their longer length, 50 cm (27$X_0$). The charged veto counter covering the upstream side of the calorimeter was replaced with two layers of plastic scintillators. Each plane is only 3 mm thick to suppress halo neutrons interacting in the counter. The photon veto detector covering the upstream end of the decay volume was replaced with CsI crystals to improve the veto efficiency and to detect beam-halo neutrons. A new photon veto detector was placed in the beam downstream of the calorimeter to veto photons escaping through the beam hole in the calorimeter. The detector was made of modules consisting of a lead converter and an aerogel \v{C}erenkov counter to have a low veto inefficiency ($10^{-3}$) for photons even under full beam intensity.
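KOTO reconstructs the $\pi^0$ in the same way as E391a described above: from the energies and positions of two photon clusters in the calorimeter, assuming that they come from a vertex on the beam axis, the vertex $Z$ is the position at which the two-photon invariant mass equals the $\pi^0$ mass, and $P_T$ then follows from the reconstructed photon momenta. The following short Python sketch is only a rough numerical illustration of this idea with made-up numbers; it is not code or geometry from either experiment.
\begin{verbatim}
import math

M_PI0 = 0.1349768  # pi0 mass in GeV/c^2

def reconstruct_pi0(e1, r1, e2, r2, z_cal):
    """Toy pi0 reconstruction: photons with energies e1, e2 (GeV) hit the
    calorimeter (at z = z_cal, in meters) at transverse positions r1, r2
    (x, y in meters); the decay vertex is assumed to lie on the beam axis.
    Returns (z_vtx, pT)."""
    def mass(z):
        # two-photon invariant mass for an assumed vertex (0, 0, z):
        # m^2 = 2 E1 E2 (1 - cos(theta)), theta = opening angle
        d1 = (r1[0], r1[1], z_cal - z)
        d2 = (r2[0], r2[1], z_cal - z)
        n1 = math.sqrt(sum(c * c for c in d1))
        n2 = math.sqrt(sum(c * c for c in d2))
        cos_t = sum(a * b for a, b in zip(d1, d2)) / (n1 * n2)
        return math.sqrt(max(2.0 * e1 * e2 * (1.0 - cos_t), 0.0))

    # the reconstructed mass grows as the assumed vertex approaches the
    # calorimeter, so bisection in z finds the vertex where mass == M_PI0
    # (the toy numbers below are chosen so that the root is bracketed)
    lo, hi = 0.0, z_cal - 1e-3
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mass(mid) < M_PI0 else (lo, mid)
    z_vtx = 0.5 * (lo + hi)

    # transverse momentum of the reconstructed pi0
    px = py = 0.0
    for e, r in ((e1, r1), (e2, r2)):
        d = (r[0], r[1], z_cal - z_vtx)
        n = math.sqrt(sum(c * c for c in d))
        px += e * d[0] / n
        py += e * d[1] / n
    return z_vtx, math.hypot(px, py)

# made-up example: 0.8 and 0.4 GeV photons, calorimeter 6 m downstream
print(reconstruct_pi0(0.8, (0.25, 0.05), 0.4, (-0.15, -0.10), 6.0))
\end{verbatim}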
The signals from all the detector components are digitized with flash ADCs. They are used to record the waveforms to identify signals in a high-rate environment, and to produce trigger signals based on the digitized information. The KOTO experiment started its first physics run in May 2013. Although the run was terminated after 100 hours of data taking due to a radiation accident in the experimental hall, the collaboration is pursuing the physics analysis intensively.\footnote{The first result was presented at the CKM2014 Conference at Vienna in September 2014.} After the J-PARC Hadron Experimental Facility recovers from the accident, KOTO is planning to install a new photon veto detector, increase the beam intensity, and start high-sensitivity runs. \subsection{Prospects of $K \to \pi \nu \overline{\nu}$ experiments} Figure \ref{fig:kpinn_br} shows the predictions of the branching ratios of $K^+ \to \pi^+ \nu \overline{\nu}$ and $K_L \to \pi^0 \nu \overline{\nu}$ by various theoretical models. In the next few years, the CERN NA62 and J-PARC KOTO experiments will explore this large unexplored region. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.7\columnwidth]{Yamanaka/Krare_BSM.pdf} \caption{Branching ratios of $K_L \to \pi^0 \nu \overline{\nu}$ vs $K^+ \to \pi^+ \nu \overline{\nu}$ predicted by various theoretical models (taken from \cite{mescia}).} \label{fig:kpinn_br} \end{center} \end{figure} \section{Summary} Figure~\ref{fig:k_evolution} shows my view of how the kaon experiments have evolved in the past 50 years. After the first series of $CP$\ violation experiments in the 1960s, kaon experiments moved on to high precision experiments to measure the charge asymmetry in semileptonic decays and $K_S$ regeneration amplitudes. The experimental technologies and knowledge accumulated at that time later became the foundations of the modern $Re(\epoe)$ experiments, which established the non-zero value. The stream of high-statistics experiments has also moved on to rare $K$ decay experiments, which I could not cover in my talk. These two streams have now recombined in the new $K \to \pi \nu \overline{\nu}$ experiments to search for physics beyond the standard model. Now that we are finally entering an unexplored territory, we should be open to any new signs that may appear. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.7\columnwidth]{Yamanaka/k_evolution.pdf} \caption{Evolution of kaon experiments since the discovery of $CP$\ violation.} \label{fig:k_evolution} \end{center} \end{figure} \section{Acknowledgements} I would like to thank the conference organizer for inviting me to give this review talk, and thereby giving me a chance to study the great works of the past, which are the foundations of the current experiments. This work was supported by JSPS KAKENHI Grant Number 23224007.
\section{Introduction} The notion of recursive realizability was introduced by S.~C.~Kleene \cite{klini1945}. It specifies the informal intuitionistic semantics by partial recursive functions \cite{plisko_review}. A natural generalization of recursive realizability is the $V$-realizability for some set of functions $V$, where functions from the set~$V$ are used instead of partial recursive functions. Recently, special cases of $V$-realizability were considered: primitive recursive realizability \cite{dam1994,sal2001_1}, minimal realizability \cite{dam1995}, arithmetical realizability \cite{kon_ar_bl,kon_ar_pr}, hyperarithmetical realizability \cite{kon_plisko_hr}. Intuitionistic Logic is sound with respect to the semantics of recursive realizability. But in general this is not the case for the $V$-realizability \cite{kon_ar_bl,kon_plisko_hr,viter_dis,pak_dis}. Basic Logic was introduced in \cite{visser,ruitenburg}. It is weaker than Intuitionistic Logic. For example, the formula $(\top \to P) \to P$ is not derivable in Basic Logic. The aim of this paper is to prove that Basic Logic is sound with respect to the semantics of $V$-realizability if $V$ satisfies some natural conditions. \section{Definitions} \subsection{$V$-functions} We begin with some notation. Denote by $\mathbb N$ the set of all natural numbers $0, 1, 2, \ldots$ Let $\mathsf c$ be a bijection of $\mathbb N^2$ to $\mathbb N$. Denote by $\mathsf p_1, \mathsf p_2$ the functions of $\mathbb N$ to $\mathbb N$ such that, for all $a, b \in \mathbb N$, $\mathsf p_1(\mathsf c(a, b)) = a$ and $\mathsf p_2(\mathsf c(a, b)) = b$. We omit the brackets in expressions of the form $\mathsf p_1(t'),\ \mathsf p_2(t'')$ and write $\mathsf p_1 t',\ \mathsf p_2 t''$. Suppose $n \ge 1$ and $1 \le i \le n$, denote by $I^i_n$ the function of $\mathbb N^n$ to $\mathbb N$ such that $I^i_n(a_1, \ldots, a_n) = a_i$ for all $a_1, \ldots, a_n \in \mathbb N$. We consider an arbitrary (countable) set $V$ of partial functions with arguments and values from $\mathbb N$. We say that $\varphi$ is a $V$-function if $\varphi\in V$. For every $n \ge 0$, denote by $V_n$ the set of all $n$-ary $V$-functions. Clearly, $V = \bigcup^\infty_{n = 0} V_n$. For every $n \ge 0$, let us fix some numbering of the set $V_n$. This means that we fix some set of indices $\mathsf I_n \subseteq \mathbb N$ and a mapping $e \mapsto \varphi^{V,\,n}_e$ such that $\varphi^{V,\,n}_e$ is an $n$-ary $V$-function whenever $e \in \mathsf I_n$ and every $n$-ary $V$-function is $\varphi^{V,\,n}_e$ for some $e \in \mathsf I_n$. We often write $\varphi^V_e$ instead of $\varphi^{V,\,n}_e$ if there is no confusion. Let $Var = \{x_1, x_2, \ldots\}$ be a countable set of variables. We say that an expression $t$ is a \textit{$V$-term} if $t$ is a natural number or $t \in Var$ or $t$ has the form $\varphi(t_1, \ldots, t_n)$, where $\varphi \in V_n$ and $t_1, \ldots, t_n$ are $V$-terms, for some $n \ge 0$. Any $V$-term without variables is called \textit{closed}. Suppose $e$ is a natural number and $t$ is a closed $V$-term, then the relation ``$e$ is the value of $t$'' is defined inductively by the length of $t$: $e$ is the value of $t$ if $t$ is the natural number $e$; $e$ is the value of $\varphi(t_1, \ldots, t_n)$ if there are natural numbers $e_1, \dots, e_n$ such that $e_1, \dots, e_n$ are the values of $t_1, \ldots, t_n$, $\varphi(e_1, \ldots, e_n)$ is defined, and $e = \varphi(e_1, \ldots, e_n)$. 
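The development above only requires $\mathsf c$ to be some bijection of $\mathbb N^2$ onto $\mathbb N$, with $\mathsf p_1$, $\mathsf p_2$ recovering its components. As a purely illustrative aside (not part of the formal development), the standard Cantor pairing is one concrete choice, and with it the value of a closed $V$-term can be computed bottom-up exactly as in the inductive definition above; the following small Python sketch makes this explicit (all names in it are ours).
\begin{verbatim}
# Illustration only: the Cantor pairing function is one possible choice
# for the bijection c, with p1, p2 recovering the two components.

def c(a, b):
    # Cantor pairing: a bijection from N^2 to N
    return (a + b) * (a + b + 1) // 2 + b

def unpair(n):
    # invert the Cantor pairing by searching along the diagonals
    w = 0
    while (w + 1) * (w + 2) // 2 <= n:
        w += 1
    b = n - w * (w + 1) // 2
    a = w - b
    return a, b

def p1(n):
    return unpair(n)[0]

def p2(n):
    return unpair(n)[1]

# A closed V-term such as c(p1(c(3, 5)), 7) is evaluated bottom-up,
# exactly as in the inductive definition of "e is the value of t".
assert p1(c(3, 5)) == 3 and p2(c(3, 5)) == 5
print(c(p1(c(3, 5)), 7))   # the value of the closed term c(p1(c(3, 5)), 7)
\end{verbatim}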
We say that the value of a closed $V$-term $t$ is defined if there is a natural number $e$ such that $e$ is the value of $t$. It can be easily checked that if the value of a closed $V$-term $t$ is defined, then there exists a unique natural number $e$ such that $e$ is the value of $t$. In this case we denote by $\overline{t}$ the value of $t$. Suppose $t_1, t_2$ are closed $V$-terms, we write $t_1 \simeq t_2$ if either (i) the values of $t_1$ and $t_2$ are not defined, or (ii) the values of $t_1$ and $t_2$ are defined and $\overline{t}_1 = \overline{t}_2$. Let $k_1, \ldots, k_n$ be natural numbers, $x_1, \ldots, x_n$ distinct variables, and $t$ a $V$-term, denote by $[k_1, \ldots, k_n/x_1, \ldots, x_n]\,t$ the result of substituting $k_1, \ldots, k_n$ for all occurrences of $x_1, \ldots, x_n$ in $t$. Suppose $t_1,\ t_2$ are $V$-terms and all variables in $t_1$ and $t_2$ are in a list of distinct variables $x_1, \ldots, x_n$, we write $t_1 \simeq t_2$ if for all natural numbers $k_1, \ldots, k_n$ we have $[k_1, \ldots, k_n/x_1, \ldots, x_n]\,t_1 \simeq [k_1, \ldots, k_n/x_1, \ldots, x_n]\,t_2.$ We assume that the following conditions hold: \begin{itemize} \item[(BF)] $I^i_n$, $\mathsf c$, $\mathsf p_1$, $\mathsf p_2$ are $V$-functions for all $n \ge 1$,\ $1 \le i \le n$; \item[(Cm)] the composition of $V$-functions is a $V$-function and an index of it can be obtained by some $V$-function: for all natural numbers $n, m_1, \ldots, m_n$ there is an $(n+1)$-ary $V$-function $s$ such that $s(e, e_1, \ldots, e_n) \in \mathsf I_m$ and \begin{equation*} \varphi^V_{s(e, e_1, \ldots, e_n)}(x_1, \ldots, x_m) \simeq \varphi^V_e(\varphi^V_{e_1}(x_1, \ldots, x_{m_1}), \ldots, \varphi^V_{e_n}(x_1, \ldots, x_{m_n})) \end{equation*} for all~$e \in \mathsf I_n, e_1 \in \mathsf I_{m_1}, \ldots, e_n \in \mathsf I_{m_n}$, where $m = \max_{1 \le i \le n}{m_i}$; \item[(Cn)] every constant function is a $V$-function and an index of it can be obtained by some $V$-function: there exists a $V$-function $s$ such that, for all natural numbers $k$, we have $s(k) \in \mathsf I_0$ and $\varphi^{V,\,0}_{s(k)} \simeq k$; \item[(Cs)] an index of a ``conditional function'' can be obtained by some $V$-function: for every natural number $n$ there is a $V$-function $s$ such that, for all natural numbers $d$ and $e_1, e_2 \in \mathsf I_{n+1}$, we have $s(e_1, e_2) \in \mathsf I_{n+1}$, \begin{align*} \varphi^V_{s(e_1, e_2)}(x_1, \ldots, x_n, d) \simeq \varphi^V_{e_1}(x_1, \ldots, x_n, d)\ \mbox{ if } \mathsf p_1 d = 0, \\ \varphi^V_{s(e_1, e_2)}(x_1, \ldots, x_n, d) \simeq \varphi^V_{e_2}(x_1, \ldots, x_n, d)\ \mbox{ if } \mathsf p_1 d \not= 0; \end{align*} \end{itemize} For example, if $\mathsf c$, $\mathsf p_1$, $\mathsf p_2$ are recursive (see \S 5.3 in \cite{rodgers}), then the following sets of functions with some numbering satisfy the conditions (BF), (Cm), (Cn), (Cs): \begin{itemize} \item the set of all partial recursive functions; \item the set of all arithmetical functions (see \cite{kon_ar_bl,kon_ar_pr}); \item the set of all hyperarithmetical functions (see \cite{kon_plisko_hr}); \item the set of all $L$-defined functions, where $L$ is an extension of the language of arithmetic (see \cite{kon_Lr_IPC,kon_Lr_IPC_IS}).
\end{itemize} Now we show that the following conditions hold: \begin{itemize} \item[(PV)] Any permutation of variables is available for the $V$-functions: if $p$ is a permutation of the set $\{1, \ldots, n\}$, then there is a $V$-function $s$ such that, for all $e \in \mathsf I_n$, $s(e) \in \mathsf I_n$ and $ \varphi^V_{s(e)}(x_1, \ldots, x_n) \simeq \varphi^V_e(x_{p(1)}, \ldots, x_{p(n)}); $ \item[(DV)] Adding of a dummy variable is available for the $V$-functions: for all natural numbers $n$ there exists a $V$-function $s$ such that, for all $e \in \mathsf I_n$, $s(e) \in \mathsf I_{n+1}$ and $ \varphi^V_{s(e)}(x_1, \ldots, x_n, x_{n+1}) \simeq \varphi^V_e(x_1, \ldots, x_n); $ \item[(SMN)] An analog of the ($s-m-n$)-theorem (Theorem V \S 1.8 in \cite{rodgers}) holds: for all natural numbers $m,\ n$ there exists a $V$-function $s$ such that, for all natural numbers $k_1, \ldots, k_m$ and $e \in \mathsf I_{m+n}$, we have $s(e, k_1, \ldots, k_m) \in \mathsf I_n$ and \begin{equation*} \varphi^V_{s(e, k_1, \ldots, k_m)}(x_1, \ldots, x_n) \simeq \varphi^V_e(x_1, \ldots, x_n, k_1, \ldots, k_m). \end{equation*} \end{itemize} \begin{lemma} (BF), (Cm), (Cn) imply (PV). \end{lemma} \begin{proof} Let $p$ be a permutation of the set $\{1, \ldots, n\}$. Since $x_{p(j)} \simeq I^{p(j)}_n(x_1, \ldots, x_n)$ for all $j = 1, \ldots, n$, we see that, for all $e \in \mathsf I_n$, \begin{equation*} \varphi^V_e(x_{p(1)}, \ldots, x_{p(n)}) \simeq \varphi^V_e(I^{p(1)}_n(x_1, \ldots, x_n), \ldots, I^{p(n)}_n(x_1, \ldots, x_n)). \end{equation*} It follows from (BF) that there are natural numbers $i_1, \ldots, i_n$ such that $i_j$ is an index of $I^{p(j)}_n$ for all $j = 1, \ldots, n$. Using (Cm), we get that there exists a $V$-function $s'$ such that, for all $e \in \mathsf I_n$, \begin{equation*} \varphi^V_{s'(e, i_1, \ldots, i_n)}(x_1, \ldots, x_n) \simeq \varphi^V_e(I^{p(1)}_n(x_1, \ldots, x_n), \ldots, I^{p(n)}_n(x_1, \ldots, x_n)). \end{equation*} Thus for all $e \in \mathsf I_n$ we have \begin{equation}\label{l1_eq_1} \varphi^V_{s'(e, i_1, \ldots, i_n)}(x_1, \ldots, x_n) \simeq \varphi^V_e(x_{p(1)}, \ldots, x_{p(n)}). \end{equation} By (Cn), there are natural numbers $l_1, \ldots, l_n$ such that $\varphi^V_{l_j} \simeq i_j$ for all $j = 1, \ldots, n$. Let $i$ denote an index of $I^1_1$. It is obvious that, for all $e \in \mathsf I_n$, \begin{equation*} s'(e, i_1, \ldots, i_n) \simeq s'(\varphi^V_i(e), \varphi^V_{l_1}, \ldots, \varphi^V_{l_n}). \end{equation*} It follows from (Cm) that there exists a $V$-function $s$ such that \begin{equation*} s(x) \simeq s'(\varphi^V_i(x), \varphi^V_{l_1}, \ldots, \varphi^V_{l_n}). \end{equation*} Thus for all natural numbers $e$ we have \begin{equation}\label{l1_eq_2} s(e) \simeq s'(e, i_1, \ldots, i_n). \end{equation} From \eqref{l1_eq_1}, \eqref{l1_eq_2} it follows that, for all $e \in \mathsf I_n$, $$\varphi^V_{s(e)}(x_1, \ldots, x_n) \simeq \varphi^V_e(x_{p(1)}, \ldots, x_{p(n)}).$$ \end{proof} \begin{lemma} (BF), (Cm), (Cn) imply (DV). \end{lemma} \begin{proof} By (BF), there are natural numbers $i_1, \ldots, i_n$ such that $i_j$ is an index of $I^j_{n+1}$ for all $j = 1, \ldots, n$. It is obvious that, for all $e \in \mathsf I_n$, \begin{equation*} \varphi^V_e(x_1, \ldots, x_n) \simeq \varphi^V_e(I^1_{n+1}(x_1, \ldots, x_n, x_{n+1}), \ldots, I^n_{n+1}(x_1, \ldots, x_n, x_{n+1})). 
\end{equation*} It follows from (Cm) that there exists a $V$-function $s'$ such that, for all $e \in \mathsf I_n$, $s'(e, i_1, \ldots, i_n) \in \mathsf I_{n+1}$ and \begin{equation*} \varphi^V_{s'(e, i_1, \ldots, i_n)}(x_1, \ldots, x_n, x_{n+1}) \simeq \varphi^V_e(I^1_{n+1}(x_1, \ldots, x_n, x_{n+1}), \ldots, I^n_{n+1}(x_1, \ldots, x_n, x_{n+1})). \end{equation*} Thus for all $e \in \mathsf I_n$ we have \begin{equation}\label{l2_eq_1} \varphi^V_{s'(e, i_1, \ldots, i_n)}(x_1, \ldots, x_n, x_{n+1}) \simeq \varphi^V_e(x_1, \ldots, x_n). \end{equation} By (Cn), there are natural numbers $l_1, \ldots, l_n$ such that $\varphi^V_{l_j} \simeq i_j$ for all $j = 1, \ldots, n$. Let $i$ denote an index of $I^1_1$. It is obvious that, for all $e \in \mathsf I_n$, \begin{equation*} s'(e, i_1, \ldots, i_n) \simeq s'(\varphi^V_i(e), \varphi^V_{l_1}, \ldots, \varphi^V_{l_n}). \end{equation*} It follows from (Cm) that there exists a $V$-function $s$ such that \begin{equation*} s(x) \simeq s'(\varphi^V_i(x), \varphi^V_{l_1}, \ldots, \varphi^V_{l_n}). \end{equation*} Thus for all natural numbers $e$ we have \begin{equation}\label{l2_eq_2} s(e) \simeq s'(e, i_1, \ldots, i_n). \end{equation} From \eqref{l2_eq_1}, \eqref{l2_eq_2} it follows that $ \varphi^V_{s(e)}(x_1, \ldots, x_n, x_{n+1}) \simeq \varphi^V_e(x_1, \ldots, x_n) $ for all $e \in \mathsf I_n$. \end{proof} \begin{lemma} (BF), (Cm), (Cn) imply (SMN). \end{lemma} \begin{proof} By (Cn), there is a $V$-function $s'$ such that, for every $k$, we have $s'(k) \in \mathsf I_0$ and $\varphi^V_{s'(k)} \simeq k$. Obviously, for all natural numbers $k_1, \ldots, k_m$ and $e \in \mathsf I_{n+m}$, \begin{equation*} \varphi^V_e(I^1_n({\overline x}), \ldots, I^n_n({\overline x}), \varphi^V_{s'(k_1)}, \ldots, \varphi^V_{s'(k_m)}) \simeq \varphi^V_e({\overline x}, k_1, \ldots, k_m), \end{equation*} where ${\overline x} = x_1, \ldots, x_n$. It follows from (BF) that there are natural numbers $i_1, \ldots, i_n$ such that $i_j$ is an index of $I^j_n$ for all $j = 1, \ldots, n$. It follows from (Cm) that there exists a $V$-function $s''$ such that, for all natural numbers $k_1, \ldots, k_m$ and $e \in \mathsf I_{n+m}$, $s''(e, i_1, \ldots, i_n, s'(k_1), \ldots, s'(k_m)) \in \mathsf I_n$ and \begin{equation*} \varphi^V_{s''(e, i_1, \ldots, i_n, s'(k_1), \ldots, s'(k_m))}({\overline x}) \simeq \varphi^V_e(I^1_n({\overline x}), \ldots, I^n_n({\overline x}), \varphi^V_{s'(k_1)}, \ldots, \varphi^V_{s'(k_m)}), \end{equation*} where ${\overline x} = x_1, \ldots, x_n$. Thus for all natural numbers $k_1, \ldots, k_m$ and $e \in \mathsf I_{n+m}$, \begin{equation}\label{l3_eq_1} \varphi^V_{s''(e, i_1, \ldots, i_n, s'(k_1), \ldots, s'(k_m))}(x_1, \ldots, x_n) \simeq \varphi^V_e(x_1, \ldots, x_n, k_1, \ldots, k_m) \end{equation} For each $j = 1, \ldots, m$ denote by $\psi_j$ the function such that $\psi_j(x, {\overline y}) \simeq s'(I^{j+1}_{m+1}(x, {\overline y})),$ where ${\overline y} = y_1, \ldots, y_m$. It follows from (BF), (Cm) that $\psi_j$ is a $V$-function for all $j = 1, \ldots, m$. By (Cn), there are natural numbers $l_1, \ldots, l_n$ such that $\varphi^V_{l_j} \simeq i_j$ for all $j = 1, \ldots, n$. It follows from (Cm) that there exists a $V$-function $s$ such that $$ s(x, {\overline y}) \simeq s''(I^1_{n+1}(x, {\overline y}), \varphi^V_{l_1}, \ldots, \varphi^V_{l_n}, \psi_1(x, {\overline y}), \ldots, \psi_m(x, {\overline y})), $$ where ${\overline y} = y_1, \ldots, y_m$. 
Since $\psi_j(x, y_1, \ldots, y_m) \simeq s'(y_j)$ for all $j = 1, \ldots, m$ and $\varphi^V_{l_j} \simeq i_j$ for all $j = 1, \ldots, n$, we see that \begin{equation}\label{l3_eq_2} s(x, y_1, \ldots, y_m) \simeq s''(x, i_1, \ldots, i_n, s'(y_1), \ldots, s'(y_m)). \end{equation} From \eqref{l3_eq_1}, \eqref{l3_eq_2} it follows that $$ \varphi^V_{s(e, k_1, \ldots, k_m)}(x_1, \ldots, x_n) \simeq \varphi^V_e(x_1, \ldots, x_n, k_1, \ldots, k_m) $$ for all natural numbers $k_1, \ldots, k_m$ and $e \in \mathsf I_{n+m}$. \end{proof} \subsection{Basic Predicate Calculus} Basic Predicate Calculus ($\ensuremath{\mathsf{BQC}}$) was introduced in \cite{ruitenburg}. \textit{The language of $\ensuremath{\mathsf{BQC}}$} contains a countably infinite set of predicate symbols for each finite arity, a countably infinite set of variables, parentheses, the logical constants $\bot$ (falsehood), $\top$ (truth), the logical connectives $\land$, $\lor$, $\to$ and the quantifiers $\forall$, $\exists$. Suppose $M \subseteq \mathbb N$, denote by $L^M_\ensuremath{\mathsf{BQC}}$ the extension of the language of $\ensuremath{\mathsf{BQC}}$ by individual constants from the set $M$. Thus the language of $\ensuremath{\mathsf{BQC}}$ is a special case of $L^M_\ensuremath{\mathsf{BQC}}$ for $M = \varnothing$. We write $L_\ensuremath{\mathsf{BQC}}$ instead of $L^\varnothing_\ensuremath{\mathsf{BQC}}$. \textit{Terms} of $L^M_\ensuremath{\mathsf{BQC}}$ are constants from $M$ and variables. \textit{Atoms} of $L^M_\ensuremath{\mathsf{BQC}}$ are $\bot,\ \top$, and expressions of the form $P(t_1, \ldots, t_n)$, where $P$ is an $n$-ary predicate symbol and $t_1, \ldots, t_n$ are terms of $L^M_\ensuremath{\mathsf{BQC}}$. \textit{Formulas} of $L^M_\ensuremath{\mathsf{BQC}}$ are built up according to the following grammar: \begin{equation*}\label{grammarLBQC} A,\, B ::= \Phi \mid A \land B \mid A \lor B \mid \forall {\overline x}\,(A \to B) \mid \exists y\, A; \end{equation*} here $\Phi$ is an atom of $L^M_\ensuremath{\mathsf{BQC}}$, ${\overline x}$ is a (possibly empty) list of distinct variables, and $y$ is a variable. We write $A \to B$ instead of $\forall\,(A \to B)$. Terms and formulas of $L^M_\ensuremath{\mathsf{BQC}}$ will be called \textit{$M$-terms} and \textit{$M$-formulas}, for short. At the same time, formulas of $L_\ensuremath{\mathsf{BQC}}$ are said to be simply \textit{formulas}. Free and bound variables are defined in the usual way. An occurrence of a variable $x$ in an $M$-formula $A$ is \textit{free} if it is not in the scope of a quantifier $\exists x$ or $\forall {\overline z}$ in $A$, where $x$ is in ${\overline z}$. An occurrence of a variable in an $M$-formula that is not free is called \textit{bound}. We say that a variable $x$ is a \textit{free variable} (\textit{bound variable}) of an $M$-formula $A$ if there exists a free (bound) occurrence of $x$ in $A$. A sentence of $L^M_\ensuremath{\mathsf{BQC}}$ is a formula of $L^M_\ensuremath{\mathsf{BQC}}$ without free variables. Sentences of $L^M_\ensuremath{\mathsf{BQC}}$ are called \textit{$M$-sentences}, and sentences of $L_\ensuremath{\mathsf{BQC}}$ simply \textit{sentences}, for short. An $M$-term $t$ is called \textit{free} for a variable $x$ in an $M$-formula $A$ if for each variable $y$ in $t$ there is no free occurrence of $x$ in the scope of a quantifier $\exists y$ or $\forall {\overline z}$ for some ${\overline z}$ such that $y$ is in ${\overline z}$.
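To make the shape of $M$-formulas concrete, the following is a small illustrative encoding of the grammar above as a Python datatype (the class names and the encoding are ours and play no role in the formal development); note that implication is not a primitive connective but the special case of $\forall {\overline x}\,(A \to B)$ with an empty ${\overline x}$.
\begin{verbatim}
# Illustrative encoding of the formula grammar of L^M_BQC:
# atoms, conjunctions, disjunctions, guarded universals
# forall x-bar (A -> B), and existentials.

from dataclasses import dataclass
from typing import Tuple, Union

Term = Union[int, str]   # a constant from M (a natural number) or a variable

@dataclass(frozen=True)
class Atom:               # bot, top, or P(t1, ..., tn)
    pred: str             # "bot", "top", or a predicate symbol
    args: Tuple[Term, ...] = ()

@dataclass(frozen=True)
class And:
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class Or:
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class ForallImp:          # forall x-bar (A -> B); x-bar may be empty
    vars: Tuple[str, ...]
    ante: "Formula"
    succ: "Formula"

@dataclass(frozen=True)
class Exists:
    var: str
    body: "Formula"

Formula = Union[Atom, And, Or, ForallImp, Exists]

def imp(a, b):
    # A -> B is shorthand for forall () (A -> B)
    return ForallImp((), a, b)

# Example: forall x (P(x) -> exists y Q(x, y))
example = ForallImp(("x",), Atom("P", ("x",)),
                    Exists("y", Atom("Q", ("x", "y"))))
\end{verbatim}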
Let $t_1, \ldots, t_n$ be $M$-terms, $x_1, \ldots, x_n$ be distinct variables, and $A$ be an $M$-formula, denote by $[t_1, \ldots, t_n/x_1, \ldots, x_n] A$ the result of substituting $t_1, \ldots, t_n$ for all free occurrences of $x_1, \ldots, x_n$ in a formula $A'$ obtained from $A$ by renaming all bound variables in such a way that, for each $i = 1, \ldots, n$, the $M$-term $t_i$ is free for $x_i$ in $A'$. Suppose $A$ is an $M$-formula and all free variables of $A$ are in ${\overline x}$, where ${\overline x}$ is a list of distinct variables. By the statement ``$A({\overline x})$ is an $M$-formula'' we mean the conjunction of statements: ``$A$ is an $M$-formula'', ``${\overline x}$ is a list of distinct variables'', and ``all free variables of $A$ are in ${\overline x}$''. If ${\overline t} = t_1, \ldots, t_n$ is a list of $M$-terms, then put $|{\overline t}| \rightleftharpoons n$. Let $A({\overline x})$ be an $M$-formula and ${\overline t}$ be a list of $M$-terms such that $|{\overline t}| = |{\overline x}|$; then by $A({\overline t})$ we denote $[{\overline t}/ {\overline x}] A$. A \textit{sequent} is an expression of the form $A \Rightarrow B$, where $A$ and $B$ are formulas. The axioms of $\ensuremath{\mathsf{BQC}}$ are: \smallskip A1) $A \Rightarrow A$; A2) $A \Rightarrow \top$; A3) $\bot \Rightarrow A$; A4) $A\land \exists x\,B \Rightarrow \exists x\,(A\land B)$, where $x$ is not free in $A$; A5) $A\land (B\lor C) \Rightarrow (A\land B)\lor (A\land C)$; A6) $\forall \overline x\,(A\to B)\land \forall \overline x\,(B\to C) \Rightarrow \forall\overline x\,(A\to C)$; A7) $\forall \overline x\,(A\to B)\land \forall \overline x\,(A\to C) \Rightarrow \forall \overline x\,(A\to B\land C)$; A8) $\forall \overline x\,(B\to A)\land \forall \overline x\,(C\to A) \Rightarrow \forall \overline x\,(B\lor C\to A)$; A9) $\forall \overline x\,(A \to B) \Rightarrow \forall \overline x\,([{\overline y}/{\overline x}] A \to [{\overline y}/{\overline x}] B)$; A10) $\forall \overline x (A \to B) \Rightarrow\forall {\overline y} (A \to B)$, where no variable in ${\overline y}$ is free in $\forall \overline x (A \to B)$; A11) $\forall \overline x,x\,(B\to A) \Rightarrow \forall \overline x\,(\exists x\, B\to A)$, where $x$ is not free in $A$. \medskip The rules of $\ensuremath{\mathsf{BQC}}$ are: \medskip R1) $\frac{\displaystyle A\Rightarrow B\; B\Rightarrow C}{\displaystyle A\Rightarrow C}$; \medskip R2) $\frac{\displaystyle A\Rightarrow B\; A\Rightarrow C}{\displaystyle A\Rightarrow B\land C}$; \medskip R3) $\frac{\displaystyle A\Rightarrow B\land C}{\displaystyle A\Rightarrow B}$ (a),\; $\frac{\displaystyle A\Rightarrow B\land C}{\displaystyle A\Rightarrow C}$ (b); \medskip R4) $\frac{\displaystyle B\Rightarrow A\; C\Rightarrow A}{\displaystyle B\lor C\Rightarrow A}$; \medskip R5) $\frac{\displaystyle B\lor C\Rightarrow A}{\displaystyle B\Rightarrow A}$ (a),\; $\frac{\displaystyle B\lor C\Rightarrow A}{\displaystyle C\Rightarrow A}$ (b); \medskip R6) $\frac{\displaystyle A \Rightarrow B}{\displaystyle [{\overline y}/{\overline x}] A \Rightarrow [{\overline y}/{\overline x}] B}$; \medskip R7) $\frac{\displaystyle B\Rightarrow A}{\displaystyle \exists x\, B\Rightarrow A}$, where $x$ is not free in $A$; \medskip R8) $\frac{\displaystyle \exists x\,B\Rightarrow A}{\displaystyle B\Rightarrow A}$, where $x$ is not free in $A$; \medskip R9) $\frac{\displaystyle A\land B\Rightarrow C}{\displaystyle A\Rightarrow \forall \overline x(B\to C)}$, where no variable in ${\overline x}$ is free in $A$.
\medskip In the axioms and rules of $\ensuremath{\mathsf{BQC}}$\ $A,\ B,\ C$ are formulas, ${\overline x}$ and ${\overline y}$ are lists of distinct variables such that $|{\overline x}| = |{\overline y}|$, and $x$ is a variable. Given a sequent $S$, we write $\ensuremath{\mathsf{BQC}} \vdash S$ if $S$ is derivable in $\ensuremath{\mathsf{BQC}}$. We say that a formula $A$ is derivable in $\ensuremath{\mathsf{BQC}}$ if $\ensuremath{\mathsf{BQC}} \vdash \top \Rightarrow A$. \subsection{$V$-realizability} In \cite{kon_gr_for_lang_ar,kon_gr_for_lang_ar_IS} we introduced a notion of $V$-realizability for the language of arithmetic. Using methods of \cite{plisko1983,kon_MP,kon_MP_IS}, in this paper we define a notion of absolute $V$-realizability in some domain $M \subseteq \mathbb N$ for the formulas of $L_\ensuremath{\mathsf{BQC}}$. Suppose $M \subseteq \mathbb N$, we call any total function from $M^n$ to $2^\mathbb N$ an \textit{$n$-ary generalized predicate} on $M$, where $2^\mathbb N$ is the set of all subsets of $\mathbb N$. A mapping $f$ is called an \textit{$M$-evaluation} if $f(P)$ is an $n$-ary generalized predicate on $M$ whenever $P$ is an $n$-ary predicate symbol of $L_\ensuremath{\mathsf{BQC}}$. We write $P^f$ instead of $f(P)$. We say that $f$ is an \textit{evaluation} if $f$ is an $M$-evaluation for some $M \subseteq \mathbb N$. \begin{definition} Let $e$ be a natural number, $M$ a subset of $\mathbb N$, $f$ an $M$-evaluation, and $A$ an $M$-sentence. The relation ‘‘$e$ $V$-\textit{realizes} $A$ \textit{on} $f$'' is denoted $e \mathrel{\mathbf{r}^V_f} A$ and is defined by induction on the number of logical connectives and quantifiers in $A$: \begin{itemize} \item there is no $e$ such that $e \mathrel{\mathbf{r}^V_f} \bot$; \smallskip \item $e \mathrel{\mathbf{r}^V_f} \top$ for all $e$; \smallskip \item $e \mathrel{\mathbf{r}^V_f} P(a_1, \ldots, a_n) \rightleftharpoons e \in P^f(a_1, \ldots, a_n)$, where $P$ is an $n$-ary predicate symbol and $a_1, \ldots, a_n \in M$; \smallskip \item $e \mathrel{\mathbf{r}^V_f} (\Phi \land \Psi) \rightleftharpoons$ $\mathsf p_1 e \mathrel{\mathbf{r}^V_f} \Phi$ and $\mathsf p_2 e \mathrel{\mathbf{r}^V_f} \Psi$; \smallskip \item $e \mathrel{\mathbf{r}^V_f} (\Phi \lor \Psi) \rightleftharpoons$ $(\mathsf p_1 e = 0$ and $\mathsf p_2 e \mathrel{\mathbf{r}^V_f} \Phi)$ or $(\mathsf p_1 e = 1$ and $\mathsf p_2 e \mathrel{\mathbf{r}^V_f} \Psi)$; \smallskip \item $e \mathrel{\mathbf{r}^V_f} \exists x \:\Phi(x) \rightleftharpoons \mathsf p_1 e \in M$ and $\mathsf p_2 e \mathrel{\mathbf{r}^V_f} \Phi(\mathsf p_1 e)$; \smallskip \item $e \mathrel{\mathbf{r}^V_f} \forall x_1, \ldots, \forall x_n\,(\Phi(x_1, \ldots, x_n) \to \Psi(x_1, \ldots, x_n)) \rightleftharpoons$ $e \in \mathsf I_{n+1}$ and, for all $s \in \mathbb N$,\ $a_1, \ldots, a_n \in M$, if $s \mathrel{\mathbf{r}^V_f} \Phi(a_1, \ldots, a_n)$, then $\varphi^V_e(a_1, \ldots, a_n, s)$ is defined and $\varphi^V_e(a_1, \ldots, a_n, s)\mathrel{\mathbf{r}^V_f} \Psi(a_1, \ldots, a_n)$. \end{itemize} \end{definition} A sentence $A$ is called \textit{absolutely $V$-realizable over all domains} if there exists a natural number $e$ such that, for all $M \subseteq \mathbb N$, we have $e \mathrel{\mathbf{r}^V_f} A$ whenever $f$ is an $M$-evaluation. We say that a list of distinct variables ${\overline x}$ is \textit{admissible} for a sequent $A \Rightarrow B$ if all free variables of the formulas $A$ and $B$ are in ${\overline x}$. 
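To illustrate the clauses of the definition, the following Python sketch checks the realizability of $M$-sentences built from atoms, $\land$, $\lor$ and $\exists$ over a finite domain $M$, assuming (as in the earlier sketch) the Cantor pairing for $\mathsf c$; the clause for $\forall {\overline x}\,(\Phi \to \Psi)$ is deliberately omitted, since it quantifies over all natural numbers and over indices of $V$-functions and therefore cannot be decided by a finite search. The tuple encoding of sentences and all names are ours.
\begin{verbatim}
# Executable rendering of the realizability clauses for atoms, "and",
# "or" and "exists" over a finite domain M (illustration only).

def p1(n):
    w = 0
    while (w + 1) * (w + 2) // 2 <= n:
        w += 1
    return w - (n - w * (w + 1) // 2)

def p2(n):
    w = 0
    while (w + 1) * (w + 2) // 2 <= n:
        w += 1
    return n - w * (w + 1) // 2

def realizes(e, sentence, M, f):
    """Does e V-realize the M-sentence `sentence` on the M-evaluation f?
    f maps a predicate symbol to a function from tuples over M to sets of
    natural numbers (a generalized predicate)."""
    tag = sentence[0]
    if tag == "bot":
        return False
    if tag == "top":
        return True
    if tag == "atom":                      # ("atom", P, (a1, ..., an))
        _, pred, args = sentence
        return e in f[pred](args)
    if tag == "and":                       # ("and", Phi, Psi)
        return realizes(p1(e), sentence[1], M, f) and \
               realizes(p2(e), sentence[2], M, f)
    if tag == "or":                        # ("or", Phi, Psi)
        if p1(e) == 0:
            return realizes(p2(e), sentence[1], M, f)
        if p1(e) == 1:
            return realizes(p2(e), sentence[2], M, f)
        return False
    if tag == "exists":                    # ("exists", Phi); Phi maps an
        body = sentence[1]                 # element of M to an M-sentence
        return p1(e) in M and realizes(p2(e), body(p1(e)), M, f)
    raise ValueError("unsupported connective: " + tag)

# Example: e realizes  P(2) or Q(2)  iff p1(e) picks a disjunct and p2(e)
# realizes the chosen disjunct on the given evaluation.
M = {0, 1, 2}
f = {"P": lambda args: {7} if args == (2,) else set(),
     "Q": lambda args: set()}
c = lambda a, b: (a + b) * (a + b + 1) // 2 + b
print(realizes(c(0, 7), ("or", ("atom", "P", (2,)), ("atom", "Q", (2,))), M, f))
\end{verbatim}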
By definition, put \begin{equation*} e \rvfx{{\overline x}} A \Rightarrow B \rightleftharpoons e \mathrel{\mathbf{r}^V_f} \forall {\overline x}\:(A \to B); \end{equation*} here $e$ is a natural number, $f$ is an evaluation, $A \Rightarrow B$ is a sequent, and ${\overline x}$ is an admissible list of variables for $A \Rightarrow B$. \begin{lemma}\label{l_rvfx_tr} Let $A \Rightarrow B$ be a sequent, $x_1, \ldots, x_n$ an admissible list of variables for $A \Rightarrow B$, and $p$ a permutation of $\{1, \ldots, n\}$. For all $e \in \mathsf I_{n+1}$ there exists $e' \in \mathsf I_{n+1}$ such that, for every evaluation~$f$, $e \rvfx{x_{p(1)}, \ldots, x_{p(n)}} A \Rightarrow B$ iff $e' \rvfx{x_1, \ldots, x_n} A \Rightarrow B$. \end{lemma} \begin{proof} It follows from (PV) that, for all $e \in \mathsf I_{n+1}$, there exists $e' \in \mathsf I_{n+1}$ such that \begin{equation*} \varphi^V_{e'}(k_1, \ldots, k_n, a) \simeq \varphi^V_e(k_{p(1)}, \ldots, k_{p(n)}, a) \end{equation*} for all natural numbers $k_1, \ldots, k_n, a$. It can be easily checked that, for every evaluation~$f$, we have $e \rvfx{x_{p(1)}, \ldots, x_{p(n)}} A \Rightarrow B$ if and only if $e' \rvfx{x_1, \ldots, x_n} A \Rightarrow B$. \end{proof} \begin{lemma}\label{l_rvfx_ficvar} Let $A \Rightarrow B$ be a sequent, $z_1, \ldots, z_n$ an admissible list of variables for $A \Rightarrow B$, and $u_1, \ldots, u_m$ a list of variables such that the list $z_1, \ldots, z_n, u_1, \ldots, u_m$ is admissible for $A \Rightarrow B$. For all $e \in \mathsf I_{n+1}$ there exists $e' \in \mathsf I_{n+m+1}$ such that, for every evaluation~$f$, $e \rvfx{z_1, \ldots, z_n} A \Rightarrow B$ iff $e' \rvfx{z_1, \ldots, z_n, u_1, \ldots, u_m} A \Rightarrow B$. \end{lemma} \begin{proof} It follows from (DV), (PV) that, for every $e \in \mathsf I_{n+1}$, there exists $e' \in \mathsf I_{n+m+1}$ such that, for all natural numbers $k_1, \ldots, k_{m+n}, a$, we have \begin{equation*} \varphi^V_{e'}(k_1, \ldots, k_n, k_{n+1}, \ldots, k_{m+n}, a) \simeq \varphi^V_e(k_1, \ldots, k_n, a). \end{equation*} It can be easily checked that, for every evaluation~$f$, we have $e \rvfx{z_1, \ldots, z_n} A \Rightarrow B$ if and only if $e' \rvfx{z_1, \ldots, z_n, u_1, \ldots, u_m} A \Rightarrow B$. \end{proof} \begin{lemma}\label{l_rvfx_ficvar_del} Under the conditions of Lemma \ref{l_rvfx_ficvar}, for all $e' \in \mathsf I_{n+m+1}$ there exists $e \in \mathsf I_{n+1}$ such that, for every evaluation~$f$, $e' \rvfx{z_1, \ldots, z_n, u_1, \ldots, u_m} A \Rightarrow B$ if and only if $e \rvfx{z_1, \ldots, z_n} A \Rightarrow B$. \end{lemma} \begin{proof} It follows from (PV), (SMN) that for all $e' \in \mathsf I_{n+m+1}$ there exists $e \in \mathsf I_{n+1}$ such that, for all natural numbers $k_1, \ldots, k_n, a$, \begin{equation*} \varphi^V_e(k_1, \ldots, k_n, a) \simeq \varphi^V_{e'}(k_1, \ldots, k_n, 0, \ldots, 0, a). \end{equation*} It can be easily checked that, for every evaluation~$f$, $e' \rvfx{z_1, \ldots, z_n, u_1, \ldots, u_m} A \Rightarrow B$ if and only if $e \rvfx{z_1, \ldots, z_n} A \Rightarrow B$. \end{proof} \begin{proposition}\label{p_eq} Let $S$ be a sequent, ${\overline x}$ and ${\overline y}$ admissible lists of variables for $S$, $|{\overline x}| = n$ and $|{\overline y}| = m$. For all $e \in \mathsf I_{n+1}$ there exists $e' \in \mathsf I_{m+1}$ such that, for every evaluation~$f$,\ $e \rvfx{{\overline x}} S$ if and only if $e' \rvfx{{\overline y}} S$.
\end{proposition} \begin{proof} Denote by ${\overline z}$ a list of distinct variables such that, for every variable $w$, we have $w$ in ${\overline z}$ if and only if $w$ in ${\overline x}$ and $w$ in ${\overline y}$. Note that ${\overline z}$ is admissible for $S$. Let ${\overline u}$ be a list of distinct variables such that, for every variable $w$, we have $w$ in ${\overline u}$ if and only if $w$ in ${\overline x}$ and $w$ is not in ${\overline y}$. Denote by ${\overline v}$ a list of distinct variables such that, for every variable $w$, we have $w$ in ${\overline v}$ if and only if $w$ in ${\overline y}$ and $w$ is not in ${\overline x}$. Let $e \in \mathsf I_{n+1}$. It follows from Lemmas \ref{l_rvfx_tr}, \ref{l_rvfx_ficvar}, \ref{l_rvfx_ficvar_del} that there are natural numbers $e_1,\ e_2,\ e_3,\ e'$ such that \begin{equation*} e \rvfx{{\overline x}} S \Longleftrightarrow e_1 \rvfx{{\overline z}, {\overline u}} S \Longleftrightarrow e_2 \rvfx{{\overline z}} S \Longleftrightarrow e_3 \rvfx{{\overline z}, {\overline v}} S \Longleftrightarrow e' \rvfx{{\overline y}} S \end{equation*} for all evaluations $f$. \end{proof} \section{Main result} Our main result is the following. \begin{theorem}\label{t_main} If a sequent $S$ is derivable in $\ensuremath{\mathsf{BQC}}$ and ${\overline r} = r_1, \ldots, r_l$ is an admissible list of variables for $S$, then there exists a natural number $e$ such that $e \rvfx{{\overline r}} S$ for all evaluations $f$. \end{theorem} \begin{proof} By induction on derivations of $S$. Suppose $S$ is an axiom of $\ensuremath{\mathsf{BQC}}$. \begin{itemize} \item[A1)] Let $S$ be $A({\overline r}) \Rightarrow A({\overline r}).$ By (BF) there is a natural number $e$ such that, for all natural numbers $k_1, \ldots, k_l, d$, we have \begin{equation}\label{fta1f} \varphi^V_e(k_1, \ldots, k_l, d) \simeq d. \end{equation} Let $\varnothing \not= M \subseteq \mathbb N$ and $f$ be an $M$-evaluation. Suppose \begin{equation}\label{fta1g} d \mathrel{\mathbf{r}^V_f} A(k_1, \ldots, k_l) \end{equation} for some natural numbers $d$ and $k_1, \ldots, k_l \in M$. From \eqref{fta1f}, \eqref{fta1g} it follows that \begin{equation}\label{fta1e} \varphi^V_e(k_1, \ldots, k_l, d) \mathrel{\mathbf{r}^V_f} A(k_1, \ldots, k_l). \end{equation} Thus for all natural numbers $d$ and $k_1, \ldots, k_l \in M$ we have \eqref{fta1e} whenever \eqref{fta1g}. Hence $ e \rvfx{{\overline r}} A({\overline r}) \Rightarrow A({\overline r}). $ \item[A2)] Let $S$ be $A({\overline r}) \Rightarrow \top.$ By (BF) there is a natural number $e$ such that, for all natural numbers $k_1, \ldots, k_l, d$, we have \eqref{fta1f}. Let $f$ be an evaluation. It can be easily checked that $ e \rvfx{{\overline r}} A({\overline r}) \Rightarrow \top. $ \item[A3)] Let $S$ be $\bot \Rightarrow A({\overline r}).$ It can be easily checked that, for every $e \in \mathsf I_{l+1}$, we have $ e \rvfx{{\overline r}} \bot \Rightarrow A({\overline r}) $ for all evaluations $f$. \item[A4)] Let $S$ be $A({\overline r}) \land \exists x\, B(x, {\overline r}) \Rightarrow \exists x\,(A({\overline r}) \land B(x, {\overline r})).$ It follows from (BF), (Cm), (DV), (PV) that there is a natural number $e$ such that, for all natural numbers $k_1, \ldots, k_l, d$, we have \begin{equation}\label{ft_a4_f} \varphi^V_e(k_1, \ldots, k_l, d) \simeq \mathsf c(\mathsf p_1\mathsf p_2 d,\mathsf c(\mathsf p_1 d,\mathsf p_2\mathsf p_2 d)). \end{equation} Let $\varnothing \not= M \subseteq \mathbb N$ and $f$ be an $M$-evaluation. 
Suppose \begin{equation}\label{ft_a4_b} d \mathrel{\mathbf{r}^V_f} A({\overline k}) \land \exists x\,B(x, {\overline k}) \end{equation} for some natural number $d$ and ${\overline k} = k_1, \ldots, k_l \in M$. Let us prove that \begin{equation}\label{ft_a4_e} \varphi^V_e(k_1, \ldots, k_l, d) \mathrel{\mathbf{r}^V_f} \exists x \:(A({\overline k}) \land B(x, {\overline k})). \end{equation} Using \eqref{ft_a4_b}, we get \begin{equation}\label{ft_a4_b1} \mathsf p_1 d \mathrel{\mathbf{r}^V_f} A({\overline k}), \end{equation} \begin{equation}\label{ft_a4_b2} \mathsf p_2 d \mathrel{\mathbf{r}^V_f} \exists x\,B(x, {\overline k}). \end{equation} From \eqref{ft_a4_b2} it follows that \begin{equation}\label{ft_a4_b20} \mathsf p_2\mathsf p_2 d \mathrel{\mathbf{r}^V_f} B(\mathsf p_1\mathsf p_2 d, {\overline k}). \end{equation} Using \eqref{ft_a4_b1} and \eqref{ft_a4_b20}, we obtain \begin{equation}\label{ft_a4_c1} \mathsf c(\mathsf p_1 d,\mathsf p_2\mathsf p_2 d) \mathrel{\mathbf{r}^V_f} A({\overline k}) \land B(\mathsf p_1\mathsf p_2 d, {\overline k}). \end{equation} From \eqref{ft_a4_c1} it follows that \begin{equation}\label{ft_a4_cc} \mathsf c(\mathsf p_1\mathsf p_2 d, \mathsf c(\mathsf p_1 d,\mathsf p_2\mathsf p_2 d)) \mathrel{\mathbf{r}^V_f} \exists x \:(A({\overline k}) \land B(x, {\overline k})). \end{equation} Using \eqref{ft_a4_f} and \eqref{ft_a4_cc}, we obtain \eqref{ft_a4_e}. Thus for all natural numbers $d$ and $k_1, \ldots, k_l \in M$ we have \eqref{ft_a4_e} whenever \eqref{ft_a4_b}. Hence $e \rvfx{{\overline r}} S$. \item[A5)] Let $S$ be $A({\overline r}) \land (B({\overline r}) \lor C({\overline r})) \Rightarrow (A({\overline r}) \land B({\overline r}))\lor (A({\overline r}) \land C({\overline r})).$ By (BF), (Cm), (DV), and (PV), there is a natural number $e$ such that, for all natural numbers $k_1, \ldots, k_l, d$, we have \begin{equation}\label{fta5f} \varphi^V_e(k_1, \ldots, k_l, d) \simeq \mathsf{c(p_1p_2}d, \mathsf{c(p_1}d, \mathsf{p_2p_2}d)). \end{equation} Let $\varnothing \not= M \subseteq \mathbb N$ and $f$ be an $M$-evaluation. Suppose \begin{equation}\label{fta5r} d \mathrel{\mathbf{r}^V_f} A({\overline k}) \land (B({\overline k}) \lor C({\overline k})) \end{equation} for some natural number $d$ and ${\overline k} = k_1, \ldots, k_l \in M$. Let us prove that \begin{equation}\label{fta5e} \varphi^V_e(k_1, \ldots, k_l, d) \mathrel{\mathbf{r}^V_f} (A({\overline k}) \land B({\overline k})) \lor (A({\overline k}) \land C({\overline k})). \end{equation} From \eqref{fta5r} it follows that \begin{equation}\label{fta5r1} \mathsf p_1 d \mathrel{\mathbf{r}^V_f} A({\overline k}), \end{equation} \begin{equation}\label{fta5r2} \mathsf p_2 d \mathrel{\mathbf{r}^V_f} (B({\overline k}) \lor C({\overline k})). \end{equation} Using \eqref{fta5r2}, we have \begin{equation}\label{fta5r20} (\mathsf p_1 \mathsf p_2 d = 0 \text{ and } \mathsf p_2 \mathsf p_2 d \mathrel{\mathbf{r}^V_f} B({\overline k})) \text{ or } (\mathsf p_1 \mathsf p_2 d = 1 \text{ and } \mathsf p_2 \mathsf p_2 d \mathrel{\mathbf{r}^V_f} C({\overline k})). \end{equation} Using \eqref{fta5r1} and \eqref{fta5r20}, we obtain \begin{equation}\label{fta5r20c1} \mathsf p_1 \mathsf p_2 d = 0 \land \mathsf c(\mathsf p_1 d, \mathsf p_2 \mathsf p_2 d) \mathrel{\mathbf{r}^V_f} (A({\overline k}) \land B({\overline k})) \end{equation} or \begin{equation}\label{fta5r20c2} \mathsf p_1 \mathsf p_2 d = 1 \land \mathsf c(\mathsf p_1 d, \mathsf p_2 \mathsf p_2 d) \mathrel{\mathbf{r}^V_f} (A({\overline k}) \land C({\overline k})). 
\end{equation} Hence \begin{equation}\label{fta5r20cc} \mathsf{c(p_1p_2}d, \mathsf{c(p_1}d, \mathsf{p_2p_2}d)) \mathrel{\mathbf{r}^V_f} (A({\overline k}) \land B({\overline k})) \lor (A({\overline k}) \land C({\overline k})). \end{equation} Using \eqref{fta5f} and \eqref{fta5r20cc}, we obtain \eqref{fta5e}. Thus for all natural numbers $d$ and $k_1, \ldots, k_l \in M$ we have \eqref{fta5e} whenever \eqref{fta5r}. Hence $e \rvfx{{\overline r}} S$. \item[A6)] Let $S$ be $$\forall {\overline x}\,(A({\overline x}, {\overline r})\to B({\overline x}, {\overline r})) \land \forall {\overline x}\,(B({\overline x}, {\overline r})\to C({\overline x}, {\overline r})) \Rightarrow \forall{\overline x}\,(A({\overline x}, {\overline r})\to C({\overline x}, {\overline r}))$$ and $|{\overline x}| = n$. It follows from (Cm), (BF), (PV), and (SMN) that there exists a $V$-function $\mathsf k$ such that, for all $b,\ c \in \mathsf I_{n+1}$, we have $\mathsf k(b, c) \in \mathsf I_{n+1}$ and \begin{equation}\label{fta6f} \varphi^V_{\mathsf{k}(b,c)}(m_1, \ldots, m_n, a) \simeq \varphi^V_c(m_1, \ldots, m_n, \varphi^V_b(m_1, \ldots, m_n, a)) \end{equation} for all natural numbers $m_1, \ldots, m_n, a$. By (Cm), (BF), (DV), and (PV), there is a natural number $e$ such that, for all natural numbers $k_1, \ldots, k_l, d$, we have \begin{equation}\label{fta6n} \varphi^V_e(k_1, \ldots, k_l, d) \simeq \mathsf k(\mathsf p_1 d, \mathsf p_2 d). \end{equation} Let $\varnothing \not= M \subseteq \mathbb N$ and $f$ be an $M$-evaluation. Suppose \begin{equation}\label{fta6r} d \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(A({\overline x}, {\overline k}) \to B({\overline x}, {\overline k}))\land \forall {\overline x}\,(B({\overline x}, {\overline k})\to C({\overline x}, {\overline k})) \end{equation} for some natural number $d$ and ${\overline k} = k_1, \ldots, k_l \in M$. Let us prove that \begin{equation}\label{fta6e} \varphi^V_e(k_1, \ldots, k_l, d) \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(A({\overline x}, {\overline k})\to C({\overline x}, {\overline k})). \end{equation} From \eqref{fta6r} it follows that \begin{equation}\label{fta6r1} \mathsf p_1 d \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(A({\overline x}, {\overline k}) \to B({\overline x}, {\overline k})), \end{equation} \begin{equation}\label{fta6r2} \mathsf p_2 d \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(B({\overline x}, {\overline k}) \to C({\overline x}, {\overline k})). \end{equation} Suppose \begin{equation}\label{fta6g} a \mathrel{\mathbf{r}^V_f} A({\overline m}, {\overline k}) \end{equation} for some natural number $a$ and ${\overline m} = m_1, \ldots, m_n \in M$. Using \eqref{fta6r1}, \eqref{fta6g}, we obtain \begin{equation}\label{fta6c} \varphi^V_{\mathsf p_1 d}(m_1, \ldots, m_n, a) \mathrel{\mathbf{r}^V_f} B({\overline m}, {\overline k}). \end{equation} From \eqref{fta6c}, \eqref{fta6r2} it follows that \begin{equation}\label{fta6c0} \varphi^V_{\mathsf p_2 d}(m_1, \ldots, m_n, \varphi^V_{\mathsf p_1 d}(m_1, \ldots, m_n, a)) \mathrel{\mathbf{r}^V_f} C({\overline m}, {\overline k}). \end{equation} Using \eqref{fta6f} and \eqref{fta6c0}, we get \begin{equation}\label{fta6c00} \varphi^V_{\mathsf k(\mathsf p_1 d, \mathsf p_2 d)}(m_1, \ldots, m_n, a) \mathrel{\mathbf{r}^V_f} C({\overline m}, {\overline k}). \end{equation} Thus for all natural numbers $a$ and $m_1, \ldots, m_n \in M$ we have \eqref{fta6c00} whenever \eqref{fta6g}.
Hence \begin{equation}\label{fta6s} \mathsf k(\mathsf p_1 d, \mathsf p_2 d) \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(A({\overline x}, {\overline k})\to C({\overline x}, {\overline k})). \end{equation} Using \eqref{fta6n} and \eqref{fta6s}, we obtain \eqref{fta6e}. Thus for all natural numbers $d$ and $k_1, \ldots, k_l \in M$ it follows from \eqref{fta6r} that \eqref{fta6e}. Hence $e \rvfx{{\overline r}} S$. \item[A7)] Let $S$ be $$\forall \overline x\,(A({\overline x}, {\overline r})\to B({\overline x}, {\overline r}))\land \forall \overline x\,(A({\overline x}, {\overline r})\to C({\overline x}, {\overline r})) \Rightarrow \forall \overline x\,(A({\overline x}, {\overline r})\to B({\overline x}, {\overline r})\land C({\overline x}, {\overline r}))$$ and $|{\overline x}| = n$. It follows from (Cm), (BF), and (SMN) that there exists a $V$-function $\mathsf k$ such that, for all $b,\ c \in \mathsf I_{n+1}$, we have $\mathsf k(b, c) \in \mathsf I_{n+1}$ and \begin{equation}\label{fta7f} \varphi^V_{\mathsf{k}(b,c)}(m_1, \ldots, m_n, a) \simeq \mathsf{c}(\varphi^V_b(m_1, \ldots, m_n, a), \varphi^V_c(m_1, \ldots, m_n, a)) \end{equation} for all natural numbers $m_1, \ldots, m_n, a$. By (Cm), (BF), (DV), and (PV) there is a natural number $e$ such that, for all natural numbers $k_1, \ldots, k_l, d$, we have \begin{equation}\label{fta7n} \varphi^V_e(k_1, \ldots, k_l, d) \simeq \mathsf k(\mathsf p_1 d, \mathsf p_2 d). \end{equation} Let $\varnothing \not= M \subseteq \mathbb N$ and $f$ be an $M$-evaluation. Suppose for some natural numbers $d$ and $k_1, \ldots, k_l \in M$, \begin{equation}\label{fta7r} d \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(A({\overline x}, {\overline k}) \to B({\overline x}, {\overline k}))\land \forall {\overline x}\,(A({\overline x}, {\overline k})\to C({\overline x}, {\overline k})), \end{equation} where ${\overline k} = k_1, \ldots, k_l$. Let us prove that \begin{equation}\label{fta7e} \varphi^V_e(k_1, \ldots, k_l, d) \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(A({\overline x}, {\overline k})\to B({\overline x}, {\overline k}) \land C({\overline x}, {\overline k})). \end{equation} From \eqref{fta7r} it follows that \begin{equation}\label{fta7r1} \mathsf p_1 d \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(A({\overline x}, {\overline k}) \to B({\overline x}, {\overline k})), \end{equation} \begin{equation}\label{fta7r2} \mathsf p_2 d \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(A({\overline x}, {\overline k}) \to C({\overline x}, {\overline k})). \end{equation} Suppose for some natural numbers $a$ and $m_1, \ldots, m_n \in M$, \begin{equation}\label{fta7g} a \mathrel{\mathbf{r}^V_f} A({\overline m}, {\overline k}), \end{equation} where ${\overline m} = m_1, \ldots, m_n$. From \eqref{fta7r1}, \eqref{fta7g} it follows that \begin{equation}\label{fta7c1} \varphi^V_{\mathsf p_1 d}(m_1, \ldots, m_n, a) \mathrel{\mathbf{r}^V_f} B({\overline m}, {\overline k}), \end{equation} Using \eqref{fta7r2} and \eqref{fta7g}, we get \begin{equation}\label{fta7c2} \varphi^V_{\mathsf p_2 d}(m_1, \ldots, m_n, a) \mathrel{\mathbf{r}^V_f} C({\overline m}, {\overline k}). \end{equation} From \eqref{fta7c1}, \eqref{fta7c2} it follows that \begin{equation}\label{fta7u} \mathsf c(\varphi^V_{\mathsf p_1 d}(m_1, \ldots, m_n, a), \varphi^V_{\mathsf p_2 d}(m_1, \ldots, m_n, a)) \mathrel{\mathbf{r}^V_f} B({\overline m}, {\overline k}) \land C({\overline m}, {\overline k}). 
\end{equation} Using \eqref{fta7f} and \eqref{fta7u}, we obtain \begin{equation}\label{fta7uf} \varphi^V_{\mathsf k(\mathsf p_1 d, \mathsf p_2 d)}(m_1, \ldots, m_n, a) \mathrel{\mathbf{r}^V_f} B({\overline m}, {\overline k}) \land C({\overline m}, {\overline k}). \end{equation} Thus for all natural numbers $a$ and $m_1, \ldots, m_n \in M$ we have \eqref{fta7uf} whenever \eqref{fta7g}. Hence \begin{equation}\label{fta7s} \mathsf k(\mathsf p_1 d, \mathsf p_2 d) \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(A({\overline x}, {\overline k})\to B({\overline x}, {\overline k}) \land C({\overline x}, {\overline k})). \end{equation} From \eqref{fta7n}, \eqref{fta7s} it follows that \eqref{fta7e}. Thus for all natural numbers $d$ and $k_1, \ldots, k_l \in M$ it follows from \eqref{fta7r} that \eqref{fta7e}. Hence $ e \rvfx{{\overline r}} S. $ \item[A8)] Let $S$ be $$\forall \overline x\,(B({\overline x}, {\overline r})\to A({\overline x}, {\overline r}))\land \forall \overline x\,(C({\overline x}, {\overline r})\to A({\overline x}, {\overline r})) \Rightarrow \forall \overline x\,(B({\overline x}, {\overline r})\lor C({\overline x}, {\overline r})\to A({\overline x}, {\overline r}))$$ and $|{\overline x}| = n$. It follows from (Cs), (BF), (Cm), and (SMN) that there exists a $V$-function $\mathsf k$ such that if $b, c \in \mathsf I_{n+1}$, then $\mathsf k(b, c) \in \mathsf I_{n+1}$ and for all natural numbers $m_1, \ldots, m_n, a$ we have \begin{align}\label{fta8f_if} \varphi^V_{\mathsf{k}(b, c)}(m_1, \ldots, m_n, a) &\simeq \varphi^V_b(m_1, \ldots, m_n, \mathsf{p}_2 a)\ \mbox{ if } \mathsf p_1 a = 0, \\ \label{fta8f_notif} \varphi^V_{\mathsf{k}(b, c)}(m_1, \ldots, m_n, a) &\simeq \varphi^V_c(m_1, \ldots, m_n, \mathsf{p}_2 a)\ \mbox{ if } \mathsf p_1 a \not= 0. \end{align} By (Cm), (BF), (DV), and (PV) there is a natural number $e$ such that, for all natural numbers $k_1, \ldots, k_l, d$, we have \begin{equation}\label{fta8n} \varphi^V_e(k_1, \ldots, k_l, d) \simeq \mathsf k(\mathsf p_1 d, \mathsf p_2 d). \end{equation} Let $\varnothing \not= M \subseteq \mathbb N$ and $f$ be an $M$-evaluation. Suppose for some natural numbers $d$ and $k_1, \ldots, k_l \in M$, \begin{equation}\label{fta8r} d \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(B({\overline x}, {\overline k}) \to A({\overline x}, {\overline k}))\land \forall {\overline x}\,(C({\overline x}, {\overline k})\to A({\overline x}, {\overline k})), \end{equation} where ${\overline k} = k_1, \ldots, k_l$. Let us prove that \begin{equation}\label{fta8e} \varphi^V_e(k_1, \ldots, k_l, d) \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(B({\overline x}, {\overline k}) \lor C({\overline x}, {\overline k}) \to A({\overline x}, {\overline k})). \end{equation} From \eqref{fta8r} it follows that \begin{equation}\label{fta8r1} \mathsf p_1 d \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(B({\overline x}, {\overline k}) \to A({\overline x}, {\overline k})), \end{equation} \begin{equation}\label{fta8r2} \mathsf p_2 d \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(C({\overline x}, {\overline k}) \to A({\overline x}, {\overline k})). \end{equation} Suppose for some natural numbers $a$ and $m_1, \ldots, m_n \in M$, \begin{equation}\label{fta8g} a \mathrel{\mathbf{r}^V_f} B({\overline m}, {\overline k}) \lor C({\overline m}, {\overline k}), \end{equation} where ${\overline m} = m_1, \ldots, m_n$. From \eqref{fta8g} it follows that either $\mathsf p_1 a = 0$, or $\mathsf p_1 a = 1$. Let us consider $2$ cases. Case $1$: $\mathsf p_1 a = 0$. 
Then it follows from \eqref{fta8g} that \begin{equation}\label{fta8g1} \mathsf p_2 a \mathrel{\mathbf{r}^V_f} B({\overline m}, {\overline k}). \end{equation} Using \eqref{fta8r1} and \eqref{fta8g1}, we get \begin{equation}\label{fta8c1} \varphi^V_{\mathsf p_1 d}(m_1, \ldots, m_n, \mathsf p_2 a) \mathrel{\mathbf{r}^V_f} A({\overline m}, {\overline k}). \end{equation} From \eqref{fta8f_if}, \eqref{fta8c1} it follows that \begin{equation}\label{fta8f1} \varphi^V_{\mathsf k(\mathsf p_1 d, \mathsf p_2 d)}(m_1, \ldots, m_n, a) \mathrel{\mathbf{r}^V_f} A({\overline m}, {\overline k}). \end{equation} Case $2$: $\mathsf p_1 a = 1$. Then it follows from \eqref{fta8g} that \begin{equation}\label{fta8g2} \mathsf p_2 a \mathrel{\mathbf{r}^V_f} C({\overline m}, {\overline k}). \end{equation} Using \eqref{fta8r2} and \eqref{fta8g2}, we obtain \begin{equation}\label{fta8c2} \varphi^V_{\mathsf p_2 d}(m_1, \ldots, m_n, \mathsf p_2 a) \mathrel{\mathbf{r}^V_f} A({\overline m}, {\overline k}). \end{equation} From \eqref{fta8f_notif}, \eqref{fta8c2} it follows that \eqref{fta8f1}. Thus for all natural numbers $a$ and $m_1, \ldots, m_n \in M$ we have \eqref{fta8f1} whenever \eqref{fta8g}. Hence \begin{equation}\label{fta8s} \mathsf k(\mathsf p_1 d, \mathsf p_2 d) \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(B({\overline x}, {\overline k}) \lor C({\overline x}, {\overline k}) \to A({\overline x}, {\overline k})). \end{equation} From \eqref{fta8n}, \eqref{fta8s} it follows that \eqref{fta8e}. Thus for all natural numbers $d$ and $k_1, \ldots, k_l \in M$ we have \eqref{fta8e} whenever \eqref{fta8r}. Hence $e \rvfx{{\overline r}} S$. \item[A9)] Let $S$ be $\forall \overline x\,(A \to B) \Rightarrow \forall \overline x\,([{\overline z} / {\overline x}]A \to [{\overline z}/ {\overline x}]B),$ where $|{\overline x}| = |{\overline z}| = n$. Any variable in ${\overline z}$ is in ${\overline x}$ or in ${\overline r}$. We will write ${\overline z}({\overline x}, {\overline r})$ instead of ${\overline z}$. Thus $S$ has the form $$\forall \overline x\,(A({\overline x}, {\overline r}) \to B({\overline x}, {\overline r})) \Rightarrow \forall {\overline x}\,(A({\overline z}({\overline x}, {\overline r}), {\overline r}) \to B({\overline z}({\overline x}, {\overline r}), {\overline r})).$$ For all natural numbers ${\overline k} = k_1, \ldots, k_l$ denote by ${\overline z}({\overline x}, {\overline k})$ the result of substituting ${\overline k}$ for ${\overline r}$ in ${\overline z}({\overline x}, {\overline r})$. If ${\overline m} = m_1, \ldots, m_n$ is a list of natural numbers, then denote by ${\overline z}({\overline m}, {\overline k})$ the result of replacing ${\overline x}$ by ${\overline m}$ in ${\overline z}({\overline x}, {\overline k})$. Obviously, ${\overline z}({\overline m}, {\overline k})$ is a list of natural numbers and $|{\overline z}({\overline m}, {\overline k})| = n$. For all $i = 1, \ldots, n$ denote by $\mathsf z_i$ a function such that, for all natural numbers ${\overline m},\ {\overline k}$, $\mathsf z_i({\overline m}, {\overline k})$ is the $i$-th element of ${\overline z}({\overline m}, {\overline k})$. Clearly, any $\mathsf z_i$ is $\mathsf I^j_{n+l}$ for some $j$.
It follows from (Cm), (BF), (DV), (PV), and (SMN) that there exists a $V$-function $\mathsf k$ such that, for all $d \in \mathsf I_{n+1}$, we have $\mathsf k(d) \in \mathsf I_{n+l+1}$ and for all natural numbers $m_1, \ldots, m_n, a, k_1, \ldots, k_l$, \begin{equation}\label{fta9f0} \varphi^V_{\mathsf k(d)}(m_1, \ldots, m_n, a, k_1, \ldots, k_l) \simeq \varphi^V_d(\mathsf z_1({\overline m}, {\overline k}), \ldots, \mathsf z_n({\overline m}, {\overline k}), a), \end{equation} where ${\overline m} = m_1, \ldots, m_n$,\ ${\overline k} = k_1, \ldots, k_l$. Since the list $\mathsf z_1({\overline m}, {\overline k}), \ldots, \mathsf z_n({\overline m}, {\overline k})$ is ${\overline z}({\overline m}, {\overline k})$, we see that \begin{equation}\label{fta9f00} \varphi^V_{\mathsf k(d)}(m_1, \ldots, m_n, a, k_1, \ldots, k_l) \simeq \varphi^V_d({\overline z}({\overline m}, {\overline k}), a). \end{equation} It follows from (SMN) that there exists a $V$-function $\ss$ such that, for all natural numbers $k_1, \ldots, k_l$ and $c \in \mathsf I_{n+l+1}$, we have $\ss(c, k_1, \ldots, k_l) \in \mathsf I_{n+1}$ and \begin{equation}\label{fta9f1} \varphi^V_{\ss(c, k_1, \ldots, k_l)}(m_1, \ldots, m_n, a) \simeq \varphi^V_{c}(m_1, \ldots, m_n, a, k_1, \ldots, k_l) \end{equation} for all natural numbers $m_1, \ldots, m_n, a$. Using \eqref{fta9f00} and \eqref{fta9f1}, we get \begin{equation}\label{fta9f} \varphi^V_{\ss(\mathsf k(d), k_1, \ldots, k_l)}(m_1, \ldots, m_n, a) \simeq \varphi^V_d({\overline z}({\overline m}, {\overline k}), a). \end{equation} It follows from (Cm), (DV), (PV), (BF) that there is a natural number $e$ such that, for all natural numbers $k_1, \ldots, k_l, d$, we have \begin{equation}\label{fta9n} \varphi^V_e(k_1, \ldots, k_l, d) \simeq \ss(\mathsf k(d), k_1, \ldots, k_l) \end{equation} Let $\varnothing \not= M \subseteq \mathbb N$ and $f$ be an $M$-evaluation. Suppose for some natural numbers $d$ and $k_1, \ldots, k_l \in M$, \begin{equation}\label{fta9r} d \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(A({\overline x}, {\overline k}) \to B({\overline x}, {\overline k})), \end{equation} where ${\overline k} = k_1, \ldots, k_l$. Let us prove that \begin{equation}\label{fta9e} \varphi^V_e(k_1, \ldots, k_l, d) \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(A({\overline z}({\overline x}, {\overline k}), {\overline k}) \to B({\overline z}({\overline x}, {\overline k}), {\overline k})). \end{equation} Suppose for some natural numbers $a$ and $m_1, \ldots, m_n \in M$, \begin{equation}\label{fta9g} a \mathrel{\mathbf{r}^V_f} A({\overline z}({\overline m}, {\overline k}), {\overline k}), \end{equation} where ${\overline m} = m_1, \ldots, m_n$. Using \eqref{fta9r} and \eqref{fta9g}, we obtain \begin{equation}\label{fta9c} \varphi^V_d({\overline z}({\overline m}, {\overline k}), a) \mathrel{\mathbf{r}^V_f} B({\overline z}({\overline m}, {\overline k}), {\overline k}). \end{equation} From \eqref{fta9f}, \eqref{fta9c} it follows that \begin{equation}\label{fta9s} \varphi^V_{\ss(\mathsf k(d), k_1, \ldots, k_l)}(m_1, \ldots, m_n, a) \mathrel{\mathbf{r}^V_f} B({\overline z}({\overline m}, {\overline k}), {\overline k}). \end{equation} Thus for all natural numbers $a$ and $m_1, \ldots, m_n \in M$ we have \eqref{fta9s} whenever \eqref{fta9g}. Hence \begin{equation}\label{fta9ss} \ss(\mathsf k(d), k_1, \ldots, k_l) \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(A({\overline z}({\overline x}, {\overline k}), {\overline k}) \to B({\overline z}({\overline x}, {\overline k}), {\overline k})). 
\end{equation} From \eqref{fta9n}, \eqref{fta9ss} it follows that \eqref{fta9e}. Thus for all natural numbers $d$ and $k_1, \ldots, k_l \in M$ we have \eqref{fta9e} whenever \eqref{fta9r}. Hence $e \rvfx{{\overline r}} S$. \item[A10)] Let $S$ be $\forall {\overline x}\,(A \to B) \Rightarrow \forall {\overline y}\,(A \to B),$ where ${\overline x} = x_1, \ldots, x_n$, ${\overline y} = y_1, \ldots, y_p$ and no variable in ${\overline y}$ is free in $\forall {\overline x}\,(A \to B)$. Denote by ${\overline u}({\overline r})$ a list of distinct variables that consists of all free variables of $\forall {\overline x}\,(A \to B)$. For all natural numbers ${\overline k} = k_1, \ldots, k_l$ denote by ${\overline u}({\overline k})$ the result of replacing ${\overline r}$ by ${\overline k}$ in ${\overline u}({\overline r})$. Any variable in ${\overline x}$ is in ${\overline y}$ or in ${\overline r}$. We will write ${\overline x}({\overline y}, {\overline r})$ instead of ${\overline x}$. For all natural numbers ${\overline k} = k_1, \ldots, k_l$ denote by ${\overline x}({\overline y}, {\overline k})$ the result of substituting ${\overline k}$ for ${\overline r}$ in ${\overline x}({\overline y}, {\overline r})$. If ${\overline m} = m_1, \ldots, m_p$ is a list of natural numbers, then denote by ${\overline x}({\overline m}, {\overline k})$ the result of replacing ${\overline y}$ by ${\overline m}$ in ${\overline x}({\overline y}, {\overline k})$. Obviously, ${\overline x}({\overline m}, {\overline k})$ is a list of natural numbers and $|{\overline x}({\overline m}, {\overline k})| = n$. For all $i = 1, \ldots, n$ denote by $\mathsf x_i$ a function such that $\mathsf x_i({\overline m}, {\overline k})$ is the $i$-th element of ${\overline x}({\overline m}, {\overline k})$ for all natural numbers ${\overline m},\ {\overline k}$. Clearly, any $\mathsf x_i$ is $\mathsf I^j_{p+l}$ for some $j$. Thus $S$ has the form $$ \forall {\overline x}\,(A({\overline x}, {\overline u}({\overline r})) \to B({\overline x}, {\overline u}({\overline r}))) \Rightarrow \forall {\overline y}\,(A({\overline x}({\overline y}, {\overline r}), {\overline u}({\overline r})) \to B({\overline x}({\overline y}, {\overline r}), {\overline u}({\overline r}))).$$ It follows from (Cm), (BF), (SMN) that there exists a $V$-function $\mathsf k$ such that, for all $d \in \mathsf I_{n+1}$, we have $\mathsf k(d) \in \mathsf I_{p+l+1}$ and for all natural numbers $m_1, \ldots, m_p, a, k_1, \ldots, k_l$, \begin{equation}\label{fta10f0} \varphi^V_{\mathsf k(d)}(m_1, \ldots, m_p, a, k_1, \ldots, k_l) \simeq \varphi^V_d(\mathsf x_1({\overline m}, {\overline k}), \ldots, \mathsf x_n({\overline m}, {\overline k}), a), \end{equation} where ${\overline m} = m_1, \ldots, m_p$,\ ${\overline k} = k_1, \ldots, k_l$. By (SMN), there is a $V$-function $\ss$ such that, for all natural numbers $k_1, \ldots, k_l$ and $c \in \mathsf I_{p+l+1}$, we have $\ss(c, k_1, \ldots, k_l) \in \mathsf I_{p+1}$ and \begin{equation}\label{fta10f1} \varphi^V_{\ss(c, k_1, \ldots, k_l)}(m_1, \ldots, m_p, a) \simeq \varphi^V_{c}(m_1, \ldots, m_p, a, k_1, \ldots, k_l) \end{equation} for all natural numbers $m_1, \ldots, m_p, a$.
Since the list $\mathsf x_1({\overline m}, {\overline k}), \ldots, \mathsf x_n({\overline m}, {\overline k})$ is ${\overline x}({\overline m}, {\overline k})$, it follows from \eqref{fta10f0}, \eqref{fta10f1} that for all natural numbers $m_1, \ldots, m_p, a, k_1, \ldots, k_l$ and $d \in \mathsf I_{n+1}$, \begin{equation}\label{fta10f} \varphi^V_{\ss(\mathsf k(d), k_1, \ldots, k_l)}(m_1, \ldots, m_p, a) \simeq \varphi^V_d({\overline x}({\overline m}, {\overline k}), a). \end{equation} By (Cm), (DV), (BF), (PV), there is a natural number $e$ such that, for all natural numbers $k_1, \ldots, k_l, d$, we have \begin{equation}\label{fta10n} \varphi^V_e(k_1, \ldots, k_l, d) \simeq \ss(\mathsf k(d), k_1, \ldots, k_l). \end{equation} Let $\varnothing \not= M \subseteq \mathbb N$ and $f$ be an $M$-evaluation. Suppose for some natural numbers $d$ and $k_1, \ldots, k_l \in M$, \begin{equation}\label{fta10r} d \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(A({\overline x}, {\overline u}({\overline k})) \to B({\overline x}, {\overline u}({\overline k}))), \end{equation} where ${\overline k} = k_1, \ldots, k_l$. Let us prove that \begin{equation}\label{fta10e} \varphi^V_e(k_1, \ldots, k_l, d) \mathrel{\mathbf{r}^V_f} \forall {\overline y}\,(A({\overline x}({\overline y}, {\overline k}), {\overline u}({\overline k})) \to B({\overline x}({\overline y}, {\overline k}), {\overline u}({\overline k}))). \end{equation} Suppose for some natural numbers $a$ and $m_1, \ldots, m_p \in M$, \begin{equation}\label{fta10g} a \mathrel{\mathbf{r}^V_f} A({\overline x}({\overline m}, {\overline k}), {\overline u}({\overline k})), \end{equation} where ${\overline m} = m_1, \ldots, m_p$. Using \eqref{fta10r} and \eqref{fta10g}, we get \begin{equation}\label{fta10c} \varphi^V_d({\overline x}({\overline m}, {\overline k}), a) \mathrel{\mathbf{r}^V_f} B({\overline x}({\overline m}, {\overline k}), {\overline u}({\overline k})). \end{equation} From \eqref{fta10f}, \eqref{fta10c} it follows that \begin{equation}\label{fta10s} \varphi^V_{\ss(\mathsf k(d), k_1, \ldots, k_l)}(m_1, \ldots, m_p, a) \mathrel{\mathbf{r}^V_f} B({\overline x}({\overline m}, {\overline k}), {\overline u}({\overline k})). \end{equation} Thus for all natural numbers $a$ and $m_1, \ldots, m_p \in M$ we have \eqref{fta10s} whenever \eqref{fta10g}. Hence \begin{equation}\label{fta10ss} \ss(\mathsf k(d), k_1, \ldots, k_l) \mathrel{\mathbf{r}^V_f} \forall {\overline y}\,(A({\overline x}({\overline y}, {\overline k}), {\overline u}({\overline k})) \to B({\overline x}({\overline y}, {\overline k}), {\overline u}({\overline k}))). \end{equation} From \eqref{fta10n}, \eqref{fta10ss} it follows that \eqref{fta10e}. Thus for all natural numbers $d$ and $k_1, \ldots, k_l \in M$ we have \eqref{fta10e} whenever \eqref{fta10r}. Hence $e \rvfx{{\overline r}} S$. \item[A11)] Let $S$ be $\forall {\overline x}, x\,(B({\overline x}, x, {\overline r}) \to A({\overline x}, {\overline r})) \Rightarrow \forall {\overline x}\,(\exists x\, B({\overline x}, x, {\overline r}) \to A({\overline x}, {\overline r}))$ and $|{\overline x}| = n$. It follows from (Cm), (BF), (DV), (PV) that there is a $V$-function $\mathsf k$ such that, for every $d \in \mathsf I_{n+2}$, we have $\mathsf k(d) \in \mathsf I_{n+1}$ and \begin{equation}\label{fta11f} \varphi^V_{\mathsf k(d)}(m_1, \ldots, m_n, b) \simeq \varphi^V_d(m_1, \ldots, m_n,\mathsf p_1 b,\mathsf p_2 b) \end{equation} for all natural numbers $m_1, \ldots, m_n, b$.
By (DV) and (PV), there is a natural number $e$ such that, for all natural numbers $k_1, \ldots, k_l, d$, \begin{equation}\label{fta11n} \varphi^V_e(k_1, \ldots, k_l, d) \simeq \mathsf k(d). \end{equation} Let $\varnothing \not= M \subseteq \mathbb N$ and $f$ be an $M$-evaluation. Suppose for some natural numbers $d$ and $k_1, \ldots, k_l \in M$, \begin{equation}\label{fta11r} d \mathrel{\mathbf{r}^V_f} \forall {\overline x}, x\,(B({\overline x}, x, {\overline k}) \to A({\overline x}, {\overline k})), \end{equation} where ${\overline k} = k_1, \ldots, k_l$. Let us prove that \begin{equation}\label{fta11e} \varphi^V_e(k_1, \ldots, k_l, d) \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(\exists x\, B({\overline x}, x, {\overline k}) \to A({\overline x}, {\overline k})). \end{equation} Suppose for some natural numbers $b$ and $m_1, \ldots, m_n \in M$, \begin{equation}\label{fta11g} b \mathrel{\mathbf{r}^V_f} \exists x\, B({\overline m}, x, {\overline k}), \end{equation} where ${\overline m} = m_1, \ldots, m_n$. From \eqref{fta11g} it follows that \begin{equation}\label{fta11g1} \mathsf p_2 b \mathrel{\mathbf{r}^V_f} B({\overline m}, \mathsf p_1 b, {\overline k}). \end{equation} Using \eqref{fta11r} and \eqref{fta11g1}, we get \begin{equation}\label{fta11c} \varphi^V_d(m_1, \ldots, m_n, \mathsf p_1 b, \mathsf p_2 b) \mathrel{\mathbf{r}^V_f} A({\overline m}, {\overline k}). \end{equation} From \eqref{fta11f}, \eqref{fta11c} it follows that \begin{equation}\label{fta11s} \varphi^V_{\mathsf k(d)}(m_1, \ldots, m_n, b) \mathrel{\mathbf{r}^V_f} A({\overline m}, {\overline k}). \end{equation} Thus for all natural numbers $b$ and $m_1, \ldots, m_n \in M$ we have \eqref{fta11s} whenever \eqref{fta11g}. Hence \begin{equation}\label{fta11ss} \mathsf k(d) \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(\exists x\, B({\overline x}, x, {\overline k}) \to A({\overline x}, {\overline k})). \end{equation} From \eqref{fta11n}, \eqref{fta11ss} it follows that \eqref{fta11e}. Thus for all natural numbers $d$ and $k_1, \ldots, k_l \in M$ we have \eqref{fta11e} whenever \eqref{fta11r}. Hence $e \rvfx{{\overline r}} S$. \end{itemize} Suppose $S$ is obtained by a rule of $\ensuremath{\mathsf{BQC}}$. \begin{itemize} \item[R1)] Let $S$ be obtained by $\frac{\displaystyle A\Rightarrow B\;\ B\Rightarrow C}{\displaystyle A\Rightarrow C}$ and ${\overline u} = u_1, \ldots, u_p$ be an admissible list of variables for $A\Rightarrow B$, $B\Rightarrow C$, and $A\Rightarrow C$. By the induction hypothesis, there exist natural numbers $a$, $b$ such that \begin{equation}\label{ftr1i1} a \rvfx{{\overline u}} A\Rightarrow B, \end{equation} \begin{equation}\label{ftr1i2} b \rvfx{{\overline u}} B\Rightarrow C \end{equation} for every evaluation~$f$. Using \eqref{ftr1i1} and \eqref{ftr1i2}, we get $a,\ b \in \mathsf I_{p+1}$. It follows from (Cm), (BF) that there is a natural number $c$ such that, for all natural numbers $k_1, \ldots, k_p, d$, we have \begin{equation}\label{ftr1f} \varphi^V_c(k_1, \ldots, k_p, d) \simeq \varphi^V_b(k_1, \ldots, k_p, \varphi^V_a(k_1, \ldots, k_p, d)). \end{equation} Let $\varnothing \not= M \subseteq \mathbb N$ and $f$ be an $M$-evaluation. Suppose for some natural numbers $d$ and $k_1, \ldots, k_p \in M$, \begin{equation}\label{ftr1g} d \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,A, \end{equation} where ${\overline k} = k_1, \ldots, k_p$. 
From \eqref{ftr1i1}, \eqref{ftr1g} it follows that \begin{equation}\label{ftr1c1} \varphi^V_a(k_1, \ldots, k_p, d) \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,B. \end{equation} Using \eqref{ftr1i2} and \eqref{ftr1c1}, we get \begin{equation}\label{ftr1c2} \varphi^V_b(k_1, \ldots, k_p, \varphi^V_a(k_1, \ldots, k_p, d)) \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,C. \end{equation} From \eqref{ftr1f}, \eqref{ftr1c2} it follows that \begin{equation}\label{ftr1c2n} \varphi^V_c(k_1, \ldots, k_p, d) \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,C. \end{equation} Thus for all natural numbers $d$ and $k_1, \ldots, k_p \in M$ we have \eqref{ftr1c2n} whenever \eqref{ftr1g}. Hence $c \rvfx{{\overline u}} A \Rightarrow C.$ Thus $c \rvfx{{\overline u}} S$ for all evaluations $f$. It follows from Proposition \ref{p_eq} that there is a natural number $e$ such that $e \rvfx{{\overline r}} S$ for all evaluations $f$. \item[R2)] Let $S$ be obtained by $\frac{\displaystyle A\Rightarrow B\;\ A\Rightarrow C}{\displaystyle A\Rightarrow B\land C}$ and ${\overline u} = u_1, \ldots, u_p$ be an admissible list of variables for $A\Rightarrow B$, $A\Rightarrow C$, and $A \Rightarrow B \land C$. By the induction hypothesis, there exist natural numbers $b$, $c$ such that, for every evaluation~$f$, \begin{equation}\label{ftr2i1} b \rvfx{{\overline u}} A\Rightarrow B, \end{equation} \begin{equation}\label{ftr2i2} c \rvfx{{\overline u}} A\Rightarrow C. \end{equation} It follows from \eqref{ftr2i1}, \eqref{ftr2i2} that $b,\ c \in \mathsf I_{p+1}$. By (Cm), there is a natural number $a$ such that, for all natural numbers $k_1, \ldots, k_p, d$, \begin{equation}\label{ftr2f} \varphi^V_a(k_1, \ldots, k_p, d) \simeq \mathsf c(\varphi^V_b(k_1, \ldots, k_p, d),\, \varphi^V_c(k_1, \ldots, k_p, d)). \end{equation} Let $\varnothing \not= M \subseteq \mathbb N$ and $f$ be an $M$-evaluation. Suppose for some natural numbers $d$ and $k_1, \ldots, k_p \in M$, \begin{equation}\label{ftr2g} d \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,A, \end{equation} where ${\overline k} = k_1, \ldots, k_p$. From \eqref{ftr2i1}, \eqref{ftr2g} it follows that \begin{equation}\label{ftr2c1} \varphi^V_b(k_1, \ldots, k_p, d) \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,B. \end{equation} Using \eqref{ftr2i2} and \eqref{ftr2g}, we get \begin{equation}\label{ftr2c2} \varphi^V_c(k_1, \ldots, k_p, d) \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,C. \end{equation} From \eqref{ftr2c1}, \eqref{ftr2c2} it follows that \begin{equation}\label{ftr2u} \mathsf c(\varphi^V_b(k_1, \ldots, k_p, d),\, \varphi^V_c(k_1, \ldots, k_p, d)) \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,(B \land C). \end{equation} Using \eqref{ftr2f} and \eqref{ftr2u}, we obtain \begin{equation}\label{ftr2s} \varphi^V_a(k_1, \ldots, k_p, d) \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,(B \land C). \end{equation} Thus for all natural numbers $d$ and $k_1, \ldots, k_p \in M$ we have \eqref{ftr2s} whenever \eqref{ftr2g}. Hence $ a \rvfx{{\overline u}} A \Rightarrow B \land C. $ Thus $a \rvfx{{\overline u}} S$ for all evaluations $f$. It follows from Proposition~\ref{p_eq} that there is a natural number $e$ such that $e \rvfx{{\overline r}} S$ for all evaluations $f$. 
\item[R3) a)] Let $S$ be obtained by $\frac{\displaystyle A\Rightarrow B \land C}{\displaystyle A\Rightarrow B}$ and ${\overline u} = u_1, \ldots, u_p$ be an admissible list of variables for $A\Rightarrow B$ and $A \Rightarrow B \land C$. By the induction hypothesis, there is a natural number $a$ such that, for every evaluation~$f$, we have \begin{equation}\label{ftr3i} a \rvfx{{\overline u}} (A\Rightarrow B \land C). \end{equation} It follows from \eqref{ftr3i} that $a \in \mathsf I_{p+1}$. By (Cm), there is a natural number $b$ such that, for all natural numbers $k_1, \ldots, k_p, d$, \begin{equation}\label{ftr3f1} \varphi^V_b(k_1, \ldots, k_p, d) \simeq \mathsf p_1 \varphi^V_a(k_1, \ldots, k_p, d). \end{equation} Let $\varnothing \not= M \subseteq \mathbb N$ and $f$ be an $M$-evaluation. Suppose for some natural numbers $d$ and $k_1, \ldots, k_p \in M$, \begin{equation}\label{ftr3g} d \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,A, \end{equation} where ${\overline k} = k_1, \ldots, k_p$. From \eqref{ftr3i}, \eqref{ftr3g} it follows that \begin{equation}\label{ftr3c} \varphi^V_a(k_1, \ldots, k_p, d) \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,(B \land C). \end{equation} Using \eqref{ftr3c}, we get \begin{equation}\label{ftr3cp1} \mathsf p_1 \varphi^V_a(k_1, \ldots, k_p, d) \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,B. \end{equation} From \eqref{ftr3f1}, \eqref{ftr3cp1} it follows that \begin{equation}\label{ftr3s1} \varphi^V_b(k_1, \ldots, k_p, d) \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,B. \end{equation} Thus for all natural numbers $d$ and $k_1, \ldots, k_p \in M$ we have \eqref{ftr3s1} whenever \eqref{ftr3g}. Hence $ b \rvfx{{\overline u}} A \Rightarrow B. $ Thus $b \rvfx{{\overline u}} S$ for all evaluations $f$. It follows from Proposition~\ref{p_eq} that there is a natural number $e$ such that $e \rvfx{{\overline r}} S$ for all evaluations $f$. \item[ b)] Let $S$ be obtained by $\frac{\displaystyle A\Rightarrow B \land C}{\displaystyle A\Rightarrow C}$ and ${\overline u} = u_1, \ldots, u_p$ be an admissible list of variables for $A\Rightarrow C$ and $A \Rightarrow B \land C$. By the induction hypothesis, there is a natural number $a$ such that, for every evaluation~$f$, we have \eqref{ftr3i}. Obviously, $a \in \mathsf I_{p+1}$. It follows from (Cm) that there is a natural number $b$ such that, for all natural numbers $k_1, \ldots, k_p, d$, we have \begin{equation}\label{ftr3f2} \varphi^V_b(k_1, \ldots, k_p, d) \simeq \mathsf p_2 \varphi^V_a(k_1, \ldots, k_p, d). \end{equation} It can be easily checked that $b \rvfx{{\overline u}} S$ for all evaluations $f$. It follows from Proposition~\ref{p_eq} that there is a natural number $e$ such that $e \rvfx{{\overline r}} S$ for all evaluations $f$. \item[R4)] Let $S$ be obtained by $\frac{\displaystyle B\Rightarrow A\;\ C\Rightarrow A}{\displaystyle B\lor C\Rightarrow A}$ and ${\overline u} = u_1, \ldots, u_p$ be an admissible list of variables for $B\Rightarrow A,\ C \Rightarrow A$, and $B \lor C \Rightarrow A$. By the induction hypothesis, there exist natural numbers $b$, $c$ such that, for every evaluation~$f$, \begin{equation}\label{ftr4i1} b \rvfx{{\overline u}} B\Rightarrow A, \end{equation} \begin{equation}\label{ftr4i2} c \rvfx{{\overline u}} C\Rightarrow A. \end{equation} It follows from \eqref{ftr4i1}, \eqref{ftr4i2} that $b,\ c \in \mathsf I_{p+1}$. 
By (Cs), (Cm), (BF), there exists a natural number $a$ such that, for all natural numbers $k_1, \ldots, k_p, d$, we have \begin{align}\label{ftr4f_if} \varphi^V_a(k_1, \ldots, k_p, d) \simeq \varphi^V_b(k_1, \ldots, k_p, \mathsf{p}_2 d)\ \mbox{ if } \mathsf p_1 d = 0, \\ \label{ftr4f_notif} \varphi^V_a(k_1, \ldots, k_p, d) \simeq \varphi^V_c(k_1, \ldots, k_p, \mathsf{p}_2 d)\ \mbox{ if } \mathsf p_1 d \not= 0. \end{align} Let $\varnothing \not= M \subseteq \mathbb N$ and $f$ be an $M$-evaluation. Suppose for some natural numbers $d$ and $k_1, \ldots, k_p \in M$, \begin{equation}\label{ftr4g} d \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,(B \lor C), \end{equation} where ${\overline k} = k_1, \ldots, k_p$. From \eqref{ftr4g} it follows that either $\mathsf p_1 d = 0$, or $\mathsf p_1 d = 1$. Let us consider $2$ cases. Case $1$: $\mathsf p_1 d = 0$. Using \eqref{ftr4g}, we get \begin{equation}\label{ftr4g1} \mathsf p_2 d \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,B. \end{equation} From \eqref{ftr4i1}, \eqref{ftr4g1} it follows that \begin{equation}\label{ftr4c1} \varphi^V_{b}(k_1, \ldots, k_p, \mathsf p_2 d) \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,A. \end{equation} Using \eqref{ftr4f_if} and \eqref{ftr4c1}, we obtain \begin{equation}\label{ftr4f1} \varphi^V_{a}(k_1, \ldots, k_p, d) \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,A. \end{equation} Case $2$: $\mathsf p_1 d = 1$. Using \eqref{ftr4g}, we get \begin{equation}\label{ftr4g2} \mathsf p_2 d \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,C. \end{equation} From \eqref{ftr4i2}, \eqref{ftr4g2} it follows that \begin{equation}\label{ftr4c2} \varphi^V_{c}(k_1, \ldots, k_p, \mathsf p_2 d) \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,A. \end{equation} Using \eqref{ftr4f_notif} and \eqref{ftr4c2}, we get \eqref{ftr4f1}. Thus for all natural numbers $d$ and $k_1, \ldots, k_p \in M$ we have \eqref{ftr4f1} whenever \eqref{ftr4g}. Hence $ a \rvfx{{\overline u}} B \lor C \Rightarrow A. $ Thus $a \rvfx{{\overline u}} S$ for all evaluations $f$. It follows from Proposition~\ref{p_eq} that there is a natural number $e$ such that $e \rvfx{{\overline r}} S$ for all evaluations $f$. \item[R5) a)] Let $S$ be obtained by $\frac{\displaystyle B\lor C\Rightarrow A}{\displaystyle B\Rightarrow A}$ and ${\overline u} = u_1, \ldots, u_p$ be an admissible list of variables for $B\Rightarrow A$ and $B \lor C \Rightarrow A$. By the induction hypothesis, there is a natural number $a$ such that, for every evaluation~$f$, we have \begin{equation}\label{ftr5i} a \rvfx{{\overline u}} B \lor C \Rightarrow A. \end{equation} It follows from \eqref{ftr5i} that $a \in \mathsf I_{p+1}$. By (Cm), (BF), (Cn), there is a natural number $b$ such that, for all natural numbers $k_1, \ldots, k_p, d$, \begin{equation}\label{ftr5f1} \varphi^V_b(k_1, \ldots, k_p, d) \simeq \varphi^V_a(k_1, \ldots, k_p, \mathsf c(0, d)). \end{equation} Let $\varnothing \not= M \subseteq \mathbb N$ and $f$ be an $M$-evaluation. Suppose for some natural numbers $d$ and $k_1, \ldots, k_p \in M$, \begin{equation}\label{ftr5g1} d \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,B, \end{equation} where ${\overline k} = k_1, \ldots, k_p$. From \eqref{ftr5g1} it follows that \begin{equation}\label{ftr5g10} \mathsf c(0, d) \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,(B \lor C). 
\end{equation} Using \eqref{ftr5i} and \eqref{ftr5g10}, we get \begin{equation}\label{ftr5c1} \varphi^V_a(k_1, \ldots, k_p, \mathsf c(0, d)) \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,A. \end{equation} From \eqref{ftr5f1}, \eqref{ftr5c1} it follows that \begin{equation}\label{ftr5s} \varphi^V_b(k_1, \ldots, k_p, d) \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}]\,A. \end{equation} Thus for all natural numbers $d$ and $k_1, \ldots, k_p \in M$ we have \eqref{ftr5s} whenever \eqref{ftr5g1}. Hence $ b \rvfx{{\overline u}} B \Rightarrow A. $ Thus $b \rvfx{{\overline u}} S$ for all evaluations $f$. It follows from Proposition~\ref{p_eq} that there is a natural number $e$ such that $e \rvfx{{\overline r}} S$ for all evaluations $f$. \item[ b)] Let $S$ be obtained by $\frac{\displaystyle B\lor C\Rightarrow A}{\displaystyle C\Rightarrow A}$ and ${\overline u} = u_1, \ldots, u_p$ be an admissible list of variables for $C\Rightarrow A$ and $B \lor C \Rightarrow A$. By the induction hypothesis, there is a natural number $a$ such that, for every evaluation~$f$, we have \eqref{ftr5i}. It follows from \eqref{ftr5i} that $a \in \mathsf I_{p+1}$. By (Cm), (BF), (Cn), there is a natural number $b$ such that, for all natural numbers $k_1, \ldots, k_p, d$, \begin{equation}\label{ftr5f2} \varphi^V_b(k_1, \ldots, k_p, d) \simeq \varphi^V_a(k_1, \ldots, k_p, \mathsf c(1, d)). \end{equation} It can be easily checked that $b \rvfx{{\overline u}} S$ for all evaluations $f$. It follows from Proposition~\ref{p_eq} that there is a natural number $e$ such that $e \rvfx{{\overline r}} S$ for all evaluations $f$. \item[R6)] Let $S$ be obtained by $\frac{\displaystyle A \Rightarrow B}{\displaystyle [{\overline y} / {\overline x}]\,A \Rightarrow [{\overline y} / {\overline x}]\,B}\ $ and $|{\overline x}| = |{\overline y}| = n$. Suppose ${\overline u} = u_1, \ldots, u_p$ is an admissible list of variables for $A \Rightarrow B$ and $[{\overline y} / {\overline x}]\,A \Rightarrow [{\overline y} / {\overline x}]\,B$. By $[{\overline y} / {\overline x}]\,{\overline u}$ denote the result of substituting ${\overline y}$ for ${\overline x}$ in ${\overline u}$. Since ${\overline u}$ is admissible for $[{\overline y} / {\overline x}]\,A \Rightarrow [{\overline y} / {\overline x}]\,B$, all variables in $[{\overline y} / {\overline x}]\,{\overline u}$ are in ${\overline u}$. If ${\overline k} = k_1, \ldots, k_p$ is a list of natural numbers, then by $[{\overline k} / {\overline u}][{\overline y} / {\overline x}]\,{\overline u}$ denote the result of substituting ${\overline k}$ for ${\overline u}$ in $[{\overline y} / {\overline x}]\,{\overline u}$. Obviously, $[{\overline k} / {\overline u}][{\overline y} / {\overline x}]\,{\overline u}$ is a list of natural numbers and $|[{\overline k} / {\overline u}][{\overline y} / {\overline x}]\,{\overline u}| = p$. For all $i = 1, \ldots, p$ by $\mathsf z_i$ denote a function such that $\mathsf z_i(k_1, \ldots, k_p)$ is the $i$-th element of $[{\overline k} / {\overline u}][{\overline y} / {\overline x}]\,{\overline u}$ for all natural numbers $k_1, \ldots, k_p$. Clearly, for all $i$ there exists $j$ such that $\mathsf z_i$ is $\mathsf I^j_{p}$. By the induction hypothesis, there is a natural number $a$ such that, for every evaluation~$f$, we have \begin{equation}\label{ftr6i} a \rvfx{{\overline u}} A \Rightarrow B. \end{equation} From \eqref{ftr6i} it follows that $a \in \mathsf I_{p+1}$.
By (Cm), (BF), there is a natural number $b$ such that, for all natural numbers $d$ and ${\overline k}= k_1, \ldots, k_p$, \begin{equation}\label{ftr6f0} \varphi^V_b(k_1, \ldots, k_p, d) \simeq \varphi^V_a(\mathsf z_1({\overline k}), \ldots, \mathsf z_p({\overline k}), d). \end{equation} Since $[{\overline k} / {\overline u}][{\overline y} / {\overline x}]\,{\overline u}$ is the list $\mathsf z_1({\overline k}), \ldots, \mathsf z_p({\overline k})$, we get \begin{equation}\label{ftr6f} \varphi^V_b(k_1, \ldots, k_p, d) \simeq \varphi^V_a([{\overline k} / {\overline u}][{\overline y} / {\overline x}]\,{\overline u}, d). \end{equation} Let $\varnothing \not= M \subseteq \mathbb N$ and $f$ be an $M$-evaluation. Suppose for some natural numbers $d$ and $k_1, \ldots, k_p \in M$, \begin{equation}\label{ftr6g} d \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}][{\overline y} / {\overline x}]\,A, \end{equation} where ${\overline k} = k_1, \ldots, k_p$. From \eqref{ftr6i}, \eqref{ftr6g} it follows that \begin{equation}\label{ftr6c} \varphi^V_a([{\overline k} / {\overline u}][{\overline y} / {\overline x}]\,{\overline u}, d) \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}][{\overline y} / {\overline x}]\,B. \end{equation} Using \eqref{ftr6f} and \eqref{ftr6c}, we get \begin{equation}\label{ftr6s} \varphi^V_b(k_1, \ldots, k_p, d) \mathrel{\mathbf{r}^V_f} [{\overline k} / {\overline u}][{\overline y} / {\overline x}]\,B. \end{equation} Thus for all natural numbers $d$ and $k_1, \ldots, k_p \in M$ we have \eqref{ftr6s} whenever \eqref{ftr6g}. Hence $ b \rvfx{{\overline u}} [{\overline y} / {\overline x}]\,A \Rightarrow [{\overline y} / {\overline x}]\,B. $ Thus $b \rvfx{{\overline u}} S$ for all evaluations $f$. It follows from Proposition~\ref{p_eq} that there is a natural number $e$ such that $e \rvfx{{\overline r}} S$ for all evaluations $f$. \item[R7)] Let $S$ be obtained by $\frac{\displaystyle B\Rightarrow A}{\displaystyle \exists x\, B\Rightarrow A},$ where $x$ is not free in $A$. It is clear that $S$ has the form $\exists x\, B({\overline u}, x) \Rightarrow A({\overline u})$ for some list of variables ${\overline u} = u_1, \ldots, u_p$. By the induction hypothesis, there is a natural number $a$ such that, for every evaluation~$f$, we have \begin{equation}\label{ftr7i} a \rvfx{{\overline u}, x} B \Rightarrow A. \end{equation} It follows from \eqref{ftr7i} that $a \in \mathsf I_{p+2}$. By (Cm), (BF), there is a natural number $b$ such that, for all natural numbers $k_1, \ldots, k_p, d$, \begin{equation}\label{ftr7f} \varphi^V_b(k_1, \ldots, k_p, d) \simeq \varphi^V_a(k_1, \ldots, k_p, \mathsf p_1 d, \mathsf p_2 d). \end{equation} Let $\varnothing \not= M \subseteq \mathbb N$ and $f$ be an $M$-evaluation. Suppose for some natural numbers $d$ and $k_1, \ldots, k_p \in M$, \begin{equation}\label{ftr7g} d \mathrel{\mathbf{r}^V_f} \exists x\, B({\overline k}, x), \end{equation} where ${\overline k} = k_1, \ldots, k_p$. From \eqref{ftr7g} it follows that \begin{equation}\label{ftr7g0} \mathsf p_2 d \mathrel{\mathbf{r}^V_f} B({\overline k}, \mathsf p_1 d). \end{equation} Using \eqref{ftr7i} and \eqref{ftr7g0}, we get \begin{equation}\label{ftr7c} \varphi^V_a(k_1, \ldots, k_p, \mathsf p_1 d, \mathsf p_2 d) \mathrel{\mathbf{r}^V_f} A({\overline k}). \end{equation} From \eqref{ftr7f}, \eqref{ftr7c} it follows that \begin{equation}\label{ftr7s} \varphi^V_b(k_1, \ldots, k_p, d) \mathrel{\mathbf{r}^V_f} A({\overline k}).
\end{equation} Thus for all natural numbers $d$ and $k_1, \ldots, k_p \in M$ we have \eqref{ftr7s} whenever \eqref{ftr7g}. Hence $ b \rvfx{{\overline u}} \exists x\,B \Rightarrow A. $ Thus $b \rvfx{{\overline u}} S$ for all evaluations $f$. It follows from Proposition~\ref{p_eq} that there is a natural number $e$ such that $e \rvfx{{\overline r}} S$ for all evaluations $f$. \item[R8)] Let $S$ be obtained by $\frac{\displaystyle \exists x\,B\Rightarrow A}{\displaystyle B\Rightarrow A},$ where $x$ is not free in $A$. It is clear that $S$ has the form $B({\overline u}, x) \Rightarrow A({\overline u})$ for some list of variables ${\overline u} = u_1, \ldots, u_p$. By the induction hypothesis, there is a natural number $a$ such that, for every evaluation~$f$, we have \begin{equation}\label{ftr8i} a \rvfx{{\overline u}} \exists x\,B \Rightarrow A. \end{equation} It follows from \eqref{ftr8i} that $a \in \mathsf I_{p+1}$. By (Cm), (BF), there is a natural number $b$ such that, for all natural numbers $k_1, \ldots, k_p, c, d$, \begin{equation}\label{ftr8f} \varphi^V_b(k_1, \ldots, k_p, c, d) \simeq \varphi^V_a(k_1, \ldots, k_p, \mathsf c(c, d)). \end{equation} Let $\varnothing \not= M \subseteq \mathbb N$ and $f$ be an $M$-evaluation. Suppose for some natural numbers $d$ and $k_1, \ldots, k_p, c \in M$, \begin{equation}\label{ftr8g} d \mathrel{\mathbf{r}^V_f} B({\overline k}, c), \end{equation} where ${\overline k} = k_1, \ldots, k_p$. From \eqref{ftr8g} it follows that \begin{equation}\label{ftr8g0} \mathsf c(c, d) \mathrel{\mathbf{r}^V_f} \exists x\, B({\overline k}, x). \end{equation} Using \eqref{ftr8i} and \eqref{ftr8g0}, we get \begin{equation}\label{ftr8c} \varphi^V_a(k_1, \ldots, k_p, \mathsf c(c, d)) \mathrel{\mathbf{r}^V_f} A({\overline k}). \end{equation} From \eqref{ftr8f}, \eqref{ftr8c} it follows that \begin{equation}\label{ftr8s} \varphi^V_b(k_1, \ldots, k_p, c, d) \mathrel{\mathbf{r}^V_f} A({\overline k}). \end{equation} Thus for all natural numbers $d$ and $k_1, \ldots, k_p, c \in M$ we have \eqref{ftr8s} whenever \eqref{ftr8g}. Hence $ b \rvfx{{\overline u}, x} B \Rightarrow A. $ Thus $b \rvfx{{\overline u}, x} S$ for all evaluations $f$. It follows from Proposition~\ref{p_eq} that there is a natural number $e$ such that $e \rvfx{{\overline r}} S$ for all evaluations $f$. \item[R9)] Let $S$ be obtained by $\frac{\displaystyle A \land B\Rightarrow C}{\displaystyle A \Rightarrow \forall {\overline x}\,(B\to C)},$ where $|{\overline x}| = n$ and no variable in ${\overline x}$ is free in $A$. It is clear that $S$ has the form $$A({\overline u}) \Rightarrow \forall {\overline x}\,(B({\overline x}, {\overline u}) \to C({\overline x}, {\overline u}))$$ for some list of variables ${\overline u} = u_1, \ldots, u_p$. By the induction hypothesis, there is a natural number $c$ such that, for every evaluation~$f$, \begin{equation}\label{ftr9i} c \rvfx{{\overline x}, {\overline u}} A({\overline u}) \land B({\overline x}, {\overline u}) \Rightarrow C({\overline x}, {\overline u}). \end{equation} It follows from \eqref{ftr9i} that $c \in \mathsf I_{n+p+1}$. By (Cm), (BF), (SMN), (PV), there exists a $V$-function $\ss$ such that we have \begin{equation}\label{ftr9f} \varphi^V_{\ss(c, k_1, \ldots, k_p, d)}(m_1, \ldots, m_n, b) \simeq \varphi^V_c(m_1, \ldots, m_n, k_1, \ldots, k_p, \mathsf c(d, b)) \end{equation} for all natural numbers $m_1, \ldots, m_n, k_1, \ldots, k_p, d, b$.
It follows from (PV), (SMN) that there is a natural number $e$ such that, for all natural numbers $k_1, \ldots, k_p, d$, we have \begin{equation}\label{ftr9n} \varphi^V_e(k_1, \ldots, k_p, d) \simeq \ss(c, k_1, \ldots, k_p, d). \end{equation} Let $\varnothing \not= M \subseteq \mathbb N$ and $f$ be an $M$-evaluation. Suppose for some natural numbers $d$ and $k_1, \ldots, k_p \in M$, \begin{equation}\label{ftr9r} d \mathrel{\mathbf{r}^V_f} A({\overline k}), \end{equation} where ${\overline k} = k_1, \ldots, k_p$. Let us prove that \begin{equation}\label{ftr9e} \varphi^V_e(k_1, \ldots, k_p, d) \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(B({\overline x}, {\overline k}) \to C({\overline x}, {\overline k})). \end{equation} Suppose for some natural numbers $b$ and $m_1, \ldots, m_n \in M$, \begin{equation}\label{ftr9g} b \mathrel{\mathbf{r}^V_f} B({\overline m}, {\overline k}), \end{equation} where ${\overline m} = m_1, \ldots, m_n$. From \eqref{ftr9r}, \eqref{ftr9g} it follows that \begin{equation}\label{ftr9u} \mathsf c(d, b) \mathrel{\mathbf{r}^V_f} A({\overline k}) \land B({\overline m}, {\overline k}). \end{equation} Using \eqref{ftr9i} and \eqref{ftr9u}, we get \begin{equation}\label{ftr9c} \varphi^V_c(m_1, \ldots, m_n, k_1, \ldots, k_p, \mathsf c(d, b)) \mathrel{\mathbf{r}^V_f} C({\overline m}, {\overline k}). \end{equation} From \eqref{ftr9f}, \eqref{ftr9c} it follows that \begin{equation}\label{ftr9s} \varphi^V_{\ss(c, k_1, \ldots, k_p, d)}(m_1, \ldots, m_n, b) \mathrel{\mathbf{r}^V_f} C({\overline m}, {\overline k}). \end{equation} Thus for all natural numbers $b$ and $m_1, \ldots, m_n \in M$ we have \eqref{ftr9s} whenever \eqref{ftr9g}. Hence \begin{equation}\label{ftr9ss} \ss(c, k_1, \ldots, k_p, d) \mathrel{\mathbf{r}^V_f} \forall {\overline x}\,(B({\overline x}, {\overline k}) \to C({\overline x}, {\overline k})). \end{equation} From \eqref{ftr9n}, \eqref{ftr9ss} it follows that \eqref{ftr9e}. Thus for all natural numbers $d$ and $k_1, \ldots, k_p \in M$ we have \eqref{ftr9e} whenever \eqref{ftr9r}. Hence $$ e \rvfx{{\overline u}}\, A \Rightarrow \forall {\overline x}\,(B\to C). $$ Thus $e \rvfx{{\overline u}} S$ for all evaluations $f$. It follows from Proposition~\ref{p_eq} that there exists a natural number $e'$ such that $e' \rvfx{{\overline r}} S$ for all evaluations $f$. \end{itemize} \end{proof} \begin{theorem} If a sentence $A$ is derivable in $\ensuremath{\mathsf{BQC}}$, then the sentence $A$ is absolutely $V$-realizable over all domains. \end{theorem} \begin{proof} Let $A$ be derivable in $\ensuremath{\mathsf{BQC}}$. Then $\ensuremath{\mathsf{BQC}} \vdash \top \Rightarrow A$. Since $A$ is a sentence, we see that an empty list of variables $\overline{v}$ is admissible for $\top \Rightarrow A$. From Theorem~\ref{t_main} it follows that there exists a natural number $e$ such that $e \rvfx{\overline{v}} \top \Rightarrow A$ for all evaluations $f$. Then $e' \mathrel{\mathbf{r}^V_f} A$ for all evaluations $f$, where $e' = \varphi^V_e(0)$. \end{proof} \section*{Acknowledgments} This research was partially supported by Russian Foundation for Basic Research under grant 20-01-00670.
\section{Introduction} \label{sec:intro} Background subtraction (BGS) is a foundational, low-level task in computer vision and video processing. The aim of BGS is to segment an input video frame into regions corresponding to either foreground (e.g., motor vehicles) or background (e.g., highway surface). It is frequently used as a pre-processing step for higher-level tasks such as object tracking, people and motor-vehicle recognition, human activity recognition, etc. Since BGS is often the first pre-processing step, the accuracy of its output has an overwhelming impact on the overall performance of subsequent steps. Therefore, it is critical that BGS produce as accurate a foreground/background segmentation as possible. Traditional BGS algorithms are unsupervised and rely on a background model to predict foreground regions \cite{stauffer1999gmm, zivkovic2004improvedgmm, elgammal2002kde, mittal2004adaptivekde, barnich2011vibe, st2015pawcs, st2015subsense, icsik2018swcd, lee2018wisenetmd}. PAWCS \cite{st2015pawcs}, SWCD \cite{icsik2018swcd} and WisenetMD \cite{lee2018wisenetmd} are considered to be state-of-the-art unsupervised BGS algorithms. However, since they rely on the accuracy of the background model, they encounter difficulties when applied to complex scenes. Recently, ensemble methods and a method leveraging semantic segmentation have been proposed which significantly outperform traditional algorithms \cite{bianco2017iutis, zeng2019cnnsfc, braham2017semanticbgs}. The success of deep learning in computer vision did not bypass BGS research \cite{bouwmans2019deep}. A number of {\it supervised} deep-learning BGS algorithms have been developed \cite{braham2016deepbgs, babaee2018cnnbgs, bakkay2018bscgan, zeng2018mfcnnbgs, wang2017interactivebgs, lim2018fgsegnet, lim2018fgsegnetv3,sakkos20173dbgs} with performance easily surpassing that of traditional methods. However, most of these algorithms have been tuned to either one specific video or to a group of similar videos, and their performance on unseen videos has not been evaluated. For example, FgSegNet \cite{lim2018fgsegnet} uses 200 frames from a test video for training and the remaining frames from the {\it same} video for evaluation. If applied to an unseen video, its performance drops significantly (Section~\ref{sec:results}). In this paper, we introduce {\it Background Subtraction for Unseen Videos} (BSUV-Net), a fully-convolutional neural network for predicting foreground of an {\it unseen} video. A key feature of our approach is that the training and test sets are composed of frames originating from different videos. This guarantees that no ground-truth data from the test videos have been shown to the network in the training phase. By employing two reference backgrounds at different time scales, {BSUV-Net} addresses two challenges often encountered in BGS: varying scene illumination and intermittently-static objects that tend to get absorbed into the background. We also propose novel data augmentation which further improves our method's performance under varying illumination. Furthermore, motivated by recent work on the use of semantic segmentation in BGS \cite{braham2017semanticbgs}, we improve our method's accuracy by inputting semantic information along with the reference backgrounds and current frame. 
The main contributions of our work are as follows: \begin{enumerate} \topsep -2pt\partopsep -2pt\itemsep -2pt \item {\textbf{Supervised BGS for Unseen Videos:}} Although supervised algorithms, especially neural networks, have significantly improved BGS performance, they are tuned to a specific video and thus their performance on {\it unseen} videos deteriorates dramatically. To the best of our knowledge, {BSUV-Net} is the first supervised BGS algorithm that is truly generalizable to {\it unseen} videos. \item {\textbf{Data Augmentation for Increased Resilience to Varying Illumination:}} Changes in scene illumination pose a major challenge to BGS algorithms. To mitigate this, we develop a simple, yet effective, data augmentation technique. Using a simple additive model, we vary the illumination of the current frame and the reference background frames that are fed into {BSUV-Net} during training. This enables us to effectively tackle various illumination change scenarios that may be present in test videos. \item \textbf{Leveraging Semantic and Multiple Time-Scale Information:} {BSUV-Net} improves foreground-boundary segmentation accuracy by accepting semantic information as one of its inputs. This is unlike in an earlier BGS method \cite{braham2017semanticbgs} which used semantic information as a \textit{post-processing} step. The other network inputs are the current frame (to be segmented) and a two-frame background model with data from different time scales. While one background frame, based on {\it distant} history, helps with the discovery of intermittently-static objects, the other frame, based on {\it recent} history, is key for handling dynamic factors such as illumination changes. \end{enumerate} Based on our extensive experiments on the CDNet-2014 dataset \cite{goyette2012changedetection}, {BSUV-Net} outperforms state-of-the-art BGS algorithms evaluated on {\it unseen} videos. \vspace{-1ex} \section{Related Work} \label{sec:rel_work} A wide range of BGS algorithms have been developed in the past, each having some advantages and disadvantages over others. Since this is not a survey paper, we will not cover all BGS variants. Instead, we will focus only on recent top-performing methods. We divide these algorithms into 3 categories: (i) BGS by (unsupervised) background modeling, (ii) supervised BGS tuned to a single video or a group of videos, (iii) improving BGS algorithms by post-processing. \subsection{BGS by Background Modeling} Nearly all traditional BGS algorithms first compute a background model, and then use it to predict the foreground. While a simple model based on the mean or median of a subset of preceding frames offers only a single background value per pixel, a probabilistic Gaussian Mixture Model (GMM) \cite{stauffer1999gmm} allows a range of background values. This idea was improved by creating an online procedure for the update of GMM parameters in a pixel-wise manner \cite{zivkovic2004improvedgmm}. Kernel Density Estimation (KDE) was introduced into BGS \cite{elgammal2002kde} as a non-parametric alternative to GMMs and was subsequently improved \cite{mittal2004adaptivekde}. The probabilistic methods achieve better performance compared to single-value models for dynamic scenes and scenes with small background changes. In \cite{barnich2011vibe}, Barnich and Droogenbroeck introduced a sample-based background model.
Instead of implementing a probabilistic model, they modeled the background by a set of sample values per pixel and used a distance-based model to decide whether a pixel should be classified as background or foreground. Since color information alone is not sufficient for complex cases, such as illumination changes, Bilodeau \textit{et al. } introduced Local Binary Similarity Patterns (LBSP) to compare the current frame and background using spatio-temporal features instead of color \cite{bilodeau2013lbsp}. St-Charles \textit{et al. } combined color and texture information, and introduced a word-based approach, PAWCS \cite{st2015pawcs}. They considered pixels as background words and updated each word's reliability by its persistence. Similarly, SuBSENSE by St-Charles \textit{et al. } \cite{st2015subsense} combines LBSP and color features, and employs pixel-level feedback to improve the background model. Recently, Isik \textit{et al. } introduced SWCD, a pixel-wise, sliding-window approach leveraging a dynamic control system to update the background model \cite{icsik2018swcd}, while Lee \textit{et al. } introduced WisenetMD, a multi-step algorithm to eliminate false positives in dynamic backgrounds \cite{lee2018wisenetmd}. In \cite{sultana2019unsupervised}, Sultana \textit{et al. } introduced an unsupervised background estimation method based on a generative adversarial network (GAN). They use optical flow to create a motion mask and then in-paint covered regions with background values estimated by a GAN. The foreground is then computed by subtracting the estimated background from the current frame followed by morphological operations. They, however, do not achieve state-of-the-art results. Zeng \textit{et al. } introduced RTSS \cite{zeng2019rtss} which uses deep learning-based semantic segmentation predictions to improve the background model used in SuBSENSE \cite{st2015subsense}. \subsection{Supervised BGS} \label{sec:supervisedbgs} Although background subtraction has been extensively studied in the past, the definition of a supervised BGS algorithm is still vague. Generally speaking, the aim of a supervised BGS algorithm is to learn the parameters (e.g., neural-network weights) of a complex function in order to minimize a loss function of the labeled training frames. Then, the performance of the algorithm is evaluated on a separate set of test frames. In this section we divide the supervised BGS algorithms into three groups, namely \textit{video-optimized}, \textit{video-group-optimized} and \textit{video-agnostic}, depending on which frames and videos they use during training and testing. Several algorithms use some frames from a test video for training and all the frames of the {\it same} video for evaluating performance on that video. In such algorithms, parameter values are optimized separately for each video. We will refer to this class of algorithms as \textit{video-optimized} BGS algorithms. In another family of algorithms, randomly-selected frames from a \textit{group} of test videos are used for training and all the frames of the same videos are used for testing. Since some frames from {\it all} test videos are used for training, we will refer to this class of algorithms as \textit{video-group-optimized} algorithms.
Note that, in both of these scenarios, the algorithms are neither optimized for nor evaluated on {\it unseen} videos and, to the best of our knowledge, all of the top-performing supervised BGS algorithms to date are either \textit{video-optimized} or \textit{video-group-optimized}. In this paper, we introduce a new category of supervised BGS algorithms, called \textit{video-agnostic} algorithms, that can be applied to unseen videos with little or no loss of performance. To learn parameters, a {\it video-agnostic} algorithm uses frames from a set of training videos but for performance evaluation it uses a completely different set of videos. In recent years, supervised learning algorithms based on convolutional neural networks (CNNs) have been widely applied to BGS. The first CNN-based BGS algorithm was introduced in \cite{braham2016deepbgs}. This is a \textit{video-optimized} algorithm which produces a single foreground probability for the center of each $27 \times 27$ patch of pixels. A method proposed in \cite{wang2017interactivebgs} uses a similar approach, but with a modified CNN which operates on patches of size $31 \times 31$ pixels. Instead of using a patch-wise algorithm, Zeng and Zhu introduced the Multiscale Fully-Convolutional Neural Network (MFCN) which can predict the foreground of the entire input image frame in one step \cite{zeng2018mfcnnbgs}. Lim and Keles proposed a triplet CNN which uses siamese networks to create features at three different scales and combines these features within a transposed CNN \cite{lim2018fgsegnet}. In a follow-up work, they removed the triplet networks and used dilated convolutions to capture the multiscale information \cite{lim2018fgsegnetv3}. In \cite{bakkay2018bscgan}, Bakkay \textit{et al. } used generative adversarial networks for BGS. The generator performs the BGS task, whereas the discriminator tries to classify the BGS map as real or fake. Although all these algorithms perform very well on various BGS datasets, it is important to note that they are all \textit{video-optimized}, thus they will suffer a performance loss when tested on unseen videos. In \cite{babaee2018cnnbgs}, Babaee \textit{et al. } designed a \textit{video-group-optimized} CNN for BGS. They randomly selected $5\%$ of CDNet-2014 frames \cite{goyette2012changedetection} as a training set and developed a single network for all of the videos in this dataset. In \cite{sakkos20173dbgs}, Sakkos \textit{et al. } used a 3D CNN to capture the temporal information in addition to the color information. Similarly to \cite{babaee2018cnnbgs}, they trained a single algorithm using 70\% of frames in CDNet-2014 and then used it to predict the foreground in all videos of the dataset. Note that even these approaches do not generalize to other videos since some ground truth data from each video exists in the training set. Table \ref{table:training_schemes} compares and summarizes the landscape of supervised BGS algorithms and the methodology used for training and evaluation. As discussed above, none of the CNN-based BGS algorithms to date have been designed for or tested on unseen videos with no ground truth at all. This limits their practical utility since it is not possible to label some frames in {\it every} new video. Since the publication of the first version of this paper, we have learned about a recent BGS algorithm named 3DFR \cite{mandal20193dfr}, which uses 3D spatio-temporal convolution blocks in an encoder-decoder architecture to predict background in an unseen video.
However, \cite{mandal20193dfr} only reports evaluation results on 10 out of the 53 videos of CDNet-2014. \begin{table*} \centering \caption{Training/evaluation methodologies of supervised BGS algorithms on CDNet-2014.} \begin{tabular}{| c | c | c | c|} \hline \textbf{Algorithm} & \multicolumn{2}{c|}{\textbf{Are some frames from test videos used in training?}} & \makecell{\textbf{Training and evaluation} \\ \textbf{methodology}}\\ \hline Braham-CNN-BGS \cite{braham2016deepbgs} & Yes & First half of the labeled frames of the test video & \textit{video-optimized}\\ \hline MFCNN \cite{zeng2018mfcnnbgs} & Yes & \makecell{Randomly selected 200 frames from the first \\ 3000 labeled frames of the test video} & \textit{video-optimized}\\ \hline \makecell{Wang-CNN-BGS \cite{wang2017interactivebgs} \\ FGSegNet \cite{lim2018fgsegnet, lim2018fgsegnetv3} \\ BScGAN \cite{bakkay2018bscgan}} & Yes & Hand-picked 200 labeled frames of the test video & \textit{video-optimized}\\ \hline Babaee-CNN-BGS \cite{babaee2018cnnbgs} & Yes & $5\%$ of the labeled frames of all videos & \textit{video-group-optimized}\\ \hline 3D-CNN-BGS \cite{sakkos20173dbgs} & Yes & $70\%$ of the labeled frames of all videos & \textit{video-group-optimized}\\ \hline {\bf BSUV-Net} (proposed) & No & No frame from test videos is used in training & \textit{video-agnostic}\\ \hline \end{tabular} \label{table:training_schemes} \end{table*} \subsection{Improving BGS Algorithms by Post-Processing} Over the last few years, many deep-learning-based algorithms were developed for the problem of semantic segmentation and they achieved state-of-the-art performance. In \cite{braham2017semanticbgs}, Braham and Droogenbroeck introduced a post-processing step for BGS algorithms based on semantic segmentation predictions. Given an input frame, they predicted a segmentation map using PSPNet \cite{zhao2017pspnet} and obtained pixel-wise probability predictions for semantic labels such as person, car, animal, house, etc. Then, they manually grouped these labels into two sets -- foreground and background labels, and used this information to improve any BGS algorithm's output in a post-processing step. They obtained very competitive results by using SuBSENSE \cite{st2015subsense} as the BGS algorithm. Bianco \textit{et al. } introduced an algorithm called IUTIS which combines the results produced by several BGS algorithms \cite{bianco2017iutis}. They used genetic programming to determine how to combine several BGS algorithms' outputs using a sequence of basic binary operations, such as logical ``and/or'', majority voting and median filtering. Their best result was achieved by using 5 top-performing BGS algorithms on the CDNet-2014 dataset at the time of publication. Zeng \textit{et al. } followed the same idea, but instead of genetic programming, used a fully-convolutional neural network to fuse several BGS results into a single output \cite{zeng2019cnnsfc}, and outperformed IUTIS on CDNet-2014. \section{Proposed Algorithm: BSUV-Net} \label{sec:algo} \subsection{Inputs to {BSUV-Net}} \label{sec:input} Segmenting an unseen video frame into foreground and background regions without using any information about the background would be an ill-defined problem. In {BSUV-Net}, we use two reference frames to characterize the background. One frame is an ``empty'' background frame, with no people or other objects of interest, which can typically be extracted from the beginning of a video, e.g., {\it via} median filtering over a large number of \textit{initial} frames.
This provides an accurate reference that is very helpful for segmenting intermittently-static objects in the foreground. However, due to dynamic factors, such as illumination variations, this reference may not be valid after some time. To counteract this, we use another reference frame that characterizes \textit{recent} background, for example by computing the median of the 100 frames preceding the frame being processed. However, this frame might not be as accurate as the first reference frame since we cannot guarantee that there will be no foreground objects in it (if such objects are present for less than 50 frames, the temporal median will suppress them). By using two reference frames captured at different time scales, we aim to leverage the benefits of each frame type. Braham \textit{et al. } \cite{braham2017semanticbgs} have shown that leveraging semantic segmentation results, for example in a post-processing step, significantly improves the performance of a BGS algorithm. In {BSUV-Net}, we follow a different idea and use semantic information as an additional input channel to our neural network. In this way, we let our network learn how to use this information. To extract semantic segmentation information, we used a state-of-the-art CNN called DeepLabv3 \cite{chen2017deeplab} trained on ADE20K \cite{zhou2017ade20k}, an extensive semantic-segmentation dataset with 150 different class labels and more than 20,000 images with dense annotations. Let us denote the set of object classes in ADE20K as $C = \{c_0,\ c_1,\ \dots, \ c_{149}\}$. Following the same procedure as in \cite{braham2017semanticbgs}, we divided these classes into two sets: foreground and background objects. As foreground objects, we used person, car, cushion, box, book, boat, bus, truck, bottle, van, bag and bicycle. The rest of the classes are used as background objects. The softmax layer of DeepLabv3 provides pixel-wise class probabilities $p_{c_j}$ for $c_j\in C$. Let ${\mathbf{I}}[m,n]$ be an input frame at spatial location $m,n$ and let $\{p_{c_j}[m, n]\}_{j=0}^{149}$ be the predicted probability distribution of ${\mathbf{I}}[m,n]$. We compute a foreground probability map (FPM) $\mathbf{S}[m, n] = \sum_{{c_j} \in F} p_{c_j}[m, n]$, where $F$ is the set of foreground classes. We use the current, recent and empty frames in color, each along with its FPM, as the input to BSUV-Net (Figure~\ref{fig:network}). Clearly, the number of channels in {BSUV-Net}'s input layer is 12, since each of the three frames consists of 4 channels (R, G, B, FPM). \subsection{Network Architecture and Loss Function} \begin{figure*} \centering \includegraphics[width=17cm]{images/UNET-BGS.jpg} \caption{Network architecture of {BSUV-Net}. BN stands for batch normalization and SD stands for spatial dropout. Grayscale images at the network input show foreground probability maps (FPM) of the corresponding RGB frames.} \label{fig:network} \end{figure*} We use a UNET-type \cite{ronneberger2015unet} fully-convolutional neural network (FCNN) with residual connections. The architecture of {BSUV-Net} has two parts: encoder and decoder, and is shown in Figure \ref{fig:network}. In the encoder network, we use $2 \times 2$ max-pooling operators to decrease the spatial dimensions. In the decoder network, we use up-convolutional layers (transposed convolution with a stride of 2) to increase the dimensions back to those of the input. In all convolutional and up-convolutional layers, we use $3 \times 3$ convolutions as in VGG \cite{simonyan2014vgg}. 
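Before continuing with the architectural details, we give a minimal, non-authoritative sketch of the input construction of Section~\ref{sec:input}. The \texttt{deeplab\_softmax} callable, the placeholder foreground class indices, the channel ordering and the temporal-median helper are illustrative assumptions rather than the exact implementation.
\begin{verbatim}
# Illustrative sketch of the 12-channel input of BSUV-Net (not the actual code).
import numpy as np

FOREGROUND_CLASSES = [12, 20]   # placeholder ADE20K indices (e.g., person, car)

def temporal_median(frames):
    """Reference background as the pixel-wise median of a stack of frames."""
    return np.median(np.stack(frames, axis=0), axis=0)

def foreground_probability_map(rgb_frame, deeplab_softmax):
    """FPM: S[m, n] = sum of softmax probabilities over the foreground classes."""
    probs = deeplab_softmax(rgb_frame)                    # (H, W, 150)
    return probs[..., FOREGROUND_CLASSES].sum(axis=-1)    # (H, W)

def build_input(current, recent_bg, empty_bg, deeplab_softmax):
    """Stack (R, G, B, FPM) of the empty, recent and current frames -> (H, W, 12)."""
    groups = []
    for frame in (empty_bg, recent_bg, current):          # ordering is arbitrary here
        fpm = foreground_probability_map(frame, deeplab_softmax)
        groups.append(np.concatenate([frame, fpm[..., None]], axis=-1))
    return np.concatenate(groups, axis=-1)
\end{verbatim}
Here, \texttt{recent\_bg} and \texttt{empty\_bg} would be obtained with \texttt{temporal\_median} over the 100 preceding frames and over a set of initial foreground-free frames, respectively.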
The residual connections from the encoder to the decoder help the network combine low-level visual information gained in the initial layers with high-level visual information gained in the deeper layers. Since our aim is to increase the performance on unseen videos, we make heavy use of batch normalization (BN) \cite{ioffe2015batchnorm} and spatial dropout (SD) \cite{tompson2015spatialdropout} layers to increase the generalization capacity. Specifically, we use a BN layer after each convolutional and up-convolutional layer, and an SD layer before each max-pooling layer. Since our task can be viewed as a binary segmentation, we use a sigmoid layer as the last layer in {BSUV-Net}. The operation of the overall network can be defined as a nonlinear map $\mathbf{G}(\mathbf{W}): \mathbf{X} \rightarrow \widehat{\mathbf{Y}}$ where $\mathbf{X} \in \mathbb{R}^{w \times h \times 12}$ is a 12-channel input, $w$ and $h$ are its spatial dimensions, $\mathbf{W}$ represents the parameters of the neural network $\mathbf{G}$, and $\widehat{\mathbf{Y}} \in [0,1]^{w \times h}$ is a pixel-wise foreground probability prediction. Note that since this is a fully-convolutional neural network, it does not require a fixed input size; any frame size can be used, but some padding may be needed to account for max-pooling operations. In most BGS datasets, the number of background pixels is much larger than the number of foreground pixels. This class imbalance creates significant problems for the commonly-used loss functions, such as cross-entropy and mean-squared error. A good alternative for unbalanced binary datasets is the Jaccard index. Since the network output is a probability map, we opted for a relaxed form of the Jaccard index as the loss function, defined as follows: {\small \begin{equation*} J_R(\mathbf{Y}, \widehat{\mathbf{Y}}) \!=\! \frac{T + \sum\limits_{m,n}(\mathbf{Y}[m,n] \widehat{\mathbf{Y}}[m,n])}{T \!+\! \sum\limits_{m,n}\!\!\Big(\mathbf{Y}[m,n] \!+\! \widehat{\mathbf{Y}}[{m,n}] \!-\! \mathbf{Y}[{m,n}] \widehat{\mathbf{Y}}[{m,n}] \Big) } \end{equation*}}where $\mathbf{Y} \in \{0, 1\}^{w \times h}$ is the ground truth of $\mathbf{X}$, $T$ is a smoothing parameter and $m$, $n$ are the spatial locations. \subsection{Resilience to Illumination Change by Data Augmentation} \label{sec:illres} Since neural networks have millions of parameters, they are very prone to overfitting. A widely-used method for reducing overfitting in computer-vision problems is to enlarge the dataset by applying several data augmentations such as random crops, rotations and noise addition. Since we are dealing with videos in this paper, we can also add augmentation in the temporal domain. In real-life BGS problems, there might be a significant illumination difference between an empty background frame acquired at an earlier time and the current frame. However, only a small portion of videos in CDNet-2014 capture significant illumination changes, which limits BSUV-Net's generalization performance. Therefore, we introduce a new data-augmentation technique to account for global illumination changes between the empty reference frame and the current frame. Suppose that $\mathbf{R_E} \in \mathbb{R}^{w \times h \times 3}$ represents the RGB channels of an empty reference frame. Then, an augmented version of $\mathbf{R_E}$ can be computed as $\widehat{\mathbf{R}}_{\mathbf E}[m, n, c] = \mathbf{R_E}[m, n, c] + \mathbf{d}[c]$ for $c = 1,2,3$, where $\mathbf{d} \in \mathbb{R}^3$ represents a frame-specific global RGB change in our illumination model. 
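For concreteness, a non-authoritative sketch of this augmentation and of the relaxed Jaccard loss defined above is given below, assuming PyTorch tensors with pixel values in $[0,1]$; the clipping step, the loss sign convention and the helper names are our own assumptions, while the noise scales follow Section~\ref{sec:train-evaluate}.
\begin{verbatim}
# Illustrative sketches only; not the reference implementation of BSUV-Net.
import torch

def relaxed_jaccard_loss(y_true, y_pred, T=1.0):
    """Relaxed Jaccard index J_R, returned as 1 - J_R so that lower is better."""
    inter = (y_true * y_pred).sum(dim=(-2, -1))
    union = (y_true + y_pred - y_true * y_pred).sum(dim=(-2, -1))
    return (1.0 - (T + inter) / (T + union)).mean()

def augment_illumination(empty_bg_rgb, global_std=0.1, channel_std=0.04):
    """R_E_hat[m, n, c] = R_E[m, n, c] + d[c], with d = I + I_c per frame."""
    i_global = torch.randn(1) * global_std    # shared across the three channels
    i_channel = torch.randn(3) * channel_std  # independent per-channel component
    d = i_global + i_channel                  # frame-specific global RGB shift
    return (empty_bg_rgb + d.view(1, 1, 3)).clamp(0.0, 1.0)
\end{verbatim}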
By choosing $\mathbf{d}$ randomly for each example during training (see Section~\ref{sec:train-evaluate} for details), we can make the network resilient to illumination variations. \section{Experimental Results} \label{sec:exp_res} \subsection{Dataset and Evaluation Metrics}\label{sec:data_and_eval} In order to evaluate the performance of BSUV-Net, we used CDNet-2014 \cite{goyette2012changedetection}, the largest BGS dataset with 53 natural videos from 11 categories including challenging scenarios such as shadows, night videos, dynamic background, etc. The spatial resolution of videos varies from $320 \times 240$ to $720 \times 526$ pixels. Each video has a region of interest in which pixels are labeled as either 1) foreground, 2) background, 3) hard shadow or 4) unknown motion. When measuring an algorithm's performance, we ignored pixels with the unknown motion label and considered hard-shadow pixels as background. Our treatment of hard shadows is consistent with what is done in CDNet-2014 for the change-detection task. \new{In CDNet-2014 \cite{goyette2012changedetection}, the authors propose seven binary performance metrics to cover a wide range of BGS cases: recall ($Re$), specificity ($Sp$), false positive rate ($FPR$), false negative rate ($FNR$), percentage of wrong classification ($PWC$), precision ($Pr$) and F-measure ($F_1$). They also introduced two ranking-based metrics, namely ``average ranking'' ($R$) and ``average ranking across categories'' ($R_{cat}$), which combine all 7 metrics into ranking scores. The details of these rankings can be found at ``changedetection.net''. } \subsection{Training and Evaluation Details}\label{sec:train-evaluate} As discussed in Section \ref{sec:supervisedbgs}, we use a \textit{video-agnostic} evaluation methodology in all experiments. This allows us to measure an algorithm's performance on real-world-like tasks when no ground-truth labels are available. To evaluate {BSUV-Net} performance on all videos in CDNet-2014, we used 18 different combinations of training/test video sets. \new{The splits are structured in such a manner that every video appears in the test set of exactly one split, but when it does so, it does not appear in the training set for that split. Detailed information about these sets can be found in the supplementary material.} Let us denote the $m$-th combination by $(V_{train}^m, \ V_{test}^m)$. Then, $\cup_{m=1}^{18} V_{test}^m$ is equal to the set of all 53 videos in CDNet-2014. During training, we used the 200 frames suggested in \cite{zeng2018mfcnnbgs} for each video in $V_{train}^m$. When training on different sets $V_{train}^m$, we used {\it exactly the same} hyperparameter values across all sets to make sure that we are not tuning our network to specific videos. In all of our experiments, we used the ADAM optimizer with a learning rate of $10^{-4}$, $\beta_1 = 0.9$, and $\beta_2=0.99$. The minibatch size was 8 and we trained for 50 epochs. As the empty background frame, we used the median of all foreground-free frames within the first 100 frames. In a few videos containing highway traffic, the first 100 frames did not contain a single foreground-free frame. For these videos, we hand-picked empty frames (e.g., in groups) and used their median as the empty reference. 
Although this may seem like a limitation, in practice one can randomly sample several hundred frames at the same time of the day across several days (similar illumination) and median filter them to obtain an empty background frame (due to random selection, a moving object is unlikely to occupy the same location in more than 50\% of frames). Since there is no single empty background frame in videos from the pan-tilt-zoom (PTZ) category, we slightly changed the inputs. Instead of ``empty background + recent background'' pair we used ``recent background + more recent background'' pair, where the recent background is computed as the median of 100 preceding frames and the more recent background is computed as the median of 30 preceding frames. Although {BSUV-Net} can accept frames of any spatial dimension, we used a {\it fixed} size of $224 \times 224$ pixels (randomly cropped from the input frame) so as to leverage parallel GPU processing in the training process. We applied random data augmentation at the beginning of each epoch. For illumination resilience, we used the data augmentation method of Section~\ref{sec:illres} with $\mathbf{d}[c] = \mathbf{I} + \mathbf{I}_c$, where $\mathbf{I}$ is the same for all $c$ and sampled from $\mathcal{N}(0, 0.1^2)$, while $\mathbf{I}_c$ is independently sampled from $\mathcal{N}(0, 0.04^2)$ for each $c$. We also added random Gaussian noise from $\mathcal{N}(0, 0.01^2)$ to each pixel in each color channel. For pixel values, we used double precision numbers that lie between $0$ and $1$. In the evaluation step, we did not apply any scaling or cropping to the inputs. To obtain binary maps, we applied thresholding with threshold $\theta=0.5$ to the output of the sigmoid layer of {BSUV-Net}. \subsection{Quantitative Results} \label{sec:results} \begin{table*}[t] \centering \caption{ \new{Comparison of methods for {\bf unseen videos} from CDNet-2014. For fairness, we separated the post-processing and self-contained algorithms.} } \smallskip \begin{tabular}{c|cc|ccccccc} \hline Method & $R$ & $R_{cat}$ & $Re$ & $Sp$ & $FPR$ & $FNR$ & $PWC$ & $Pr$ & $F_1$\\ \hline \multicolumn{10}{c}{\textit{Post-processing algorithms}} \\ {\bf BSUV-net} + SemanticBGS$^*$ & \textbf{9.00} & 14.00 & \textbf{0.8179} & 0.9944 & 0.0056 & \textbf{0.1821} & 1.1326 & \textbf{0.8319} & \textbf{0.7986}\\ IUTIS-5$^*$ + SemanticBGS$^*$ & 9.43 & 11.45 & 0.7890 & \textbf{0.9961} & \textbf{0.0039} & 0.2110 & \textbf{1.0722} & 0.8305 & 0.7892\\ IUTIS-5$^*$ & 11.43 & \textbf{10.36} & 0.7849 & 0.9948 & 0.0052 & 0.2151 & 1.1986 & 0.8087 & 0.7717 \\ \multicolumn{10}{c}{\textit{\Basealgo{}}} \\ {\bf BSUV-net} & \textbf{9.29} & \textbf{13.18} & \textbf{0.8203} & 0.9946 & 0.0054 & \textbf{0.1797} & \textbf{1.1402} & \textbf{0.8113} & \textbf{0.7868}\\ SWCD & 15.43 & 19.00 & 0.7839 & 0.9930 & 0.0070 & 0.2161 & 1.3414 & 0.7527 & 0.7583\\ WisenetMD & 16.29 & 15.18 & 0.8179 & 0.9904 & 0.0096 & 0.1821 & 1.6136 & 0.7535 & 0.7668\\ PAWCS & 14.00 & 15.45 & 0.7718 & \textbf{0.9949} & \textbf{0.0051} & 0.2282 & 1.1992 & 0.7857 & 0.7403\\ FgSegNet v2 & 44.57 &44.09& 0.5119& 0.9411& 0.0589 &0.4881& 7.3507& 0.4859& 0.3715\\ \hline \end{tabular} \label{table:overall_results} \end{table*} \begin{table*}[t] \centering \caption{ \new{Comparison of methods according to the per-category F-measure for {\bf unseen videos} from CDNet-2014.} } \smallskip \scalebox{0.8}{ \begin{tabular}{c|ccccccccccc|c} \hline Method & \!\!\! \makecell{Bad\\weather}\! & \!\! \makecell{Low\\framerate}\!\!\! & Night & \!\!\! 
PTZ & \!\!\!\!\!Thermal\!\! & \!\!\!\! Shadow\!\!& \!\! \makecell{Int.\ obj.\\motion} &\!\!\!\! \makecell{Camera\\jitter} \! & \!\! \makecell{Dynamic\\backgr.}\!\! & \makecell{Base-\\line} & \makecell{Turbu-\\lence} & Overall\\ \hline \multicolumn{13}{c}{\textit{Post-processing algorithms}} \\ \!\!{\bf BSUV-net} + SemanticBGS$^*$ & \textbf{0.8730} & 0.6788 & \textbf{0.6815} & \textbf{0.6562} & \textbf{0.8455} & \textbf{0.9664} & 0.7601 & 0.7788 & 0.8176 & \textbf{0.9640} & 0.7631 & \textbf{0.7986}\\ IUTIS-5$^*$ + SemanticBGS$^*$ & 0.8260 & 0.7888 & 0.5014 & 0.5673 & 0.8219 & 0.9478 & \textbf{0.7878} & \textbf{0.8388} & \textbf{0.9489} & 0.9604 & 0.6921 & 0.7892\\ IUTIS-5$^*$ & 0.8248 & \textbf{0.7743} & 0.5290 & 0.4282 & 0.8303 & 0.9084 & 0.7296 & 0.8332 & 0.8902 & 0.9567 & \textbf{0.7836} & 0.7717\\ \multicolumn{13}{c}{\textit{\Basealgo{}}} \\ {\bf BSUV-net} & \textbf{0.8713}& 0.6797& \textbf{0.6987}& \textbf{0.6282}& \textbf{0.8581}& 0.9233& 0.7499& 0.7743& 0.7967& \textbf{0.9693}& 0.7051 & 0.7868\\ RTSS & 0.8662 & 0.6771 & 0.5295 & 0.5489 & 0.8510 & \textbf{0.9551} & \textbf{0.7864} & \textbf{0.8396} & \textbf{0.9325} & 0.9597 & 0.7630 & \textbf{0.7917}\\ SWCD & 0.8233 & \textbf{0.7374} & 0.5807 & 0.4545 & \textbf{0.8581} & 0.8779 & 0.7092 & 0.7411 & 0.8645 & 0.9214 & 0.7735 & 0.7583\\ WisenetMD & 0.8616 & 0.6404 & 0.5701 & 0.3367 & 0.8152 & 0.8984 & 0.7264 & 0.8228 & 0.8376 & 0.9487 & \textbf{0.8304}& 0.7535\\ PAWCS & 0.8152 & 0.6588 & 0.4152 & 0.4615 & 0.8324 & 0.8913 & 0.7764 & 0.8137 & 0.8938 & 0.9397 & 0.6450 & 0.7403\\ FgSegNet v2 & 0.3277 & 0.2482 & 0.2800 & 0.3503 & 0.6038 & 0.5295 & 0.2002 & 0.4266 & 0.3634 & 0.6926 & 0.0643 & 0.3715\\ \hline \end{tabular} } \label{table:cat_results} \end{table*} \new{Table~\ref{table:overall_results} compares {BSUV-Net} against state-of-the-art BGS algorithms in terms of the seven metrics and two rankings discussed in Section \ref{sec:data_and_eval}. All quantitative results shown in this paper are computed by ``changedetection.net'' evaluation servers to reflect the real performance on test data. Since {BSUV-Net} is \textit{video-agnostic}, comparing it with \textit{video-optimized} or \textit{video-group-optimized} algorithms would not be fair and we omit them. Instead, we compare BSUV-Net with state-of-the-art unsupervised algorithms, namely SWCD \cite{icsik2018swcd}, WisenetMD \cite{lee2018wisenetmd} and PAWCS \cite{st2015pawcs} , which, by definition, are {\it video-agnostic}. We exclude RTSS \cite{zeng2019rtss} and $3DFR$ \cite{mandal20193dfr} in Table~\ref{table:overall_results} since their results on the test frames are not available on ``changedetection.net''. We include the results of IUTIS-5 \cite{bianco2017iutis} and SemanticBGS \cite{braham2017semanticbgs}, but we list them separately because these are \textit{post-processing} algorithms. Note that, both IUTIS-5 and SemanticBGS can be applied to any BGS algorithm from Table~\ref{table:overall_results}, including {BSUV-Net}, to improve its performance. To show this, we also report the result of {BSUV-Net} post-processed by SemanticBGS. In the \textit{\basealgo{}} category, we also list FgSegNet v2 \cite{lim2018fgsegnetv3} since it is currently the best performing algorithm on CDNet-2014. However, since FGSegNet v2's performance reported on ``changedetection.net'' has been obtained in a \textit{video-optimized} manner, we trained it anew in a {\it video-agnostic} manner using the same methodology that we used for BSUV-Net. 
As expected, this caused a huge performance decrease for FgSegNet v2 compared to its \textit{video-optimized} training. As is clear from Table~\ref{table:overall_results}, {BSUV-Net} outperforms its competitors on almost all of the metrics. The F-measure performance demonstrates that {BSUV-Net} achieves excellent results without compromising either recall or precision. Table~\ref{table:overall_results} also shows that the performance of {BSUV-Net} can be improved even further by combining it with SemanticBGS. The combined algorithm outperforms all of the {\it video-agnostic} algorithms that are available on ``changedetection.net''.} Table~\ref{table:cat_results} compares the per-category F-measure performance of {BSUV-Net} against state-of-the-art BGS algorithms. For RTSS \cite{zeng2019rtss}, the values of the performance metrics shown in Table~\ref{table:cat_results} are as reported in their paper. Columns 2-12 report the F-measure for each of the 11 categories from ``changedetection.net'', while the last column reports the mean F-measure across all categories. Similarly to Table~\ref{table:overall_results}, we divided this table into \textit{post-processing} and \textit{\basealgo{}}. It can be observed that {BSUV-Net} achieves the best performance in 5 out of 11 categories. It has a striking performance advantage in the ``night'' category. All videos in this category are traffic-related and many cars have their headlights turned on at night, which causes significant local illumination variations in time. {BSUV-Net}'s excellent performance in this category demonstrates that the proposed model is indeed largely illumination-invariant. \new{{BSUV-Net} performs worse than other algorithms in the ``camera jitter'' and ``dynamic background'' categories. We believe this is related to the empty and recent background frames we are using as input. The median operation used to compute background frames creates very blurry images for these categories since the background is not static. Thus, {BSUV-Net} predicts some pixels in the background as foreground and increases the number of false positives.} \subsection{Visual Results} A visual comparison of {BSUV-Net} with SWCD \cite{icsik2018swcd} and WisenetMD \cite{lee2018wisenetmd} is shown in Figure \ref{fig:vis_results}. Each column shows a sample frame from one of the videos in one of the 8 categories. It can be observed that {BSUV-Net} produces visually the best results for almost all categories. In the ``night'' category, SWCD and WisenetMD produce many false positives because of local illumination changes. {BSUV-Net} produces better results since it is designed to be illumination-invariant. In the ``shadow'' category, {BSUV-Net} performs much better in the shadow regions. Results in the ``intermittent object motion'' and ``baseline'' categories show that {BSUV-Net} can successfully detect intermittently-static objects. It is safe to say that {BSUV-Net} is capable of simultaneously handling the discovery of intermittently-static objects and dynamic factors such as illumination change. An inspection of results in the ``dynamic background'' category shows that {BSUV-Net} has detected most of the foreground pixels but failed to correctly classify some background pixels around the foreground objects. We believe this is due to the blurring effect of the median operation that we used in the computation of background frames. Using more advanced background models as an input to {BSUV-Net} might improve the performance in this category. 
\begin{figure*} \centering \includegraphics[width=\textwidth]{images/qualitative.jpg} \caption{ \new{Visual comparison of sample results from {BSUV-Net}, SWCD and WisenetMD on {\bf unseen videos} from CDNet-2014.} } \label{fig:vis_results} \end{figure*} \subsection{Ablation Study} \new{One of the contributions of {BSUV-Net} is its multi-channel input composed of two background frames from different time scales and a foreground probability map (FPM). Another contribution is temporal data augmentation tailored to handling illumination changes. In Table~\ref{table:ablation}, we explore their impact on precision, recall and F-measure. Each column on the left represents one characteristic and each row represents a different combination of these characteristics. RGB channels of the current frame are used in all of the combinations. ``Empty BG'' and ``Recent BG'' refer to the use of empty and/or recent background frames, respectively, in addition to the current frame. ``Data aug.'' refers to the temporal data augmentation described in Section~\ref{sec:illres} and ``FPM'' refers to the use of the semantic FPM channel in addition to the RGB channels for all input frames. It is clear that all these characteristics have a significant impact on the overall performance. Using only the current frame as input results in very poor metrics. The introduction of empty and/or recent background frames leads to a significant improvement. Adding temporal data augmentation and/or FPM channels further improves the performance, and the final network achieves state-of-the-art results.} \begin{table}[ht] \centering \caption{ \new{Impact of background frames, data augmentation for temporal illumination change and FPM on {BSUV-Net} performance.} } \smallskip \new{\scalebox{0.85}{ \begin{tabular}{ccccc|ccc} \hline \!\!\makecell{Current \\ frame}\!\! & \!\!\makecell{Empty \\ BG}\!\! & \!\!\makecell{Recent \\ BG}\!\! &\!\!\makecell{Data \\ aug.}\!\!& \!\!FPM\!\!& $Pr$\!& \!$Re$\!& \!$F_1$\\ \hline \hline \cmark && & & & 0.3615 & 0.5509 & 0.3476\\ \hline \cmark &\cmark & & & & 0.6994 & 0.7686 & 0.6819\\ \hline \cmark & & \cmark & & &0.6976 & 0.7064 & 0.6351\\ \hline \cmark &\cmark & \cmark & & & 0.7658 & 0.7606 & 0.7156\\ \hline \cmark &\cmark &\cmark &\cmark & & 0.7574 & 0.8159 & 0.7447\\ \hline \cmark &\cmark &\cmark & &\cmark & 0.7807 & 0.7747 & 0.7450\\ \hline \cmark &\cmark &\cmark &\cmark &\cmark & 0.8113 & 0.8203 & 0.7868\\ \hline \end{tabular}} } \label{table:ablation} \end{table} In this paper, we proposed to add a semantic FPM channel as input in order to improve our algorithm's performance. However, if the background and foreground object categories are chosen carefully, FPM can be used as a BGS algorithm by itself. This would require prior information about the video (to compute FPM) and, therefore, would not qualify as a \textit{video-agnostic} method. In our algorithm, however, we combine FPM information with RGB channels and background frames. When applying DeepLabv3 \cite{chen2017deeplab} to compute FPM frames, we pre-defined global background and foreground class categories, which might be wrong for some of the videos. We did not optimize the selection of these class categories but instead used those suggested in \cite{braham2017semanticbgs}. To demonstrate that our algorithm is not replicating FPM but leverages its semantic information to boost performance, we compared {BSUV-Net} with thresholded FPM used as a BGS result (Table~\ref{table:FPM}). 
It is clear that FPM alone is not a powerful tool for BGS as it is significantly outperformed by {BSUV-Net}. \begin{table}[ht] \centering \caption{ \new{Comparison of {BSUV-Net} with thresholded FPM used as a BGS result (probability threshold equals 0.5).} } \smallskip \begin{tabular}{c|ccc} Method & $Pr$ & $Re$& $F_1$\\ \hline \hline FPM & 0.6549 & 0.6654 & 0.5846 \\ BSUV-Net & 0.8113 & 0.8203 & 0.7868\\ \end{tabular} \label{table:FPM} \end{table} While in {BSUV-Net} we assume that the \textit{empty background frame} is foreground-free, CDNet-2014 does not provide empty background frames. Therefore, in some videos, we manually selected empty background frames from among the initial frames as explained in Section~\ref{sec:train-evaluate}. In Table~\ref{table:empty}, we show the impact of this manual process by comparing the manual selection strategy with an automatic one, that is, using the median of all frames in the test video as the \textit{empty background frame}. Clearly, the automatic selection slightly improves precision while significantly decreasing recall. We believe this is due to the increase of false negatives caused by the appearance of some of the foreground objects in the empty background. Since videos in CDNet-2014 are rather short (at most 10 minutes), in some cases the median of all frames does not result in an empty background. However, for stationary surveillance cameras in a real-life scenario it is often possible to compute an empty background, for example by taking the median of frames at the same time of the day (when it is expected to be empty) over many days. \begin{table}[ht] \centering \caption{ \new{Comparison of manual and automatic selection of empty background frames in {BSUV-Net}.} } \smallskip \begin{tabular}{c|ccc} \makecell{Empty background \\ selection} & $Pr$& $Re$& $F_1$\\ \hline \hline Automatic & 0.8207 & 0.7812 & 0.7639\\ Manual & 0.8113 & 0.8203 & 0.7868\\ \end{tabular} \label{table:empty} \end{table} \section{Conclusions and Future Work} \label{sec:discuss} We introduced a novel deep-learning algorithm for background subtraction of unseen videos and proposed a \textit{video-agnostic} evaluation methodology that treats each video in a dataset as unseen. The input to {BSUV-Net} consists of the current frame and two reference frames from different time-scales, along with semantic information for all three frames (computed using DeepLabv3 \cite{chen2017deeplab}). To increase the generalization capacity of {BSUV-Net}, we formulated a simple, yet effective, illumination-change model. Experimental results on CDNet-2014 show that {BSUV-Net} outperforms state-of-the-art \textit{video-agnostic} BGS algorithms in terms of 7 out of 9 performance metrics. Its performance can be further improved by adding SemanticBGS \cite{braham2017semanticbgs} as a post-processing layer. This shows great potential for deep-learning BGS algorithms designed for unseen or unlabeled videos. In the future, we are planning further work on temporal data-augmentation techniques to improve performance for challenging categories, such as ``dynamic background'' and ``camera jitter''. We will also investigate different background models for the reference frames. In this work, we kept our focus on designing a high-performance, supervised BGS algorithm for unseen videos without considering the computational complexity. To bring {BSUV-Net} closer to real-time performance, we are also planning to study a shallow-network implementation designed for fast inference. 
\bibliographystyle{plain} \section*{\centering{\huge{Appendices}}} \section*{Selection of training/test sets for \\ evaluation on unseen videos} In this paper, we introduced a supervised background subtraction (BGS) algorithm for unseen videos. As for all supervised learning algorithms, the size and diversity of the training data are crucially important for the learning process. Generally speaking, for most of the state-of-the-art deep neural networks, the best approach is to use all of the available training data. Unfortunately, CDNet-2014 \cite{goyette2012changedetection} does not provide different videos for training and testing. Instead, it provides some frames from each video as training data and the remaining ones as test data. However, this type of division is not useful for evaluating the performance on {\it unseen} videos. For comparing the performance of different models on \textit{unseen} videos, we split the dataset into 18 different sets of training/testing videos as shown in Tables~\ref{table:traintest1} and \ref{table:traintest2}. When training a supervised algorithm, the main assumption is that the training set is diverse enough to cover a wide range of test scenarios. For example, if there are no examples that include shadow in the training set, then it is impossible for the network to learn how to classify shadow regions. Therefore, we designed the splits so that the training set for each split contains some videos from the same category as the test videos. We did not perform a full ``leave-$k$-videos-out'' cross-validation due to the prohibitive time needed to train {BSUV-Net}. In all of the tests, we used videos from ``baseline'', ``bad weather'', ``intermittent object motion'', ``low frame rate'' and ``shadow'' categories during training since they span most of the common scenarios. For videos from more difficult scenarios, we progressively added additional categories into the training set. In particular, we considered ``PTZ'', ``thermal'' and ``turbulence'' categories as the most difficult ones since they have substantially different data characteristics from other categories. ``PTZ'' is the only category with significant camera movement and zoom in/out, while ``thermal'' and ``turbulence'' categories capture different scene properties than the remaining categories (far- and near-infrared spectrum instead of RGB, respectively). For these 3 categories, we used more videos in the training set than in the other categories. Please note that a ``leave-$k$-videos-out'' approach would have more videos in the training set compared to our splits and is therefore likely to yield better results. 
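As a non-authoritative illustration of the evaluation protocol implied by these splits, the short loop below sketches how every video ends up being evaluated exactly once by a model that never saw it during training; the \texttt{splits} mapping and the \texttt{train} and \texttt{evaluate} callables are placeholders, not part of the released implementation.
\begin{verbatim}
# Sketch of the video-agnostic evaluation loop; `splits` maps each split id to
# its (training videos, test videos) pair as listed in the tables below.
def video_agnostic_evaluation(splits, train, evaluate):
    results = {}
    for split_id, (train_videos, test_videos) in splits.items():
        assert not set(train_videos) & set(test_videos)  # test videos are unseen
        model = train(train_videos)      # same hyperparameters for every split
        for video in test_videos:
            results[video] = evaluate(model, video)
    return results                       # one result per video in the dataset
\end{verbatim}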
\begin{table*} \floatpagestyle{empty} \centering \caption{Training and test splits $\mathbf{S_1}$ to $\mathbf{S_{12}}$ used for evaluation.} \scalebox{0.92}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \textbf{category} & \textbf{video} & $\mathbf{S_1}$& $\mathbf{S_2}$& $\mathbf{S_3}$& $\mathbf{S_4}$& $\mathbf{S_5}$& $\mathbf{S_6}$& $\mathbf{S_7}$& $\mathbf{S_8}$& $\mathbf{S_9}$& $\mathbf{S_{10}}$& $\mathbf{S_{11}}$& $\mathbf{S_{12}}$ \\ \hline \hline \multirow{4}{*}{baseline} & highway & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & pedestrians & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & office & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & PETS2006 & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \hline \hline \multirow{4}{*}{\makecell{bad\\weather}} & skating & \color{blue}{Tr} & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & blizzard & \color{blue}{Tr} & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & snowFall & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & wetSnow & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \hline \hline \multirow{6}{*}{\makecell{intermittent\\object\\motion}} & abandonedBox & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & parking & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & sofa & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & streetLight & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & 
\color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & tramstop & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & winterDriveway & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \hline \hline \multirow{4}{*}{\makecell{low\\framerate}} & port 0.17fps & \color{blue}{Tr} & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & tramCrossroad 1fps & \color{blue}{Tr} & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & tunnelExit 0.35fps & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & turnpike 0.5fps & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \hline \hline \multirow{6}{*}{shadow} & backdoor & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & bungalows & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & busStation & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & copyMachine & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & cubicle & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-14} & peopleInShade & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}& \color{blue}{Tr} & \color{blue}{Tr}\\ \hline \hline \multirow{4}{*}{\makecell{camera\\jitter}} & badminton & & & & & & & \color{red}{Test} & \color{blue}{Tr} & & & & \\ \cline{2-14} & traffic & & & & & & & \color{red}{Test} & \color{blue}{Tr} & & & & \\ \cline{2-14} & boulevard & & & & & & & \color{blue}{Tr} & \color{red}{Test} & & & & \\ 
\cline{2-14} & sidewalk & & & & & & & \color{blue}{Tr} & \color{red}{Test} & & & & \\ \hline \hline \multirow{6}{*}{\makecell{dynamic\\background}} & boats & & & & & & & & & \color{blue}{Tr} & \color{red}{Test} & &\\ \cline{2-14} & canoe & & & & & & & & & \color{red}{Test} & \color{blue}{Tr} & &\\ \cline{2-14} & fall & & & & & & & & & \color{red}{Test} & \color{blue}{Tr} & &\\ \cline{2-14} & fountain01 & & & & & & & & & \color{blue}{Tr} & \color{red}{Test} & &\\ \cline{2-14} & fountain02 & & & & & & & & & \color{red}{Test} & \color{blue}{Tr} & &\\ \cline{2-14} & overpass & & & & & & & & & \color{blue}{Tr} & \color{red}{Test} & &\\ \hline \hline \multirow{6}{*}{\makecell{night\\videos}} & bridgeEntry & & & & & & & & & & & \color{red}{Test} & \color{blue}{Tr}\\ \cline{2-14} & busyBoulvard & & & & & & & & & & & \color{blue}{Tr} & \color{red}{Test}\\ \cline{2-14} & fluidHighway & & & & & & & & & & & \color{red}{Test} & \color{blue}{Tr}\\ \cline{2-14} & streetCornerAtNight & & & & & & & & & & & \color{blue}{Tr} & \color{red}{Test}\\ \cline{2-14} & tramStation & & & & & & & & & & & \color{red}{Test} & \color{blue}{Tr}\\ \cline{2-14} & winterStreet & & & & & & & & & & & \color{blue}{Tr} & \color{red}{Test}\\ \hline \hline \multirow{4}{*}{PTZ} & continuousPan & & & & & & & & & & & & \\ \cline{2-14} & intermittentPan & & & & & & & & & & & & \\ \cline{2-14} & twoPositionPTZCam & & & & & & & & & & & & \\ \cline{2-14} & zoomInZoomOut & & & & & & & & & & & & \\ \hline \hline \multirow{5}{*}{thermal} & corridor & & & & & & & & & & & & \\ \cline{2-14} & diningRoom & & & & & & & & & & & & \\ \cline{2-14} & lakeSide & & & & & & & & & & & & \\ \cline{2-14} & library & & & & & & & & & & & & \\ \cline{2-14} & park & & & & & & & & & & & & \\ \hline \hline \multirow{4}{*}{turbulence} & turbulence0 & & & & & & & & & & & & \\ \cline{2-14} & turbulence1 & & & & & & & & & & & & \\ \cline{2-14} & turbulence2 & & & & & & & & & & & & \\ \cline{2-14} & turbulence3 & & & & & & & & & & & & \\ \hline \end{tabular}} \label{table:traintest1} \end{table*} \begin{table*} \floatpagestyle{empty} \centering \caption{Training and test splits $\mathbf{S_{13}}$ to $\mathbf{S_{18}}$ used for evaluation.} \scalebox{0.92}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \textbf{category} & \textbf{video} & $\mathbf{S_{13}}$& $\mathbf{S_{14}}$& $\mathbf{S_{15}}$& $\mathbf{S_{16}}$& $\mathbf{S_{17}}$& $\mathbf{S_{18}}$ \\ \hline \hline \multirow{4}{*}{baseline} & highway & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & pedestrians & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & office & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & PETS2006 & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \hline \hline \multirow{4}{*}{\makecell{bad\\weather}} & skating & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & blizzard & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & snowFall & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & wetSnow & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & 
\color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \hline \hline \multirow{6}{*}{\makecell{intermittent\\object\\motion}} & abandonedBox & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} \\ \cline{2-8} & parking & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} \\ \cline{2-8} & sofa & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} \\ \cline{2-8} & streetLight & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} \\ \cline{2-8} & tramstop & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} \\ \cline{2-8} & winterDriveway & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} \\ \hline \hline \multirow{4}{*}{\makecell{low\\framerate}} & port 0.17fps & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & tramCrossroad 1fps & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} \\ \cline{2-8} & tunnelExit 0.35fps & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & turnpike 0.5fps & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \hline \hline \multirow{6}{*}{shadow} & backdoor & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & bungalows & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & busStation & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & copyMachine & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} \\ \cline{2-8} & cubicle & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} \\ \cline{2-8} & peopleInShade & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \hline \hline \multirow{4}{*}{\makecell{camera\\jitter}} & badminton & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & traffic & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & boulevard & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & sidewalk & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \hline \hline \multirow{6}{*}{\makecell{dynamic\\background}} & boats & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & canoe & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & fall & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & fountain01 & 
\color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & fountain02 & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & overpass & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \hline \hline \multirow{6}{*}{\makecell{night\\videos}} & bridgeEntry & & & & & & \\ \cline{2-8} & busyBoulvard & & & & & & \\ \cline{2-8} & fluidHighway & & & & & & \\ \cline{2-8} & streetCornerAtNight & & & & & & \\ \cline{2-8} & tramStation & & & & & & \\ \cline{2-8} & winterStreet & & & & & & \\ \hline \hline \multirow{4}{*}{PTZ} & continuousPan & \color{blue}{Tr} & \color{red}{Test} & & & & \\ \cline{2-8} & intermittentPan & \color{blue}{Tr} & \color{red}{Test} & & & & \\ \cline{2-8} & twoPositionPTZCam & \color{red}{Test} & \color{blue}{Tr} & & & & \\ \cline{2-8} & zoomInZoomOut & \color{red}{Test} & \color{blue}{Tr} & & & & \\ \hline \hline \multirow{5}{*}{thermal} & corridor & & & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & diningRoom & & & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & lakeSide & & & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & library & & & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \cline{2-8} & park & & & \color{red}{Test} & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr}\\ \hline \hline \multirow{4}{*}{turbulence} & turbulence0 & & & \color{blue}{Tr} & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr}\\ \cline{2-8} & turbulence1 & & & \color{blue}{Tr} & \color{blue}{Tr} & \color{red}{Test} & \color{blue}{Tr}\\ \cline{2-8} & turbulence2 & & & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{red}{Test}\\ \cline{2-8} & turbulence3 & & & \color{blue}{Tr} & \color{blue}{Tr} & \color{blue}{Tr} & \color{red}{Test}\\ \hline \end{tabular}} \label{table:traintest2} \end{table*}
1,108,101,563,095
arxiv
\section{Introduction} Convolutional neural network (CNN) based object detectors \cite{girshick2015fast,he2017mask,ren2015faster,liu2016ssd,lin2017focal} have achieved state-of-the-art performance due to the strong discriminative power and generalization ability. However, the CNN based detection methods require massive computation and storage resources to achieve ideal performance, which limits their deployment on mobile devices. Therefore, it is desirable to develop detectors with lightweight architectures and few parameters. To reduce the complexity of deep neural networks, several model compression methods have been proposed including pruning \cite{molchanov2019importance,zhao2019variational,he2017channel}, low-rank decomposition \cite{lin2018holistic,peng2018extreme,kim2019efficient}, quantization \cite{wang2019learning,li2019fully,gong2019differentiable}, knowledge distillation \cite{wei2018quantization,wang2019private,chen2017learning}, architecture design \cite{sandler2018mobilenetv2,zhang2018shufflenet,qin2019thundernet} and architecture search \cite{wu2019fbnet,tan2019mnasnet}. Among these methods, network quantization reduces the bitwidth of the network parameters and activations for efficient inference. In the extreme cases, binarizing weights and activations of neural networks decreases the storage and computation cost by $32\times$ and $64\times$ respectively. However, deploying binary neural networks with constrained representational capacity in object detection causes numerous false positives due to the information redundancy in the networks. In this paper, we present a BiDet method to learn binarized neural networks including the backbone part and the detection part for efficient object detection. Unlike existing methods which directly binarize the weights and activations in one-stage or two-stage detectors, our method fully utilizes the representational capacity of the binary neural networks for object detection via redundancy removal, so that the detection precision is enhanced with false positive elimination. More specifically, we impose the information bottleneck (IB) principle on binarized object detector learning, where we simultaneously limit the amount of information in high-level feature maps and maximize the mutual information between object detection and the learned feature maps. Meanwhile, the learned sparse object priors are utilized in IB, so that the posteriors are enforced to be concentrated on informative prediction and the uninformative false positives are eliminated. Figure \ref{fp} (a) and (b) show an example of predicted positives obtained by Xnor-Net \cite{rastegari2016xnor} and our BiDet respectively, where the false positives are significantly reduced in the latter. Figure \ref{fp} (c) and (d) depict the information plane dynamics for the training and test sets respectively, where our BiDet removes the information redundancy and fully utilizes the representational power of the networks. Extensive experiments on the PASCAL VOC \cite{everingham2010pascal} and COCO \cite{lin2014microsoft} datasets show that our BiDet outperforms the state-of-the-art binary neural networks in object detection across various architectures. Moreover, BiDet can be integrated with other compact object detectors to acquire faster speedup and less storage. Our contributions include: \begin{itemize}[leftmargin=*] \item To the best of our knowledge, we propose the first binarized networks containing the backbone and detection parts for efficient object detection. 
\item We employ the IB principle for redundancy removal to fully utilize the capacity of binary neural networks and learn the sparse object priors to concentrate posteriors on informative detection prediction, so that the detection accuracy is enhanced with false positive elimination. \item We evaluate the proposed BiDet on the PASCAL VOC and the large-scale COCO datasets for comprehensive comparison with state-of-the-art binary neural networks in object detection. \end{itemize} \section{Related Work} \textbf{Network Quantization: }Network quantization has been widely studied in recent years due to its efficiency in storage and computation. Existing methods can be divided into two categories: neural networks with weights and activations in one bit, and those in multiple bits. Binary neural networks reduce the model complexity significantly due to the extremely high compression ratio. Hubara \etal \cite{hubara2016binarized} and Rastegari \etal \cite{rastegari2016xnor} binarized both weights and activations in neural networks and replaced the multiply-accumulate operations with xnor and bitcount operations, where straight-through estimators were applied to relax the non-differentiable sign function for back-propagation. Liu \etal \cite{liu2018bi} added extra shortcuts between consecutive convolutional blocks to strengthen the representational capacity of the network. They also used custom gradients to optimize the non-differentiable networks. Since binary neural networks perform poorly on difficult tasks such as object detection due to their low representational capacity, multi-bit quantization strategies with wider bitwidths have been proposed. Jacob \etal \cite{jacob2018quantization} presented an 8-bit quantized model for inference in object detection and their method can be integrated with efficient architectures. Wei \etal \cite{wei2018quantization} applied knowledge distillation to learn small 8-bit neural networks from large full-precision models. Li \etal \cite{li2019fully} proposed fully quantized neural networks in four bits with hardware-friendly implementation. Meanwhile, the instabilities during training were overcome by the presented techniques. Nevertheless, multi-bit neural networks still suffer from heavy storage and computation cost. Directly applying binary neural networks with constrained representational power in object detection leads to numerous false positives and significantly degrades the performance due to the information redundancy in the networks. \iffalse Yang \etal \cite{yang2019quantization} leveraged the soft quantization strategy by approximating the rigid sign function with the sigmoid layer, where the discrepancy between the optimization objective and the gradient was minimized. \fi \textbf{Object Detection: }Object detection has attracted widespread interest in computer vision due to its wide range of applications. Modern CNN-based detectors are categorized into two-stage and one-stage detectors. In the former, R-CNN \cite{girshick2014rich} was among the earliest CNN-based detectors with the pipeline of bounding box regression and classification. Progressive improvements were proposed for better efficiency and effectiveness. Fast R-CNN \cite{girshick2015fast} presented ROI pooling in the detection framework to achieve better accuracy and faster inference. Faster R-CNN \cite{ren2015faster} proposed the Region Proposal Networks to effectively generate region proposals instead of hand-crafted ones. 
FPN \cite{lin2017feature} introduced top-down architectures with lateral connections and multi-scale features to integrate low-level and high-level features. In the latter category, SSD \cite{liu2016ssd} and YOLO \cite{redmon2016you} directly predicted the bounding boxes and classes without region proposal generation, so that real-time inference was achieved on GPUs with competitive accuracy. RetinaNet \cite{lin2017focal} proposed the focal loss to solve the problem of foreground-background class imbalance. However, CNN-based detectors suffer from heavy storage and computational cost, which limits their deployment. \begin{figure*}[t] \centering \includegraphics[height=5cm, width=14.5cm]{BiDet_pipeline.pdf} \caption{The pipeline of the information bottleneck based detectors, which consist of the backbone part and the detection part. The solid line represents the forward propagation in the network, while the dashed line means sampling from a parameterized distribution $\Phi$. The high-level feature map $F$ is sampled from the distribution parameterized by the backbone network. Both one-stage and two-stage detector frameworks can be employed in the detection part of our BiDet. For the one-stage detectors, the head network parameterizes the distribution of object classes and location. For two-stage detectors, Region Proposal Networks (RPN) parameterize the prior distribution of location and the posteriors are parameterized by the refining networks. (best viewed in color).} \vspace{-0.3cm} \label{pipeline} \end{figure*} \textbf{Information Bottleneck: }The information bottleneck (IB) principle was first proposed in \cite{tishby2000information} with the goal of extracting the information of the input that is relevant to the task, and it is therefore widely applied in compression. The IB principle enforces the mutual information between the input and the learned features to be minimized while simultaneously maximizing the mutual information between the features and the ground truth of the task. Louizos \etal \cite{louizos2017bayesian} and Ullrich \etal \cite{ullrich2017soft} utilized the Minimum Description Length (MDL) principle, which is equivalent to IB, to stochastically quantize deep neural networks. Moreover, they used sparse horseshoe and Gaussian mixture priors for weight learning in order to reduce the quantization errors. Dai \etal \cite{dai2018compressing} pruned individual neurons via variational IB so that redundancy between adjacent layers was minimized by aggregating useful information in a subset of neurons. Besides network compression, IB is also utilized in compact feature learning. Amjad \etal \cite{amjad2019learning} proposed stochastic deep neural networks where IB could be utilized to learn efficient representations for classification. Shen \etal \cite{shen2019embarrassingly} imposed IB on existing hash models to generate effective binary representations so that the data semantics were fully utilized. In this paper, we extend the IB principle to squeeze the redundancy in binary detection networks, so that the false positives are alleviated and the detection precision is significantly enhanced. \section{Approach} In this section, we first extend the IB principle, which removes the information redundancy, to object detection. Then we present the details of learning the sparse object priors for object detection, which concentrate posteriors on informative prediction with false positive elimination. Finally, we propose the efficient binarized object detectors. 
\subsection{Information Bottleneck for Object Detection} The information bottleneck (IB) principle directly relates to compression under the hypothesis that the data misfit and the model complexity should be minimized simultaneously, so that redundant information irrelevant to the task is excluded from the compressed model and the capacity of the lightweight model is fully utilized. The task of object detection can be regarded as a Markov process with the following Markov chain: \begin{align} X\rightarrow F\rightarrow L,C \end{align} where $X$ means the input images and $F$ stands for the high-level feature maps output by the backbone part. $C$ and $L$ represent the predicted classes and location of the objects respectively. According to the Markov chain, the objective of the IB principle is written as follows: \begin{align} \min\limits_{\phi_b,\phi_d} ~~ I(X;F)-\beta I(F;C,L) \label{IB} \end{align} where $\phi_b$ and $\phi_d$ are the parameters of the backbone and the detection part respectively. $I(X;Y)$ means the mutual information between two random variables $X$ and $Y$. Minimizing the mutual information between the images and the high-level feature maps constrains the amount of information that the detector extracts, and maximizing the mutual information between the high-level feature maps and the detection predictions encourages the detector to preserve more information related to the task. As a result, the redundant information irrelevant to object detection is removed. Figure \ref{pipeline} shows the pipeline of information bottleneck based detectors; the IB principle can be imposed on conventional one-stage and two-stage detectors. We rewrite the first term of (\ref{IB}) according to the definition of mutual information: \begin{align} I(X;F)=\mathbb{E}_{\bm{x}\sim p(\bm{x})}\mathbb{E}_{\bm{f}\sim p(\bm{f}|\bm{x})}\log\frac{ p(\bm{f}|\bm{x})}{p(\bm{f})} \label{feature_map} \end{align}where $\bm{x}$ and $\bm{f}$ are the specific input images and the corresponding high-level feature maps. $p(\bm{x})$ and $p(\bm{f})$ are the prior distributions of $\bm{x}$ and $\bm{f}$ respectively, and $\mathbb{E}$ represents the expectation. $p(\bm{f}|\bm{x})$ is the posterior distribution of the high-level feature map conditioned on the input. We parameterize $p(\bm{f}|\bm{x})$ by the backbone due to its intractability, where evidence-lower-bound (ELBO) minimization is applied for relaxation. To estimate $I(X;F)$, we sample the training set to obtain the image $\bm{x}$ and sample the distribution parameterized by the backbone to acquire the corresponding high-level feature map $\bm{f}$. \begin{figure}[t] \centering \includegraphics[height=6.5cm, width=8cm]{BiDet_priors.pdf} \caption{The detected objects and the corresponding confidence scores (a) before and (b) after optimizing (\ref{alter_sparse}). The contrast of confidence scores among different detected objects is significantly enlarged by minimizing the alternative objective. As the NMS eliminates the positives with confidence scores lower than the threshold, the sparse object priors are acquired and the posteriors are enforced to be concentrated on informative prediction. (best viewed in color).} \vspace{-0.3cm} \label{priors} \end{figure} The location and classification of objects based on the high-level feature map are independent, as the bounding box location and the classification probability are obtained via different network branches in the detection part. 
The mutual information in the second term of (\ref{IB}) is factorized: \begin{align} I(F;C,L)=I(F;C)+I(F;L) \end{align}Similar to (\ref{feature_map}), we rewrite the mutual information between the high-level feature maps and the classes as follows: \begin{align} I(F;C)=\mathbb{E}_{\bm{f}\sim p(\bm{f}|\bm{x})}\mathbb{E}_{\bm{c}\sim p(\bm{c}|\bm{f})}\log\frac{p(\bm{c}|\bm{f})}{p(\bm{c})} \label{mi_class} \end{align}where $\bm{c}$ is the object class labels including the background class. $p(\bm{c})$ and $p(\bm{c}|\bm{f})$ are the prior class distribution and posterior class distribution when given the feature maps respectively. Same as the calculation of (\ref{feature_map}), we employ the classification branch networks in the detection part to parameterize the distribution. Meanwhile, we divide the images to blocks for multiple object detection. For one-stage detectors such as SSD \cite{liu2016ssd}, we project the high-level feature map cells to the raw image to obtain the block partition. For two-stage detectors such as Faster R-CNN \cite{ren2015faster}, we scale the ROI to the original image for block split. $\bm{c}\in \mathbb{Z}^{1\times b}$ represents the object class in $b$ blocks of the image. We define $c_{i}$ as the $i_{th}$ element of $\bm{c}$, which demonstrates the class of the object whose center is in the $i_{th}$ block of the image. The class of a block is assigned to background if the block does not contain the center of any groundtruth objects. As the localization contains shift parameters and scale parameters for anchors, we rewrite the mutual information between the object location and high-level feature maps: \vspace{-0.3cm} \begin{small} \begin{align*} I(F;L)=\mathbb{E}_{\bm{f}\sim p(\bm{f}|\bm{x})}\mathbb{E}_{\bm{l}_1\sim p(\bm{l}_1|\bm{f})}\mathbb{E}_{\bm{l}_2\sim p(\bm{l}_2|\bm{f})}\log\frac{p(\bm{l}_1|\bm{f})p(\bm{l}_2|\bm{f})}{p(\bm{l}_1)p(\bm{l}_2)} \end{align*} \end{small}where $\bm{l}_1 \in \mathbb{R}^{2\times b}$ represents the horizontal and vertical shift offset of the anchors in $b$ blocks of the image, and $\bm{l}_2 \in \mathbb{R}^{2\times b}$ means the height and width scale offset of the anchors. For the anchor whose center $(x,y)$ is in the $j_{th}$ block with height $h$ and width $w$, the offset changes the bounding box in the following way: $(x,y)\rightarrow (x,y)+\bm{l}_{1,j}$ and $(h,w)\rightarrow (h,w)\cdot exp(\bm{l}_{2,j})$, where $\bm{l}_{1,j}$ and $\bm{l}_{2,j}$ represent the $j_{th}$ column of $\bm{l}_1$ and $\bm{l}_2$. The priors and the posteriors of shift offset conditioned on the feature maps are denoted as $p(\bm{l}_1)$ and $p(\bm{l}_1|\bm{f})$ respectively. Similarly, the scaling offset has the prior and the posteriors given feature maps $p(\bm{l}_2)$ and $p(\bm{l}_2|\bm{f})$. We leverage the localization branch networks in the detection part for distribution parameterization. \subsection{Learning Sparse Object Priors} Since the feature maps are binarized in BiDet, we utilize the binomial distribution with equal probability as the priors for each element of the high-level feature map $\bm{f}$. We assign the priors for object localization in the following form: $p(\bm{l}_{1,j})= N(\bm{\mu}_{1,j}^0,\bm{\Sigma}_{1,j}^0)$ and $p(\bm{l}_{2,j})= N(\bm{\mu}_{2,j}^0,\bm{\Sigma}_{2,j}^0)$, where $N(\bm{\mu}, \bm{\Sigma})$ means the Gaussian distribution with mean $\bm{\mu}$ and covariance matrix $\bm{\Sigma}$. 
For one-stage detectors, the object localization priors $p(\bm{l}_{1,j})$ and $p(\bm{l}_{2,j})$ are hypothesized to be the two-dimensional standard normal distribution. For two-stage detectors, Region Proposal Networks (RPN) output the parameters of the Gaussian priors. As numerous false positives emerge in the binary detection networks, learning sparse object priors for the detection part enforces the posteriors to be concentrated on informative detection prediction with false positive elimination. The priors for object classification are defined as follows: \begin{align*} p(c_i)=\mathbb{I}_{M_i}\cdot cat(\frac{1}{n+1}\cdot\bm{1}^{n+1})+(1-\mathbb{I}_{M_i})\cdot cat([1,\bm{0}^{n}]) \end{align*}where $\mathbb{I}_x$ is the indicator function with $\mathbb{I}_1=1$ and $\mathbb{I}_0=0$, and $M_i$ is the $i_{th}$ element of the block mask $\bm{M}\in\{0,1\}^{1\times b}$. $cat(\bm{K})$ means the categorical distribution with the parameter $\bm{K}$. $\bm{1}^{n}$ and $\bm{0}^{n}$ are the all-one and all-zero vectors in $n$ dimensions respectively, where $n$ is the number of classes. The categorical distribution with equal probability is utilized for the class prior in the $i_{th}$ block if $M_i$ equals one. Otherwise, the categorical distribution with probability $1$ for background and zero probability for the other classes is leveraged for the prior class distribution. When $M_i$ equals zero, the detection part deterministically predicts the background for object classification in the $i_{th}$ block according to (\ref{mi_class}). In order to obtain sparse priors for object classification with fewer predicted positives, we minimize the $L_1$ norm of the block mask $\bm{M}$. Due to the non-differentiability of $\bm{M}$, we propose an alternative way to optimize it, where the objective is written as follows: \begin{align} \min\limits_{s_i} -\frac{1}{m}\sum_{i=1}^{m}s_i \log s_i \label{alter_sparse} \end{align}where $m=||\bm{M}||_1$ represents the number of detected foreground objects in the image, and $s_i$ is the normalized confidence score for the $i_{th}$ predicted foreground object with $\sum_{i=1}^{m}s_i=1$. As shown in Figure \ref{priors}, minimizing (\ref{alter_sparse}) increases the contrast of confidence scores among different predicted objects, and predicted objects with low confidence scores are assigned to be negative by the non-maximum suppression (NMS) algorithm. Therefore, the block mask becomes sparser with fewer predicted objects, and the posteriors are concentrated on informative prediction with uninformative false positive elimination. \subsection{Efficient Binarized Object Detectors} In this section, we first briefly introduce neural networks with binary weights and activations, and then detail the learning objective of our BiDet. Let $\bm{W^l_r}$ be the real-valued weights and $\bm{A^l_r}$ be the full-precision activations of the $l_{th}$ layer in a given $L$-layer detection model. During the forward propagation, the weights and activations are binarized via the sign function: $\bm{W^l_b}=sign(\bm{W^l_r})$ and $\bm{A^l_b}=sign(\bm{A^l_r})$ with $\bm{A^l_r}=\bm{W^l_b}\odot\bm{A^{l-1}_b}$, where $sign$ means the element-wise sign function which maps numbers larger than zero to one and all others to minus one, and $\odot$ indicates the element-wise binary product consisting of xnor and bitcount operations. Due to the non-differentiability of the sign function, the straight-through estimator (STE) is employed to calculate approximate gradients and update the real-valued weights in the back-propagation stage. 
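To make the two ingredients just described concrete, the following PyTorch-style sketch shows a sign binarization with a straight-through estimator together with the entropy-style surrogate of (\ref{alter_sparse}). The class and function names are our own illustration rather than the authors' released code, and Xnor-style scaling factors are omitted.

\begin{verbatim}
import torch
import torch.nn.functional as F

class BinarySign(torch.autograd.Function):
    """Element-wise sign with a straight-through estimator (STE)."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x > 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # STE: pass the gradient through, clipped to the region |x| <= 1
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

class BinaryConv2d(torch.nn.Conv2d):
    """Convolution whose weights and inputs are binarized in the forward pass."""
    def forward(self, x):
        wb = BinarySign.apply(self.weight)
        xb = BinarySign.apply(x)
        return F.conv2d(xb, wb, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

def sparse_prior_loss(scores, eps=1e-12):
    """Entropy-style surrogate of the sparse-prior objective:
    -(1/m) * sum_i s_i log s_i over the m predicted foreground objects,
    with confidence scores normalized to sum to one."""
    s = scores / (scores.sum() + eps)
    return -(s * (s + eps).log()).sum() / scores.numel()
\end{verbatim}

During training, the value returned by \texttt{sparse\_prior\_loss} would be added to the detection loss with weight $\gamma$, matching the last term of the learning objective given below.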
The learning objective of the proposed BiDet is written as follows: \vspace{-0.5cm} \begin{small} \begin{align} &\min J = J_1+J_2\notag\\ &=(\sum_{t,s}\log \frac{ p(f_{st}|\bm{x})}{p(f_{st})}-\beta\sum_{i=1}^{b}\log\frac{p(c_i|\bm{f})p(\bm{l}_{1,i}|\bm{f})p(\bm{l}_{2,i}|\bm{f})}{p(c_i)p(\bm{l}_{1,i})p(\bm{l}_{2,i})})\notag\\ & \quad-\gamma\cdot\frac{1}{m}\sum_{i=1}^{m}s_i \log s_i \label{objective} \end{align} \end{small}where $\gamma$ is a hyperparameter that balances the importance of false positive elimination. The posterior distribution $p(c_i|\bm{f})$ is hypothesized to be the categorical distribution $cat(\bm{K}_i)$, where $\bm{K}_i\in \mathbb{R}^{1\times(n+1)}$ is the parameter and $n$ is the number of classes. We assume the posteriors of the shift and scale offsets follow Gaussian distributions: $p(\bm{l}_{1,j}|\bm{f})= N(\bm{\mu}_{1,j},\bm{\Sigma}_{1,j})$ and $p(\bm{l}_{2,j}|\bm{f})= N(\bm{\mu}_{2,j},\bm{\Sigma}_{2,j})$. The posterior of the element in the $s_{th}$ row and $t_{th}$ column of the binary high-level feature maps, $p(f_{st}|\bm{x})$, is assigned the binomial distribution $cat([p_{st},1-p_{st}])$, where $p_{st}$ is the probability for $f_{st}$ to be one. All the posterior distributions are parameterized by the neural networks. $J_1$ represents the information bottleneck employed in object detection, which aims to remove information redundancy and fully utilize the representational power of the binary neural networks. The goal of $J_2$ is to enforce the object priors to be sparse so that the posteriors are encouraged to be concentrated on informative prediction with false positive elimination. In the learning objective, $p(f_{st})$ in the binomial distribution is a constant. Meanwhile, the sparse object classification priors are imposed via $J_2$, so $p(c_i)$ is also regarded as a constant. For one-stage detectors, the constant priors $p(\bm{l}_{1,i})$ and $p(\bm{l}_{2,i})$ follow the standard normal distribution. For two-stage detectors, $p(\bm{l}_{1,i})$ and $p(\bm{l}_{2,i})$ are parameterized by the RPN, which is learned with the objective function. The last layer of the backbone, which outputs the parameters of the binary high-level feature maps, is real-valued during training for Monte-Carlo sampling and is binarized with the sign function during inference. Meanwhile, the layers that output the parameters of the object class and location distributions remain real-valued for accurate detection. During inference, we drop the network branch that outputs the covariance matrices of the location offsets and assign all location predictions their mean values to accelerate computation. Moreover, the prediction of object classes is set to the class with the maximum probability to avoid time-consuming stochastic sampling in inference. \section{Experiments} In this section, we conducted comprehensive experiments to evaluate our proposed method on two datasets for object detection: PASCAL VOC \cite{everingham2010pascal} and COCO \cite{lin2014microsoft}. We first describe the implementation details of our BiDet, then we validate the effectiveness of IB and the sparse object priors for binarized object detectors by an ablation study. Finally, we compare our method with state-of-the-art binary neural networks in the task of object detection to demonstrate the superiority of the proposed BiDet. \subsection{Datasets and Implementation Details} We first introduce the datasets on which we carried out experiments and the data preprocessing techniques: \textbf{PASCAL VOC: }The PASCAL VOC dataset contains natural images from 20 different classes. 
We trained our model on the VOC 2007 and VOC 2012 trainval sets which consist of around 16k images, and we evaluated our method on VOC 2007 test set including about 5k images. Following \cite{everingham2010pascal}, we used the mean average precision (mAP) as the evaluation criterion. \textbf{COCO: }The COCO dataset consists of images from 80 different categories. We conducted experiments on the 2014 COCO object detection track. We trained our model with the combination of 80k images from the training set and 35k images sampled from validation set (trainval35k \cite{bell2016inside}) and tested our method on the remaining 5k images in the validation set (minival \cite{bell2016inside}). Following the standard COCO evaluation metric \cite{lin2014microsoft}, we report the average precision (AP) for IoU $\in \left[0.5 : 0.05 : 0.95\right]$ denoted as mAP@$\left[.5, .95\right]$. We also report AP$_{50}$, AP$_{75}$ as well as AP$_{s}$, AP$_{m}$ and AP$_{l}$ to further analyze our method. We trained our BiDet with the SSD300 \cite{liu2016ssd} and Faster R-CNN \cite{ren2015faster} detection framework whose backbone were VGG16 \cite{simonyan2014very} and ResNet-18 \cite{he2016deep} respectively. Following the implementation of binary neural networks in \cite{hubara2016binarized}, we remained the first and last layer in the detection networks real-valued. We used the data augmentation techniques in \cite{liu2016ssd} and \cite{ren2015faster} when training our BiDet with SSD300 and Faster R-CNN detection frameworks respectively. In most cases, the backbone network was pre-trained on ImageNet \cite{russakovsky2015imagenet} in the task of image classification. Then we jointly finetuned the backbone part and trained the detection part for the object detection task. The batchsize was assigned to be $32$, and the Adam optimizer \cite{kingma2014adam} was applied. The learning rate started from 0.001 and decayed twice by multiplying $0.1$ at the $6_{th}$ and $10_{th}$ epoch out of $12$ epochs. Hyperparamters $\beta$ and $\gamma$ were set as $10$ and $0.2$ respectively. \begin{figure}[t] \centering \includegraphics[height=7.5cm, width=8.5cm]{BiDet_ablation.pdf} \caption{Ablation study w.r.t. hyperparameters $\beta$ and $\gamma$, where the variety of (a) mAP, (b) the mutual information between high-level feature maps and the object detection $I(F;L,C)$ , (c) the number of false positives and (d) the number of false negatives are demonstrated. (best viewed in color).} \vspace{-0.3cm} \label{ablation} \end{figure} \begin{table*}[t] \footnotesize \caption{Comparison of parameter size, FLOPs and mAP (\%) with the state-of-the-art binary neural networks in both one-stage and two-stage detection frameworks on PASCAL VOC. The detector with the real-valued and multi-bit backbone is given for reference. BiDet (SC) means the proposed method with extra shortcut for the architectures. 
} \label{MAPVOC} \centering \vspace{0.1cm} \renewcommand\arraystretch{1.2} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Framework & Input & Backbone & Quantization & W/A (bit) & \#Params & MFLOPs & mAP\\ \hline \multirow{10}{*}{SSD300} & \multirow{10}{*}{$300\times300$} & VGG16 & \multirow{2}{*}{$-$} & \multirow{2}{*}{$32/32$} & $100.28$MB & $31,750$ & $72.4$\\ & & MobileNetV1 & & & $30.07$MB & $1,150$ & $68.0$\\ \cline{3-8} & & \multirow{7}{*}{VGG16} & TWN & $2/32$ & $24.54$MB & $8,531$ & $67.8$\\ & & & DoReFa-Net & $4/4$ & $29.58$MB & $4,661$ & $69.2$\\ \cline{4-8} & & & BNN & \multirow{3}{*}{$1/1$} & $22.06$MB & $1,275$ & $42.0$\\ & & & Xnor-Net & & $22.16$MB & $1,279$ & $50.2$\\ & & & BiDet & & $22.06$MB & $1,275$ & $\bm{52.4}$\\ \cline{4-8} & & & Bi-Real-Net & \multirow{2}{*}{$1/1$} & $21.88$MB & $1,277$ & $63.8$\\ & & & BiDet (SC) & & $21.88$MB & $1,277$ & $\bm{66.0}$\\ \cline{3-8} & & \multirow{2}{*}{MobileNetV1} & Xnor-Net & \multirow{2}{*}{$1/1$} & $22.48$MB & $836$ & $48.9$\\ & & & BiDet & & $22.48$MB & $836$ & $\bm{51.2}$\\ \hline \multirow{8}{*}{Faster R-CNN} & \multirow{8}{*}{$600\times1000$} & \multirow{8}{*}{ResNet-18} & $-$ & $32/32$ & $47.35$MB & $36,013$ & $74.5$\\ \cline{4-8} & & & TWN & $2/32$ & $3.83$MB & $9,196$ & $69.9$\\ & & & DoReFa-Net & $4/4$ & $6.73$MB & $4,694$ & $71.0$\\ \cline{4-8} & & & BNN & \multirow{3}{*}{$1/1$} & $2.38$MB & $779$ & $35.6$\\ & & & Xnor-Net & & $2.48$MB & $783$ & $48.4$\\ & & & BiDet & & $2.38$MB & $779$ & $\bm{50.0}$\\ \cline{4-8} & & & Bi-Real-Net & \multirow{2}{*}{$1/1$} & $2.39$MB & $781$ & $58.2$\\ & & & BiDet (SC) & & $2.39$MB & $781$ & $\bm{59.5}$\\ \hline \end{tabular} \vspace{-0.2cm} \end{table*} \begin{table*}[t] \footnotesize \caption{Comparison of mAP@$\left[.5, .95\right]$ (\%), AP with different IOU threshold and AP for objects in various sizes with state-of-the-art binarized object detectors in SSD300 and Faster R-CNN detection framework on COCO, where the performance of real-valued and multi-bit detectors is reported for reference. 
BiDet (SC) means the proposed method with extra shortcut for the architectures.} \label{MAPCOCO} \centering \vspace{0.1cm} \renewcommand\arraystretch{1.2} \begin{tabular}{|c|c|c|c|c|cc|ccc|} \hline Framework & Input & Backbone & Quantization & mAP@$\left[.5, .95\right]$ & AP$_{50}$ & AP$_{75}$ (\%) & AP$_{s}$ & AP$_{m}$ & AP$_{l}$\\ \hline \multirow{8}{*}{SSD300} & \multirow{8}{*}{$300\times300$} & \multirow{8}{*}{VGG16} & $-$ & $23.2$ & $41.2$ & $23.4$ & $5.3$ & $23.2$ & $39.6$\\ \cline{4-10} & & & TWN & $16.9$ & $33.0$ & $15.8$ & $5.0$ & $16.9$ & $27.2$\\ & & & DoReFa-Net & $19.5$ & $35.0$ & $19.6$ & $5.1$ & $20.5$ & $32.8$\\ \cline{4-10} & & & BNN & $6.2$ & $15.9$ & $3.8$ & $2.4$ & $10.0$ & $9.9$\\ & & & Xnor-Net & $8.1$ & $19.5$ & $5.6$ & $2.6$ & $8.3$ & $13.3$\\ & & & BiDet & $\bm{9.8}$ & $\bm{22.5}$ & $\bm{7.2}$ & $\bm{3.1}$ & $\bm{10.8}$ & $\bm{16.1}$\\ \cline{4-10} & & & Bi-Real-Net & $11.2$ & $26.0$ & $8.3$ & $3.1$ & $12.0$ & $18.3$\\ & & & BiDet (SC)& $\bm{13.2}$ & $\bm{28.3}$ & $\bm{10.5}$ & $\bm{5.1}$ & $\bm{14.3}$ & $\bm{20.5}$\\ \hline \multirow{8}{*}{Faster R-CNN} & \multirow{8}{*}{$600\times1000$} & \multirow{8}{*}{ResNet-18} & $-$ & $26.0$ & $44.8$ & $27.2$ & $10.0$ & $28.9$ & $39.7$\\ \cline{4-10} & & & TWN & $19.7$ & $35.3$ & $19.7$ & $5.1$ & $20.7$ & $33.3$\\ & & & DoReFa-Net & $22.9$ & $38.6$ & $23.7$ & $8.0$ & $24.9$ & $36.3$\\ \cline{4-10} & & & BNN & $5.6$ & $14.3$ & $2.6$ & $2.0$ & $8.5$ & $9.3$\\ & & & Xnor-Net & $10.4$ & $21.6$ & $8.8$ & $2.7$ & $11.8$ & $15.9$\\ & & & BiDet & $\bm{12.1}$ & $\bm{24.8}$ & $\bm{10.1}$ & $\bm{4.1}$ & $\bm{13.5}$ & $\bm{17.7}$\\ \cline{4-10} & & & Bi-Real-Net & $14.4$ & $29.0$ & $13.4$ & $3.7$ & $15.4$ & $24.1$\\ & & & BiDet (SC) & $\bm{15.7}$ & $\bm{31.0}$ & $\bm{14.4}$ & $\bm{4.9}$ & $\bm{16.7}$ & $\bm{25.4}$\\ \hline \end{tabular} \vspace{-0.2cm} \end{table*} \subsection{Ablation Study} Since the IB principle removes the redundant information in binarized object detectors and the learned sparse object priors concentrate the posteriors on informative prediction with false positive alleviation, the detection accuracy is enhanced significantly. To verify the effectiveness of the IB principle and the learned sparse priors, we conducted the ablation study to evaluate our BiDet w.r.t. the hyperparameter $\beta$ and $\gamma$ in the objective function. We adopted the SSD detection framework with VGG16 backbone for our BiDet on the PASCAL VOC dataset. We report the mAP, the mutual information between high-level feature maps and the object detection $I(F;L,C)$, the number of false positives and the number of false negatives with respect to $\beta$ and $\gamma$ in Figure \ref{ablation} (a), (b), (c) and (d) respectively. Based on the results, we observe the influence of the IB principle and the learned sparse object priors as follows. By observing Figure \ref{ablation} (a) and (b), we conclude that mAP and $I(F;L,C)$ are positively correlated as they demonstrate the detection performance and the amount of related information respectively. Medium $\beta$ provides the optimal trade-off between the amount of extracted information and the related information so that the representational capacity of the binary object detectors is fully utilized with redundancy removal. 
Small $\beta$ fails to leverage the representational power of the networks, as the amount of extracted information is limited by regularizing the high-level feature maps, while large $\beta$ forces the networks to learn redundant information, which leads to significant over-fitting. Meanwhile, medium $\gamma$ offers optimal sparse object priors that enforce the posteriors to concentrate on the most informative prediction. Small $\gamma$ is not capable of sparsifying the predicted objects, and large $\gamma$ prevents the posteriors from representing informative objects due to excessive sparsity. By comparing the variation of false positives and false negatives w.r.t. $\beta$ and $\gamma$, we observe that medium $\beta$ decreases false positives most significantly and that changing $\beta$ does not vary the number of false negatives notably, which means that the redundancy removal only alleviates the uninformative false positives while leaving the informative true positives unchanged. Meanwhile, small $\gamma$ fails to constrain the false positives and large $\gamma$ clearly increases the false negatives, both of which degrade the performance significantly. \begin{figure*}[t] \centering \includegraphics[height=5cm, width=17.5cm]{BiDet_visualization.pdf} \caption{Qualitative results on PASCAL VOC. Images in the top row show the objects predicted by Xnor-Net, while the images with the objects detected by our BiDet are displayed in the bottom row. The proposed BiDet removes the information redundancy to fully utilize the network capacity, and learns the sparse object priors to eliminate false positives (best viewed in color).} \label{visualization} \end{figure*} \subsection{Comparison with the State-of-the-art Methods} In this section, we compare the proposed BiDet with the state-of-the-art binary neural networks including BNN \cite{courbariaux2015binaryconnect}, Xnor-Net \cite{rastegari2016xnor} and Bi-Real-Net \cite{liu2018bi} in the task of object detection on the PASCAL VOC and COCO datasets. For reference, we report the detection performance of the multi-bit quantized networks DoReFa-Net \cite{zhou2016dorefa} and TWN \cite{li2016ternary} and the lightweight network MobileNetV1 \cite{howard2017mobilenets}. \textbf{Results on PASCAL VOC: }Table \ref{MAPVOC} illustrates the comparison of computation complexity, storage cost and mAP across different quantization methods and detection frameworks. Our BiDet significantly accelerates the computation and saves the storage, by $24.90\times$ and $4.55\times$ with the SSD300 detector and by $46.23\times$ and $19.81\times$ with the Faster R-CNN detector. The efficiency is enhanced more notably in the Faster R-CNN detector, as there are multiple real-valued output layers of the head networks in SSD300 for multi-scale feature extraction. Compared with the state-of-the-art binary neural networks, the proposed BiDet improves the mAP of Xnor-Net by $2.2\%$ and $1.6\%$ with the SSD300 and Faster R-CNN frameworks respectively, with fewer FLOPs and parameters than Xnor-Net. As demonstrated in \cite{liu2018bi}, adding extra shortcuts between consecutive convolutional layers can further enhance the representational power of binary neural networks, so we also employ architectures with additional skip connections to evaluate our BiDet in networks with stronger capacity. 
Due to the information redundancy, the performance of Bi-Real-Net with constrained network capacity is degraded significantly compared with their full-precision counterparts in both one-stage and two-stage detection frameworks. On the contrary, our BiDet imposes the IB principle on learning binary neural networks for object detection and fully utilizes the network capacity with redundancy removal. As a result, the proposed BiDet increases the mAP of Bi-Real-Net by $2.2\%$ and $1.3\%$ in SSD300 and Faster R-CNN detectors respectively without additional computational and storage cost. Figure \ref{visualization} shows the qualitative results of Xnor-Net and our BiDet in the SSD300 detection framework with VGG16, where the proposed BiDet significantly alleviates the false positives. Due to the different pipelines in one-stage and two-stage detectors, the mAP gained from the proposed BiDet with Faster R-CNN is less than SSD300. As analyzed in \cite{lin2017focal}, one-stage detectors face the severe positive-negative class imbalance problem which two-stage detectors are free of, so that one-stage detectors are usually more vulnerable to false positives. Therefore, one-stage object detection framework obtains more benefits from the proposed BiDet, which learns the sparse object priors to concentrate the posteriors on informative prediction with false positive elimination. Moreover, our BiDet can be integrated with other efficient networks in object detection for further computation speedup and storage saving. We employ our BiDet as a plug-and-play module in SSD detector with the MobileNetV1 network and saves the computational and storage cost by $1.47\times$ and $1.38\times$ respectively. Compared with the detectors that directly binarize weights and activations in MobileNetV1 with Xnor-Net, BiDet improves the mAP by a sizable margin, which depicts the effectiveness of redundancy removal for networks with extremely low capacity. \textbf{Results on COCO: }The COCO dataset is much more challenging for object detection than PASCAL VOC due to the high diversity and large scale. Table \ref{MAPCOCO} demonstrates mAP, AP with different IOU threshold and AP of objects in various sizes. Compared with the state-of-the-art binary neural networks Xnor-Net, our BiDet improves the mAP by $1.7\%$ and $1.7\%$ in SSD300 and Faster R-CNN detection framework respectively due to the information redundancy removal. Moreover, the proposed BiDet also enhances the binary one-stage and two-stage detectors with extra shortcut by $2.0\%$ and $1.3\%$ on mAP. Comparing with the baseline methods of network quantization, our method achieves better performance in the AP with different IOU threshold and AP for objects in different sizes, which demonstrates the universality in different application settings. \iffalse In short, our BiDet method removes the redundant information to fully utilize the representational power of binary neural networks and learns the sparse object priors to concentrate posteriors on informative prediction with false positive elimination, so that the performance on object detection is significantly enhanced. \fi \section{Conclusion} In this paper, we have proposed a binarized neural network learning method called BiDet for efficient object detection. 
The presented BiDet removes the redundant information via information bottleneck principle to fully utilize the representational capacity of the networks and enforces the posteriors to be concentrated on informative prediction for false positive elimination, through which the detection precision is significantly enhanced. Extensive experiments depict the superiority of BiDet in object detection compared with the state-of-the-art binary neural networks. \section*{Acknowledgement} This work was supported in part by the National Key Research and Development Program of China under Grant 2017YFA0700802, in part by the National Natural Science Foundation of China under Grant 61822603, Grant U1813218, Grant U1713214, and Grant 61672306, in part by the Shenzhen Fundamental Research Fund (Subject Arrangement) under Grant JCYJ20170412170602564, and in part by Tsinghua University Initiative Scientific Research Program. {\small \bibliographystyle{ieee_fullname}
\section{Introduction}\label{sec:intro} Advances in experimental implementations of squeezing-enhanced quantum metrology protocols \cite{PhysRevA.93.013851,thomp,PhysRevLett.122.030501,PhysRevLett.122.223203,qpm} emphasize the fact that non-classicality and many-body entanglement are resources for near-term quantum technologies. In particular, quantum circuits consisting of alternating squeezing and unsqueezing operations have allowed the amplification of signals from microwave photons \cite{PhysRevLett.120.040505}, cooled mechanical oscillators \cite{winey}, and atomic ensembles (\textit{twist-untwist} protocols) \cite{PhysRevLett.116.053601,PhysRevLett.117.013001}. The possibility of integrating these circuits as modules of variational quantum sensing algorithms further suggests that squeezing-enhanced sensing could be utilized in near-term quantum computers \cite{PhysRevLett.123.260505}. Controllable interatomic interactions such as magnetic or optical Feshbach resonances \cite{PhysRevLett.92.160406} are central to entanglement generation in hybrid atomic-optical systems \cite{mmd,Borregaard_2017,PhysRevA.94.042327}. In particular, control of two-body elastic scattering allows entanglement to be generated by the phenomenon of spin squeezing \cite{PhysRevLett.86.4431,PhysRevA.47.5138,MA201189}. Although spin squeezing can be analyzed by emphasizing analogies with continuous-variable quadrature squeezing, it is a many-body quantum effect, in the sense that there are spin squeezing quantifiers that imply particle entanglement of a given many-body state \cite{PhysRevA.69.052327}. In contrast, continuous-variable squeezing is not sufficient for entanglement despite it being the cause of several non-classical optical phenomena. The fact that squeezing-unsqueezing protocols can enhance measurement sensitivity for quantum estimation protocols beyond the standard quantum limit was first elucidated in the context of photonic Mach-Zehnder and four-wave-mixing interferometers \cite{PhysRevA.33.4033}. By replacing the non-passive elements of these photonic interferometers by entanglement-generating atomic interactions such as one-axis or two-axis twisting, one is led to protocols that achieve analogous scaling of the sensitivity with respect to the number of atoms. Specifically, in the context of phase sensing with atomic ensembles, a single layer one-axis twist-untwist protocol is defined by the parameterized $N$ particle quantum state $\ket{\psi_{\phi}}$ where \begin{equation} \ket{\psi_{\phi}}= e^{i\chi t J_{z}^{2}}e^{-i\phi J_{y}}e^{-i\chi t J_{z}^{2}} \ket{+}^{\otimes N}. \label{eqn:iii} \end{equation} In (\ref{eqn:iii}), the spin operators satisfy the $\mathfrak{su}(2)$ relation $[J_{i},J_{j}]=i\epsilon_{ijk}J_{k}$, and $\ket{+}^{\otimes N}$ is the maximal $J_{x}$ eigenvector in the spin-${N\over 2}$ representation of $SU(2)$. A major advantage of the protocol (\ref{eqn:iii}) is that large values of the effective interaction time $\chi t$ are not required in order to achieve Heisenberg scaling for the estimation precision of $\phi$ \cite{PhysRevLett.116.053601}. In particular, generation of the state that minimizes the quantum Cram\'{e}r-Rao bound for estimation of a $y$-rotation, viz., an equal amplitude Schr\"{o}dinger cat state of the minimal and maximal $J_{y}$ eigenvectors, is not required. However, the question of the conditions under which the protocol (\ref{eqn:iii}) is an optimal strategy for achieving Heisenberg scaling for estimation of $\phi$ remains open. 
In this work, we show that the protocol (\ref{eqn:iii}) achieves the minimal error possible among all protocols that apply a weak one-axis twisting before and after the rotation parameter in the limit $N\rightarrow \infty$ (Section \ref{sec:ao}). In Section \ref{sec:layer} and Section \ref{sec:fr}, respectively, we obtain analogous results for the multilayer improvements of the protocol (\ref{eqn:iii}), and its implementation in systems with uniform, finite range atom-atom interactions. The principal physical constraint in these results is our requirement of an asymptotically vanishing interaction time $\chi t \rightarrow 0$ as $N\rightarrow \infty$. This constraint is consistent with the fidelity losses encountered when generating coherence and entanglement of a many-atom system over long times. The setting of the quantum metrology problem at hand consists of: 1. preparation of a spin-${N\over 2}$ coherent state of $N$ two-level atoms, 2. application of layers of the protocol (\ref{eqn:iii}) or its finite-range generalizations, and 3. measurement of $J_{y}$. An operator-valued estimator of the phase $\phi$ is given by $\hat{\phi}={J_{y}\over \del_{\phi} \bra{\psi_{\phi}}J_{y}\ket{\psi_{\phi}}}$, and we define the empirical error as $(\Delta \phi)^{2}:= \text{Var}_{\ket{\psi_{\phi}}}\hat{\phi}$. It follows that \begin{equation}(\Delta \phi)^{2}:= {\text{Var}_{\ket{\psi_{\phi}}}J_{y}\over \left( \del_{\phi} \bra{\psi_{\phi}}J_{y}\ket{\psi_{\phi}}\right)^{2}}.\label{eqn:prec}\end{equation} This practical approach can achieve empirical errors that scale similarly to a $N^{-2}$ quantum Cram\'{e}r-Rao bound for estimation of $\phi$ on a quantum state manifold that is related to (\ref{eqn:lla}) (see Proposition \ref{prop:aaa} for a rigorous statement). However, such Heisenberg scaling for (\ref{eqn:prec}) does not require saturation of the quantum Fisher information by implementation of an optimal measurement. Since the error (\ref{eqn:prec}) is invariant under $\chi \rightarrow -\chi$, the protocol can be defined with $\chi>0$ without loss of generality. A physical explanation of the fact that Heisenberg scaling is obtained for small interaction time $\chi t$ for the protocol (\ref{eqn:iii}) is that the initial one-axis twisting drives the coherent state $\ket{+}^{\otimes N}$ toward a Schr\"{o}dinger cat state of $J_{x}$ according to the Yurke-Stoler dynamics driven by the one-axis twisting, but symmetrically about the $x$-axis in spin phase space. For small $\chi t$, this process actually creates a pseudo-cat state with respect to the $J_{y}$ generator, which is sensitive to the rotation $\phi$. The untwisting acts to amplify the phase of the rotation. Note that the protocol does not return the initial spin coherent state to the manifold of spin coherent states. The fact that the nonlinearity of an interaction can compensate for weak interaction strength to achieve Heisenberg scaling in quantum sensing can also be observed for continuous variable displacement sensing. The continuous variable analogue of the twist-untwist protocol is given by applications of the Kerr nonlinearity with opposite signs: $\ket{\psi_{\phi}}=e^{i\chi t (a^{*}a)^{2}}D(\phi)e^{-i\chi t (a^{*}a)^{2}}\ket{\alpha}$ where $D(\phi)=e^{-i\phi p}$ is a unitary displacement operator and $\ket{\alpha}$ is a Heisenberg-Weyl coherent state with $\text{Im}\, \alpha =0$. 
One finds that for a homodyne readout of the $p$-quadrature \begin{align} (\Delta \phi)^{2}\big\vert_{\phi=0}&:= {\text{Var}_{\ket{\psi_{\phi}}}p\over ({d\over d\phi}\langle \psi_{\phi}\vert p\vert \psi_{\phi} \rangle )^{2}}\Big\vert_{\phi=0}\nonumber \\&= {1\over 2\alpha^{4}e^{-2\alpha^{2}(1-\cos 2\chi t)}\sin^{2}\left( \alpha^{2}\sin(2\chi t)+1\right)} \label{eqn:cvtw} \end{align} which, for large $\alpha$ has an approximate minimum at $\chi t={\pi -2\over \alpha^{2} +1}$, an interaction time at which (\ref{eqn:cvtw}) scales as the inverse square of the intensity $\alpha^{2}$. \section{Asymptotic optimality of twist-untwist protocols\label{sec:ao}} Our main result in Theorem \ref{th:1} shows that when using the fixed measurement scheme defined by the estimator $\hat{\phi}$, and for two calls to a low interaction time one-axis twisting evolution separated by the rotation to be sensed, the twist-untwist protocol (\ref{eqn:iii}) gives an optimal probe state in the limit of large $N$. However, it is useful to first understand how, given the twist-untwist protocol (\ref{eqn:iii}), the measurement scheme defined by $\hat{\phi}$ performs when compared to the ultimate precision obtainable by an optimal unbiased estimator in one-shot quantum estimation theory. For this, we recall that the Heisenberg limit for the scaling of the mean squared error of an unbiased estimator of $\phi$ is defined by the quantum Fisher information appearing in the the one-shot quantum Cram\'{e}r-Rao bound scaling as $\mathcal{O}(N^{2})$. However, measurement of the operator-valued estimator $\hat{\phi}$ in Section \ref{sec:intro} does not give values in $[-\pi,\pi]$ and so there is not a unique way to relate (\ref{eqn:prec}) to a quantum Cram\'{e}r-Rao bound \cite{holevo}. Therefore we first provide a basic, but rigorous, statement that relates the optimal scaling of (\ref{eqn:prec}) for twist-untwist protocols to Heisenberg scaling of the quantum Fisher information of the twist-untwist protocol (\ref{eqn:iii}). \begin{proposition} The minimum of (\ref{eqn:prec}) with respect to twist-untwist protocol (\ref{eqn:iii}) occurs at $\chi t = \tan^{-1}{1\over \sqrt{N-2}}$ with minimum value asymptotically given by ${e\over N^{2}}$ as $N\rightarrow \infty$. With $\mathrm{QFI}(\psi_{\phi})$ defined as the quantum Fisher information for the protocol (\ref{eqn:iii}), the function \begin{equation} f(\chi t):= {\mathrm{QFI}(\psi_{\phi})^{-1}\over (\Delta \phi)^{2}\vert_{\phi=0}} \end{equation} satisfies $f(\tan^{-1}{1\over \sqrt{N-2}})\sim {2\over e-e^{-1}}$. Further, $f(\chi t)\le 1$ and the maximum value 1 is asymptotically attained when $\chi t$ is a function of $N$ that goes to zero as $N\rightarrow \infty$. \label{prop:aaa} \end{proposition} \begin{proof} The value $\tan^{-1}{1\over \sqrt{N-2}}$ for the critical interaction time is proven in Ref. \cite{PhysRevLett.116.053601}. Note that the numerator of $f$ is the lower bound appearing in the quantum Cram\'{e}r-Rao inequality (QCRI), so $f=1$ implies that the measurement saturating the QCRI for the protocol $\ket{\psi_{\phi}}$ has the same error as the $J_{y}$ measurement defining (\ref{eqn:prec}). The fact that $f\le 1$ follows from the fact that $((\Delta \phi)^{2}\vert_{\phi =0})^{-1}$ is at most the classical Fisher information with respect to the $J_{y}$ measurement at $\ket{\psi_{\phi=0}}$ \cite{ps}, and the existence of a measurement for which the classical Fisher information saturates the quantum Fisher information \cite{bc}. 
From the symmetric logarithmic derivative for the state manifold $\ket{\psi_{\phi}}$ (see (\ref{eqn:sld})), it follows that \begin{align}\text{QFI}(\psi_{\phi})&=4\text{Var}_{e^{\pm i\chi t J_{z}^{2}}\ket{+}^{\otimes N}}J_{y} \nonumber \\ &= {1\over 8}\left( N^{2}+N - N(N-1)\cos^{N-2}2\chi t \right). \end{align} Note that $\text{QFI}(\psi_{\phi})$ is independent of $\phi$. The value of (\ref{eqn:prec}) with respect to the twist-untwist protocol (\ref{eqn:iii}) is calculated from the more general formula (\ref{eqn:anal2}) below, and the final result is \begin{equation} f(\chi t)={2N(N-1)^{2}\sin^{2}\chi t\cos^{2N-4}\chi t\over N^{2}\left( 1-\cos^{N-2}2\chi t \right) + N\left( 1+\cos^{N-2}2\chi t \right)}. \label{eqn:gfgf} \end{equation} Using $\tan^{-1}{1\over \sqrt{N-2}}\sim {1\over \sqrt{N}}$, $\cos^{N}{2\over \sqrt{N}}\sim e^{-2}$, $\cos^{2N-4}{1\over \sqrt{N}}\sim e^{-1}$, and $\sin^{2}{1\over \sqrt{N-2}}\sim {1\over N}$ gives the asymptotic result $f(\tan^{-1}{1\over \sqrt{N-2}})\sim {2\over e-e^{-1}}$, which implies that at the interaction time that minimizes (\ref{eqn:prec}), the lowest possible achievable error is only about 15\% lower than (\ref{eqn:prec}) in the limit $N\rightarrow \infty$. Taking the derivative of (\ref{eqn:gfgf}) with respect to $\chi t$ and using the assumption $\chi t \rightarrow 0$ as $N\rightarrow \infty$ to replace $\sin \chi t \approx \chi t$ and $\cos \chi t \approx 1$, one gets the critical interaction time $(\chi t)_{*} \sim {1\over \sqrt{N(N-2)}}\sim {1\over N}$. Then $\lim_{N\rightarrow \infty}f((\chi t)_{*})=1$. \end{proof} The function $f$ is plotted in Fig.\ref{fig:ooo2} for $N=10^{3}$. For $\chi t > \tan^{-1}{1\over \sqrt{N-2}}$, the one-axis twisting probe state $e^{\pm i \chi t J_{z}^{2}}\ket{+}^{\otimes N}$ begins to enter the Schr\"{o}dinger cat domain, which is not accessible by the twist-untwist protocol (\ref{eqn:iii}). However, for $\chi t \le \tan^{-1}{1\over \sqrt{N-2}}$, the twist-untwist protocol with error (\ref{eqn:prec}) scales similarly to the optimal error achievable with the one-axis twisting probe, with both quantities scaling as $\mathcal{O}(N^{-2})$ when $\chi t \approx \tan^{-1}{1\over \sqrt{N-2}}$. Although Proposition \ref{prop:aaa} suggests how to interpret the optimal $N^{-2}$ scaling of (\ref{eqn:prec}) for the twist-untwist protocol (\ref{eqn:iii}), it remains unclear whether similar protocols involving one-axis twisting before and after the rotation would be able to achieve the same scaling. Therefore, we now consider more general protocols of the form \begin{equation} \ket{\psi_{\phi}}=e^{ia_{2}J_{z}^{2}}e^{-i\phi J_{y}}e^{ia_{1}J_{z}^{2}} \ket{+}^{\otimes N} \label{eqn:twoparam} \end{equation} with $a_{j}\in \mathbb{R}$. Numerical optimization of the signal-to-noise ratio (\ref{eqn:prec}) and effects of dephasing noise for such protocols were considered in \cite{Schulte2020ramsey}. A closed formula for (\ref{eqn:prec}) is obtained for this protocol: \begin{small} \begin{equation} {2(N+1)-2(N-1)\cos^{N-2}(2(a_{1}+a_{2})) \over N(N-1)^{2}\sin^{2}a_{2}\left( \cos^{N-2}a_{2} + \cos^{N-2}(2a_{1}+a_{2}) \right)}. \label{eqn:anal2} \end{equation} \end{small} The formula (\ref{eqn:anal2}) demands that we refine the parameter space of (\ref{eqn:twoparam}) so as to have a well-defined sensing protocol. In particular, we restrict to $a_{2}\in (-\pi/2,0)\cup (0,\pi/2)$ and $a_{1}\in (-\pi/2,0)$ without loss of generality. 
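As an independent check of the closed formula (\ref{eqn:anal2}), the error (\ref{eqn:prec}) can be evaluated directly by exact simulation in the $(N+1)$-dimensional symmetric subspace for modest $N$. The following sketch (Python with NumPy and SciPy; the function names and the finite-difference step are our own choices) constructs the collective spin operators in the Dicke basis, applies the protocol (\ref{eqn:twoparam}), and estimates the slope of the signal by central differences.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm, eigh

def collective_spin_ops(N):
    """J_x, J_y, J_z in the spin-N/2 (Dicke) basis, ordered m = N/2, ..., -N/2."""
    j = N / 2.0
    m = np.arange(j, -j - 1.0, -1.0)
    jz = np.diag(m)
    # <j, m+1| J_+ |j, m> = sqrt(j(j+1) - m(m+1))
    jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1.0)), k=1)
    return (jp + jp.T) / 2.0, (jp - jp.T) / 2.0j, jz

def empirical_error(N, a1, a2, dphi=1e-4):
    """(Delta phi)^2 at phi = 0 for the probe
    e^{i a2 Jz^2} e^{-i phi Jy} e^{i a1 Jz^2} |+>^N with a J_y readout;
    the derivative of the signal is taken by central differences."""
    jx, jy, jz = collective_spin_ops(N)
    plus = eigh(jx)[1][:, -1].astype(complex)   # maximal J_x eigenvector |+>^N
    jz2 = jz @ jz
    U1, U2 = expm(1j * a1 * jz2), expm(1j * a2 * jz2)

    def mean_jy(phi):
        psi = U2 @ (expm(-1j * phi * jy) @ (U1 @ plus))
        return np.real(np.vdot(psi, jy @ psi))

    psi0 = U2 @ (U1 @ plus)                     # state at phi = 0
    var_jy = np.real(np.vdot(psi0, jy @ (jy @ psi0))) - mean_jy(0.0) ** 2
    slope = (mean_jy(dphi) - mean_jy(-dphi)) / (2.0 * dphi)
    return var_jy / slope ** 2

if __name__ == "__main__":
    N = 40
    c = np.arctan(1.0 / np.sqrt(N - 2.0))       # critical twisting strength
    print(N ** 2 * empirical_error(N, -c, c))   # twist-untwist point a1 = -a2
\end{verbatim}

At the twist-untwist point $a_{1}=-a_{2}$ the output can be compared directly with (\ref{eqn:anal2}); the same routine evaluates the error for any admissible pair $(a_{1},a_{2})$.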
In experimental implementations of (\ref{eqn:twoparam}), the range of available interaction times $a_{j}$ will depend on $N$, due to decoherence. We now aim to show that when the interaction times $a_{1}$ and $ a_{2}$ go to zero as $N\rightarrow \infty$, (\ref{eqn:anal2}) is asymptotically minimized for $a_{2}=-a_{1}$, i.e., at a point at which (\ref{eqn:twoparam}) defines a twist-untwist protocol. The key observation is that for fixed $a_{1}$, (\ref{eqn:anal2}) has an asymptotic extremum at $a_{2}=-a_{1}$. This is shown in Theorem \ref{th:1} below. \begin{theorem} Let $(\Delta \phi)^{2}\big\vert_{\phi=0}$ be defined with respect to $\ket{\psi_{\phi}}$ as in (\ref{eqn:twoparam}) and let $a_{1}$ and $a_{2}$ be functions of $N$ that go to zero as $N\rightarrow \infty$. Then, as $N\rightarrow \infty$ the unique critical point of $N^{2}(\Delta \phi)^{2}\big\vert_{\phi=0}$ is given by \begin{align} a_{1}&\sim \sqrt{N\over (N+1)(N-2)}\sim {1\over \sqrt{N}}\nonumber \\ a_{2}&\sim -\sqrt{N+1\over N(N-2)}\sim -{1\over \sqrt{N}}. \label{eqn:crit} \end{align} \label{th:1} \end{theorem} \begin{proof} Call $f(a_{1},a_{2}) :=N^{2}(\Delta \phi)^{2}\big\vert_{\phi=0}$ and note that we restrict to $a_{1}<0$. The factor of $N^{2}$ in the definition of $f$ is so that the $N\rightarrow \infty$ limit of $f$ is not zero pointwise; without this factor, $f$ is asymptotically constant. For example, with $c:=\tan^{-1}{1\over\sqrt{N-2}}$, it follows that $\lim_{N\rightarrow \infty}f(-c,c)=e$. The components of $\nabla f(a_{1},a_{2})$ are rational functions of $N$ with polynomial coefficients consisting of powers of $\sin(g(a_{1},a_{2}))$, $\cos(g(a_{1},a_{2}))$ where $g(a_{1},a_{2})\in \lbrace 2a_{1}+2a_{2},2a_{1}+a_{2},a_{2}\rbrace$ are linear functions of $a_{1}$, $a_{2}$. Using the assumption that $a_{1}$ and $a_{2}$ go to zero as $N\rightarrow \infty$, we linearize the coefficients by setting $\sin (g(a_{1},a_{2})) \sim g(a_{1},a_{2})$, $\cos(g(a_{1},a_{2})) \sim 1$. The extremum condition is then asymptotically given by \begin{align} (N+1)a_{1}+Na_{2}&=0\nonumber \\ N(N-2)(a_{1}+a_{2})-a_{2}^{-1}&=0 \end{align} which has the required solution pair. Uniqueness is due to the assumption $a_{1}<0$. \end{proof} The fact that the asymptotic critical point (\ref{eqn:crit}) defines an asymptotic minimum of $f$ can be checked numerically or by taking second derivatives. Note that with the values in (\ref{eqn:crit}), $\lim_{N\rightarrow \infty}{a_{1}^{2}\over a_{2}^{2}}=1$ so that, under the assumptions of Theorem 1, a twist-untwist protocol is an optimal strategy for estimation of $\phi$ in (\ref{eqn:twoparam}). Further, note that the asymptotically optimal parameters exhibit $N^{-1/2}$ decay. This decay is modified when the interaction has finite range, as discussed in Section \ref{sec:fr}. \begin{figure*}[t!] \includegraphics[scale=0.6]{qfi_var_deriv_plot} \caption{The function $f$ in (\ref{eqn:gfgf}) for $N=10^{3}$. The blue line is at $\tan^{-1}{1\over \sqrt{N-2}}$, the critical point for the minimum of (\ref{eqn:prec}) for the protocol (\ref{eqn:iii}).} \label{fig:ooo2} \end{figure*} \section{Layered twist-untwist protocols\label{sec:layer}} An $L$ layer twist-untwist protocol can be defined by the parameterized state \begin{equation} \ket{\psi_{\phi}^{(L)}}=e^{i\phi J_{y}} \left( e^{-i\phi J_{y}} e^{i\chi t J_{z}^{2}}e^{-i\phi J_{y}}e^{-i\chi t J_{z}^{2}} \right)^{L}\ket{\zeta =1}. 
\label{eqn:lla} \end{equation} The state (\ref{eqn:lla}) is motivated by the consideration of a twist-untwist layer $e^{i\chi t C}e^{-i\phi J_{y}}e^{-i\chi t C}$ as a module for interferometry. Similarly structured circuits appear in asymptotically optimal variational quantum algorithms for quantum unstructured search \cite{rieffel}. It is clear that for $L$ layers, the denominator of (\ref{eqn:prec}) is given by \begin{align} \left( \del_{\phi}\langle J_{y}\rangle_{\ket{\psi_{\phi}^{(L)}}} \right)^{2} \Bigg\vert_{\phi=0} &= L^{2}\left( \del_{\phi}\langle J_{y}\rangle_{\ket{\psi_{\phi}^{(1)}}} \right)^{2} . \label{eqn:ggg} \end{align} \begin{figure*}[t!] \begin{center} \includegraphics[scale=0.6]{bosonic_fig} \caption{Black curves: inverse normalized empirical error for estimation of $\phi$ for the one-axis twist-untwist protocol in the spin wave subspace (\ref{eqn:protf}) with $N=16$ and $K=1,\ldots,5$. Red curves: the same except with $\tilde{H}_{K}$ replaced by $\tilde{H}_{K}^{(2)}$ in (\ref{eqn:protf}). For both cases, larger $Q_{\text{max}}$ corresponds to smaller $K$. \label{fig:ooo}} \end{center} \end{figure*} The numerator of (\ref{eqn:prec}) at $\phi=0$ is invariant with respect to the number of layers $L$. Because a probe state consisting of $L$ independent copies of (\ref{eqn:iii}) would have variance of the total $y$-spin component increased by a factor of $L$, we conclude that an $L$ layer protocol (\ref{eqn:lla}) allows the value of $(\Delta\phi)^{2}$ to be decreased by a factor of $L^{-1}$ compared to the protocol consisting of $L$ independent copies of (\ref{eqn:iii}). As an alternative to the layered protocol (\ref{eqn:lla}) one may consider the parameterized state $\ket{\psi_{\phi}}=e^{i\chi t C}e^{-iL\phi J_{y}}e^{-i\chi t C}\ket{+}^{\otimes N}$, which allows to obtain an $L^{2}$ increase in the derivative of the signal, similarly as in (\ref{eqn:ggg}). The difficulty with this proposal is that the map $e^{-i\theta J_{y}}\ket{\psi} \mapsto e^{-iL\theta J_{y}}\ket{\psi}$ cannot be carried out unitarily on all $\ket{\psi}$. A proof of this fact can be provided which is similar to proofs of ``no-go'' theorems for noiseless parametic amplification in the continuous-variable setting (i.e., that the map $\ket{\alpha}\mapsto \ket{L\alpha}$ cannot be achieved by a unitary operation). In fact, the multiplicative improvement obtained from the $L$ layer twist-untwist protocol extends to the quantum Fisher estimation. \begin{proposition} The quantum Fisher information $\mathrm{QFI}(\psi^{(L)}_{\phi})$ at $\phi=0$ for $L$ layer twisting-untwisting protocol (\ref{eqn:lla}) with $C=J_{z}^{2}$ is given by \begin{equation} L^{2}\mathrm{QFI}(\psi^{(1)}_{\phi}) +2L(L-1)N\cos^{N-1}\chi t + (L-1)^{2}N. \end{equation} \end{proposition} \begin{proof} The symmetric logarithmic derivative at $\phi=0$ for the one layer protocol is given by \begin{small} \begin{equation} \mathcal{L}^{(1)}_{\phi=0}=-2ie^{iC}J_{y}e^{-iC}\ket{+}\bra{+}^{\otimes N} + 2i \ket{+}\bra{+}^{\otimes N}e^{iC}J_{y}e^{-iC} \label{eqn:sld} \end{equation} \end{small} from which it follows that \begin{small} \begin{align} \mathcal{L}^{(L)}_{\phi=0}&=L\mathcal{L}^{(1)}_{\phi=0}\nonumber \\ &+\left( -2i(L-1)e^{iC}J_{y}e^{-iC}\ket{+}\bra{+}^{\otimes N} +h.c.\right). 
\end{align} \end{small} One then calculates \begin{small} \begin{align} \mathrm{QFI}(\psi^{(L)}_{\phi})&:= \bra{+}^{\otimes N} (\mathcal{L}^{(L)}_{\phi=0})^{2} \ket{+}^{\otimes N} \nonumber \\ &= L^{2}\mathrm{QFI}^{(1)}(\chi,t) + N(L-1)^{2} \nonumber \\ &{}+ 4L(L-1)\left( \bra{+}^{\otimes N} e^{iC}J_{y}e^{-iC}J_{y} \ket{+}^{\otimes N} +c.c \right) \end{align} \end{small} and the last term can be evaluated explicitly by using $e^{iaJ_{z}^{2}}J_{y}e^{-iaJ_{z}^{2}} = {1\over 2i}J_{+}e^{2ia(J_{z}+{1\over 2})} + h.c.$ \end{proof} A generalization of (\ref{eqn:lla}) that allows one-axis twisting to alternate with rotations is given by \begin{equation} \ket{\psi_{\phi}^{(L)}(\vec{a})}=e^{i\phi J_{y}} \prod_{j=1}^{L} e^{-i\phi J_{y}} e^{ia_{j,2} C}e^{-i\phi J_{y}}e^{ia_{j,1} C} \ket{\zeta =1} \label{eqn:lla2} \end{equation} where $\vec{a}:=(a_{L,2},a_{L,1},a_{L-1,2},a_{L-1,1},\ldots)$ is the row vector of parameters. Defining the partial sums $\varphi_{\ell}=\sum_{j=1}^{\ell}\vec{a}_{j}$ allows one to evaluate \begin{align} N^{2}(\Delta \phi)^{2}\big\vert_{\phi =0}&={A(\varphi_{2L})\over B(\lbrace \varphi_{j}\rbrace_{j=1}^{2L})}\nonumber \\ A(\varphi_{2L})&= 2(N+1)-2(N-1)\cos^{N-2}(2\varphi_{2L}) \nonumber \\ B(\lbrace \varphi_{j}\rbrace_{j=1}^{2L}) &= N(N-1)^{2}\left( \sum_{j=1}^{2L-1}\sin \varphi_{j}\left( \cos^{N-2}\varphi_{j} \right.\right. \nonumber \\ &{} \left. \left. + \cos^{N-2}(2\varphi_{2L}-\varphi_{j}) \right)\vphantom{\sum_{j=1}^{2L-1}}\right)^{2} \label{eqn:prec2} \end{align} Solving for extrema of (\ref{eqn:prec2}) without restriction of the protocol (\ref{eqn:lla2}) are complicated, even when the linearization method in Theorem \ref{th:1} is used. However, constraining each layer in (\ref{eqn:lla2}) to satisfy the twist-untwist condition $a_{\ell,1}=-a_{\ell,2}$ for all layers $\ell$ results in a simplification of the partial sums, viz., $\varphi_{2\ell}=0$ and $\varphi_{2\ell+1}=a_{L-\ell,2}$. One then finds that (\ref{eqn:prec2}) is minimized for equal strength twist-untwist in each layer, i.e., $\varphi_{2\ell+1}=\tan^{-1}{1\over \sqrt{N-2}}$. \section{Finite range twist-untwist protocols\label{sec:fr}} Rydberg-Rydberg atom interactions between trapped neutral atoms provide a platform for scalable many-body entanglement generation via spin-squeezing \cite{PhysRevLett.112.103601}. For $N$ atoms on a one-dimensional lattice with periodic boundary condition, a finite range, two-local Hamiltonian generalizing the one-axis twisting generator $J_{z}^{2}$ is given by \begin{align} H_{K}&={1\over 4} \sum_{j=0}^{N-1}\sum_{\substack{ i= j-K \text{ mod } N \\ i\neq j}}^{ j+K \text{ mod } N } V_{j,i} Z_{j}\otimes Z_{i} \label{eqn:h1} \end{align} where $K$ is an integer in $[1,{N\over 2})$ representing the interaction range, and where $Z_{j}$ denotes the Pauli $Z$ matrix acting on atom $j$ and the identity operator on all other atoms. A twist-untwist protocol with respect to $H_{K}$ is defined as in (\ref{eqn:lla}) with the substitution $J_{z}^{2}\rightarrow H_{K}$. For such a protocol, the expression for the denominator of (\ref{eqn:prec}) can be calculated analytically by using the identity \begin{align} &{}e^{i\chi t H_{K}}(\sigma_{+})_{r}e^{-i\chi t H_{K}} \nonumber \\ &{} =(\sigma_{+})_{r} \exp \left[ {i\chi t \sum_{\substack{ i=r-K \text{ mod } N \\ i\neq r }}^{ i=r+K \text{ mod } N }V_{r,j}Z_{j}} \right] . \end{align} where $(\sigma_{+})_{r}$ denotes the $\sigma_{+}$ matrix acting on atom $r$ and the identity operator on all other atoms. 
The result is \begin{small} \begin{align}&{}\del_{\phi}\langle \psi_{\phi}\vert J_{y}\vert \psi_{\phi}\rangle\big\vert_{\phi=0} = \nonumber \\ &{} {1\over 2}\sum_{r=0}^{N-1} \sum_{\substack{\ell = -K \\ \ell \neq 0}}^{K}\left[ \sin \left( \chi V_{r,r-\ell}t \right) \prod_{\substack{ i=-K \\ i\neq \ell }}^{ i=K }\cos \left( \chi V_{r,r-i}t \right) \right] \label{eqn:denom} \end{align} \end{small} where in the above equation, $r-\ell$ and $r-i$ are modulo $N$. We now consider the case of a constant, finite range interaction \begin{equation}V_{i,j}=\begin{cases} 1 & 0<\vert i-j \vert \le K \\ 0 &\vert i-j \vert > K \text{ or } i=j \end{cases},\label{eqn:knng}\end{equation} where the inequalities are interpreted modulo $N$. In this case, $V_{i,j}$ is the adjacency matrix of the $K$-nearest neighbors graph. For this choice of $V_{i,j}$, the expression (\ref{eqn:denom}) becomes $NK\sin \chi t \cos^{2K-1}\chi t$, which obtains a maximum at $\chi t=\tan^{-1}\sqrt{1\over 2K-1}$. At this maximum, one obtains the minimal value of $(\Delta \phi)^{2}\vert_{\phi=0}={1\over 2NK\left(1-{1\over 2K}\right)^{2K-1}}$. Further, we obtain an analytical formula for (\ref{eqn:prec}) for the general protocol \begin{equation}\ket{\psi_{\phi}}=e^{ia_{2}H_{K}}e^{-i\phi J_{y}}e^{ia_{1}H_{K}}\ket{+}^{\otimes N}, \label{eqn:fgfg} \end{equation} restricting to the regime of short-range interactions $K\le {N-2\over 4}$. The formula is given in Appendix \ref{sec:app1}. In analogy with Proposition \ref{prop:aaa}, one can use the aforementioned formulas (\ref{eqn:prec4}) to show that the ratio $\text{QFI}^{-1}/(\Delta \phi)^{2}\vert_{\phi=0}$ evaluated at the critical interaction time $\chi t = \tan^{-1}{1\over \sqrt{2K-1}}$ asymptotes to $(e+e^{-1}-2)^{-1}\approx 0.92$ as $K\rightarrow \infty$. Therefore, the one-shot quantum Cram\'{e}r-Rao bound is asymptotically only about 8\% lower that the empirical error achieved using $J_{y}$ measurement and propagation of error. Further, the formulas (\ref{eqn:prec4}) allow to prove asymptotic optimality for finite-range one-axis twist-untwist protocols among the protocols (\ref{eqn:fgfg}). \begin{theorem} Let $(\Delta \phi)^{2}\big\vert_{\phi=0}$ be defined with respect to $\ket{\psi_{\phi}}$ as in (\ref{eqn:fgfg}). Further, let $K=r(N-2)$ for a fixed locality parameter $0<r\le {1\over 4}$ and let $a_{1}$ and $a_{2}$ be functions of $N$ that go to zero as $N\rightarrow \infty$. Then, as $N\rightarrow \infty$ the unique critical point of $N^{2}(\Delta \phi)^{2}\big\vert_{\phi=0}$ is given by \begin{align} {a_{2}\over a_{1}}&\sim -{ 8K^{2}-2K-1\over 8K^{2}-3K-{1\over 2} } \nonumber \\ a_{1}a_{2}&\sim {-1\over 2K-1} \label{eqn:th22} \end{align} \label{th:2} \end{theorem} \begin{proof} The condition $K=r(N-2)$ for a fixed $0<r\le {1\over 4}$ allows to apply (\ref{eqn:prec4}) to calculate $N^{2}(\Delta \phi)^{2}\big\vert_{\phi=0}$. The assumption of asymptotically vanishing $a_{1}$, $a_{2}$ allows to substitute the trigonometric functions appearing in the equation $\nabla_{\vec{a}}\left( N^{2}(\Delta \phi)^{2}\big\vert_{\phi=0} \right)=0$ by their first order Maclaurin expansions as in Theorem \ref{th:1}. Applying this to $\del_{a_{1}} \left( N^{2}(\Delta \phi)^{2}\big\vert_{\phi=0} \right)=0$ gives the first asymptotic criticality condition. The second asymptotic criticality condition is obtained by combining $\del_{a_{1(2)}} \left( N^{2}(\Delta \phi)^{2}\big\vert_{\phi=0} \right)=0$. \end{proof} Note that in Theorem \ref{th:2} we have taken $K$ to be a function of $N$. 
Unlike the $N^{-1/2}$ decay of $a_{1(2)}$ in the case of asymptotically optimal twist-untwist protocols for full range interactions in Theorem \ref{th:1}, a $K^{-1/2}$ decay is observed in the case of range $K$ interactions. As a consequence, the rate of convergence of the optimal protocol of the form (\ref{eqn:fgfg}) to a finite-range one-axis twist-untwist protocol is controlled by the locality parameter $r$. It is also possible to obtain formulas analogous to (\ref{eqn:prec4}) and (\ref{eqn:th22}) in dimension $D$ when the size of the interaction region scales as $\mathcal{O}(r^{D}N^{D})$. However, we have not observed any change in the rate of convergence to optimality with respect to the locality parameter $r$. For instance, in the case of an $N\times N$ square lattice with periodic boundary condition and with constant interaction on squares of $K\times K$ sites (with $K$ odd and $K\le {N-2\over 4}$), we find that the optimal one-axis twisting parameters satisfy an asymptotic relation of the form \begin{equation} {a_{2}\over a_{1}}\sim -{cK^{4}+f(K)\over cK^{4}+g(K)} \end{equation} with $f(K)$, $g(K) \in \mathcal{O}(K^{3})$. \subsection{Translation-invariant finite range twist-untwist protocols\label{sec:gen}} We now consider physical systems that exhibit Bose statistics as in Section \ref{sec:ao}, but have spatial interactions inherited from the system of distinguishable spins in Section \ref{sec:fr}. Specifically, we consider the physical scenario of two-level bosonic atoms in a ring-shaped optical lattice of $N$ sites. The internal states of the bosons are assumed to interact pairwise (e.g., magnetically) over a distance of $K$ sites. The two-body interaction can be written as a Heisenberg model by using the boson operators $a$ and $b$ for the internal states: \begin{equation} {1\over 4}\sum_{j=0}^{N-1}\sum_{\substack{ i= j-K \text{ mod } N \\ i\neq j}}^{ j+K \text{ mod } N } V_{j,i} \left( a^{\dagger}_{j}a_{j} -b_{j}^{\dagger}b_{j}\right) \left( a^{\dagger}_{i}a_{i} -b_{i}^{\dagger}b_{i}\right). \label{eqn:hp} \end{equation} We assume that $a^{\dagger}_{j}a_{j} +b_{j}^{\dagger}b_{j}=1$ for all $j$, so that the sites have unit occupancy. When restricted to this subspace, it is clear that (\ref{eqn:hp}) is equal to $H_{K}$ in (\ref{eqn:h1}). However, it is possible to further restrict the interaction Hamiltonian (\ref{eqn:hp}) to describe dynamics in the translation invariant subspace of spin waves, i.e., the states spanned by the basis \begin{align} &{}\lbrace \ket{0,1,\ldots,0,1} \rbrace \cup \nonumber \\ &{} \lbrace \sum_{\substack{i_{M}<\ldots <i_{1}\\i_{\ell}=0,\ldots,N-1}}\prod_{\ell=1}^{M}a_{i_{\ell}}^{\dagger}b_{i_{\ell}} \ket{0,1,\ldots,0,1} \rbrace_{M=1}^{N} \label{eqn:swb} \end{align} where $M$ is the excitation number of the spin wave and $\ket{n_{a,1},n_{b,1},\ldots,n_{a,N-1},n_{b,N-1}}$ is an insulating state with $a_{j}^{\dagger}a_{j}=n_{a,j}$ and $b_{j}^{\dagger}b_{j}=n_{b,j}$. In terms of matrix elements, the Hamiltonian (\ref{eqn:hp}) after the projection to the spin wave subspace is equal to the Hamiltonian $\tilde{H}_{K}:= P_{B}H_{K}P_{B}$, where $P_{B}$ is the projection of $(\mathbb{C}^{2})^{\otimes N}$ to the symmetric subspace. For simplicity, we again restrict to the constant interaction potential $V_{i,j}=1$. In the special case of $N\equiv 1\text{ mod }2$, $K={N-1\over 2}$, and $V_{i,j}=1$, (\ref{eqn:h1}) is equal to $J_{z}^{2}-{N\over 4}$, so no projection is needed.
For the general case, one finds that the matrix elements of $\tilde{H}_{K}$ in the orthonormal basis of Dicke states are given by \begin{align} \langle \psi_{n'}\vert \tilde{H}_{K}\vert \psi_{n}\rangle &= {\delta_{n,n'}\over {N\choose n}}\sum_{x:\text{Ham}(x)=n}\langle x \vert H_{K}\vert x\rangle \nonumber \\ \ket{\psi_{n}}&:= {1\over \sqrt{N\choose n}}\sum_{x:\text{Ham}(x)=n}\ket{x} \, , \, n=0,\ldots,N \end{align} where $\ket{x}$ is a state in the computational basis and $\text{Ham}(x)$ is its associated Hamming weight. The twist-untwist protocol in this subspace is defined by the parameterized state \begin{equation} \ket{\psi_{\phi}}:=e^{i\chi t \tilde{H}_{K}}e^{-i\phi J_{y}}e^{-i\chi t \tilde{H}_{K}}\ket{+}^{\otimes N} \label{eqn:protf} \end{equation} where the state $\ket{+}^{\otimes N}$ corresponds to a superposition of the spin wave basis states in (\ref{eqn:swb}). Because the exact computation of the matrix elements of $\tilde{H}_{K}$ takes exponential time in $N$, it is useful to define a model bosonic Hamiltonian for $\tilde{H}_{K}$ that helps to analyze the metrological gain obtained in the protocol (\ref{eqn:protf}). Specifically, consider the Hamiltonian \begin{align} \tilde{H}_{K}^{(2)}&={1\over 4} \sum_{j=0}^{N-1}\sum_{\substack{ i= j-K \text{ mod } N \\ i\neq j}}^{ j+K \text{ mod } N } V_{j,i} P_{B}Z_{j}P_{B}Z_{i}P_{B}. \label{eqn:h2} \end{align} The properties of $\tilde{H}_{K}^{(2)}$ that make it useful as a model interaction for $\tilde{H}_{K}$ are established in the following proposition. \begin{proposition} If $V_{i,j}=1$, then $\tilde{H}_{K}^{(2)}={2KJ_{z}^{2}\over N}$. If $V_{i,j}>0$ for all $i,j$, then $\Vert \tilde{H}_{K}^{(2)}\Vert = \Vert\tilde{H}_{K} \Vert$. \end{proposition} \begin{proof} First, note that $P_{B}Z_{j}P_{B}={1\over N}\sum_{i=0}^{N-1}Z_{i}$, which is proven by considering computational basis states $\ket{x}, \ket{x'}$: if $\text{Ham}(x)\neq \text{Ham}(x')$ then $\langle x'\vert P_{B}Z_{j}P_{B}\vert x\rangle =0$; if $\text{Ham}(x)= \text{Ham}(x')=n$ then \begin{align} \langle x'\vert P_{B}Z_{j}P_{B}\vert x\rangle &= {1\over {N\choose n}}\sum_{x: \text{Ham}(x)=n}(-1)^{x_{j}} \nonumber \\ &= {{N-1\choose n} - {N-1\choose n-1} \over {N\choose n}} \nonumber \\ &={1-{2n\over N}} \nonumber \\ &=\langle N-n,n\vert {2J_{z}\over N}\vert N-n,n\rangle \label{eqn:ooo} , \end{align} and ${2J_{z}\over N}={1\over N}\sum_{i=0}^{N-1}Z_{i}$, which proves the first statement. Note that the numerator of the second line of (\ref{eqn:ooo}) is the difference between the number of ways for $n$ ones ($n-1$ ones) to appear in $x$ given that $x_{j}=0$ ($x_{j}=1$). To prove the second statement, note that for $V_{i,j}>0$, \begin{align} \Vert \tilde{H}_{K}^{(2)}\Vert &=\left( \ket{0}^{\otimes N},\tilde{H}_{K}^{(2)}\ket{0}^{\otimes N} \right) \nonumber \\ &=\left( \ket{0}^{\otimes N},\tilde{H}_{K}\ket{0}^{\otimes N} \right)\nonumber \\ &= \Vert \tilde{H}_{K}\Vert. \end{align} \end{proof} Numerical computation of $(\Delta \phi)^{2}\vert_{\phi=0}$ for the protocol (\ref{eqn:protf}) using both $\tilde{H}_{K}$ and $\tilde{H}^{(2)}_{K}$ for $N=16$ is shown in Fig.~\ref{fig:ooo}. The $K^{-1/2}$ decay of the optimal twist-untwist parameters which appeared in the analysis of the protocol (\ref{eqn:fgfg}) for distinguishable two-level atoms is not observed. Instead, we conclude that the translation invariant twist-untwist protocol (\ref{eqn:protf}) effectively converts a finite interaction range to a multiplicative factor in the interaction strength.
This results in longer interaction times required to reach maximal sensitivity for short-range bosonic twist-untwist protocols, but the maximal sensitivity itself is independent of the interaction range. \section{Discussion} Our theorems show that the twist-untwist protocol (\ref{eqn:iii}), and its generalization to constant, finite range one-axis twisting generators, is asymptotically optimal among protocols that apply two calls to asymptotically weak one-axis twisting evolutions separated by a call to the rotation parameter of interest. We expect that our proof methods also allow one to obtain analogous results for more general spin squeezing interactions, e.g., two-axis twisting, or twist-and-turn generators \cite{PhysRevA.97.053618}. In practice, the assumption of perfect generation of the Bose-Einstein condensed initial state $\ket{+}^{\otimes N}$ is an experimentally demanding one. Further, even for large $N$, preparing a high-fidelity one-axis twisted state for times of order $O(N^{-1/2})$ is another practical challenge. The present work provides a foundation for future analyses of imperfect or noisy twist-untwist protocols for atomic interferometry. \acknowledgements The authors were supported by the Laboratory Directed Research and Development (LDRD) program of Los Alamos National Laboratory (LANL) under project number 20210116DR. Michael J. Martin was also supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Science Center. Los Alamos National Laboratory is managed by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy under Contract No. 89233218CNA000001.
\section{Introduction} For positive integers $k$ and $n$ we are given a finite multiset of $n$ many $k$-tuples\footnote{A $k$-tuple simply denotes a set of size $k$, e.g., $\{1,2,3\}$ is a $3$-tuple, also referred to as a triple. Note that repetition of elements is not allowed and the order of the elements does not matter.} of characters from an alphabet such that every $k$-tuple consists of $k$ different characters. From now on a `set of $k$-tuples' always refers to such a finite multiset of $k$-tuples. We also refer to the characters as elements. Two sets of $k$-tuples are said to be \emph{equivalent} if there is a bijection between their alphabets which induces a bijection between the two sets of $k$-tuples. We do not want to distinguish equivalent sets of $k$-tuples, thus without loss of generality we can assume that the elements are positive integers from $[m]=\{1,\dots,m\}$ for some $m$ and each of these $m$ numbers is present in at least one $k$-tuple. If a $k$-tuple does not contain the element $i$ we say that the $k$-tuple \textit{avoids} $i$ (e.g., $\{1,2,3\}$ avoids $4$ but does not avoid $2$). A $c$-coloring\footnote{A $c$-coloring of a set is a mapping from this set to a set of $c$ colors (which may, e.g., be denoted by names like red and blue or by the numbers from $[c]$).} of a set of tuples (of numbers from $[m]$) is a \textit{nice $c$-coloring} if for each of the $c$ colors and for every $i\in [m]$ there is a tuple of that color that avoids $i$. Similarly, a partial $c$-coloring of a set of tuples (that is, not all tuples need to be colored) is a \textit{nice partial $c$-coloring} if for each of the $c$ colors and for every $i\in [m]$ there is a tuple of that color that avoids $i$. Notice that a nice partial $c$-coloring can always be extended to a nice $c$-coloring by coloring arbitrarily all the tuples that are uncolored in the partial $c$-coloring. We are in particular interested in nice two-colorings of triples, that is, our aim is to two-color (with colors red and blue) a set of triples such that for each of the two colors and for every $i\in [m]$ there is a triple of that color that avoids $i$. Our main result is a characterization of the sets of triples that admit a nice (partial) $2$-coloring. Section \ref{sec:main} contains the characterization, Section \ref{sec:proof} its proof, while in Section \ref{sec:partial} we give a linear (in the number of triples) time algorithm for finding such a coloring if it exists. We further extend this result, and (without having a characterization) we give an algorithm for finding a nice $c$-coloring for every $c$ and $k$ which runs in linear time (in the number of $k$-tuples). Section \ref{sec:partial} also considers the existence of nice partial colorings that color only a small number of the $k$-tuples. This research was originally motivated by a real-life scheduling problem which can be phrased as a matching problem; this connection is discussed in Section \ref{sec:matching}. \section{Main results}\label{sec:main} We are mainly interested in characterizing sets of $k$-tuples that admit a nice $c$-coloring for different values of $c$ and $k$ (even more generally we could have non-uniformly sized tuples). Furthermore, we want efficient algorithms to decide if such a $c$-coloring exists and, if so, to find one.
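To make the definition concrete, the following short sketch (function and variable names are ours and purely illustrative) checks whether a given, possibly partial, $c$-coloring of a set of tuples is nice; uncolored tuples are simply ignored:
\begin{verbatim}
def is_nice_coloring(tuples_, colors, c, m):
    """colors[t] is a color in range(c), or None if tuple t is uncolored.
    Nice: for every color and every element i in [m], some tuple of that
    color avoids i."""
    for color in range(c):
        cls = [tup for tup, col in zip(tuples_, colors) if col == color]
        for i in range(1, m + 1):
            if not any(i not in tup for tup in cls):
                return False
    return True

# A set of six triples with a nice two-coloring (colors 0 and 1)
triples = [{1, 2, 3}, {1, 4, 5}, {2, 4, 6}, {3, 5, 6}, {1, 2, 6}, {3, 4, 5}]
print(is_nice_coloring(triples, [0, 1, 0, 1, 1, 0], c=2, m=6))   # True
\end{verbatim}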
Irrespective of the size of the tuples, for every $c$ a trivial necessary condition for having a nice $c$-coloring is that all elements must be missed from at least $c$ many tuples, that is, the set of tuples is $c$-fair: \begin{definition} A set of tuples (on elements from $[m]$) is \emph{$c$-fair} if each element (of $[m]$) is missed from at least $c$ many tuples. \end{definition} The case $c=1$ is trivial, in this case a set of tuples admits a nice $1$-coloring if and only if every element is missed from at least one tuple (i.e., the set of tuples is $1$-fair). The case $c=2$ and $k=3$ is already non-trivial. For the existence of a nice two-coloring of the triples it is again a trivial necessary condition that every $i\in [m]$ is avoided by at least two triples (i.e., the set of triples is $2$-fair). During the $9$th Emléktábla Workshop Cechl\'arov\'a \cite{perscomm} asked what are the sufficient conditions for the existence of a nice two-coloring. For brevity a triple $\{x,y,z\}$ is abbreviated as $xyz$ when it does not lead to confusion (e.g., $\{1,2,3\}$ is written simply as $123$). We explicitly define again the $2$-fair property for a set of triples: \begin{definition} Let $T$ be a set of $n$ triples (of positive integers). The triples containing $i$ are denoted by $T_i$. $T$ is \emph{fair} if for every $i$ there are two triples that are not in $T_i$ (i..e, $T$ is $2$-fair). \end{definition} Given a not nice two-coloring, we say that a number $i$ makes it not nice if the triples in one of the color classes all contain $i$. \begin{definition} A set of $n$ triples is called \emph{special} if and only if it contains triples of the following form: $n-3$ copies of the triple $123$ plus three more triples, $1**,2**$ and $3**$, where the $*$'s denote arbitrary numbers different from $1,2,3$. A set of $n$ triples which is not special is called \emph{non-special}. \end{definition} Observe that a special set of $n$ triples does not admit a nice two-coloring. For $n\le 3$ no set of $n$ triples can have a nice two-coloring as one of the color classes contains at most one triple. Clearly, for $n\ge 4$ being fair and non-special are both necessary conditions for a set of triples to admit a nice two-coloring. We prove that for $n\ge 6$ these conditions are also sufficient. This was conjectured by Salia (for $n\ge 8$) \cite{perscomm}. We remark that for $n=4,5$ there exist fair non-special sets of triples that nevertheless do not admit a nice two-coloring. E.g., for $n=4$ the set of triples $\{123,145,245,678\}$ and for $n=5$ the set of triples $\{123,124,134,234,567\}$. As there are only finite many triples for $n=4,5$, we omit to list all which admit a nice two-coloring. \begin{theorem}\label{thm:main} A set of $n\ge 6$ triples admits a nice two-coloring if and only if it is fair and non-special. \end{theorem} Furthermore, we show a linear (in $n$) time algorithm for any $c$ and $k$: \begin{theorem}\label{thm:linearck} For any fixed $c,k$, given a set of $n$ many $k$-tuples, there is an $O(n)$ time algorithm to check if a nice $c$-coloring exists which also finds one if it exists (the dependence on $c$ and $k$ is hidden in the $O$ notation). \end{theorem} The variants of Theorem \ref{thm:main} and Claim \ref{claim:linear} about partial colorings are stated in Section \ref{sec:partial}. \smallskip Graph coloring is a recurring tool in (sport) event scheduling (e.g., \cite{LT,JUBW}). 
The original motivation of our research is also a (real life) event scheduling problem which can be phrased as a certain matching problem which in turn can be solved using our coloring results. This connection is discussed in detail in Section \ref{sec:matching}. \subsection{Consequences about coloring hypergraphs} \smallskip To put our results in additional context, we phrase our results also as statements about proper and polychromatic coloring certain hypergraphs. A coloring of the vertices of a hypergraph is \emph{proper} if no hyperedge is monochromatic. A $c$-coloring of the vertices is \emph{polychromatic} if every hyperedge contains a vertex with each of the $c$ colors. Notice that for $c=2$ a coloring is proper if and only if it is polychromatic but for $c\ne 2$ the two conditions differ. Given a set $T$ of $n$ triples with elements from $[m]$, let $H_T$ be the multi-hypergraph whose vertices correspond to the triples and for each $i\in [m]$ there is a hyperedge $e_i$ containing exactly those vertices for which the corresponding triple \textit{does not contain} $i$. Note that every vertex is contained in exactly $m-3$ hyperedges. It is easy to see that this mapping $T\rightarrow H_T$ from sets of $n$ triples on elements from $[m]$, to multi-hypergraphs with $n$ vertices and $m$ hyperedges that have all degrees equal to $m-3$, is in fact a bijection. It is also easy to see that a nice two-coloring of the triples of $T$ corresponds to a proper two-coloring of the vertices of $H_T$. With this notation Theorem \ref{thm:main} is equivalent to the following statement: \begin{theorem}\label{thm:mainhg} Given a multi-hypergraph $H$ with $n$ vertices and $m$ hyperedges such that every vertex has degree $m-3$, $H$ admits a proper two-coloring if and only if every hyperedge of $H$ has size at least $2$ and $H$ is triangle-free\footnote{A hypergraph $H$ is triangle-free if in $H$ there are no three vertices $a,b,c$ for which $\{a,b\},\{a,c\},\{b,c\}$ are hyperedges (of size $2$) in $H$.}. \end{theorem} Notice that these conditions are trivially necessary, as the existence of a hyperedge of size at most $1$ or the existence of a triangle immediately prevents the hypergraph from admitting a proper coloring. Theorem \ref{thm:main} implies that these simple conditions are also sufficient. Theorem \ref{thm:linearck} can be also translated to the language of hypergraphs: \begin{theorem}\label{thm:linckhg} For any fixed $c,k$, given a multi-hypergraph $H$ with $n$ vertices and $m$ hyperedges such that every vertex has degree $m-k$, there is an $O(n)$ time algorithm\footnote{We assume that the multi-hypergraph is stored such that for each vertex the $k$ hyperedges \emph{not} containing this vertex is given.} to check if $H$ admits a polychromatic $c$-coloring which also finds one if it exists. \end{theorem} In general it is well known that proper two-colorability of a hypergraph is $NP$-complete \cite{lovasz,garey}. In contrast to this, Theorem \ref{thm:mainhg} states that there is even a simple characterization of those hypergraphs in which the degree of every vertex is $3$ less than the number of hyperedges and that are proper two-colorable. 
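To illustrate the correspondence $T\rightarrow H_{T}$ described above, a small sketch (for exposition only, reusing the example triples from the earlier sketch) builds the multi-hypergraph and confirms that the nice two-coloring of the triples is a proper two-coloring of its vertices:
\begin{verbatim}
def hypergraph_from_triples(triples, m):
    """H_T: vertices are triple indices; hyperedge e_i collects the
    triples that do not contain element i."""
    return {i: {v for v, tup in enumerate(triples) if i not in tup}
            for i in range(1, m + 1)}

def is_proper_2coloring(hyperedges, colors):
    # proper: no hyperedge is monochromatic (empty/singleton edges fail)
    return all(len({colors[v] for v in e}) >= 2 for e in hyperedges.values())

triples = [{1, 2, 3}, {1, 4, 5}, {2, 4, 6}, {3, 5, 6}, {1, 2, 6}, {3, 4, 5}]
H_T = hypergraph_from_triples(triples, m=6)
print(is_proper_2coloring(H_T, [0, 1, 0, 1, 1, 0]))   # True
\end{verbatim}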
Furthermore, for every $c$ and $k$, if in a hypergraph the degree of every vertex is exactly $k$ less than the number of hyperedges, Theorem \ref{thm:linckhg} gives a linear time algorithm (in terms of the number of vertices) to decide if the hypergraph admits a polychromatic $c$-coloring (which is equivalent to a proper $2$-coloring when $c=2$) and also finds the coloring when it exists. \section{Proof of the characterization} \label{sec:proof} The proof of Theorem \ref{thm:main} is based on the following two lemmas. \begin{lemma}\label{lem:tight} If in a fair set of non-special triples there are at least three numbers that appear exactly $n-2$ times then the set admits a nice two-coloring. \end{lemma} \begin{proof} There are three numbers, wlog. the numbers $1,2,3$, that appear $n-2$ times. This implies that there are $n-6$ triples of the form $123$, removing these triples we get a set $T$ of $6$ triples in which the numbers $1,2,3$ appear exactly $4$ times. If we can find a nice two-coloring of these $6$ triples than an arbitrary extension of this coloring to the original set gives a nice two-coloring of the original set. We have that $|T_1\cap T_2|,|T_1\cap T_3|,|T_2\cap T_3|\ge 2$. \begin{enumerate} \item There exist two numbers $i,j$ out of $1,2,3$, for which $T_i= T_j$. Then wlog. there are three cases, listed in Table \ref{tab:4a}. \begin{table}[h] \centering \begin{tabular}{lllllllllllllllllll} 1 & 2 & 3 & & & & & & 1 & 2 & 3 & & & & & & 1 & 2 & 3 \\ 1 & 2 & 3 & & & & & & 1 & 2 & 3 & & & & & & 1 & 2 & 3 \\ 1 & 2 & 3 & & & & & & 1 & 2 & 3 & & & & & & 1 & 2 & * \\ 1 & 2 & 3 & & & & & & 1 & 2 & * & & & & & & 1 & 2 & * \\ * & * & * & & & & & & * & * & 3 & & & & & & * & * & 3 \\ * & * & * & & & & & & * & * & * & & & & & & * & * & 3 \end{tabular} \caption{Case 1.} \label{tab:4a} \end{table} In all three cases we color the first, the third and the last triple red and the rest of the triples blue to get a nice coloring. \item $|T_1\cap T_2|=|T_1\cap T_3|=|T_2\cap T_3|=2$. \begin{table}[h] \centering \begin{tabular}{lll} 1 & 2 & * \\ 1 & 2 & * \\ 1 & * & 3 \\ 1 & * & 3 \\ * & 2 & 3 \\ * & 2 & 3 \end{tabular} \caption{Case 2.} \label{tab:4b} \end{table} For this case see Table \ref{tab:4b}. If there is no number that appears more than $4$ times then it is easy to see that there exists a nice two-coloring of these $6$ sets. If there exists an $i$ that appears $5$ times, then in the original (fair) set there was an additional triple $123$, in which case it is easy to see that there exists a nice two-coloring of these $7$ sets. Finally, if there exists an $i$ that appears $6$ times, then in the original (fair) set there were two additional triples $123$, and then it is again easy to see that there exists a nice two-coloring of these $8$ sets. \item $2\le |T_1\cap T_2|,|T_1\cap T_3|,|T_2\cap T_2|\le 3$ and not all of $|T_1\cap T_2|,|T_1\cap T_3|,|T_2\cap T_2|$ are equal. This implies that wlog. $|T_1\cap T_2|=2$ while $|T_1\cap T_3|=3$. Then wlog. there are two cases, listed in Table \ref{tab:4c}. \begin{table}[h] \centering \begin{tabular}{lllllllllll} 1 & * & 3 & & & & & & 1 & * & 3 \\ 1 & * & 3 & & & & & & 1 & * & * \\ 1 & 2 & 3 & & & & & & 1 & 2 & 3 \\ 1 & 2 & * & & & & & & 1 & 2 & 3 \\ * & 2 & 3 & & & & & & * & 2 & 3 \\ * & 2 & * & & & & & & * & 2 & * \end{tabular} \caption{Case 3.} \label{tab:4c} \end{table} In the first case if there is no number that appears more than $4$ times then it is easy to see that there exists a nice two-coloring of these $6$ sets. 
If there exists an $i$ that appears $5$ times, then in the original (fair) set there was an additional triple $123$, and then it is easy to see that there exists a nice two-coloring of these $7$ sets. Finally, no number can appear $6$ times as the third triple is $123$. In the second case we color the first, third and the last triple red and the rest of the triples blue to get a nice coloring. \item $|T_1\cap T_2|=|T_1\cap T_3|=|T_2\cap T_3|=3$. Then wlog. there are two cases, listed in Table \ref{tab:4d}. \begin{table}[h] \centering \begin{tabular}{lllllllllll} 1 & 2 & 3 & & & & & & 1 & 2 & 3 \\ 1 & 2 & 3 & & & & & & 1 & 2 & 3 \\ 1 & 2 & 3 & & & & & & 1 & 2 & * \\ 1 & * & * & & & & & & 1 & * & 3 \\ * & 2 & * & & & & & & * & 2 & 3 \\ * & * & 3 & & & & & & * & * & * \end{tabular} \caption{Case 4.} \label{tab:4d} \end{table} In the first case we have a special set of triples, a contradiction. In the second case we color the first and last triple red and the rest of the triples blue to get a nice coloring. \end{enumerate} \end{proof} \begin{lemma}\label{lem:six} A fair non-special set of $6$ triples admits a nice two-coloring. \end{lemma} \begin{proof} We again split the problem into a few cases. The $6$ triples of the set together contain $18$ numbers (with multiplicities). As the set is fair, every number appears at most $4$ times. We distinguish cases based on how many numbers appear exactly $4$ times. \begin{enumerate} \item No number appears $4$ times. In this case there are at most $18/3=6$ numbers that appear $3$ times. We consider only two-colorings where both color classes contain $3-3$ triples (we call such colorings \emph{balanced}) and prove that at least one of them is nice. There are $10$ such colorings (we do not distinguish pairs of colorings with switched colors). A number that appears $3$ times makes exactly one balanced coloring not nice, thus there are at least $10-6=4$ nice two-colorings. \item Exactly one number, wlog. the number $1$, appears $4$ times. We again consider only the $10$ balanced two-colorings and prove that at least one of them is nice. There are at most $\lfloor (18-4)/3\rfloor=4$ numbers which appear $3$ times, each of them makes one coloring not nice. Number $1$ makes $4$ colorings not nice, thus altogether there are at least $10-4-4=2$ nice balanced two-colorings. \item Exactly two numbers, wlog. $1$ and $2$, appear $4$ times. In this case $|T_1\cap T_2|\ge 2$, and there are at most $\lfloor 18-2\cdot 4\rfloor=3$ numbers which appear in exactly three triples. We distinguish some subcases: \begin{enumerate} \item $|T_1\cap T_2|=4$, i.e., $T_1=T_2$. Again we consider only the balanced two-colorings. $1$ and $2$ both make the same four of them not nice while the at most $3$ numbers which appear in exactly three triples make $3$ of them not nice, so still there are at least $10-4-3=3$ nice balanced two-colorings. \item $|T_1\cap T_2|=3$. In this case we have triples $12*,12*,12*,1**,2**,***$ where $*$ are numbers different from $1,2$. Again we consider only the balanced two-colorings. The numbers $1$ and $2$ together make $7$ of them not nice while the at most $3$ numbers which appear in exactly three triples make $3$ of them not nice. Assume first that there is some coincidence among these not nice balanced colorings, then there is at least $10-7-3+1=1$ nice balanced two-coloring. Now assume that all these $10$ not nice balanced colorings are different, then there are numbers, wlog. 
$3,4,5$ such that $|T_3|=|T_4|=|T_5|=3$ and $|T_i\cap T_j|=2$ for every $i=1,2$ and $j=3,4,5$. This implies $|T_1\cap T_2\cap T_j|\ge 1$ for $j=3,4,5$. Then the first three triples must be $123,124,125$. To have $|T_1\cap T_j|=2$ for $j=3,4,5$ we need that the fourth triple containing $1$ contains all of $3,4,5$, a contradiction. \item $|T_1\cap T_2|=2$. Again we consider only the balanced two-colorings. The numbers $1$ and $2$ together make $6$ of them not nice while the at most $3$ numbers which appear in exactly three triples make $3$ of them not nice, so still there are at least $10-6-3=1$ nice balanced two-colorings. \end{enumerate} \item At least three numbers, wlog. the numbers $1,2,3$, appear $4$ times. This case follows from Lemma \ref{lem:tight}. \end{enumerate} \end{proof} We introduce one more notation and then we are ready to prove Theorem \ref{thm:main}. \begin{definition} Let $T$ be a set of $n$ triples (of positive integers). $T$ is \emph{reducible} if we can delete a triple from it such that the remaining set of triples is fair, otherwise it is \emph{irreducible}. \end{definition} Note that a reducible set of triples is by definition necessarily fair. \begin{proof}[Proof of Theorem \ref{thm:main}] We have seen earlier that the conditions are necessary, so we want to prove that they are also sufficient. That is, we want to find a nice two-coloring of a fair non-special set $T$ of $n\ge 6$ triples. If $T$ is reducible then we delete one of the triples such that the remaining set is still fair. We keep doing this until we get an irreducible set $T'$ or a set $T'$ with exactly $6$ triples. \begin{enumerate} \item $T'$ is non-special. If $T'$ has $n'=6$ triples then by Lemma \ref{lem:six} we get a nice two-coloring of $T'$. Otherwise $T'$ is irreducible. If $T'$ is irreducible, deleting an arbitrary triple $t$ makes the set not fair, thus there is a number (wlog. the number $1$) which appears $n-2$ times (and does not appear in $t$). Next, deleting a triple $t'$ which contains $1$ would make the set not fair, thus there is a number which appears $n-2$ times and does not appear in $t$, thus this number is different from $1$, wlog. $2$. Finally, as $n\ge 6$, there is a triple $t''$ in which $1$ and $2$ both appear. Deleting $t''$ would also make the set not fair thus there is a number different from $1$ and $2$, wlog. $3$, which also appears $n-2$ times. Thus, there are three numbers that appear $n-2$ times in the fair set of triples $T'$, so by Lemma \ref{lem:tight} we get a nice two-coloring of $T'$. In both cases, the nice two-coloring of $T'$ can be extended arbitrarily to a nice two-coloring of $T$. \item $T'$ is special. $T'$ is then a special fair set of $n'\ge 6$ triples. Wlog. $T'$ consists of $n'-3\ge 3$ triples of the form $123$ and three triples, $t_1=1**,t_2=2**,t_3=3**$ (where $*$ denote arbitrary numbers different from $1,2,3$). Now we are interested in the triples that were deleted during the process. Recall that $T$ was a non-special set, thus we must have deleted at least one triple $t$ which is not of the form $123$, thus $t$ avoids at least one of $1,2,3$. Assume wlog. that $t$ avoids $1$. Color $t, t_1$ and one triple $123$ with color red. Color the rest of the triples (including $t_2,t_3$ and another triple $123$) blue, it is easy to check that this coloring is nice, as required. \end{enumerate} \end{proof} We mention that in Theorem \ref{thm:main} we use Lemma \ref{lem:six} only on irreducible sets. 
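Although not needed for the proof, the characterization can be spot-checked by brute force on small instances. The following sketch (helper names are ours; the exhaustive search is only feasible for small $n$) tests fairness and specialness and searches all two-colorings for a nice one:
\begin{verbatim}
from itertools import product

def is_fair(triples, elements):
    return all(sum(i not in t for t in triples) >= 2 for i in elements)

def is_special(triples):
    """n-3 copies of some triple {a,b,c} plus one triple through each of a,b,c."""
    n = len(triples)
    for core in set(map(frozenset, triples)):
        copies = [t for t in triples if frozenset(t) == core]
        rest = [t for t in triples if frozenset(t) != core]
        if len(copies) == n - 3 and len(rest) == 3 and \
           all(len(t & core) == 1 for t in rest) and \
           {next(iter(t & core)) for t in rest} == set(core):
            return True
    return False

def find_nice_2coloring(triples, elements):
    for colors in product((0, 1), repeat=len(triples)):
        classes = [[t for t, c in zip(triples, colors) if c == col]
                   for col in (0, 1)]
        if all(any(i not in t for t in cls)
               for cls in classes for i in elements):
            return colors
    return None

# A special (hence fair but not nicely two-colorable) set of six triples
T = [{1, 2, 3}] * 3 + [{1, 4, 5}, {2, 4, 6}, {3, 5, 6}]
print(is_fair(T, range(1, 7)), is_special(T),
      find_nice_2coloring(T, range(1, 7)))   # expected: True True None
\end{verbatim}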
\section{Algorithms and partial colorings}\label{sec:partial} For general $c$ and $k$, if a nice (partial) $c$-coloring exists of $k$-tuples, then in each color class we can choose at most $k+1$ triples such that coloring these at most $c(k+1)$ $k$-tuples (the rest of the triples can remain uncolored) already has the property of a nice partial $c$-coloring. Indeed, for each color we can choose an arbitrary $k$-tuple of that color, then using that the coloring is nice, we can choose at most $k$ other $k$-tuples of that color avoiding each element in this $k$-tuple, together these at most $c$ times $k+1$ many $k$-tuples are as required. \begin{observation}\label{obs:partialize} If a nice (partial) $c$-coloring exists of a set of $k$-tuples then there is also a nice partial $c$-coloring of the $k$-tuples which uses all colors at most $k+1$ times and the original coloring is an extension of this coloring. Moreover, from each color class of the original $c$-coloring we can fix one $k$-tuple which remains colored in the new nice partial $c$-coloring (with the same color as in the original coloring). Such a nice partial $c$-coloring can be found easily in linear time if the nice (full) $c$-coloring is given. \end{observation} From these using Theorem \ref{thm:main} we get the following: \begin{corollary}\label{cor:mainpartial} Given a set of $n\ge 6$ triples, a nice partial $2$-coloring that colors at most $4$ triples with each of the two colors exists if and only if the set of triples is fair and non-special. \end{corollary} \oref{partialize} implies that there is a $O(n^{c(k+1)})$ time algorithm to check for a set of $n$ $k$-tuples if a nice $c$-coloring exists and find one if it exists. Indeed, it is enough to check the $O(n^{c(k+1)})$ many partial colorings that color $k+1$ $k$-tuples with each color whether any of them is a nice partial $c$-coloring (and if yes, extend it arbitrarily to a nice $c$-coloring). Note that checking any one of these colorings whether it is nice can be done in constant time (dependent on $c$ and $k$). In case $c=1$ we have seen that a set of $k$-tuples has a nice $1$-coloring if and only if it is $1$-fair which can be easily checked in linear time in $n$. If it is $1$-fair then coloring all $k$-tuples with the unique color is a nice coloring. Also, we can easily find in linear time in $n$ a subset of at most $k+1$ many $k$-tuples such that coloring only these is a nice partial $1$-coloring. In case $c=2$ and $k=3$ the above argument gives that in time $O(n^8)$ we can check if a nice $2$-coloring exists of a set of triples and if yes then also find one. For this case we can improve considerably this naive algorithm. Checking that a set of $n$ triples is fair and non-special can be done easily in linear time in $n$. Indeed, being special is very easy to check while testing if a set of triples is $2$-fair, one can choose two arbitrary triples, and only check if the elements present in these two triples are avoided by at least two other triples, as these two triples both avoid all other elements. This and Theorem \ref{thm:main} implies that there is a linear time algorithm to check if a nice $2$-coloring of a set of triples exists. This does not immediately give an algorithm to also find such a coloring. Next we show how the characterization leads to a linear time algorithm for also finding a nice $2$-coloring when it exists. 
\begin{claim}\label{claim:linear} Given a set of $n$ triples, there is an $O(n)$ time algorithm to check if a nice $2$-coloring exists and find one if it exists. \end{claim} \begin{proof} For $n\le 5$ we can check every $2$-coloring in constant time. Given a set $T$ of $n\ge 6$ triples, checking if a set of triples is fair and non-special can be easily done in linear time. If these conditions hold, then we know that there exists a nice $2$-coloring (and otherwise it does not). Assuming that the set of triples has both of these properties, our aim is to find a constant size subset of the triples which already has both of the properties. In order to do that, take two arbitrary triples, $e$ and $f$. As the set is fair, for each element appearing on $e$ or $f$, in linear time we can find two triples that avoid this element. Altogether $e$ and $f$ has at most $6$ different elements, and thus we find at most $12$ triples which together with $e$ and $f$ form the set $T'$ (with size at most $14$). We can also assume that $T'$ has at least $6$ triples as otherwise we add to it arbitrarily some further triples so that this holds. We claim that $T'$ is fair. Indeed, by our construction for each element in $e$ or $f$ there are at least two triples in $T'$ which avoid it, while for every other element both $e$ and $f$ avoid that element. Now we check if $T'$ is non-special, which can be checked in constant time. If yes, we are done. On the other hand, if $T'$ is special then there is a unique triple $g$ that occurs at least $3$ times in $T'$, this can be identified in constant time. As $T$ is not special, in linear time we can find an additional triple from $T\setminus T'$ which is different from $g$, adding this to $T'$ makes it non-special (and it remains to be fair). Finally, having found a constant size (at most $15$) subset $T'$ which is fair and non-special, we can check in constant time all its two-colorings to find one which is nice. This is also a nice partial two-coloring of $T$, which can be extended arbitrarily (in linear time) to a nice two-coloring of $T$. Altogether the algorithm takes $O(n)$ time, as required. \end{proof} In fact there is a linear time algorithm for every $c,k$. Note that for general $c,k$ we do not have a characterization and so the algorithm is based only on the fact that it is enough to find a small partial coloring, this is stated by Theorem \ref{thm:linearck}. \begin{proof}[Proof of Theorem \ref{thm:linearck}] We fix some $c$ and $k$ which are considered to be constants and we are given a set $T$ of $n$ many $k$-tuples. The proof idea is to reduce the size of the problem, that is, we will create a constant size set $R$ of $k$-tuples such that $T$ admits a nice $c$-coloring if and only if $R$ does, moreover, given a nice $c$-coloring of $R$, we can find a nice partial $c$-coloring of $T$ in constant time. Fix an arbitrary subset $S$ of $s=(k+1)(c-1)+1$ many $k$-tuples. If a nice $c$-coloring of $T$ exists then by \oref{partialize} also a nice partial $c$-coloring exists which colors at most $k+1$ sets with each color. In this partial $c$-coloring for some integer $i$ ($0\le i\le c$) we have that among the $k$-tuples in $S$ there are at least $i$ colors present and also there are at least $c-i$ uncolored sets in $S$. We can easily extend this partial coloring to a coloring such that all colors are present on $S$ (for each color missing on $S$ we color one uncolored $k$-tuple of $S$ with this color). 
Summarizing, if there exists a nice $c$-coloring then there exists also a nice $c$-coloring such that all colors appear on $S$. Notice that $s$, the size of $S$, is a constant. We make a list $E'$ of the at most $ks$ elements that appear in the sets of $S$. From now on during the algorithm whenever we see an element not in $E'$, we replace it with a dummy element $*$ (it can happen that a $k$-tuple now contains several $*$'s but it will cause no problems). By this our alphabet is essentially reduced to size at most $ks+1$ (the elements in $E'$ plus $*$), and we get the set of $k$-tuples $T_*$ on this alphabet. Note that the $k$-tuples of $T_*$ are in a natural bijection with $k$-tuples of $T$, which, given a (partial) coloring of $T_*$, defines a partial coloring of $T$. \begin{lemma} A nice partial $c$-coloring of $T_*$ is also a nice partial $c$-coloring of $T$. On the other hand, if $T$ admits a nice partial $c$-coloring then $T_*$ also admits a nice partial $c$-coloring. \end{lemma} \begin{proof} Clearly, by definition of a nice coloring, if we can find a nice $c$-coloring of $T_*$, then the same coloring is also a nice $c$-coloring of the original set of $k$-tuples (indeed, merging elements just makes our task harder). On the other hand we have seen that if $T$ admits a nice partial $c$-coloring then it also admits one in which on $S$ all colors appear. We claim that this coloring is also a nice partial $c$-coloring of $T_*$. The property of a nice coloring requires for each color and each element that there is a $k$-tuple with this color avoiding this element. As we did not merge elements in $E'$, this remains true for every element in $E'$ and every color. Also, it is true for $*$ and every color because for each color any $k$-tuple in $S$ with this color avoids $*$, as required. \end{proof} This lemma shows that it is enough to find a nice partial $c$-coloring of $T_*$. If it does not exist, then $T$ does not have a nice partial $c$-coloring. On the other hand, if it exists, then it is also a nice partial $c$-coloring of $T$. Thus, from now on we restrict our attention to $T_*$. Observe that in a nice partial $c$-coloring, if some $k$-tuples contain the same elements and get the same color, then by uncoloring all but one of them we still get a nice partial $c$-coloring. From constant many elements (that is, $ks+1$) there are only constant many different $k$-tuples that can be generated. We go through the set $T_*$ of $k$-tuples one-by-one and if we already kept $c$ copies of the pending $k$-tuple, then we throw it away, otherwise we keep it. This process can be done in $O(n)$ time, at the end we are left with a set $R$ of constant many $k$-tuples, as each different $k$-tuple generated from the $ks+1$ elements has multiplicity at most $c$. By our previous observation, if $T_*$ has a nice partial $c$-coloring then $R$ also has one, as in each color class every type of $k$-tuples needs to be used at most once, and so in all colors together at most $c$ times. Summarizing, as we promised at the beginning of the proof, we have defined a constant size set $R$ of $k$-tuples which admits a nice (partial) $c$-coloring if and only if $T_*$ does which by the lemma is further equivalent with $T$ admitting a nice (partial) $c$-coloring. Moreover, if such a coloring exists of $R$ then the same coloring is nice for $T_*$ and by the lemma also for $T$. 
As $R$ has constant size, we can brute force check in constant time if it admits a nice $c$-coloring and if it does then we can use that coloring to get a nice partial $c$-coloring of $T$ (which can be easily extended to a $c$-coloring of $T$ in linear time). Altogether the algorithm takes $O(n)$ time, as required. \end{proof} As we stated earlier, we can easily uncolor (in linear time) some $k$-tuples in a nice $c$-coloring such that we get a nice partial $c$-coloring in which all colors are used at most $k+1$ times. Claim \ref{claim:linear} thus implies the following: \begin{corollary}\label{cor:linearpartialck} For any fixed $c,k$, given a set of $n$ many $k$-tuples, there is an $O(n)$ time algorithm to check if a nice partial $c$-coloring exists which uses every color at most $k+1$ times, and which finds one if it exists (the dependence on $c$ and $k$ is hidden in the $O$ notation). \end{corollary} \section{A matching problem application}\label{sec:matching} Here we discuss the real life problem that motivated our research, the matching problem it translates to and how these are connected to our results, as it was presented by Cechl\'arov\'a in the $9$th Emléktábla Workshop Booklet \cite{9EWbooklet} and communicated to me by Jankó \cite{perscomm}. It is about the International Young Physicists' Tournament (IYPT), sometimes referred to as `Physics World Cup', a team-oriented scientific competition between secondary school students. The real-world setup is slightly different from the model regarded here, we only restrict our attention to the model relevant for us. We are given $n$ teams, each chooses in advance $3$ problems to his portfolio (out of a given set of $m$ problems). The teams need to be split into groups of $3$ or $4$ and in each group there are $3$ rounds, and in each round each team of the group presents a problem. It is required that no problem is presented twice within a group in the same round. We are interested in finding conditions and algorithms to see if such a grouping is possible. In a group let us represent the teams and problems as the vertices of a bipartite graph, a problem is connected to a team if it is in its portfolio. In particular, every team has degree $3$. It is easy to see that the problem is equivalent to splitting the teams into groups of $3$ and $4$ and in each group splitting (in other words, coloring) the edges incident to the teams into $3$ matchings. By König's Line Coloring Theorem this can be done if and only if all degrees are at most $3$ in the subgraph of the edges incident to the teams of a given group. This trivially holds for the degrees of the teams, for the degrees of the problems this means that no problem is present in the portfolio of more than $3$ teams in the group. In groups of size $3$ this trivially holds, thus only groups of size $4$ may cause an issue. If $n$ is divisible by $3$ then we do not need such groups, we only need that $n\ge 3$. If $n\equiv 1\mod 3$ then we need that $n\ge 4$ and there needs to be one group of size $4$, which is exactly the partial coloring problem for $c=1$, where the $m$ problems correspond to the elements of $[m]$, the $n$ teams to $n$ triples (a team corresponds to a triple containing the problems choosen by this team) and the unique group of size $4$ to the color class of a partial $1$-coloring. For that we have seen that the trivial necessary and sufficient condition is that the set of triples is $1$-fair. 
Finally, if $n\equiv 2\mod 3$ then we need $n\ge 8$ to be able to split $n$ into sets of size $3$ and $4$. In this case we need two groups of size $4$, which is exactly the partial coloring problem for $c=2$, where the $m$ problems correspond to the elements of $[m]$, the $n$ teams to $n$ triples and the groups of size $4$ to the two color classes of a partial $2$-coloring. Corollary \ref{cor:mainpartial} implies that the necessary and sufficient condition in this case is that the set of triples is $2$-fair and non-special (note that Corollary gives a coloring which uses both colors at most $4$ times, but this can easily be extended to a coloring which uses both colors exactly $4$ times). Thus, we have solved all cases, furthermore, checking the existence of and finding such a coloring can be done in linear time by Corollary \ref{cor:linearpartialck}. \bigskip \noindent \textbf{Acknowledgement} The author is grateful to the organizers and participants of the 9th Emléktábla Workshop, in particular to Katar\'ina Cechlárová, Zsuzsanna Jankó and Nika Salia \cite{perscomm} for their helpful comments. The author also thanks an anonymous reviewer for his comments.
\section{Introduction}\label{sec:intro} Recent detections of high energy neutrinos and gravitational waves (GWs) at extragalactic distances \citep{2016PhRvL.116f1102A, 2018Sci...361..147I} have ushered in a new age of ``multi-messenger'' astronomy \citep{2013RvMP...85.1401A,2019BAAS...51c.250B}. The conventional electromagnetic (EM) branch of astronomy has played an important supporting role, helping to pin-point the sources of the neutrinos and GW emission, and to constrain the physical properties of the progenitors \citep[e.g.][]{2017ApJ...848L..12A, 2018ApJ...864...84K}. The study of these multi-messenger events at radio wavelengths has been particularly rewarding. Important highlights include the detection of the first radio afterglow and direct imaging of relativistic outflow from the merger remnant GW170817 \citep{2017Sci...358.1579H,2018Natur.561..355M,2019Sci...363..968G}, and the imaging of the parsec-scale jet of the possible neutrino source TXS\,0506+056 \citep{2019A&A...630A.103B,2020arXiv200500300L}. In this paper we discuss the radio follow-up of S191216ap, the first astrophysical source that may be both a source of GWs and neutrinos. S191216ap was first reported as a compact binary merger candidate by the LIGO Scientific Collaboration (LSC) and Virgo Collaboration on 16 December 2019 at 21:33:38.473 UTC \citep{2019GCN.26454....1L}. The GW event was initially classified as a likely ``mass gap'' signal, with one component of the binary having a mass between a definitive neutron star and black hole classification. The event was later re-classified with 99\% probability as a binary black hole \citep[BBH:][]{2019GCN.26570....1L}. The final (revised) sky map and distance was posted by the LSC and Virgo, with a 90\% localization region of area 253 deg$^2$ and luminosity distance estimate of 376$\pm$70 Mpc \citep{2019GCN.26505....1L}. The IceCube Neutrino Observatory reported a single muon neutrino in the direction of S191216ap, 43\,s prior to the GW merger and with an overall p-value of 0.6\% (2.5$\sigma$) and an error radius of only $\pm{4}^\circ$ \citep{2019GCN.26460....1I, 2019GCN.26463....1C}. Initially the High-Altitude Water Cherenkov Observatory (HAWC) reported no candidate gamma-ray events at TeV energies \citep{2019GCN.26455....1H}, but in a re-analysis of their data centered on the IceCube error region the HAWC collaboration reported a sub-threshold event 80\,s after the binary coalescence \citep{2019GCN.26472....1H}. This candidate gamma-ray event was found in a 10 sec search and the significance level for this event is 4.6$\sigma$. This corresponds to a gamma-ray flux of about $7.3\times10^{-9}$ TeV$^{-1}$~cm$^{-2}$~s$^{-1}$ at 1~TeV for an intrinsic spectrum with an index of $-2$ (Israel Martinez, priv. comm.). The coordinates\footnote{We note that these coordinates are outside the 90\% credible region for S191216ap \citep[but within the 98\% credible region;][]{2019GCN.26505....1L}. The mean distance of S191216ap at the HAWC location is $286\pm43$ Mpc.} of the HAWC event are RA: 323.53 deg, Dec: 5.23 deg, with the 68\% containment region (radius) of 0.3 degrees \citep[i.e. 0.28 deg$^2$ region;][]{2019GCN.26472....1H}. 
Many high energy observatories were in operation during the S191216ap event, including the ANTARES neutrino detector \citep{2019GCN.26458....1A}, {\it Fermi} GBM \citep{2019GCN.26461....1W}, MAXI GSC \citep{2019GCN.26462....1N}, \textit{Swift}/BAT \citep{2019GCN.26466....1P}, CALET Gamma-Ray Burst Monitor \citep{2019GCN.26481....1S}, AGILE \citep{2019GCN.26486....1V}, AstroSAT \citep{2019GCN.26511....1S}, Insight-HXMT \citep{2019GCN.26569....1L} and Konus-Wind \citep{2020GCN.26835....1R}. A search for events both spatially and temporally coincident with S191216ap failed to find any likely counterpart high energy candidates. A direct quantitative comparison between these non-detections and the HAWC detection is difficult, as most missions did not survey the entire error region, and carried out photon searches in non-overlapping time windows. \begin{figure*} \centering \includegraphics[width=\textwidth]{S191216ap_localization.pdf}\\ \caption{The LIGO/Virgo sky localization of S191216ap (left) shown along with the localization regions for the IceCube (larger circle, 90\% containment) and HAWC events (smaller circle 0.28 deg$^2$, 68\% containment; middle panel). The right panel shows the sky coverage of our VLA follow up observations of the HAWC 68\% containment region. The 0.38 deg$^2$ image mosaic is from our epoch E3 (see Table~\ref{tab:observations}). The larger black circle corresponds to the 0.28 deg$^2$ HAWC region, while the smaller circles correspond to the primary beams (half-power beam width of about 7 arcminutes) of the 37 pointings at C band (central frequency of 6 GHz). The grey scale bar displays the pixel values, going from $-50~\mu$Jy/beam (black) to 63~$\mu$Jy/beam (white).} \label{fig:mosaic} \end{figure*} This possible multi-messenger event set off a wave of deep searches at X-ray, optical/NIR and radio wavelengths within the first week. Search strategies were of two basic types \citep{2013ApJ...767..124N}, i.e. wide-area and galaxy-targeted searches. Nine galaxies were initially identified in the overlapping LIGO/Virgo-HAWC error region \citep{2019GCN.26479....1S} and within the redshift range of S191216ap, a number that dropped to only three galaxies \citep{2019GCN.26507....1A} after the revised LIGO/Virgo skymap was released \citep{2019GCN.26505....1L}. Targeted searches of these galaxies were made from radio to X-rays wavelengths \citep{2019GCN.26478....1X,2019GCN.26487....1S,2019GCN.26488....1Z,2019GCN.26496....1Y,2019GCN.26530....1M}. Likewise, mosaicked observations were made of all or most of the LIGO-Virgo error region \citep{2019GCN.26464....1A,2019GCN.26473....1L,2019GCN.26488....1Z,2019GCN.26528....1D,2019GCN.26563....1M}, or within the overlap region of LIGO-Virgo and IceCube or HAWC \citep{2019GCN.26475....1E,2019GCN.26483....1R,2019GCN.26498....1E,2019GCN.26509....1O,2019GCN.26531....1M,2019GCN.26605....1S}. With the exception of two optical transients from UKIRT for which no follow-up was undertaken \citep{2019GCN.26605....1S}, there were no compelling EM counterparts identified in that first week. Accepting this preliminary identification of S191216ap as a multi-messenger event, we conducted a search for radio counterparts as part of the Jansky VLA mapping of Gravitational Wave bursts as Afterglows in Radio (JAGWAR) program. In \S\ref{sec:obs_proc} we describe the VLA observations, data processing imaging and source catalog generation. 
In \S\ref{sec:var_tran} we describe the search for variable/transient sources, finding no definitive radio counterpart for S191216ap. We end with a discussion of our results and future prospects for detecting the EM counterparts of BBH mergers. \section{Observations and Data Processing}\label{sec:obs_proc} \begin{table*}[htp] \centering \scriptsize \caption{Observing Log} \label{tab:observations} \begin{tabular}{llcrccrrr} \hline\hline No. & Start Date & Epoch & $\Delta$t & Array & RMS & BMAJ & BMIN & BPA \\ & (UT) & & (days) & Config.& ($\mu$Jy/bm) & (\text{$^{\prime\prime}$}) & (\text{$^{\prime\prime}$}) & (deg.) \\ \hline 1 & 2019-Dec-20 22:47:35 & E1 & 4 & D & \\%38 & \\ 2 & 2019-Dec-21 21:49:45 & E1 & 5 & D & 18 & 12.3 & 9.4 & 19 \\%39 & 12.3 & 9.4 & 19 \\ 3 & 2019-Dec-22 00:00:36 & E1 & 5 & D & \\%40 & \\ 4 & 2019-Dec-27 21:39:31 & E2 & 11 & D & \\%43 & \\ 5 & 2019-Dec-27 23:50:21 & E2 & 11 & D & 19 & 11.3 & 9.5 & 16 \\%40 & 11.3 & 9.5 & 16 \\ 6 & 2019-Dec-28 21:22:14 & E2 & 12 & D & \\%36 & \\ 7 & 2020-Apr-23 13:42:13 & E3 & 129 & C & \\%21 & \\ 8 & 2020-Apr-23 15:53:03 & E3 & 129 & C & 14 & 3.9 & 2.9 & 7.5 \\%22 & 3.9 & 2.9 & 7.5 \\ 9 & 2020-Apr-24 13:38:17 & E3 & 130 & C & \\%23 & \\ 10 & 2020-Aug-14 07:44:27 & Follow-up & 242 & B & 9--15 & 1.3 & 0.9 & 35 \\ \hline \multicolumn{9}{p{4.5in}}{Notes: The start date provides the time and date when each observation took place, with $\Delta$t reporting the number of days from the merger. We also report the RMS ($\mu$Jy/bm) for each epoch and the dimensions of the synthesized beam.} \end{tabular} \end{table*} \subsection{VLA Observations} With consideration of the HAWC sub-threshold event, we chose to conduct deep C band (4--8 GHz) observations of the gamma-ray 68\% containment region. To maximize the continuum imaging sensitivity, we used the Wide-band Interferometric Digital Architecture (WIDAR) correlator with 32 spectral windows, 64 2-MHz-wide channels each to get 4 GHz of total bandwidth centered on 6.0 GHz. Our observations were carried out across 3 epochs (E1, E2, E3), with each epoch being divided into 3 observations (for $\sqrt{3}$ improvement in sensitivity), with the Karl G. Jansky Very Large Array (VLA) in C and D array configurations, under the JAGWAR large program (VLA/18B-320; PI: Frail). The epoch time frame ranged from 5 days post merger to 4 months post merger (subject to scheduling constraints and sampling the putative afterglow light curve in logarithmic time-steps). Each observation lasted for 3.6 hr and consisted of 37 pointings, with the goal of creating a standard pointed image mosaic of 0.38 deg$^2$, and achieve fairly uniform sensitivity across the 0.28 deg$^2$ HAWC region. The mosaic is centered on the coordinates reported for the HAWC sub-threshold event \citep{2019GCN.26472....1H}. 3C48 was used as the flux density and bandpass calibrator. The phase calibrator J2130+0502 was observed for a duration of 1 minute every 20--30 minutes. The observational parameters for all three epochs are listed in Table~\ref{tab:observations} for which we list for each epoch, the array configuration, the rms noise (RMS) and the synthesized beam (BMAJ,BMIN, BPA) of each final image. Figure~\ref{fig:mosaic} shows the image mosaic along with the locations of the VLA pointings and the HAWC 68\% confidence region. We conducted follow-up observations in C-band of any significant variable sources identified with the mosaicked region. 
These pointed observations were carried out on 2020 August 14 with the VLA in B array configuration (see Table~\ref{tab:observations}). Integration times varied from $\approx$4-10 minutes, depending on the flux density of the sources. The total duration of the observation was 47 minutes. 3C48 was used as the flux density and bandpass calibrator, while the phase calibrator was J2130+0502. \subsection{RFI flagging, Calibration and Imaging}\label{sec:obs_proc.cal} Directly after the observations for each epoch were completed, we downloaded the raw data from the VLA archive onto the \textit{Lustre} file system at the NRAO AOC in Socorro. The raw data was then calibrated using the NRAO CASA pipeline (in CASA 5.6.1). Post-calibration, we carried out manual flagging to remove spectral windows affected by residual RFI. For the imaging, we used CASA {\tt clean} with Briggs weighting (robust factor of 0.5), two Taylor terms, and a threshold of 0.1 mJy. The pixel size was chosen to sample the synthesized beam across 6 pixels for the first two epochs and 4 pixels for the third epoch. The central frequency for each image is 6.0 GHz. Linear mosaicking of the 37 single-pointing images for each epoch was then carried out using {\tt FLATN} in AIPS. The mosaics for the first two epochs are 1500x1500 pix$^{2}$ and the third epoch is 3000x3000 pix$^{2}$. The primary beam parameters were acquired from EVLA Memo 195\footnote{Perley et al. 2016, \url{https://library.nrao.edu/public/memos/evla/EVLAM_195.pdf}} during the linear mosaicking step. Figure~\ref{fig:rms} shows the cumulative RMS noise plots for the three image mosaics. We followed an identical procedure for the variable source observations. After running the VLA automated calibration pipeline, we split that data set into individual measurement sets. Then we checked the phase calibrator and the flux calibrators for any RFI from antennae or spectral windows, before proceeding with flagging and clipping. For our imaging, we used almost identical parameters, with the change being image size and cell size to accommodate the array configuration change. \begin{figure} \centering \includegraphics[width=3.5in]{rms.pdf} \caption{Cumulative RMS noise across the 0.38 deg$^2$ survey region for the three epochs of observations (E1, E2 and E3). The source detection threshold (4$\sigma$) is shown on the upper x-axis. The 50\% completeness over the 0.28 deg$^2$ HAWC region corresponds to an RMS noise of about 16 $\mu$Jy for epochs E1 and E2 and about 10 $\mu$Jy for epoch E3.} \label{fig:rms} \end{figure} \subsection{Source cataloging, Point Source Selection and Flux density correction}\label{sec:obs_proc.src} We used the Search and Destroy (SAD) task within AIPS to generate 4$\sigma$ catalogs for each of our three image mosaics. These catalogs contain around 360 sources for epochs E1 and E2 and around 780 sources\footnote{{The increase in the number of sources detected is due to the reduced image noise in E3 and sources (AGN) from E1/E2 being resolved into doubles in E3 (due to the factor of 2 increase in angular resolution between D and C array configurations).}} for epoch E3. For a source beyond 200 Mpc, we do not expect contaminating radio emission from any putative host galaxy \citep{hoto2016,2018ApJ...857..143M} of the candidate merger, and hence we shortlisted only the point sources for the transient and variability search. 
Our criteria for selecting point-like sources and rejecting false positives (resulting from image artifacts around bright sources) were the same as those used in previous works \citep{mooley2016,2019MNRAS.490.4898H}: \begin{itemize} \item \texttt{BMAJ}/1.7$<$MAJ$<$1.7$\times$\texttt{BMAJ} \item \texttt{BMIN}/1.7$<$MIN$<$1.7$\times$\texttt{BMIN} \item \texttt{BMAJ}/\texttt{BMIN}$<$2.5 \item $0.67<{\rm Flux/Peak}<1.5$ \end{itemize} where \texttt{BMAJ} and \texttt{BMIN} are the major and minor axes of the synthesized beam, and MAJ, MIN, Flux, and Peak are the major and minor axes of the fitted Gaussian and the integrated and peak flux densities, respectively, as reported by SAD. {The first three criteria are motivated by thorough inspection of archival VLA images and source catalogs, and help in differentiating side lobes (false positives) and spike-like imaging artifacts seen occasionally around bright sources in VLA images. They also allow the rejection of extended sources. The fourth is a simplified criterion for differentiating between resolved and point-like sources (see Figure 9 of \cite{smolcic2017}).} We then generated a single point-source catalog (PSC) by merging the lists of point-like sources for all three epochs. The PSC had tens of sources that were present in epoch E3 at the 4--5$\sigma$ level and absent in epochs E1 and E2. After inspecting both the catalog and the image mosaics, we rejected these sources as false positives due to noise/imaging artifacts, and compiled the final PSC containing 165 sources. For all sources in the PSC we plotted a histogram of the ratio of peak flux densities between E1--E2 and E1--E3, and found that multiplicative flux factors of 0.94 and 1.1 were necessary for epochs E2 and E3, respectively, in order to center the histograms on unity. We therefore corrected all peak flux densities in E2 and E3 accordingly in the PSC. \begin{figure} \centering \includegraphics[width=3.5in]{m_vs.pdf} \caption{Variability statistic ($V_s$) versus the modulation index ($m$) for the 165 sources in our point-source catalog (PSC). Grey points indicate sources (from the E1--E2 and E1--E3 comparisons) that are not significant variables. The red points are the selected variables between E1--E3 (5 sources, see Table~\ref{tab:vartransum}). No significant variables were found in the E1--E2 comparison. The black dashed lines indicate the variability selection criteria in $|V_s|$ and $|m|$ (see \S\ref{sec:var_tran}). The flux densities of the sources define the marker size (shown in the legend). The top horizontal scale is the fractional variability $f_{\rm var}$, defined as the ratio of the flux densities between the two epochs being compared. See \S\ref{sec:var_tran} for details.} \label{fig:var} \end{figure} \begin{table*} \scriptsize \centering \caption{Summary of variable sources.} \label{tab:vartransum} \begin{tabular}{lllllllllllll} \hline\hline Name & RA & DEC & S1 & S2 & S3 & S$_F$ & m & Vs & $\alpha_{4.9}^{7.0}$(E3) & Host Ident. & r & phot-z\\ (JAGWAR J...)
& (deg) & (deg) & ($\mu$Jy) & ($\mu$Jy) & ($\mu$Jy) & ($\mu$Jy) & & & & & (mag) \\ \hline \multicolumn{13}{c}{E1--E2 comparison; Timescale $<$1 week}\\ \hline \multicolumn{13}{c}{None}\\ \hline \multicolumn{13}{c}{E1--E3 comparison; Timescale $<$4 months}\\ \hline J213250+051359& 323.20873 & 5.23317 & 263$\pm$21 & 255$\pm$23 & 139$\pm$15 & 134$\pm$15 & 0.53 & 4.3& $-$1.0$\pm$0.4 & LIRG & 17.5 & 0.09$\pm$0.02\\ J213317+052104 & 323.32286 & 5.35130 & 742$\pm$15 & 881$\pm$16 & 552$\pm$12 & 558$\pm$14 & 0.20 & 7.0 & $-$0.5$\pm$0.1 & Spiral/Ellip. & 22.4 & 0.97$\pm$0.11\\ J213341+051946 & 323.42447 & 5.32948 & 44$\pm$13 & 49$\pm$12 & 87$\pm$11 & 58$\pm$9 & $-$0.74 & $-$3.0 & $-$1.2$\pm$0.8 & LIRG/Spiral & $>$22.7 & \ldots\\ J213407+051800 & 323.53327 & 5.30004 & 808$\pm$15 & 956$\pm$15 & 599$\pm$11 & 830$\pm$15 & 0.20 & 8.0 & $-$0.4$\pm$0.1 & \ldots & $>$22.7 & \ldots\\ J213453+052633 & 323.72386 & 5.44276 & 124$\pm$13 & 123$\pm$13 & 197$\pm$11 &244$\pm$15 & $-$0.54 & $-$5.4 & +0.3$\pm$0.3& LIRG & 22.1 & 0.61$\pm$0.09 \\ \hline \multicolumn{13}{p{7in}}{Notes: By investigating two-epoch variability (with Epoch 1 as our reference epoch), we found no variable sources over E1--E2 and 5 variable sources over E1--E3. The flux density of each JAGWAR source is reported in the columns S1, S2, S3, and S$_F$, corresponding to each epoch and the follow-up observation. The modulation index ($m=\Delta{\rm S}/\bar{\rm S}$) and the variability statistic ($V_s=\Delta{\rm S}/\sigma$) for each source are calculated and provided in columns m and $V_s$. The host identities were determined using the WISE colors \citep{wright2010} calculated from the AllWISE catalog \citep{2013yCat.2328....0C} after checking whether the source matched any known AGN. The r-band magnitude and the photometric redshift were acquired from SDSS DR14. J213341+051946 was initially not detected in Epoch 1, so no in-band spectral index was calculated for that epoch.}\\ \end{tabular} \end{table*} \section{Transient and Variability Search}\label{sec:var_tran} We used the PSC from \S\ref{sec:obs_proc.src} to carry out a search for transient sources which appeared or disappeared in one or more of the three epochs. No transients were found to a 4$\sigma$ limit of $\sim$75 $\mu$Jy (mean completeness threshold for the merged catalog over 3 epochs and 100 deg$^2$). Following \cite{mooley2016}, we also used the PSC to investigate two-epoch variability using the variability statistic, $V_s=\Delta{\rm S}/\sigma$, and the modulation index, $m=\Delta{\rm S}/\bar{\rm S}$, where ${\rm S}$ is the flux density, $\bar{\rm S}$ is the average flux density over the two epochs being compared, $\Delta{\rm S}$ is the flux density difference, and $\sigma$ is the RMS noise. We used epoch E1 as the reference and performed the following two-epoch comparisons: E1--E2 and E1--E3. Significant variables were identified as those sources having $|V_s|$ larger than three (corresponding to a Gaussian equivalent of approximately 3$\sigma$, i.e.\ a chance probability of finding 3 variables out of 1000 sources; this ensures that less than one false positive will be detected as a variable source in our search, assuming Gaussian statistics) and the absolute value of the modulation index, $|m|$, larger than 0.18 (i.e.\ a fractional variability $f_{\rm var}>1.2$; this was chosen bearing in mind that flux correction factors of up to 10\% were applied to the flux densities within the PSC and that our flux scale is accurate to only $\sim$5\%).
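In practice this two-epoch selection reduces to a few lines of arithmetic. A minimal Python sketch is given below; the flux densities and uncertainties are arbitrary illustrative values rather than entries from our catalog, and the combined noise is taken as the quadrature sum of the two epochs' uncertainties.
\begin{verbatim}
import numpy as np

# Peak flux densities (uJy) of the same sources in the reference epoch (E1)
# and in a comparison epoch (E2 or E3), after the multiplicative flux
# corrections.  Illustrative values only.
s1 = np.array([250.0, 700.0, 60.0, 900.0, 120.0])
s2 = np.array([240.0, 520.0, 95.0, 870.0, 195.0])
e1 = np.array([20.0, 15.0, 13.0, 15.0, 13.0])   # per-source uncertainties (uJy)
e2 = np.array([15.0, 12.0, 11.0, 11.0, 11.0])

delta_s = s1 - s2
s_bar = 0.5 * (s1 + s2)
sigma = np.hypot(e1, e2)        # combined noise of the flux-density difference

v_s = delta_s / sigma           # variability statistic, V_s = dS / sigma
m = delta_s / s_bar             # modulation index,      m  = dS / <S>

# Selection used in the text: |V_s| > 3 and |m| > 0.18 (i.e. f_var > 1.2).
is_variable = (np.abs(v_s) > 3.0) & (np.abs(m) > 0.18)
for i, (vs_i, m_i, flag) in enumerate(zip(v_s, m, is_variable)):
    print(f"source {i}: V_s = {vs_i:+5.1f}, m = {m_i:+5.2f}, variable = {flag}")
\end{verbatim}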
The plot of the variability statistic versus the modulation index is shown in Figure~\ref{fig:var}. We found no significant variable sources in the E1--E2 comparison (probing a timescale of $<$1 week) and 5 significant variables in the E1--E3 comparison (probing a timescale of $<$4 months). This indicates that $<$2\% of the persistent sources are variable on $<$1 week timescales and $3.0\pm1.3$\% of the persistent sources are variable on timescales of weeks to months. This level of variability is typical for the radio sky \citep[e.g.][]{2003ApJ...590..192C,bell2015,mooley2016,2016MNRAS.461.3314H}, and at these frequencies it is attributed to normal activity from active galactic nuclei \citep[AGN;][]{2019MNRAS.490.4024R}. \begin{figure} \centering \includegraphics[width=4in,angle=0]{lightcurves.pdf} \caption{Radio light curves at 6 GHz for the five variable sources identified in Table \ref{tab:vartransum}. The x-axis gives the time since merger. Flux densities for each source are shown with different shapes (circle, triangle, inverted triangle, square, and star).} \label{fig:LC} \end{figure} Next, we examined the properties of these five variable sources for any indication that they may be long-lived transients related to S191216ap and not just background AGN. Their properties are summarized in Table~\ref{tab:vartransum}. Radio data include the source positions (RA/DEC), the flux densities for all three epochs (S1, S2, S3), the modulation index (m), the variability statistic ($V_s$), and the in-band spectral indices ($\alpha$) for the first (E1) and third (E3) epochs. Also included are the results of WISE counterpart matching \citep{wright2010}, in which we attempted to classify each radio source using WISE colors from the AllWISE catalog \citep{2013yCat.2328....0C} and \citet{2014MNRAS.442.3361N}. Finally, we used the Sloan Digital Sky Survey Data Release 14 (SDSS DR14), from which we record the r-band magnitude and the photometric redshift, where available, for each WISE source. We found WISE counterparts for four JAGWAR sources: J213250+051359, J213317+052104, J213341+051946, and J213453+052633. The remaining JAGWAR source, J213407+051800, is located 11.6 arcsec from AllWISE J213407.27+051804.8, which has $r=20.8$ mag and a photometric redshift of 0.44. None of these five sources are in the WISE AGN catalog of \citet{2018ApJS..234...23A}. From their WISE colors, following \citet{wright2010}, we deduce that the putative hosts are variously luminous infrared galaxies (LIRGs), spirals, and/or elliptical galaxies (see Table~\ref{tab:vartransum}). Two sources, J213317+052104 and J213453+052633, have photometric redshift values that place the host galaxy far beyond the luminosity distance of S191216ap. The light curves are shown in Figure \ref{fig:LC}. They include the VLA follow-up observations from day 242 (see Table~\ref{tab:observations}). We detected all five sources, at integrated flux densities of 134$\pm$15\,$\mu$Jy, 558$\pm$14\,$\mu$Jy, 58$\pm$9\,$\mu$Jy, 830$\pm$15\,$\mu$Jy, and 244$\pm$10\,$\mu$Jy, respectively (in the order listed in Table~\ref{tab:vartransum}). With only four epochs, it is not easy to make any definitive statements, but the light curves are consistent with fluctuations from persistent radio sources, and none of the sources shows the sharp rise and decay pattern of an afterglow. J213453+052633 exhibits a rise, nearly doubling over the 242 days. However, the photometric redshift of its host galaxy rules out an association with S191216ap.
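The host associations described above are, at their core, a positional cross-match of the radio positions against external catalogs. A minimal sketch of such a match with \texttt{astropy} is shown below; the catalog coordinates and the 5\,arcsec association radius are illustrative placeholders rather than the exact values used for the identifications quoted in the text.
\begin{verbatim}
import astropy.units as u
from astropy.coordinates import SkyCoord

# Radio source positions (deg), e.g. from the point-source catalog.
radio = SkyCoord(ra=[323.20873, 323.32286, 323.53327] * u.deg,
                 dec=[5.23317, 5.35130, 5.30004] * u.deg)

# Candidate counterpart positions from an external catalog (e.g. AllWISE);
# the values here are made up for illustration.
cat = SkyCoord(ra=[323.2088, 323.3230, 323.5305] * u.deg,
               dec=[5.2331, 5.3514, 5.3013] * u.deg)

# Nearest-neighbour match of each radio source against the catalog.
idx, sep2d, _ = radio.match_to_catalog_sky(cat)

max_sep = 5.0 * u.arcsec        # association radius (placeholder)
for i, (j, sep) in enumerate(zip(idx, sep2d.to(u.arcsec))):
    status = "matched" if sep < max_sep else "no counterpart"
    print(f"radio source {i}: nearest catalog entry {j}, "
          f"separation {sep:.2f} -> {status}")
\end{verbatim}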
The brightest radio source, J213407+051800, is also detected in the first-look images of the VLA Sky Survey (VLASS) with a 3 GHz flux density of 569$\pm$122\,$\mu$Jy, from data taken 20 October 2017 \citep{2020PASP..132c5001L}. None of the other radio sources were detected in the VLASS, with 3$\sigma$ limits $\leq$375\,$\mu$Jy. Finally, we note that in the higher-resolution follow-up observations, JAGWAR J213250+051359 clearly shows a core-jet morphology. Based on the host galaxy classifications and redshifts, together with the amplitude, timescale, and fractional level of the variations, the persistence of the radio emission, and the radio source morphology, it seems likely that these variable radio sources are background low-luminosity AGN, unrelated to S191216ap. \section{Discussion and Future Prospects}\label{sec:discussion} Stellar-mass binary black hole mergers were not widely expected to generate electromagnetic (EM) counterparts, so early predictions focused on EM signatures from binary NSs and BH-NS binaries \citep[e.g.][]{metzger2012,2013ApJ...767..124N}. Nevertheless, observations were still undertaken to search for incoherent radio emission in the first two science runs of LIGO \citep{2016ApJ...829L..28P, 2018ApJ...857..143M,2019ApJ...884...16A}. Another promising avenue has been the search for prompt {\it coherent} radio emission on timescales of minutes to hours post-merger, using wide-field low frequency arrays \citep{2015ApJ...812..168Y, 2016PASA...33...50K, 2019ApJ...877L..39C, 2019MNRAS.489L..75J,2019MNRAS.489.3316R}. Renewed impetus for afterglow searches came following the nominal detection (2.9$\sigma$) of a gamma-ray flare by {\it Fermi}-GBM toward GW\,150914 \citep{2016ApJ...826L...6C,2018ApJ...853L...9C}. This stimulated a number of theoretical investigations for stellar-mass BBHs \citep[e.g.,][]{Loeb2016,Perna2016,Woosley2016,Zhang2016}, which in several cases predict specific EM signatures that are testable with follow-up radio observations \citep{2016PTEP.2016e1E01Y,2016ApJ...825L..24M,Perna2019}. As noted earlier (\S\ref{sec:intro}), S191216ap has been classified as a BBH merger event seen by LIGO-Virgo, which had a coincident neutrino detection from IceCube and a possible detection of an EM counterpart at TeV energies from HAWC. We have carried out a search for a radio transient within the HAWC error circle over three epochs covering timescales between 4 days and 4 months, at a central frequency of 6 GHz (\S\ref{sec:obs_proc.cal}). While no radio transients were discovered, our flux density upper limits represent a considerable improvement over past BBH radio afterglow searches. Our 4$\sigma$ limit\footnote{{This is the mean sensitivity across the HAWC region.}} of 75 $\mu$Jy for a radio afterglow from S191216ap is an improvement over the 4$\sigma$ radio limits of 600 $\mu$Jy for GW151226 \citep{2018ApJ...857..143M} and 180 $\mu$Jy for GW170608 \citep{2019ApJ...884...16A}, two BBH events that occurred during the O1 and O2 LIGO observing runs, respectively. Factoring in the luminosity distance of S191216ap, this corresponds to a spectral luminosity limit of 1.2$\times{10}^{28}$ erg s$^{-1}$ Hz$^{-1}$, approximately 5--10 times deeper than these previous BBH radio afterglow searches. Given the absence of a radio transient for this GW/neutrino/TeV candidate event, we next look at how our radio limits can be used to constrain any putative afterglow from S191216ap.
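The luminosity figure quoted above follows from simple bookkeeping; a back-of-the-envelope sketch is shown below, where the luminosity distance is set to the $\sim$320\,Mpc value adopted for the model comparison in the next section, so the result should be read as an order-of-magnitude check rather than an exact reproduction of the quoted number.
\begin{verbatim}
import numpy as np

MPC_IN_CM = 3.086e24        # centimeters per megaparsec
JY_IN_CGS = 1.0e-23         # erg s^-1 cm^-2 Hz^-1 per jansky

s_nu_limit = 75e-6 * JY_IN_CGS   # 4-sigma flux-density limit (75 uJy) in cgs units
d_l = 320.0 * MPC_IN_CM          # assumed luminosity distance in cm

# Isotropic spectral luminosity limit, L_nu = 4 * pi * d_L^2 * S_nu.
l_nu = 4.0 * np.pi * d_l ** 2 * s_nu_limit
print(f"L_nu < {l_nu:.1e} erg/s/Hz")   # ~1e28 erg/s/Hz, of the order quoted above
\end{verbatim}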
We adopt the model of \citet{Perna2019}, in which a jet formed during the BBH merger propagates freely, without any baryonic contamination from tidally disrupted material, until it interacts with the surrounding interstellar medium and generates afterglow emission. In Figure \ref{fig:BBH} we compare the sensitivity reached in our VLA follow-up (horizontal lines; see also Table \ref{tab:observations}) with models for potential BBH radio afterglows as seen from an on-axis observer. The model predictions at timescales comparable to those of our radio follow-up are shown for different values of the total kinetic energy in the jet ($E_{\rm jet}$) and of the jet half-opening angle $\theta_j$ \citep[symbols;][]{Perna2019}. We have rescaled the model values provided at 1.4\,GHz by \citet{Perna2019} to 6\,GHz assuming an optically thin spectral index of 0.65 (which is consistent with the model itself between radio and optical frequencies). These model predictions also assume $\Gamma=100$, $\epsilon_e=0.03$, $\epsilon_B=0.01$, and $n_{\rm ISM}=0.01$\,cm$^{-3}$ \citep{Perna2019}. We note that the observed fluxes scale as $n^{5/14}$ for a fully radiative blast wave and as $n^{1/2}$ for an adiabatic evolution. Thus, generally speaking, higher densities imply larger fluxes \citep{Perna2019}. Given the tentative HAWC sub-threshold event at 1 TeV, we focus our comparison on the $\Gamma=100$ (highly relativistic jet) models presented in \citet{Perna2019} rather than on the $\Gamma=10$ (mildly relativistic jet) case. As evident from Figure \ref{fig:BBH}, our radio follow-up campaign was sensitive to only the most optimistic EM counterpart models in terms of the energy coupled to relativistic ejecta. For reference, in Figure \ref{fig:BBH} we also mark with a vertical dashed line an order-of-magnitude energy estimate derived from the flux density measurement of the HAWC sub-threshold event at 1\,TeV as \citep{2019GCN.26472....1H}: \begin{eqnarray} \nonumber E_{\rm HAWC}\sim (1\,{\rm TeV})^2 \times 7.3\times10^{-9}\,{\rm TeV^{-1}cm^{-2}\,s^{-1}}\\\times10\,{\rm s}\times \frac{4\pi d^2_L(1-\cos(20\,{\rm deg}))}{\xi}, \end{eqnarray} where we neglect redshift corrections, set $d_L=320$\,Mpc, and assume a high-energy signal duration of 10\,s, a jet opening angle of 20\,deg, and an efficiency of $\xi=0.9$\% for the conversion of ejecta kinetic energy into prompt emission energy at 1\,TeV. This value of the efficiency is chosen so that the flux density measured by HAWC is consistent with the hypothesis of a BBH jet with a kinetic energy of $10^{49}$\,erg (and opening angle of 20\,deg), which is the minimum energy for which our radio upper limits constrain the model predictions. \begin{figure} \centering \includegraphics[height=9cm,angle=-90]{BBH} \caption{Sensitivity reached in our VLA follow-up (horizontal lines; see also Table \ref{tab:observations}) compared with models for potential BBH radio afterglows as seen from an on-axis observer \citep[symbols;][]{Perna2019}. See Section \ref{sec:discussion} for details. } \label{fig:BBH} \end{figure} Our work has a number of limitations. In particular, our VLA imaging campaign focused on the area defined by the HAWC error circle \citep{2019GCN.26472....1H}. If the gamma-ray emission (or neutrino detection) were not deemed significant, then the multi-messenger aspect of this event would be in question, and we would have surveyed less than 0.1\% of the full error region of S191216ap.
Furthermore, even though the observations presented here are an improvement in sensitivity over searches carried out during earlier LIGO-Virgo science runs, the current radio limits are still not deep enough to constrain {\it all} predicted EM signatures from BBHs. For example, radio methods are also not particularly powerful for finding EM signatures from BBH mergers in the accretion disk environments of supermassive BHs \citep{Stone2017,Bartos2017,2019ApJ...884L..50M,Tagawa2020}, a scenario invoked to argue that a peculiar optical flare in an AGN was the counterpart of the GW event S190521g \citep{2020PhRvL.124y1102G}. Radio is well-suited for identifying transients, but radio variability is most commonly ascribed to regular AGN activity. Nonetheless, the phase space remains large for both prompt and longer-term searches for coherent and incoherent radio emission, respectively. Such searches should continue during the fourth GW observing run, which is expected to include the Kamioka Gravitational Wave Detector (KAGRA), with improvements to the detector network in both sensitivity and localization \citep{2018LRR....21....3A}. The detection of a radio afterglow from a BBH would revolutionize the study of such objects in much the same way that the multi-messenger studies of GW170817 advanced our understanding of binary neutron stars. \acknowledgments{\it The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. We would like to thank the NRAO staff, especially Amy Mioduszewski, Heidi Medlin, Drew Medlin, Tony Perreault and Abi Smoake for help with observation scheduling and computing. K.P.M. is currently a Jansky Fellow of the National Radio Astronomy Observatory. K.P.M. and G.H. acknowledge support from the National Science Foundation Grant AST-1911199. A.C., A.B., and D.B. acknowledge support from the National Science Foundation via the CAREER grant \#1455090. D.K. is supported by NSF grant AST-1816492. }
\section{Introduction} \label{sec:introduction} The field of underwater robotics has recently been experiencing significant development, primarily driven by active research in autonomous underwater vehicles (AUVs). AUVs have seen applications in environmental monitoring (\textit{e.g.},~\cite{moline2005autonomous,fong2006evaluation,forrest2007investigation}), bathymetry surveys~\cite{huizinga2016bathymetric}, and security~\cite{tripp2006autonomous}, among others. For AUVs to navigate and carry out such missions successfully, the ability to \textit{localize} accurately is essential. However, underwater localization is a challenging and open problem due to the unique circumstances AUVs face: GPS and other forms of RF-based communication are either completely unavailable or limited to extremely short ranges, and landmark-based localization using exteroceptive sensors can often be hampered by environmental factors. Our work in this paper presents a novel, low-cost approach for localizing AUVs in water bodies for which bathymetry information is available. \begin{figure}[t!] \setlength\belowcaptionskip{0pt} \centering \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[width=\linewidth]{images/aqua_whitemargin.png} \caption{} \label{fig:bathymetrydemo_minnebot} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\linewidth]{images/lakefigure.png} \caption{} \label{fig:bathymetrydemo_sketch} \end{subfigure} \vspace{-2mm} \setlength{\belowcaptionskip}{-20pt} \caption{(\ref{fig:bathymetrydemo_minnebot}) Visual representation of an AUV's location in a body of water. The top of the figure is assumed to be the surface. The surface and bottom of the body are indicated by the dashed yellow lines. (\ref{fig:bathymetrydemo_sketch}) Representation of the AUV's location (in blue) against a lake profile.} \label{fig:lake} \end{figure} Robot localization problems in all environments have been studied extensively. The problem we address in this paper is underwater-specific and a subset of Terrain-Based Navigation (TBN)~\cite{carreno2010survey}, which is widely used across domains and refers to the general problem of localizing against prior maps. In a broad sense, there are three major techniques to address the problem for underwater robots: using inertial data combined with dead reckoning~\cite{miller2010autonomous}, acoustic transponders~\cite{batista2009sensor}, and approaches based on landmarks (also known as \textit{geophysical} features)~(\textit{e.g.},~\cite{eustice2005large,paull2014auv}). The first of these techniques uses an inertial measurement unit (IMU) and velocity measurements (\textit{e.g.}, from a Doppler velocity log (DVL)) to estimate the position of the robot by correcting the IMU's drift with the velocity information. This approach, while widely adopted, often suffers from drift over time; moreover, it requires expensive, high-accuracy IMUs~(\textit{e.g.},~\cite{panish2011achieving, karimi2013comparison, melo2017survey}). The techniques using acoustic transponders include long baseline (LBL)~\cite{scherbatyuk1995auv}, ultra-short baseline~(USBL)~\cite{morgado2010experimental}, and short baseline (SBL) systems. However, most of these techniques require either a surface ship carrying a \textit{transponder} or pre-installed beacons on the floor of the water body in question. In many applications, it is impractical to install these devices for localization purposes due to the added cost and associated overhead.
Lastly, landmark-based methods use visual and acoustic sensors to detect known features in the marine environment, usually on the floor, and AUVs can localize themselves relative to such landmarks. However, optical distortions such as scattering, absorption, and attenuation (\textit{e.g.}, arising from turbidity in the water) can be extreme, resulting in features only being visible up close. This makes it challenging to use vision-based methods for broad-area localization. Landmark-based methods with an acoustic sensor provide a practical means to tackle underwater localization problems; however, they require prior information on the topography of the water body, which is often available as bathymetry data. Bathymetry data can assist in AUV localization because they provide height information for each $(x,y)$ \textit{grid} location on the surface of the water body. For the rest of this paper, the term \textit{height} will refer to the total depth of the water body from the floor to the surface (shown as $L$ in Fig.~\ref{fig:bathymetrydemo_sketch}). Although the depth of the robot can vary, the height of the water column is constant at that specific grid location. Therefore, an AUV would need both a depth sensor and a bottom sounder or altimeter to utilize this information. The general goal of the AUV localization process is to infer the robot's position $(x,y,z)$ and orientation $(\phi, \theta, \psi)$ in 6-DOF with respect to the body of water. Localization using bathymetry data generally uses a form of the Bayes filter algorithm to estimate these \textit{state variables} \cite{teck2014collaborative}. The Bayes filter~\cite{thrun2005probabilistic} provides a measure of \textit{belief} representing knowledge about the state of an AUV under a Markovian assumption. However, while the Bayes filter provides an optimal solution, it is often intractable to compute. In this paper, we developed and implemented AUV localization algorithms for water bodies with bathymetry data, taking depth data from a pressure sensor and altitude data from a single-beam sonar as inputs, using four Bayes filter-based methods: the EKF, UKF, PF, and MPF. The EKF and UKF are parametric implementations of the Bayes filter algorithm with Gaussian assumptions, and the PF is a nonparametric implementation. The MPF, otherwise known as the Rao-Blackwellized Particle Filter (RBPF), is a ``hybrid'' approach that combines the Kalman Filter (KF) and the PF~\cite{schon2005marginalized}. The main contributions of this paper are the following: \begin{itemize} \item Propose localization algorithms to estimate the position of an AUV along all three axes $(x,y,z)$, \item Propose low-cost underwater AUV localization algorithms which work with bathymetry data, and \item Compare and evaluate the performance of four localization algorithms with real-world bathymetric data and different motion models. \end{itemize} \section{Related Work} \label{sec:related_work} Underwater localization using landmark-based methods with acoustic sensors has been widely studied. For these methods, ranging-type sonars, including the single-beam, profiling, and multi-beam varieties, have been used~\cite{paull2014auv}. Multi-beam and profiling sonars collect multiple measurements, and they can give more accurate results than single-beam sonars. Table \ref{tb:related} summarizes selected existing localization algorithms.
$(\phi, \theta, \psi)$ represents the Euler angles, $(u, v, w)$ is the AUV velocity in the body-fixed frame, and $(b_{x}, b_{y})$ is the velocity bias. Although DVL, multi-beam sonar, and profiling sonar-based methods yield better results (\textit{e.g.}, \cite{ura2006terrain,fairfield2008active,nakatani2009terrain,teixeira2017robust}), such sensors can be prohibitively expensive. Single-beam sonars use a narrow acoustic projection to measure altitude and are thus vulnerable to noise. However, they have been widely adopted to solve localization problems since they are among the most affordable acoustic sensors~\cite{melo2017survey}. Williams and Mahon \cite{williams2003terrain} proposed a localization algorithm based on the PF, but the computational burden of the algorithm is heavy. Meduna et al. \cite{meduna2008low} presented a point mass filter (PMF)-based algorithm, but it is limited to the $(x,y)$ positions of an AUV. Kim and Kim \cite{kim2014nonlinear} used a single-beam sonar with the MPF and estimated the 6-DOF position and orientation of an AUV along with the velocity. However, the algorithm requires highly accurate IMU data which requires costly, high-end IMUs. \begin{table}[t!] \setlength\belowcaptionskip{-0pt} \centering \caption{Selected existing localization algorithms} \begin{tabular}{p{4.3cm}|p{3cm}|p{4cm}|p{1.5cm}} \hline & Sensors & Parameters of state vector $s$& Algorithms \\ \hline \hline Teixeira et al. \cite{teixeira2017robust} & DVL, \newline Single-beam sonar & $b = (b_{x}, b_{y})$ \newline $s = (x,y,b)$ & PF \\ \hline Fairfield and Wettergreen \cite{fairfield2008active} & Multi-beam sonar& $q=(\phi, \theta, \psi, x, y, z)$ \newline $s=(q,\dot{q}, \ddot{q})$& PF \\ \hline Ura et al. \cite{ura2006terrain} & Profiling sonar& $s=(x,y)$ & PF \\ \hline Nakatani et al. \cite{nakatani2009terrain} & Profiling sonar & $s=(x,y)$ & PF \\ \hline Williams and Mahon \cite{williams2003terrain} & Single-beam sonar & $q = (x, y, z)$ \newline $s=(q,\dot{q})$ & PF \\ \hline Meduna et al. \cite{meduna2008low} & Single-beam sonar & $s=(x, y)$ & PMF\\ \hline Kim and Kim \cite{kim2014nonlinear} & Single-beam sonar & $p=(\phi, \theta, \psi)$ \newline $q=(u,v,w)$ \newline $s=(x, y, z, p, q)$ & MPF \\ \hline \end{tabular} \label{tb:related} \vspace{-5mm} \end{table} Several Bayes filter-based methods have been used to solve the localization problem \cite{carreno2010survey} with sonar data. Among Bayes filters, the EKF and UKF have seen the most use in this domain (\textit{e.g.}, \cite{he2009underwater,yoon2013ukf}). Karimi et al. \cite{karimi2013comparison} showed that the EKF can outperform the UKF in their particular case. However, the UKF captures nonlinearity up to the second-order term in the state transition process~\cite{paull2014auv}, which in theory could outperform the EKF in similar applications. We thus develop both EKF and UKF-based algorithms and compare their performance in AUV localization. Although the EKF and UKF can handle unimodal Gaussian distributions, they often fail to converge when the underlying distribution is multi-modal. The inherent nonlinearity of the underwater terrain or nonlinear AUV motions underwater make it challenging for these methods to work reliably. To address such issues, the PF has been widely used (\textit{e.g.}, \cite{thrun2002particle, karlsson2003particle, williams2003terrain, rekleitis2004particle, salmond2005introduction, ura2006terrain, maurelli2009particle, nakatani2009terrain, gustafsson2010particle, schon2011particle, teixeira2017robust}). 
However, the PF is computationally expensive and thus can be prohibitive to run on board AUVs for real-time localization. The MPF, on the other hand, has a lower computational cost and provides similar benefits to the PF, handling nonlinearity to some extent~\cite{schon2005marginalized}, thus making it potentially useful for underwater localization~(\textit{e.g.}, \cite{karlsson2006bayesian, kim2014nonlinear}). However, localization with bathymetry data considering 3-DOF state vectors and using four Bayes filter-based algorithms (EKF, UKF, PF, and MPF) is yet to be extensively studied. \section{Problem Formulation} \label{sec:problem_formulation} \subsection{Motion Model} To formulate the localization problem, a general discrete-time state-space model is written as Eq. \ref{modelall}, where $x_t$ is the state vector, $u_t$ is a control input, and $y_t$ is a measurement. As mentioned in Section \ref{sec:introduction}, only the 3D position of the AUV is included in the state vector. $f$ and $h$ can be either linear or nonlinear functions. $q_t$ and $r_t$ represent the motion and measurement noise, respectively. The model in Eq. \ref{modelall} is used for the EKF, UKF, and PF-based localization algorithms. The model for the MPF-based localization algorithm is introduced in Section \ref{subsec:mpfmodel}. \begin{equation}\label{modelall} \begin{cases} x_{t} = f(x_{t-1}, u_{t}) + q_t\\ y_{t} = h(x_{t})+r_t \end{cases} \end{equation} The state vector and control inputs are defined as follows: \begin{equation}\label{statev} x_t=\begin{bmatrix} p_{x,t} &p_{y,t}& p_{z,t}\\ \end{bmatrix}^{T} \end{equation} \begin{equation}\label{controlinput} u_t=\begin{bmatrix} v_{x,t} & v_{y,t} & v_{z,t}\\ \end{bmatrix}^{T} \end{equation} The AUV motion models are defined in Eqs. \ref{move_model_lin} and \ref{move_model_nonlin}. \subsubsection{Linear motion model} All state variables are updated linearly. \begin{equation}\label{move_model_lin} f(x_t, u_t)=x_t + u_t\,dt \end{equation} \subsubsection{Linear/Nonlinear mixed motion model} We propose a linear/nonlinear mixed motion model to introduce nonlinearity into the motion without involving the Euler angles. Among the three state variables in the state vector, $p_{x,t}$ and $p_{y,t}$ are updated based on the height at the position $(p_{x,t},p_{y,t})$. That is, the changes in $p_{x,t}$ and $p_{y,t}$ are proportional to the height of the water body at the position $(p_{x,t},p_{y,t})$; the greater the height of the water body at the given position, the greater the change. Unlike the other two variables, the state variable $p_{z,t}$ is updated linearly, as in the linear motion model. \begin{equation}\label{move_model_nonlin} f(x_t, u_t)=\begin{bmatrix} p_{x,t} + a \Big[\frac{L(p_{x,t},p_{y,t})}{a_{d}}+a_{off}\Big]dt\\ p_{y,t} + b \Big[\frac{L(p_{x,t},p_{y,t})}{b_{d}}+b_{off}\Big]dt\\ p_{z,t} + v_{z,t}\,dt\\ \end{bmatrix} \end{equation} $a$, $a_{d}$, $a_{off}$, $b$, $b_{d}$, and $b_{off}$ are constants defined for each water body. $L(p_{x,t},p_{y,t})$ is the height of the water body at the position $(p_{x,t},p_{y,t})$.
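To make the two motion models concrete, a minimal Python sketch is given below. The synthetic bathymetry grid, its interpolation scheme, and the example state are placeholders; only the parameter values in the mixed-model call follow the Lake Bde Maka Ska row of Table~\ref{tb:motionparam}.
\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Synthetic bathymetry: water-column height L(x, y) in meters on a 1 m grid
# (placeholder for a real lake height map).
x_grid = np.arange(0.0, 500.0, 1.0)
y_grid = np.arange(0.0, 400.0, 1.0)
L_grid = 10.0 + 5.0 * np.sin(x_grid[:, None] / 50.0) * np.cos(y_grid[None, :] / 40.0)
L = RegularGridInterpolator((x_grid, y_grid), L_grid,
                            bounds_error=False, fill_value=0.0)

def f_linear(x, u, dt):
    """Linear motion model, Eq. (4): every state is advanced by its velocity."""
    return x + u * dt

def f_mixed(x, v_z, dt, a, a_d, a_off, b, b_d, b_off):
    """Linear/nonlinear mixed motion model, Eq. (5).

    The (x, y) updates depend on the local water-column height L(x, y),
    while the depth update remains linear in the vertical velocity v_z.
    """
    px, py, pz = x
    h = L([[px, py]])[0]
    return np.array([px + a * (h / a_d + a_off) * dt,
                     py + b * (h / b_d + b_off) * dt,
                     pz + v_z * dt])

# One step of each model from the same (illustrative) state.
x0 = np.array([250.0, 200.0, 2.0])
print(f_linear(x0, np.array([1.0, -3.0, -0.1524]), dt=1.0))
print(f_mixed(x0, v_z=-0.1524, dt=1.0,
              a=0.6, a_d=0.2, a_off=3.0, b=2.0, b_d=10.0, b_off=23.0))
\end{verbatim}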
\subsection{Measurement Model} The measurement function $h$ is the same for both the linear and the mixed motion models: \begin{equation}\label{meas_model} h(x_t)=\begin{bmatrix} p_{z,t} & L(p_{x,t},p_{y,t})-p_{z,t}\\ \end{bmatrix}^{T} \end{equation} $L$ is a bathymetry map, $p_{z,t}$ represents the depth of the vehicle from the surface measured by the pressure sensor, and $p_{z,a,t}=L(p_{x,t},p_{y,t})-p_{z,t}$ represents the altitude of the AUV measured by the single-beam sonar. Therefore, the sum of $p_{z,t}$ and $p_{z,a,t}$ is the height $L(p_{x,t},p_{y,t})$ at the position ($p_{x,t},p_{y,t}$), as shown in Fig. \ref{fig:lake}. \section{Methodology} \label{sec:methodology} Since the motion model in Eq. \ref{move_model_nonlin} and the measurement model in Eq. \ref{meas_model} are nonlinear, it is necessary to use nonlinear Bayes filter algorithms to solve the localization problem. The EKF and UKF are widely used to handle nonlinear state estimation under the assumption that the state variables follow a Gaussian distribution, but they can fail when the distribution is not Gaussian \cite{schon2006marginalized}. The PF \cite{thrun2002particle} is resilient to various types of noise, but it is computationally expensive. The MPF \cite{schon2011particle} uses the PF for the nonlinear state variables and the KF for the linear state variables, since the KF is the optimal estimator for the latter. \subsection{Extended Kalman Filter}\label{EKF} In order to approximate a nonlinear system, the EKF uses a first-order Taylor series expansion \cite{thrun2005probabilistic}. The linear system matrices of the KF are replaced with Jacobians to make predictions. The Jacobians for the mixed motion model in Eq. \ref{move_model_nonlin} and the measurement model in Eq. \ref{meas_model} are shown in Eqs. \ref{Fnonlin} and \ref{H}, respectively. The EKF requires the following inputs: the state vector, state covariance, control inputs, and measurements. With these inputs, the EKF uses the well-known two-step approach: \begin{itemize} \item Prediction Step: The state vector is updated based on the motion model and control inputs. Once it is updated, the Jacobian, state covariance, and noise covariance matrices are used to find the state vector and state covariance matrix for the next time step. \item Correction Step: The Kalman gain is calculated using the Jacobian, state covariance, and measurement noise covariance matrices. After the calculation, it is used to refine the state vector from the prediction step. \end{itemize} The EKF assumes that the state variables follow a Gaussian distribution with a single mode. Due to this assumption, the EKF is computationally efficient, and each update is $O(d^{3})$, where $d$ is the dimension of the state vector $x_{t}$ \cite{daum2005nonlinear}. The EKF is applied directly to Eq. \ref{modelall} for both the linear and mixed motion cases to localize the AUV using the depth and altitude measurements. \begin{equation}\label{Fnonlin} F_t=\begin{bmatrix} 1+\frac{a}{a_{d}}\Big[\frac{\partial L(x,y)}{\partial x}\Big]dt & \frac{a}{a_{d}}\Big[\frac{\partial L(x,y)}{\partial y}\Big]dt & 0 \\ \frac{b}{b_{d}}\Big[\frac{\partial L(x,y)}{\partial x}\Big]dt & 1+\frac{b}{b_{d}}\Big[\frac{\partial L(x,y)}{\partial y}\Big]dt & 0 \\ 0 & 0 & 1 \\ \end{bmatrix} \end{equation} \begin{equation}\label{H} H_t=\begin{bmatrix} 0 & 0 & -1\\ \frac{\partial L(x,y)}{\partial x} & \frac{\partial L(x,y)}{\partial y} & 1\\ \end{bmatrix} \end{equation} \subsection{Unscented Kalman Filter} The UKF~\cite{wan2000unscented} is another approach to estimating a nonlinear system, using the Unscented Transform instead of a Taylor series.
It samples a set of sigma points to capture the mean and covariance of a Gaussian distribution and then propagates those points through the true nonlinear system. In this way, the UKF can handle higher degrees of nonlinearity than the EKF. The UKF requires the same inputs as the EKF along with some additional parameters, namely $n, \alpha, \beta, \mbox{and } \kappa$. Like the EKF, the UKF has a two-step estimation process: \begin{itemize} \item Prediction Step: The UKF chooses $2n+1$ sigma points from the Gaussian distribution and passes them through the motion model $f$, where $n$ is the number of dimensions. $\alpha,\beta$, and $\kappa$ are used to determine the weights for each sample and the spread of the sigma points. \item Correction Step: The state vector from the prediction step is used to generate sigma points. Their predicted measurements, the noise covariance, and the state covariance matrices are used to calculate the Kalman gain. With the gain, the state vector is updated as in the EKF, and the updated Gaussian distribution is approximated from the sigma points. \end{itemize} The UKF can thus create a more accurate approximation of the Gaussian distribution than the EKF. Since the estimation is still based on the Gaussian assumption, however, it may not work for multi-modal distributions. In most cases, the UKF shows notable improvements compared to the EKF, despite possessing similar computational complexity~\cite{daum2005nonlinear}. The UKF is also directly applied to Eq. \ref{modelall} for the linear and mixed motion cases. \subsection{Particle Filter} The PF \cite{salmond2005introduction} implements the Bayes filter algorithm using sequential Monte Carlo methods. Unlike the EKF and UKF, the PF does not require any assumptions regarding the distribution. Instead, it uses $N$ \textit{particles} to approximate the distribution of the state vector $x_{t}$. The more particles there are, the more accurate the approximation of the distribution. Furthermore, it can handle distributions with high nonlinearity and multiple modes. However, the computational complexity of the PF for each update is $O(N{d^{2}})$, and it increases as $N$ grows. When $N$ is much larger than $d$, the PF can be much slower than the EKF and UKF \cite{gustafsson2010particle}. This computational burden is often the main drawback of the PF for real-time implementations. \begin{algorithm}[t!] \setlength\belowcaptionskip{-20pt} \caption{Depth-based PF Localization}\label{dpfl} \begin{algorithmic}[1] \State \textbf{D-PFL main} \State $L$ = Bathymetry data of a target lake \State $N$ = The number of particles \State $x_{init}$ = Initial pose of an AUV \State $x_1$ = \textbf{Initialize\_around\_pose}$(L, N)$ \State $z_t$ = Sensor measurements \State $u_t$ = Control input \For{$t= 1,..., T$} \State $x_p$ = $x_t$ \For{$m= 1,..., N$} \State{$x(m,:)$ = \textbf{motion\_update}$(u_{t}, x_{p}(m,:), L)$} \State{$w(m)$ = \textbf{sensor\_update}$(z_{t}, x(m,:), L)$} \EndFor \State $w_{total}$ = $sum(w)$ \For{$m= 1,..., N$} \State $w(m)$ = $w(m)/w_{total}$ \EndFor \State{$x_t$ = \textbf{resample\_particles}$(x(m,:), w, L, rand)$} \State $w_t$ = $w$ \State $est\_pose$ = \textbf{PFL\_get\_pose}$(x_{t}, w_{t})$ \EndFor \\ \Return $est\_pose$ \end{algorithmic} \end{algorithm} \setlength{\textfloatsep}{5pt} We propose a depth-based PF localization (d-PFL) method, summarized in Algorithm \ref{dpfl}. At each time step, \textbf{motion\_update()} propagates each particle using the control input and the bathymetry map, and \textbf{sensor\_update()} assigns the particle a weight based on the depth and altitude measurements.
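A compact Python sketch of one iteration of Algorithm~\ref{dpfl} follows. The toy height map, the measurement values, and the fraction of randomly re-seeded particles are illustrative placeholders; the noise magnitudes mirror Table~\ref{tb:modelparam}, and this is a sketch of the idea rather than our Matlab implementation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def L(xy):
    """Toy water-column height map (m), standing in for the bathymetry grid."""
    return 10.0 + 0.02 * xy[:, 0] + 0.01 * xy[:, 1]

def motion_update(particles, u, dt, q_std):
    """Propagate all particles with the linear motion model plus Gaussian noise."""
    return particles + u * dt + rng.normal(0.0, q_std, size=particles.shape)

def sensor_update(particles, z, r_std):
    """Weight particles by the likelihood of the [depth, altitude] measurement.

    Following Eq. (6), a particle at (px, py, pz) predicts the measurement
    [pz, L(px, py) - pz].
    """
    heights = L(particles[:, :2])
    pred = np.stack([particles[:, 2], heights - particles[:, 2]], axis=1)
    resid = (z - pred) / r_std
    log_w = -0.5 * np.sum(resid ** 2, axis=1)
    w = np.exp(log_w - log_w.max())          # numerically stabilised weights
    return w / w.sum()

def resample(particles, w, lo, hi, frac_random=0.05):
    """Multinomial resampling, re-seeding a small fraction of particles at random."""
    n = len(particles)
    new = particles[rng.choice(n, size=n, p=w)].copy()
    n_rand = int(frac_random * n)
    new[:n_rand] = rng.uniform(lo, hi, size=(n_rand, 3))
    return new

# One filtering step with illustrative values.
N = 5000                                  # PF particle count (model-parameter table)
lo, hi = np.array([0.0, 0.0, 0.0]), np.array([500.0, 400.0, 15.0])
particles = rng.uniform(lo, hi, size=(N, 3))
u = np.array([1.0, -3.0, -0.1524])        # control input (m/s)
q_std = 0.1 * np.abs(u) * np.array([1.0, 1.0, 0.3048])  # from the motion noise Q
z = np.array([2.0, 9.5])                  # measured [depth, altitude] (m)

particles = motion_update(particles, u, dt=1.0, q_std=q_std)
w = sensor_update(particles, z, r_std=0.3048)
est_pose = np.average(particles, axis=0, weights=w)     # weighted mean, Eq. (10)
particles = resample(particles, w, lo, hi)
print("estimated position:", est_pose.round(2))
\end{verbatim}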
In order to evaluate the weights $w_t$, the multivariate Gaussian distribution is used as shown in Eq. \ref{weight}. \begin{equation}\label{weight} w_{t} = w_{t-1}e^{-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)}\\ \end{equation} \begin{equation}\label{getpose} \hat{x} = \sum_{m=1}^{M} w^{[m]}x^{[m]} \\ \end{equation} \begin{algorithm}[t!] \setlength\belowcaptionskip{-15pt} \caption{MPF-based Localization}\label{mpf} \begin{algorithmic}[1] \State \textbf{Initialize particles} \State\hspace{\algorithmicindent} Initialize nonlinear state variables \begin{equation}\label{noninit} x_{0|-1}^{n,(i)} \sim p(x_0^n) \end{equation} \State\hspace{\algorithmicindent} Initialize linear state variables \begin{equation}\label{lininit} \{x_{0|-1}^{l,(i)}, P_{0|-1}^{(i)}\}=\{\bar{x}_{0}^{l}, \bar{P}_{0}\} \end{equation} \For{$t= 1,..., T$} \State \textbf{PF measurement update} \State\hspace{\algorithmicindent} Evaluate the weights \begin{equation}\label{evalweight} w_t^{(i)} = p(y_{t}|X_{t}^{n,(i)}, Y_{t-1}) \end{equation} \State\hspace{\algorithmicindent} Estimate nonlinear state variables \begin{equation}\label{nonest} \hat{x}_{t|t}^{n} = \sum_{i=1}^{N}w_{t}^{(i)}x_{t|t-1}^{n,(i)} \end{equation} \State\hspace{\algorithmicindent} Resample particles \begin{equation}\label{resample} Pr(x_{t|t}^{n,(i)}=x_{t|t-1}^{n,(j)})=\widetilde{w}_{t}^{(j)} \end{equation} \State \textbf{KF measurement update} \State\hspace{\algorithmicindent} Estimate linear state variables \begin{equation}\label{xupdate} {x}_{t|t}^{l,(i)}={x}_{t|t-1}^{l,(i)}+K_{t}(y_{t}-h_{t}-C_{t}\hat{x}_{t|t-1}^{l}) \end{equation} \begin{equation}\label{linest} \hat{x}_{t|t}^{l} = \sum_{i=1}^{N}w_{t}^{(i)}x_{t|t}^{l,(i)} \end{equation} \State \textbf{PF prediction} \State\hspace{\algorithmicindent} Propagate nonlinear state variables \begin{equation}\label{pfpred} x_{t+1|t}^{n,(i)} \sim p(x_{t+1|t}^{n}|X_{t}^{n,(i)}, Y_{t}) \end{equation} \State \textbf{KF prediction} \State\hspace{\algorithmicindent} Propagate linear state variables \begin{equation}\label{xlupdate} \begin{split} \hat{x}_{t+1|t}^{l} = \bar{A}_{t}^{l}\hat{x}_{t|t}^{l} + G_{t}^{l}(Q_{t}^{ln})^{T}(G_{t}^{n}Q_{t}^{n})^{-1}z_{t}\\ +f_{t}^{l}+ L_{t}(z_{t}-A_{t}^{n}\hat{x}_{t|t}^{l}) \end{split} \end{equation} \EndFor \end{algorithmic} \end{algorithm} During the update process, the algorithm only assigns weights if the particle is within the boundaries of the map. Once the weights for the particles are calculated, they are normalized to ensure that they sum to 1. Then, particles are resampled based on their weights. In order to avoid a situation where all the particles are trapped in incorrect positions in similar environments, some of the $N$ \textit{particles} are sampled randomly at each time step. Although this can degrade the accuracy of the algorithm, it decreases the chance of incorrect estimation occurrences. The pose of the AUV is estimated using Eq. \ref{getpose} once the propagated particles and the corresponding weights are updated. \subsection{Marginalized Particle Filter} \label{subsec:mpfmodel} The MPF was proposed to reduce the computational complexity while retaining a similar performance when the model has a linear substructure \cite{schon2006marginalized}. The core idea of the MPF is to marginalize linear state variable(s) from the state vector and use the KF to estimate the linear state variable(s). The PF is then used to estimate the remaining nonlinear state variable(s). 
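To make the marginalization concrete for the model used here, note that the horizontal position $(x,y)$ enters the dynamics and measurements nonlinearly through the bathymetry map $L$, whereas the depth $z$ has linear dynamics and appears linearly in both measurement components. The sketch below exploits this: each particle carries $(x,y)$ plus a Kalman mean for $z$, with a single shared Kalman variance. It is a simplified illustration of the idea, not a line-by-line implementation of Algorithm~\ref{mpf} or of the full bookkeeping in \cite{schon2005marginalized}; all numerical values are placeholders.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def L(xy):
    """Toy water-column height map (m)."""
    return 12.0 + 0.03 * xy[:, 0] - 0.01 * xy[:, 1]

N = 300                            # MPF particle count (model-parameter table)
xy = rng.uniform([0.0, 0.0], [200.0, 150.0], size=(N, 2))  # nonlinear states (x, y)
mu_z = np.full(N, 3.0)             # per-particle Kalman mean of the linear state z
P_z = 0.3048 ** 2                  # shared Kalman variance of z
C = np.array([1.0, -1.0])          # measurement: depth = z, altitude = L(x, y) - z
R = np.diag([0.3048 ** 2, 0.3048 ** 2])
Q_xy, Q_z = 0.3 ** 2, 0.005 ** 2   # process noise variances (illustrative)

def mpf_step(xy, mu_z, P_z, u, z_meas, dt=1.0):
    # PF/KF measurement update: weight particles by the marginal likelihood,
    # then update the conditional Kalman estimate of z.
    offset = np.stack([np.zeros(len(xy)), L(xy)], axis=1)
    innov = z_meas - (offset + np.outer(mu_z, C))      # per-particle innovation
    S = P_z * np.outer(C, C) + R                       # innovation covariance (shared)
    S_inv = np.linalg.inv(S)
    log_w = -0.5 * np.einsum('ij,jk,ik->i', innov, S_inv, innov)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    K = P_z * C @ S_inv                                # Kalman gain (shared)
    mu_z = mu_z + innov @ K
    P_z = P_z - (K @ C) * P_z
    # Resample the nonlinear states together with their z estimates.
    idx = rng.choice(len(xy), size=len(xy), p=w)
    xy, mu_z = xy[idx].copy(), mu_z[idx].copy()
    # Prediction: PF step for (x, y), KF step for z.
    xy = xy + u[:2] * dt + rng.normal(0.0, np.sqrt(Q_xy), size=xy.shape)
    mu_z = mu_z + u[2] * dt
    P_z = P_z + Q_z
    return xy, mu_z, P_z

xy, mu_z, P_z = mpf_step(xy, mu_z, P_z,
                         u=np.array([1.0, -3.0, -0.1524]),
                         z_meas=np.array([3.0, 10.0]))
print("estimated position:", np.column_stack([xy, mu_z]).mean(axis=0).round(2))
\end{verbatim}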
The computational complexity of the MPF is derived in \cite{schon2007marginalized} and can be simplified to $O(Nd_{n}^{3})$ for our case, where $d_{n}$ is the dimension of the nonlinear state variable(s). In order to apply the MPF \cite{schon2005marginalized}, the model in Eq. \ref{modelall} is separated into linear and nonlinear state variables as shown in Eqs. \ref{statev2} and \ref{mpf1}. The motion model noise terms $q_t^n$ and $q_t^l$ and the measurement model noise $r_t$ are assumed to be Gaussian with zero mean. The matrices $A, C$, and $G$ are determined by the motion model. \begin{equation}\label{statev2} x_t=\begin{bmatrix} x_{t}^{n}\\ x_{t}^{l}\\ \end{bmatrix} \end{equation} \begin{equation}\label{mpf1} \begin{cases} x_{t+1}^{n} = f_{t}^{n}(x_{t}^{n})+A_{t}^{n}(x_{t}^{n})x_{t}^{l}+G_{t}^{n}(x_{t}^{n})q_{t}^{n}\\ x_{t+1}^{l} = f_{t}^{l}(x_{t}^{n})+A_{t}^{l}(x_{t}^{n})x_{t}^{l}+G_{t}^{l}(x_{t}^{n})q_{t}^{l}\\ y_{t} = h_{t}(x_{t}^{n})+C_{t}(x_{t}^{n})x_{t}^{l}+r_{t} \end{cases} \end{equation} In our case, the ratio $\frac{N(k)}{N_{PF}}$ is 1.1, where $N(k)$ is the number of particles that can be used for the MPF, and $N_{PF}$ is the number of particles used for the standard PF. This means that the MPF can use 10$\%$ more particles than the PF while retaining the same computational complexity as the standard PF. However, the EKF and UKF are still faster than the MPF, albeit less accurate. The MPF localization algorithm is shown in Algorithm \ref{mpf} along with selected simplified equations, where $Q$ and $P$ are covariance matrices. The detailed equations can be found in \cite{schon2005marginalized}. \section{Experimental Setup and Results} \label{sec:experiments} \subsection{Bathymetry Data} The following lakes, located in Minneapolis, MN, USA, were chosen: Lake Bde Maka Ska, Lake Nokomis, Lake Hiawatha, and Lake Harriet. The lakes were chosen since they are large, well-studied, have a non-flat floor, and are easy to access for future field studies. The bathymetry data was acquired from the Minnesota Department of Natural Resources (MN DNR)~\cite{naturalresourcesdepartment2018} (see Fig.~\ref{fig:bathy_raw_data}). The grid size of the bathymetry data is $5$ m for most lakes but $10$ m for some. The lake height at each position is given in feet, but we converted it to meters for our study. In our experiments, we assumed that the grid size is $1$ m to simplify the calculations, and the bathymetry data was scaled accordingly. \begin{figure}[h] \centering \includegraphics[width=0.3\linewidth]{images/lakeraw.png} \caption{Visualization of the raw bathymetry data of Lake Bde Maka Ska in Tagged Image File Format (TIFF). Source: Minnesota Department of Natural Resources.} \label{fig:bathy_raw_data} \vspace{-2mm} \end{figure} \begin{table}[t!] \setlength\belowcaptionskip{-0pt} \centering \caption{Model parameters} \begin{tabular}{p{5cm}|p{4cm}} \hline Parameter & Value \\ \hline \hline No. of particles for the PF, $N_{PF}$ & 5000 \\ \hline No. of particles for the MPF, $N_{MPF}$ & 300 \\ \hline Motion noise cov., $Q$ (m) & $0.01\begin{bmatrix} v_x^{2} & 0 & 0\\ 0 & v_y^{2} & 0\\ 0 & 0 & (0.3048 v_z)^{2}\\ \end{bmatrix}$ \\ \hline Measurement noise cov., $R$ (m) & $\begin{bmatrix} 0.3048^{2} & 0 \\ 0 & 0.3048^{2} \\ \end{bmatrix}$ \\ \hline Initial uncertainty cov., $P$ (m) & $\begin{bmatrix} 1^{2} & 0 & 0\\ 0 & 1^{2} & 0\\ 0 & 0 & 0.3048^{2}\\ \end{bmatrix}$\\ \hline \end{tabular} \label{tb:modelparam} \end{table}
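The unit conversion and gridding just described are simple to express in code. A minimal sketch is shown below, assuming the DNR raster has already been read into a NumPy array; the synthetic input, the array orientation, and the no-data handling are placeholders.
\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator

FT_TO_M = 0.3048

def build_height_map(height_ft, cell_size_m=1.0):
    """Convert a raw height grid from feet to meters and wrap it in an interpolator.

    height_ft   : 2-D array of water-column heights in feet (no-data masked to 0).
    cell_size_m : grid spacing to assume; the experiments treat each cell as 1 m.
    Returns a callable giving the height L (m) at queried (x, y) points.
    """
    height_m = height_ft * FT_TO_M
    nx, ny = height_m.shape           # first raster axis taken as x (arbitrary here)
    x = np.arange(nx) * cell_size_m
    y = np.arange(ny) * cell_size_m
    return RegularGridInterpolator((x, y), height_m,
                                   bounds_error=False, fill_value=0.0)

# Example with a synthetic raster standing in for a DNR TIFF export.
raw_ft = 30.0 * np.random.default_rng(2).random((80, 100))
L = build_height_map(raw_ft)
print("height at (x, y) = (40, 25):", round(L([[40.0, 25.0]])[0], 2), "m")
\end{verbatim}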
\begin{table}[t!] \setlength\belowcaptionskip{-0pt} \centering \caption{Motion parameters} \begin{tabular}{p{2.45cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1cm}|p{1cm}} \hline \multicolumn{6}{c}{Linear Motion Parameters}\\ \hline Lake & $v_{x}(m/s)$ & $v_{y}(m/s)$ & $v_{z}(m/s)$ & $a$ & $b$ \\ \hline \hline Bde Maka Ska & 1 & -3 & -0.1524 & -0.02 & 0.06 \\ \hline Nokomis & 2 & -4 & -0.1524 & -0.1 & 0.2 \\ \hline Hiawatha & 1 & 2 & -0.3048 & -0.05 & -0.1\\ \hline Harriet & 1.5 & -3 & -0.3048 & -0.03 & 0.06\\ \hline \end{tabular} \begin{tabular}{p{2.5cm}|p{0.8cm}|p{0.8cm}|p{0.6cm}|p{0.6cm}|p{0.7cm}|p{0.7cm}|p{1.4cm}} \multicolumn{8}{c}{Nonlinear Motion Parameters}\\ \hline Lake & $a$ & $a_{d}$ & $a_{off}$ & $b$ & $b_{d}$ & $b_{off}$ & $v_{z}(m/s)$ \\ \hline \hline Bde Maka Ska &0.6&0.2&3&2&10&23& -0.1524\\ \hline Nokomis & -0.1&0.2&1&1&-1&-1& -0.1524\\ \hline Hiawatha &-0.05&-0.1&1&1&0&0&-0.3048\\ \hline Harriet &-0.1&0.2&3&3&1&1& -0.3048\\ \hline \end{tabular} \label{tb:motionparam} \end{table} \begin{figure*}[t] \setlength\belowcaptionskip{-0pt} \centering \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\linewidth]{image_new/calhoun_lin_motion_withlake.png} \caption{Evaluation of linear motion estimations with bathymetry data of Lake Bde Maka Ska (top view).} \end{subfigure}% ~ \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\linewidth]{image_new/calhoun_nonlin_motion_withlake.png} \caption{Evaluation of mixed motion estimations with bathymetry data of Lake Bde Maka Ska (top view).} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \includegraphics[width=\linewidth]{image_new/calhoun_lin_motion_lake_zoom.png} \caption{Evaluation of linear motion estimations with bathymetry data of Lake Bde Maka Ska (zoom-in view).} \end{subfigure}% ~ \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\linewidth]{image_new/calhoun_nonlin_motion_lake_zoom.png} \caption{Evaluation of mixed motion estimations with bathymetry data of Lake Bde Maka Ska (zoom-in view).} \setlength\belowcaptionskip{-20pt} \end{subfigure} \vspace{-2mm} \caption{Localization performance of the four algorithms for an AUV with an altimeter and depth sensor within Lake Bde Maka Ska using bathymetry data (Start:\ding{109} End:\ding{83}).} \vspace{-0mm} \label{fig:plotresults1} \end{figure*} \subsection{Simulation Settings} The goal of this study is to evaluate each algorithm with real bathymetry data as a prerequisite to choosing a deployable localization algorithm. Due to the unique challenges of the underwater environment, it is extremely difficult, if not impossible, to obtain the ground truth of the AUV's position. Thus, to quantify the accuracy and efficiency of the algorithms, we simulated the ground truth position of the AUV and evaluated the performance of each filter on the simulated AUV's motion. The linear and mixed motion models are designed to test the performance of each localization algorithm on the bathymetry data from different lake environments. Table~\ref{tb:modelparam} includes the model parameters for the simulation. The control inputs for each algorithm and lake were designed separately due to the lakes' different sizes and heights. Table~\ref{tb:motionparam} defines the parameters for the linear motion model in Eqs. \ref{controlinput} and \ref{move_model_lin} and for the mixed motion model in Eq. \ref{move_model_nonlin}. For the mixed model, $x$ and $y$ are nonlinear state variables, and $z$ is a linear state variable.
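A condensed sketch of this simulation pipeline is given below: a ground-truth trajectory is rolled out with the mixed motion model, noisy depth and altitude measurements are synthesized from the height map, and the per-axis RMSE of Eq.~\ref{eq:rmse} (introduced in the next subsection) is computed against an estimated trajectory. The toy height map, the trajectory start, and the stand-in ``estimate'' are placeholders; the motion parameters follow the Lake Nokomis row of Table~\ref{tb:motionparam} and the noise levels follow Table~\ref{tb:modelparam}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def L(p):
    """Toy water-column height map (m), standing in for a lake's bathymetry."""
    return 15.0 + 0.02 * p[0] - 0.01 * p[1]

def f_mixed(p, v_z, dt, a, a_d, a_off, b, b_d, b_off):
    """Mixed motion model of Eq. (5)."""
    h = L(p)
    return np.array([p[0] + a * (h / a_d + a_off) * dt,
                     p[1] + b * (h / b_d + b_off) * dt,
                     p[2] + v_z * dt])

# Ground-truth rollout (Lake Nokomis mixed-motion parameters).
T, dt = 50, 1.0
params = dict(a=-0.1, a_d=0.2, a_off=1.0, b=1.0, b_d=-1.0, b_off=-1.0)
truth = np.zeros((T, 3))
truth[0] = [100.0, 80.0, 10.0]
for t in range(1, T):
    truth[t] = f_mixed(truth[t - 1], v_z=-0.1524, dt=dt, **params)

# Synthetic [depth, altitude] measurements with 0.3048 m (1 ft) noise;
# these are what a filter would consume.
meas = np.stack([truth[:, 2] + rng.normal(0.0, 0.3048, T),
                 np.array([L(p) for p in truth]) - truth[:, 2]
                 + rng.normal(0.0, 0.3048, T)], axis=1)

# Placeholder estimate standing in for a filter run on `meas`.
est = truth + rng.normal(0.0, 1.0, size=truth.shape)

# Per-axis root-mean-square error against the ground truth.
rmse = np.sqrt(np.mean((truth - est) ** 2, axis=0))
print("RMSE (x, y, z) in meters:", rmse.round(2))
\end{verbatim}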
\subsection{Results and Discussions} We measured the performance of our algorithms on a $4.20$ GHz Core i7-7700K processor running Ubuntu 18.04.2 LTS with $16$ GB of DDR3 memory, using Matlab R2018b. $100$ runs were performed with each filter for each lake and motion model, for a total of $3,200$ runs. The results are summarized in Table \ref{tb:resulteval}. The runtime for each case was measured, and we used the root-mean-square error (RMSE) in Eq. \ref{eq:rmse} as a metric to evaluate the performance along each axis, where $T$ is the number of steps, $p_g$ is the ground truth, and $p_e$ is the estimated position. \begin{equation}\label{eq:rmse} \mathrm{RMSE} = \sqrt{\frac{\sum_{t=1}^{T}(p_{g}-p_{e})^{2}}{T}} \end{equation} One run in Lake Bde Maka Ska is shown in Figs. \ref{fig:plotresults1} and \ref{fig:plotresults2}. For both the linear and nonlinear cases, the EKF generally shows the worst performance. The UKF performed well in the linear case, but it deviated from the ground truth in the nonlinear case. It is noticeable that the PF and MPF mostly outperformed the EKF and UKF for both the linear and mixed motion cases. However, the PF often diverged from the ground truth and showed unstable performance. A likely cause is that the randomly selected particles can make the estimate diverge from the ground truth when a water body's bathymetry data does not have enough terrain variation. \begin{figure*}[t!] \centering \begin{subfigure}[t]{0.5\textwidth} \includegraphics[width=\linewidth]{image_new/calhoun_lin_motion_xyz.png} \caption{Evaluation of linear motion estimations along the $x$, $y$, and $z$ axes.} \end{subfigure}% ~ \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=\linewidth]{image_new/calhoun_nonlin_xyz.png} \caption{Evaluation of mixed motion estimations along the $x$, $y$, and $z$ axes.} \end{subfigure} \vspace{-2mm} \caption{Evaluation of each algorithm's performance over time within Lake Bde Maka Ska using bathymetry data for an AUV with an altimeter and depth sensor.} \vspace{-2mm} \label{fig:plotresults2} \end{figure*} \begin{table*}[t!]
\centering \footnotesize \caption{Localization performance evaluation for an AUV in Lake Bde Maka Ska; bathymetry data from the MN DNR.} \setlength\belowcaptionskip{-0pt} \begin{tabular}{|p{2.0cm}|p{0.9cm}||p{0.8cm}|p{0.9cm}|p{0.6cm}|p{0.6cm}| p{0.6cm}||p{0.8cm}|p{0.9cm}|p{0.6cm}| p{0.6cm}| p{0.6cm}|} \hline \multirow{3}{*}{Lake}&\multirow{3}{*}{\makecell{Number\\of\\steps}} & \multicolumn{5}{c||}{Linear motion model} & \multicolumn{5}{c|}{Mixed motion model} \\ \cline{3-12} & & \multirow{2}{*}{Method} & \multirow{2}{*}{\makecell{Runtime\\(s)}} & \multicolumn{3}{c||}{$RMSE(m)$} & \multirow{2}{*}{Method} & \multirow{2}{*}{\makecell{Runtime\\(s)}} & \multicolumn{3}{c|}{$RMSE(m)$} \\ \cline{5-7} \cline{10-12} & & & &x &y &z & & &x &y &z \\ \hline Bde Maka Ska & 50 & EKF & 90.15 &4.09 & 12.43 & 0.42 & \textbf{EKF} &90.13 & \textbf{2.25} & \textbf{5.01} & \textbf{0.53} \\ \hline Bde Maka Ska & 50 &UKF & 52.33 & 3.07 & 3.26 & 0.40 &UKF& 53.92 & 4.07 & 9.02 & 0.55\\ \hline Bde Maka Ska & 50 & \textbf{PF} & 252.14 & \textbf{1.38}& \textbf{3.57} & \textbf{0.67} &PF& 279.03 & 4.18 & 9.56 & 1.32\\ \hline Bde Maka Ska & 50 & MPF &94.02 & 2.88 & 2.32 & 0.74 & MPF & 103.70 & 4.18 & 3.90& 1.15 \\ \hline \hline Nokomis &50 & EKF & 57.77 & 3.92 & 5.60 & 0.43 & EKF& 59.84 & 4.96 & 7.12 & 0.49\\ \hline Nokomis & 50 &UKF & 29.72 & 3.05 & 3.98& 0.42& UKF & 29.47 & 12.74 & 12.70 & 0.46 \\ \hline Nokomis & 50 &PF & 251.50 & 13.69 & 23.12 & 0.89 & PF &291.11 & 11.90 & 18.60 & 0.98 \\ \hline Nokomis & 50 & \textbf{MPF} & 79.42 & \textbf{2.84} & \textbf{3.10} & \textbf{0.76} & \textbf{MPF} & 79.87 & \textbf{4.12} & \textbf{4.80} & \textbf{0.76}\\ \hline \hline Hiawatha & 30 & EKF& 12.97 & 3.56 & 3.57 & 0.48 & EKF & 12.88 & 3.33 & 2.63 & 0.51 \\ \hline Hiawatha & 30 & UKF & 6.95 & 3.07 & 2.93 & 0.53 & UKF &7.32 & 3.03 & 3.00 & 0.57 \\ \hline Hiawatha & 30 & \textbf{PF} & 137.52 & \textbf{1.66} & \textbf{2.57} & \textbf{1.13} & \textbf{PF} & 152.90 & \textbf{1.15} & \textbf{2.33} & \textbf{1.15} \\ \hline Hiawatha &30 & MPF& 37.82 & 3.10 & 3.51 & 0.97 & MPF & 42.73 & 3.33 & 3.47 & 0.90 \\ \hline \hline Harriet & 30 &EKF & 37.03 & 5.03 & 7.20 & 0.63 & EKF & 35.80 & 5.98 & 9.40 & 0.63 \\ \hline Harriet & 30 & UKF & 19.43& 3.09 & 3.09 & 0.59 & UKF & 18.01& 4.08 & 6.03 & 0.54 \\ \hline Harriet & 30 & PF & 149.71 & 1.77 & 3.72 & 1.13 & \textbf{PF} & 176.66 & \textbf{1.62} & \textbf{2.84} & \textbf{1.16} \\ \hline Harriet &30 & \textbf{MPF} & 51.71 & \textbf{2.93} & \textbf{2.42} & \textbf{1.10} & MPF &54.39 & 3.13 & 2.75 & 1.09 \\ \hline \end{tabular} \label{tb:resulteval} \end{table*} \subsubsection{Linear motion case} The EKF performs the worst for most lakes, and the results are not reliable since the performance varied between lakes. The UKF does not always perform best, but it gives reliable and relatively accurate results with lower computational complexity. The PF shows the most accurate result for Lake Hiawatha, but it performs worst for Lake Nokomis. This result is likely caused by the fact that Lake Nokomis has a symmetrical structure and fewer variations in height. The MPF generally gives the most accurate and reliable results, and it is computationally cheaper than the PF. Overall, the MPF is the most reliable and accurate filter according to the results. It is worth mentioning that the UKF is a good option if an AUV does not have much computational power and high accuracy is not required. 
The PF can be used for localization if the bathymetry data has enough variation in height and an asymmetrical structure, provided that the AUV has enough computational power. \subsubsection{Nonlinear motion case} As in the linear motion cases, the PF and MPF generally perform better than the EKF and UKF. The UKF performs poorly in some cases due to the high nonlinearity of the motion. The PF exhibits the same issue that it has in the linear motion cases: the estimates diverge when there is not enough variation in the bathymetry data. Similar to the linear cases, the MPF gives reliable and accurate results overall. \subsubsection{Discussions} The MPF shows the most reliable and accurate results for both the linear and mixed motion cases. If an AUV needs a well-rounded algorithm for the localization problem, the MPF is the best filter among the four. However, if the bathymetry data of a lake has enough variation and the task requires high accuracy, then the PF is a better option. For an AUV with low computational power, the UKF could be the best filter for localization if the AUV's motion is mostly going to be linear. \section{Conclusion} Using bathymetry data and the measurements from a single-beam sonar altimeter and a depth sensor, we present four localization algorithms based on the EKF, UKF, PF, and MPF, respectively. We also evaluate the performance of each filter in various aquatic environments and with multiple robot motions. The results demonstrate the feasibility of Bayes filter-based algorithms for localizing an AUV with bathymetric information using two low-cost sensors. The MPF-based localization generally performs best, both in terms of accuracy and computational cost. However, the UKF can be a good alternative to the PF and MPF, at the expense of accuracy, if an AUV moves mostly linearly and has limited computational power. Additionally, the PF appears to be the most accurate in water bodies with sufficient terrain variation, provided the AUV possesses the necessary computational power. Future work will focus on evaluating the performance of the proposed localization algorithms when run on board AUVs, testing the algorithms on the bathymetry data from other Minnesota lakes, and deploying active acoustic sensors to improve localization accuracy. \section*{Acknowledgments} We are thankful to Chelsey Edge, Marc Ho, Jiawei Mo, and Julian Lagman for their assistance, the MN DNR for the bathymetric data, and the MnDRIVE Initiative for supporting this research.
1,108,101,563,100
arxiv
\section{Conclusion} In this work, we introduce a novel adaptive multi-corpora training algorithm to improve the performance of NNLM for ASR applications. The proposed approach provides an adaptive data sampling strategy to effectively leverage each corpus along the training process. Experiment results show prominent WER improvement for both in-domain and out-of-domain adaptation tasks. Future work might include extending the presented multi-corpora algorithm to end-to-end ASR model training. \section{Experiments} \subsection{Datasets} Table~\ref{table:corpora} summarizes the corpora used in our experiments as well as their train, validation, and test splits. Among them, we have \begin{itemize} \item Publicly available datasets, including Fisher speech corpus (\textsl{fisher}), Wall Street Journal corpus (\textsl{wsj}), and Wikitext-103 (\textsl{wiki}). For \textsl{fisher} corpus, we only utilize text training data in this study and each record represents a conversation. For \textsl{wsj} corpus, we use \textit{nov93dev} as the validation set and \textit{nov92} as the test set; \item In-house datasets, which contain video corpus sampled from public social media videos (\textsl{video}), and three conversational speech corpora with different topics (\textsl{conv1}, \textsl{conv2}, \textsl{conv3}) collected using mobile devices through crowd-sourcing from a data supplier for ASR. All these datasets are de-identified before transcription; all transcribers and researchers do not have access to any user-identifiable information. \end{itemize} \begin{table}[ht] \centering \caption{Summary of data splits from multiple corpora.} \begin{tabular}{l|ccc} \hline & \multicolumn{3}{c}{Splits} \\ \cline{2-4} Corpora & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Train\\ (\#text records)\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}} {Validation} \\ (\#utts)\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Test\\ (\#utts)\end{tabular} \\ \hline \textsl{conv1} & \multicolumn{1}{c|}{138K} & \multicolumn{1}{c|}{1K} & 4K \\ \textsl{conv2} & \multicolumn{1}{c|}{2000K} & \multicolumn{1}{c|}{-} & - \\ \textsl{video} & \multicolumn{1}{c|}{1100K} & \multicolumn{1}{c|}{-} & - \\ \textsl{wiki} & \multicolumn{1}{c|}{840K} & \multicolumn{1}{c|}{-} & - \\ \textsl{fisher} & \multicolumn{1}{c|}{12K} & \multicolumn{1}{c|}{-} & - \\ \textsl{conv3} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{1K} & 12K \\ \textsl{wsj} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{0.5K} & 0.3K \\ \hline \end{tabular} \label{table:corpora} \end{table} \subsection{Setups} We consider two adaptation scenarios in this study \begin{itemize} \item \emph{In-domain adaptation}, where one of the training corpora is in the same domain with the target; \item \emph{Out-of-domain adaptation}, none of the training corpora being in the same domain with the target. \end{itemize} For each setting, NNLMs are trained on multiple corpora and integrated with an ASR model via first-pass shallow fusion. We then evaluate the performance of trained NNLMs on the test sets of target domain in terms of perplexity (PPL) and word error rate (WER). Each NNLM contains an embedding layer of dimension 300, and 2 LSTM layers with 1500 hidden nodes, which has around 15 million parameters in total. The ASR is a RNN-T model with the Emformer encoder \cite{emformer2021streaming}, LSTM predictor, and a joiner. It has around 80 million parameters and is trained from scratch using the train split of LibriSpeech ASR corpus \cite{panayotov2015librispeech}. 
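For concreteness, the NNLM described above admits the following minimal PyTorch-style sketch (a 300-dimensional embedding layer, two LSTM layers with 1500 hidden units, and a vocabulary-sized output projection); the class and variable names are illustrative and this is not the exact implementation used in our experiments.
\begin{verbatim}
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=1500, num_layers=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim,
                            num_layers=num_layers, batch_first=True)
        self.output = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        # tokens: (batch, seq_len) token ids; returns per-token vocabulary logits
        emb = self.embedding(tokens)
        hidden, state = self.lstm(emb, state)
        return self.output(hidden), state
\end{verbatim}
Training minimizes the token-level cross entropy, and the perplexity reported on the evaluation sets is the exponential of the average per-token negative log-likelihood under this model.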
\iffalse \begin{table*}[ht] \minipage{0.45\textwidth} \centering \begin{tabular}{l|l|ll} \hline & & WER & PPL \\ \hline No NNLM & & 24.60 & \\ \hline \textsl{conv} only & & 20.87 & 54 \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}\textsl{conv1+conv2}\end{tabular}} & \texttt{even prob} & 19.75 & 43 \\ & \texttt{n-gram prob} & 20.15 & 48 \\ & \textbf{\texttt{adaptive prob}} & \textbf{18.82} & \textbf{38} \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}\textsl{conv1+conv2}\\ \textsl{+video}\end{tabular}} & \texttt{even prob} & 19.20 & 36 \\ & \texttt{n-gram prob} & 19.63 & 43 \\ & \textbf{\texttt{adaptive prob}} & \textbf{18.65} & \textbf{32} \\ \hline \end{tabular} \caption{WER and PPL results on \textsl{conv1} test set} \label{table:in-domain} \endminipage \minipage{0.5\textwidth} \centering \begin{tabular}{l|l|ll|ll} \hline & & \multicolumn{2}{c|}{\textsl{conv3}} & \multicolumn{2}{c}{\textsl{wsj}} \\ \hline & & WER & PPL & WER & PPL \\ \hline No NNLM & & 24.79 & & 10.96 & \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}\textsl{conv1+conv2}\\ +\textsl{video}\end{tabular}} & \texttt{even prob} & 18.95 & 65 & 9.82 & 91 \\ & \texttt{n-gram prob} & 18.55 & 60 & 9.39 & 80 \\ & \textbf{\texttt{adaptive prob}} & \textbf{17.98} & \textbf{49} & \textbf{8.95} & \textbf{65} \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}\textsl{conv1+conv2+}\\ \textsl{video+wiki}\\ \textsl{+fisher}\end{tabular}} & \texttt{even prob} & 18.92 & 63 & 9.44 & 78 \\ & \texttt{n-gram prob} & 17.91 & 48 & 9.10 & 66 \\ & \textbf{\texttt{adaptive prob}} & \textbf{17.34} & \textbf{41} & \textbf{8.86} & \textbf{62} \\ \hline \end{tabular} \caption{WER and PPL results on \textit{conv3} and \text{wsj} test sets} \label{table:out_domain} \endminipage \end{table*} \fi The following multi-corpora NNLM training methods are compared in our experiments: \begin{itemize} \item \texttt{uniform-weight}, which assigns the same sampling probability to each training corpus. Note that this method is close to the ``data merging'' approach where the models are trained on merged corpora, but simply merging data altogether fails to balance the size of each training corpus; \item \texttt{ngram-opt-weight}, the method presented in \cite{RajFilTiw2019}, where an $n$-gram model is trained on each corpus, and the optimized interpolation weights (with respect to the validation set) from these $n$-gram models are used as the fixed sampling probabilities over multiple corpora during NNLM training; \item \texttt{adaptive-weight}, our proposed method in Algorithm~\ref{alg:main}. \end{itemize} \subsection{Results} \subsubsection{In-domain adaptation} We first conduct a set of experiments on learning NNLMs with the train split of one corpus (\textsl{conv1}), two corpora (\textsl{conv1+conv2}), or three corpora (\textsl{conv1+conv2+video}). The validation and test sets of \textsl{conv1} are considered as the ones from the target domain. Hence, these experiments are regarded as in-domain adaptation tasks since the train split of \textsl{conv1} also appears in the training corpora. Table~\ref{table:in-domain} demonstrates the PPL and WER evaluation results on the test set of \textsl{conv1}. We can observe that the proposed adaptive training algorithm achieves the best performance in both scenarios of two corpora training and three corpora training. Compared with \texttt{uniform-weight} and \texttt{ngram-opt-weight} methods, our approach results in relative 3\%-5\% and 5\%-7\% WER reductions, respectively. 
It is also expected that leveraging more corpora in the training set generally improves the NNLM quality. \begin{table}[ht] \vspace{-0.3cm} \centering \caption{PPL and WER results on \textsl{conv1} test set.} \begin{tabular}{l|l|cc} \hline Train Corpora & NNLM Training Method & PPL & WER \\ \hline \textsl{n/a} & \texttt{without-NNLM} & - & 24.60 \\ \hline \textsl{conv1} & \texttt{uniform-weight} & 54.2 & 20.87 \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}\textsl{conv1+conv2}\end{tabular}} & \texttt{uniform-weight} & 43.5 & 19.75 \\ & \texttt{ngram-opt-weight} & 48.5 & 20.15 \\ & \texttt{adaptive-weight} & \textbf{38.8} & \textbf{18.82} \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}\textsl{conv1+conv2}\\ \textsl{+video}\end{tabular}} & \texttt{uniform-weight} & 36.8 & 19.20 \\ & \texttt{ngram-opt-weight} & 43.8 & 19.63 \\ & \texttt{adaptive-weight} & \textbf{32.9} & \textbf{18.65} \\ \hline \end{tabular} \label{table:in-domain} \end{table} \subsubsection{Out-of-domain adaptation} Here, NNLMs are trained on three corpora (\textsl{conv1+conv2+video}) or five corpora (\textsl{conv1+conv2+video+wiki+fisher}), and evaluated on two target domains, \textsl{conv3} and \textsl{wsj}. Note that each target domain is different from any of the domains in the training corpora. Thus, this study is considered as out-of-domain adaptation. The PPL and WER evaluation results on the test sets of \textsl{conv3} and \textsl{wsj} are presented in Table~\ref{table:out-domain:conv3} and Table~\ref{table:out-domain:wsj}, respectively. Similar to our previous findings, the introduced \texttt{adaptive-weight} method outperforms all other methods consistently. Compared with \texttt{uniform-weight} approach, our method obtains relative 5\%-9\% WER reductions on \textsl{conv3} test set and 6\%-9\% WER reductions on \textsl{wsj} test set. Compared with \texttt{ngram-opt-weight} approach, our method achieves relative 3\% WER reductions on \textsl{conv3} test set and 2\%-5\% WER reductions on \textsl{wsj} test set. 
\begin{table}[ht] \centering \caption{PPL and WER results on \textsl{conv3} test set.} \begin{tabular}{l|l|cc} \hline Train Corpora & NNLM Training Method & PPL & WER \\ \hline \textsl{n/a} & \texttt{without-NNLM} & - & 24.79 \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}\textsl{conv1+conv2}\\ \textsl{+video}\end{tabular}} & \texttt{uniform-weight} & 65.6 & 18.95 \\ & \texttt{ngram-opt-weight} & 60.2 & 18.55 \\ & \texttt{adaptive-weight} & \textbf{49.7} & \textbf{17.98} \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}\textsl{conv1+conv2+}\\ \textsl{video+wiki}\\ \textsl{+fisher}\end{tabular}} & \texttt{uniform-weight} & 63.2 & 18.92 \\ & \texttt{ngram-opt-weight} & 48.2 & 17.91 \\ & \texttt{adaptive-weight} & \textbf{41.5} & \textbf{17.34} \\ \hline \end{tabular} \label{table:out-domain:conv3} \end{table} \begin{table}[ht] \centering \caption{PPL and WER results on \textsl{wsj} test set.} \begin{tabular}{l|l|cc} \hline Train Corpora & NNLM Training Method & PPL & WER \\ \hline \textsl{n/a} & \texttt{without-NNLM} & - & 10.96 \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}\textsl{conv1+conv2}\\ \textsl{+video}\end{tabular}} & \texttt{uniform-weight} & 91.2 & 9.82 \\ & \texttt{ngram-opt-weight} & 80.8 & 9.39 \\ & \texttt{adaptive-weight} & \textbf{65.2} & \textbf{8.95} \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}\textsl{conv1+conv2+}\\ \textsl{video+wiki}\\ \textsl{+fisher}\end{tabular}} & \texttt{uniform-weight} & 78.2 & 9.44 \\ & \texttt{ngram-opt-weight} & 66.9 & 9.10 \\ & \texttt{adaptive-weight} & \textbf{62.4} & \textbf{8.86} \\ \hline \end{tabular} \label{table:out-domain:wsj} \end{table} \subsection{Analysis} To provide more insights on how the proposed adaptive training algorithm works, we conduct additional analysis on \textsl{conv1} in-domain adaptation task with two training corpora (\textsl{conv1+conv2}), corresponding to row 4 to 6 in Table~\ref{table:in-domain}. We present the training progress as follows: in Figure~\ref{fig:analysis}(a), we compare the validation loss along the training progress of multiple methods with different sampling strategies. Their corresponding sampling probability for the in-domain training corpus \textsl{conv1} is shown in Figure~\ref{fig:analysis}(b). Seen from Figure~\ref{fig:analysis}(a), the proposed approach converges to a much lower validation loss. Without the use of out-of-domain data, training with in-domain data of \textsl{conv1} only performs the worst since the train split of \textsl{conv1} is relatively small. Similarly, leveraging the $n$-gram interpolation weights as the sampling probabilities makes it hard to perform well, because the $n$-gram model trained on in-domain adaptation set tends to receive a much higher interpolation weight. Assigning such a high probability to a small adaptation set of \textsl{conv1} results in insufficient use of the other corpus. On the other hand, the proposed \texttt{adaptive-weight} method can implicitly take into account the size and importance of each corpus while adjusting the sampling probability during the training process. According to the sampling probability curve presented in Figure~\ref{fig:analysis}(b), its training process can be split into three stages. In the first two epochs, the model mainly focuses on learning from the in-domain adaptation set \textsl{conv1}. Starting from epoch three, continuing learning heavily from the in-domain corpus may lead to overfitting or early convergence. 
The model thus puts more efforts on learning from the out-of-domain corpus \textsl{conv2} until epoch six. After that, the learning rate becomes relatively smaller. The model then seeks a balanced weight for both corpora. For more comparison, we train additional models by using the starting sampling probability (\textsl{conv1} 0.8), lowest probability (\textsl{conv1} 0.2), or last probability (\textsl{conv1} 0.58) along the training progress of \texttt{adaptive-weight}, and assigning it as the static sampling probability during the training. However, none of these methods performs better or even close to the proposed method, further highlighting the significance of an adaptively adjustable data sampling strategy. \begin{figure}[ht!] \begin{minipage}[c]{0.51\linewidth} \centering \includegraphics[width=\linewidth]{valid_loss.png} \\ (a) \end{minipage} \hfill \begin{minipage}[c]{0.51\linewidth} \centering \includegraphics[width=\linewidth]{weight.png} \\ (b) \end{minipage} \caption{Training progress for various sampling strategies on \textsl{conv1} adaptation task; (a): validation loss versus training epoch; (b): sampling probability for \textsl{conv1} versus training epoch.} \label{fig:analysis} \end{figure} \iffalse \begin{figure} \centering \includegraphics[scale=0.30]{valid_loss.png} \caption{Validation loss v.s. training epoch of multiple sampling strategies on \textsl{conv1} adaptation task. } \label{fig:valid_loss} \end{figure} \begin{figure} \centering \includegraphics[scale=0.30]{weight.png} \caption{Sampling prob. v.s. training epoch of multiple sampling strategies on \textsl{conv1} adaptation task. } \label{fig:weight} \end{figure} \begin{figure} \centering \includegraphics[scale=0.4]{multi_corpora_analysis.png} \caption{Training process of multiple sampling strategies on \textsl{conv1} adaptation task. Left: validation loss v.s. training epoch. Right: sampling prob. of \textsl{conv1} corpus v.s. training epoch} \label{fig:analysis} \end{figure} \fi \section{Introduction} \label{sec:intro} Language models have been commonly used to promote automatic speech recognition (ASR) performance through first-pass fusion \cite{KanWuNgu2018,KimShaMah2021,ChaJaiNav2016} or second-pass rescoring \cite{LiuWanChe2014,XuCheGao2018,LiPovKhu,IriZeyAlb2019}. Recent studies show that neural network based language model (NNLM) obtains significantly stronger performance than traditional $n$-gram models given its better capability of modeling long-range dependency \cite{MikKarMar2010,CheLiuGal2015,XuLiWan2018,GraFerSan2006,Gra2012}. Typically, an external language model can be trained separately from ASR models, thus bringing further flexibility to its choice of training data and training strategy. Since only text data is required for training any language model, it provides enormous advantages on leveraging external language models on adaptation tasks for ASR applications where there is a mismatch between source and target domains. The proper performance of a language model highly depends on the quality and quantity of training data in any adaptation task. However, it is hard to guarantee this especially when the target domain is lack of adaptation data \cite{Bel2004, li2020empirical}, or such data is not fully accessible due to privacy pursuit \cite{GanRasHof2018,LiuLiBak2021,GulBeaMot2021,CuiLuKin2021}. Alternatively, we can resort to text data from multiple corpora. Some corpora may be in a similar domain with the target while some may not. 
Hence, how to improve the performance of a language model by effectively leveraging large-scale multiple corpora becomes a critical research topic. Training a single $n$-gram language model on multiple corpora is relatively straightforward. This is usually done by training an $n$-gram model for each corpus and then interpolating them to form a final model. The interpolation weights are usually optimized over a validation set from the target domain. The linearity of the parameters in $n$-gram language models grants direct interpolation, while it is hard to see an easy extension to non-linear NNLMs. Instead of model interpolation, mixing data from diverse corpora fits better for mini-batch fashion training. However, we found that the strategies for sampling data from multiple corpora lack study. A few works explore data mixing methods based on the relevance between each corpus and the target domain \cite{RajFilTiw2019,SchGau2005}, nonetheless, all these methods employ static sampling strategies, where each corpus's sampling probability is precomputed and fixed along the training process. Recent studies show that the order of feeding corpus into training is important for boosting the model performance \cite{AgrSinSch2021,ChaYehDem2021,LiuLaiWon2020}, indicating that the sampling probability of each corpus may need to be adjusted dynamically. Motivated by the challenges and insights above, we propose a novel adaptive multi-corpora NNLM training algorithm. We adjust the sampling probabilities based on the estimated contribution that each corpus can devote in the following training stage. Specifically, at the beginning of each epoch, we fine-tune the current model on each corpus separately, then interpolate the fine-tuned models and optimize the interpolation weights over the validation dataset of the target domain. The optimized weights will then be leveraged as the sampling probabilities over multiple corpora in the current training epoch. Our method can work with any arbitrary sized corpora and adaptively take into account the importance of each corpus along the entire training process. We conduct experiments on in-domain and out-of-domain adaptation scenarios, with results indicating that the proposed adaptive sampling probability adjustment is effective in improving the performance of NNLM for ASR applications. \iffalse The paper is organized as follow: in section 2, we briefly review relative works. We then present the proposed algorithm in section 3 then show all experiment results and analysis in section 4. Section 5 follows the above to conclude the paper. \fi \section{Methods} Consider an NNLM adaptation task where we are given $K$ different training corpora $D_1,\ldots,D_K$ with each $D_k = \{x_i^{(k)}\}_{i=1}^{N_k}$ consisting of $N_k$ records, as well as a validation corpus $D_{\tau} = \{y_i\}_{i=1}^{N_{\tau}}$ from the target domain. The goal is to train a model with records sampled from the $K$ corpora that can achieve optimized performance on the target domain. Note that $N_{\tau}$ is usually a much smaller number compared with these ${N_k}$'s, and the target domain could be close-in-domain with a few ones of these $K$ corpora or a completely different domain. Towards this goal, it is crucial to develop a sampling strategy for mixing multiple corpora during model training. We propose to adjust the sampling probability per corpus to fit best for each training stage and optimize it over the validation set dynamically. 
Specifically, the NNLM training is divided into an upper-level task and a lower-level task. Optimizing the data sampling probability across multiple corpora is regarded as the upper-level task, while the lower-level task is in charge of learning model parameters of NNLM given current sampling probabilities. The upper-level task and lower-level task are conducted in an alternating manner until model convergence. We first introduce a generic training framework using mixed data from multiple corpora in the following subsection. \subsection{Mixed Data Training} Define $\mathcal{D} = \mathcal{M}(D_1,\ldots, D_K| W)$ as a mixed data distribution over corpora $D_1,\ldots, D_K$ such that \begin{align} \label{formula:mix} \forall x \sim \mathcal{D}:\;&P(x \in D_k) = w_k, \\ &P(x = x_i^{(k)}| x \in D_k)=\frac{1}{N_k},\;\forall k\in [1..K] \end{align} where $W = (w_1, \ldots, w_K)$ is the sampling probability over corpora and $\Sigma_{k=1}^Kw_k=1$. This mixed data distribution will sample data from corpus $D_k$ with probability $w_k$. That is, at each training iteration (i.e.~model update), we sample a minibatch of training records from $\mathcal{D}$ with roughly $w_k$ portion of data coming from corpus $D_k$. Given the mixed data distribution $\mathcal{D}$, an NNLM $\theta$ can be trained by minimizing the negative log-likelihood of samples: \begin{align} \label{formula:loss} \mathcal{L}_{train}(\theta | \mathcal{D}) = -\Sigma_{x \sim \mathcal{D}} \log P_{\theta}(x) \end{align} where $P_{\theta}(x)$ is the model estimated probability of observing $x$ as a product of each token's probability given its preceding tokens. As each corpus may show varying impacts on the target domain at different training stages, we will adjust the sampling probability $W$ over multiple corpora before each training epoch of NNLM, with details described in the next subsection. \subsection{Sampling Probability Optimization} Adjusting the sampling probability $W$ aims to optimize the contribution of each corpus in the following training. Thus, we first need to measure the effects of continuing training with each corpus, and then adapt the sampling probability accordingly. To this end, we propose to fine-tune the current NNLM using each corpus solely, after which interpolate $K$ fine-tuned models and optimize the interpolation weights over the validation set of the target domain. The learned weights will serve as the sampling probability in the next. We break down this process into the two steps below. \subsubsection{The fine-tuning step} Define a fine-tuning operator as \begin{align} FT(\theta, D, S) \mapsto \theta_{FT} \end{align} which fine-tunes input model $\theta$ on corpus $D$ for $S$ iterations. To fairly measure the contribution each corpus could devote in the following training process, we fine-tune the current model $\theta$ on each corpus for the same number of iterations $S$. Then, we will obtain $K$ different models $\theta_1,\ldots, \theta_K$: \begin{align} \theta_k = FT(\theta, D_k, S),\,\forall k \in [1..K] \end{align} The fine-tuning step is a continued training from the current model $\theta$ per corpus and conducted at the beginning of each epoch. \subsubsection{The interpolation weight optimization step} Consider an interpolation weight optimization operator: \begin{align} WO(\Theta, D_{\tau}) \mapsto W \end{align} where $\Theta = (\theta_1,\ldots, \theta_K)$ is a collection of $K$ fine-tuned models. 
The $WO$ operator will optimize the model interpolation weights $W = (w_1,\ldots, w_K)$ with respect to the performance on $D_{\tau}$ and then output the optimized weights. Specifically, the $WO$ operator can be defined as follows:
\begin{align} W= &\mathop{\mathrm{argmin}}_{w_1,\ldots, w_K} -\Sigma_{y_i \in D_{\tau}}\log(\Sigma_{k=1}^K w_k\cdot P_{\theta_k}(y_i)) \\ &\,s.t.\quad w_k \in [0,1],\,\forall k \in [1..K] \\ &\qquad\;\;\Sigma_{k=1}^K w_k = 1 \end{align}
By substituting $w_K$ with $1-w_1-\ldots -w_{K-1}$, the optimization problem above can be solved through either gradient descent (adopted in this work) or an expectation-maximization (EM) algorithm. Since the interpolation weights reflect the importance of each corpus in the upcoming training, we use them as the sampling probabilities over corpora for the current NNLM training epoch (\ref{formula:mix})-(\ref{formula:loss}).
\subsection{The Adaptive Multi-Corpora Training Framework}
To chain all the steps we have introduced, in Algorithm \ref{alg:main}, we present the adaptive sampling framework for multi-corpora NNLM training. The proposed method adaptively determines the sampling probability per corpus during the training process. Higher probabilities are assigned to the corpora that are likely to contribute more to improving performance on the target domain in a given training epoch, and lower probabilities to the others. In scenarios where the order in which corpora are fed into training is important, the proposed algorithm can automatically determine their relative order by assigning a higher sampling probability to a corpus when its turn comes.
\begin{algorithm} \caption{Adaptive multi-corpora training} \label{alg:main} \begin{algorithmic} \State Input: $K$ training corpora $D_1,\ldots, D_K$, validation dataset of target domain $D_{\tau}$, number of fine-tuning iterations $S$ \State Initialize NNLM $\theta^0$ \For{epoch $t = 1,2,\ldots, T$} \For{each corpus $k=1,2,\ldots, K$} \State $\theta_k^t = FT(\theta^{t-1}, D_k, S)$ \EndFor \State $W^t = WO((\theta_1^t,\ldots, \theta_K^t), D_{\tau})$ \State Construct $\mathcal{D}^t = \mathcal{M}(D_1, \ldots, D_K|W^t)$ \State Learn $\theta^t$ through training on $\mathcal{D}^t$ for one epoch: \State $\mathop{\mathrm{argmin}}_{\theta^t}\left(-\Sigma_{x \sim \mathcal{D}^t} \log P_{\theta^t}(x)\right)$ \EndFor \State Output: $\theta^T$ \end{algorithmic} \end{algorithm}
Oftentimes, the corpora available for training are of very different sizes due to data scarcity. The proposed algorithm is insensitive to these size differences. It performs the same number of training iterations when fine-tuning on each corpus, and if some corpora are small but turn out to be frequently sampled, this simply reflects the importance of those corpora in certain training stages. In Algorithm \ref{alg:main}, the sampling probabilities are optimized once per training epoch. This frequency can be adjusted flexibly in practice; for example, we can instead adapt the sampling probabilities every $Q$ training iterations, and the remaining components of the algorithm follow naturally. It is noteworthy that the proposed method works differently from the bi-level optimization framework \cite{JenFav2018,LiuGaoZha2021}, in the sense that the sampling probabilities learned at the end of the training process are only optimal for the final training stage and cannot be regarded as optimal over the entire training process.
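As a concrete illustration of the $WO$ step and of how the optimized weights are used for sampling, the sketch below optimizes the interpolation weights by gradient descent on a softmax parameterization, which automatically enforces the simplex constraints, and then draws a corpus index for each minibatch of the coming epoch. It assumes that the per-record log-likelihoods $\log P_{\theta_k}(y_i)$ of the $K$ fine-tuned models on $D_{\tau}$ have already been collected into a matrix; all names are illustrative, and this is a sketch rather than the exact implementation used in our experiments.
\begin{verbatim}
import torch

def optimize_weights(val_logprobs, steps=200, lr=0.1):
    # val_logprobs: tensor of shape (K, N_val); entry [k, i] = log P_{theta_k}(y_i)
    logits = torch.zeros(val_logprobs.shape[0], requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        w = torch.softmax(logits, dim=0)  # enforces w_k >= 0 and sum_k w_k = 1
        # log of the mixture likelihood, log sum_k w_k P_{theta_k}(y_i), per record
        mix = torch.logsumexp(val_logprobs + torch.log(w).unsqueeze(1), dim=0)
        loss = -mix.sum()                 # negative log-likelihood of the mixture
        loss.backward()
        opt.step()
    return torch.softmax(logits, dim=0).detach()

def sample_corpus_indices(weights, num_batches):
    # draw, for each minibatch of the coming epoch, the corpus to sample from
    return torch.multinomial(weights, num_batches, replacement=True)
\end{verbatim}
The fine-tuning of the $K$ model copies and the subsequent one-epoch training on the mixed distribution proceed as standard NNLM training loops.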
The proposed algorithm dynamically picks the optimal sampling probabilities at each training stage. Since the proposed method requires fine-tuning on each corpus before each training epoch to determine the sampling probabilities over corpora, it requires an additional $K \cdot S \cdot T$ training iterations compared with conventional NNLM training with static sampling probabilities. Notice that the fine-tuning processes on the individual corpora are independent of each other and thus can be conducted in parallel for better efficiency.
\section{Related Works}
Although various works have studied data sampling and weighting strategies across language model and deep learning training tasks \cite{FerDow2018, RenZenYan2018, ShuXieYi2019}, most of them operate at the sample level rather than the corpus or dataset level. Learning to reweight samples usually requires the computation of second-order derivatives in every backward pass, which leads to high complexity and can become a heavy burden for multi-corpora training. As another line of research, recent works have shown that the order of corpora plays a critical role in training language models \cite{AgrSinSch2021,ChaYehDem2021,LiuLaiWon2020}. Hence, a variety of metrics for measuring the relevance between corpora have been designed, and curriculum learning approaches in lieu of data mixing strategies have also been proposed. However, a well-designed adaptive data mixing strategy naturally incorporates the importance of corpora order during the training process. Towards joint training with multiple corpora, multi-task based training strategies have been introduced \cite{TusIriSch2016,dSaIllFoh2022,LiuHeChe2019}. These works train a multi-head language model with several shared hidden layers and corpora-specific output layers. In adaptation tasks, they learn the interpolation weights for combining corpora-specific layers and result in a single NNLM. However, the parameter size and inference latency of such a model grow with the number of corpora, making this type of approach less attractive in practical applications. Inspired by the interpolation of $n$-gram models, the authors of \cite{RajFilTiw2019} propose to train an $n$-gram model on each corpus and optimize their interpolation weights over a validation set. Then, during the NNLM training, minibatches are stochastically constructed by drawing samples from each corpus with probabilities given by the interpolation weights. These weights are only learned once from $n$-gram models and fixed over the entire NNLM training process. Unlike this approach, our proposed method adaptively optimizes the sampling probability per corpus during the training process.
\vfill\pagebreak \bibliographystyle{IEEEbib} \footnotesize
1,108,101,563,101
arxiv
\section{Introduction} \label{sect:intro} The entanglement entropy of the vacuum is an example of a universal observable in quantum field theory, independent of the existence of a particular set of fields, which has many interesting and useful properties. Most prominent among these are its monotonicity properties as a function of the size of the entangling region \cite{Casini:2012ei}, and the existence of a simple geometric interpretation in the context of holography \cite{Ryu:2006bv}. We refer the reader to the review \cite{Nishioka:2018khk} for more information. The R{\'e}nyi entropy is a one parameter refinement of the entanglement entropy. Besides containing additional information, the R{\'e}nyi entropy is notable for having a straightforward Euclidean path integral interpretation known as the replica trick \cite{Calabrese:2004eu}. Supersymmetric R{\'e}nyi entropy (SRE) is a twisted version, in the sense of $(-1)^F$, of R{\'e}nyi entropy which can be defined for supersymmetric theories in a variety of spacetime dimensions and with varying amounts of supersymmetry \cite{Nishioka:2013haa,Hama:2014iea,Huang:2014pda,Crossley:2014oea}. Unlike R{\'e}nyi entropy, SRE can be calculated exactly at arbitrary coupling using the method of supersymmetric localization. It nevertheless shares many of the interesting properties of the untwisted version, including the ability to recover the entanglement entropy as a limit. In a $d$-dimensional superconformal field theory (SCFT), the SRE for a $d-2$-dimensional spherical entangling surface can be computed using the partition function on a $d$-sphere, branched $n$ times over a maximal $d-2$-sphere, where the metric has a conical singularity. In holographically dual solutions, gravity becomes dynamical and the issue arises of how to treat such a singularity. By conformally mapping the branched sphere to $\mathbb{H}^{d-1} \times \mathbb{S}^1$, where $\mathbb{H}$ denotes hyperbolic space, the singularity is pushed to infinity. The R{\'e}nyi entropy is mapped to the thermal entropy in this space, with the new Euclidean time having periodicity $\beta = 2\pi n$. The SRE is likewise mapped to a twisted thermal partition function. The details of the singularity are encoded in the boundary conditions on this space. The gravity duals are hyperbolically sliced solutions, so-called ``topological" black holes, whose boundary is indeed of the form $\mathbb{H}^{d-1} \times \mathbb{S}^1$. The computation of the SRE in $d$-dimensional models ($d=2,3,4,5,6$) with a holographic dual was performed, respectively, in \cite{Giveon:2015cgs,Mori:2015bro,Nishioka:2014mwa,Huang:2014gca,Crossley:2014oea,Hama:2014iea,Alday:2014fsa}. The matching with the gravity computation of the SRE was achieved with supergravity hyperbolic black holes supported by a single gauge field, which corresponds to the graviphoton. Here we take this one step further, by considering supergravity backgrounds with more general couplings, in particular vector multiplets. The corresponding dual field theory computation therefore includes fugacities for the global symmetries of the theory, equivalently co-dimension two flavor vortex defects in the $d$ sphere picture. In gravity, we work with four and six-dimensional supergravity solutions, achieving a match with the field theory SRE in $d=3,5$ by evaluating the supergravity renormalized on-shell action. We choose to work with $d$ odd because the finite part of the free energy in the field theory is believed to be universal. 
For comparison, in the even $d$ case the coefficient of the Weyl anomaly is always universal, while the subleading piece may only be universal in the presence of a sufficient amount of supersymmetry \cite{Gerchkovitz:2014gta}. By working in even-dimensional supergravity, we also avoid subtleties in holographic renormalization schemes related to the Casimir energy, see \textit{e.g.}\;\cite{Genolini:2016sxe,Papadimitriou:2017kzw,An:2017ihs}. Let us however mention that the SRE of supergravity solutions in $d + 1 = 5, 7$, coupled to matter were compared to the field theory result, respectively, in \cite{Huang:2014pda,Yankielowicz:2017xkf}. The aim of this paper is twofold. On one hand, we wish to investigate how the SRE is computed holographically in the case where matter couplings are incorporated -- in the present case, this consists in considering hyperbolic black hole solutions supported by vector multiplets. On the other hand, our setup allows to directly map the fugacities appearing in the field theory computation to the black hole chemical potentials. The mapping that we obtain is then rather manifest.% \footnote{For instance, in the case of rotating electric black holes, an elegant prescription to map the black hole chemical potentials to the field theory ones was recently put forward in \cite{Cabo-Bizet:2018ehj}. This procedure requires taking an extremal limit of a family of supersymmetric, complexified solutions, and the definition of the black hole chemical potentials via appropriate subtraction of the extremal BPS values. In our framework, upon Wick-rotating the BPS black hole solution we are left with a regular geometry with topology $\mathbb{R}^2 \times \mathbb{H}^{d-1}$, where a formal finite temperature can be defined. This allows us to directly map the chemical potentials in gravity into those on the field theory side, with no need for such a subtraction.} The paper is organized as follows. We will first provide results for the supersymmetric R{\'e}nyi entropy with flavor fugacities for specific models: the ABJM model in $d=3$, and a $\mathcal{N}=1$, $\mathrm{USp}(2N)$ gauge theory with $N_f$ fundamental and one anti-symmetric hypermultiplets in $d=5$. These models have well known gravity dual descriptions. We then focus on the gravity duals to SRE in four and six dimensions, which are hyperbolic black holes. We spell out the solutions, which are new in the $d = 6$ case, and compute their renormalized on-shell action. We show that this matches with the SRE computation. In appendix \ref{AppA}, we explicitly construct the Killing spinors for the hyperbolic black holes. Appendix \ref{AppB} shows the computation of the renormalized on-shell action via holographic renormalization techniques and appendix \ref{AppC} shows that the black hole charges computed from supergravity match those computed in the SCFT. In appendix \ref{AppD}, we present a simple example of a rotating hyperbolic black hole which generalizes the static case in section \ref{warmup}, and provide the value of its renormalized on-shell action. \section{Field theory} \label{sect:field_theory} In this section we calculate the free energy of SCFTs on $\mathbb{H}^{d-1} \times \mathbb{S}^{1}$ that are holographically dual to our hyperbolic BPS black holes. We first introduce the supersymmetric R{\'e}nyi entropy (SRE) and its deformation by BPS vortex defects. We then describe the relationship of these defects to black hole chemical potentials. 
Using supersymmetric localization, we construct an appropriate matrix model which captures the exact answer for the free energy. Finally, we use large $N$ techniques to explicitly evaluate the matrix model for field theories dual to the black hole solutions.
\subsection{Supersymmetric R{\'e}nyi entropy}
\label{subsect:SRE}
We briefly review the definition of R{\'e}nyi entropy and its supersymmetric counterpart (SRE). We then show how co-dimension two defect operators alter the localization result for SRE. Finally, we relate such defects to chemical potentials in the partition function on hyperbolic space.
\subsubsection{Definition of R{\'e}nyi entropy}
Following the notation in \cite{Nishioka:2018khk}, we define entanglement entropy for a vacuum state $\Psi$ by first making a choice of a subregion $A$ of a spatial slice. The complement will be denoted by $B=\bar{A}$. We make the assumption that the Hilbert space of the theory can be likewise locally split as%
\footnote{For a critical discussion of the validity of this assumption, see references in footnote $3$ of \cite{Nishioka:2018khk}. The subtleties associated with this splitting will not affect our results.}
\begin{equation} \mathcal{H}=\mathcal{H}_{A}\otimes\mathcal{H}_{B} \, . \end{equation}
We then form the reduced density matrix corresponding to $A$
\begin{equation} \rho_{A}\equiv\text{tr}_{B}\left|\Psi\right\rangle \left\langle \Psi\right| \, . \end{equation}
The entanglement entropy associated to $A$ can be defined as the von Neumann entropy of $\rho_{A}$,
\begin{equation} S\left(A\right)\equiv- \Tr \rho_{A}\log\rho_{A} \, . \end{equation}
The R{\'e}nyi entropy is a one parameter refinement of the entanglement entropy defined by
\begin{equation} S_{n}\left(A\right)\equiv\frac{1}{1-n}\log\text{tr}\rho_{A}^{n}\, ,\quad n\in\mathbb{N}\, . \end{equation}
It satisfies the relation%
\begin{equation} \lim_{n\rightarrow 1}S_{n}\left(A\right)=S\left(A\right), \end{equation}
where the limit is understood to be taken using an appropriate continuation to non-integer $n$. We will restrict our attention to the case where $A$ is the $d-1$ ball and the entangling surface is $\partial A=\mathbb{S}^{d-2}$. The R{\'e}nyi entropy of a quantum field theory is, in general, divergent. However, for $d$ odd the finite part of the R{\'e}nyi entropy of a CFT is believed to be a universal observable (see \cite{Nishioka:2018khk} and references within). The R{\'e}nyi entropy can alternatively be computed using the replica trick \cite{Calabrese:2004eu}. One considers the path integral on an $n$-fold cover of the original spacetime branched around the entangling surface $\partial A$. Denoting the partition function on this space by $Z_n$, we will \emph{define} the $n$-th R{\'e}nyi entropy for a positive integer $n$ by%
\footnote{The absolute value, which is absent from the usual definition, is used here to avoid some subtleties associated with possible non-universal terms in the SRE defined later on. See \cite{Nishioka:2013haa} for a discussion of the $d=3$ case.}
\begin{align} S_{n}\equiv\frac{1}{1-n}\log\left|\frac{Z_{n}}{\left(Z_{1}\right){}^{n}}\right|\, .\label{Renyi_Definition} \end{align}
This definition is incomplete because the branching means that the spacetime corresponding to $Z_{n}$ is not smooth but has conical singularities. One could complete the definition by specifying appropriate boundary conditions for all fields at $\partial A$.
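As an elementary illustration of these definitions, consider a reduced density matrix with only two non-vanishing eigenvalues, $p$ and $1-p$. Then
\begin{equation}
S_{n} = \frac{1}{1-n}\log\left[p^{n}+\left(1-p\right)^{n}\right] , \qquad \lim_{n\rightarrow 1}S_{n} = -p\log p-\left(1-p\right)\log\left(1-p\right) = S \, ,
\end{equation}
so the R{\'e}nyi entropies indeed reduce to the von Neumann entropy as $n\rightarrow 1$. In a quantum field theory the spectrum of $\rho_{A}$ is not directly accessible, and one instead works with the replica partition functions $Z_{n}$ introduced above.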
We will instead concentrate on the definition of SRE, reviewed in section \ref{subsect:SRE}, which uses a particular prescription for smoothing out the singularities \cite{Nishioka:2013haa}. The line element on a branched $d$-sphere is defined as the round sphere metric with a different coordinate range
\begin{equation} \begin{aligned}
\mathrm{d} s^{2} & =\ell^{2}\left(\mathrm{d} \theta^{2}+\sin^{2}(\theta) \mathrm{d} \tau^{2}+\cos^{2}(\theta) \mathrm{d} s_{\mathbb{S}^{d-2}}^{2}\right) \, , \\ &\theta\in\left[0,\pi/2\right],\quad\tau\in\left[0,2\pi n\right) . \end{aligned} \end{equation}
This metric has a conical singularity along the co-dimension two maximal $d-2$ sphere at $\theta=0$. For $n$ a positive integer, the branched sphere is related by a Weyl transformation to the branched version of $\mathbb{R}^{d}$ used to define the $n$-th R{\'e}nyi entropy \cite{Casini:2011kv}. In order to avoid working with a singular space, we can conformally map this space by $\cot (\theta) = \sinh (\chi)$ to $\mathbb{H}^{d-1}\times\mathbb{S}^{1}$ with line element
\begin{equation} \begin{aligned} \label{hypn}
\mathrm{d} s^2_{\mathbb{H}^{d-1} \times \mathbb{S}^{1}} & = \mathrm{d} \tau^2 + \mathrm{d} \chi^2 + \sinh(\chi)^2 \mathrm{d} s^2_{\mathbb{S}^{d-2}} \, , \\ &\chi\in\left[0, \infty \right],\quad\tau\in\left[0,2\pi n\right) . \end{aligned} \end{equation}
The R{\'e}nyi entropy maps to the thermal entropy in this space with inverse temperature $\beta=2\pi n$. The singularity at $\theta=0$ is mapped to $\chi \rightarrow \infty$ \cite{Casini:2011kv}.
\subsubsection{Definition of supersymmetric R{\'e}nyi entropy}
The supersymmetric R{\'e}nyi entropy (SRE) is a twisted version, in the sense of $(-1)^F$, of R{\'e}nyi entropy \cite{Nishioka:2013haa,Hama:2014iea,Huang:2014pda,Crossley:2014oea}. In order to preserve supersymmetry in SRE, one must give nonzero values to additional fields, aside from the metric, in the background supergravity multiplet to which the SCFT is coupled \cite{Festuccia:2011ws,Closset2012,Dumitrescu:2012at,Dumitrescu:2012ha}. Specifically, one needs to turn on a background R-symmetry gauge field, $A^{(R)}$, which is flat in the bulk of the space and has a delta function like field strength supported on the singularity \cite{Nishioka:2013haa}. For example, in a three-dimensional $\mathcal{N}=2$ field theory we have \cite{Nishioka:2013haa}%
\footnote{The sign of $A^{(R)}$ chosen here, which is correlated with the choice of Killing spinor preserved by SRE, corresponds to our gravity conventions and is opposite to the one chosen in \cite{Nishioka:2013haa}.}
\begin{equation} \label{Rsymm} A^{(R)}=-\frac{n-1}{2n}\mathrm{d}\tau \, . \end{equation}
After the additional Weyl transformation to $\mathbb{H}^{d-1}\times\mathbb{S}^{1}$, the SRE is related to a twisted, in the sense of $(-1)^F$, version of the thermal partition function which we can call the \emph{hyperbolic index}, in analogy with the superconformal index \cite{Kinney:2005ej,Romelsberger:2005eg}. A representation of this quantity as a trace over the Hilbert space $\mathcal{H}_{\mathbb{H}^{d-1}}$ of states on $\mathbb{H}^{d-1}$ was given in \cite{Zhou:2016kcz}. Including flavor charges, we can write\footnote{As an index, $Z^{\text{susy}}_{n}$ does not change under renormalization group flow, and thus can be computed either in the UV or the IR SCFT.
The parameter $n$ is a chemical potential for a combination of charges commuting with the supercharge, similar to those found in \cite{Kinney:2005ej,Romelsberger:2005eg}.}
\begin{equation} \label{trace_representation} Z^{\text{susy}}_{n} = \Tr_{\mathcal{H}_{\mathbb{H}^{d-1}}} e^{-2\pi n\left(H-\mathrm{i} \sum_I \alpha^I Q_{I}^{\text{flavor}}+\mathrm{i}\frac{n-1}{n}Q_R\right)} \, , \end{equation}
where $H$ is the Hamiltonian, $Q_R$ is the R-symmetry charge, $Q_{I}^{\text{flavor}}$ are flavor charges, and the $\alpha^I$ are flavor chemical potentials. The SRE is then defined as
\begin{equation} \label{def:SRE} S^\text{SRE}_n \equiv \frac{1}{1-n}\log \frac{Z^{\text{susy}}_{n}}{\left(Z^{\text{susy}}_{1}\right)^n}\,. \end{equation}
\subsubsection{Localization and deformation of SRE}
The partition function defining the SRE can be computed exactly using the method of supersymmetric localization \cite{Witten:1988ze,Pestun:2007rz}. In the case of SRE in three dimensions, the matrix model one gets from localization coincides with the one used to compute the partition function on the squashed sphere with the squashing parameter related to $n$ in a simple way \cite{Hama:2011ea,Nishioka:2013haa}.%
\footnote{This is true at the level of the matrix model, not just the final result.}
This relationship continues to hold for higher dimensions and we consequently make no distinction between the free energy in the two matrix models. The partition function defining the SRE can be refined by supersymmetric deformations while remaining amenable to localization \cite{Kapustin:2009kz,Nishioka:2013haa}.%
\footnote{We describe the situation in three dimensions. The situation in five dimensions is analogous.}
Deformations include masses for matter multiplets and Fayet-Iliopoulos (FI) terms for abelian vector multiplets. These deformations break conformal invariance. Additionally, the form of the coupling of the theory to the background supergravity fields, including $A^{(R)}$, depends on a choice of R-symmetry current. If the R-symmetry is abelian, one may choose an arbitrary linear combination of R-symmetry and abelian flavor symmetry currents. In an SCFT, a particular combination, the result of dynamical mixing, is dictated by the superconformal algebra where the R-symmetry transformations appear \cite{Jafferis:2010un,Closset:2012vg}. Supersymmetric operators can also be added to the SRE. These include Wilson loops and co-dimension two vortex defects \cite{Kapustin:2009kz,Kapustin:2012iw,Drukker:2012sr}. The latter are inserted by demanding that the fields in the path integral have prescribed singularities on the defect worldvolume \cite{Gukov:2014gja}. If the defect is in a flavor symmetry this is equivalent to introducing background flavor symmetry gauge fields which are flat outside the defect. In fact, the deformation leading from the usual sphere partition function to the SRE is itself such a defect, embedded in the background supergravity multiplet. Due to this, addition of flavor defects to the SRE, oriented along the same sub-manifold, is essentially the same as the R-symmetry mixing effect described above. However, the strength of the defect is now unrelated to the superconformal algebra and represents a deformation of the SRE. In the hyperbolic picture, such a defect is mapped to the holonomy of a flavor symmetry connection along the time direction, \textit{i.e.}\;a flavor fugacity.
The chemical potentials $\alpha$ for such a fugacity are linearly related to the $A^\text{flavor}_{\tau}$ flavor gauge fields introduced below, with a proportionality constant which depends on the normalization of the charges. \subsubsection{The SRE matrix model deformed by defects \label{def}} The matrix model for the round sphere deformed by co-dimension two defects, in dimensions $d = 3, 4, 5$ was derived in \cite{Nishioka2016}. It was shown that a background $\mathrm{U}(1)$ flavor symmetry connection $A^\text{flavor}$ with holonomy $\exp\left(2\pi \mathrm{i} A^\text{flavor}_{\tau}\right)$ induces, after localizing to a matrix model, a mass deformation term% \begin{equation}} \newcommand{\ee}{\end{equation} m_{\text{defect}}=- \mathrm{i} A^\text{flavor}_{\tau} \, . \ee The fact that the mass is imaginary is part of the relationship to R-symmetry mixing. The large $N$ limit of the same matrix models in the presence of R-symmetry mixing or of mass terms has previously been derived in \cite{Martelli:2011fu,Imamura:2011wg,Chang:2017mxc}. The mixing parameters are usually called $\Delta$,% while masses are denoted by $m$. Besides being purely imaginary, the mass term induced by the defect also has an origin which is naturally $A^\text{flavor}_{\tau}=0$. This is true also for the real ``physical masses'' $m$. On the other hand, in a theory which has a non-abelian R-symmetry group, the $\Delta$'s have an origin which is determined by the canonical R-charge, or the canonical dimensions, of matter multiplets. In three dimensions this is $\Delta=1/2$, while in five dimensions it is $\Delta=3/2$. Taking all this into account, and using the relationship between $n$ and the squashing parameter $b$ derived in \cite{Nishioka2016}, the defect deformed three-dimensional matrix models are given by those of \cite{Martelli:2011fu} with the substitution% \footnote{The setup is symmetric with respect to inversion of $b$. In order to conform to the notation in \cite{Martelli:2011fu}, we set $b=1/\sqrt{n}$ instead of $b=\sqrt{n}$ as in \cite{Nishioka2016}.} \begin{equation}} \newcommand{\ee}{\end{equation} \label{map3d} \Delta_{\text{there}} = \frac{1}{2}+\frac{2n A^\text{flavor}_{\tau}}{n+1} \, , \qquad b_{\text{there}} = \frac{1}{\sqrt{n}} \, . \ee For five-dimensional $\mathcal{N}=1$ theories appearing in \cite{Chang:2017mxc}, we can simply take \begin{equation}} \newcommand{\ee}{\end{equation} \label{6d_mapping} m_{\text{there}} = - \mathrm{i} A^\text{flavor}_{\tau} \, , \qquad \vec{\omega}_{\text{there}} = \left(1,1, 1 / n \right) \, . \ee We will adopt a democratic convention for the deformation parameters $\Delta$, whereby the physical parameters are augmented by one additional parameter and a constraint is imposed. Interpreting $\Delta$ as the result of a flavor defect, we will add a corresponding $A^\text{flavor}$. The constraint in terms of $A^\text{flavor}$ is simply \begin{equation}} \newcommand{\ee}{\end{equation} \label{chemical_potential_constraint} \sum_I A^{\text{flavor},I} = 0 \, . \ee \subsection[Squashed \texorpdfstring{$\mathbb{S}^3$}{S**3} free energy]{Squashed $\mathbb{S}^3$ free energy} \label{sec:logZ:3D} In this section, we review the squashed $\mathbb{S}^3$ partition function and its large $N$ limit, as analyzed in \cite{Martelli:2011fu,Imamura:2011wg}. For the purpose of this paper, we consider the ABJM model \cite{Aharony:2008ug}, which is holographically dual to an AdS$_4 \times \mathbb{S}^7/\mathbb{Z}_k$ background of M-theory. 
ABJM is a three-dimensional $\mathcal{N}=6$ supersymmetric Chern-Simons-matter theory with gauge group $\mathrm{U}(N)_k \times \mathrm{U}(N)_{-k}$ (the subscripts represent the CS levels) with two pairs of bi-fundamental chiral fields $A_i$ and $B_i$, $i=1,2$, in the representation $({\bf N},\overline{{\bf N}})$ and $(\overline{{\bf N}},{\bf N})$ of the gauge group, respectively. The chiral fields interact through the quartic superpotential \begin{equation}} \newcommand{\ee}{\end{equation} \label{superpotential:ABJM} W = \Tr \big( A_1B_1A_2B_2 - A_1B_2A_2B_1 \big) \, . \ee In the $\mathcal{N}=2$ formulation, the ABJM model has a $\mathrm{U}(2)\times \mathrm{U}(2)$ action which acts separately on the chiral fields $A_{1,2}$ and $B_{1,2}$. There is a $\mathrm{U}(1)^3$ subgroup of the Cartan of this group which preserves the superpotential, a particular linear combination of which is gauged. In addition, there are two topological $\mathrm{U}(1)_J$ symmetries. The current for one of these topological symmetries is set to zero by the equations of motion. Due to the appearance of Chern-Simons terms, the action of the other $\mathrm{U}(1)_J$ is mixed with the gauge group action. We will work in a gauge in which the fugacity conjugate to the remaining topological symmetry, which could be explicitly added using an FI parameter, is fixed to $1$. The remaining global symmetry group, which we will call the flavor group, is given by the $\mathrm{U}(1)^3$ compatible with the superpotential acting on the chiral fields. The model admits therefore a three-parameter space of flavor symmetry, or $\Delta$ type, deformations.% \footnote{We would like to thank Alberto Zaffaroni for explaining this point.} We introduce the R-charges $\Delta_I$, $I=1,\ldots,4$, one for each of the four fields $\{A_i,B_i\}$, satisfying \begin{equation}} \newcommand{\ee}{\end{equation} \label{constraint:ABJM} \sum_{I = 1}^{4} \Delta_I = 2 \, . \ee The partition function can be written as \begin{equation}} \newcommand{\ee}{\end{equation} Z_{\mathbb{S}^3_b} = \int_{- \infty}^{\infty} \left[ \prod_{i = 1}^{N} \frac{\mathrm{d} \lambda_i}{2 \pi} \frac{\mathrm{d} \tilde \lambda_i}{2 \pi} \right] e^{- F_{\mathbb{S}^3_b} (\lambda_i , \tilde \lambda_i)} \, , \ee where \begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation} \label{free_energy:S^3} F_{\mathbb{S}^3_b} & = 2 \log N! - \frac{i k}{4 \pi b^2} \sum_{i = 1}^{N} \Big( \lambda_i^2 - \tilde \lambda_i^2 \Big) \\ & - \sum_{i < j}^{N} \left\{ \log \left[ 2 \sinh \left( \frac{\lambda_i - \lambda_j}{2} \right) \right] + \log \left[ 2 \sinh \left( \frac{\lambda_i - \lambda_j}{2 b^2} \right) \right] \right\} \\ & - \sum_{i < j}^{N} \left\{ \log \left[ 2 \sinh \bigg( \frac{\tilde \lambda_i - \tilde \lambda_j}{2} \bigg) \right] + \log \left[ 2 \sinh \bigg( \frac{\tilde \lambda_i - \tilde \lambda_j}{2 b^2} \bigg) \right] \right\} \\ & - \sum_{i,j=1}^{N} \sum_{a=1}^{2} S_2 \left( \frac{i \mathfrak{Q}}{2} (1 - \Delta_a ) - \frac{1}{2 \pi b} ( \lambda_i - \tilde \lambda_j ) \bigg| b \right) \\ & - \sum_{i,j=1}^{N} \sum_{b=3}^{4} S_2 \left( \frac{i \mathfrak{Q}}{2} (1 - \Delta_b ) + \frac{1}{2 \pi b} ( \lambda_i - \tilde \lambda_j ) \bigg| b \right) \, . \eea Here, $\mathfrak{Q} =b + 1/b$ and $S_2 (\lambda | b)$ is the double sine function. 
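As a consistency check of these conventions, note that the map \eqref{map3d}, applied to each of the four fields, makes the constraint \eqref{constraint:ABJM} equivalent to the democratic constraint \eqref{chemical_potential_constraint}:
\begin{equation}
\sum_{I=1}^{4}\Delta_I = 4\cdot\frac{1}{2}+\frac{2n}{n+1}\sum_{I=1}^{4} A^{\text{flavor},I}_{\tau} = 2
\qquad\Longleftrightarrow\qquad
\sum_{I=1}^{4} A^{\text{flavor},I}_{\tau} = 0 \, .
\end{equation}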
\paragraph*{Large $N$ free energy.} Consider the following ansatz for the large $N$ saddle point eigenvalue distribution, \begin{equation}} \newcommand{\ee}{\end{equation} \label{Ansatz:largeN:3D} \lambda_j = N^{1/2} t_j + \mathrm{i} v_j \, , \qquad \tilde \lambda_j = N^{1/2} t_j + \mathrm{i} \tilde v_j \, . \ee In the large $N$ limit, we define the continuous functions $t_j = t ( j / N )$ and $v_j = v (j / N)$, $\tilde v_j = \tilde v (j / N)$; and we introduce the density of eigenvalues \begin{equation}} \newcommand{\ee}{\end{equation} \label{rho(t)} \rho (t) = \frac{1}{N} \frac{\mathrm{d} j}{\mathrm{d} t} \, , \quad \textit{s.t.} \int \mathrm{d} t \rho (t) = 1 \, . \ee At large $N$ the sums over $N$ become Riemann integrals, for example, \begin{equation}} \newcommand{\ee}{\end{equation} \sum_{j = 1}^{N} \to N \int \mathrm{d} t \rho (t) \, . \ee The large $N$ free energy is then given by \cite{Martelli:2011fu,Imamura:2011wg} \begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation} \label{S^3:free_energy:functional} \frac{F_{\mathbb{S}^3_b} \left[ \rho(t) , \delta v(t) , \Delta_I | b \right]}{N^{3/2}} & = \frac{k}{2 \pi b^2} \int \mathrm{d} t \rho(t) t \delta v(t) - \gamma \left( \int \mathrm{d} t \rho(t) - 1 \right) \\ & - \frac{b \mathfrak{Q}^3}{16} \sum_{a=1}^2 (2 - \Delta_a^+) \int \mathrm{d} t \rho(t)^2 \left[ \left( 2 \frac{\delta v(t)}{b \mathfrak{Q}} + \pi \Delta_a^- \right)^2 - \frac{\pi^2}{3} \Delta_a^+ ( 4 - \Delta_a^+ ) \right] , \eea where we defined $\delta v(t) \equiv v(t) - \tilde v(t)$, $\Delta_1^\pm \equiv \Delta_1 \pm \Delta_4$, $\Delta_2^\pm \equiv \Delta_2 \pm \Delta_3$, and we added the Lagrange multiplier $\gamma$ for the normalization of $\rho(t)$. Setting to zero the variation of \eqref{S^3:free_energy:functional} with respect to $\rho(t)$ and $\delta v(t)$ we obtain the following saddle point configuration. We have a central region where \begin{equation}} \newcommand{\ee}{\end{equation} \begin{aligned} \rho (t)&= \frac{16 b \gamma + 4 \mathfrak{Q} k t (\Delta_1 \Delta_2-\Delta_3 \Delta_4 )}{4 \pi^2 b^2 \mathfrak{Q}^3 (\Delta_1+\Delta_3 ) (\Delta_2+\Delta_3 ) (\Delta_1+\Delta_4 ) (\Delta_2+\Delta_4 )} \, , \\[.5em] \delta v (t)&= \frac{2 \pi b \mathfrak{Q}^2 k t \sum_{a<b<c} \Delta_a \Delta_b \Delta_c - 4 \pi b^2 \mathfrak{Q} \gamma (\Delta_1 \Delta_2-\Delta_3 \Delta_4 )}{8 b \gamma + 2 \mathfrak{Q} k t (\Delta_1 \Delta_2 -\Delta_3 \Delta_4 )} \, , \end{aligned} \qquad - \frac{2 b \gamma }{\mathfrak{Q} k \Delta_1} < t < \frac{2 b \gamma }{\mathfrak{Q} k \Delta_3} \, . \ee When $\delta v = - \pi b \mathfrak{Q} \Delta_2$ on the left the solution reads \begin{equation}} \newcommand{\ee}{\end{equation} \rho (t)= \frac{2 b \gamma + \mathfrak{Q} k t \Delta_2}{\pi^2 b^2 \mathfrak{Q}^3 (\Delta_1-\Delta_2 ) (\Delta_2+\Delta_3 ) (\Delta_2+\Delta_4 )} \, , \qquad - \frac{2 b \gamma }{\mathfrak{Q} k \Delta_2} < t < - \frac{2 b \gamma }{\mathfrak{Q} k \Delta_1} \, , \ee while when $\delta v = \pi b \mathfrak{Q} \Delta_4$ on the right the solution is given by \begin{equation}} \newcommand{\ee}{\end{equation} \rho (t) = - \frac{2 b \gamma - \mathfrak{Q} k t \Delta_4}{\pi^2 b^2 \mathfrak{Q}^3(\Delta_1+\Delta_4 ) (\Delta_2+\Delta_4 ) (\Delta_4-\Delta_3 )} \, , \qquad \frac{2 b \gamma }{\mathfrak{Q} k \Delta_3} < t < \frac{2 b \gamma }{\mathfrak{Q} k \Delta_4} \, . 
\ee The normalization of $\rho(t)$ fixes the value of $\gamma$ as \begin{equation}} \newcommand{\ee}{\end{equation} \gamma = \frac{\pi \mathfrak{Q}^2}{\sqrt{2}} \sqrt{k \Delta_1 \Delta_2 \Delta_3 \Delta_4} \, . \ee Plugging the above solution back into \eqref{S^3:free_energy:functional} we obtain the squashed $\mathbb{S}^3$ free energy% \footnote{The first equality arises from a virial theorem for the free energy \eqref{S^3:free_energy:functional}.} \begin{equation}} \newcommand{\ee}{\end{equation} \label{br_sph} F_{\mathbb{S}^3_b} (\Delta_I|\mathfrak{Q}) = \frac{2 N^{3/2}}{3} \gamma= \frac{\pi N^{3/2} \mathfrak{Q}^2}{3} \sqrt{2 k \Delta_1 \Delta_2 \Delta_3 \Delta_4} = \frac{\mathfrak{Q}^2}{4} F_{\mathbb{S}^3} (\Delta_I) \, , \ee where $F_{\mathbb{S}^3}$ is the free energy of ABJM on the round $\mathbb{S}^3$, \textit{i.e.}\;$b=1$, see \cite[sect.\,5]{Jafferis:2011zi}. This is precisely \cite[(3.38)]{Martelli:2011fu}. \subsection[Squashed \texorpdfstring{$\mathbb{S}^5$}{S**5} free energy]{Squashed $\mathbb{S}^5$ free energy} \label{sec:logZ:5D} In this section we review the large $N$ limit of the squashed $\mathbb{S}^5$ free energy of the $\mathrm{USp}(2N)$ gauge theory with $N_f$ hypermultiplets in the fundamental representation and one hypermultiplet in the antisymmetric representation of $\mathrm{USp}(2N)$, as analyzed in \cite{Chang:2017mxc}. The gauge theories of interest live on the intersection of $N$ D4-branes and $N_f$ D8-branes and orientifold planes in type I' string theory and are holographically dual to a warped AdS$_6 \times \mathbb{S}^4$ background of massive type IIA supergravity \cite{Intriligator:1997pq} (see also \cite{Brandhuber:1999np,Bergman:2012kr,Morrison:1996xf,Seiberg:1996bd}). The perturbative partition function can be written as% \footnote{We will neglect instanton contributions as they are exponentially suppressed in the large $N$ limit.} \begin{equation}} \newcommand{\ee}{\end{equation} Z_{\mathbb{S}^5_\omega}^{\text{pert}} = \int_{- \infty}^{\infty} \left[ \prod_{i = 1}^{N} \frac{\mathrm{d} \lambda_i}{2 \pi} \right] e^{- F_{\mathbb{S}^5_\omega} (\lambda_i)} \, , \ee where \begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation} \label{free_energy:S^5} F_{\mathbb{S}^5_\omega} & = N \log 2 + \log N! - N \log S'_{3} ( 0 | \vec{\omega}) + (N-1) \log S_3 \left( \mathrm{i} m_{a} + \frac{\omega_{\text{tot}}}{2} \Big| \vec{\omega} \right) \\ & + \frac{1}{\omega_1 \omega_2 \omega_3} \frac{4 \pi^3}{g_{\text{YM}}^2} \sum_{i = 1}^N \lambda_i^2 - \sum_{i > j}^{N} \log S_3 \left( \mathrm{i} \left[ \pm \lambda_i \pm \lambda_j \right] | \vec{\omega} \right) - \sum_{i = 1}^N \log S_3 \left( \pm 2 \mathrm{i} \lambda_i | \vec{\omega} \right) \\ & + \sum_{i > j}^{N} \log S_3 \left( \mathrm{i} \left[ \pm \lambda_i \pm \lambda_j \right] + \mathrm{i} m_a + \frac{\omega_{\text{tot}}}{2} \Big| \vec{\omega} \right) + N_f \sum_{i = 1}^N \log S_3 \left( \pm \mathrm{i} \lambda_i + \mathrm{i} m_f + \frac{\omega_{\text{tot}}}{2} \Big| \vec{\omega} \right) \, , \eea with $S_3 ( \lambda | \vec{\omega})$ being the triple sine function. Here, $m_a$ and $m_f$ are the masses for the hypermultiplets in the antisymmetric and fundamental representations of $\mathrm{USp}(2N)$, respectively. We also introduced the notation \begin{equation}} \newcommand{\ee}{\end{equation} \omega_{\text{tot}} \equiv \omega_1 + \omega_2 + \omega_3 \, , \qquad S_{3} (\pm z | \vec{\omega} ) \equiv S_3 (z | \vec{\omega}) S_3 (- z | \vec{\omega}) \, . 
\ee \paragraph*{Large $N$ free energy.} We may restrict to $\lambda_i \geq 0$ due to the Weyl reflections of the $\mathrm{USp}(2N)$ group. Consider the following ansatz for the large $N$ saddle point eigenvalue distribution, \begin{equation}} \newcommand{\ee}{\end{equation} \label{Ansatz:largeN:5D} \lambda_j = N^{\alpha} t_j \, , \ee where $\alpha \in ( 0 , 1)$ will be determined later. As in the previous section, at large $N$, we define the continuous function $t_j = t (j / N)$ and we introduce the density of eigenvalues $\rho(t)$, see \eqref{rho(t)}. In the large $N$ limit, $\lambda_{i} = \mathcal{O}(N^{1/2})$ (see \eqref{Ansatz:largeN:5D} with $\alpha = 1/2$). Therefore, at large $N$, the contributions with nontrivial instanton numbers are exponentially suppressed. In the continuum limit, the free energy \eqref{free_energy:S^5} is given by \cite{Chang:2017mxc}% \footnote{Notice, that the free energy at large $N$ does \emph{not} depend on the masses of the $N_f$ fundamental hypermultiplets. As it was shown in \cite[(3.22)]{Chang:2017mxc} their contribution to the large $N$ free energy is of order $\mathcal{O}(N^{3/2})$ and, thus, subleading.} \begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation} \label{S^5:free_energy:functional} F_{\mathbb{S}^5_\omega} \left[ \rho(t) , m_a | \vec{\omega} \right] & = \frac{N^{1 + 3 \alpha}}{\omega_1 \omega_2 \omega_3} \frac{\pi ( 8 - N_f )}{3} \int_{0}^{t_*} \mathrm{d} t \rho(t) | t |^3 - \mu \left( \int_{0}^{t_*} \mathrm{d} t \rho(t) - 1 \right) \\ & - \frac{N^{2 + \alpha}}{\omega_1 \omega_2 \omega_3} \frac{\pi \left( \omega_{\text{tot}}^2 + 4 m_a^2 \right)}{8} \int_{0}^{t_*} \mathrm{d} t \rho(t) \int_{0}^{t_*} \mathrm{d} t' \rho(t') \left[ t + t' + | t - t' | \right] \, , \eea where we added the Lagrange multiplier $\mu$ for the normalization of $\rho(t)$. In order to have a consistent saddle point $\alpha$ acquires the value $1/2$, and thus $F_{\mathbb{S}^5_\omega} \propto N^{5/2}$. Setting to zero the variation of \eqref{S^5:free_energy:functional} with respect to $\rho(t)$ we find the following saddle point configuration \begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation} & \rho(t) = \frac{2 | t |}{t_*} \, , \qquad t_* = \frac{1}{\sqrt{2} \sqrt{8 - N_f}} \left( \omega_{\text{tot}}^2 + 4 m_a^2 \right)^{1/2} \, , \\ & \mu = - \frac{\pi}{3 \sqrt{2} \omega_1 \omega_2 \omega_3} \frac{N^{5/2}}{\sqrt{8 - N_f}} \left( \omega_{\text{tot}}^2 + 4 m_a^2 \right)^{3/2} \, . \eea Plugging this back into \eqref{S^5:free_energy:functional} we obtain the squashed $\mathbb{S}^5$ free energy of the $\mathrm{USp}(2N)$ theory, that reads (\textit{cf.}\,\cite[(3.38)]{Chang:2017mxc})% \footnote{The first equality arises from a virial theorem for the free energy \eqref{S^5:free_energy:functional}.} \begin{equation}} \newcommand{\ee}{\end{equation} \label{S^5:free_energy:on-shell} F_{\mathbb{S}^5_\omega} ( m_a | \vec{\omega}) = \frac{2}{5} \mu = - \frac{\pi \sqrt{2}}{15 \omega_1 \omega_2 \omega_3} \frac{N^{5/2}}{\sqrt{8 - N_f}} \left( \omega_{\text{tot}}^2 + 4 m_a^2 \right)^{3/2} \, . 
\ee Introducing the redundant but \emph{democratic} parameterization \begin{equation}} \newcommand{\ee}{\end{equation} \label{demo} \Delta_1 = 1 + \frac{2 \mathrm{i} }{\omega_{\text{tot}}} m_a \, , \qquad \Delta_2 = 1 - \frac{2 \mathrm{i} }{\omega_{\text{tot}}} m_a \, , \ee \eqref{S^5:free_energy:on-shell} can be rewritten as \begin{equation}} \newcommand{\ee}{\end{equation} \label{S^5:Delta} F_{\mathbb{S}^5_\omega} ( \Delta_i | \vec{\omega}) = - \frac{\sqrt{2} \pi}{15} \frac{\omega_{\text{tot}}^3}{\omega_1 \omega_2 \omega_3} \frac{N^{5/2}}{\sqrt{8 - N_f}} \left( \Delta_1 \Delta_2 \right)^{3/2} \, , \qquad \Delta_1 + \Delta_2 = 2 \, . \ee Finally, setting $\Delta_{1,2} = 1$ and $\omega_{1,2,3} = 1$, we find the round $\mathbb{S}^5$ free energy \cite{Jafferis:2012iv} \begin{equation}} \newcommand{\ee}{\end{equation} \label{round:S^5:free_energy} F_{\mathbb{S}^5} = - \frac{9 \sqrt{2} \pi}{5} \frac{N^{5/2}}{\sqrt{8 - N_f}} \, . \ee \section[Four-dimensional solutions from the \texorpdfstring{$stu$}{stu} model]{Four-dimensional solutions from the $stu$ model} We treat here the four-dimensional gravitational backgrounds used to compute the holographic supersymmetric R{\'e}nyi entropy. This section is organized as follows: before delving into the more intricate matter coupled solutions, we start by reviewing the simple case of the minimal supergravity BPS hyperbolic Reissner-Nordstr\"om and its SRE computation as done in \cite{Nishioka:2014mwa,Huang:2014gca}. After this, in \ref{stu4D} we first recall the basic features of four-dimensional abelian Fayet-Iliopoulos (FI) gauged supergravity and present the hyperbolic matter coupled black hole solutions which first appeared in \cite{Cvetic:1999xp}, leaving the details of the supergravity formalism and the BPS equations to appendix \ref{AppA}. In \ref{renyi4d}, we compute the renormalized on-shell action and compare the result with the field theory computation in subsection \ref{holo3dmatching}, making contact with the minimal case as well. The complete procedure of holographic renormalization is spelled out in appendix \ref{AppB}. \subsection[Warm up: BPS hyperbolic Reissner-Nordstrom]{Warm up: BPS hyperbolic Reissner-Nordstr\"om \label{warmup}} The computation of the SRE for hyperbolic solutions of $\mathcal{N}=2$ minimal gauged supergravity was treated in \cite{Nishioka:2014mwa,Huang:2014gca}. The gravity configurations are solutions to the equations of motion of the bosonic action \begin{equation}} \newcommand{\ee}{\end{equation} S = \int \mathrm{d}^4x \sqrt{g} \left(R - \frac14 F_{\mu\nu} F^{\mu\nu} -\frac{6}{l_{\text{AdS}}} \right) , \ee and read \begin{equation}} \newcommand{\ee}{\end{equation} \label{hyp_RN} \mathrm{d} s^2 = - \left(\frac{r^2}{l_{\text{AdS}}^2} -1- \frac{2M}{r} +\frac{Q^2}{r^2} \right) \mathrm{d} t^2 + \frac{\mathrm{d} r^2}{ \left(\frac{r^2}{l_{\text{AdS}}^2} -1- \frac{2M}{r} +\frac{Q^2}{r^2}\right)} + r^2 (\mathrm{d} \theta^2 + \sinh^2(\theta) \mathrm{d} \phi^2) \, , \ee with gauge field $A_t = \frac{Q}{r} \mathrm{d} t+ c \? \mathrm{d} t$. $c$ is a gauge term to be fixed later, in such a way that the gauge field is zero at the horizon $r_+$, where $g_{tt}$ vanishes, $g_{tt}(r_+)=0$. In order for the solution to preserve $1/2$ of the supersymmetries, the relation $Q = \mathrm{i} M $ should hold. In other words, the charges of the solution should be purely imaginary. 
As we elaborate later on, this is not a problem because our aim is to study an analytically continued solution in Euclidean signature, obtained by $t \rightarrow - \mathrm{i} \tau$, where the metric nevertheless remains real. With a slight abuse of terminology, consistent with the literature, we will continue referring to these solutions as ``topological'' or hyperbolic black holes. We set for simplicity $l_{\text{AdS}} = 1$. First of all, imposing the BPS relation $M =- \mathrm{i} Q$ and the fact that $g_{tt} (r_+) =0$, we have that
\begin{equation}
\label{Qis}
Q = \mathrm{i} r_+ (1\pm r_+)\,.
\end{equation}
The Wick-rotated solution is characterized by a temperature $T$, found as the inverse periodicity of the $\tau$ coordinate, once we impose that the metric caps off smoothly at $r_+$. Indeed, for $r \rightarrow r_+$ the metric, upon changing coordinates to $R = \sqrt{ \frac{2(r-r_+)}{2r_+-1}} $, approaches
\begin{equation}
\mathrm{d} s^2 = \mathrm{d} R^2 + R^2 \mathrm{d} \tau^2 (2r_+-1) + r_+^2 (\mathrm{d} \theta^2 + \sinh^2(\theta) \mathrm{d} \phi^2) \, .
\end{equation}
Therefore, the periodicity of the $\tau $ coordinate should be $\beta \equiv \Delta \tau = \frac{2\pi}{2r_+-1}$. The temperature\footnote{Once again, we denote this as the ``temperature of the black hole'', but we stress that its meaning comes from the Euclidean solution.} is the inverse of this period:
\begin{equation}
\label{T4min}
T = \frac{2r_+-1}{2\pi} \, .
\end{equation}
In order for the gauge field not to be singular at the horizon,
\begin{equation}
A(r_+) = \frac{Q}{r_+} \mathrm{d} t+ c \? \mathrm{d} t =0 \, ,
\end{equation}
we set $c= - \frac{Q}{r_+}$. We define the chemical potential $\phi$ as the asymptotic value of the gauge field, therefore $\phi \equiv \lim_{r \rightarrow \infty} A_{t} = c$. To find the SRE, one identifies $T$ with $T_0/n$, where $T_0$ is the temperature of the neutral black hole and $n$ is the replica parameter. In this way,
\begin{equation}
\label{Treplica}
T = \frac{1}{2\pi n} \, .
\end{equation}
Combining \eqref{T4min} and \eqref{Treplica}, we can extract the value of $r_+$ as a function of the replica parameter $n$:
\begin{equation}
\label{rns}
r_+ = \frac{n \mp1}{2n} \, .
\end{equation}
We choose the lower branch since, for $n=1$, $r_+$ should go to unity. Similar reasoning makes us choose the lower sign in \eqref{Qis}. The expression for the free energy found in \cite{Nishioka:2014mwa,Huang:2014gca} reads
\begin{equation}
\label{onsh_min}
I = \frac{\mathrm{Vol}(\mathbb{H}^2) \beta}{8 \pi G_4} \left(-r_+^3 + \mathrm{i} Q - \frac{Q^2}{r_+} \right) \, ,
\end{equation}
which, upon using \eqref{Qis}, \eqref{rns} becomes
\begin{equation}
\label{onsh}
I = \frac{\mathrm{Vol}(\mathbb{H}^2) \beta}{8 \pi G_4} \frac{(n+1)^2}{n^2} = \frac{ \pi }{8 G_4} \frac{(n+1)^2}{n} \, .
\end{equation}
This matches the branched sphere partition function on the field theory side \cite{Nishioka:2014mwa,Huang:2014gca}, upon setting $\Delta_I = 1/2$, $I=1,\ldots,4$, in \eqref{br_sph} and using the standard AdS$_4$/CFT$_{3}$ relation $\frac{1}{G_4} = \frac{2 \sqrt2}{ 3} N^{3/2}$ and the regularized volume $\mathrm{Vol}(\mathbb{H}^2) =-2\pi$ \cite{Nishioka:2014mwa}.
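Several branch choices enter this chain of substitutions. As a minimal symbolic check (ours, using \texttt{sympy}, and not part of the original computation), one can verify that the choices made above, $r_+=(n+1)/(2n)$ and $Q=\mathrm{i}\, r_+(1-r_+)$, inserted into \eqref{onsh_min} with $\beta=2\pi n$ and $\mathrm{Vol}(\mathbb{H}^2)=-2\pi$, reproduce the closed form on the right-hand side of \eqref{onsh}:
\begin{verbatim}
import sympy as sp

n, G4 = sp.symbols('n G_4', positive=True)
i = sp.I

rp = (n + 1)/(2*n)        # horizon radius, lower branch of (rns)
Q = i*rp*(1 - rp)         # BPS charge, lower sign in (Qis)
beta = 2*sp.pi*n          # inverse temperature, eq. (Treplica)
VolH2 = -2*sp.pi          # regularized volume of H^2

# on-shell action (onsh_min) minus the closed form pi (n+1)^2 / (8 G_4 n)
action = VolH2*beta/(8*sp.pi*G4)*(-rp**3 + i*Q - Q**2/rp)
print(sp.simplify(action - sp.pi*(n + 1)**2/(8*G4*n)))   # prints 0
\end{verbatim}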
Finally, we notice that the chemical potential takes the form \begin{equation}} \newcommand{\ee}{\end{equation} \phi = - \frac{Q}{r_+}= - \mathrm{i} (1- r_+) = - \mathrm{i} \frac{n-1}{2n} \, , \ee matching the value of the R-symmetry background field \eqref{Rsymm}. We record this expression as it will be useful later on in the computation of the SRE in the matter coupled case. \subsection[Hyperbolic black hole solutions of the $stu$ model]{Hyperbolic black hole solutions of the $stu$ model} \label{stu4D} The AdS$_4$ black holes with hyperbolic horizon we are after are solutions to abelian FI gauged supergravity in four spacetime dimensions. $\mathrm{U}(1)$ FI gauged supergravity arises as a truncation to the Cartan subalgebra, $\mathrm{U}(1)^4$, of $\mathcal{N} = 8$ gauged supergravity. The model thus obtained, called the $stu$ model, corresponds to the prepotential \begin{equation}\label{prep} F ( X ) = - 2 \mathrm{i} \sqrt{X^0 X^1 X^2 X^3 } \, , \end{equation} in the standard notation of $\mathcal{N} =2$ supergravity. We will deal with a purely electric solution that has a hyperbolic horizon, supported by purely real scalars. In the BPS limit, the solution correspond to a $1/2$ BPS black hole, preserving 4 out of the original 8 supercharges. Spherical black holes of this model were constructed in \cite{Duff:1999gh, Sabra:1999ux}, and later elaborated upon in \cite{Toldo:2012ec}. The hyperbolic solution, along with its uplift to eleven dimensions, first appeared in \cite{Cvetic:1999xp}. It is a static black hole characterized by the following metric \begin{equation}\label{sol} \mathrm{d} s^2= -\frac{U(r)}{4} \mathrm{d} t^2 +\frac{\mathrm{d} r^2}{U(r)}+ h^2(r) ( \mathrm{d} \theta^2 + \sinh^2(\theta) \mathrm{d} \phi^2) \,, \end{equation} with \begin{equation} \label{warpp3} U(r)=\frac{1}{\sqrt{\mathcal{H}}} f(r) \,, \qquad f(r) = -1- \frac{\mu}{r}+4 g^2 r^2 \mathcal{H} \,, \qquad h^2(r)=\sqrt{\mathcal{H}} r^2 \,. \end{equation} and \begin{equation}} \newcommand{\ee}{\end{equation} \mathcal{H} = H_1 H_2 H_3 H_4 \, , \qquad H_{I} = 1 + \frac{b_{I}}{r} \, , \qquad I =1, \ldots, 4 \, . \ee We set $g=1$ from now on, and notice that we have rescaled time to match the asymptotic geometry \eqref{hypn}. The non-vanishing components of the vector fields supporting the configurations are \begin{equation}\label{gf} A^{I} = \frac12 \left(1-\frac{1}{H_I} \right) \frac{q_I}{b_I} \, \mathrm{d} t + c^I \mathrm{d} t\, , \end{equation} where we have included four constant parameters $c^I$ (to be determined later) which are required so that the gauge fields are non-singular at the horizon. The equations of motion are satisfied if the parameters satisfy the following relation: \begin{equation}} \newcommand{\ee}{\end{equation} b_I = \mu \sin^2(\zeta_I) \, , \qquad q_I= \mu \sin(\zeta_I) \cos(\zeta_I) \, . \ee Uppercase indices $I,J$ run from 1 to 4, while lowercase ones $i,j$ run from 1 to 3. The magnetic charges are set to zero, hence this is a purely electric configuration. The scalar fields $z^i$ are \emph{real} and parameterized by the holomorphic sections $X^{i}$, $z^i = X^i / X^0$. They assume the form \cite{Duff:1999gh} \begin{equation}} \newcommand{\ee}{\end{equation} \label{zis} z^1 = \frac{H_1 H_2}{H_3 H_4} \, , \qquad z^2 = \frac{H_1 H_3}{H_2 H_4} \, , \qquad z^3= \frac{H_1 H_4}{H_2 H_3} \, . \ee The uplift of the solution to eleven-dimensional supergravity was performed in \cite{Cvetic:1999xp}, where the solution was interpreted as the decoupling limit of spinning M2-branes. 
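Before turning to the BPS branch, it is a quick symbolic exercise (ours; the parameter identification below is not stated explicitly in the text) to verify that for equal charge parameters, $b_I=b$, the warp factors \eqref{warpp3} collapse, after the shift $r\to r+b$, to the hyperbolic Reissner-Nordstr\"om form of the previous subsection with $l_{\text{AdS}}=1/2$, the additional factor of $1/4$ in $g_{tt}$ being accounted for by the rescaled time:
\begin{verbatim}
import sympy as sp

r, b, mu = sp.symbols('r b mu', positive=True)
rho = r + b                       # shifted radial coordinate

H = 1 + b/r                       # H_1 = ... = H_4 = H
f = -1 - mu/r + 4*r**2*H**4       # g = 1
U = f/H**2                        # sqrt(calH) = H^2 for H > 0

# RN-AdS form with l_AdS = 1/2, M = mu/2 - b and Q^2 = b*(mu - b);
# on the BPS branch (mu = 0) this gives Q^2 = -b^2, an imaginary
# charge, in line with q_I = i b_I below.
U_RN = 4*rho**2 - 1 - 2*(mu/2 - b)/rho + b*(mu - b)/rho**2

print(sp.simplify(U - U_RN))              # prints 0
print(sp.simplify(H**2*r**2 - rho**2))    # h^2 = rho^2, prints 0
\end{verbatim}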
The BPS branch, which provides the solutions of interest here, is obtained by setting $\mu=0$ and by taking% \begin{equation}} \newcommand{\ee}{\end{equation} q_I= \mathrm{i} b_I \, . \ee This configuration solves the BPS equations, as shown in appendix \ref{4D:BPS:proof}. Notice that the electric charge assumes a purely imaginary value, as it did in the minimal case studied in \cite{Nishioka:2014mwa,Huang:2014gca}. This is not a problem, as our aim is to study an analytically continued solution preserving supersymmetry. For this purpose, it is legitimate to take some parameters to be genuinely complex, since the Killing spinor equation, being analytic in the supergravity fields, will still admit a solution in the complexified background. Nevertheless, the Euclideanized metric in this case will remain purely real. It would be desirable to find a suitable solution directly in Euclidean supergravity coupled to matter multiplets, however in the following we will content ourselves with (a Wick-rotated version of) the Lorentzian solutions at hand. The hyperbolic Reissner-Nordstr\"om solution discussed in the previous subsection is recovered from our setup upon taking the scalars to be constant \begin{equation}} \newcommand{\ee}{\end{equation} H_1=H_2=H_3=H_4 =H \, , \qquad z^i =1 \, ,\quad i = 1,2,3 \, , \ee taking all the gauge fields equal, and redefining the $stu$ fields $A^I$ (see \cite[(3.15)]{Cvetic:1999xp}) as $A^I = A / 2$. By doing so, the number of independent electric charges reduces to one, that of the graviphoton $A$. \subsection[Holographic supersymmetric Renyi entropy]{Holographic supersymmetric R{\'e}nyi entropy \label{renyi4d}} From the $stu$ black hole at our disposal, we can compute the temperature (see footnote 14) \begin{equation}} \newcommand{\ee}{\end{equation} T = \frac{1}{4\pi} \frac{\mathrm{d} U}{\mathrm{d} r} \bigg|_{r_+} \, , \ee which turns out to be \begin{equation}} \newcommand{\ee}{\end{equation} \label{temp} T = \frac{ \left(r_+^3 (b_3+b_4+2 r_+)-b_1 \left(b_2 b_3 (2 b_4+r_+)+b_2 b_4 r_++b_3 b_4 r_+-r_+^3\right)+b_2 \left(r_+^3-b_3 b_4 r_+\right)\right)}{2 \pi r_+ \sqrt{b_1+r_+} \sqrt{b_2+r_+} \sqrt{b_3+r_+} \sqrt{b_4+r_+}} \, . \ee Here, $r_+$ is the location of the horizon, obtained by requiring $U(r_+) =0$. We leave the quantity $r_+$ implicit for the moment: trying to solve for $r_+$ from the vanishing of the warp factor yields a quartic equation whose explicit expression is quite cumbersome to manipulate. Consider the uncharged black hole $q_1=q_2=q_3=q_4=0$. In this case, the requirement $U(r_+) =0$ gives $4 r_+^2 - 1=0$, hence $r_+$ takes the simple form \begin{equation}} \newcommand{\ee}{\end{equation} r_+ = \frac{1}{2} \, . \ee Denoting by $T_0$ the temperature of the uncharged black hole, we have \begin{equation}} \newcommand{\ee}{\end{equation} T_0 = \frac{1}{2\pi} \, , \ee which will be useful later when defining the supersymmetric R{\'e}nyi entropy. In order for the gauge field to be non-singular at the horizon, we require $A^I(r_+) =0$. Given the expression \eqref{gf}, this leads to \begin{equation}} \newcommand{\ee}{\end{equation} c^I = - \frac{\mathrm{i}}{2} \left(1-\frac{1}{H_I (r_+)} \right) , \qquad I = 1, \ldots, 4 \, . \ee The chemical potentials $\phi_I$ are defined as the asymptotic values of the gauge fields. 
They assume the form (we do not distinguish here between upper and lower indices on the chemical potentials) \begin{equation}} \newcommand{\ee}{\end{equation} \label{potential} \phi_I = c^I = -\frac{\mathrm{i}}{2} \frac{b_I}{ b_I + r_+} \, , \qquad I = 1, \ldots, 4 \, . \ee By inserting \eqref{potential} into \eqref{temp}, we can express the temperature as a function of the chemical potentials in the following way: \begin{equation}} \newcommand{\ee}{\end{equation} T= \frac{- \mathrm{i} (\phi_1+\phi_2+\phi_3+\phi_4) + 1}{ \pi ( \sqrt{1-2 \mathrm{i} \phi_1} \sqrt{1-2 \mathrm{i} \phi_2} \sqrt{1- 2 \mathrm{i} \phi_3} \sqrt{1- 2 \mathrm{i} \phi_4})} \? r_+ \, , \ee where we have once again left $r_+$ implicit. We also point out that the quantities $\phi_I$ are \emph{imaginary}, therefore $T$ is \emph{real}, as it should be. At this point, we can define \begin{equation}} \newcommand{\ee}{\end{equation} \label{renyi} T = \frac{T_0}{n} = \frac{1}{2 \pi n} \, . \ee Solving this equation for $r_+$ we obtain \begin{equation}} \newcommand{\ee}{\end{equation} \label{rplus4} r_+=\frac{1}{2 n} \frac{\sqrt{1- 2\mathrm{i} \phi_1} \sqrt{1- 2 \mathrm{i} \phi_2} \sqrt{1- 2\mathrm{i} \phi_3} \sqrt{1-2 \mathrm{i} \phi_4}}{1-\mathrm{i} (\phi_1+\phi_2+\phi_3+\phi_4) } \, . \ee Additionally, we know that the quantity $r_+$ must satisfy the relation $U(r_+) =0$. Inserting the definitions \eqref{potential} into $U(r_+) =0$ yields the condition \begin{equation}} \newcommand{\ee}{\end{equation} 1+n^2 (\phi_1 + \phi_2 + \phi_3 +\phi_4+ \mathrm{i})^2 =0 \, , \ee which is solved by \begin{equation}} \newcommand{\ee}{\end{equation} \label{constr_chempot} \phi_1 + \phi_2 + \phi_3 +\phi_4 = \frac{ \mathrm{i} ( 1\pm n )}{n} \, . \ee We choose the lower sign since for $n=1$ we should have zero chemical potential. As we will see in a moment, the choice of the upper branch translates in the dual field theory to a constraint on the value of the R-symmetry background field. To recapitulate, at this point we have obtained the expression \eqref{rplus4} for $r_+$ in terms of the chemical potentials and the R{\'e}nyi parameter $n$, supplemented by the constraint \eqref{constr_chempot}. The renormalized on-shell action is computed by adapting the procedure of \cite{Batrachenko:2004fd} to the case of hyperbolic horizons. The computation, reported in appendix \ref{AppB}, is tedious and not particularly illuminating. In the end, the thermodynamical potential reads \begin{eqnarray}\label{Omega} I &=&\beta \Omega = I_{\text{reg}}+ E_{\text{ct}}+E_{\text{fin}}= \frac{\beta \, \mathrm{Vol}(\mathbb{H}^2)}{8 \pi \textbf{c} G_4}\left(-\frac{\mu}{2}+r_+\right) \, , \end{eqnarray} where $I_{\text{reg}}$ is the regularized on-shell action, $I_{\text{ct}}= E_{\text{ct}}$, $\mathrm{Vol}(\mathbb{H}^2)$ is the (regularized) volume of $\mathbb{H}^2$, and $\beta = 1/T$ is the period of the Euclidean time direction. For the BPS case $\mu =0$, we have \begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation}\label{Omega} I = \frac{\beta \? \mathrm{Vol}(\mathbb{H}^2) \, r_+}{8 \pi \textbf{c} G_4} =\frac{ 2 \pi \mathrm{Vol}(\mathbb{H}^2)}{8 \pi \textbf{c} G_4} \left(\frac{ \mathrm{i} \sqrt{1-2 \mathrm{i} \phi_1} \sqrt{1-2 \mathrm{i} \phi_2} \sqrt{1-2 \mathrm{i} \phi_3} \sqrt{1-2 \mathrm{i} \phi_4}}{ 2 ( \mathrm{i} + \phi_1+\phi_2+\phi_3+\phi_4)} \right) . \eea This expression is useful when comparing with the field theory result $Z_n$ \eqref{br_sph}. 
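Expressions such as \eqref{temp} are easy to mis-transcribe. As a sanity check (ours, restricted for brevity to equal charge parameters), the following \texttt{sympy} snippet confirms that \eqref{temp} and \eqref{potential} indeed combine into the expression for $T$ in terms of the chemical potentials quoted above:
\begin{verbatim}
import sympy as sp

b, rp = sp.symbols('b r_p', positive=True)
i = sp.I

# temperature (temp) with b_1 = ... = b_4 = b
num = (rp**3*(b + b + 2*rp)
       - b*(b*b*(2*b + rp) + b*b*rp + b*b*rp - rp**3)
       + b*(rp**3 - b*b*rp))
T_b = num/(2*sp.pi*rp*(b + rp)**2)

# equal chemical potentials from (potential)
phi = -i/2*b/(b + rp)

# temperature written in terms of the chemical potentials
T_phi = (1 - 4*i*phi)*rp/(sp.pi*(1 - 2*i*phi)**2)

print(sp.simplify(T_b - T_phi))   # prints 0
\end{verbatim}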
The free energy of the black hole is given by \begin{equation}} \newcommand{\ee}{\end{equation} I = - \log Z(\phi_I,T) \, . \ee The state variables are computed according to \begin{equation}} \newcommand{\ee}{\end{equation} \label{rel1} E = \left( \frac{\partial I}{ \partial \beta}\right)_{\phi} - \frac{\phi_I}{\beta} \left( \frac{\partial I}{\partial \phi_I}\right)_{\beta} \, , \qquad S_{\text{BH}} = \beta \left(\frac{\partial I}{ \partial \beta} \right)_{\phi} - I \, , \qquad \mathcal{Q}_I = -\frac{1}{\beta} \left(\frac{\partial I}{ \partial \phi_I} \right)_{\beta} \, . \ee The renormalized on-shell action, \eqref{Omega}, is computed in the grand canonical ensemble. In this ensemble, the Gibbs potential $W$ is given by (see appendix \ref{AppB}) \begin{equation}} \newcommand{\ee}{\end{equation} W = \frac{I}{\beta} = E - T S_{\text{BH}} - \phi^I Q_I + \Lambda \left( \phi_1+\phi_2+\phi_3+\phi_4 - \mathrm{i} \frac{(1-n)}{n} \right) \, , \ee where $Q_I$ are the electric charges of the black hole, and we inserted the Lagrange multiplier $\Lambda$ which enforces the constraint \eqref{constr_chempot} among the chemical potentials. \subsection{Holographic matching \label{holo3dmatching}} In this section, we perform the holographic matching. The asymptotic value of the four-dimensional bulk gauge fields is related to the dual field theory flavor symmetry connection, defined in section \ref{def}, as \begin{equation}} \newcommand{\ee}{\end{equation} A_{\text{bulk}}^I (r \to \infty)= \phi_I \mathrm{d} t = \big( A^{\text{flavor},I}({\mathbb{S}^3_n})+A^{(R)}_{\text{bulk}} \big)\mathrm{d} \tau \, , \ee where we have used $t = -\mathrm{i} \tau$. To preserve supersymmetry, the background R-symmetry gauge field must have the form \eqref{Rsymm}. The background R-symmetry gauge field is identified with the chemical potential related to the R-symmetry gauge field in supergravity, which is the diagonal combination\footnote{Note that the factor of $1/2$ between \eqref{Rcomb} and \eqref{Rsymm} is due to the fact that the gauge fields in the $stu$ model are defined with a factor of $1/2$ with respect to the graviphoton in minimal supergravity \cite{Cvetic:1999xp}.} \begin{equation}} \newcommand{\ee}{\end{equation} \label{Rcomb} A^{(R)}_{\text{bulk}} (r \rightarrow \infty) = \frac14 \left(\phi_1 + \phi_2 +\phi_3 +\phi_4 \right) \mathrm{d} t = \mathrm{i} \frac{1-n}{4n} \mathrm{d} t = \frac{1-n}{4n} \mathrm{d} \tau \, , \ee that appears in the supercovariant derivative of the spinor parameter in the susy variations \eqref{susyvar4D}. Notice that $A^{(R)}_{\text{bulk}} = \frac12 A^{(R)}$. As a simple consistency check, \eqref{Rcomb} is precisely the relation \eqref{constr_chempot} obtained previously.% We are now ready to make contact with the field theory. The bulk fields correspond to the holonomies, shifted by the amount $(1 - n) / (4n)$ due to the R-symmetry connection. In other words, we use the mapping \eqref{map3d} between the holonomies $A^{\text{flavor},I}$ and the parameters $\Delta_I$, supplemented by the shift due to the R-symmetry: \begin{equation}} \newcommand{\ee}{\end{equation} A^I = A^{\text{flavor},I}+ A^{(R)}_{\text{bulk}} = \left( \Delta_I -\frac12 \right) \left( \frac{n+1}{2n}\right) + \frac{1-n}{4n} = \left(\frac{(1+n)\Delta_I}{2n} -\frac12 \right) . \ee Thus, we have \begin{equation}} \newcommand{\ee}{\end{equation} \label{mapphi3} \phi_I = \mathrm{i} \left( \frac{(1+n)\Delta_I}{2n} -\frac12 \right) , \qquad I =1,...,4 \, . 
\ee Taking the sum of the LHS and the RHS we obtain the constraint \begin{equation}} \newcommand{\ee}{\end{equation} \frac{n-1}{n} = 2 - \frac{n+1}{2n} \sum_I \Delta_I \qquad \Rightarrow \qquad \sum_I \Delta_I=2 \, , \ee which reproduces the usual constraint on the parameters $\Delta_I$. We use the standard relation \begin{equation}} \newcommand{\ee}{\end{equation} \label{AdS4:CFT3:dict} \frac{l_{\text{AdS}}^2}{G_4} = \frac{2 \sqrt2}{3} N^{3/2} \, , \ee where we have taken into account $l^2_{\text{AdS}} =1/4$ from \eqref{warpp3}. Inserting \eqref{mapphi3} into \eqref{Omega}, with $\textbf{c}=2$, the expression of the free energy becomes \begin{equation}} \newcommand{\ee}{\end{equation} \label{s3} I=- \frac{ \sqrt2 \pi N^{3/2}}{3} \frac{(n+1)^2}{n} \sqrt{\Delta_1 \Delta_2 \Delta_3 \Delta_4} = - \log Z_{\mathbb{S}_n^3} \, , \qquad \sum_{I=1}^4 \Delta_I =2 \, , \ee exactly matching the field theory computation \eqref{br_sph} upon identifying $b \equiv 1/\sqrt{n}$, see \eqref{map3d}. Note that we have defined the regularized volume as $\mathrm{Vol}(\mathbb{H}^2) = -2\pi$ as in \cite{Nishioka:2014mwa}. One easily sees that at the conformal point, $\Delta_I=1/2$, which corresponds to the minimal supergravity case, the on-shell action reduces as expected to the one found in \cite{Nishioka:2014mwa,Huang:2014gca}. We are now going to compute the supersymmetric R{\'e}nyi entropy. First, notice that the partition function on the field theory side, see \eqref{br_sph}, satisfies \begin{equation}} \newcommand{\ee}{\end{equation} \log Z_{\mathbb{S}_n^3} = \frac{(n+1)^2}{4n} \log Z_{\mathbb{S}^3} \, . \ee The supersymmetric R{\'e}nyi entropy is defined as \eqref{def:SRE} \begin{equation}} \newcommand{\ee}{\end{equation} S_n^{\text{SRE}} = \frac{n \log Z_{\mathbb{S}^3} - \log Z_{\mathbb{S}_n^3}}{n-1} \, . \ee Therefore, we have \begin{equation}} \newcommand{\ee}{\end{equation} \label{renyientropy} S_n = \frac{3n+1}{4n} S_1 \, , \qquad S_1 = \log Z_{\mathbb{S}^3} \, , \ee as expected. \section[Six-dimensional hyperbolic solutions]{Six-dimensional hyperbolic solutions} We introduce here the six-dimensional hyperbolic solutions necessary for the holographic computation of the supersymmetric R{\'e}nyi entropy. We first give some details regarding six-dimensional Romans $F(4)$ gauged supergravity coupled to one vector multiplet. We then present the hyperbolic black hole solutions coupled to matter, which have not previously appeared in the literature. In \ref{SRE6dROM}, we compute the holographic R{\'e}nyi entropy, using the result of appendix \ref{AppB}, and show the matching with the field theory computation in section \ref{sec:logZ:5D}. \subsection[Romans F(4) gauged supergravity coupled to matter]{Romans F(4) gauged supergravity coupled to matter} In what follows, we will consider the six-dimensional F(4) gauged supergravity coupled to one vector multiplet. Relevant references for this theory are \cite{Andrianopoli:2001rs,DAuria:2000xty}. While the massive type IIA supergravity origin of this theory as a truncation of the supersymmetric warped AdS$_6 \times \mathbb{S}^4$ solution has not been established, there is evidence for it based on previous holographic matchings, see for instance \cite{Gutperle:2017nwo,Hosseini:2018usu}. Taking the pragmatic approach of these latter papers, we work out supersymmetric solutions and proceed with the comparison of our result with its field theory counterpart. 
The five-dimensional SCFT dual to the warped AdS$_6 \times \mathbb{S}^4$ background is the one described in section \ref{sec:logZ:5D}. Solutions relevant for the supersymmetric R{\'e}nyi entropy computation in the minimal theory (no vector multiplets) \cite{Romans:1985tw} were studied in \cite{Hama:2014iea,Alday:2014fsa}. The non-minimal case is characterized by the presence of an additional flavor symmetry. The bosonic fields of the six-dimensional Romans supergravity theory \cite{Romans:1985tw} consist of the metric $g_{\mu \nu}$, a scalar field $X$, a two-form potential $B_{\mu \nu}$, a one-form potential $A$, and an $\mathrm{SU}(2)$ gauge field $A^j$ with $j= 1,2,3$. In addition, there are fermionic fields comprising a pair of gravitini $\psi_{\mu}^A$, $A=1,2$, and one spin-$1/2$ fermion $\chi^A$. The vector multiplet consists of one gauge field $A_{\mu}$, four scalar fields $\phi_{\alpha}$, with $\alpha=0,1,2,3$, and one gaugino $\lambda_A$. The scalar fields parameterize the coset space $\frac{\mathrm{SO}(4,1)}{\mathrm{SO}(4)}$. For additional details on the model, we refer the reader to \cite{Gutperle:2017nwo,Hosseini:2018usu}. In finding the solution, we may take the Romans supergravity solution as an example. In this solution, only one of the components of the $\mathrm{SU}(2)$ gauge field, which we take to be $A^3$ \cite{Alday:2014fsa}, is nonzero. This gauge field is purely electric, meaning that the only nonzero component of the field strength is $F_{rt}$. This allows us to set the two-form potential $B_{\mu \nu}$ to zero, as there is no source for it.%
\footnote{This is in contrast to the six-dimensional solutions of \cite{Hosseini:2018usu} of the form AdS$_2 \times \Sigma_{g_1} \times \Sigma_{g_2}$, which realize the partial topological twist on $\Sigma_{g_1} \times \Sigma_{g_2}$. In that case, there is magnetic flux on $\Sigma_{g_1}$ and $ \Sigma_{g_2}$. This creates a source for the $H_{\mu \nu}$ field, which needs to be canceled by a nonzero value of $B$, in order to have a solution with $H=0$.} In our setup with an additional vector multiplet, we will still require the $B$ field to vanish. Moreover, as in \cite{Hosseini:2018usu}, we require the scalar fields in the vector multiplet $\phi_{\alpha}$ to be neutral under $A^3$. This restricts the nonzero components to $\phi_0$ and $\phi_3$. We are further able to find a solution with only $\phi_3$ turned on, namely $\phi_0=0$. Thus, we are left with the bosonic content: the metric, two gauge fields, the dilaton $X$, and the scalar field $\phi_3$.
The solution is a static black hole characterized by the following metric \begin{equation} \mathrm{d} s^2= -U(r) \mathrm{d} t^2 +\frac{\mathrm{d} r^2}{V(r)}+ h(r) \mathrm{d} s_{\mathbb{H}^4}^2 \, , \end{equation} with $\mathrm{d} s^2_{\mathbb{H}^4}$ the area element of four-dimensional hyperbolic space \begin{equation}} \newcommand{\ee}{\end{equation} \mathrm{d} s^2_{\mathbb{H}^4} = \mathrm{d} \chi^2 +\sinh(\chi)^2 \left( \mathrm{d} \theta^2 +\sin^2(\theta) \mathrm{d} \psi^2 + \sin^2 (\theta) \sin^2(\chi) \mathrm{d} \phi^2 \right) , \ee and \begin{equation} \label{6D_warp} U(r)=\frac92 \frac{f(r)}{\mathcal{H}^{3/4}}\,, \qquad V(r)= \frac{f(r)}{\mathcal{H}^{1/4}} \qquad h(r)=\mathcal{H}^{1/4} r^2 \, , \end{equation} with% \footnote{As in \cite{Alday:2014fsa}, we have conveniently rescaled the time direction by a factor of $3/\sqrt2$ with respect to \cite{Cvetic:1999un}.} \begin{equation}} \newcommand{\ee}{\end{equation} f(r) = -1- \frac{\mu}{r}+ \frac29 r^2 \mathcal{H} \qquad \mathcal{H} = H_1 H_2 \, , \qquad H_I = 1 + \frac{b_I}{r^3} \, . \ee Here, $I=1,2$. The vector fields supporting the configuration read \begin{equation}\label{AI6} A^{I}_t = \frac{3}{2} \left(1-\frac{1}{H_I} \right) \frac{q_I}{b_I}\, - c^I \mathrm{d} t , \qquad I=1,2 \, , \end{equation} with parameters \begin{equation}} \newcommand{\ee}{\end{equation} b_I = \mu \sin^2(\xi_I) \, , \qquad q_I = \mu \sin(\xi_I) \cos(\xi_I) \, , \ee and the scalars, in the notation of \cite{Chow:2011fh} are given by \begin{equation}} \newcommand{\ee}{\end{equation} X_1 = H_1^{-5/8} H_2^{3/8}\,, \qquad X_2 = H_1^{3/8} H_2^{-5/8} \,. \ee The configuration with spherical slicing first appeared in \cite{Chow:2011fh}, and the solution presented here is its generalization to hyperbolic slicing. However, the origin of the original configuration as a solution of a supergravity theory was unclear. It is easy to verify that the configuration is a solution to the equations of motion of F(4) gauged supergravity coupled to one vector multiplet, which are reported in \cite{Suh:2018szn}. One first truncates the theory to the $\mathrm{U}(1) \times \mathrm{U}(1)$ sector, as was done in \cite{Karndumri:2015eta}, obtaining the Lagrangian \cite[(3.2)]{Suh:2018szn}. One can then see that the field $\varphi_1$ can be consistently set to zero. Moreover, since all the field strengths are electric, there is no source term for the field $B_{\mu\nu}$, hence the latter can be set to zero as well. The remaining fields in our solutions can be mapped to those in \cite{Hosseini:2018usu,Suh:2018szn} via% \footnote{The field we call $\phi_3$ and $F_{i1}$ coincides respectively with $\phi_2$ and $F_6$ of \cite{Suh:2018szn}.} \begin{equation} \begin{aligned}} \newcommand{\eea}{\end{aligned} \end{equation} & F_1 = \mathrm{d} A_1= F_3 -F_{i1} \, , \qquad && F_2 = \mathrm{d} A_2 =F_3 + F_{i1} \, , \\ & X_1 = e^{\sigma-\phi_3} \, , && X_2 = e^{\sigma + \phi_3} \, . \eea With this mapping, and once we impose the truncations described above, one can show that the equations of motion are solved. The gauging parameters $g,m$ are set to $g=3m$ and $m=1/(3 \sqrt2)$, justifying the factor $2/9$ in the warp factor $f(r)$ in \eqref{6D_warp}. The BPS branch is obtained, as usual, by setting $\mu =0$ and $q_I = \mathrm{i} b_I$. The solution is $1/2$ BPS, and its Killing spinor is explicitly constructed in \ref{6D:BPS:proof}. These solutions, once a Wick rotation to Euclidean spacetime is performed and setting $b_1=b_2$, reduce to those considered in \cite{Hama:2014iea,Alday:2014fsa}. 
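As a small consistency check of the normalization of $f(r)$ (and of the value $l_{\text{AdS}}^2=9/2$ used in the matching below), note that in the neutral limit, $\mu=0$ and $b_1=b_2=0$, the horizon of the hyperbolic black hole sits exactly at the AdS radius, $r_+=3/\sqrt2$, in analogy with the four-dimensional warm-up where the neutral solution has $r_+=l_{\text{AdS}}$. A one-line symbolic check (ours):
\begin{verbatim}
import sympy as sp

r = sp.symbols('r', positive=True)

# neutral limit: calH = 1 and f(r) = -1 + (2/9) r^2
rp = sp.solve(-1 + sp.Rational(2, 9)*r**2, r)[0]
print(rp, sp.simplify(rp - 3/sp.sqrt(2)))   # 3*sqrt(2)/2, 0
\end{verbatim}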
\subsection[Supersymmetric Renyi entropy]{Supersymmetric R{\'e}nyi entropy \label{SRE6dROM}} As in the previous case, we start the procedure by computing the period of the Euclidean time circle, namely the temperature of the hyperbolically sliced black hole. Given the expression for the warp factor \eqref{6D_warp}, we have \begin{equation}} \newcommand{\ee}{\end{equation} T=-\frac{\left(4 b_1 b_2+ b_1 r_+^3+ b_2 r_+^3-2 r_+^6\right)}{6 \sqrt2 \pi r_+^2 \sqrt{ b_1+r_+^3} \sqrt{ b_2+r_+^3}} \, . \ee Once we impose that the gauge field vanishes at the black hole horizon, we introduce the chemical potentials $\phi_I$, $I=1,2,$ as the asymptotic value of the gauge fields \eqref{AI6}. We obtain \begin{equation}} \newcommand{\ee}{\end{equation} \phi_I = - \frac32 \frac{q_I}{b_I + r_+^3} = -\frac32 \frac{\mathrm{i} b_I}{b_I + r_+^3} \, , \qquad I = 1,2 \, , \ee where in the second equality we have used the BPS relation $q_I = \mathrm{i} b_I$. The temperature can then be rewritten as \begin{equation}} \newcommand{\ee}{\end{equation} T=\frac{1}{\sqrt2 \pi} \frac{1- \mathrm{i} (\phi_1+ \phi_2) }{\sqrt{3-2 \mathrm{i} \phi_1} \sqrt{3-2 \mathrm{i} \phi_2}} \? r_+ \, . \ee By equating $T = T_0/n= 1 / (2 \pi n )$, we obtain an expression for $r_+$ in terms of the chemical potentials and the R{\'e}nyi parameter $n$: \begin{equation}} \newcommand{\ee}{\end{equation} \label{rplus6} r_+ = \frac{ \sqrt{3- 2 \mathrm{i} \phi_1} \sqrt{3-2 \mathrm{i} \phi_2}}{ \sqrt2 n(1 - \mathrm{i} ( \phi_1+ \phi_2))} \, , \ee taking into account once more that these quantities are related via \begin{equation}} \newcommand{\ee}{\end{equation} \label{constr_chem6} \phi_1 + \phi_2 = \frac{ \mathrm{i} (1 \pm n)}{n} \, . \ee As explained in the previous section, we choose the lower sign so that the configuration reduces to a neutral black hole for $n=1$. The renormalized on-shell action can be computed easily (see appendix \ref{AppB}) by imposing supersymmetry. Using $\textbf{c} = \sqrt2/3$, we obtain \begin{equation}} \newcommand{\ee}{\end{equation} \label{onsh6} I = \frac{\beta \mathrm{Vol}(\mathbb{H}^4)}{8 \pi \textbf{c} G_6} \left( - r_+^3 - \frac{\mu}{2} \right) = - \frac{3 n}{4 \sqrt2 G_6} \mathrm{Vol}(\mathbb{H}^4) r_+^3 \, . \ee This is consistent with the result of \cite{Alday:2014fsa}, which is valid in the absence of vector multiplets. \eqref{onsh6} combined with the previous expression, \eqref{rplus6} for $r_+$ yields \begin{equation}} \newcommand{\ee}{\end{equation} \label{fin_grav_6} I = \frac{ \pi^2 n}{\sqrt2 G_6} \left( \frac{ \sqrt{3- 2 \mathrm{i} \phi_1} \sqrt{3-2 \mathrm{i} \phi_2}}{ \sqrt2 n(1 - \mathrm{i} ( \phi_1+ \phi_2))} \right)^3 , \ee supplemented by the constraint \eqref{constr_chem6} between the chemical potentials. We have also used the normalized volume $ \mathrm{Vol}(\mathbb{H}^4) = 4 \pi^2 /3 $ \cite{Alday:2014fsa}. \subsection[Holographic matching]{Holographic matching} We recall the expression that relates the asymptotic value of the bulk gauge field to the corresponding dual quantities: \begin{equation}} \newcommand{\ee}{\end{equation} A_{\text{bulk}}^I (r \to \infty) = \phi_I \mathrm{d} t = \big( A^I({\mathbb{S}^5_n}) +A_{\text{bulk}}^{(R)} \big) \mathrm{d} \tau\, . \ee Recall that, on the field theory side, the R-symmetry background gauge field has the expression \eqref{Rsymm}. 
The corresponding chemical potential in the supergravity notation reads \begin{equation}} \newcommand{\ee}{\end{equation} A^{(R)}_{\text{bulk}} = \frac{\phi_1 + \phi_2}{2} \mathrm{d} t = \mathrm{i} \frac{1-n}{2n} \mathrm{d} t = \frac{ 1-n}{2n} \mathrm{d} \tau\, . \ee We are ready now to make contact with the field theory chemical potentials. Indeed, the bulk fields correspond to \eqref{6d_mapping}, which are related to $\Delta_I$ via \eqref{demo}, shifted by the amount $(1-n) / (2n)$ due to the R-symmetry connection, resulting in \begin{equation}} \newcommand{\ee}{\end{equation} A^I= \left( \Delta_I -1 \right) \left( \frac{2n+1}{2n}\right) + \frac{1-n}{2n} = \frac32 \left(\frac{(1+2n)\Delta_I}{3n} -1 \right) . \ee Therefore, we have \begin{equation}} \newcommand{\ee}{\end{equation} \label{mapphi} \phi_I = \mathrm{i} \frac32 \left(\frac{\Delta_I (2 n+1)}{3 n} -1\right) , \qquad I =1,2 \, . \ee Notice that taking the sum over the index $I$ and using \eqref{constr_chem6} we get the relation $\Delta_1 +\Delta_2 = 2$. Taking into account \eqref{mapphi}, noting that $l_{\text{AdS}}^2= 9/2$, and using the relation \cite{Jafferis:2012iv} \begin{equation}} \newcommand{\ee}{\end{equation} \frac{l_{\text{AdS}}^4}{G_6} = \frac{27 \sqrt2 }{ \sqrt{8-N_f}} \frac{N^{5/2}}{5 \pi} \, , \ee the gravitational on-shell action in \eqref{fin_grav_6} yields exactly \begin{equation}} \newcommand{\ee}{\end{equation} \label{final_6D} I =\frac{ \sqrt2 \pi N^{5/2}}{15 \sqrt{8-N_f} } \frac{(2 n+1)^3}{n^2} (\Delta_1 \Delta_2)^{3/2} \,, \qquad \Delta_1+ \Delta_2 =2 \, . \ee This perfectly agrees with the prediction from the field theory \eqref{S^5:Delta}, once we set $\vec{\omega} = (1,1,1/n)$. In the absence of flavor symmetry (or masses), we obtain the result of the minimal case. Indeed, imposing $\Delta_1 = \Delta_2 =1$ we retrieve the result of \cite{Hama:2014iea,Alday:2014fsa}, which reads \begin{equation}} \newcommand{\ee}{\end{equation} \label{res_min} I =\frac{ \sqrt2 \pi \? (2 n+1)^3 N^{5/2}}{15 n^2 \sqrt{8-N_f}} = - \log Z_{\mathbb{S}_n^5} \,. \ee One can easily work out the value of $S_n^{\text{SRE}}$ as \begin{equation}} \newcommand{\ee}{\end{equation} \label{sq6} S_n^{\text{SRE}} = \frac{n \log Z_{\mathbb{S}^5} - \log Z_{\mathbb{S}_n^5}}{n-1}= \frac{19n^2+7n+1}{27n^2} S_1 \, , \qquad S_1 = \log Z_{\mathbb{S}^5} \, . \ee \section{Concluding remarks} Following the work on magnetically charged AdS$_4$ black holes in \cite{Benini:2015eyy}, intense efforts have been put into the holographic computation of entropy for BPS black holes with compact horizons, using localization (see \cite{Zaffaroni:2019dhb,Hosseini:2018qsx} and references within). Some of the computations involve a rather subtle treatment of the matrix integrals which compute the relevant SCFT partition function. For instance, progress has been made on the longstanding problem of computing the entropy of rotating BPS black holes in AdS$_5$ from the superconformal index of $\mathcal{N}=4$ SYM using such a treatment \cite{Benini:2018ywd}. Our computation is somewhat similar, the black holes in question having no magnetic flux, but does not involve the same subtleties. This may be due to the observation, made in \cite{Cabo-Bizet:2018ehj}, that the Killing spinors relevant to the computation in the bulk, and hence in the SCFT, should be anti-periodic in the Euclidean time direction. 
While this can be arranged for partition functions like the one used to compute the superconformal index \cite{Closset:2017zgf}, it arises naturally in the context of the SRE, \textit{i.e.}\,the hyperbolic index, when viewed as a Weyl transformation of the branched sphere. This fact still awaits a satisfactory physical explanation. Regarding possible future directions, it would be interesting to incorporate magnetic charges in the black hole background, and to compare the resulting free energy with the corresponding field theory computation generalized to include magnetic fluxes. Moreover, one could compute the subleading $N$ corrections to the supersymmetric R{\'e}nyi entropy and compare with the supergravity computation, along the lines of \cite{Nian:2017hac}. Finally, it would be interesting to investigate in our setup the expansion of the SRE around $n=1$. In \cite{Closset:2012ru,Perlmutter:2013gua,Nishioka:2013haa} it was found that the first correction to the entanglement entropy is proportional to the coefficient of the stress tensor vacuum two-point function, and it would be interesting to find the interpretation of this statement in the supergravity picture. We hope to come back to these points in the future. \section*{Acknowledgements} We would like to thank Laura Andrianopoli, Davide Cassani, Martin Fluder, M\'ark Mezei, Ioannis Papadimitriou, and Julian Sonner for discussions, and Kiril Hristov, Tatsuma Nishioka, and Alberto Zaffaroni for carefully reading a first draft of the manuscript. The work of SMH was supported by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. CT acknowledges support from the NSF Grant PHY-1125915 and the Agence Nationale de la Recherche (ANR) under the grant Black-dS-String (ANR-16-CE31-0004), and would like to thank the Simons Center for Geometry and Physics, Stony Brook University, Kavli IPMU, and Universit\`a di Parma for hospitality during various stages of this work. The work of IY was financially supported by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 754496 - FELLINI.
\section{Introduction}\label{sec:introduction} For the numerical simulation of highly rarefied plasma flows, a fully kinetic modelling of Boltzmann's equation complemented by Maxwell's equations is necessary. For this purpose, a particle code that combines the PIC (Particle in Cell) and DSMC (Direct Simulation Monte Carlo) methods has been developed at IAG (Institute of Aerodynamics and Gas Dynamics) and IRS (Institute of Space Systems) in recent years \cite{Munz2014}. Particle codes are inherently numerically expensive and thus are an excellent application for parallel computing. The modelling of the Maxwell-Vlasov equations (PIC solver) has been described in previous reports~\cite{Stock_etal:2011, ortwein201401, copplestone:HLRS_2016}. In the present report we focus our attention on rarefied, non-equilibrium, neutral gas flows, which are typical of atmospheric entry conditions at high altitude and are simulated using the DSMC part of the coupled code PICLas. The inhomogeneous particle distribution throughout the domain leads to strong load imbalances. These are reduced through load balancing, for which different load distribution algorithms are investigated. The physical basis of the coupled solver is the approximation of Boltzmann's equation \begin{equation}\label{eq:Boltzmann_equation} \left(\frac{\partial}{\partial t}+\textbf{v}\cdot\nabla+\frac{1}{m^s} \textcolor{black}{\textbf{F}} \cdot\nabla_{\textbf{v}}\right)f^s(\textbf{x},\textbf{v},t)=\frac{\partial f}{\partial t}\bigg|_{\mathrm{coll}}~, \end{equation} which covers basic particle kinetics, where $f^s(\mathbf{x},\mathbf{v},t)$ is the six-dimensional Particle Distribution Function (PDF) in phase-space for each species $s$ with mass $m^s$. It describes the number of particles per unit volume, which are found at a certain point $(\vec{x},\vec{v})$ in phase-space and time $t$. The left hand side of~\eqref{eq:Boltzmann_equation}, where $\textbf{F}$ is an external force field, is solved using a deterministic Particle-in-Cell~\cite{hockney198801} method, while the right hand side, where the collision integral $\frac{\partial f}{\partial t}\big|_{\mathrm{coll}}$ accounts for all particle collisions in the system, is solved by applying the non-deterministic DSMC~\cite{bird199401} method. The PDF is approximated by summing up a certain number of weighted particles $N_{\mathrm{part}}$ and is given by \begin{equation*} f^s(\textbf{x},\textbf{v},t)\approx\sum_{n=1}^{N_\mathrm{part}} w_\mathrm{n} \delta\left(\textbf{x}-\textbf{x}_\mathrm{n}\right)\delta\left(\textbf{v}-\textbf{v}_\mathrm{n}\right), \end{equation*} where the $\delta$-function is applied to position and velocity space, separately, and the particle weighting factor $w_\mathrm{n}=N_{\text{phy}}/N_{\text{sim}}$ describes the ratio of physical to simulated particles. The DSMC method is briefly reviewed in Section~\ref{sec:DSMC}. In Section~\ref{sec:bluntedCone}, the numerical setup and results of the simulation of the flow around a 70$^\circ$ blunted cone geometry are presented. The load-distribution algorithms and the parallel performance of the DSMC code are investigated in detail in Section~\ref{sec:parallel}, followed by a summary and conclusion in Section~\ref{sec:conclusions}. \section{DSMC Solver}\label{sec:DSMC}\label{sec:dsmc} The DSMC method approximates the right hand side of Eq.~\eqref{eq:Boltzmann_equation} by modelling binary particle collisions in a probabilistic and transient manner.
The main idea of the DSMC method is the non-deterministic, statistical calculation of changes in particle velocity utilizing random numbers in a collision process. Additionally, chemical reactions may occur in such collision events. The primordial concept of DSMC was developed by Bird~\cite{bird199401} and is commonly applied to the simulation of rarefied and neutral gas flows. The collision operator in Eq.~\eqref{eq:Boltzmann_equation} is given by \begin{equation} \begin{split} \frac{\partial f}{\partial t}\big|_{\mathrm{coll}}=\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\\ \int W(\textbf{v}_1,\textbf{v}_2,\textbf{v}_3,\textbf{v}_4)\lbrace f(\textbf{x},\textbf{v}_1,t) f(\textbf{x},\textbf{v}_2,t)- f(\textbf{x},\textbf{v}_3,t) f(\textbf{x},\textbf{v}_4,t) \rbrace d\textbf{v}_1 d\textbf{v}_2 d\textbf{v}_3~, \end{split} \end{equation} where $W$ represents the probability per unit time in which two particles collide and change their velocities from $\textbf{v}_1$ and $\textbf{v}_2$ to $\textbf{v}_3$ and $\textbf{v}_4$, respectively. However, the DSMC method does not solve this collision integral directly, but rather applies a phenomenological approach to the collision process of simulation particles in a statistical framework. A single standard DSMC time step is depicted schematically in Fig.~\ref{fig:DSMC_cycle}. \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{DSMC-schema.pdf} \caption{Schematic of the standard DSMC method.} \label{fig:DSMC_cycle} \end{figure} First, a particle pair for the collision process is found by examining each cell and applying a nearest neighbour search with an octree based pre-sorting. An alternative method is the random pairing of all particles in each cell, but with additional restrictions to the cell size. The collision probability is modelled by choosing a cross section for each particle species using microscopic considerations. As with the PIC method, macro particles are simulated instead of real particles to reduce computational effort. The collision probability of two particles, $1$ and $2$, is determined by methods found in~\cite{bird199401,baganoff1990}, which yields \begin{equation} P_{12}=\frac{N_{p,1}N_{p,2}}{1+\delta_{12}}w\frac{\Delta t}{V_cS_{12}}(\sigma_{12}g_{12})~, \end{equation} where $\delta_{12}$ is the Kronecker delta, $V_\mathrm{c}$ the cell volume, $\Delta t$ the time step, $\sigma$ the cross section, $S_{12}$ the number of particle pairs of species $1$ and $2$ in $V_\mathrm{c}$ and $g$ the relative velocity between the two particles considered. This probability is compared to a pseudo random number $R\in [0,1)$ and if $R<P_{12}$, the collision occurs, otherwise it does not. Subsequent events such as chemical reactions or relaxation processes are computed in the same manner, but using additional probabilities. This may change the internal energy of particles, i.e. their rotational, vibrational energy and electronic excitation. Chemical reactions are modelled via the Arrhenius law or quantum-kinetic considerations, which lead to dissociation, recombination, exchange reactions or ionization. Macroscopic properties like temperature or density are calculated by sampling particle positions and velocities over time within each cell. A major requirement for a physical DSMC simulation is the ratio of the mean collision separation distance to the mean free path in each cell \begin{equation} \frac{l_\mathrm{mcs}}{\lambda} \overset{!}{<} 1. 
\end{equation} The former represents the distance of two simulation particles that perform a collision, while the latter is a function of the gas density. The ratio can be modified by the weighting factor $w_\mathrm{n}$ as introduced in Section \ref{sec:introduction}, which then directly depends on the local number density \begin{equation} w < \frac{1}{\left(\sqrt{2}\pi d_\mathrm{ref}^2 n^{2/3}\right)^3}, \end{equation} where $d_\mathrm{ref}$ is a species-specific reference diameter. \section{Test Case: $70^\circ$ Blunted Cone}\label{sec:bluntedCone} A popular validation case for rarefied gas flows is the wind tunnel test of the $70^\circ$ blunted cone in a diatomic nitrogen flow at a Mach number of $M=20$~\cite{Allegre1997}. The geometry of the model is depicted in Fig.~\ref{fig:70degCone_geometry}. Positions of the heat flux measurements are depicted by the numbers 1-9. While the experiments were conducted at different rarefaction levels and angles of attack, the case denoted by Set 2 and $\alpha=30^\circ$ is used for the investigation. The free-stream conditions and simulation parameters are given in Table~\ref{tab:70degCone_freestream}. Half of the fluid domain was simulated to exploit the symmetry in the $xy$-plane. \begin{figure}[tb] \centering \begin{tikzpicture} \begin{axis}[% extra description/.code={% font=\small \node[anchor=north east, fill=white] at (190,80) {% \begin{tabular}{@{}cc@{}} \hline\noalign{\smallskip} \# & $S/R_\mathrm{n}$ $\left[-\right]$\\ \noalign{\smallskip}\hline\noalign{\smallskip} 1 & 0.00 \\ 2 & 0.52 \\ 3 & 1.04 \\ 4 & 1.56 \\ 5 & 2.68 \\ 6 & 3.32 \\ 7 & 5.06 \\ 8 & 6.50 \\ 9 & 7.94 \\ \noalign{\smallskip}\hline \end{tabular} }; \node[anchor=north east, fill=white] at (-130,80) {% \small \begin{tabular}{@{}cc@{}} \hline\noalign{\smallskip} & [\si{\milli\meter}] \\ \noalign{\smallskip}\hline\noalign{\smallskip} $R_\mathrm{b}$ & 25.0 \\ $R_\mathrm{c}$ & 1.25 \\ $R_\mathrm{j}$ & 2.08 \\ $R_\mathrm{n}$ & 12.5 \\ $R_\mathrm{s}$ & 6.25 \\ \noalign{\smallskip}\hline \end{tabular} }; }, hide axis ] \pgftext{\includegraphics[width=0.65\linewidth]{70degCone}} \end{axis} \end{tikzpicture} \caption{Geometry of the $70^\circ$ blunted cone test case.}\label{fig:70degCone_geometry} \end{figure} \begin{table}[tb] \caption{Free-stream conditions of the $70^\circ$ blunted cone test case.}\label{tab:70degCone_freestream} \centering \begin{tabular}[tb]{@{}lcccccc@{}} \hline Case & $|\vec{v}_\infty|$ $\left[\si{\metre\per\second}\right]$ & $T_\infty$ [\si{\kelvin}] & $n_\infty$ [\si{\per\cubic\metre}] & $\Delta t$ $\left[\si{\second}\right]$ & $w$ $[-]$ & $N_\mathrm{part}$ $\left[-\right]$ \\ \hline Set 2 & 1502.4 & 13.58 & \num{1.115E+21} & \num{5e-8} & \num{2e10} & \num{2.84e+07} \\ \hline \end{tabular} \end{table} An exemplary simulation result is shown in Fig.~\ref{fig:70degCone_Set2_3D}. Here, the translational temperature in the symmetry plane and the velocity streamlines are shown. The simulation results are compared to the experimental measurements in terms of the heat flux in Fig.~\ref{fig:70degCone_Set2_Heatlfux}. Overall good agreement can be observed for the first four thermocouples, where the error is below $10\%$ and within experimental uncertainty~\cite{Allegre1997}. The agreement on the sting deteriorates for thermocouples further downstream to error values of up to $45\%$. 
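As a quick plausibility check of the cell-resolution requirement above, the following minimal Python sketch evaluates the weighting-factor bound for the free-stream number density of the Set 2 case in Table~\ref{tab:70degCone_freestream}. Note that the reference diameter used for N$_2$ is an assumed textbook value rather than a PICLas input, and that the bound becomes more restrictive in the compressed region in front of the heat shield, where the local number density exceeds the free-stream value.
\begin{verbatim}
import math

d_ref  = 4.17e-10   # m, assumed VHS reference diameter of N2 (not a PICLas input)
n_inf  = 1.115e21   # 1/m^3, free-stream number density of the Set 2 case
w_used = 2e10       # weighting factor used in the simulation

# bound w < (sqrt(2)*pi*d_ref^2*n^(2/3))^-3 so that the mean collision
# separation distance stays below the mean free path
w_max = (math.sqrt(2.0) * math.pi * d_ref**2 * n_inf**(2.0 / 3.0))**(-3)

print(f"w_max = {w_max:.2e}, w_used = {w_used:.1e}, satisfied: {w_used < w_max}")
\end{verbatim}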
\begin{figure}[p] \centering \begin{tikzpicture} \begin{axis}[ hide axis, axis equal image, width=14cm, xmin=0, xmax=1268, ymin=0, ymax=1699, colormap={rgb}{rgb255=(255,255,255) rgb255=(0,0,255) rgb255=(0,255,255) rgb255=(0,255,0) rgb255=(255,255,0) rgb255=(255,0,0) rgb255=(255,0,255)}, colorbar right, point meta min=15, point meta max=1250, colorbar style={ scaled y ticks = false, ytick={15,250,500,750,1000,1250}, height=2.5cm, at={(-0.1,0.525)}, anchor=south east, title={$T$ $[\si{\kelvin}]$}, }, colorbar/draw/.append code={ \begin{axis}[ colormap={rgb}{rgb255=(0,0,255) rgb255=(0,255,255) rgb255=(0,255,0) rgb255=(255,255,0) rgb255=(255,0,0)}, colorbar right, point meta min=0, point meta max=1500, every colorbar, at={(-0.1,0.125)}, ytick={0,500,1000,1500}, anchor=south east, width=0.5cm, colorbar shift, colorbar=false, title={$v$ $[\si{\metre\per\second}]$}, ] \pgfkeysvalueof{/pgfplots/colorbar addplot} \end{axis} } ] \addplot graphics [xmin=0,xmax=1268,ymin=0,ymax=1699] {70degCone_3D_Temp_15-1200_Velo_0-1500.png}; \end{axis} \end{tikzpicture} \caption{Exemplary simulation result: Translational temperature in the symmetry plane and velocity streamlines.}\label{fig:70degCone_Set2_3D} \end{figure} \begin{figure}[p] \centering \begin{tikzpicture} \tikzset{ small dot/.style={fill=black,circle,scale=0.3}, axis/.style={->, >=stealth'}, every pin edge/.style={draw=white}, every pin/.style={font=\footnotesize} } \begin{semilogyaxis}[ width=8cm, height=5.75cm, xlabel={$S/R_\mathrm{n}$ $[-]$}, ylabel={Heat flux $q_\mathrm{w}$ $\left[\si{\kilo\watt\per\square\metre}\right]$}, ymin=0.01, ymax=100, xmin=-0.3, xmax=8.3, xtick={0,1,...,8}, xlabel shift=-2 pt, legend style={ font=\footnotesize, legend cell align=left, at={(0.95,0.075)}, anchor=south east, fill=white, draw=black} ] \addplot+[ nodes near coords, point meta=explicit symbolic, nodes near coords align={anchor=south, above, yshift=3mm}, color=black, mark=x, only marks, font=\footnotesize] table [col sep=comma, x=position, y=alpha_30, meta=thermocouple] {Allegre1997_set2_heatflux.csv}; \addlegendentry{Experiment} \addplot[color=black] table [col sep=comma, x=SoverR, y=Heatflux_kW] {70degCone_Set2_alpha30_07ms.csv}; \addlegendentry{PICLas} \end{semilogyaxis} \end{tikzpicture} \caption{Comparison of measured and calculated heat flux.}\label{fig:70degCone_Set2_Heatlfux} \end{figure} \section{Parallelization of the DSMC Method} \label{sec:parallel} \subsection{Load Computation and Distribution} \label{sec:load_distribution} The code framework of PICLas utilizes implementations of the MPI 2.0 standard for parallelization. Load distribution between the MPI processes is a crucial step. A domain decomposition by grid elements was chosen as strategy. In a preprocessing step, all elements within the computational domain are sorted along a Hilbert curve due to its clustering property \cite{moon200101}. Then, each MPI process receives a certain segment of the space filling curve (SFC). To illustrate an optimal load balance scenario, a simplified grid is considered that consists of $8 \times 8 = 64$ elements, which are ordered along a SFC. Fig.~\ref{fig:hilbert-domain} depicts the decomposition of the grid into four regions, each corresponding to an individual MPI process when the number of processes is $N_{p}=4$. For inhomogeneous particle distributions or elements of significantly different size, the load has to be assigned carefully. 
In the DSMC method, the computational cost $L$ of each grid element is assumed to be linearly dependent on the number of contained particles. In an optimally balanced case, each process receives approximately the average load. \begin{figure} \def0.8\textwidth{0.44\textwidth} \centering \begin{tikzpicture}[scale=0.6,every node/.style={transform shape}] \foreach \i in {1,...,16} { \pgfmathtruncatemacro{\y}{(\i-1 ) / 4}; \pgfmathtruncatemacro{\x}{\i -1 - 4 * \y}; \pgfmathtruncatemacro{\label}{\x +7 * (7 - \y)}; \node[rectangle,draw=black,fill=white!80!orange,minimum size=30] () at (\x,\y) {}; } \foreach \i in {17,...,32} { \pgfmathtruncatemacro{\y}{(\i-1 ) / 4}; \pgfmathtruncatemacro{\x}{\i -1 - 4 * \y}; \pgfmathtruncatemacro{\label}{\x +7 * (7 - \y)}; \node[rectangle,draw=black,fill=white!80!blue,minimum size=30] () at (\x,\y) {}; } \foreach \i in {1,...,16} { \pgfmathtruncatemacro{\y}{(\i-1 ) / 4}; \pgfmathtruncatemacro{\x}{\i +3 - 4 * \y}; \pgfmathtruncatemacro{\label}{\x +7 * (7 - \y)}; \node[rectangle,draw=black,fill=white!80!green,minimum size=30] () at (\x,\y) {}; } \foreach \i in {17,...,32} { \pgfmathtruncatemacro{\y}{(\i-1 ) / 4}; \pgfmathtruncatemacro{\x}{\i +3 - 4 * \y}; \pgfmathtruncatemacro{\label}{\x +7 * (7 - \y)}; \node[rectangle,draw=black,fill=white!80!yellow,minimum size=30] () at (\x,\y) {}; } \hilbert((0mm,0mm),3) \end{tikzpicture} \caption{Domain decomposition for homogeneous load distribution.} \label{fig:hilbert-domain} \end{figure} Offset elements (i.e., an index $I$ along the SFC) define the assigned segment of a process. When the SFC describes the interval of $[1,N_{elem}]$, the segment of each process $p$ is defined by $[I(p)+1,I(p+1)]$ with $I(N_{p}+1)=N_{elem}$. Thus, the total load assigned to a single process results in: \begin{equation} L_{tot}^{p}=\sum_{i=I(p)+1}^{I(p+1)} L_{i} \end{equation} The main goal of a proper load distribution is to minimize the idle time of waiting processes, i.e., the maximum of all total, process-specific loads $L_{tot}^{p}$ needs to be minimized. To achieve that, several distribution methods are implemented in PICLas. \paragraph{Distribution by elements} Assuming a homogeneous particle population, a distribution only by elements is favourable, i.e., $L_{elem}=$const. This can be achieved by dividing the number of elements into: \begin{equation} N_{Elems}=N_{p}\cdot A+B,\quad A=\left \lfloor{\frac{N_{Elems}}{N_{p}}}\right \rfloor,\quad B=N_{Elems}~\mathrm{mod}~N_{p} \end{equation} Based on this, each process receives $A$ elements and the first $B$ processes an additional one, which can be calculated in a straightforward manner by: \begin{algorithm} \caption{Distribution by elements} \begin{algorithmic} \STATE $i_{p}\gets 1$ \WHILE{$i_{p}\leq N_{p}$} \STATE $I(i_{p})\gets A\cdot(i_{p}-1)+\mathrm{min}(i_{p}-1,B)$ \STATE $i_{p}\gets i_{p}+1$ \ENDWHILE \STATE $I(N_{p}+1)\gets N_{elem}$ \end{algorithmic} \end{algorithm} \paragraph{Simple load balance} The previous method is, however, not applicable if the elements have different loads, since a subdivision by element number does not necessarily correspond to the same fraction of the total load. Therefore, while looping through the processes along the SFC, each process receives in our ``simple'' balance scheme an iteratively increasing segment until the load gathered so far is equal to or greater than the ideal fraction. To ensure that the following processes receive at least one element each, the respective number of assignable elements must be reduced.
The algorithm reads as follows: \begin{algorithm} \caption{Simple load balance} \begin{algorithmic} \STATE $L_{tot}\gets 0$ \STATE $i_{elem}\gets 1$ \STATE $i_{p}\gets 1$ \WHILE{$i_{p}\leq N_{p}$} \STATE $I(i_{p})\gets i_{elem} - 1$ \STATE $j\gets i_{elem}$ \WHILE{$j\leq N_{elem} - N_{p} + i_{p} \quad\land\quad L_{tot} < \frac{i_{p}}{N_{p}}\cdot \sum_{k=1}^{N_{elem}} L_{k}$} \STATE $L_{tot}\gets L_{tot} + L_{j}$ \STATE $j\gets j+1$ \ENDWHILE \STATE $i_{elem}\gets j + 1$ \STATE $i_{p}\gets i_{p}+1$ \ENDWHILE \end{algorithmic} \end{algorithm} \paragraph{``Combing'' algorithm} The ``simple'' algorithm ensures a very smooth load distribution for large element numbers, since the current ideal fraction can be matched well by iteratively adding elements. However, if there exist elements with much higher loads than most of the remaining ones, this load distribution method fails. For such cases, we developed a smoothing algorithm that ``combs'' the offset elements along the SFC iteratively from the beginning towards the end. Here, only the main characteristics of the method are briefly described: \begin{itemize} \item The initial load distribution is taken, e.g., from the ``simple'' balance method. \item A large number of different distributions is evaluated in terms of the maximum process-total load $\mathrm{max}(L_{tot}^{p})$; the one with the minimum value is chosen as the final solution. \item If the maximum $L_{tot}^{p}$ belongs to a process $p$ with a greater SFC-index than the minimum one (maximum is ``right'' of the minimum), all offset elements are shifted accordingly to the left. \item Maxima are smoothed to the right, i.e., small $L_{tot}^{p}$-intervals are increased by shifting elements from maxima to minima. \item If the resulting optimum distribution was already reached before, elements are shifted from the last process towards the first one. \end{itemize} \subsection{Scaling performance of PICLas} For the test of parallelization, multiple simulations were run for a simulation time of \SI{1e-4}{\second}, corresponding to \num{2000} iterations. The speed-up between \num{720} and \num{5760} cores was calculated by \begin{equation} S_N=\frac{t_{720}}{t_N}. \end{equation} The respective parallel efficiency was determined by \begin{equation} \eta_N=\frac{720\cdot t_{720}}{N \cdot t_N}, \end{equation} where $t_{720}$ and $t_N$ are the computational times using 720 and $N$ cores, respectively. Fig.~\ref{fig:strongscale} shows the speed-up over the number of utilized cores and the respective parallel efficiency as a label. The case without actual load balancing (distribution by elements) and the distribution by particle number per element are compared against the ideal scaling behavior. The ``Combing'' algorithm resulted in the same performance values as the ``simple'' balance method; therefore, only the latter is displayed. The speed-up decreases with an increasing number of cores due to the more frequent communication between MPI processes. Nevertheless, a parallel efficiency of $\eta=0.87$ can be achieved using \num{5760} cores for the blunted cone test case.
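For illustration, the two simpler offset-element distribution strategies of Section~\ref{sec:load_distribution} can be sketched in a few lines of Python. The sketch uses zero-based element indices and a plain list of per-element loads; the function names are illustrative and not part of PICLas, and the simple-balance routine follows the prose description above rather than reproducing the pseudocode line by line.
\begin{verbatim}
def offsets_by_elements(n_elem, n_proc):
    """Distribution by elements: each process gets A = n_elem // n_proc
    elements, the first B = n_elem % n_proc processes one element more."""
    a, b = n_elem // n_proc, n_elem % n_proc
    return [a * p + min(p, b) for p in range(n_proc)] + [n_elem]

def offsets_simple_balance(loads, n_proc):
    """Simple load balance: grow each segment along the SFC until the
    accumulated load reaches the ideal fraction of the total load."""
    n_elem, total = len(loads), sum(loads)
    offsets, acc, j = [], 0.0, 0
    for p in range(n_proc):
        offsets.append(j)
        # keep at least one element for each of the remaining processes
        while j <= n_elem - n_proc + p and acc < (p + 1) / n_proc * total:
            acc += loads[j]
            j += 1
    return offsets + [n_elem]

# toy example: 64 SFC-ordered elements, load = particle number per element
loads = [1] * 48 + [50] * 16
print(offsets_by_elements(64, 4))        # [0, 16, 32, 48, 64]
print(offsets_simple_balance(loads, 4))  # heavy elements shift the offsets
\end{verbatim}
Process $p$ then owns the elements between two consecutive offsets, in analogy to the segments $[I(p)+1,I(p+1)]$ used above.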
\begin{figure}[htb] \def0.8\textwidth{0.8\textwidth} \centering \begin{tikzpicture} \begin{axis}[ font=\footnotesize, width=0.8\textwidth, height=7cm, grid = minor, grid=both, grid style={dotted}, ymin = 0, ymax = 9.5, xlabel={$N_{\mathrm{proc}}$~[-]}, ylabel={Speed-up $S$~[-]}, ylabel style={at={(0.04,0.7)}, anchor=east}, legend cell align=left, legend pos= south east, legend style={at={(0.1,0.7)},anchor=south west} ] \addplot[mark=*, visualization depends on=720*366/(\thisrowno{0}*\thisrowno{1}) \as \labela, color=red, only marks, nodes near coords=\pgfmathprintnumber{\labela}, every node near coord/.append style={ shift={(axis direction cs:0,-1.1)} } ] table[header=false,x expr=\thisrowno{0},y expr=366/\thisrowno{1}, col sep=comma] {./wo_scal.csv}; \addlegendentry{Distribution by elements} \addplot[mark=o, visualization depends on=720*111/(\thisrowno{0}*\thisrowno{1}) \as \labela, color=blue, only marks, nodes near coords=\pgfmathprintnumber{\labela}, every node near coord/.append style={ shift={(axis direction cs:0, 0.1)} } ] table[header=false,x expr=\thisrowno{0},y expr=111/\thisrowno{1}, col sep=comma] {./case0.csv}; \addlegendentry{Simple load balance} \addplot[mark=none, color=black] table[header=false,x expr=\thisrowno{0},y expr=\thisrowno{0}/720, col sep=comma] {./wo_scal.csv}; \addlegendentry{Ideal} \end{axis} \end{tikzpicture} \caption{Parallel performance of the blunted cone test case between 720 and 5760 cores. Speed-up $S$ with labelled parallel efficiency $\eta$.}\label{fig:strongscale} \end{figure} \section{Summary and Conclusions}\label{sec:conclusions} The hypersonic flow around a $70^\circ$ blunted cone was simulated with the Direct Simulation Monte Carlo method. The case features complex flow phenomena such as a detached compression shock in front and rarefied gas flow in the wake of the heat shield. A comparison of the simulation results with the experimentally measured heat flux yielded good agreement. The test case was utilized to perform a strong scaling of the DSMC implementation of PICLas. With regard to the computational duration on \num{720} cores, a parallel efficiency of $99\%$ to $87\%$ could be achieved for \num{1440} and \num{5760} cores, respectively. The decrease in parallel efficiency can be explained by an increasing MPI communication effort. Currently, the implementation of CPU-time measurements into PICLas is being investigated in order to calculate the element loads directly instead of simply weighting by particle number; this will be the focus of future reports. \section{Acknowledgements} \label{sec:acknowledgements} We gratefully acknowledge the Deutsche Forschungsgemeinschaft (DFG) for funding within the projects ``Kinetic Algorithms for the Maxwell-Boltzmann System and the Simulation of Magnetospheric Propulsion Systems'' and ``Coupled PIC-DSMC-Simulation of Laser Driven Ablative Gas Expansions''. The latter is a sub-project of the Collaborative Research Center (SFB) 716 at the University of Stuttgart. The authors also wish to thank the Landesgraduier\-tenf\"{o}rderung Baden-W\"{u}rttemberg for supporting the research. Computational resources have been provided by the H\"ochst\-leistungs\-rechen\-zentrum Stuttgart (HLRS). \bibliographystyle{plain}
\section{Introduction}\label{s.intro} The interaction of a short wave $u=u(x,t)$ and a long wave $v=v(x,t)$ in fluid mechanics (and plasma physics) is governed by the Schr\"odinger - Korteweg-de Vries (NLS-KdV) system \begin{equation}\label{e.nls-kdv} \begin{cases} i\partial_tu + \partial_x^2u = \alpha uv + \beta |u|^2u,& t\in {\mathbb R},\\ \partial_tv + \partial_x^3v + \tfrac{1}{2}\partial_x(v^2) = \gamma \partial_x(|u|^2),\\ u(x,0)=u_0(x),\;v(x,0)=v_0(x), \end{cases} \end{equation} where $u=u(x,t)$ is a complex-valued function, $v=v(x,t)$ is a real-valued function and $\alpha,\;\beta,\;\gamma$\; are real constants.\footnote{The case $\beta = 0$ of the NLS-KdV system occurs in the study of the resonant interaction between short and long capillary-gravity waves on water channels of uniform finite depth and in a diatomic lattice system. For more details about these physical applications, see~\cite{Funakoshi}, \cite{Nishikawa}, \cite{Kawahara} and \cite{Yajima-Satsuma}.} This motivates the study of the local and global well-posedness of the Cauchy problem for the NLS-KdV system with rough initial data.\footnote{Benilov and Burtsev in~\cite{Benilov} showed that the NLS-KdV is not completely integrable. In particular, the solvability of~(\ref{e.nls-kdv}) depends on the theory of non-linear dispersive equations.} The central theme of this paper is the local and global well-posedness theory of the NLS-KdV system in the periodic setting (i.e., $x\in\mathbb T$); but, in order to motivate our subsequent results, we recall some known theorems in the non-periodic setting. In the continuous context (i.e., $x\in\mathbb R$), Corcho and Linares~\cite{Corcho} showed the local well-posedness of the NLS-KdV for initial data $(u_0,v_0)\in H^k(\mathbb R)\times H^s(\mathbb R)$ with $k\geq 0$, $s>-3/4$ provided that $k-1\leq s\leq 2k-\frac{1}{2}$ for $k\leq 1/2$ and $k-1\leq s < k+\frac{1}{2}$ for $k>1/2$. It is worth pointing out that the lowest regularity obtained by Corcho and Linares is $k=0$ and $s=-\frac{3}{4} +$. In the non-resonant case $\beta\neq 0$, it is reasonable to expect that the NLS-KdV is locally well-posed in $L^2\times H^{-\frac{3}{4}+}$: the nonlinear Schr\"odinger (NLS) equation with cubic term $(|u|^2 u)$ is globally well-posed in $H^s(\mathbb R)$ for $s\geq 0$ and ill-posed below $L^2(\mathbb R)$; similarly, the Korteweg-de Vries (KdV) equation is globally well-posed in $H^s(\mathbb R)$ for $s>-3/4$ and ill-posed in $H^s(\mathbb R)$ for $-1\leq s < -3/4$. Also, using three conserved quantities for the NLS-KdV flow, M. Tsutsumi~\cite{MTsutsumi} showed global well-posedness for initial data $(u_0,v_0)\in H^{s+\frac{1}{2}}(\mathbb R)\times H^s(\mathbb R)$ with $s\in\mathbb Z_+$ and Corcho and Linares~\cite{Corcho}, assuming $\alpha\gamma>0$, showed global well-posedness in the energy space $H^1(\mathbb R)\times H^1(\mathbb R)$.\footnote{Pecher~\cite{Pecher} announced the global well-posedness of the NLS-KdV system (with $\alpha\gamma>0$) in the continuous setting for initial data $(u_0,v_0)\in H^s(\mathbb R)\times H^s(\mathbb R)$, for $3/5<s<1$ in the resonant case $\beta=0$ and $2/3<s<1$ in the non-resonant case $\beta\neq 0$.
The proof is based on two refined bilinear estimates and the I-method of Colliander, Keel, Staffilani, Takaoka and Tao.} The point of view adopted by Corcho and Linares in order to prove their local well-posedness result is to use a basic strategy to treat, in both continuous and periodic contexts, the low-regularity study of dispersive equations (such as NLS and KdV): they consider the Fourier restriction norm method introduced by Bourgain in~\cite{Bourgain}; then, they show two new mixed bilinear estimates for the coupling terms of the NLS-KdV system (namely, $uv$ and $\partial_x(|u|^2)$) in certain Bourgain spaces, which implies that an equivalent integral equation can be solved by Picard's fixed point method (in other words, the operator associated to the integral equation is a contraction in certain Bourgain spaces). Coming back to the periodic setting, before stating our results, we advance that, although our efforts are to obtain similar well-posedness theorems, the periodic case is more subtle than the continuous context: since the cubic NLS is globally well-posed (resp., ill-posed) in $H^s(\mathbb T)$ for $s\geq 0$ (resp. $s<0$) and the KdV is globally well-posed (resp., ill-posed) in $H^s(\mathbb T)$ for $s\geq -1/2$ (resp., $s<-1/2$), it is reasonable to expect $L^2(\mathbb T)\times H^{-1/2}(\mathbb T)$ as the lowest regularity for the local well-posedness results; but, surprisingly enough, the endpoint of the bilinear estimates for the coupling terms $uv$, $\partial_x(|u|^2)$ in the periodic setting is $(k,s)=(1/4, 0)$, i.e., our lowest regularity is $H^{1/4}\times L^2$ (see the propositions~\ref{p.uv},~\ref{p.du2}, theorem~\ref{t.A} and remark~\ref{r.1} below). We refer the reader to the section~\ref{s.remarks} for a more detailed comparison between the well-posedness results for the NLS-KdV system in the periodic and non-periodic settings (as well as a couple of questions motivated by this discussion). Now, we introduce some notation. Let $U(t) = e^{it\partial_x^2}$ and $V(t) = e^{-t\partial_x^3}$ be the unitary groups associated to the linear Schr\"odinger and the Airy equations, respectively. Given $k,s,b\in\mathbb R$, we define the spaces $X^{k,b}$ and $Y^{s,b}$ via the norms \begin{equation*} \begin{split} \|f\|_{X^{k,b}} &:= \left(\sum\limits_{n\in\mathbb Z}\int_{\mathbb R} \langle n \rangle^{2k} \langle \tau+n^2 \rangle^{2b} |\widehat{f}(n,\tau)|^2\, d\tau\right)^{1/2} \\ &= \|U(-t) f\|_{H_t^b(\mathbb R,H_x^k)} \end{split} \end{equation*} \begin{equation*} \begin{split} \|g\|_{Y^{s,b}} &:= \left(\sum\limits_{n\in\mathbb Z}\int_{\mathbb R} \langle n \rangle^{2s} \langle\tau-n^3\rangle^{2b} |\widehat{g}(n,\tau)|^2\, d\tau\right)^{1/2} \\ &= \|V(-t) g\|_{H_t^b(\mathbb R,H_x^s)} \end{split} \end{equation*} where $\langle\cdot\rangle:= 1+|\cdot|$ and $\widehat{f}$ is the Fourier transform of $f$ in both variables $x$ and $t$: \begin{equation*} \widehat{f}(n,\tau) = (2\pi)^{-1}\int_{\mathbb R\times\mathbb T} e^{-it\tau} e^{-ixn} f(x,t) dtdx \end{equation*} and, given a time interval $I$, we define $X^{k,b}(I)$ and $Y^{s,b}(I)$ via the (restriction in time) norms \begin{equation*} \|f\|_{X^{k,b}(I)} = \inf\limits_{\widetilde{f}|_I = f} \|\widetilde{f}\|_{X^{k,b}} \quad \textrm{and} \quad \|g\|_{Y^{s,b}(I)} = \inf\limits_{\widetilde{g}|_I = g} \|\widetilde{g}\|_{Y^{s,b}} \end{equation*} The study of periodic dispersive equations (e.g., KdV) has been based around iteration in the Bourgain spaces (e.g., $Y^{s,b}$) with $b=1/2$.
Since we are interested in the continuity of the flow associated to the NLS-KdV system and the Bourgain spaces with $b=1/2$ do not control the $L_t^{\infty}H_x^s$ norm, we consider the slightly smaller spaces $\widetilde{X}^{k}$, $\widetilde{Y}^s$ defined by the norms \begin{equation*} \|u\|_{\widetilde{X}^{k}}:= \|u\|_{X^{k,1/2}} + \|\langle n\rangle^k \widehat{u}(n,\tau)\|_{L_n^2 L_{\tau}^1} \quad \textrm{and} \quad \|v\|_{\widetilde{Y}^{s}}:= \|v\|_{Y^{s,1/2}} + \|\langle n\rangle^s \widehat{v}(n,\tau)\|_{L_n^2 L_{\tau}^1} \end{equation*} and, given a time interval $I$, we define the spaces $\widetilde{X}^{k}(I)$, $\widetilde{Y}^s(I)$ via the restriction in time norms \begin{equation*} \|f\|_{\widetilde{X}^k(I)} = \inf\limits_{\widetilde{f}|_I = f} \|\widetilde{f}\|_{\widetilde{X}^k} \quad \textrm{and} \quad \|g\|_{\widetilde{Y}^s(I)} = \inf\limits_{\widetilde{g}|_I = g} \|\widetilde{g}\|_{\widetilde{Y}^s} \end{equation*} Also, we introduce the companion spaces $Z^k$ and $W^s$ via the norms \begin{equation*} \|u\|_{Z^k}:= \|u\|_{X^{k,-1/2}} + \left\|\frac{\langle n\rangle^k \widehat{u}(n,\tau)}{\langle\tau+n^2\rangle}\right\|_{L_n^2 L_{\tau}^1} \quad \textrm{and} \quad \|v\|_{W^{s}}:= \|v\|_{Y^{s,-1/2}} + \left\|\frac{\langle n\rangle^s \widehat{v}(n,\tau)}{\langle\tau-n^3\rangle}\right\|_{L_n^2 L_{\tau}^1} \end{equation*} Denote by $\psi$ a non-negative smooth bump function supported in $[-2,2]$ with $\psi = 1$ on $[-1,1]$ and $\psi_{\delta}(t):=\psi(t/ \delta)$ for any $\delta>0$. Also, let $a\pm$ be a number slightly larger (resp., smaller) than $a$. At this point, we are ready to state our main results. The fundamental technical propositions are the following two sharp bilinear estimates for the coupling terms of the NLS-KdV system: \begin{proposition}\label{p.uv}For any $s\geq 0$ and $k-s\leq 3/2$, \begin{equation}\label{e.uv} \|uv\|_{Z^k}\lesssim \|u\|_{X^{k,\frac{1}{2}-}}\|v\|_{Y^{s,\frac{1}{2}}} + \|u\|_{X^{k,\frac{1}{2}}}\|v\|_{Y^{s,\frac{1}{2}-}}. \end{equation} Furthermore, the estimate~(\ref{e.uv}) fails if either $s<0$ or $k-s>3/2$. More precisely, if the bilinear estimate $\|uv\|_{X^{k,b-1}}\lesssim \|u\|_{X^{k,b}} \|v\|_{Y^{s,b}}$ with $b=1/2$ holds, then $s\geq 0$ and $k-s\leq 3/2$. \end{proposition} \begin{proposition}\label{p.du2}For any $k>0$, $1+s\leq 4k$ and $-1/2\leq k-s$, \begin{equation}\label{e.du2} \|\partial_x(u_1 \overline{u_2})\|_{W^s}\lesssim \|u_1\|_{X^{k,\frac{1}{2}-}} \|u_2\|_{X^{k,\frac{1}{2}}} + \|u_1\|_{X^{k,\frac{1}{2}}}\|u_2\|_{X^{k,\frac{1}{2}-}}. \end{equation} Furthermore, the estimate~(\ref{e.du2}) fails if either $1+s>4k$ or $k-s<-1/2$. More precisely, if the bilinear estimate $\|\partial_x(u_1 \overline{u_2})\|_{Y^{s,-1/2}} \lesssim \|u_1\|_{X^{k,1/2}}\|u_2\|_{X^{k,1/2}}$ holds, then $1+s\leq 4k$ and $-1/2\leq k-s$. \end{proposition} Using these bilinear estimates for the coupling terms $uv$ and $\partial_x(|u|^2)$, we show the main theorem of this paper, namely, we prove the following local well-posedness result: \begin{theorem}\label{t.A}The periodic NLS-KdV system~(\ref{e.nls-kdv}) is locally well-posed in $H^k(\mathbb T)\times H^s(\mathbb T)$ whenever $s\geq 0$, $-1/2\leq k-s\leq 3/2$ and $1+s\leq 4k$.
That is, for any $(u_0,v_0)\in H^k(\mathbb T)\times H^s(\mathbb T)$, there exists a positive time $T=T(\|u_0\|_{H^k},\|v_0\|_{H^s})$ and a unique solution $(u(t),v(t))$ of the NLS-KdV system~(\ref{e.nls-kdv}) satisfying \begin{equation*} (\psi_T(t) u, \psi_T(t) v)\in \widetilde{X}^k\times\widetilde{Y}^s, \end{equation*} \begin{equation*} (u,v)\in C([0,T],H^k(\mathbb T)\times H^s(\mathbb T)). \end{equation*} Moreover, the map $(u_0,v_0)\mapsto (u(t),v(t))$ is locally Lipschitz from $H^k(\mathbb T)\times H^s(\mathbb T)$ into $C([0,T],H^k(\mathbb T)\times H^s(\mathbb T))$, whenever $k,s\geq 0$, $-1/2\leq k-s\leq 3/2$ and $1+s\leq 4k$. \end{theorem} \begin{remark}\label{r.1}As we pointed out before, the endpoint of our sharp bilinear estimates and, consequently, our local well-posedness result is $H^{1/4}\times L^2$. Since the endpoint of the sharp well-posedness theory for the periodic NLS is $L^2$ and for the periodic KdV is $H^{-1/2}$, we are somewhat far from the naturally expected endpoint $L^2\times H^{-1/2}$ for the local in time theory for the NLS-KdV system (although our bilinear estimates are optimal). This leads us to ask about possible ill-posedness results in this gap between $H^{1/4}\times L^2$ and $L^2\times H^{-1/2}$. For precise statements and some comparison with the continuous setting, see the section~\ref{s.remarks}. \end{remark} \begin{remark}\label{r.2} It is easy to see that the NLS-KdV system~(\ref{e.nls-kdv}) is ill-posed for $k<0$. Indeed, if we put \begin{equation*} \begin{cases} u:=e^{-it}w,\\ v\equiv \alpha^{-1} \in H^s(\mathbb{T}), \forall s\in \mathbb R, \end{cases} \end{equation*} the system (\ref{e.nls-kdv}) reduces to \begin{equation*} \begin{cases} iw_t+\partial_x^2w=\beta |w|^2w,\\ \partial_x (|w|^2)=0,\\ w_0(x)=u_0\in H^k(\mathbb{T}), \end{cases} \end{equation*} which is not locally well-posed (ill-posed) below $L^2({\mathbb T})$ in the sense that the data-solution map is not uniformly continuous. \end{remark} Using this local well-posedness result and three conserved quantities for the NLS-KdV flow, it will not be difficult to prove the following global well-posedness theorem in the energy space $H^1(\mathbb T)\times H^1(\mathbb T)$: \begin{theorem}\label{t.B}Let $\alpha,\beta,\gamma\in\mathbb R$ be such that $\alpha\gamma>0$ and $(u_0,v_0)\in H^1(\mathbb T)\times H^1(\mathbb T)$. Then, the unique solution in the theorem~\ref{t.A} can be extended to the time interval $[0,T]$ for any $T>0$. \end{theorem} To close this introduction, we give the outline of the paper. In section~\ref{s.examples} we give counter-examples for the bilinear estimates of the coupling terms, when the indices $k$ and $s$ satisfy (at least) one of the following inequalities: $s<0$, $k-s>3/2$, $1+s>4k$ or $k-s<-1/2$. In section~\ref{s.bilinear} we complete the proof of the propositions~\ref{p.uv} and~\ref{p.du2} by establishing the claimed bilinear estimates for the terms $uv$ and $\partial_x(|u|^2)$. In section~\ref{s.thmA} we use propositions~\ref{p.uv} and~\ref{p.du2} to show that the integral operator associated to the NLS-KdV system is a contraction in the space $\widetilde{X}^{k}([0,T])\times \widetilde{Y}^{s}([0,T])$ (for sufficiently small $T>0$) when $k,s\geq 0$, $-1/2\leq k-s\leq 3/2$ and $1+s\leq 4k$. In particular, we obtain the desired local well-posedness statement in theorem~\ref{t.A}.
In section~\ref{s.thmB} we make a standard use of three conserved quantities for the NLS-KdV flow to obtain the global well-posedness result of theorem~\ref{t.B} in the energy space $H^1(\mathbb T)\times H^1(\mathbb T)$. In section~\ref{s.remarks} we pose some questions related to the gap between the expected $L^2\times H^{-1/2}$ endpoint regularity and our lowest regularity $H^{1/4}\times L^2$ for the local well-posedness for the periodic NLS-KdV system; also, we compare the known theorems in the continuous setting with the periodic setting. Finally, in the appendix, we collect some standard facts about linear and multilinear estimates associated to the cubic NLS and the KdV equations (which were used in the proof of theorem~\ref{t.A}) and we show that the NLS-KdV flow preserves three quantities controlling the $H^1$ norms of $u(t)$ and $v(t)$ (this is the heart of the proof of theorem~\ref{t.B}). \begin{figure}[ht] \centering \psset{unit=0.7cm} \begin{pspicture}(-2,-2)(7,5)\small \psline{->}(-2,0)(7.6,0) \psline{->}(0,-2)(0,5.6) \pspolygon[fillstyle=solid,fillcolor=green,linewidth=0.8pt] (0.25,0)(1.5,0)(6.5,5)(4.5,5)(0.5,1)(0.25,0) \rput(3,2.5){$\mathcal{W}$} \pspolygon[fillstyle=solid,fillcolor=gray,linewidth=0.8pt] (0,-2)(-2,-2)(-2,5)(0,5)(0,-2) \rput(-1,2){$\mathcal{I}$} \rput(7.4,-0.25){$k$} \rput(0.25,5.4){$s$} \rput(4.5,2.5){\small{$r_3$}} \rput(1.5,2.5){\small{$r_2$}} \rput(0.7,0.5){\small{$r_1$}} \end{pspicture} \vspace{0.5cm} \caption{Well-posedness results for the periodic NLS-KdV system. The region $\mathcal{W}$, bounded by the lines $r_1: s=4k-1$, $r_2: s=k+\frac{1}{2}$ and $r_3: s=k-\frac{3}{2}$, contains the indices $(k,s)$ for which local well-posedness is achieved in Theorem \ref{t.A}. The region $\mathcal{I}$ shows the ill-posedness results commented on in Remark \ref{r.2}.} \end{figure} \section{Counter-Examples}\label{s.examples} We start with some counter-examples for the bilinear estimate in proposition~\ref{p.uv} when $s<0$ or $k-s>3/2$: \begin{lemma}\label{l.example-uv}$\|uv\|_{X^{k,b-1}}\leq \|u\|_{X^{k,b}}\cdot\|v\|_{Y^{s,b}}$ (with $b=1/2$) implies $s\geq 0$ and $k-s\leq 3/2$. \end{lemma} \begin{proof}Fix a large integer $N\gg 1$. Firstly, we show that $\|uv\|_{X^{k,b-1}}\leq \|u\|_{X^{k,b}}\cdot\|v\|_{Y^{s,b}}$ (with $b=1/2$) implies $s\geq 0$. Define \begin{displaymath} b_n= \left\{ \begin{array}{ll} 1 & \textrm{if $n=N$}\\ 0 & \textrm{otherwise} \end{array} \right. \end{displaymath} and \begin{displaymath} a_n= \left\{ \begin{array}{ll} 1 & \textrm{if $n=\frac{-N^2 -N}{2}$}\\ 0 & \textrm{otherwise} \end{array} \right. \end{displaymath} Let $u$ and $v$ be defined by $\widehat{u}(n,\tau)=a_n\chi_1(\tau+n^2)$ and $\widehat{v}(n,\tau)=b_n\chi_1(\tau-n^3)$, where $\chi_1$ is the characteristic function of the interval $[-1,1]$. We now turn to the calculations. By definition of the Bourgain space $X^{k,b}$, $$\|u v\|_{X^{k,b-1}}=\left\|\frac{\langle n \rangle^k}{\langle \tau+n^2 \rangle^{1/2}} \widehat{u}\ast\widehat{v}\right\|_{L^2_{n,\tau}}.$$ Hence, $$\|u v\|_{X^{k,b-1}}= \left\|\frac{\langle n \rangle^k}{\langle \tau+n^2 \rangle^{1/2}} \sum\limits_{n_1}\int d\tau_1 \ a_{n-n_1} \ \chi_1((\tau-\tau_1)+(n-n_1)^2) \ b_{n_1}\ \chi_1(\tau_1-n_1^3)\right\|_{L^2_{n,\tau}}.$$ Recall the following algebraic relation: \begin{equation}\label{e.1} \left(\tau_1-n_1^3\right)+\left((\tau-\tau_1)+(n-n_1)^2\right) -\left(\tau+n^2\right) = -n_1^3 + n_1^2 - 2 n n_1.
\end{equation} Taking into account that $b_{n_1}\neq 0$ iff $n_1=N$, $a_{n-n_1}\neq 0$ iff $n=\frac{-N^2 +N}{2}$, $\chi_1(\tau_1-n_1^3)\neq 0$ iff $|\tau_1-n_1^3|\leq 1$ and $\chi_1((\tau-\tau_1)+(n-n_1)^2)\neq 0$ iff $|(\tau-\tau_1)+(n-n_1)^2|\leq 1$, we conclude, from a direct substitution of these data into~(\ref{e.1}), that \begin{equation}\label{e.example-uv} \|u v\|_{X^{k,b-1}}\approx N^{2k}. \end{equation} On the other hand, it is not difficult to see that \begin{equation}\label{e.example-u} \|u\|_{X^{k,b}}= \|\langle n \rangle^k \langle \tau+n^2 \rangle^{1/2} \ a_n \ \chi_1(\tau+n^2)\|_{L^2_{n,\tau}} \approx N^{2k}, \end{equation} and \begin{equation}\label{e.example-v} \|v\|_{Y^{s,b}} = \|\langle n \rangle^s \langle \tau-n^3 \rangle^{1/2} b_n \ \chi_1(\tau-n^3)\|_{L^2_{n,\tau}} \approx N^s. \end{equation} Putting together the equations~(\ref{e.example-uv}),~(\ref{e.example-u}),~(\ref{e.example-v}), we obtain that the bilinear estimate implies $$N^{2k}\lesssim N^{2k}\cdot N^s,$$ which is possible only if $s\geq 0$. Secondly, we prove that $\|uv\|_{X^{k,b-1}}\leq \|u\|_{X^{k,b}}\cdot\|v\|_{Y^{s,b}}$ (with $b=1/2$) implies $k-s\leq 3/2$. Define \begin{displaymath} b_n= \left\{ \begin{array}{ll} 1 & \textrm{if $n=N$}\\ 0 & \textrm{otherwise} \end{array} \right. \end{displaymath} and \begin{displaymath} a_n= \left\{ \begin{array}{ll} 1 & \textrm{if $n=0$}\\ 0 & \textrm{otherwise} \end{array} \right. \end{displaymath} Let $u$ and $v$ be defined by $\widehat{u}(n,\tau)=a_n\chi_1(\tau+n^2)$ and $\widehat{v}(n,\tau)=b_n\chi_1(\tau-n^3)$, where $\chi_1$ is the characteristic function of the interval $[-1,1]$. Using the definitions of the Bourgain $X^{k,b}$ and $Y^{s,b}$ spaces and the algebraic relation~(\ref{e.1}), we have $$\|u v\|_{X^{k,b-1}}\approx \frac{N^k}{N^{3/2}},$$ $$\|u\|_{X^{k,b}}\approx 1,$$ and $$\|v\|_{Y^{s,b}}\approx N^s.$$ Hence, the bilinear estimate says $$N^k\lesssim N^s N^{3/2},$$ which is only possible if $k-s\leq 3/2$. \end{proof} We consider now some counter-examples for the bilinear estimate in proposition~\ref{p.du2} when $1+s> 4k$ or $k-s<-1/2$: \begin{lemma}\label{l.example-du2}$\|\partial_x (u_1 \overline{u_2})\|_{Y^{s,b-1}}\leq \|u_1\|_{X^{k,b}}\cdot\|u_2\|_{X^{k,b}}$ (with $b=1/2$) implies $1+s\leq 4k$ and $k-s\geq -1/2$. \end{lemma} \begin{proof}Fix $N\gg 1$ a large integer. Firstly, we prove that $\|\partial_x (u_1 \overline{u_2})\|_{Y^{s,b-1}}\leq \|u_1\|_{X^{k,b}}\cdot\|u_2\|_{X^{k,b}}$ (with $b=1/2$) implies $1+s\leq 4k$. Define \begin{displaymath} b_n= \left\{ \begin{array}{ll} 1 & \textrm{if $n=\frac{-N^2-N}{2}$}\\ 0 & \textrm{otherwise} \end{array} \right. \end{displaymath} and \begin{displaymath} a_n= \left\{ \begin{array}{ll} 1 & \textrm{if $n=\frac{-N^2 +N}{2}$}\\ 0 & \textrm{otherwise} \end{array} \right. \end{displaymath} Let $u_1$ and $u_2$ be defined by $\widehat{u_1}(n,\tau)=a_n\chi_1(\tau+n^2)$ and $\widehat{u_2}(n,\tau)=b_n\chi_1(\tau+n^2)$, where $\chi_1$ is the characteristic function of the interval $[-1,1]$. 
By definition of the Bourgain space $Y^{s,b}$, $$\|\partial_x(u_1\overline{u_2})\|_{Y^{s,b-1}}= \left\|\frac{\langle n \rangle^s} {\langle \tau-n^3 \rangle^{1/2}} \; n \; (\widehat{u_1}\ast\widehat{\overline{u_2}})\right\|_{L^2_{n,\tau}}.$$ Hence, if one uses that $\widehat{\overline{u}}(n,\tau)=\overline{\widehat{u}(-n,-\tau)}$, it is not difficult to see that $$\|\partial_x(u_1\overline{u_2})\|_{Y^{s,b-1}}= \left\|\frac{ n \; \langle n \rangle^s}{\langle \tau-n^3 \rangle^{1/2}} \sum\limits_{n_1}\int d\tau_1 \ a_{n-n_1} \ \chi_1((\tau-\tau_1)+(n-n_1)^2) \ b_{-n_1}\ \chi_1(-\tau_1+n_1^2)\right\|_{L^2_{n,\tau}}.$$ Note the following algebraic relation: \begin{equation}\label{e.2} \left(\tau-n^3\right)-\left((\tau-\tau_1)+(n-n_1)^2\right) +\left(-\tau_1+n_1^2\right) = -n^3 - n^2 + 2 n_1 n. \end{equation} Taking into account that $b_{-n_1}\neq 0$ iff $n_1=\frac{N^2+N}{2}$, $a_{n-n_1}\neq 0$ iff $n=N$, $\chi_1(-\tau_1+n_1^2)\neq 0$ iff $|-\tau_1+n_1^2|\leq 1$ and $\chi_1((\tau-\tau_1)+(n-n_1)^2)\neq 0$ iff $|(\tau-\tau_1)+(n-n_1)^2|\leq 1$, we conclude, from a direct substitution of these data into~(\ref{e.2}), that \begin{equation}\label{e.example-du2} \|\partial_x(u_1\overline{u_2})\|_{Y^{s,b-1}}\approx N^{1+s}. \end{equation} On the other hand, it is not difficult to see that \begin{equation}\label{e.u_1} \|u_1\|_{X^{k,b}}= \|\langle n \rangle^k \langle \tau+n^2 \rangle^{1/2} \ a_n \ \chi_1(\tau+n^2)\|_{L^2_{n,\tau}} \approx N^{2k}, \end{equation} and \begin{equation}\label{e.u_2} \|u_2\|_{X^{k,b}} = \|\langle n \rangle^k \langle \tau+n^2 \rangle^{1/2} b_n \ \chi_1(\tau+n^2)\|_{L^2_{n,\tau}} \approx N^{2k}. \end{equation} Putting together the equations~(\ref{e.example-du2}),~(\ref{e.u_1}),~(\ref{e.u_2}), we obtain that the bilinear estimate implies $$N^{1+s}\lesssim N^{2k}\cdot N^{2k},$$ which is possible only if $1+s\leq 4k$. Secondly, we obtain that $\|\partial_x (u_1 \overline{u_2})\|_{Y^{s,b-1}}\leq \|u_1\|_{X^{k,b}}\cdot\|u_2\|_{X^{k,b}}$ (with $b=1/2$) implies $k-s\geq -1/2$. Define \begin{displaymath} b_n= \left\{ \begin{array}{ll} 1 & \textrm{if $n=-N$}\\ 0 & \textrm{otherwise} \end{array} \right. \end{displaymath} and \begin{displaymath} a_n= \left\{ \begin{array}{ll} 1 & \textrm{if $n=0$}\\ 0 & \textrm{otherwise} \end{array} \right. \end{displaymath} Let $u_1$ and $u_2$ be defined by $\widehat{u_1}(n,\tau)=a_n\chi_1(\tau+n^2)$ and $\widehat{u_2}(n,\tau)=b_n\chi_1(\tau+n^2)$, where $\chi_1$ is the characteristic function of the interval $[-1,1]$. Using the definitions of the Bourgain $X^{k,b}$ and $Y^{s,b}$ spaces and the algebraic relation~(\ref{e.2}), we have $$\|\partial_x (u_1\overline{u_2})\|_{Y^{s,b-1}}\approx \frac{N^{1+s}}{N^{3/2}},$$ $$\|u_1\|_{X^{k,b}}\approx 1,$$ and $$\|u_2\|_{X^{k,b}}\approx N^k.$$ Hence, the bilinear estimate says $$N^{1+s}\lesssim N^k N^{3/2},$$ which is only possible if $k-s\geq -1/2$. \end{proof} \section{Bilinear Estimates for the Coupling Terms}\label{s.bilinear} This section is devoted to the proof of our basic tools, that is, the sharp bilinear estimates of propositions~\ref{p.uv} and~\ref{p.du2} for the coupling terms of the NLS-KdV system. We begin by showing some elementary calculus lemmas; next, using Plancherel and duality, the claimed bilinear estimates reduce to controlling some weighted convolution integrals, which is quite easy using these lemmas.
\subsection{Preliminaries} The first elementary calculus lemma to be used later is: \begin{lemma}\label{l.calculus-1} \begin{equation*} \int_{-\infty}^{+\infty}\frac{d\kappa}{\langle\kappa\rangle^{\theta} \langle\kappa-a\rangle^{\widetilde{\theta}}} \lesssim \frac{\log (1+\langle a \rangle)}{\langle a \rangle^{\theta+\widetilde{\theta}-1}} \end{equation*} where $\theta,\widetilde{\theta}>0$ and $\theta+\widetilde{\theta}>1$. \end{lemma} \begin{proof}Clearly we can assume that $|a|\gg 1$. In this case, we divide the domain of integration into the regions $I_1:=\{|\kappa|\ll |a|\}$, $I_2:=\{|\kappa|\sim |a|\}$ and $I_3:=\{|\kappa|\gg |a|\}$. Since $\kappa\in I_1$ implies $\langle \kappa-a\rangle\gtrsim \langle a\rangle\geq \langle \kappa\rangle$, $\kappa\in I_2$ implies $\langle \kappa\rangle\sim\langle a\rangle$ and $\kappa\in I_3$ implies $\langle \kappa-a\rangle\gtrsim \langle \kappa\rangle$, we obtain \begin{equation*} \begin{split} \int_{-\infty}^{+\infty}\frac{d\kappa}{\langle\kappa\rangle^{\theta} \langle\kappa-a\rangle^{\widetilde{\theta}}} &= \int_{I_1} \frac{d\kappa}{\langle\kappa\rangle^{\theta} \langle\kappa-a\rangle^{\widetilde{\theta}}} + \int_{I_2} \frac{d\kappa}{\langle\kappa\rangle^{\theta} \langle\kappa-a\rangle^{\widetilde{\theta}}} + \int_{I_3} \frac{d\kappa}{\langle\kappa\rangle^{\theta} \langle\kappa-a\rangle^{\widetilde{\theta}}}\\ &\lesssim \frac{1}{\langle a\rangle^{\theta+\widetilde{\theta}-1}} \int_{I_1}\frac{d\kappa}{\langle\kappa\rangle} + \frac{1}{\langle a\rangle^{\theta+\widetilde{\theta}-1}} \int_{I_2}\frac{d\kappa}{\langle\kappa-a\rangle} + \int_{I_3}\frac{d\kappa}{\langle\kappa-a\rangle^{\theta+\widetilde{\theta}}} \\ &\lesssim \frac{\log (1+\langle a \rangle)}{\langle a \rangle^{\theta+\widetilde{\theta}-1}}. \end{split} \end{equation*} \end{proof} The second lemma is a well-known fact concerning the convergence of series whose terms involve the values of certain cubic polynomials at the integers:\footnote{This lemma is essentially contained in the work~\cite{KPV2} of Kenig, Ponce and Vega on bilinear estimates related to the KdV equation.} \begin{lemma}\label{l.1.1}For any constant $\theta > 1/3$, $$ \sum\limits_{m\in\mathbb Z}\frac{1}{\langle p(m) \rangle^{\theta}} \leq C(\theta)<\infty, $$ where $p(x)$ is a cubic polynomial of the form $p(x):= x^3+ex^2+fx+g$ with $e,f,g\in\mathbb R$.
\end{lemma} \begin{proof} We start the proof of the lemma~\ref{l.1.1} with two simple observations: defining \begin{equation*} \mathcal{E}:=\{m\in\mathbb Z: \ |m-\alpha|\geq 2, \ |m-\beta|\geq 2 \text{ and } |m-\gamma|\geq 2\} \end{equation*} and \begin{equation*} \mathcal{F}:=\mathbb Z-\mathcal{E}, \end{equation*} then $$\#\mathcal{F}\leq 12$$ and $$\langle (m-\alpha)(m-\beta)(m-\gamma) \rangle \gtrsim \langle m-\alpha \rangle \langle m-\beta \rangle \langle m-\gamma \rangle$$ for any $m\in\mathcal{E}.$ In particular, writing $p(x)=(x-\alpha)(x-\beta)(x-\gamma)$, we can estimate \begin{equation*} \begin{split} \sum\limits_{m}\frac{1}{\langle p(m)\rangle^{\theta}}&\leq \sum\limits_{m\in\mathcal{F}}\frac{1}{\langle p(m)\rangle^{\theta}} + \sum\limits_{m\in\mathcal{E}}\frac{1}{\langle p(m)\rangle^{\theta}} \\ &\leq 12 + \sum\limits_{m\in\mathcal{E}}\frac{1}{\langle p(m)\rangle^{\theta}} \\ &\lesssim 12 + \sum\limits_{m\in\mathcal{E}}\frac{1}{\langle m-\alpha \rangle^{\theta} \langle m-\beta \rangle^{\theta} \langle m-\gamma \rangle^{\theta}} \end{split} \end{equation*} Now, by H\"older's inequality \begin{equation*} \sum\limits_{m}\frac{1}{\langle p(m)\rangle^{\theta}}\lesssim 12 + \left(\sum\limits_{m}\frac{1}{\langle m-\alpha \rangle^{3\theta}}\right)^{1/3} \left(\sum\limits_{m}\frac{1}{\langle m-\beta \rangle^{3\theta}}\right)^{1/3} \left(\sum\limits_{m}\frac{1}{\langle m-\gamma \rangle^{3\theta}}\right)^{1/3} \end{equation*} So, the hypothesis $3\theta>1$ implies \begin{equation*} \sum\limits_{m}\frac{1}{\langle p(m)\rangle^{\theta}}\leq C(\theta)<\infty. \end{equation*} This completes the proof of the lemma~\ref{l.1.1}. \end{proof} Finally, the third lemma is a modification of the previous one for linear polynomials with large coefficients: \begin{lemma}\label{l.1.2}For any constant $\theta>1/2$, whenever $n_1\in\mathbb Z-\{0\}$, $|n_1|\gg 1$, \begin{equation*} \sum\limits_{n\in\mathbb Z; \ |n|\sim |n_1|}\frac{1}{\langle q(n)\rangle^{\theta}}\leq C(\theta)<\infty, \end{equation*} where $q(x):=2 n_1 x-n_1^2+r$ with $r\in\mathbb R$. \end{lemma} \begin{proof}The strategy of the proof is the same as before, but since now the polynomial $q$ is linear, we have to take a little bit of care. The idea is: although the polynomial $q$ has degree $1$, the fact that $|n_1|\sim |n|$ means morally that $q$ has degree $2$ in this range. So, the exponent of $n$ in the summand is morally $2\theta>1$ and, in particular, the series is convergent. This intuition can be formalized as follows: we write $q(x):=r-n_1^2+ 2 n_1 x = 2 n_1 (x+\delta)$, where $\delta = (r-n_1^2)/(2 n_1)$ (of course the assumption $n_1\neq 0$ enters here). If we define \begin{equation*} \mathcal{G}:=\{n\in\mathbb Z: \ |n+\delta|\geq 2\} \end{equation*} and \begin{equation*} \mathcal{H}:=\mathbb Z-\mathcal{G}, \end{equation*} then $$\#\mathcal{H}\leq 4$$ and $$\langle 2 n_1(n+\delta) \rangle \gtrsim \langle n \rangle \langle n+\delta \rangle$$ for any $n\in\mathcal{G}$, since $|n_1|\sim |n|$.
In particular, we can estimate \begin{equation*} \begin{split} \sum\limits_{|n|\sim |n_1|}\frac{1}{\langle q(n)\rangle^{\theta}}&\leq \sum\limits_{n\in\mathcal{H}}\frac{1}{\langle q(n)\rangle^{\theta}} + \sum\limits_{n\in\mathcal{G},|n|\sim |n_1|}\frac{1}{\langle q(n)\rangle^{\theta}} \\ &\leq 4 + \sum\limits_{n\in\mathcal{G},|n|\sim |n_1|}\frac{1}{\langle q(n)\rangle^{\theta}} \\ &\lesssim 4 + \sum\limits_{n\in\mathcal{G},|n|\sim |n_1|}\frac{1}{\langle n \rangle^{\theta} \langle n+\delta \rangle^{\theta}} \end{split} \end{equation*} Now, by H\"older's inequality \begin{equation*} \sum\limits_{|n|\sim |n_1|}\frac{1}{\langle q(n)\rangle^{\theta}}\lesssim 4 + \left(\sum\limits_{n}\frac{1}{\langle n \rangle^{2\theta}}\right)^{1/2} \left(\sum\limits_{n}\frac{1}{\langle n+\delta \rangle^{2\theta}}\right)^{1/2} \end{equation*} So, the hypothesis $2\theta>1$ implies \begin{equation*} \sum\limits_{|n|\sim |n_1|}\frac{1}{\langle q(n)\rangle^{\theta}}\leq C(\theta)<\infty. \end{equation*} This completes the proof of the lemma~\ref{l.1.2}. \end{proof} \subsection{Proof of the proposition~\ref{p.uv}: bilinear estimates for the coupling term $uv$} In view of the lemma~\ref{l.example-uv}, it suffices to show the bilinear estimate: \begin{lemma}\label{l.sharp-uv} $\|uv\|_{Z^k}\lesssim \|u\|_{X^{k,\frac{1}{2}-}}\|v\|_{Y^{s,\frac{1}{2}}} + \|u\|_{X^{k,\frac{1}{2}}}\|v\|_{Y^{s,\frac{1}{2}-}}$ whenever $s\geq 0$ and $k-s\leq 3/2$. \end{lemma} \begin{proof} From the definition of $Z^k$, we must show that \begin{equation}\label{e.sharp-uv-1} \|uv\|_{X^{k,-1/2}}\lesssim \|u\|_{X^{k,\frac{1}{2}-}}\|v\|_{Y^{s,\frac{1}{2}}} + \|u\|_{X^{k,\frac{1}{2}}}\|v\|_{Y^{s,\frac{1}{2}-}} \end{equation} and \begin{equation}\label{e.sharp-uv-2} \left\|\frac{\langle n \rangle^k \widehat{uv}}{\langle \tau+n^2 \rangle} \right\|_{L^2_n L^1_{\tau}} \lesssim \|u\|_{X^{k,\frac{1}{2}-}}\|v\|_{Y^{s,\frac{1}{2}}} + \|u\|_{X^{k,\frac{1}{2}}}\|v\|_{Y^{s,\frac{1}{2}-}} \end{equation} We begin with the estimate~(\ref{e.sharp-uv-1}). By the definition of Bourgain's space, \begin{equation*}\begin{split} \|uv\|_{X^{k,-a}}&= \|\langle \tau + {n}^2 \rangle^{-a}\langle n \rangle^k \widehat {uv} (n,\tau )\|_{L^2_{\tau}L^2_{n}}\\ &=\Bigl \| \frac{\langle n \rangle^k}{\langle \tau + {n}^2 \rangle^a} \widehat {u}*\widehat{v}(n,\tau )\Bigr \|_{L^2_{\tau}L^2_{n}}\\ \end{split} \end{equation*} Let \begin{equation*} f(\tau,n)=\langle \tau + {n}^2\rangle^b\langle n \rangle^k\widehat {u}(n,\tau )\quad \text{and}\quad g(\tau,n)=\langle \tau - {n}^3\rangle^c\langle n \rangle^s\widehat {v}(n,\tau ).
\end{equation*} In particular, by duality, we obtain \begin{equation}\label{e.sharp-uv-1-duality} \begin{split} \|uv\|_{X^{k,-a}}&=\sup\limits_{\|\varphi\|_{L^2_{n,\tau}}\leq 1} \sum\limits_{n\in\mathbb Z}\int d\tau \frac{\langle n \rangle^k}{\langle \tau + n^2 \rangle^{a}}\bar{\varphi}(n,\tau) \left( \frac{f}{\langle\tau+n^2\rangle^b \langle n\rangle^k}*\frac{g}{\langle\tau-n^3\rangle^c \langle n\rangle^s} \right) \\ &=\sup\limits_{\|\varphi\|_{L^2_{n,\tau}}\leq 1} \sum\limits_{n\in\mathbb Z}\int d\tau \sum\limits_{n_1\in\mathbb Z} \int d\tau_1 \frac{\langle \tau + n^2\rangle^{-a}\langle n \rangle^k g(n_1,\tau_1)f(n-n_1,\tau-\tau_1) \bar {\varphi} (\tau,n)} {\langle \tau_1 - n_1^3\rangle^c\langle n_1\rangle^s \langle \tau-\tau_1 + {(n-n_1)}^2\rangle^b\langle n-n_1\rangle^k} \\ &=\sum\int\sum\int_{(n,n_1,\tau,\tau_1)\in\mathcal{R}_0} + \sum\int\sum\int_{(n,n_1,\tau,\tau_1)\in\mathcal{R}_1} + \sum\int\sum\int_{(n,n_1,\tau,\tau_1)\in\mathcal{R}_2} \\ &\equiv W_0 + W_1 + W_2, \end{split} \end{equation} whenever $\mathbb Z^2\times\mathbb R^2=\mathcal{R}_0\cup\mathcal{R}_1\cup\mathcal{R}_2$. Now, taking into account the previous calculation, we look at three general simple ways to reduce the problem of goods bounds on the expressions $W_i$ into some multiplier estimates. In the sequel, $\chi_{\mathcal R}$ denotes the characteristic function of the set ${\mathcal R}$. So, we consider the expression \begin{equation}\label{e.w} W = \sup\limits_{\|\varphi\|_{L^2_{n,\tau}}\leq 1} \sum\limits_{n\in\mathbb Z}\int d\tau \sum\limits_{n_1\in\mathbb Z} \int d\tau_1 \frac{\langle \tau + n^2\rangle^{-a}\langle n \rangle^k g(n_1,\tau_1)f(n-n_1,\tau-\tau_1) \bar {\varphi} (\tau,n)\chi_{\mathcal R}} {\langle \tau_1 - n_1^3\rangle^b\langle n_1\rangle^s \langle \tau-\tau_1 + {(n-n_1)}^2\rangle^b\langle n-n_1\rangle^k}. \end{equation} The first way to bound $W$ is: integrate over $\tau_1$ and $n_1$, and then use the Cauchy-Schwarz and H\"older inequalities to obtain \begin{equation}\label{e.w0}\begin{split} |W|^2&\leq \|\varphi \|^2_{{L^2_{\tau}L^2_{n}}} \left \|\frac{\langle n \rangle^k}{\langle \tau + n^2 \rangle^a} \int \! \! \! \! \int \frac{g(n_1,\tau_1)f(n-n_1,\tau-\tau_1)\chi_{{\mathcal R}}d\tau_1dn_1} {\langle \tau_1 - n_1^3\rangle^c\langle n_1\rangle^s\langle \tau-\tau_1 + {(n-n_1)}^2\rangle^b\langle n-n_1\rangle^k} \right \|^2_{{L^2_{\tau}L^2_{n}}}\\ &\leq \int \! \! \! \! \int \frac{\langle n \rangle^{2k}}{\langle \tau + n^2 \rangle^{2a}} \left | \int \! \! \! \! \int \frac{g(n_1,\tau_1)f(n-n_1,\tau-\tau_1)\chi_{{\mathcal R}}d\tau_1dn_1} {\langle \tau_1 - n_1^3\rangle^c\langle n_1\rangle^s\langle \tau-\tau_1 + {(n-n_1)}^2\rangle^b\langle n-n_1\rangle^k} \right |^2d\tau dn\\ &\leq \int \! \! \! \! \int \frac{\langle n \rangle^{2k}}{\langle \tau + n^2 \rangle^{2a}} \Biggl (\int \! \! \! \! \int \frac{\chi_{{\mathcal R}}d\tau_1dn_1} {\langle \tau_1 - n_1^3\rangle^{2c}\langle n_1\rangle^{2s}\langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{2b}\langle n-n_1\rangle^{2k}}\times \\ &\quad \times \int \! \! \! \! \int |g(n_1,\tau_1)|^2|f(n-n_1,\tau-\tau_1)|^2d\tau_1dn_1 \Biggl )d\tau dn\\ &\leq \|f\|^2_{{L^2_{\tau}L^2_{n}}}\|g\|^2_{{L^2_{\tau_1}L^2_{n_1}}}\\ &\quad \times \left \| \frac{\langle n \rangle^{2k}}{\langle \tau + n^2 \rangle^{2a}} \int \! \! \! \! 
\int \frac{\chi_{{\mathcal R}}d\tau_1dn_1} {\langle \tau_1 - n_1^3\rangle^{2c}\langle n_1\rangle^{2s}\langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{2b} \langle n-n_1\rangle^{2k}} \right \|_{{L^{\infty}_{\tau}L^{\infty}_{n}}}\\ &=\|u\|^2_{X^{k,b}}\|v\|^2_{Y^{s,c}}\\ &\quad \times \left \| \frac{\langle n \rangle^{2k}}{\langle \tau + n^2 \rangle^{2a}} \int \! \! \! \! \int \frac{\chi_{{\mathcal R}}d\tau_1dn_1} {\langle \tau_1 - n_1^3\rangle^{2c}\langle n_1\rangle^{2s} \langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{2b}\langle n-n_1\rangle^{2k}} \right \|_{{L^{\infty}_{\tau}L^{\infty}_{n}}} . \end{split} \end{equation} The second way we can bound $W$ is: put $\widetilde f(n,\tau)= f(-n,-\tau)$,\; integrate over $\tau$ and $n$ first and follow the same steps as above to get \begin{equation}\label{e.w1} \begin{split} |W|^2&\leq \|g\|^2_{{L^2_{\tau_1}L^2_{n_1}}} \left \|\frac{1}{\langle n_1\rangle^s \langle \tau_1 - n_1^3 \rangle^c} \int \! \! \! \! \int \frac{\langle n \rangle^k \widetilde f(n_1-n,\tau_1-\tau)\bar{\varphi}(\tau,n)\chi_{{\mathcal R}}d\tau dn} {\langle \tau + n^2\rangle^a\langle \tau-\tau_1 + {(n-n_1)}^2\rangle^b\langle n-n_1\rangle^k} \right\|^2_{{L^2_{\tau_1}L^2_{n_1}}}\\ &\leq \|\widetilde f\|^2_{{L^2_{\tau_1}L^2_{n_1}}} \|g\|^2_{{L^2_{\tau_1}L^2_{n_1}}}\\ &\quad \times \left\|\frac{1}{\langle n_1\rangle^{2s} \langle \tau_1 - n_1^3 \rangle^{2c}} \int \! \! \! \! \int \frac{\langle n \rangle^{2k} \chi_{{\mathcal R}}d\tau dn}{\langle \tau + n^2\rangle^{2a}\langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{2b}\langle n-n_1\rangle^{2k}} \right\|^2_{{L^{\infty}_{\tau_1}L^{\infty}_{n_1}}}\\ &= \|u\|^2_{X^{k,b}}\|v\|^2_{Y^{s,c}}\\ &\quad \times \left\|\frac{1}{\langle n_1\rangle^{2s} \langle \tau_1 - n_1^3 \rangle^{2c}} \int \! \! \! \! \int \frac{\langle n \rangle^{2k} \chi_{{\mathcal R}}d\tau dn}{\langle \tau + n^2\rangle^{2a}\langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{2b}\langle n-n_1\rangle^{2k}} \right\|_{{L^{\infty}_{\tau_1}L^{\infty}_{n_1}}}. \end{split} \end{equation} Note that $\widetilde f(n,\tau)=\langle n \rangle^k \langle \tau - n^2\rangle^b\widehat u(-n,-\tau)$\; and \; $\|\widetilde f\|_{{L^2_{\tau}L^2_{n}}}= \|f\|_{{L^2_{\tau}L^2_{n}}}= \|u\|_{X^{k,b}}$.\\ Finally, the third way to estimate $W$ is: using the change of variables $\tau = \tau_1 -\tau_2$ and $n = n_1 -n_2$, the region,\;${\mathcal R}$,\;is transformed into the set $\widetilde {\mathcal R}$ such that \begin{equation*} \widetilde {\mathcal R} = \bigl \{(n_1,n_2,\tau_1,\tau_2)\in \mathbb Z^2\times\mathbb R^2;\; (n_1-n_2,n_1, \tau_1-\tau_2,\tau_1)\in\mathcal{R} \bigl \}. \end{equation*} Then, $W$ can be estimated as: \begin{equation}\label{e.w2} \begin{split} |W|^2&\leq \|\widetilde{f}\|^2_{{L^2_{\tau_2}L^2_{n_2}}}\\ &\quad \times \left \|\frac{1}{ \langle n_2\rangle^k\langle \tau_2 - n_2^2 \rangle^b} \int \! \! \! \! \int \frac{\langle n_1 - n_2 \rangle^k g(n_1,\tau_1) \widetilde{\bar{\varphi}}(n_2-n_1,\tau_2-\tau_1) \chi_{\widetilde {\mathcal R}}d\tau_1 dn_1} {\langle \tau_1-\tau_2 + (n_1-n_2)^2\rangle^a\langle \tau_1 - n_1^3\rangle^c\langle n_1\rangle^s} \right\|^2_{{L^2_{\tau_2}L^2_{n_2}}}\\ &\leq \|\widetilde{f}\|^2_{{L^2_{\tau_2}L^2_{n_2}}} \|g\|^2_{{L^2_{\tau_1}L^2_{n_1}}}\\ &\quad \times \left\|\frac{1}{\langle n_2\rangle^{2k} \langle \tau_2 - n_2^2 \rangle^{2b}} \int \! \! \! \! 
\int \frac{\langle n_1 - n_2 \rangle^{2k}\chi_{\widetilde {\mathcal R}}d\tau_1 dn_1} {\langle \tau_1-\tau_2 + (n_1-n_2)^2\rangle^{2a} \langle \tau_1 - n_1^3\rangle^{2c} \langle n_1\rangle^{2s}} \right \|^2_{{L^{\infty}_{\tau_2}L^{\infty}_{n_2}}}\\ &= \|u\|^2_{X^{k,b}}\|v\|^2_{Y^{s,c}}\\ &\quad \times \left \|\frac{1}{\langle n_2\rangle^{2k} \langle \tau_2 - n_2^2 \rangle^{2b}} \int \! \! \! \! \int \ \frac{\langle n_1 - n_2 \rangle^{2k}\chi_{\widetilde {\mathcal R}}d\tau_1 dn_1} {\langle \tau_1-\tau_2 + (n_1-n_2)^2\rangle^{2a} \langle \tau_1 - n_1^3\rangle^{2c}\langle n_1\rangle^{2s}} \right \|_{{L^{\infty}_{\tau_2}L^{\infty}_{n_2}}} \end{split} \end{equation} Next, using the equation~(\ref{e.sharp-uv-1-duality}) and the estimates~(\ref{e.w0}),~(\ref{e.w1}),~(\ref{e.w2}), we are going to reduce the desired bilinear estimate $\|uv\|_{Z^k}\lesssim \|u\|_{X^{k,\frac{1}{2}-}}\|v\|_{Y^{s,\frac{1}{2}}} + \|u\|_{X^{k,\frac{1}{2}}}\|v\|_{Y^{s,\frac{1}{2}-}}$ (whenever $s\geq 0$ and $k-s\leq 3/2$) into certain $L^{\infty}$ bounds for multipliers localized in some well-chosen regions $\mathcal{R}_i$, $i=0,1,2$ such that $\mathcal{R}_0\cup\mathcal{R}_1\cup\mathcal{R}_2=\mathbb Z^2\times\mathbb R^2$. First, if $n_0:=n$, $n_1$, $n_2:= n_1-n$ are the frequencies of our waves, let $\lambda_0=\tau+n^2$, $\lambda_1:=\tau_1-n_1^3$, $\lambda_2:=\tau_2-n_2^2:=(\tau_1-\tau)-n_2^2$ be the modulations of our waves. Also, we consider $N_j = |n_j|,j=0,1,2$ variables measuring the magnitude of frequencies of the waves, and $L_j = |\lambda_j|,j=0,1,2$ variables measuring the magnitude of modulations of the waves. It is convenient to define the quantities $N_{max}\geq N_{med}\geq N_{min}$ to be the maximum, median and minimum of $N_0,N_1,N_2$, resp. Similarly, we define $L_{max}\geq L_{med}\geq L_{min}$. In order to define the regions $\mathcal{R}_i$, we split $\mathbb Z^2\times\mathbb R^2$ into three regions ${\mathcal A}$,\;${\mathcal B}$ and ${\mathcal C}$, \begin{equation*} \begin{split} &{\mathcal A}= \bigl\{(n,n_1,\tau,\tau_1)\in \mathbb Z^2\times\mathbb R^2 ;\; N_1\leq 100 \bigl\}, \\ &{\mathcal B}=\bigl\{(n,n_1,\tau,\tau_1)\in \mathbb Z^2\times\mathbb R^2 ;\; N_1> 100\;\text{and, either}\; N_1\ll N_0\;\text{or}\;N_1\gg N_0\}, \\ &{\mathcal C}=\bigl\{(n,n_1,\tau,\tau_1)\in \mathbb Z^2\times\mathbb R^2 ;\; N_1> 100 \;\text{and}\;N_1\sim N_0\}. \end{split} \end{equation*} Now we separate ${\mathcal C}$ into three parts \begin{equation*}\begin{split} &{\mathcal C}_0 =\bigl\{(n,n_1,\tau,\tau_1)\in {\mathcal C};\; L_0=L_{max} \bigl\},\\ &{\mathcal C}_1 =\bigl\{(n,n_1,\tau,\tau_1)\in {\mathcal C};\; L_1=L_{max}\bigl\},\\ &{\mathcal C}_2 =\bigl\{(n,n_1,\tau,\tau_1)\in {\mathcal C};\; L_2=L_{max}\bigl\}. \end{split} \end{equation*} At this point, we define the sets ${\mathcal R}_i,\;i=0,1,2$,\;as: \begin{equation*} {\mathcal R}_0= {\mathcal A}\cup {\mathcal B}\cup {\mathcal C}_0,\;\; {\mathcal R}_1={\mathcal C}_1,\;\; {\mathcal R}_2={\mathcal C}_2 \end{equation*} and it is clear that $\mathbb Z^2\times\mathbb R^2 = {\mathcal R}_0\cup {\mathcal R}_1\cup {\mathcal R}_2$. For these regions $\mathcal{R}_i$, we can show the following multiplier estimates \begin{claim}\label{c.w0} If $s\geq 0$ and $k-s\leq 3/2$, $$\left\| \frac{\langle n \rangle^{2k}}{\langle \tau + n^2 \rangle} \int \! \! \! \! 
\int \frac{\chi_{{\mathcal R}_0}d\tau_1dn_1}{\langle \tau_1 - n_1^3\rangle^{1-}\langle n_1\rangle^{2s} \langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{1-}\langle n-n_1\rangle^{2k}} \right \|_{{L^{\infty}_{\tau}L^{\infty}_{n}}}\lesssim 1.$$ \end{claim} \begin{claim}\label{c.w1} If $s\geq 0$ and $k-s\leq 3/2$, $$\left \|\frac{1}{\langle n_1\rangle^{2s} \langle \tau_1 - n_1^3 \rangle} \int \! \! \! \! \int \frac{\langle n \rangle^{2k} \chi_{{\mathcal R}_1}d\tau dn} {\langle \tau + n^2\rangle\langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{1-}\langle n-n_1\rangle^{2k}} \right \|_{{L^{\infty}_{\tau_1}L^{\infty}_{n_1}}}\lesssim 1.$$ \end{claim} \begin{claim}\label{c.w2} If $s\geq 0$ and $k-s\leq 3/2$, $$\left \|\frac{1}{\langle n_2\rangle^{2k} \langle \tau_2 - n_2^2 \rangle} \int \! \! \! \! \int \frac{\langle n_1 - n_2 \rangle^{2k}\chi_{\widetilde {\mathcal R}_2}d\tau_1 dn_1} {\langle \tau_1-\tau_2 + (n_1-n_2)^2\rangle \langle \tau_1 - n_1^3\rangle^{1-}\langle n_1\rangle^{2s}} \right \|_{{L^{\infty}_{\tau_2}L^{\infty}_{n_2}}}\lesssim 1,$$ where $\widetilde {\mathcal R}_2$ is the image of $\mathcal{R}_2$ by the change of variables $n_2:=n_1-n$, $\tau_2:=\tau_1-\tau$. \end{claim} It is easy to show that these facts imply the desired bilinear estimate~(\ref{e.sharp-uv-1}). Indeed, by the equations~(\ref{e.w0}),~(\ref{e.w1}),~(\ref{e.w2}), we see that, for $a=1/2$ and well-chosen $b,c$, these claims mean that, whenever $s\geq 0$ and $k-s\leq 3/2$, $|W_0|\lesssim \|u\|_{X^{k,\frac{1}{2}-}}\|v\|_{Y^{s,\frac{1}{2}-}}$, $|W_1|\lesssim \|u\|_{X^{k,\frac{1}{2}-}}\|v\|_{Y^{s,\frac{1}{2}}}$ and $|W_2|\lesssim \|u\|_{X^{k,\frac{1}{2}}}\|v\|_{Y^{s,\frac{1}{2}-}}$. Putting this information into the equation~(\ref{e.sharp-uv-1-duality}), we obtain the bilinear estimate~(\ref{e.sharp-uv-1}). So, it remains only to prove these claims. For later use, recall the following algebraic relation: \begin{equation}\label{e.uv-dispersion} \lambda_0-\lambda_1+\lambda_2 = n_1^3-n_1^2+2 n n_1. \end{equation} \begin{proof}[Proof of claim~\ref{c.w0}] In the region $\mathcal{A}$, using that $N_1\leq 100$ and $\langle n \rangle\leq \langle n_1 \rangle \langle n-n_1 \rangle$, \begin{equation*} \begin{split} &\left \| \frac{\langle n \rangle^{2k}}{\langle \tau + n^2 \rangle} \sum\limits_{n_1}\int \frac{\chi_{{\mathcal A}}d\tau_1} {\langle \tau_1 - n_1^3\rangle^{1-}\langle n_1\rangle^{2s} \langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{1-}\langle n-n_1\rangle^{2k}} \right \|_{{L^{\infty}_{\tau}L^{\infty}_{n}}} \\ &\lesssim \sup\limits_{n,\tau}\sum\limits_{n_1}\int \frac{d\tau_1}{\langle \tau_1 - n_1^3\rangle^{1-} \langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{1-}}. \end{split} \end{equation*} However, the lemma~\ref{l.calculus-1} (with $\theta=\widetilde{\theta}=1-$) implies \begin{equation*} \int \frac{d\tau_1}{\langle \tau_1 - n_1^3\rangle^{1-} \langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{1-}} \leq \frac{\log (1+\langle p(n_1) \rangle)}{\langle p(n_1) \rangle^{1-}}, \end{equation*} where $p(x)$ is the polynomial $p(x):= x^3-x^2+2nx-(\tau+n^2)$. Hence, we can estimate: \begin{equation*} \sum\limits_{n_1}\int \frac{d\tau_1}{\langle \tau_1 - n_1^3\rangle^{1-} \langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{1-}} \lesssim \sum\limits_{n_1}\frac{\log (1+\langle p(n_1) \rangle)}{\langle p(n_1) \rangle^{1-}}. 
\end{equation*} In particular, the lemma~\ref{l.1.1} can be applied to give \begin{equation}\label{e.c.w0-1} \left \| \frac{\langle n \rangle^{2k}}{\langle \tau + n^2 \rangle^{2a}} \sum\limits_{n_1}\int \frac{\chi_{{\mathcal A}}d\tau_1} {\langle \tau_1 - n_1^3\rangle^{2b}\langle n_1\rangle^{2s} \langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{2b}\langle n-n_1\rangle^{2k}} \right \|_{{L^{\infty}_{\tau}L^{\infty}_{n}}} \lesssim 1. \end{equation} In the region $\mathcal{B}$, $N_1>100$, and either $N_1\gg N_0$ or $N_1\ll N_0$. In any case, it is not difficult to see that \begin{equation*} \frac{\langle n \rangle^{2k}}{\langle n-n_1 \rangle^{2k} \langle n_1 \rangle^{2s}}\lesssim 1. \end{equation*} In fact, this is an easy consequence of $s\geq 0$ and $N_2\gtrsim N_i$ if $N_i\gg N_j$, for $\{i,j\}=\{0,1\}$. So, we obtain the bound \begin{equation}\label{e.c.w0-2} \begin{split} &\left \| \frac{\langle n \rangle^{2k}}{\langle \tau + n^2 \rangle} \sum\limits_{n_1}\int \frac{\chi_{{\mathcal B}}d\tau_1} {\langle \tau_1 - n_1^3\rangle^{1-}\langle n_1\rangle^{2s} \langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{1-}\langle n-n_1\rangle^{2k}} \right \|_{{L^{\infty}_{\tau}L^{\infty}_{n}}} \\ &\lesssim \sup\limits_{n,\tau}\sum\limits_{n_1}\int \frac{d\tau_1}{\langle \tau_1 - n_1^3\rangle^{1-} \langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{1-}} \\ &\lesssim \sum\limits_{n_1}\frac{\log (1+\langle p(n_1) \rangle)}{\langle p(n_1) \rangle^{1-}} \\ &\lesssim 1, \end{split} \end{equation} where, as before, we have used the lemmas~\ref{l.calculus-1} and~\ref{l.1.1}. In the region $\mathcal{C}_0$, it is convenient to consider the following bound \begin{equation*} \begin{split} &\left \| \frac{\langle n \rangle^{2k}}{\langle \tau + n^2 \rangle} \sum\limits_{n_1}\int \frac{\chi_{{\mathcal C}_0}d\tau_1} {\langle \tau_1 - n_1^3\rangle^{1-}\langle n_1\rangle^{2s} \langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{1-}\langle n-n_1\rangle^{2k}} \right \|_{{L^{\infty}_{\tau}L^{\infty}_{n}}} \\ &\lesssim \left \| \frac{1}{\langle \tau + n^2 \rangle} \sum\limits_{n_1}\int \frac{\langle n_1\rangle^{2k-2s}\chi_{{\mathcal C}_0}d\tau_1} {\langle \tau_1 - n_1^3\rangle^{1-}\langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{1-}} \right \|_{{L^{\infty}_{\tau}L^{\infty}_{n}}}, \end{split} \end{equation*} which is an immediate corollary of $\langle n \rangle \leq \langle n-n_1 \rangle \langle n_1 \rangle$. 
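For the reader's convenience, let us spell this reduction out; it uses nothing beyond the elementary inequality $\langle n \rangle\leq \langle n-n_1 \rangle \langle n_1 \rangle$ raised to the power $2k$ (so we implicitly assume $k\geq 0$, which is the regime where these estimates are applied): \begin{equation*} \frac{\langle n \rangle^{2k}}{\langle n_1\rangle^{2s}\langle n-n_1\rangle^{2k}} \leq \frac{\langle n_1\rangle^{2k}\langle n-n_1\rangle^{2k}}{\langle n_1\rangle^{2s}\langle n-n_1\rangle^{2k}} = \langle n_1\rangle^{2k-2s}. \end{equation*}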
Integrating with respect to $\tau_1$ and using the lemma~\ref{l.calculus-1} gives, as before, \begin{equation*} \begin{split} &\frac{1}{\langle \tau + n^2 \rangle} \sum\limits_{n_1}\int \frac{\langle n_1\rangle^{2k-2s}\chi_{{\mathcal C}_0}d\tau_1} {\langle \tau_1 - n_1^3\rangle^{1-}\langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{1-}} \\ &\lesssim \frac{1}{\langle \tau + n^2 \rangle} \sum\limits_{n_1} \frac{\langle n_1\rangle^{2k-2s}\chi_{{\mathcal C}_0}\log(1+\langle p(n_1) \rangle)}{\langle p(n_1) \rangle^{1-}} \end{split} \end{equation*} Since, by the dispersion relation~(\ref{e.uv-dispersion}), $L_0=L_{max}\gtrsim N_1^3$ in the region $\mathcal{C}_0$, we have \begin{equation*} \begin{split} &\frac{1}{\langle \tau + n^2 \rangle} \sum\limits_{n_1} \frac{\langle n_1\rangle^{2k-2s}\chi_{{\mathcal C}_0} \log(1+\langle p(n_1)\rangle)}{\langle p(n_1) \rangle^{1-}} \\ &\lesssim \frac{L_{max}^{(2k-2s)/3}}{L_{max}}\sum\limits_{n_1} \frac{\log(1+\langle p(n_1) \rangle)}{\langle p(n_1) \rangle^{1-}} \end{split} \end{equation*} Hence, $k-s\leq 3/2$ and lemma~\ref{l.1.1} together allow us to conclude \begin{equation}\label{e.c.w0-3} \left \| \frac{\langle n \rangle^{2k}}{\langle \tau + n^2 \rangle} \sum\limits_{n_1}\int \frac{\chi_{{\mathcal C}_0}d\tau_1} {\langle \tau_1 - n_1^3\rangle^{1-}\langle n_1\rangle^{2s} \langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{1-}\langle n-n_1\rangle^{2k}} \right \|_{{L^{\infty}_{\tau}L^{\infty}_{n}}} \lesssim 1. \end{equation} By definition of $\mathcal{R}_0$, the bounds~(\ref{e.c.w0-1}),~(\ref{e.c.w0-2}),~(\ref{e.c.w0-3}) concludes the proof of the claim~\ref{c.w0}. \end{proof} \begin{proof}[Proof of the claim~\ref{c.w1}] Using that $\langle n \rangle\leq \langle n_1 \rangle \langle n-n_1 \rangle$, integrating in the variable $\tau$ and applying the lemma~\ref{l.calculus-1} (with $\theta=1/2$ and $\widetilde{\theta}=\frac{1}{2}-$), we get \begin{equation*} \begin{split} &\left \|\frac{1}{\langle n_1\rangle^{2s} \langle \tau_1 - n_1^3 \rangle} \sum\limits_{n}\int \frac{\langle n \rangle^{2k} \chi_{{\mathcal R}_1}d\tau } {\langle \tau + n^2\rangle\langle \tau-\tau_1 + {(n-n_1)}^2\rangle^{1-}\langle n-n_1\rangle^{2k}} \right \|_{{L^{\infty}_{\tau_1}L^{\infty}_{n_1}}} \\ &\lesssim \left \|\frac{\langle n_1\rangle^{2k-2s}}{\langle \tau_1 - n_1^3 \rangle} \sum\limits_{n} \frac{ \chi_{{\mathcal R}_1}\log(1+\langle q(n) \rangle)} {\langle q(n) \rangle^{1-}} \right \|_{{L^{\infty}_{\tau_1}L^{\infty}_{n_1}}}, \end{split} \end{equation*} where $q(x):=\tau_1-n_1^2+2 n_1 x$. Note that in the region $\mathcal{R}_1$, $N_1>100$ and $N_0\sim N_1$, $|\lambda_1|\sim L_1 = L_{max}$ and, by the dispersion relation~(\ref{e.uv-dispersion}), $L_{max}\gtrsim N_1^3$; this permits us to apply the lemma~\ref{l.1.2} to conclude \begin{equation*} \frac{\langle n_1\rangle^{2k-2s}}{\langle \tau_1 - n_1^3 \rangle^{2b}} \sum\limits_{n} \frac{ \chi_{{\mathcal R}_1}\log(1+\langle q(n) \rangle)} {\langle q(n) \rangle^{1-}}\lesssim \frac{L_{max}^{(2k-2s)/3}}{L_{max}} \end{equation*} Thus, if we remember that $k-s\leq 3/2$, we get \begin{equation*} \frac{\langle n_1\rangle^{2k-2s}}{\langle \tau_1 - n_1^3 \rangle^{2b}} \sum\limits_{n} \frac{ \chi_{{\mathcal R}_1}\log(1+\langle q(n) \rangle)} {\langle q(n) \rangle^{1-}}\lesssim 1. \end{equation*} This completes the proof of the claim~\ref{c.w1}. 
\end{proof} \begin{proof}[Proof of the claim~\ref{c.w2}] Using that $\langle n_1-n_2 \rangle\leq \langle n_1 \rangle\langle n_2 \rangle$, integrating in $\tau_1$ and applying the lemma~\ref{l.calculus-1} with $\theta=\frac{1}{2}-$ and $\widetilde{\theta}=1/2$, \begin{equation*} \begin{split} &\left \|\frac{1}{\langle n_2\rangle^{2k} \langle \tau_2 - n_2^2 \rangle^{2b}} \sum\limits_{n_1}\int \frac{\langle n_1 - n_2 \rangle^{2k}\chi_{\widetilde {\mathcal R}_2}d\tau_1 } {\langle \tau_1-\tau_2 + (n_1-n_2)^2\rangle^{2a} \langle \tau_1 - n_1^3\rangle^{2b}\langle n_1\rangle^{2s}} \right \|_{{L^{\infty}_{\tau_2}L^{\infty}_{n_2}}} \\ &\lesssim \left \|\frac{1}{\langle \tau_2 - n_2^2 \rangle^{2b}} \sum\limits_{n_1} \frac{\langle n_1 \rangle^{2k-2s} \chi_{\widetilde {\mathcal R}_2}\log(1+\langle r(n_1) \rangle)} {\langle r(n_1) \rangle^{1-}} \right\|_{{L^{\infty}_{\tau_2}L^{\infty}_{n_2}}}, \end{split} \end{equation*} where $r(x):=x^3+x^2-2 n_2 x-(\tau_2-n_2^2)$. Note that the change of variables $\tau = \tau_1 -\tau_2$ and $n = n_1 -n_2$ transforms the region ${\mathcal R}_2$ into a set $\widetilde {\mathcal R}_2$ such that \begin{equation*} \widetilde {\mathcal R}_2 \subseteq \bigl \{(n_1,n_2,\tau_1,\tau_2)\in \mathbb Z^2\times\mathbb R^2;\; N_1>100 \quad \text{and}\quad L_2=L_{max} \bigl \}. \end{equation*} In particular, the dispersion relation~(\ref{e.uv-dispersion}) implies that $|\lambda_2|\sim L_2=L_{max}\gtrsim N_1^3$ in the region $\widetilde{\mathcal{R}_2}$. So, an application of the lemma~\ref{l.1.1} and the hypothesis $k-s\leq 3/2$ yields \begin{equation*} \frac{1}{\langle \tau_2 - n_2^2 \rangle^{2b}} \sum\limits_{n_1} \frac{\langle n_1 \rangle^{2k-2s}\chi_{\widetilde {\mathcal R}_2} \log(1+\langle r(n_1) \rangle)} {\langle r(n_1) \rangle^{1-}} \lesssim \frac{L_{max}^{(2k-2s)/3}}{L_{max}}\lesssim 1. \end{equation*} This concludes the proof of the claim~\ref{c.w2}. \end{proof} It remains now only to prove the second estimate~(\ref{e.sharp-uv-2}), i.e., \begin{equation*} \left\|\frac{\langle n \rangle^k}{\langle \tau+n^2 \rangle} \widehat{uv}(n,\tau) \right\|_{L_n^2 L_{\tau}^1} \lesssim \|u\|_{X^{k,\frac{1}{2}-}} \cdot\|v\|_{Y^{s,\frac{1}{2}}} + \|u\|_{X^{k,\frac{1}{2}}}\cdot \|v\|_{Y^{s,\frac{1}{2}-}} \end{equation*} We can rewrite the left-hand side as \begin{equation*} \left\|\int\limits_{n=n_1+n_2} \langle n\rangle^k \int\limits_{\tau=\tau_1+\tau_2} \frac{1}{\langle\tau+n^2\rangle} \widehat{u}(n_1,\tau_1) \widehat{v}(n_2,\tau_2) \right\|_{L_n^2 L_{\tau}^1} \end{equation*} To begin with, we split the domain of integration into three regions $\mathcal{L}$, $\mathcal{M}$ and $\mathcal{N}$. Let $\mathcal{L} =\mathcal{L}_1\cup\mathcal{L}_2\cup \mathcal{L}_3$, where $$\mathcal{L}_1:=\{(n,\tau,n_2,\tau_2): |n_2|\leq 100\},$$ $$\mathcal{L}_2:=\{(n,\tau,n_2,\tau_2): |n_2|> 100 \textrm{ and } |n|\ll |n_2|\},$$ $$\mathcal{L}_3:=\{(n,\tau,n_2,\tau_2): |n_2|> 100 \textrm{ and } |n|\gg |n_2|\},$$ $\mathcal{M}:=\{(n,\tau,n_2,\tau_2): |n_2|> 100, |n|\sim |n_2| \textrm{ and either } |\tau_1+n_1^2| = L_{\max} \textrm{ or } |\tau_2-n_2^3| = L_{\max}\}$ and $\mathcal{N}:=\{(n,\tau,n_2,\tau_2): |n_2|> 100, |n|\sim |n_2| \textrm{ and } |\tau+n^2| = L_{\max}\}$. 
Clearly, $\mathcal{L}$, $\mathcal{M}$ and $\mathcal{N}$ completely decomposes our domain of integrations, so that, in order to prove~(\ref{e.sharp-uv-2}), it suffices to get the bounds \begin{equation}\label{e.sharp-uv-2-L} \begin{split} &\left\|\int\limits_{n=n_1+n_2} \frac{\langle n\rangle^k}{\langle n_1\rangle^k \langle n_2\rangle^{s}} \int\limits_{\tau=\tau_1+\tau_2} \frac{\chi_{\mathcal{L}}}{ \langle\tau+n^2\rangle\langle\tau_1+n_1^2\rangle^{\frac{1}{2}-} \langle\tau_2-n_2^3\rangle^{\frac{1}{2}-}}\widehat{u}(n_1,\tau_1) \widehat{v}(n_2,\tau_2)\right\|_{L_n^2 L_{\tau}^1} \\ &\lesssim \|u\|_{X^{0,0}} \|v\|_{Y^{0,0}} \end{split} \end{equation} \begin{equation}\label{e.sharp-uv-2-M} \begin{split} &\left\|\int\limits_{n=n_1+n_2} \frac{\langle n\rangle^k}{\langle n_1\rangle^k \langle n_2\rangle^{s}} \int\limits_{\tau=\tau_1+\tau_2} \frac{\chi_{\mathcal{M}}}{ \langle\tau+n^2\rangle}\widehat{u}(n_1,\tau_1) \widehat{v}(n_2,\tau_2) \right\|_{L_n^2 L_{\tau}^1} \\ &\lesssim \|u\|_{X^{0,\frac{1}{2}-}} \|v\|_{Y^{0,\frac{1}{2}}} + \|u\|_{X^{0,\frac{1}{2}}} \|v\|_{Y^{0,\frac{1}{2}-}} \end{split} \end{equation} \begin{equation}\label{e.sharp-uv-2-N} \begin{split} &\left\|\int\limits_{n=n_1+n_2} \frac{\langle n\rangle^k}{\langle n_1\rangle^k\langle n_2\rangle^{s}} \int\limits_{\tau=\tau_1+\tau_2} \frac{\chi_{\mathcal{N}}}{\langle\tau+n^2\rangle} \widehat{u}(n_1,\tau_1) \widehat{v}(n_2,\tau_2) \right\|_{L_n^2 L_{\tau}^1} \\ &\lesssim \|u\|_{X^{0,\frac{1}{2}-}} \|v\|_{Y^{0,\frac{1}{2}}} + \|u\|_{X^{0,\frac{1}{2}}} \|v\|_{Y^{0,\frac{1}{2}-}} \end{split} \end{equation} To proceed further, we need to recall the following Bourgain-Strichartz inequalities: \begin{lemma}[Bourgain~\cite{Bourgain}]\label{l.Bourgain} $X^{0,3/8}([0,1]), Y^{0,1/3}([0,1])\subset L^4(\mathbb T\times [0,1])$. More precisely, $$\|\psi(t) f\|_{L_{xt}^4}\lesssim \|f\|_{X^{0,3/8}} \quad \textrm{ and } \quad \|\psi(t) g\|_{L_{xt}^4}\lesssim \|g\|_{Y^{0,1/3}}.$$ \end{lemma} To prove the first bound~(\ref{e.sharp-uv-2-L}), we start with the simple observation that $$\frac{\langle n\rangle^k}{\langle n_1\rangle^k \langle n_2\rangle^{s}} \lesssim 1,$$ if either $|n_2|\leq 100$, or $|n_2|>100$ and $|n|\ll |n_2|$, or $|n_2|>100$ and $|n|\gg|n_2|$. This follows from the fact that $\langle n\rangle\leq \langle n_1\rangle\langle n_2\rangle$ and $s\geq 0$. Hence, \begin{equation*} \begin{split} &\left\|\int\limits_{n=n_1+n_2} \frac{\langle n\rangle^k}{\langle n_1\rangle^k \langle n_2\rangle^{s}} \int\limits_{\tau=\tau_1+\tau_2} \frac{\chi_{\mathcal{L}}}{\langle\tau+n^2\rangle\langle\tau_1+n_1^2\rangle^{\frac{1}{2}-} \langle\tau_2-n_2^3\rangle^{\frac{1}{2}-}}\widehat{u}(n_1,\tau_1) \widehat{v}(n_2,\tau_2)\right\|_{L_n^2 L_{\tau}^1} \\ &\lesssim \left\|\int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \frac{1}{\langle\tau+n^2\rangle \langle\tau_1+n_1^2\rangle^{\frac{1}{2}-} \langle\tau_2-n_2^3\rangle^{\frac{1}{2}-}}\widehat{u}(n_1,\tau_1) \widehat{v}(n_2,\tau_2)\right\|_{L_n^2 L_{\tau}^1}. \end{split} \end{equation*} Therefore, this reduces our goal to prove that \begin{equation*} \left\|\int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \frac{1}{\langle\tau+n^2\rangle \langle\tau_1+n_1^2\rangle^{\frac{1}{2}-} \langle\tau_2-n_2^3\rangle^{\frac{1}{2}-}} \widehat{u}(n_1,\tau_1) \widehat{v}(n_2,\tau_2) \right\|_{L_n^2 L_{\tau}^1} \lesssim \|u\|_{X^{0,0}} \|v\|_{Y^{0,0}}. 
\end{equation*} This can be re-written as \begin{equation*} \left\|\frac{1}{\langle\tau+n^2\rangle^{5/8}\langle\tau+n^2\rangle^{3/8}} \int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u}(n_1,\tau_1) \widehat{v}(n_2,\tau_2) \right\|_{L_n^2 L_{\tau}^1} \lesssim \|u\|_{X^{0,\frac{1}{2}-}} \|v\|_{Y^{0,\frac{1}{2}-}}. \end{equation*} Since $2(-5/8)<-1$, the Cauchy-Schwarz inequality in $\tau$ reduces this bound to showing \begin{equation*} \left\|\frac{1}{\langle\tau+n^2\rangle^{3/8}}\int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u}(n_1,\tau_1) \widehat{v}(n_2,\tau_2) \right\|_{L_n^2 L_{\tau}^2} \lesssim \|u\|_{X^{0,\frac{1}{2}-}} \|v\|_{Y^{0,\frac{1}{2}-}}. \end{equation*} However, this bound is an easy consequence of duality, $L^4_{xt} L^2_{xt} L^4_{xt}$ H\"older and the Bourgain-Strichartz inequalities $X^{0,3/8}, Y^{0,1/3}\subset L^4$ in the lemma~\ref{l.Bourgain}. The second bound~(\ref{e.sharp-uv-2-M}) can be proved in an analogous fashion, using the dispersion relation \begin{equation}\label{e.uv-Dispersion} (\tau+n^2) - (\tau_1+n_1^2) - (\tau_2-n_2^3) = n_2^3 - n_2^2 + 2 n n_2, \end{equation} which implies that, in the region $\mathcal{M}$, either $|\tau_1+n_1^2|\gtrsim |n_2|^3$ or $|\tau_2-n_2^3|\gtrsim |n_2|^3$. Thus, using that $k-s\leq 3/2$ and making the corresponding cancelation, we see that it suffices to prove that \begin{equation*} \left\|\frac{1}{\langle\tau+n^2\rangle^{5/8}\langle\tau+n^2\rangle^{3/8}} \int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u}(n_1,\tau_1) \widehat{v}(n_2,\tau_2) \right\|_{L_n^2 L_{\tau}^1} \lesssim \|u\|_{X^{0,0}} \|v\|_{Y^{0,\frac{1}{2}-}} \end{equation*} and \begin{equation*} \left\|\frac{1}{\langle\tau+n^2\rangle^{5/8}\langle\tau+n^2\rangle^{3/8}} \int \limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u}(n_1,\tau_1) \widehat{v}(n_2,\tau_2) \right\|_{L_n^2 L_{\tau}^1} \lesssim \|u\|_{X^{0,\frac{1}{2}-}} \|v\|_{Y^{0,0}}. \end{equation*} Again, we use Cauchy-Schwarz to reduce these estimates to \begin{equation*} \left\|\frac{1}{\langle\tau+n^2\rangle^{3/8}}\int \limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u}(n_1,\tau_1) \widehat{v}(n_2,\tau_2) \right \|_{L_n^2 L_{\tau}^2} \lesssim \|u\|_{X^{0,0}} \|v\|_{Y^{0,\frac{1}{2}-}} \end{equation*} and \begin{equation*} \left\|\frac{1}{\langle\tau+n^2\rangle^{3/8}}\int \limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u}(n_1,\tau_1) \widehat{v}(n_2,\tau_2) \right\|_{L_n^2 L_{\tau}^2} \lesssim \|u\|_{X^{0,\frac{1}{2}-}} \|v\|_{Y^{0,0}}, \end{equation*} which follows from duality, H\"older and Bourgain-Strichartz, as above. Finally, the third bound~(\ref{e.sharp-uv-2-N}) requires a subdivision into two cases. When $|\tau_1+n_1^2|\gtrsim |n_2|^{2-}$ (resp., $|\tau_2-n_2^3|\gtrsim |n_2|^{2-}$), we use $\langle\tau_1+n_1^2\rangle^{1/8}$ leaving $\langle\tau_1+n_1^2\rangle^{3/8}$ in the denominator and $|n_2|^{k-s-}$ in the numerator (resp., a similar argument with $(\tau_2-n_2^3)$ instead of $(\tau_1+n_1^2)$, using $\langle\tau_2-n_2^3\rangle^{1/6}$ and leaving $\langle\tau_2-n_2^3\rangle^{1/3}$). 
After another cancelation using $|\tau+n^2|\gtrsim |n_2|^3$, we need to prove \begin{equation*} \left\|\frac{1}{\langle\tau+n^2\rangle^{1/2+}}\int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u}(n_1,\tau_1) \widehat{v}(n_2,\tau_2) \right\|_{L_n^2 L_{\tau}^1} \lesssim \|u\|_{X^{0,3/8}} \|v\|_{Y^{0,\frac{1}{2}-}}, \end{equation*} and \begin{equation*} \left\|\frac{1}{\langle\tau+n^2\rangle^{1/2+}}\int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u}(n_1,\tau_1) \widehat{v}(n_2,\tau_2) \right\|_{L_n^2 L_{\tau}^1} \lesssim \|u\|_{X^{0,\frac{1}{2}-}} \|v\|_{Y^{0,1/3}}. \end{equation*} These bounds follow again from Cauchy-Schwarz in $\tau$, duality, H\"older and Bourgain-Strichartz. It remains only to treat the case $|\tau_1+n_1^2|, |\tau_2-n_2^3|\ll |n_2|^{2-}$. In this case, the dispersion relation says that, in the region $\mathcal{N}$, $$\tau+n^2 = n_2^3-n_2^2+2n n_2 + O(|n_2|^{2-}).$$ On the other hand, the cancelation using $|\tau+n^2|\gtrsim |n_2|^3$ and $k-s\leq 3/2$ reduces the proof to the bound \begin{equation*} \left\|\frac{1}{\langle\tau+n^2\rangle^{1/2}}\int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u}(n_1,\tau_1) \widehat{v}(n_2,\tau_2) \chi_{\Omega(n)}(\tau+n^2) \right\|_{L_n^2 L_{\tau}^1} \lesssim \|u\|_{X^{0,\frac{1}{2}-}} \|v\|_{Y^{0,\frac{1}{2}-}}, \end{equation*} where $\Omega(n) = \{\eta\in\mathbb R: \eta = r^3-r^2+2n r + O(|r|^{2-}), \text{ for some } r\in\mathbb Z, |r|\sim |n| > 100\}$. Applying Cauchy-Schwarz in $\tau$, we can estimate the left-hand side by $$\left\|\left(\int \langle\tau+n^2\rangle^{-1} \chi_{\Omega(n)}(\tau+n^2)\, d\tau \right)^{1/2} \left\|\int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u}(n_1,\tau_1) \widehat{v}(n_2,\tau_2)\right\|_{L_{\tau}^2}\right\|_{L_n^2}$$ Therefore, the point is to show \begin{equation}\label{e.uv-dyadic} \sup_{n}\left(\int \langle\tau+n^2\rangle^{-1} \chi_{\Omega(n)}(\tau+n^2) d\tau\right)\lesssim 1 \end{equation} We need the following lemma: \begin{lemma}\label{l.uv-dyadic} There exists some $\delta>0$ such that, for any fixed $n\in\mathbb Z$, $|n|\gg 1$ and for all $M\geq 1$ dyadic, we have \begin{equation*} |\{\mu\in\mathbb R: |\mu|\sim M, \mu = r^3-r^2+2n r + O(|r|^{2-}), \text{ for some } r\in\mathbb Z, |r|\sim |n|\}|\lesssim M^{1-\delta}. \end{equation*} \end{lemma} \begin{proof}Note that the dyadic block $\{|\mu|\sim M\}$ contains at most $O(M/N^2)+1$ integers of the form $r^3-r^2+2n r$ with $|r|\sim |n|$, $r\in\mathbb Z$, where $N\sim |n|$. Indeed, this follows from the fact that the distance between two consecutive numbers of this form is $\sim N^2$. Thus, the set of $\mu$ verifying $\mu = r^3-r^2+2n r + O(|r|^{2-})$ is the union of $O(M/N^2)+1$ intervals of size $O(N^{2-})$. Since the relation $\mu = r^3-r^2+2n r + O(|r|^{2-})$ with $|\mu|\sim M$ and $|r|\sim |n|\sim N\gg 1$ implies that $M\sim N^3$, we get \begin{equation*} \begin{split} &\left|\{\mu\in\mathbb R: |\mu|\sim M, \mu = r^3-r^2+2n r + O(|r|^{2-}), \text{ for some } r\in\mathbb Z, |r|\sim |n|\}\right|\lesssim \\ & N^{2-}\cdot\frac{M}{N^2} \lesssim M^{1-}. 
\end{split} \end{equation*} This completes the proof of the lemma~\ref{l.uv-dyadic} \end{proof} Using the lemma~\ref{l.uv-dyadic}, it is not difficult to conclude the proof of~(\ref{e.uv-dyadic}): by changing variables, we have to estimate $$\sup_n \int \langle \mu \rangle^{-1} \chi_{\Omega(n)}(\mu) d\mu.$$ By decomposing the domain of integration into dyadic blocks $\{|\mu|\sim M\}$, the lemma~\ref{l.uv-dyadic} gives \begin{equation*} \int \langle \mu \rangle^{-1} \chi_{\Omega(n)}(\mu) d\mu \leq 1+ \sum_{M\geq 1}\int\limits_{|\mu|\sim M} \langle \mu \rangle^{-1} \chi_{\Omega(n)}(\mu) d\mu \lesssim 1+\sum\limits_{M\geq 1; \, M \textrm{ dyadic }} M^{-1} M^{1-\delta} \lesssim 1. \end{equation*} This proves the estimate~(\ref{e.sharp-uv-2}), thus completing the proof of the lemma~\ref{l.sharp-uv}. \end{proof} \subsection{Proof of proposition~\ref{p.du2}: bilinear estimates for the coupling term $\partial_x (|u|^2)$} By the lemma~\ref{l.example-du2}, it suffices to prove the bilinear estimate: \begin{lemma}\label{l.sharp-du2}$\|\partial_x(u_1 \overline{u_2})\|_{W^s}\lesssim \|u_1\|_{X^{k,\frac{1}{2}-}}\|u_2\|_{X^{k,\frac{1}{2}}} + \|u_1\|_{X^{k,\frac{1}{2}}}\|u_2\|_{X^{k,\frac{1}{2}-}}$ whenever $1+s\leq 4k$ and $k-s\geq -1/2$. \end{lemma} \begin{proof}From the definition of $W^s$, we have to prove that \begin{equation}\label{e.sharp-du2-1} \|\partial_x(u_1 \overline{u_2})\|_{Y^{s,-1/2}}\lesssim \|u_1\|_{X^{k,\frac{1}{2}-}}\|u_2\|_{X^{k,\frac{1}{2}}} + \|u_1\|_{X^{k,\frac{1}{2}}}\|u_2\|_{X^{k,\frac{1}{2}-}} \end{equation} and \begin{equation}\label{e.sharp-du2-2} \left\|\frac{\langle n\rangle^s}{\langle \tau-n^3 \rangle}\widehat{\partial_x(u_1 \overline{u_2})} \right\|_{L_n^2 L_{\tau}^1}\lesssim \|u_1\|_{X^{k,\frac{1}{2}-}}\|u_2\|_{X^{k,\frac{1}{2}}} + \|u_1\|_{X^{k,\frac{1}{2}}}\|u_2\|_{X^{k,\frac{1}{2}-}}. \end{equation} We begin with the proof of~(\ref{e.sharp-du2-1}). First, we reduce the bilinear estimate to some multiplier estimates as follows. 
By the definition of Bourgain's spaces, \begin{equation*} \begin{split} \|\partial_x (u_1\overline {u_2})\|_{Y^{s,-a}}&= \|\langle \tau-n^3 \rangle^{-a} \langle n \rangle^s \widehat{\partial_x (u_1 \overline{u_2})}\|_{L^2_{\tau}L^2_n} \\ &= \| n \langle \tau-n^3 \rangle^{-a} \langle n \rangle^s \widehat{u_1}*\widehat{\overline{u_2}} (n,\tau)\|_{L^2_{\tau}L^2_n} \end{split} \end{equation*} Let \begin{equation*} f(n,\tau) = \langle n \rangle^k \langle \tau+n^2 \rangle^b \widehat{u}_1(n,\tau) \quad \text{ and } \quad g(n,\tau) = \langle n \rangle^k \langle -\tau+n^2 \rangle^c \overline{\widehat{u}_2(-n,-\tau)} \end{equation*} By duality, \begin{equation}\label{e.sharp-du2-1-duality} \begin{split} \|\partial_x (u_1\overline {u_2})\|_{Y^{s,-a}}&= \sup\limits_{\|\varphi\|_{L_{\tau}^2 l_n^2}\leq 1} \sum\limits_{n\in\mathbb Z}\int d\tau \sum\limits_{n_1\in\mathbb Z}\int d\tau_1 \frac{|n| \langle n \rangle^s}{\langle \tau-n^3 \rangle^{a}}\widehat{u}_1(n-n_1, \tau-\tau_1) \overline{\widehat{u}_2(-n_1,-\tau_1)} \cdot \overline{\varphi(n,\tau)} \\ &= \sup\limits_{\|\varphi\|_{L_{\tau}^2 l_n^2}\leq 1} \sum\limits_{n\in\mathbb Z}\int d\tau \sum\limits_{n_1\in\mathbb Z}\int d\tau_1 \frac{|n| \langle n \rangle^s \langle \tau-n^3 \rangle^{-a} f(n-n_1,\tau-\tau_1) g(n_1,\tau_1) \overline{\varphi(n,\tau)}} {\langle n-n_1 \rangle^k \langle (\tau-\tau_1)+(n-n_1)^2 \rangle^b \langle n_1 \rangle^k \langle -\tau_1+n_1^2 \rangle^c} \\ &= \sum\int\sum\int_{(n,n_1,\tau,\tau_1)\in\mathcal{V}_0} + \sum\int\sum\int_{(n,n_1,\tau,\tau_1)\in\mathcal{V}_1} + \sum\int\sum\int_{(n,n_1,\tau,\tau_1)\in\mathcal{V}_2} \\ &\equiv V_0+V_1 + V_2 \end{split} \end{equation} whenever $\mathbb Z^2\times\mathbb R^2 = \mathcal{V}_0\cup\mathcal{V}_1\cup\mathcal{V}_2$. As before, we have three general ways to estimate the quantity \begin{equation}\label{e.v} V=\sup\limits_{\|\varphi\|_{L_{\tau}^2 l_n^2}\leq 1} \sum\limits_{n\in\mathbb Z}\int d\tau \sum\limits_{n_1\in\mathbb Z}\int d\tau_1 \frac{|n| \langle n \rangle^s \langle \tau-n^3 \rangle^{-a} f(n-n_1,\tau-\tau_1) g(n_1,\tau_1) \overline{\varphi(n,\tau)}} {\langle n-n_1 \rangle^k \langle (\tau-\tau_1)+(n-n_1)^2 \rangle^b \langle n_1 \rangle^k \langle -\tau_1+n_1^2 \rangle^c}\chi_{\mathcal{V}} \end{equation} Firstly, we integrate over $\tau_1$ and $n_1$ and then use Cauchy-Schwarz and H\"older inequalities to obtain \begin{equation}\label{e.v0} \begin{split} |V|^2 &\leq \|\varphi\|_{L_{n,\tau}^2}^2 \left\|\frac{|n| \langle n \rangle^s}{\langle \tau-n^3 \rangle^{a}}\sum\limits_{n_1}\int d\tau_1 \frac{g(n_1,\tau_1) f(n-n_1,\tau-\tau_1)\chi_{\mathcal{V}}}{\langle n-n_1 \rangle^k \langle (\tau-\tau_1)+(n-n_1)^2 \rangle^b \langle n_1 \rangle^k \langle -\tau_1+n_1^2 \rangle^c} \right\|_{L_{\tau}^2 L_n^2} \\ &\leq\|u_1\|_{X^{k,b}}^2 \|u_2\|_{X^{k,c}}^2 \times \\ &\quad \times \left\| \frac{|n|^2 \langle n \rangle^{2s}}{\langle \tau - n^3 \rangle^{2a}} \sum\limits_{n_1} \frac{1}{\langle n_1 \rangle^{2k} \langle n-n_1\rangle^{2k}} \int d\tau_1 \frac{\chi_{\mathcal{V}}} {\langle -\tau_1 + n_1^2\rangle^{2c}\langle (\tau-\tau_1) + (n-n_1)^2\rangle^{2b}}\right\|_{L_{\tau}^{\infty} L_n^{\infty}}. 
\end{split} \end{equation} Secondly, we put $\widetilde{f}(n,\tau) = f(-n,-\tau)$, integrate over $n$ and $\tau$ and then use the same steps above to get \begin{equation}\label{e.v1} \begin{split} |V|^2&\leq \|g\|_{L_{\tau_1}^2 L_{n_1}^2}^2 \left\| \frac{1}{\langle n_1 \rangle^k \langle -\tau_1+n_1^2 \rangle^c}\sum\limits_n \int d\tau \frac{|n| \langle n \rangle^s}{\langle \tau-n^3 \rangle^{a}} \frac{\widetilde{f}(n_1-n,\tau_1-\tau)\overline{\varphi(n,\tau)}\chi_{\mathcal{V}}} {\langle (\tau-\tau_1) + (n-n_1)^2 \rangle^b \langle n_1-n \rangle^k} \right\|_{L_{\tau_1}^2 L_{n_1}^2}^2 \\ &\leq \|u_1\|_{X^{k,b}}^2 \|u_2\|_{X^{k,c}}^2 \times \\ &\quad \times \left\| \frac{1}{\langle n_1 \rangle^{2k} \langle -\tau_1+n_1^2 \rangle^{2c}} \sum\limits_n \int d\tau \frac{|n|^2 \langle n \rangle^{2s}\chi_{\mathcal{V}}}{\langle \tau-n^3 \rangle^{2a} \langle (\tau-\tau_1) + (n-n_1)^2 \rangle^{2b} \langle n_1-n \rangle^{2k}} \right\|_{L_{\tau_1}^{\infty} L_{n_1}^{\infty}}. \end{split} \end{equation} Finally, using the change of variables $\tau_1=\tau+\tau_2$ and $n_1=n+n_2$, we transform $\mathcal{V}$ into the region \begin{equation*} \widetilde{\mathcal{V}}= \{(n,n_2,\tau,\tau_2): (n,n+n_2,\tau,\tau+\tau_2)\in \mathcal{V}\} \end{equation*} and, hence, integrating over $\tau$ and $n$, we can estimate \begin{equation}\label{e.v2} \begin{split} |V|^2 &\leq \|\widetilde{f}\|_{L_{\tau_2}^2 L_{n_2}^2}^2 \left\|\frac{1}{\langle n_2 \rangle^k \langle -\tau_2 + n_2^2 \rangle^b} \sum\limits_{n\in\mathbb Z} \int d\tau \frac{|n|\langle n \rangle^s g(n+n_2,\tau+\tau_2) \overline{\varphi(n,\tau)}\chi_{\widetilde{\mathcal{V}}}}{\langle \tau - n^3 \rangle^a \langle -(\tau+\tau_2)+(n+n_2)^2 \rangle^c} \right\|_{L_{\tau_2}^2 L_{n_2}^2}^2 \\ &\leq \|u_1\|_{X^{k,b}}^2 \|u_2\|_{X^{k,c}}^2 \times \\ &\times \left\|\frac{1}{\langle n_2 \rangle^{2k} \langle -\tau_2 + n_2^2 \rangle^{2b}} \sum\limits_{n\in\mathbb Z} \frac{|n|^2\langle n \rangle^{2s}}{\langle n+n_2 \rangle^{2k}} \int d\tau \frac{\chi_{\widetilde{\mathcal{V}}}}{\langle \tau - n^3 \rangle^{2a} \langle -(\tau+\tau_2)+(n+n_2)^2 \rangle^{2c}} \right\|_{L_{\tau_2}^{\infty} L_{n_2}^{\infty}} \end{split} \end{equation} The next step is to use the estimates~(\ref{e.v0}),~(\ref{e.v1}) and~(\ref{e.v2}) for the expression~(\ref{e.v}) to reduce the bilinear estimate $\|\partial_x(u_1 \overline{u_2})\|_{Y^{s,-1/2}}\lesssim \|u_1\|_{X^{k,\frac{1}{2}-}}\|u_2\|_{X^{k,\frac{1}{2}}} + \|u_1\|_{X^{k,\frac{1}{2}}}\|u_2\|_{X^{k,\frac{1}{2}-}}$ to $L^{\infty}$ bounds for certain multipliers localized in some well-chosen regions $\mathcal{V}_0$, $\mathcal{V}_1$ and $\mathcal{V}_2$. We consider $n_0:=n$, $n_1$ and $n_2:=n_1-n$ the frequencies of our waves and $\lambda_0:=\tau-n^3$, $\lambda_1:=-\tau_1+n_1^2$, $\lambda_2:=-\tau_2+n_2^2:=(\tau-\tau_1)+(n-n_1)^2$ the modulations of our waves; again, $L_j= |\lambda_j|$ are variables measuring the magnitude of the modulations, $j=0,1,2$. We define $L_{max}\geq L_{med}\geq L_{min}$ to be the maximum, median and minimum of $L_0,L_1,L_2$. In order to define the regions $\mathcal{V}_i$, we split $\mathbb Z^2\times\mathbb R^2$ into three regions $\mathcal{O},\mathcal{P},\mathcal{Q}$, \begin{equation*} \begin{split} &\mathcal{O}=\{(n,n_1,\tau,\tau_1)\in\mathbb Z^2\times\mathbb R^2: |n|\leq 100\}, \\ &\mathcal{P}=\{(n,n_1,\tau,\tau_1)\in\mathbb Z^2\times\mathbb R^2: |n|\geq 100 \quad \text{ and } \quad |n_1|\gtrsim |n|^2\}, \\ &\mathcal{Q}=\{(n,n_1,\tau,\tau_1)\in\mathbb Z^2\times\mathbb R^2: |n|\geq 100 \quad \text{ and } \quad |n_1|\ll |n|^2\}. 
\end{split} \end{equation*} Now we separate $\mathcal{Q}$ into three parts \begin{equation*} \begin{split} &\mathcal{Q}_0=\{(n,n_1,\tau,\tau_1)\in\mathcal{Q}: L_0=L_{max}\}, \\ &\mathcal{Q}_1=\{(n,n_1,\tau,\tau_1)\in\mathcal{Q}: L_1=L_{max}\}, \\ &\mathcal{Q}_2=\{(n,n_1,\tau,\tau_1)\in\mathcal{Q}: L_2=L_{max}\}. \end{split} \end{equation*} At this point, we put \begin{equation*} \begin{split} &\mathcal{V}_0=\mathcal{O}\cup\mathcal{P}\cup\mathcal{Q}_0, \\ &\mathcal{V}_1=\mathcal{Q}_1, \\ &\mathcal{V}_2=\mathcal{Q}_2. \end{split} \end{equation*} We have the following multiplier estimates: \begin{claim}\label{c.v0}If $1+s\leq 4k$ and $k-s\geq -1/2$, $$\left\| \frac{|n|^2 \langle n \rangle^{2s}}{\langle \tau - n^3 \rangle} \sum\limits_{n_1} \frac{1}{\langle n_1 \rangle^{2k} \langle n-n_1\rangle^{2k}} \int d\tau_1 \frac{\chi_{\mathcal{V}_0}} {\langle -\tau_1 + n_1^2\rangle^{1-}\langle (\tau-\tau_1) + (n-n_1)^2\rangle^{1-}}\right\|_{L_{\tau}^{\infty} L_n^{\infty}} \lesssim 1.$$ \end{claim} \begin{claim}\label{c.v1}If $1+s\leq 4k$ and $k-s\geq -1/2$, $$\left\| \frac{1}{\langle n_1 \rangle^{2k} \langle -\tau_1+n_1^2 \rangle} \sum\limits_n \int d\tau \frac{|n|^2 \langle n \rangle^{2s}\chi_{\mathcal{V}_1}}{\langle \tau-n^3 \rangle \langle (\tau-\tau_1) + (n-n_1)^2 \rangle^{1-} \langle n_1-n \rangle^{2k}} \right\|_{L_{\tau_1}^{\infty} L_{n_1}^{\infty}}\lesssim 1.$$ \end{claim} \begin{claim}\label{c.v2}If $1+s\leq 4k$ and $k-s\geq -1/2$, $$\left\|\frac{1}{\langle n_2 \rangle^{2k} \langle -\tau_2 + n_2^2 \rangle} \sum\limits_{n\in\mathbb Z} \frac{|n|^2\langle n \rangle^{2s}}{\langle n+n_2 \rangle^{2k}} \int d\tau \frac{\chi_{\widetilde{\mathcal{V}_2}}}{\langle \tau - n^3 \rangle \langle -(\tau+\tau_2)+(n+n_2)^2 \rangle^{1-}} \right\|_{L_{\tau_2}^{\infty} L_{n_2}^{\infty}}\lesssim 1,$$ where $\widetilde{\mathcal{V}_2}$ is the image of $\mathcal{V}_2$ under the change of variables $n_2:=n_1-n$ and $\tau_2:=\tau_1-\tau$. \end{claim} Again, it is easy to show that these facts imply the desired bilinear estimate~(\ref{e.sharp-du2-1}). Indeed, by the equations~(\ref{e.v0}),~(\ref{e.v1}),~(\ref{e.v2}), we see that, for $a=1/2$ and well-chosen $b,c$, these claims mean that, whenever $1+s\leq 4k$ and $k-s\geq -1/2$, $|V_0|\lesssim \|u_1\|_{X^{k,\frac{1}{2}-}}\|u_2\|_{X^{k,\frac{1}{2}-}}$, $|V_1|\lesssim \|u_1\|_{X^{k,\frac{1}{2}-}}\|u_2\|_{X^{k,\frac{1}{2}}}$ and $|V_2|\lesssim \|u_1\|_{X^{k,\frac{1}{2}}}\|u_2\|_{X^{k,\frac{1}{2}-}}$. Putting this information into the equation~(\ref{e.sharp-du2-1-duality}), we obtain the bilinear estimate~(\ref{e.sharp-du2-1}). Hence, we have only to prove these claims. For later use, we recall that our dispersion relation is \begin{equation}\label{e.du2-dispersion} \lambda_0+\lambda_1-\lambda_2 = -n^3-n^2+2n_1n \end{equation} \begin{proof}[Proof of the claim~\ref{c.v0}] In the region $\mathcal{O}$, using that $|n|\leq 100$, \begin{equation*} \begin{split} &\sup\limits_{n,\tau} \frac{|n|^2 \langle n \rangle^{2s}}{\langle \tau - n^3 \rangle} \sum\limits_{n_1} \frac{1}{\langle n_1 \rangle^{2k} \langle n-n_1\rangle^{2k}} \int d\tau_1 \frac{\chi_{\mathcal{O}}} {\langle -\tau_1 + n_1^2\rangle^{1-}\langle (\tau-\tau_1) + (n-n_1)^2\rangle^{1-}} \\ &\lesssim \frac{1}{\langle \tau - n^3 \rangle} \sum\limits_{n_1} \frac{1}{\langle n_1 \rangle^{2k} \langle n-n_1\rangle^{2k}} \frac{1}{\langle \lambda_1-\lambda_2 \rangle^{1-}}, \end{split} \end{equation*} by the lemma~\ref{l.calculus-1}. 
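For completeness, we also record how the relation~(\ref{e.du2-dispersion}) follows from the definitions $\lambda_0=\tau-n^3$, $\lambda_1=-\tau_1+n_1^2$ and $\lambda_2=(\tau-\tau_1)+(n-n_1)^2$; this is a direct computation, included only as a sanity check: \begin{equation*} \lambda_0+\lambda_1-\lambda_2 =(\tau-n^3)+(-\tau_1+n_1^2)-\bigl((\tau-\tau_1)+(n-n_1)^2\bigr) =-n^3+n_1^2-(n-n_1)^2 =-n^3-n^2+2n_1 n. \end{equation*}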
By the dispersion relation~(\ref{e.du2-dispersion}) and the fact $\langle x+y \rangle\leq \langle x \rangle \langle y \rangle$, we obtain the bound \begin{equation}\label{e.c.v0-1} \begin{split} &\sup\limits_{n,\tau} \frac{|n|^2 \langle n \rangle^{2s}}{\langle \tau - n^3 \rangle} \sum\limits_{n_1} \frac{1}{\langle n_1 \rangle^{2k} \langle n-n_1\rangle^{2k}} \int d\tau_1 \frac{\chi_{\mathcal{O}}} {\langle -\tau_1 + n_1^2\rangle^{1-} \langle (\tau-\tau_1) + (n-n_1)^2\rangle^{1-}} \\ &\lesssim \sup\limits_{n\neq 0}\sum\limits_{n_1} \frac{1}{\langle n_1 \rangle^{2k} \langle n-n_1 \rangle^{2k} \langle -n^3-n^2+2nn_1 \rangle^{1-}} \\ &\lesssim 1, \end{split} \end{equation} if $k>0$. In the region $\mathcal{P}$, we consider the cases $n_1=(n^2+n)/2$ and $|n_1-(n^2+n)/2|\geq 1$. Using that $|n|\lesssim |n_1|^{1/2}$, $4k\geq 1+s$, the dispersion relation~(\ref{e.du2-dispersion}) and the fact that $\langle xy \rangle\gtrsim\langle x \rangle \langle y \rangle$ whenever $|x|,|y|\geq 1$, we see that \begin{equation*} \begin{split} &\sup\limits_{n,\tau} \frac{|n|^2 \langle n \rangle^{2s}}{\langle \tau - n^3 \rangle} \sum\limits_{n_1} \frac{1}{\langle n_1 \rangle^{2k} \langle n-n_1\rangle^{2k}} \int d\tau_1 \frac{\chi_{\mathcal{P}}} {\langle -\tau_1 + n_1^2\rangle^{1-}\langle (\tau-\tau_1) + (n-n_1)^2\rangle^{1-}} \\ &\lesssim C + \sup\limits_{n,\tau} |n|^2 \langle n \rangle^{2s} \sum\limits_{|n_1-(n^2+n)/2|\geq 1} \frac{1}{\langle n_1 \rangle^{2k} \langle n-n_1\rangle^{2k} \langle n \rangle^{1-} \langle n_1-(n^2+n)/2 \rangle^{1-}}. \end{split} \end{equation*} Thus, \begin{equation}\label{e.c.v0-2} \begin{split} &\sup\limits_{n,\tau} \frac{|n|^2 \langle n \rangle^{2s}}{\langle \tau - n^3 \rangle} \sum\limits_{n_1} \frac{1}{\langle n_1 \rangle^{2k} \langle n-n_1\rangle^{2k}} \int d\tau_1 \frac{\chi_{\mathcal{P}}} {\langle -\tau_1 + n_1^2\rangle^{1-}\langle (\tau-\tau_1) + (n-n_1)^2\rangle^{1-}} \\ &\lesssim C + \sup\limits_{n,\tau} |n|^{1+} \langle n \rangle^{2s} \sum\limits_{|n_1-(n^2+n)/2|\geq 1} \frac{1}{\langle n_1 \rangle^{2k} \langle n-n_1\rangle^{2k} \langle n_1-(n^2+n)/2 \rangle^{1-}} \\ &\lesssim C + \sum\limits_{|n_1-(n^2+n)/2|\geq 1} \frac{1}{\langle n_1 \rangle^{\frac{1}{2}-} \langle n_1-(n^2+n)/2 \rangle^{1-}} \\ &\lesssim 1. 
\end{split} \end{equation} In the region $\mathcal{Q}_0$, using that $L_0\gtrsim |n|^3$ and $k-s\geq -1/2$, we get \begin{equation}\label{e.c.v0-3} \begin{split} &\sup\limits_{n,\tau} \frac{|n|^2 \langle n \rangle^{2s}}{\langle \tau - n^3 \rangle} \sum\limits_{n_1} \frac{1}{\langle n_1 \rangle^{2k} \langle n-n_1\rangle^{2k}} \int d\tau_1 \frac{\chi_{\mathcal{Q}_0}} {\langle -\tau_1 + n_1^2\rangle^{1-}\langle (\tau-\tau_1) + (n-n_1)^2\rangle^{1-}} \\ &= \sup\limits_{n,\tau} \frac{|n|^2 \langle n \rangle^{2s}}{\langle \tau - n^3 \rangle} \sum\limits_{|n_1|\gtrsim |n|} \frac{1}{\langle n_1 \rangle^{2k} \langle n-n_1\rangle^{2k}} \int d\tau_1 \frac{\chi_{\mathcal{Q}_0}} {\langle -\tau_1 + n_1^2\rangle^{1-}\langle (\tau-\tau_1) + (n-n_1)^2\rangle^{1-}} + \\ &\sup\limits_{n,\tau} \frac{|n|^2 \langle n \rangle^{2s}}{\langle \tau - n^3 \rangle} \sum\limits_{|n_1|\ll |n|} \frac{1}{\langle n_1 \rangle^{2k} \langle n-n_1\rangle^{2k}} \int d\tau_1 \frac{\chi_{\mathcal{Q}_0}} {\langle -\tau_1 + n_1^2\rangle^{1-}\langle (\tau-\tau_1) + (n-n_1)^2\rangle^{1-}} \\ &\lesssim \sup\limits_{n,\tau} \frac{|n|^2 \langle n \rangle^{2s}}{\langle n \rangle^3 \langle n \rangle^{2k}} \sum\limits_{|n_1|\gtrsim |n|} \frac{1}{\langle n-n_1\rangle^{2k}} \int d\tau_1 \frac{\chi_{\mathcal{Q}_0}} {\langle -\tau_1 + n_1^2\rangle^{1-} \langle (\tau-\tau_1) + (n-n_1)^2\rangle^{1-}} + \\ &\sup\limits_{n,\tau} \frac{|n|^2 \langle n \rangle^{2s}}{\langle n \rangle^3 \langle n \rangle^{2k}} \sum\limits_{|n_1|\ll |n|} \frac{1}{\langle n_1 \rangle^{2k}} \int d\tau_1 \frac{\chi_{\mathcal{Q}_0}} {\langle -\tau_1 + n_1^2\rangle^{1-}\langle (\tau-\tau_1) + (n-n_1)^2\rangle^{1-}} \\ &\lesssim 1, \end{split} \end{equation} if $k>0$. \end{proof} \begin{proof}[Proof of the claim~\ref{c.v1}] In the region $\mathcal{Q}_1$, using that $L_1=L_{max}\gtrsim |n|^3$ (by the dispersion relation~(\ref{e.du2-dispersion}) and $|n_1|\ll |n|^2$), $\langle n \rangle \leq \langle n_1 \rangle \langle n-n_1 \rangle$ and $k-s\geq -1/2$, it is not difficult to see that \begin{equation} \begin{split} &\sup\limits_{n_1,\tau_1} \frac{1}{\langle n_1 \rangle^{2k} \langle -\tau_1+n_1^2 \rangle} \sum\limits_n \frac{|n|^2 \langle n \rangle^{2s}}{\langle n_1-n \rangle^{2k}} \int d\tau \frac{\chi_{\mathcal{Q}_1}}{\langle \tau-n^3 \rangle \langle (\tau-\tau_1) + (n-n_1)^2 \rangle^{1-}} \\ &\lesssim \sup\limits_{n_1,\tau_1} \sum\limits_{n\in\mathbb Z} \frac{1}{\langle -\tau_1 + (n-n_1)^2 +n^3 \rangle^{1-}} \\ &\lesssim 1. \end{split} \end{equation} \end{proof} \begin{proof}[Proof of the claim~\ref{c.v2}] In the region $\mathcal{Q}_2$, using that $L_2=L_{max}\gtrsim |n|^3$ (by the dispersion relation~(\ref{e.du2-dispersion}) and $|n_1|\ll |n|^2$), $\langle n \rangle \leq \langle n_2 \rangle \langle n+n_2 \rangle$ and $k-s\geq -1/2$, it follows that \begin{equation} \begin{split} &\sup\limits_{n_2,\tau_2} \frac{1}{\langle n_2 \rangle^{2k} \langle -\tau_2 + n_2^2 \rangle} \sum\limits_{n\in\mathbb Z} \frac{|n|^2\langle n \rangle^{2s}}{\langle n+n_2 \rangle^{2k}} \int d\tau \frac{\chi_{\widetilde{\mathcal{Q}}_2}}{\langle \tau - n^3 \rangle \langle -(\tau+\tau_2)+(n+n_2)^2 \rangle^{1-}} \\ &\lesssim \sup\limits_{n_2,\tau_2} \sum\limits_{n\in\mathbb Z} \frac{1}{\langle \tau_2 - (n+n_2)^2 +n^3 \rangle^{\theta}} \\ &\lesssim 1. 
\end{split} \end{equation} \end{proof} Once~(\ref{e.sharp-du2-1}) is proved, we start the proof of the estimate~(\ref{e.sharp-du2-2}), that is, \begin{equation*} \left\|\frac{\langle n \rangle^s}{\langle \tau-n^3 \rangle} \widehat{\partial_x(u_1 \overline{u_2})} \right\|_{L_n^2 L_{\tau}^1}\lesssim \|u_1\|_{X^{k,\frac{1}{2}-}}\|u_2\|_{X^{k,\frac{1}{2}}} + \|u_1\|_{X^{k,\frac{1}{2}}}\|u_2\|_{X^{k,\frac{1}{2}-}}. \end{equation*} We can rewrite the left-hand side as \begin{equation*} \left\|\int\limits_{n=n_1+n_2} |n|\langle n\rangle^s \int\limits_{\tau=\tau_1+\tau_2} \frac{1}{\langle\tau-n^3\rangle} \widehat{u_1}(n_1,\tau_1) \overline{\widehat{u_2}(-n_2,-\tau_2)} \right\|_{L_n^2 L_{\tau}^1} \end{equation*} To begin with, we split the domain of integration into three regions $\mathcal{S}$, $\mathcal{T}$ and $\mathcal{U}$. Let $\mathcal{S} =\mathcal{S}_1\cup\mathcal{S}_2$, where $$\mathcal{S}_1:=\{(n,\tau,n_2,\tau_2): |n|\leq 100\},$$ $$\mathcal{S}_2:=\{(n,\tau,n_2,\tau_2): |n|> 100 \textrm{ and } |n_2|\gtrsim |n|^2\},$$ $\mathcal{T}:=\{(n,\tau,n_2,\tau_2): |n|> 100, |n_2|\ll |n|^2 \textrm{ and either } |\tau_1+n_1^2| = L_{\max} \textrm{ or } |-\tau_2+n_2^2| = L_{\max}\}$ and $\mathcal{U}:=\{(n,\tau,n_2,\tau_2): |n|> 100, |n_2|\ll |n|^2 \textrm{ and } |\tau-n^3| = L_{\max}\}$. Clearly, $\mathcal{S}$, $\mathcal{T}$ and $\mathcal{U}$ completely decompose our domain of integration, so that, in order to prove~(\ref{e.sharp-du2-2}), it suffices to get the bounds \begin{equation}\label{e.sharp-du2-2-s} \begin{split} &\left\|\int\limits_{n=n_1+n_2} \frac{|n|\langle n\rangle^s}{\langle n_1\rangle^k \langle n_2\rangle^{k}} \int\limits_{\tau=\tau_1+\tau_2} \frac{\chi_{\mathcal{S}}}{ \langle\tau-n^3\rangle\langle\tau_1+n_1^2\rangle^{\frac{1}{2}-} \langle-\tau_2+n_2^2\rangle^{\frac{1}{2}-}}\widehat{u_1}(n_1,\tau_1) \overline{\widehat{u_2}(-n_2,-\tau_2)}\right\|_{L_n^2 L_{\tau}^1} \\ &\lesssim \|u_1\|_{X^{0,0}} \|u_2\|_{X^{0,0}} \end{split} \end{equation} \begin{equation}\label{e.sharp-du2-2-T} \begin{split} &\left\|\int\limits_{n=n_1+n_2} \frac{|n|\langle n\rangle^s}{\langle n_1\rangle^k \langle n_2\rangle^{k}} \int\limits_{\tau=\tau_1+\tau_2} \frac{\chi_{\mathcal{T}}}{ \langle\tau-n^3\rangle}\widehat{u_1}(n_1,\tau_1) \overline{\widehat{u_2}(-n_2,-\tau_2)} \right\|_{L_n^2 L_{\tau}^1} \\ &\lesssim \|u_1\|_{X^{0,\frac{1}{2}-}} \|u_2\|_{X^{0,\frac{1}{2}}} + \|u_1\|_{X^{0,\frac{1}{2}}} \|u_2\|_{X^{0,\frac{1}{2}-}} \end{split} \end{equation} \begin{equation}\label{e.sharp-du2-2-U} \begin{split} &\left\|\int\limits_{n=n_1+n_2} \frac{|n|\langle n\rangle^s}{\langle n_1\rangle^k\langle n_2\rangle^{k}} \int\limits_{\tau=\tau_1+\tau_2} \frac{\chi_{\mathcal{U}}}{\langle\tau-n^3\rangle} \widehat{u_1}(n_1,\tau_1) \overline{\widehat{u_2}(-n_2,-\tau_2)} \right\|_{L_n^2 L_{\tau}^1} \\ &\lesssim \|u_1\|_{X^{0,\frac{1}{2}-}} \|u_2\|_{X^{0,\frac{1}{2}}} + \|u_1\|_{X^{0,\frac{1}{2}}} \|u_2\|_{X^{0,\frac{1}{2}-}} \end{split} \end{equation} To prove~(\ref{e.sharp-du2-2-s}), we note that $$\frac{|n|\langle n\rangle^s}{\langle n_1\rangle^k \langle n_2\rangle^{k}} \lesssim 1,$$ if either $|n|\leq 100$, or $|n|>100$ and $|n_2|\gtrsim |n|^2$, since $\langle n\rangle\leq \langle n_1\rangle\langle n_2\rangle$ and $1+s\leq 4k$. 
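For instance, in the second alternative one can argue as follows (a sketch; here we also use that $s\geq 0$, and hence $k\geq \frac{1+s}{4}>0$, which is the regime of the application below): since $|n_2|\gtrsim |n|^2\gg |n|$, we have $|n_1|=|n-n_2|\sim |n_2|$, so that \begin{equation*} \frac{|n|\langle n\rangle^{s}}{\langle n_1\rangle^{k}\langle n_2\rangle^{k}} \lesssim \frac{\langle n\rangle^{1+s}}{\langle n_2\rangle^{2k}} \lesssim \frac{\langle n_2\rangle^{\frac{1+s}{2}}}{\langle n_2\rangle^{2k}} \lesssim 1, \end{equation*} by the hypothesis $1+s\leq 4k$; the first alternative $|n|\leq 100$ is immediate.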
Hence, \begin{equation*} \begin{split} &\left\|\int\limits_{n=n_1+n_2} \frac{|n|\langle n\rangle^s}{\langle n_1\rangle^k \langle n_2\rangle^{k}} \int\limits_{\tau=\tau_1+\tau_2} \frac{\chi_{\mathcal{S}}}{\langle\tau-n^3\rangle\langle\tau_1+n_1^2\rangle^{\frac{1}{2}-} \langle-\tau_2+n_2^2\rangle^{\frac{1}{2}-}}\widehat{u_1}(n_1,\tau_1) \overline{\widehat{u_2}(-n_2,-\tau_2)}\right\|_{L_n^2 L_{\tau}^1} \\ &\lesssim \left\|\int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \frac{1}{\langle\tau-n^3\rangle \langle\tau_1+n_1^2\rangle^{\frac{1}{2}-} \langle-\tau_2+n_2^2\rangle^{\frac{1}{2}-}}\widehat{u_1}(n_1,\tau_1) \overline{\widehat{u_2}(-n_2,-\tau_2)}\right\|_{L_n^2 L_{\tau}^1}. \end{split} \end{equation*} Therefore, this reduces our goal to prove that \begin{equation*} \left\|\int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \frac{1}{\langle\tau-n^3\rangle \langle\tau_1+n_1^2\rangle^{\frac{1}{2}-} \langle-\tau_2+n_2^2\rangle^{\frac{1}{2}-}} \widehat{u_1}(n_1,\tau_1) \overline{\widehat{u_2}(-n_2,-\tau_2)} \right\|_{L_n^2 L_{\tau}^1} \lesssim \|u_1\|_{X^{0,0}} \|u_2\|_{X^{0,0}}. \end{equation*} This can be re-written as \begin{equation*} \left\|\frac{1}{\langle\tau-n^3\rangle^{2/3}\langle\tau-n^3\rangle^{1/3}} \int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u_1}(n_1,\tau_1) \overline{\widehat{u_2}(-n_2,-\tau_2)} \right\|_{L_n^2 L_{\tau}^1} \lesssim \|u_1\|_{X^{0,\frac{1}{2}-}} \|u_2\|_{X^{0,\frac{1}{2}-}}. \end{equation*} Since $2(-2/3)<-1$, the Cauchy-Schwarz inequality in $\tau$ reduces this bound to showing \begin{equation*} \left\|\frac{1}{\langle\tau-n^3\rangle^{1/3}}\int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u_1}(n_1,\tau_1) \overline{\widehat{u_2}(-n_2,-\tau_2)} \right\|_{L_n^2 L_{\tau}^2} \lesssim \|u_1\|_{X^{0,\frac{1}{2}-}} \|u_2\|_{X^{0,\frac{1}{2}-}}, \end{equation*} which is an easy consequence of duality, $L^4_{xt} L^2_{xt} L^4_{xt}$ H\"older and the Bourgain-Strichartz inequalities $X^{0,3/8}, Y^{0,1/3}\subset L^4$ in the lemma~\ref{l.Bourgain}. The second bound~(\ref{e.sharp-du2-2-T}) can be proved in an analogous fashion, using the dispersion relation \begin{equation}\label{e.du2-Dispersion} (\tau-n^3) - (\tau_1+n_1^2) + (-\tau_2+n_2^2) = -n^3 - n^2 + 2n n_2, \end{equation} which implies that, in the region $\mathcal{T}$, either $|\tau_1+n_1^2|\gtrsim |n|^3$ or $|-\tau_2+n_2^2|\gtrsim |n|^3$. Thus, using that $s-k\leq 1/2$ and making the corresponding cancelation, we see that it suffices to prove that \begin{equation*} \left\|\frac{1}{\langle\tau-n^3\rangle^{2/3}\langle\tau-n^3\rangle^{1/3}} \int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u_1}(n_1,\tau_1) \overline{\widehat{u_2}(-n_2,-\tau_2)} \right\|_{L_n^2 L_{\tau}^1} \lesssim \|u_1\|_{X^{0,0}} \|u_2\|_{X^{0,\frac{1}{2}-}} \end{equation*} and \begin{equation*} \left\|\frac{1}{\langle\tau-n^3\rangle^{2/3}\langle\tau-n^3\rangle^{1/3}} \int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u_1}(n_1,\tau_1) \overline{\widehat{u_2}(-n_2,-\tau_2)} \right\|_{L_n^2 L_{\tau}^1} \lesssim \|u_1\|_{X^{0,\frac{1}{2}-}} \|u_2\|_{X^{0,0}}. 
\end{equation*} Again, we use Cauchy-Schwarz to reduce these estimates to \begin{equation*} \left\|\frac{1}{\langle\tau-n^3\rangle^{1/3}}\int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u_1}(n_1,\tau_1) \overline{\widehat{u_2}(-n_2,-\tau_2)} \right\|_{L_n^2 L_{\tau}^2} \lesssim \|u_1\|_{X^{0,0}} \|u_2\|_{X^{0,\frac{1}{2}-}} \end{equation*} and \begin{equation*} \left\|\frac{1}{\langle\tau-n^3\rangle^{1/3}}\int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u_1}(n_1,\tau_1) \overline{\widehat{u_2}(-n_2,-\tau_2)} \right\|_{L_n^2 L_{\tau}^2} \lesssim \|u_1\|_{X^{0,\frac{1}{2}-}} \|u_2\|_{X^{0,0}}, \end{equation*} which follows from duality, H\"older and Bourgain-Strichartz, as above. Finally, the third bound~(\ref{e.sharp-du2-2-U}) requires a subdivision into two cases. When $|\tau_1+n_1^2|\gtrsim |n|^{1-}$ (resp., $|-\tau_2+n_2^2|\gtrsim |n|^{1-}$), we use $\langle\tau_1+n_1^2\rangle^{1/8}$ leaving $\langle\tau_1+n_1^2\rangle^{3/8}$ in the denominator and $|n|^{1+s-k-}$ in the numerator (resp., the same argument with $(-\tau_2+n_2^2)$ instead of $(\tau_1+n_1^2)$). After another cancelation using $|\tau-n^3|\gtrsim |n|^3$, we need to prove \begin{equation*} \left\|\frac{1}{\langle\tau-n^3\rangle^{1/2+}}\int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u_1}(n_1,\tau_1) \overline{\widehat{u_2}(-n_2,-\tau_2)} \right\|_{L_n^2 L_{\tau}^1} \lesssim \|u_1\|_{X^{0,3/8}} \|u_2\|_{X^{0,\frac{1}{2}-}}, \end{equation*} and \begin{equation*} \left\|\frac{1}{\langle\tau-n^3\rangle^{1/2+}}\int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u_1}(n_1,\tau_1) \overline{\widehat{u_2}(-n_2,-\tau_2)} \right\|_{L_n^2 L_{\tau}^1} \lesssim \|u_1\|_{X^{0,\frac{1}{2}-}} \|u_2\|_{X^{0,3/8}}. \end{equation*} These bounds follow again from Cauchy-Schwarz in $\tau$, duality, H\"older and Bourgain-Strichartz. It remains only to treat the case $|\tau_1+n_1^2|, |-\tau_2+n_2^2|\ll |n|^{1-}$. In this case, the dispersion relation says that, in the region $\mathcal{U}$, $$\tau-n^3 = -n^3-n^2+2n n_2 + O(|n|^{1-}).$$ On the other hand, the cancelation using $|\tau-n^3|\gtrsim |n|^3$ and $s-k\leq 1/2$ reduces the proof to the bound \begin{equation*} \left\|\frac{1}{\langle\tau-n^3\rangle^{1/2}}\int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u_1}(n_1,\tau_1) \overline{\widehat{u_2}(-n_2,-\tau_2)} \chi_{\widetilde{\Omega}(n)}(\tau-n^3) \right\|_{L_n^2 L_{\tau}^1} \lesssim \|u_1\|_{X^{0,\frac{1}{2}-}} \|u_2\|_{X^{0,\frac{1}{2}-}}, \end{equation*} where $\widetilde{\Omega}(n) = \{\eta\in\mathbb R: \eta = -n^3-n^2+2n r + O(|n|^{1-}), \text{ for some } r\in\mathbb Z, |r|\ll |n|^2\}$ if $|n| > 100$ and $\widetilde{\Omega}(n) = \emptyset$ otherwise. 
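Before applying Cauchy-Schwarz, we record an elementary observation which will be used in the dyadic counting lemma below (a one-line computation, included only for clarity): for fixed $n$, the numbers $-n^3-n^2+2nr$, $r\in\mathbb Z$, form an arithmetic progression, since \begin{equation*} \bigl(-n^3-n^2+2n(r+1)\bigr)-\bigl(-n^3-n^2+2nr\bigr)=2n, \end{equation*} so that two consecutive members of this family are exactly $2|n|\sim N$ apart.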
Applying Cauchy-Schwarz in $\tau$, we can estimate the left-hand side by $$\left\|\left(\int \langle\tau-n^3\rangle^{-1} \chi_{\widetilde{\Omega}(n)}(\tau-n^3)\, d\tau \right)^{1/2} \left\|\int\limits_{n=n_1+n_2} \int\limits_{\tau=\tau_1+\tau_2} \widehat{u_1}(n_1,\tau_1) \overline{\widehat{u_2}(-n_2,-\tau_2)}\right\|_{L_{\tau}^2}\right\|_{L_n^2}$$ Therefore, the point is to show \begin{equation}\label{e.du2-dyadic} \sup_{n}\left(\int \langle\tau-n^3\rangle^{-1} \chi_{\widetilde{\Omega}(n)}(\tau-n^3) d\tau\right)\lesssim 1 \end{equation} We need the following lemma: \begin{lemma}\label{l.du2-dyadic} There exists some $\delta>0$ such that, for any fixed $n\in\mathbb Z$, $|n|\gg 1$ and for all $M\geq 1$ dyadic, we have \begin{equation*} |\{\mu\in\mathbb R: |\mu|\sim M, \mu = -n^3-n^2+2n r + O(|n|^{1-}), \text{ for some } r\in\mathbb Z, |r|\ll |n|^2\}|\lesssim M^{1-\delta}. \end{equation*} \end{lemma} \begin{proof}Note that the dyadic block $\{|\mu|\sim M\}$ contains at most $O(M/N)+1$ integers of the form $-n^3-n^2+2n r$ with $r\in\mathbb Z$, where $N\sim |n|$. Indeed, this follows from the fact that the distance between two consecutive numbers of this form is $\sim N$. Thus, the set of $\mu$ verifying $\mu = -n^3-n^2+2n r + O(|n|^{1-})$ is the union of $O(M/N)+1$ intervals of size $O(N^{1-})$. Since the relation $\mu = -n^3-n^2+2n r + O(|n|^{1-})$ with $|\mu|\sim M$ and $|r|\ll |n|^2\sim N^2\gg 1$ implies that $M\sim N^3$, we get \begin{equation*} \begin{split} &\left|\{\mu\in\mathbb R: |\mu|\sim M, \mu = -n^3-n^2+2n r + O(|n|^{1-}), \text{ for some } r\in\mathbb Z, |r|\ll |n|^2\}\right|\lesssim \\ & N^{1-}\cdot\frac{M}{N} \lesssim M^{1-}. \end{split} \end{equation*} This completes the proof of the lemma~\ref{l.du2-dyadic}. \end{proof} It is now easy to conclude the proof of~(\ref{e.du2-dyadic}): by changing variables, we have to estimate $$\sup_n \int \langle \mu \rangle^{-1} \chi_{\widetilde{\Omega}(n)}(\mu) d\mu.$$ By decomposing the domain of integration into dyadic blocks $\{|\mu|\sim M\}$, the lemma~\ref{l.du2-dyadic} gives \begin{equation*} \begin{split} \int \langle \mu \rangle^{-1} \chi_{\widetilde{\Omega}(n)}(\mu) d\mu \leq 1+ \sum_{M\geq 1}\int\limits_{|\mu|\sim M} \langle \mu \rangle^{-1} \chi_{\widetilde{\Omega}(n)}(\mu) d\mu \\ \lesssim 1+\sum\limits_{M\geq 1; \, M \textrm{ dyadic }} M^{-1} M^{1-\delta} \lesssim 1. \end{split} \end{equation*} This proves the estimate~(\ref{e.sharp-du2-2}), thus completing the proof of the lemma~\ref{l.sharp-du2}. \end{proof} \section{Local well-posedness for rough initial data}\label{s.thmA} This section contains the proof of the theorem~\ref{t.A} concerning the local well-posedness of the NLS-KdV system. First of all, we observe that the NLS-KdV~(\ref{e.nls-kdv}) is equivalent to the system of integral equations \begin{equation*} u(t) = U(t)u_0 - i\int_0^t U(t-t')\{\alpha u(t')v(t') +\beta |u|^2 u(t') \} dt', \end{equation*} \begin{equation*} v(t) = V(t)v_0 + \int_0^t V(t-t')\{\gamma\partial_x(|u|^2)(t') - \frac{1}{2}\partial_x(v^2)(t')\} dt'. 
\end{equation*} Since we are seeking local-in-time solutions of~(\ref{e.nls-kdv}), it suffices to find a fixed point $(u,v)$ of the map $\Phi = (\Phi_1,\Phi_2):\widetilde{X}^k([0,T])\times\widetilde{Y}^s([0,T])\to \widetilde{X}^k([0,T])\times\widetilde{Y}^s([0,T])$, \begin{equation*} \Phi_1(u,v)=\psi_1(t)U(t)u_0 -i\psi_T(t)\int_0^t U(t-t')\{ \alpha u(t')v(t') + \beta |u|^2 u(t')\} dt', \end{equation*} \begin{equation*} \Phi_2(u,v)=\psi_1(t)V(t)v_0 + \psi_T(t)\int_0^t V(t-t')\{ \gamma\partial_x(|u|^2)(t') - \frac{1}{2}\partial_x(v^2)(t')\} dt'. \end{equation*} From now on, our goal is to show that $\Phi$ is a contraction of (a large ball of) the space $\widetilde{X}^k([0,T])\times\widetilde{Y}^s([0,T])$ for sufficiently small $T>0$. To accomplish this goal, we need the following well-known linear and multilinear estimates related to the cubic NLS and the KdV equations: \begin{lemma}[Linear estimates]\label{l.linear}It holds \begin{itemize} \item $\|\psi_1(t) U(t)u_0\|_{X^k}\lesssim \|u_0\|_{H^k}$ and $\left\|\psi_T(t)\int_0^t U(t-t') F(t') dt'\right\|_{X^k} \lesssim \|F\|_{Z^k}$; \item $\|\psi_1(t) V(t)v_0\|_{Y^s}\lesssim \|v_0\|_{H^s}$ and $\left\|\psi_T(t)\int_0^t V(t-t') G(t') dt'\right\|_{Y^s}\lesssim \|G\|_{W^s}$. \end{itemize} \end{lemma} \begin{lemma}[Trilinear estimate for the cubic term $|u|^2u$]\label{l.nls} For $k\geq 0$, we have $$ \|\psi(t) u v \overline{w}\|_{Z^k}\lesssim \|u\|_{X^{k,\frac{3}{8}}} \|v\|_{X^{k,\frac{3}{8}}} \|w\|_{X^{k,\frac{3}{8}}}. $$ \end{lemma} \begin{lemma}[Bilinear estimate for $\partial_x(v^2)$]\label{l.kdv} For $s\geq -1/2$, we have $$ \|\psi(t)\partial_x(v_1 v_2)\|_{W^s}\lesssim \|v_1\|_{Y^{s,\frac{1}{2}}} \|v_2\|_{Y^{s,\frac{1}{2}-}} + \|v_1\|_{Y^{s,\frac{1}{2}-}} \|v_2\|_{Y^{s,\frac{1}{2}}}, $$ if $v_1=v_1(x,t)$ and $v_2=v_2(x,t)$ are $x$-periodic functions having zero $x$-mean for all $t$ (i.e., $\int_{\mathbb T} v_j(x,t) dx =0$ for all $t$ and $j=1,2$). \end{lemma} \begin{remark}The zero-mean assumption in the lemma~\ref{l.kdv} above is crucial for some of the analysis of the multiplier associated to this bilinear estimate. However, in the proof of our local well-posedness result, this hypothesis is not restrictive by a standard argument based on the conservation of the mean of $v$ under the flow~(\ref{e.nls-kdv}). See the remark~\ref{r.zero-mean} below. \end{remark} We present the proofs of these lemmas in the appendix of this paper because some of these estimates are not stated as above in the literature, although they are contained in the works~\cite{Bourgain} and~\cite{CKSTT} for instance. See the section~\ref{s.appendix} below for more details. Returning to the proof of the theorem~\ref{t.A}, in order to apply the lemma~\ref{l.kdv}, we make the following observation: \begin{remark}\label{r.zero-mean}The spatial mean $\int_{\mathbb T} v(t,x) dx$ is preserved during the evolution~(\ref{e.nls-kdv}). Thus, we can assume that the initial data $v_0$ has zero mean, since otherwise we make the change $w= v-\int_{\mathbb T}v_0 dx$ at the expense of two harmless linear terms (namely, $u\int_{\mathbb T}v_0\, dx$ and $\partial_x v \int_{\mathbb T}v_0\, dx$). \end{remark} After this reduction, we are ready to finish the proof of the theorem~\ref{t.A}. 
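Before writing the estimates in detail, let us summarize the scheme heuristically (this is only an informal guide to the computations below, with $R$ denoting the radius of the ball where the fixed point will be found, and with harmless numerical constants suppressed): it suffices to arrange that \begin{equation*} C_0\bigl(\|u_0\|_{H^k}+\|v_0\|_{H^s}\bigr)\leq \frac{R}{2} \qquad\text{and}\qquad C_1\, T^{0+}\bigl(R+R^2\bigr)\leq \frac{1}{2}, \end{equation*} so that $\Phi$ maps the ball of radius $R$ into itself and is a contraction there.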
Accordingly with the linear estimates (lemma~\ref{l.linear}), trilinear estimate for the cubic term $|u|^2 u$ (lemma~\ref{l.nls}), bilinear estimate for $\partial_x(v^2)$ (lemma~\ref{l.kdv}) and the bilinear estimates for the coupling terms (propositions~\ref{p.uv} and~\ref{p.du2}), we obtain \begin{equation*} \begin{split} \|\Phi_1(u,v)\|_{\widetilde{X}^k([0,T])}&\leq C_0\|u_0\|_{H^k} + C_1\{\|uv\|_{Z^k}+\|u\|_{X^{k,\frac{3}{8}}([0,T])}^3\} \\ &\leq C_0\|u_0\|_{H^k} + C_1 \|u\|_{X^{k,\frac{1}{2}-}([0,T])}\|v\|_{Y^{s,\frac{1}{2}}([0,T])} + \\ &+ C_1\|u\|_{X^{k,\frac{1}{2}}([0,T])}\|v\|_{Y^{s,\frac{1}{2}-}([0,T])} + C_1 \|u\|_{X^{k,\frac{3}{8}}([0,T])}^3 \end{split} \end{equation*} and \begin{equation*} \begin{split} \|\Phi_2(u,v)\|_{\widetilde{Y}^s([0,T])}&\leq C_0\|v_0\|_{H^s} + C_1\{\|\partial_x(v^2)\|_{W^s}+\|\partial_x(|u|^2)\|_{W^s}\} \\ &\leq C_0\|v_0\|_{H^k} + C_1\{ \|v\|_{Y^{s,\frac{1}{2}}}\|v\|_{Y^{s,\frac{1}{2}-}([0,T])} + \|u\|_{X^{k,\frac{1}{2}}}\|u\|_{X^{k,\frac{1}{2}-}([0,T])} \}, \end{split} \end{equation*} if $s\geq 0$, $-1/2\leq k-s\leq 3/2$ and $1+s\leq 4k$. At this point we invoke the following elementary lemma concerning the stability of Bourgain's spaces with respect to time localization: \begin{lemma}Let $X_{\tau=h(\xi)}^{s,b}:=\{f: \langle\tau-h(\xi)\rangle^b\langle\xi\rangle^s |\widehat{f}(\tau,\xi)|\in L^2\}$. Then, $$\|\psi(t)f\|_{X_{\tau=h(\xi)}^{s,b}}\lesssim_{\psi,b} \|f\|_{X_{\tau=h(\xi)}^{s,b}}$$ for any $s,b\in\mathbb R$ and, furthermore, if $-1/2<b'\leq b <1/2$, then for any $0<T<1$ we have $$\|\psi_T(t)f\|_{X_{\tau=h(\xi)}^{s,b'}}\lesssim_{\psi,b',b} T^{b-b'} \|f\|_{X_{\tau=h(\xi)}^{s,b}},$$ \end{lemma} \begin{proof}First of all, note that $\langle\tau-\tau_0-h(\xi)\rangle^{b}\lesssim_b \langle\tau_0\rangle^{|b|}\langle\tau-h(\xi)\rangle^{b}$, from which we obtain $$\|e^{it\tau_0}f\|_{X_{\tau=h(\xi)}^{s,b}}\lesssim_b \langle\tau_0\rangle^{|b|} \|f\|_{X_{\tau=h(\xi)}^{s,b}}.$$ Using that $\psi(t)=\int\widehat{\psi}(\tau_0) e^{it\tau_0}d\tau_0$, we conclude $$\|\psi(t)f\|_{X_{\tau=h(\xi)}^{s,b}}\lesssim_b \left(\int|\widehat{\psi}(\tau_0)| \langle\tau_0\rangle^{|b|}\right) \|f\|_{X_{\tau=h(\xi)}^{s,b}}.$$ Since $\psi$ is smooth with compact support, the first estimate follows. Next we prove the second estimate. By conjugation we may assume $s=0$ and, by composition it suffices to treat the cases $0\leq b'\leq b$ or $\leq b'\leq b\leq 0$. By duality, we may take $0\leq b'\leq b$. Finally, by interpolation with the trivial case $b'=b$, we may consider $b'=0$. This reduces matters to show that $$\|\psi_T(t)f\|_{L^2}\lesssim_{\psi,b} T^b\|f\|_{X_{\tau=h(\xi)}^{0,b}}$$ for $0<b<1/2$. Partitioning the frequency spaces into the cases $\langle\tau-h(\xi)\rangle\geq 1/T$ and $\langle\tau-h(\xi)\leq 1/T$, we see that in the former case we'll have $$\|f\|_{X_{\tau=h(\xi)}^{0,0}}\leq T^b\|f\|_{X_{\tau=h(\xi)}^{0,b}}$$ and the desired estimate follows because the multiplication by $\psi$ is a bounded operation in Bourgain's spaces. In the latter case, by Plancherel and Cauchy-Schwarz \begin{equation*} \begin{split} \|f(t)\|_{L_x^2}&\lesssim \|\widehat{f(t)}(\xi)\|_{L_{\xi}^2} \lesssim \left\|\int_{\langle\tau-h(\xi)\rangle\leq 1/T}|\widehat{f}(\tau,\xi)|d\tau) \right\|_{L_{\xi}^2} \\ &\lesssim_b T^{b-1/2} \left\|\int\langle\tau-h(\xi)\rangle^{2b} |\widehat{f}(\tau,\xi)|^2 d\tau)^{1/2}\right\|_{L_{\xi}^2} = T^{b-1/2}\|f\|_{X_{\tau=h(\xi)}^{s,b}}. \end{split} \end{equation*} Integrating this against $\psi_T$ concludes the proof of the lemma. 
\end{proof} Now, a direct application of this lemma yields \begin{equation*} \|\Phi_1(u,v)\|_{\widetilde{X}^k([0,T])}\leq C_0\|u_0\|_{H^k} + C_1 T^{0+}\{\|u\|_{X^{k,\frac{1}{2}}([0,T])}\|v\|_{Y^{s,\frac{1}{2}}([0,T])} + \|u\|_{X^{k,\frac{1}{2}}([0,T])}^3\}. \end{equation*} and \begin{equation*} \begin{split} \|\Phi_2(u,v)\|_{\widetilde{Y}^s([0,T])}&\leq C_0\|v_0\|_{H^s} + C_1T^{0+}\{\|v\|_{Y^{s,\frac{1}{2}}([0,T])}^2 + \|u\|_{X^{k,\frac{1}{2}}([0,T])}^2 \}, \end{split} \end{equation*} if $s\geq 0$, $-1/2\leq k-s\leq 3/2$ and $1+s\leq 4k$. Hence, if $T>0$ is sufficiently small (depending on $\|u_0\|_{H^k}$ and $\|v_0\|_{H^s}$), we see that for every sufficiently large $R>0$, $\Phi$ sends the ball of radius $R$ of the space $\widetilde{X}^k([0,T])\times\widetilde{Y}^s([0,T])$ into itself. Similarly, we have that \begin{equation*} \|\Phi_1(u,v)-\Phi_1(\widetilde{u},\widetilde{v})\|_{\widetilde{X}^k} \lesssim T^{0+}\{\|u\|_{X^{k,\frac{1}{2}}}+\|u\|_{X^{k,\frac{1}{2}}}^2+ \|v\|_{Y^{s,\frac{1}{2}}}\}\{\|u-\widetilde{u}\|_{X^{k,\frac{1}{2}}}+ \|v-\widetilde{v}\|_{Y^{s,\frac{1}{2}}}\} \end{equation*} and \begin{equation*} \|\Phi_2(u,v)-\Phi_2(\widetilde{u},\widetilde{v})\|_{\widetilde{Y}^s([0,T])} \lesssim T^{0+}\{\|u\|_{X^{k,1/2}}+\|v\|_{Y^{s,1/2}}\} \{\|u-\widetilde{u}\|_{X^{k,1/2}}+\|v-\widetilde{v}\|_{Y^{s,1/2}}\}, \end{equation*} if $s\geq 0$, $-1/2\leq k-s\leq 3/2$ and $1+s\leq 4k$. So, up to taking $T>0$ smaller, we get that $\Phi$ is a contraction. This concludes the proof of the theorem~\ref{t.A}. \section{Global well-posedness in the energy space $H^1\times H^1$} \label{s.thmB} This section is devoted to the proof of the theorem~\ref{t.B}. First of all, we recall the following conserved functionals for the NLS-KdV system \begin{lemma}\label{l.global}The evolution~(\ref{e.nls-kdv}) preserves the quantities \begin{itemize} \item $M(t):=\int_{\mathbb T} |u(t)|^2 dx$, \item $Q(t):=\int_{\mathbb T}\left\{\alpha v(t)^2 + 2\gamma \Im(u(t)\overline{\partial_x u(t)}) dx\right\}$ and \item $E(t):=\int_{\mathbb T}\left\{\alpha\gamma v(t)|u(t)|^2 -\frac{\alpha}{6}v(t)^3 + \frac{\beta\gamma}{2}|u(t)|^4 + \frac{\alpha}{2}|\partial_x v(t)|^2 + \gamma |\partial_x u(t)|^2\right\} dx$. \end{itemize} In other words, $M(t)=M(0)$, $Q(t)=Q(0)$ and $E(t)=E(0)$. \end{lemma} In order to do not interrupt the proof of the global well-posedness result, we postpone the proof of this lemma to the appendix. Let $\alpha \gamma >0$ and $t >0$. From the previous lemma, we have that $\|u(t)\|_{L^2}=\|u_0\|_{L^2}$, and \begin{equation*} \|v(t)\|_{L^2}^2 \leq \frac{1}{|\alpha|}\left\{ |\mathcal{Q}_0| + 2|\gamma|\;\|u_0\|_{L^2}\|\partial_x u(t)\|_{L^2}\right\}. \end{equation*} Put $\mu=\min \left \{|\gamma|,\tfrac{|\alpha|}{2}\right \}$. Then, using again the previous lemma, Gagliardo-Nirenberg and Young inequalities, we deduce \begin{equation*} \begin{split} \|\partial_x u(t)\|_{L^2}^2&+ \|\partial_x v(t)\|_{L^2}^2 \leq \frac{1}{\mu}\left( |\gamma|\|\partial_x u(t)\|_{L^2}^2 + |\alpha|\|\partial_x v(t)\|_{L^2}^2\right )\\ &\leq C\left( |E(0)|+ \|v(t)\|_{L^2}\|u(t)\|_{L^4}^2 + \|v(t)\|_{L^3}^3+ \|u(t)\|_{L^4}^4 \right)\\ &\leq C\left(|E(0)|+ \|v(t)\|_{L^2}^2 + \|v(t)\|_{L^3}^3+ \|u(t)\|_{L^4}^4 \right)\\ &\leq C\left(|E(0)|+ |Q(0)| + \|u_0\|_{L^2}\|\partial_x u(t)\|_{L^2} + \|v(t)\|_{L^3}^3+ \|u(t)\|_{L^4}^4 \right)\\ &\leq C \left\{|E(0)|+ |Q(0)| + |Q(0)|^{\frac{5}{3}} + M(0)^5 + M(0)^3 + M(0) \right\} + \\ &+ \frac{1}{2}\left\{ \|\partial_x u(t)\|_{L^2}^2+ \|\partial_x v(t)\|_{L^2}^2\right\}. 
\end{split} \end{equation*} Hence \begin{equation} \begin{split}\label{e.global-1} \|\partial_x u(t)\|_{L^2}^2+\|\partial_x v(t)\|_{L^2}^2 &\leq C\left\{|E(0)|+ |Q(0)| + |Q(0)|^{\frac{5}{3}} + M(0)^5 + M(0)^3 + M(0) \right\}. \end{split} \end{equation} We can estimate the right-hand side of (\ref{e.global-1}) using the conservation laws in the lemma~\ref{l.global} and Sobolev's lemma to get \begin{equation}\label{e.global-2} \|u(t)\|_{H^1}^2+\|v(t)\|_{H^1}^2 \leq \Psi(\|u_0\|_{H^1}, \|v_0\|_{H^1}), \end{equation} where $\Psi$ is a function depending only on $\|u_0\|_{H^1}$ and $\|v_0\|_{H^1}$. We observe that the constants depend only on the parameters $\alpha, \beta$ and $\gamma$. Since the right-hand side of (\ref{e.global-2}) depends only on $\|u_0\|_{H^1}$ and $\|v_0\|_{H^1}$, we can iterate the local existence argument with the same existence time $T$, arriving at a solution for any positive time. This completes the proof of the theorem~\ref{t.B}. \section{Final Remarks}\label{s.remarks} We conclude this paper with some comments and questions related to our results in theorems~\ref{t.A} and~\ref{t.B}. Concerning the local well-posedness result in theorem~\ref{t.A}, the gap between our endpoint $H^{1/4}\times L^2$ and the ``natural'' $L^2\times H^{-1/2}$ endpoint\footnote{As we said before, from the sharp well-posedness theory for the NLS and the KdV equations, the well-posedness endpoint for the periodic NLS equation is $L^2$ and for the periodic KdV is $H^{-1/2}$.} suggests the ill-posedness question: \begin{question}Is the periodic NLS-KdV system~(\ref{e.nls-kdv}) ill-posed for initial data $(u_0,v_0)\in H^k\times H^s$ with $0\leq k<1/4$, $1+s\leq 4k$ and $-1/2\leq k-s\leq 3/2$? \end{question} On the other hand, one should be able to improve the global well-posedness result in theorem~\ref{t.B} using the \emph{I-method} of Colliander, Keel, Staffilani, Takaoka and Tao~\cite{CKSTT}. In the continuous case, the global well-posedness result in the energy space of Corcho and Linares~\cite{Corcho} was refined by Pecher~\cite{Pecher} via the I-method. This motivates the following question in the periodic context: \begin{question}Is the periodic NLS-KdV system~(\ref{e.nls-kdv}) globally well-posed for initial data $(u_0,v_0)\in H^{1-}\times H^{1-}$? \end{question} We plan to address this issue in a forthcoming paper by using our bilinear estimates for the coupling terms $uv$ and $\partial_x(|u|^2)$ and the I-method. \section*{Acknowledgements} The authors are thankful to IMPA and its staff for the fine research environment. A.A. and C.M. would like to acknowledge Viviane Baladi for the invitation to visit the Institute Henri Poincar\'e in May-June 2005, where a large part of the bilinear estimates for the coupling terms was done. Also, C.M. is indebted to Terence Tao for some discussions about the method of sharp bilinear estimates. A.A. and C.M. were partially supported by CNPq-Brazil and A.C. was partially supported by CNPq-Brazil and FAPEAL. \section{Appendix}\label{s.appendix} This appendix collects some well-known results concerning linear and multilinear estimates related to the periodic cubic NLS and the periodic KdV, and also includes a brief comment about three conserved functionals for the NLS-KdV discovered by M. Tsutsumi. \subsection{Linear estimates}\label{s.a.linear}We begin with the proof of the linear estimates in lemma~\ref{l.linear}. The basic strategy of the argument is contained in the work~\cite{CKSTT} of Colliander, Keel, Staffilani, Takaoka and Tao.
First, we observe that $\widehat{\psi U(u_0)}(n,\tau)= \widehat{u_0}(n)\widehat{\psi}(\tau+n^2)$ and $\widehat{\psi V(v_0)}(n,\tau)= \widehat{v_0}(n)\widehat{\psi}(\tau-n^3)$. Thus, it follows that \begin{equation}\label{e.a.linear-0} \|\psi(t)U(t)u_0\|_{Z^k}\lesssim\|u_0\|_{H^k} \quad \text{ and } \quad \|\psi(t)V(t)v_0\|_{W^s}\lesssim\|v_0\|_{H^s}. \end{equation} Hence, it remains only to show that \begin{equation*} \left\|\psi_T(t)\int_0^t U(t-t') F(t') dt'\right\|_{X^k}\lesssim\|F\|_{Z^k} \quad \text{ and } \quad \left\|\psi_T(t)\int_0^t V(t-t') G(t') dt'\right\|_{Y^s}\lesssim\|G\|_{W^s}. \end{equation*} Up to a smooth cutoff, we can assume that both $F$ and $G$ are supported on $\mathbb T\times [-3,3]$. Let $a(t)=\textrm{sgn}(t)\eta(t)$, where $\eta(t)$ is a smooth bump function supported on $[-10,10]$ which equals $1$ on $[-5,5]$. The identity \begin{equation*} \chi_{[0,t]}(t') = \frac{1}{2} (a(t')-a(t-t')), \end{equation*} for $t\in [-2,2]$ and $t'\in [-3,3]$ permits to rewrite $\psi_T(t)\int_0^t U(t-t') F(t') dt'$ (resp., $\psi_T(t)\int_0^t V(t-t') G(t') dt'$) as a linear combination of \begin{equation}\label{e.a.linear-1} \psi_T(t)U(t)\int_{\mathbb R} a(t')U(-t') F(t') dt' \quad \left(\text{resp., } \psi_T(t)V(t)\int_{\mathbb R} a(t')V(-t') G(t') dt'\right) \end{equation} and \begin{equation}\label{e.a.linear-2} \psi_T(t)\int_{\mathbb R} a(t-t')U(t-t') F(t') dt' \quad \left(\text{resp., } \psi_T(t)\int_{\mathbb R} a(t-t')V(t-t') G(t') dt'\right). \end{equation} For~(\ref{e.a.linear-1}), we note that by~(\ref{e.a.linear-0}), it suffices to prove that \begin{equation*} \|\int_{\mathbb R}a(t')U(-t')F(t') dt'\|_{H^k}\lesssim \|F\|_{Z^k} \quad \left(\text{resp., } \|\int_{\mathbb R}a(t')V(-t')G(t') dt'\|_{H^s}\lesssim \|G\|_{W^s}\right). \end{equation*} Since the Fourier transform of $\int_{\mathbb R}a(t')U(-t')F(t') dt'$ (resp., $\int_{\mathbb R}a(t')V(-t')G(t') dt'$) at $n$ is $\int\widehat{a}(\tau+n^2)\widehat{F}(n,\tau) d\tau$ (resp., $\int\widehat{a}(\tau-n^3)\widehat{G}(n,\tau) d\tau$) and $|\widehat{a}(\tau)|=O(\langle\tau\rangle^{-1})$, the desired estimate follows. For~(\ref{e.a.linear-2}), we discard the cutoff $\psi_T(t)$ and note that the Fourier transform of $\int_{\mathbb R} a(t-t')U(t-t') F(t') dt'$ (resp., $\int_{\mathbb R} a(t-t')V(t-t') G(t') dt'$) evaluated at $(n,\tau)$ is $\widehat{a}(\tau+n^2)\widehat{F}(n,\tau)$ (resp., $\widehat{a}(\tau-n^3)\widehat{G}(n,\tau)$). Therefore, the decay estimate $|\widehat{a}(\tau)| = O(\langle\tau\rangle^{-1})$ give us the claimed estimate. This proves the lemma~\ref{l.linear}. \subsection{Trilinear estimates for $(|u|^2 u)$}\label{s.a.trilinear}Next, we prove the trilinear estimate in lemma~\ref{l.nls}. 
The argument is essentially contained in the work~\cite{Bourgain} of Bourgain.\footnote{The ``novelty'' here is to estimate the contribution of the weighted $L_n^2 L_{\tau}^1$ portion of the $Z^k$ norm, although this is not hard, as we are going to see.} By definition of $Z^k$, the hypothesis $k\geq 0$ says that it suffices to show that \begin{equation*} \begin{split} &\sup_{\|\phi\|_{L_{n,\tau}^2}\leq 1} \sum\limits_{n=n_1+n_2-n_3}\int\limits_{\tau=\tau_1+\tau_2+\tau_3} \overline{\phi(n,\tau)}\frac{\langle n\rangle^k}{\langle \tau+n^2\rangle^{1/2}} \widehat{u}(n_1,\tau_1)\widehat{v}(n_2,\tau_2) \overline{\widehat{w}(n_3,\tau_3)} \lesssim \\ &\|u\|_{X^{k,\frac{3}{8}}} \|v\|_{X^{k,\frac{3}{8}}}\|w\|_{X^{k,\frac{3}{8}}} \end{split} \end{equation*} and \begin{equation*} \left\|\frac{\langle n\rangle^k}{\langle\tau+n^2\rangle} \widehat{uv\overline{w}}(n,\tau)\right\|_{L_n^2 L_{\tau}^1}\lesssim \|u\|_{X^{k,\frac{3}{8}}}\|v\|_{X^{k,\frac{3}{8}}}\|w\|_{X^{k,\frac{3}{8}}}. \end{equation*} Observe that $\langle n\rangle^k\lesssim \max\{\langle n_1\rangle^k, \langle n_2\rangle^k, \langle n_3\rangle^k\}$. By symmetry, we can assume that $\langle n\rangle^k\lesssim \langle n_1\rangle^k$. This reduces matters to show that \begin{equation}\label{e.a.trilinear-1} \begin{split} &\sup_{\|\phi\|_{L_{n,\tau}^2}\leq 1} \sum\limits_{n=n_1+n_2-n_3}\int\limits_{\tau=\tau_1+\tau_2+\tau_3} \frac{\overline{\phi(n,\tau)}}{\langle \tau+n^2\rangle^{1/2}} \langle n_1\rangle^k\widehat{u}(n_1,\tau_1)\widehat{v}(n_2,\tau_2) \overline{\widehat{w}(n_3,\tau_3)} \lesssim \\ &\|u\|_{X^{k,\frac{3}{8}}} \|v\|_{X^{0,\frac{3}{8}}}\|w\|_{X^{0,\frac{3}{8}}} \end{split} \end{equation} and \begin{equation}\label{e.a.trilinear-2} \begin{split} \left\|\sum\limits_{n=n_1+n_2-n_3}\int_{\tau=\tau_1+\tau_2+\tau_3} \frac{1}{\langle\tau+n^2\rangle}\widehat{u}(n_1,\tau_1)\widehat{v}(n_2,\tau_2) \overline{\widehat{w}(n_3,\tau_3)}\right\|_{L_n^2 L_{\tau}^1}\lesssim \|u\|_{X^{0,\frac{3}{8}}}\|v\|_{X^{0,\frac{3}{8}}}\|w\|_{X^{0,\frac{3}{8}}}. \end{split} \end{equation} First, it is not difficult to see that duality, $L_{xt}^4 L_{xt}^4 L_{xt}^4 L_{xt}^4$ H\"older inequality and the Bourgain-Strichartz estimate in lemma~\ref{l.Bourgain} (i.e., $X^{0,3/8}\subset L^4$) implies~(\ref{e.a.trilinear-1}). Next, consider the contribution of~(\ref{e.a.trilinear-2}). By Cauchy-Schwarz in $\tau$, since $2(-5/8)<-1$, we need only to prove that \begin{equation*} \left\|\sum\limits_{n=n_1+n_2-n_3}\int_{\tau=\tau_1+\tau_2+\tau_3} \frac{1}{\langle\tau+n^2\rangle^{\frac{3}{8}}}\widehat{u}(n_1,\tau_1) \widehat{v}(n_2,\tau_2) \overline{\widehat{w}(n_3,\tau_3)}\right\|_{L_n^2 L_{\tau}^1}\lesssim \|u\|_{X^{0,\frac{3}{8}}}\|v\|_{X^{0,\frac{3}{8}}}\|w\|_{X^{0,\frac{3}{8}}}, \end{equation*} which follows again by duality, $L_{xt}^4 L_{xt}^4 L_{xt}^4 L_{xt}^4$ H\"older inequality and the Bourgain-Strichartz estimate. This concludes the proof of the lemma~\ref{l.nls}. \subsection{Bilinear estimates for $\partial_x(v^2)$}\label{s.a.bilinear}Now, we present the proof of the bilinear estimate in lemma~\ref{l.kdv}. Since this bilinear estimate was used only in the case $s\geq 0$, we will restrict ourselves to this specific context (although the proof of the bilinear estimate for $-1/2\leq s\leq 0$ is similar). Again, the argument is due to Bourgain~\cite{Bourgain} (except for the bound on the weighted $L_n^2 L_{\tau}^1$ portion of the $W^s$ norm, which is due to Colliander, Keel, Stafillani, Takaoka and Tao~\cite{CKSTT}). 
By definition of $W^s$, it suffices to prove that \begin{equation}\label{e.a.bilinear-1} \begin{split} &\sup_{\|\phi\|_{L_{n,\tau}^2}\leq 1} \sum\limits_{n=n_1+n_2}\int_{\tau=\tau_1+\tau_2} \frac{|n|\langle n\rangle^s}{\langle\tau-n^3\rangle^{1/2}}\widehat{v_1}(n_1,\tau_1) \widehat{v_2}(n_2,\tau_2)\overline{\phi(n,\tau)}\lesssim \\ &\|v_1\|_{Y^{s,\frac{1}{2}}}\|v_2\|_{Y^{s,\frac{1}{2}-}} + \|v_1\|_{Y^{s,\frac{1}{2}-}}\|v_2\|_{Y^{s,\frac{1}{2}}} \end{split} \end{equation} and \begin{equation}\label{e.a.bilinear-2} \begin{split} \left\|\frac{|n|\langle n\rangle^s}{\langle\tau-n^3\rangle} \widehat{v_1 v_2}(n,\tau)\right\|_{L_n^2 L_{\tau}^1}\lesssim \|v_1\|_{Y^{s,\frac{1}{2}}}\|v_2\|_{Y^{s,\frac{1}{2}-}} + \|v_1\|_{Y^{s,\frac{1}{2}-}}\|v_2\|_{Y^{s,\frac{1}{2}}}. \end{split} \end{equation} Note that our hypothesis of zero mean implies that $n n_1 n_2\neq 0$. Since $$\tau-n^3 = (\tau_1-n_1^3) + (\tau_2-n_2^3) - 3n n_1 n_2,$$ we obtain that $$\max\{\langle\tau-n^3\rangle, \langle\tau_1-n_1^3\rangle, \langle\tau_2-n_2^3\rangle\}\gtrsim |n n_1 n_2|\gtrsim |n|^2.$$ Also, observe that $s\geq 0$ implies that $\langle n\rangle^s\lesssim \langle n_1\rangle^s\langle n_2\rangle^s$. First, we deal with~(\ref{e.a.bilinear-1}). To do so, we analyse two cases: \begin{itemize} \item $\langle\tau-n^3\rangle=\max\{\langle\tau-n^3\rangle, \langle\tau_1-n_1^3\rangle, \langle\tau_2-n_2^3\rangle\}$: in this case, the estimate~(\ref{e.a.bilinear-1}) follows from \begin{equation*} \sup\limits_{\|\phi\|_{L_{n,\tau}^2}\leq 1} \sum\limits_{n=n_1+n_2}\int_{\tau=\tau_1+\tau_2} \widehat{v_1}\widehat{v_2}\overline{\phi(n,\tau)}\lesssim \|v_1\|_{Y^{0,\frac{1}{3}}}\|v_2\|_{Y^{0,\frac{1}{3}}}, \end{equation*} which is an easy consequence of duality, $L_{x,t}^4 L_{x,t}^4 L_{x,t}^2$ H\"older inequality and Bourgain-Strichartz estimate in lemma~\ref{l.Bourgain} ($Y^{0,1/3}\subset L^4$). \item $\langle\tau_j-n_j^3\rangle=\max\{\langle\tau-n^3\rangle, \langle\tau_1-n_1^3\rangle, \langle\tau_2-n_2^3\rangle\}$ for $j\in\{1,2\}$: in this case, the estimate~(\ref{e.a.bilinear-1}) follows from \begin{equation*} \sup\limits_{\|\phi\|_{L_{n,\tau}^2}\leq 1} \sum\limits_{n=n_1+n_2}\int_{\tau=\tau_1+\tau_2} \widehat{v_1}(n_1,\tau_1)\widehat{v_2}(n_2,\tau_2) \frac{\overline{\phi(n,\tau)}}{\langle\tau-n^3\rangle^{1/2}} \lesssim \|v_1\|_{Y^{0,0}}\|v_2\|_{Y^{0,\frac{1}{2}-}} \end{equation*} and \begin{equation*} \sup\limits_{\|\phi\|_{L_{n,\tau}^2}\leq 1} \sum\limits_{n=n_1+n_2}\int_{\tau=\tau_1+\tau_2} \widehat{v_1}(n_1,\tau_1)\widehat{v_2}(n_2,\tau_2) \frac{\overline{\phi(n,\tau)}}{\langle\tau-n^3\rangle^{1/2}} \lesssim \|v_1\|_{Y^{0,\frac{1}{2}-}}\|v_2\|_{Y^{0,0}}, \end{equation*} which are valid by duality, H\"older and the Bourgain-Strichartz estimate. \end{itemize} Second, we consider~(\ref{e.a.bilinear-2}). 
Again, we distinguish two cases: \begin{itemize} \item $\langle\tau_j-n_j^3\rangle=\max\{\langle\tau-n^3\rangle, \langle\tau_1-n_1^3\rangle, \langle\tau_2-n_2^3\rangle\}$ for $j\in\{1,2\}$: after doing the natural cancelations, we see that~(\ref{e.a.bilinear-2}) is a corollary of \begin{equation*} \begin{split} \left\|\langle\tau-n^3\rangle^{-\frac{2}{3}}\langle\tau-n^3\rangle^{-\frac{1}{3}} \sum\limits_{n=n_1+n_2}\int_{\tau=\tau_1+\tau_2} \widehat{v_1}(n_1,\tau_1)\widehat{v_2}(n_2,\tau_2)\right\|_{L_n^2 L_{\tau}^1} \lesssim \|v_1\|_{Y^{0,0}}\|v_2\|_{Y^{0,\frac{1}{3}}} \end{split} \end{equation*} and \begin{equation*} \left\|\langle\tau-n^3\rangle^{-\frac{2}{3}}\langle\tau-n^3\rangle^{-\frac{1}{3}} \sum\limits_{n=n_1+n_2}\int_{\tau=\tau_1+\tau_2} \widehat{v_1}(n_1,\tau_1)\widehat{v_2}(n_2,\tau_2)\right\|_{L_n^2 L_{\tau}^1} \lesssim \|v_1\|_{Y^{0,\frac{1}{3}}}\|v_2\|_{Y^{0,0}}. \end{equation*} Applying Cauchy-Schwarz in $\tau$, since $2(-2/3)<-1$, it suffices to prove \begin{equation*} \left\|\langle\tau-n^3\rangle^{-1/3}\sum\limits_{n=n_1+n_2}\int_{\tau=\tau_1+\tau_2} \widehat{v_1}(n_1,\tau_1)\widehat{v_2}(n_2,\tau_2)\right\|_{L_{n}^2 L_{\tau}^2} \lesssim \|v_1\|_{Y^{0,0}}\|v_2\|_{Y^{0,\frac{1}{3}}} \end{equation*} and \begin{equation*} \left\|\langle\tau-n^3\rangle^{-1/3}\sum\limits_{n=n_1+n_2}\int_{\tau=\tau_1+\tau_2} \widehat{v_1}(n_1,\tau_1)\widehat{v_2}(n_2,\tau_2)\right\|_{L_{n}^2 L_{\tau}^2} \lesssim \|v_1\|_{Y^{0,\frac{1}{3}}}\|v_2\|_{Y^{0,0}}. \end{equation*} Rewriting the left-hand sides by duality, using H\"older inequality and Bourgain-Strichartz estimate $Y^{0,1/3}\subset L^4$ we finish off this case. \item $\langle\tau-n^3\rangle=\max\{\langle\tau-n^3\rangle, \langle\tau_1-n_1^3\rangle, \langle\tau_2-n_2^3\rangle\}$: we subdivide this case into two situations. If $\langle\tau_j-n_j^3\rangle\gtrsim |n n_1 n_2|^{1/100}\gtrsim |n|^{1/50}$ for some $j\in\{1,2\}$, we cancel $\langle\tau_j-n_j^3\rangle^{1/6}$ leaving $\langle\tau_j-n_j^3\rangle^{1/3}$ so that we need to show \begin{equation*} \left\|\langle\tau-n^3\rangle^{-\frac{1}{2}-} \sum\limits_{n=n_1+n_2}\int_{\tau=\tau_1+\tau_2} \widehat{v_1}(n_1,\tau_1)\widehat{v_2}(n_2,\tau_2)\right\|_{L_n^2 L_{\tau}^1} \lesssim \|v_1\|_{Y^{0,0}}\|v_2\|_{Y^{0,\frac{1}{3}}} \end{equation*} and \begin{equation*} \left\|\langle\tau-n^3\rangle^{-\frac{1}{2}-} \sum\limits_{n=n_1+n_2}\int_{\tau=\tau_1+\tau_2} \widehat{v_1}(n_1,\tau_1)\widehat{v_2}(n_2,\tau_2)\right\|_{L_n^2 L_{\tau}^1} \lesssim \|v_1\|_{Y^{0,\frac{1}{3}}}\|v_2\|_{Y^{0,0}}. \end{equation*} This is an easy consequence of Cauchy-Schwarz in $\tau$, H\"older inequality and Bourgain-Strichartz. If $\langle\tau_j-n_j^3\rangle\ll |n n_1 n_2|^{1/100}$ for $j=1,2$, we observe that \begin{equation*} \tau-n^3 = -3 n n_1 n_2 + O(\langle n n_1 n_2\rangle^{1/100}). \end{equation*} After some cancelations, we need to prove that \begin{equation*} \left\|\langle\tau-n^3\rangle^{-1/2}\sum\limits_{n=n_1+n_2}\int_{\tau=\tau_1+\tau_2} \widehat{v_1}(n_1,\tau_1)\widehat{v_2}(n_2,\tau_2) \chi_{\Omega(n)}(\tau-n^3) \right\|_{L_n^2 L_{\tau}^1} \lesssim \|v_1\|_{Y^{0,1/3}}\|v_2\|_{Y^{0,1/3}}, \end{equation*} where $\Omega(n):=\{\eta\in\mathbb R : \eta = -3 n n_1 n_2 + O(\langle n n_1 n_2\rangle^{1/100}) \textrm{ for } n_1,n_2\in\mathbb Z \textrm{ with } n=n_1+n_2\}$. 
By Cauchy-Schwarz in $\tau$, we bound the left-hand side by \begin{equation*} \left\| \left(\int\langle\tau-n^3\rangle^{-1}\chi_{\Omega(n)}(\tau-n^3) d\tau\right)^{1/2} \left\|\sum\limits_{n=n_1+n_2}\int_{\tau=\tau_1+\tau_2} \widehat{v_1}(n_1,\tau_1)\widehat{v_2}(n_2,\tau_2) \right\|_{L_n^2}\right\|_{L_{\tau}^2}. \end{equation*} Therefore, it remains only to prove that \begin{equation*} \left(\int\langle\tau-n^3\rangle^{-1}\chi_{\Omega(n)}(\tau-n^3) d\tau\right)^{1/2} \lesssim 1. \end{equation*} To estimate the integral on the left-hand side, we need the following lemma about the distribution of points in $\Omega(n)$ in a fixed dyadic block: \begin{lemma}Fix $n\in\mathbb Z-\{0\}$. For $n_1,n_2\in\mathbb Z-\{0\}$, we have for all dyadic $M\geq 1$ \begin{equation*} |\{\mu\in\mathbb R : |\mu|\sim M, \mu = -3 n n_1 n_2 + O(\langle n n_1 n_2\rangle^{1/100})\}| \lesssim M^{1-\delta}, \end{equation*} for some $\delta>0$. \end{lemma} \begin{proof}By symmetry, we may assume $|n_1|\geq |n_2|$. Consider first the situation $|n|\geq |n_1|$. Since $\mu = -3 n n_1 n_2 + O(\langle n n_1 n_2\rangle^{1/100})$, we get $|n|\lesssim |\mu|\lesssim |n|^3$ because $n_1,n_2\in\mathbb Z-\{0\}$ and $|n n_1 n_2|\lesssim |n|^3$. Suppose $\mu\sim M$ and $|n|\sim N$. For some $1\leq p\leq 3$, we have $M\sim N^p$. Thus, the expression of $\mu$ implies that $|n_1 n_2|\sim M^{1-\frac{1}{p}}$. Observe that there are at most $M^{1-\frac{1}{p}}$ multiples of $M^{\frac{1}{p}}$ in the dyadic block $\{|\mu|\sim M\}$. Therefore, the set of $\mu$ with the form $-3 n n_1 n_2 + O(\langle n n_1 n_2\rangle^{1/100})$ is the union of $M^{1-\frac{1}{p}}$ intervals of size $M^{1/100}$, each of them containing an integer multiple of $n$. Then, \begin{equation*} |\{\mu\in\mathbb R: |\mu|\sim M, \mu=-3 n n_1 n_2 + O(\langle n n_1 n_2\rangle^{1/100})\}| \leq M^{1-\frac{1}{p}} M^{1/100}\lesssim M^{3/4}, \end{equation*} since $1\leq p\leq 3$. In the situation $|n|\leq |n_1|$, we have $|n_1|\lesssim |\mu|\lesssim |n_1|^3$. So, if $|n_1|\sim N_1$, we obtain $M\sim N_1^{p}$ for some $1\leq p\leq 3$. Thus, we can repeat the previous argument. \end{proof} Using this lemma, it is not hard to prove that \begin{equation*} \int\langle\tau-n^3\rangle^{-1}\chi_{\Omega(n)}(\tau-n^3) d\tau\lesssim 1. \end{equation*} Indeed, we change the variables to rewrite the left-hand side as \begin{equation*} \int\langle\mu\rangle^{-1}\chi_{\Omega(n)}(\mu) d\mu. \end{equation*} Decomposing the domain of integration and using the previous lemma, we have \begin{equation*} \begin{split} &\int\langle\mu\rangle^{-1}\chi_{\Omega(n)}(\mu) d\mu = \\ &= \int\limits_{|\mu|\leq 1}\langle\mu\rangle^{-1}\chi_{\Omega(n)}(\mu) d\mu + \sum\limits_{M\geq 1 \; dyadic}\int\limits_{|\mu|\sim M} \langle\mu\rangle^{-1}\chi_{\Omega(n)}(\mu) d\mu \leq \\ &\lesssim 1 + \sum\limits_{M\geq 1 \; dyadic} M^{-1} M^{1-\delta} \lesssim 1. \end{split} \end{equation*} \end{itemize} This finishes the proof of the lemma~\ref{l.kdv}. \subsection{Three conserved quantities for the NLS-KdV flow}\label{s.a.global} In the sequel, we show that the quantities \begin{itemize} \item $M(t):=\int_{\mathbb T} |u(t)|^2 dx$, \item $Q(t):=\int_{\mathbb T}\left\{\alpha v(t)^2 + 2\gamma \Im(u(t)\overline{\partial_x u(t)}) dx\right\}$ and \item $E(t):=\int_{\mathbb T}\left\{\alpha\gamma v(t)|u(t)|^2 -\frac{\alpha}{6}v(t)^3 + \frac{\beta\gamma}{2}|u(t)|^4 + \frac{\alpha}{2}|\partial_x v(t)|^2 + \gamma |\partial_x u(t)|^2\right\} dx$ \end{itemize} are conserved by the NLS-KdV flow, as discovered by M. 
Tsutsumi~\cite{MTsutsumi}. By the local well-posedness result in theorem~(\ref{t.A}), we may assume that $u$ and $v$ are smooth in both $x$ and $t$ variables. First, we consider $M(t)$. Differentiating with respect to $t$, we have \begin{equation*} \partial_t M(t) = \int_{\mathbb T} \partial_t u \cdot \overline{u} + \int_{\mathbb T} u \cdot \overline{\partial_t u} \end{equation*} Since the equation~(\ref{e.nls-kdv}) implies \begin{equation*} \partial_t u = i\partial_x^2 u -i\alpha uv -i \beta|u|^2 u, \end{equation*} we see that, by integration by parts, \begin{equation*} \begin{split} \int_{\mathbb T}\partial_t u \cdot \overline{u} &= i\int\partial_x^2 u\cdot\overline{u} -i\int\alpha u\overline{u}v -i\int\beta|u|^4 \\ &= -\int u\cdot\overline{i\partial_x^2 u} - \int u(\overline{-i\alpha uv}) - \int u(\overline{-i\beta|u|^2 u}) \\ &= -\int_{\mathbb T}u\cdot\overline{\partial_t u}. \end{split} \end{equation*} Hence, $\partial_t M(t)=0$, i.e., $M(t)$ is a conserved quantity. Second, we analyse $Q(t)$. Differentiating with respect to $t$ and using that $v$ is a real-valued function, \begin{equation*} \partial_t Q(t) = 2\alpha\int_{\mathbb T} \partial_t v\cdot v + 2\gamma\int_{\mathbb T}\Im(\partial_t u \overline{\partial_x u}) + 2\gamma\int_{\mathbb T}\Im(u\overline{\partial_x \partial_t u}). \end{equation*} Applying~(\ref{e.nls-kdv}) and using integration by parts, we obtain \begin{equation*} 2\alpha\int_{\mathbb T}\partial_t v\cdot v = 2\alpha\gamma\int_{\mathbb T}\{-\partial_x^3 v -\frac{1}{2}\partial_x (v^2)+\gamma\partial_x(|u|^2)\}\cdot v = 2\alpha\gamma\int_{\mathbb T}\partial_x(|u|^2)\cdot v, \end{equation*} \begin{equation*} \int_{\mathbb T} \partial_t u\partial_x\overline{u} = i\int_{\mathbb T} \partial_x^2u \partial_x\overline{u} -i\alpha\int_{\mathbb T}uv\partial_x\overline{u}-i\beta\int_{\mathbb T}|u|^2 u\partial_x\overline{u} \end{equation*} and \begin{equation*} \begin{split} \int_{\mathbb T}u\partial_x \partial_t \overline{u} &= \int_{\mathbb T} u\cdot\partial_x\{-i\partial_x^2 u + i\alpha \overline{u}v +i\beta|u|^2\overline{u}\}\\ &= i\alpha\int_{\mathbb T} uv\partial_x\overline{u} + i\alpha\int_{\mathbb T}|u|^2\partial_x v + i\beta\int_{\mathbb T}|u|^2 u \partial_x\overline{u} +i\beta\int_{\mathbb T}|u|^2\partial_x(|u|^2) \\ &= i\alpha\int_{\mathbb T} uv\partial_x\overline{u} + i\alpha\int_{\mathbb T}\partial_x(|u|^2) v + i\beta\int_{\mathbb T}|u|^2 u \partial_x\overline{u}. \end{split} \end{equation*} In particular, \begin{equation*} \int_{\mathbb T} \partial_t u\partial_x\overline{u}+\int_{\mathbb T}u\partial_x \partial_t \overline{u} = i\int_{\mathbb T} \partial_x^2u \partial_x\overline{u}+i\alpha\int_{\mathbb T}\partial_x(|u|^2) v. \end{equation*} Since $i\int_{\mathbb T}\partial_x^2 u \partial_x\overline{u} = \overline{i\int_{\mathbb T}\partial_x\overline{u}\partial_x^2 u}$, we get \begin{equation*} \Im(\int_{\mathbb T} \partial_t u\partial_x\overline{u}+\int_{\mathbb T}u\partial_x \partial_t \overline{u}) = \alpha\int_{\mathbb T}\partial_x(|u|^2) v. \end{equation*} Hence, putting these informations together, we obtain $\partial_t Q(t)=0$. Third, we compute $\partial_t E(t)$. Writing $E(t)=I-II+III+IV+V$, where $I:=\alpha\gamma\int_{\mathbb T}|u|^2 v$, $II=\frac{\alpha}{6}\int_{\mathbb T}v^3$, $III:= \frac{\beta\gamma}{2}\int_{\mathbb T} |u|^4$, $IV:= \frac{\alpha}{2} \int_{\mathbb T}|\partial_x v|^2$ and $V:=\gamma\int_{\mathbb T}|\partial_x u|^2$. 
Using~(\ref{e.nls-kdv}) and integrating by parts, \begin{equation*} \partial_t I = -\alpha\gamma\int_{\mathbb T}|u|^2\{\partial_x^3 v+\frac{1}{2}\partial_x(v^2)\} +\alpha\gamma\int_{\mathbb T}v\{i\partial_x^2 u\cdot\overline{u} - i\partial_x^2\overline{u}\cdot u\}, \end{equation*} \begin{equation*} -\partial_t II = -\frac{\alpha}{2}\int_{\mathbb T} \left\{-v^2\partial_x^3 v + \gamma v^2\partial_x(|u|^2)\right\} = \frac{\alpha}{2}\int_{\mathbb T}v^2\partial_x^3 v +\alpha\gamma\int_{\mathbb T}|u|^2\frac{1}{2}\partial_x(v^2), \end{equation*} \begin{equation*} \partial_t III = \beta\gamma\int_{\mathbb T}\{\partial_t u |u|^2\overline{u}+\partial_t\overline{u} |u|^2 u\} = \beta\gamma\int_{\mathbb T}\{i\overline{u}|u|^2\partial_x^2 u - iu|u|^2\partial_x^2\overline{u} \}, \end{equation*} \begin{equation*} \partial_t IV = \alpha\int_{\mathbb T}\partial_x v \cdot\partial_x \partial_t v = -\frac{\alpha}{2}\int_{\mathbb T}v^2\partial_x^3 v+ \alpha\gamma\int_{\mathbb T}|u|^2\partial_x^3 v, \end{equation*} \begin{equation*} \begin{split} \partial_t V &= \gamma\int_{\mathbb T}\{\partial_x\partial_t u\cdot\partial_x\overline{u} +\partial_x u\cdot\partial_x\partial_t\overline{u}\} \\ &= \gamma\int_{\mathbb T}\{-i\partial_x^2 u +i\alpha uv +i\beta |u|^2 u\}\partial_x^2\overline{u} + \{i\partial_x^2\overline{u} -i\alpha\overline{u}v - i\beta |u|^2\overline{u}\} \partial_x^2 u \\ &= -\alpha\gamma\int_{\mathbb T}v\cdot\{i\partial_x^2 u\cdot\overline{u} -i\partial_x^2\overline{u}\cdot u\} -\beta\gamma\int_{\mathbb T}\{i\overline{u}|u|^2\partial_x^2 u - iu|u|^2\partial_x^2\overline{u}\}. \end{split} \end{equation*} From these expressions, since all the terms cancel in the sum $\partial_t E=\partial_t I-\partial_t II+\partial_t III+\partial_t IV+\partial_t V$, we conclude that $\partial_t E(t)=0$.
\section{Introduction} We assume familiarity with the basic facts and notions from error-correcting codes, combinatorial designs and Hadamard matrices \cite{BJL}, \cite{BBFKKW}, \cite{HP}, \cite{IS}, \cite{ton88}. All codes considered in this paper are ternary. The minimum distance, or equivalently, the minimum weight $d$ of a ternary self-dual code of length $n$ divisible by 12 satisfies the upper bound $d\le n/4 + 3$ (cf., e.g. \cite[9.3]{HP}), and a self-dual code with minimum distance $d=n/4+3$ is called {\it extremal}. The extremal self-dual codes support combinatorial 5-designs by the Assmus-Mattson theorem \cite{AM}, \cite[8.4]{HP}. Ternary extremal self-dual codes are known for the following lengths $n\equiv 0 \pmod {12}$: $n=12$: the extended Golay code, unique up to equivalence; $n=24$: there are exactly two inequivalent codes, the extended quadratic-residue code \cite{AM} and the Pless symmetry code $C(11)$ \cite{Pless69}, \cite{Pless72}; $n=36$: only one code is known, namely the Pless symmetry code $C(17)$ \cite{Pless69}, \cite{Pless72}; $n=48$: two codes are known, the extended quadratic-residue code and the Pless symmetry code $C(23)$; $n=60$: three codes are known: the extended quadratic-residue code, the Pless symmetry code $C(29)$, and a code found by Nebe and Villar \cite{NV}. Huffman \cite{Huf} proved that any extremal ternary self-dual code of length 36 that admits an automorphism of prime order $p>3$ is monomially equivalent to the Pless symmetry code. More recently, Eisenbarth and Nebe \cite{EN} extended Huffman's result by proving that the Pless symmetry code is the unique (up to monomial equivalence) ternary extremal self-dual code of length 36 that admits an automorphism of order 3. In addition, it was proved in \cite[Theorem 5.1]{EN} that if $C$ is an extremal ternary self-dual code of length 36 then either $C$ is equivalent to the Pless symmetry code or the full automorphism group of $C$ is a subgroup of the cyclic group of order 8. It is known \cite{Pless72} that the Pless symmetry code $C(q)$ of length $n=2q+2$, where $q\equiv -1 \pmod 3$ is an odd prime power, contains a set of $n$ codewords of weight $n$, which after replacing every entry equal to 2 by $-1$ form the rows of a Hadamard matrix equivalent to the Paley-Hadamard matrix of type II. In particular, the Pless symmetry code $C(17)$ contains the rows of a Hadamard matrix $P$ of Paley type II, having a full automorphism group of order $4\cdot 17(17^2 -1)=19584$, and the rows of $P$ span the code $C(17)$. It was shown in \cite{Ton21} that the code $C(17)$ contains a second equivalence class of Hadamard matrices of order 36 having as rows codewords of $C(17)$. Any matrix $H$ from the second equivalence class has a full automorphism group of order 72 and the rows of $H$ span the code $C(17)$. In addition, $H$ is monomially equivalent to a regular Hadamard matrix $H'$, such that every row of $H'$ has 15 entries equal to 1 and 21 entries equal to $-1$ . The symmetric 2-$(36,15,6)$ design $D$ with $(0,1)$-incidence obtained by replacing every entry $-1$ of $H'$ with 0 has a trivial full automorphism group, and the row span of its incidence matrix over $GF(3)$ is equivalent to the Pless symmetry code $C(17)$. In Section 2, we consider codes that are obtained from $C(17)$ by negating code coordinates so that the resulting new code contains the all-one vector, and examine Hadamard matrices formed by codewords of weight 36. 
This examination reveals the surprising fact that the Pless symmetry code $C(17)$ is equivalent to a code spanned by the rows of a regular Hadamard matrix which is monomially equivalent to the Paley-Hadamard matrix of type II, the associate symmetric 2-$(36,15,6)$ design has a full automorphism group of order 24, and its incidence matrix spans a code equivalent to $C(17)$. Motivated by this phenomenon and the results from \cite{EN}, in Section 3 we classify all symmetric 2-$(36,15,6)$ designs that admit an automorphism of order 2 and their incidence matrices span an extremal ternary self-dual code of length 36. The results of this classification show that, up to isomorphism, there is only one such design, being isomorphic to the design associated with the regular presentation of the Paley-Hadamard matrix of type II found in Section 2. \section{Hadamard matrices derived from the symmetry code of length 36} \label{sec2} In this section we consider Hadamard matrices of order 36 whose rows are obtained from codewords of full weight 36 in the ternary Pless symmetry code $C(17)$ after replacing all codeword entries that are equal to 2 with $-1$. The code $C(17)$ contains exactly 888 codewords of weight 36. In what follows, we denote by $U$ the set of all 888 codewords in $C(17)$ of full weight. The 3-rank (that is, the rank over the finite field of order 3, $GF(3)$) of the matrix having as rows the codewords from $U$ is 18, hence $U$ spans $C(17)$. In the context of Hadamard matrices, we view the codewords from $U$ as vectors with components $\pm 1$. \begin{lem} \label{l1} A set $S$ of 36 codewords from $U$ is the row set of a Hadamard matrix of order 36 if and only if the Hamming distance between every two codewords from $S$ is 18. \end{lem} \medskip\noindent{\bf Proof.}\quad The inner product of two vectors $u=(u_1,\ldots, u_{36})$, $v=(v_1, \ldots , v_{36})$ of length 36 with entries $\pm 1$ is zero over the field of real numbers if and only if there are exactly 18 indices $i$, $1\le i\le 36$, such that $u_{i}v_{i}=-1$, hence $u_i \neq v_i$, and the remaining 18 indices $j$, $1\le j\le 36$, satisfy $u_{j}v_{j}=1$, that is, $u_j = v_j$. \qed By Lemma \ref{l1}, every Hadamard matrix of order 36 whose rows are codewords from $U$ corresponds to a clique of size 36 in a graph $\Gamma$ having as vertices the codewords from $U$, where two codewords are adjacent in $\Gamma$ if they are at Hamming distance 18 from each other. Enumerating all Hadamard matrices up to equivalence by a direct search for cliques of size 36 is computationally infeasible because for every Hadamard matrix $H$, the graph $\Gamma$ contains $2^{36}$ distinct 36-cliques that correspond to $2^{36}$ Hadamard matrices that are equivalent to $H$ and are obtained by negating rows of $H$. Note that negating a row is equivalent to a scalar multiplication of the corresponding codeword by 2 (mod 3), thus any negation of a row preserves $U$. A simple way to reduce the search is by considering normalized Hadamard matrices. \begin{lem} \label{l2} If $H$ is a Hadamard matrix whose rows are codewords from $U$ then the row set of any normalized Hadamard matrix obtained from $H$ belongs to a code which is monomially equivalent to $C(17)$. \end{lem} \medskip\noindent{\bf Proof.}\quad The matrix $H$ is normalized with respect to column $j$ by negating (or multiplying by 2 (mod 3)) all rows of $H$ having entry $-1$ in the $j$th column. Any such negation preserves $U$. 
Similarly, $H$ is normalized with respect to row $i$ by negating all columns of $H$ having entry $-1$ in the $i$th row. These negations of the columns of $U$ either preserve $U$ or change $U$ to a new set that spans a linear code which is monomially equivalent to $C(17)$. \qed The set $U$ of the Pless symmetry code $C(17)$, as defined in \cite{Pless69}, \cite{Pless72}, does not contain the all-one codeword $\bar{1}=(1,\ldots,1)$, while it contains a codeword $v$ with one entry equal to $-1$, located in the code coordinate labeled by $\infty$, and 35 entries equal to 1. Negating (that is, multiplying by 2 (mod 3)) the code coordinate of $C(17)$ labeled by $\infty$ transforms the symmetry code $C(17)$ into a monomially equivalent code $L(17)$ which does contain the all-one vector $\bar{1}$ \cite{Ton21}. If $x \in L(17)$ is a codeword of full weight 36, we denote by $w_{i}(x)$ the number of entries of $x$ that are equal to $i$ ($i = 1, 2$). The complete weight distribution of the set $W$ consisting of all 888 codewords of $L(17)$ of weight 36 is listed in Table \ref{tab1}. The set $W$ is available at \begin{verbatim} https://pages.mtu.edu/~tonchev/W.txt \end{verbatim} As shown in \cite{Ton21}, the set $M$ of the 70 codewords $x\in W$ with weight structure $(w_{1}(x),w_{2}(x))=(18,18)$ form the $(1,2)$-incidence matrix of a Hadamard 3-$(36,18,8)$ design with full automorphism group of order $272=2^{4}17$, while adding to $M$ the all-one vector $\bar{1}$ and the all-two vector $\bar{2}=2\bar{1}$ gives a set of 72 vectors that comprises of the rows of a normalized Hadamard matrix and its negative, being monomially equivalent to the Paley-Hadamard matrix of type II, having full monomial automorphism group of order $19584=2^{7}3^{2}17$. In addition, it was shown in \cite{Ton21} that the set of 408 codewords $x\in W$ with weight structure $(w_{1}(x),w_{2}(x))=(15,21)$ contains exactly 272 subsets of size 36 that are regular Hadamard matrices of order 36. All these 272 regular Hadamard matrices are pairwise equivalent, and have an automorphism group of order 72, while the associated symmetric 2-$(36,15,6)$ design has trivial full automorphism group, and its $(0,1)$-incidence matrix has 3-rank 18, hence spans the code $L(17)$. All inequivalent Hadamard matrices of order 36 whose rows are obtained from codewords in $L(17)$ of weight 36 can be enumerated by using Lemma \ref{l2} as follows. We consider $W$ as an $888 \times 36$ matrix, and for every integer $i$, $1\le i \le 888$, we define a matrix $W_i$, being the matrix obtained by negating all columns of $W$ which have entry $-1$ in row $i$. Thus, the $i$th row of $W_i$ is the all-one vector $\bar{1}$, and this row must be a row of every normalized Hadamard matrix consisting of rows of $W_i$. We refer to $W_i$ as a matrix obtained by switching of $W$ with respect to row $i$. To reduce the search further, we consider only normalized Hadamard matrices with first column being the all-one column. Next we define a graph $\Gamma_i$ having as vertices all rows of $W_i$ with first entry 1 and exactly 18 entries equal to 1, where two vertices are adjacent in $\Gamma_i$ if and only if the Hamming distance between the corresponding rows of $W_i$ is 18. \begin{lem} \label{l3} (a) Any set of 35 rows of $W_i$ that corresponds to a clique of size 35 in $\Gamma_i$, together with the all-one row $\bar{1}$, is the set of rows of a normalized Hadamard matrix.\\ (b) The maximum clique size in $\Gamma_i$ is 35. 
\end{lem} \medskip\noindent{\bf Proof.}\quad Part (a) follows from Lemma \ref{l1}. If $H$ is a normalized Hadamard matrix of order $n=2d$, then deleting the all-one column from $H$ gives a $2d\times (2d-1)$ matrix whose rows form a code $C$ of length $2d-1$ and minimum distance $d$ over the alphabet $\{ 1, -1 \}$ that meets the Plotkin upper bound \cite[2.2]{HP}, \cite[2.1]{ton88}: \[ |C| \le 2d. \] This proves part (b). \qed \begin{table} \begin{center} \begin{tabular}{|r|c|} \hline \#$x$ & $(w_{1}(x),w_{2}(x))$\\ \hline \hline 1 & (0,36) \\ \hline 408 & (15,21) \\ \hline 70 & (18,18) \\ \hline 408 & (21,15) \\ \hline 1 & (36,0)\\ \hline \end{tabular} \caption{The complete weight distribution of $W$}\label{tab1} \end{center} \end{table} One can compute representatives of the equivalence classes of Hadamard matrices by an examination of the 35-cliques in the graphs $\Gamma_i$, $1\le i \le 888$. Since every codeword $x\in W$ with weight structure $(w_{1}(x),w_{2}(x))=(21,15)$ is the negative of the codeword $2x \in W$ with weight structure $(15,21)$, it is sufficient to examine the graphs $\Gamma_i$ such that the weight structure of the $i$th row of $W$ is $(15,21)$ or $(18,18)$. The incidence structure with $(0,1)$-incidence matrix obtained from $W$ by replacing all entries that are equal to 2 with zero, has full automorphism group $G$ of order 272. The group $G$ partitions the set of 408 rows of $W$ with weight structure $(15,21)$ into four orbits of lengths 68, 68, 136, 136, and the set of 70 rows with weight structure $(18,18)$ into four orbits of lengths 2, 17, 17, 34. Therefore, it is sufficient to examine eight switchings of $W,$ one for each orbit, all together. An examination of the matrices $W_i$ obtained by switching of $W$ with respect to a row $i$ having weight structure $(18,18)$ shows that any such matrix has the same complete weight distribution as $W$ (see Table \ref{tab1}), and the resulting unique (up to a permutations of rows) normalized Hadamard matrix is equivalent to the Paley-Hadamard matrix of type II. An examination of the matrices $W_i$ obtained by switching of $W$ with respect to any of the 408 rows having weight structure $(15,21)$ shows that any such matrix has a complete weight distribution given in Table \ref{tab2}, where $W_{408}$ is obtained by the switching of W with respect to row no. 408. \begin{table} \begin{center} \begin{tabular}{|r|c|} \hline \#$x$ & $(w_{1}(x),w_{2}(x))$\\ \hline \hline 1 & (0,36) \\ \hline 93 & (12,24) \\ \hline 36 & (15,21) \\ \hline 628 & (18,18) \\ \hline 36 & (21,15)\\ \hline 93 & (24,12)\\ \hline 1 & (36,0)\\ \hline \end{tabular} \caption{The complete weight distribution of $W_{408}$}\label{tab2} \end{center} \end{table} Any graph $\Gamma_i$ associated with a matrix $W_i$ obtained by switching of $W$ with respect to a row $i$ with weight structure $(15,21)$ has 314 vertices and contains exactly 24 cliques of size 35. The resulting 24 normalized Hadamard matrices are all equivalent to the Hadamard matrix with full automorphism group of order 72 that was discovered in \cite{Ton21} as a regular Hadamard matrix derived from the code $L(17)$, whose related symmetric 2-$(36,15,6)$ design has trivial automorphism group. A surprising result of the switching of $W$ with respect to a row with weight structure $(15,21)$ that was suggested by the weight distribution in Table \ref{tab2} and was proved by inspection, is the following. 
\begin{thm} \label{t1} (a) The 36 codewords of $W_{408}$ with weight structure $(15,21)$ form a regular Hadamard matrix $H$ which is monomially equivalent to the Paley-Hadamard matrix of type II. (b) The symmetric 2-$(36,15,6)$ design $D$ associated with $H$ has a full automorphism group of order 24. (c) The incidence matrix of $D$ has 3-rank 18, and its linear span over $GF(3)$ is a code equivalent to the Pless symmetry code $C(17)$. \end{thm} \begin{note} {\rm Every row of the regular Hadamard matrix $H$ from Theorem \ref{t1} contains 15 entries equal to 1 and 21 entries equal to $-1$. A $(0,1)$-incidence matrix $A$ of the associated symmetric 2-$(36,15,6)$ design $D$ is obtained by adding the all-one codeword $\bar{1}$ to every row of $H$, followed by a multiplication of all rows by $2 \pmod 3$. Hence, the ternary code spanned by the rows of $H$ contains also the rows of $A$. } \end{note} A $(0,1)$-incidence matrix of $D$ is given in the Appendix. \section{Symmetric 2-$(36,15,6)$ designs with an involution and their ternary codes} It was proved in \cite[Theorem 5.1]{EN} that if $C$ is an extremal ternary self-dual code of length 36 then either $C$ is equivalent to the Pless symmetry code or the full automorphism group of $C$ is a subgroup of the cyclic group of order 8. This result and the fact that the Pless symmetry code is equivalent to codes spanned by the incidence matrices of two nonisomorphic 2-$(36,15,6)$ designs, motivated our study of symmetric 2-$(36,15,6)$ designs with an automorphism of order 2 (or, an {\it involution}), and the ternary codes spanned by their incidence matrices. In what follows we summarize the classification of symmetric 2-$(36,15,6)$ designs that are invariant under an involution and the ternary code of length 36 spanned by their incidence matrix is a self-dual code with minimum weight 12. It is known that every finite group of order $2^n$ contains a subgroup of order $2^i$ for every $i$ in the range $1 \le i \le n$ (cf. \cite[Theorem 6.5, page 116]{rose}). By this property, finding all nonisomorphic $2$-$(36,15,6)$ designs which are invariant under an involution and whose incidence matrix spans a ternary extremal self-dual code, will also give the enumeration of all nonisomorphic $2$-$(36,15,6)$ designs invariant under a nontrivial subgroup of the cyclic group of order $8$ with this property. For the construction of $2$-$(36,15,6)$ designs we use the method for constructing orbit matrices with presumed action of an automorphism group, which are then indexed to construct designs (see, e.g., \cite{c-r}, \cite{cep}). After constructing $2$-$(36,15,6)$ designs, we check the $3$-rank of their incidence matrices, and if it is equal to $18$ we determine the minimum weight of the corresponding ternary code. In our work we use computers. In addition to our own computer programs, we use computer programs by V. \'{C}epuli\'{c} for the construction of orbit matrices and the computer algebra system MAGMA \cite{magma} when working with codes.\\ The first step in the construction of $2$-$(36,15,6)$ designs that admit an involution is to determine the possible orbit lengths distributions. For that we use the following statements. \begin{thm}\cite[Corollary~3.7]{l}\label{cor-fp} Suppose that a nonidentity automorphism $\sigma$ of a symmetric $2$-$(v,k, \lambda)$ design fixes $f$ points. 
Then $$f \le v-2(k-\lambda) \qquad {\rm and} \qquad f \le ( \frac{\lambda}{k- \sqrt{k-\lambda}} ) v.$$ \end{thm} \begin{thm}\cite[Proposition~4.23]{l}\label{involution} Suppose that ${\mathcal D}$ is a nontrivial symmetric $2$-$(v,k, \lambda)$ design, with an involution $\sigma$fixing $f$ points and blocks. If $f \neq 0$, then $$ f \ge \left \{ \begin{tabular}{l l} $1 + \frac{k}{\lambda}$, & if $k$ and $\lambda$ are both even, \\ $1 + \frac{k-1}{\lambda}$, & otherwise. \\ \end{tabular} \right . $$ \end{thm} It follows that an involution acting on $2$-$(36,15,6)$ design could have $f$ fixed points, where $f \in \{4, 6, 8, 10, 12, 14, 16, 18 \}$. Our analysis shows that orbit matrices do not exist for $f \in \{6, 14, 18 \}$. The results for the remaining cases are summarized in Table \ref{tab3} and the constructed orbit matrices are available at \begin{verbatim} https://www.math.uniri.hr/~sanjar/structures/ \end{verbatim} The third row of Table \ref{tab3} contains information on the maximal minimum weight among the self-dual ternary codes spanned by incidence matrices of the corresponding $2$-$(36,15,6)$ designs, where symbol $\times$ indicates that all constructed designs have incidence matrices with 3-rank smaller than 18. \\ \begin{table}[htpb!] \begin{center} \begin{footnotesize} \begin{tabular}{|c ||c| c | c|c|c| } \hline Number of fixed points& 4 & 8 & 10 & 12 & 16 \\ \hline Number of orbit matrices& 12991& 670 & 56 & 311 & 83 \\ \hline max $d$ in self-dual codes & 12 & $\times$ & $\times$ & 9& $\times$ \\ \hline \end{tabular} \end{footnotesize} \caption{Symmetric 2-$(36,15,6)$ designs with an involution}\label{tab3} \end{center} \end{table} All $2$-$(36,15,6)$ designs whose incidence matrix span an extremal ternary code are mutually isomorphic. They are also isomorphic to the design discussed in Section \ref{sec2}, whose incidence matrix is given in the Appendix. The results of our classification of symmetric 2-$(36,15,6)$ designs with the desired properties can be summarized as follows. \begin{thm} (a) Up to isomorphism, there exists exactly one symmetric 2-$(36,15,6)$ design $D$ that admits an automorphism of order 2 and its incidence matrix spans an extremal ternary self-dual code of length 36. (b) The full automorphism group $G$ of $D$ is of order 24, and $G$ is isomorphic to the symmetric group $S_4$. (c) The regular Hadamard matrix associated with $D$ is equivalent to the Paley-Hadamard matrix of type II. (d) The ternary code spanned by $D$ is equivalent to the Pless symmetry code. \end{thm} \bigskip \noindent {\bf Statements and Declarations} \noindent {\bf Competing Interests:} The authors declare no conflict of interest.\\ {\bf Contributions:} This is a joint collaboration with both authors contributing substantially throughout.\\ {\bf Data Availability:} The links to data generated and analysed are included in this article.\\ {\bf Funding:} The first author is supported by {\rm C}roatian Science Foundation under the project 6732. \bigskip
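Purely as an illustration of the $3$-rank check described above (and not the MAGMA code actually used in this work), the following short Python sketch computes the rank over $GF(3)$ of an incidence matrix supplied as a list of rows; the variable \texttt{rows} is a hypothetical placeholder for, e.g., the $36\times 36$ $(0,1)$-matrix in the Appendix. The minimum weight of the resulting $18$-dimensional ternary code is a much heavier computation and is best handled by a dedicated system such as MAGMA.
\begin{verbatim}
def rank_mod3(rows):
    # Gaussian elimination over GF(3); rows is a list of equal-length
    # integer lists (e.g. the 36 rows of a (0,1)-incidence matrix).
    m = [[x % 3 for x in r] for r in rows]
    n_rows, n_cols = len(m), len(m[0])
    rank = 0
    for col in range(n_cols):
        # locate a pivot in this column below the current pivot row
        piv = next((r for r in range(rank, n_rows) if m[r][col] != 0), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][col], -1, 3)   # 1 and 2 are their own inverses mod 3
        m[rank] = [(x * inv) % 3 for x in m[rank]]
        for r in range(n_rows):
            if r != rank and m[r][col] != 0:
                f = m[r][col]
                m[r] = [(a - f * b) % 3 for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# A design can span an extremal ternary self-dual code of length 36
# only if rank_mod3(rows) == 18 for its incidence matrix.
\end{verbatim}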
\section{Introduction} \label{intro} \par In the present days the technological advances permit the study of the history and evolution of the Universe in a more profound manner, leading to a series of questions and breakthroughs from a scientific point of view. The exploration of various questions related to the history of the Universe can open new extraordinary possibilities in the long run. On a short time scale, the quest for the major cosmological scientific questions pose new specific difficulties to various technological areas, leading to the development of new technologies in applied science. The discovery of the accelerated expansion of the current Universe was a breakthrough at the end of the last millennium \cite{Peebles:2002gy}, configuring new specific areas in physics. This new curious phenomenon has been probed through various phenomenological studies \cite{DES:2021wwk, Planck:2015fie}, representing a viable and intriguing aspect of modern physics \cite{Beutler:2011hx}. The simplest dark energy model is represented by the $\Lambda$CDM scenario where the accelerated expansion is driven by a cosmological constant \cite{Copeland:2006wr}. However, this model has various limitations and cannot explain various observed features, including the late time dynamical evolution of the dark energy equation of state \cite{Copeland:2006wr}. In order to overcome these issues, a new theoretical direction has emerged, the extended theories of gravity \cite{Capozziello:2011et}. These theories are based on the mathematical approach of basic general relativity, extending the fundamental action to a more general or special form \cite{Clifton:2011jh, Bamba:2012cp}. It can be seen that the modified gravity theories can explain various aspects from the evolution of the Universe \cite{Nojiri:2017ncd}. In these theories, the fundamental action is extended, being replaced with a generic functional which takes into account possible influences from various invariant geometrical quantities \cite{Koyama:2015vza}. Another viable approach consists in adding one or more scalar fields in the fundamental action \cite{Tsujikawa:2010zza, Kasper:1989wu}, relating the evolution of the late time acceleration of the Universe with the dynamics induced by the scalar field(s). The scalar fields which can trigger the acceleration of the Universe can be in the form of quintessence models, with a canonical kinetic energy \cite{Nojiri:2006ri, Tsujikawa:2013fta}, embedding also a potential term. Another approach where the scalar fields are non-canonical is represented by phantom dark energy models, an approach which can explain the super--acceleration \cite{Caldwell:2003vq, Vikman:2004dc, Nojiri:2005sx, Ludwick:2017tox}. Soon after the emergence of quintessence and phantom models, a new intriguing aspect was discovered -- the crossing of the cosmological constant boundary in the recent past for the dark energy equation of state \cite{Feng:2004ad, Huterer:2004ch, Nesseris:2005ur}. This phenomenon triggered the formation of quintom models \cite{MohseniSadjadi:2006hb, Alimohammadi:2006tw, Guo:2006pc, Shi:2008df, Wei:2007rp, Lazkoz:2007mx, Leon:2018lnd}, a complex construction which includes two scalar fields, one with a negative kinetic energy, and another, with a canonical kinetic energy. 
Such a construction can explain the crossing of the phantom divide line in the recent past, being extended also in the non--minimal case \cite{Marciu:2020yaw,Marciu:2020vve,Marciu:2020hpk,Marciu:2019cpb, Marciu:2018oks, Bahamonde:2018miw, Marciu:2016aqq}. \par In the scalar tensor theories of gravitation the Einsteinian cubic gravity was developed in Ref.~\cite{Bueno:2016xff}, representing a particular extension which includes higher order contractions of the Riemann tensor. Such an approach has attracted attention in the last couple of years, being studied in various cosmological applications \cite{Bueno:2018yzo, Bueno:2018xqc, Jiang:2019kks, Pookkillath:2020iqq, Bueno:2016ypa, Li:2017txk, Hennigar:2018hza, Bueno:2019ltp, Caceres:2020jrf, BeltranJimenez:2020lee, Edelstein:2022xlb, Rudra:2022qbv}. The extension towards a generic theory based on the cubic invariant has been proposed in \cite{Erices:2019mkd}, analyzed numerically. Later, such an approach has been studied \cite{Marciu:2020ysf, Quiros:2020uhr} dynamically considering the linear stability theory, investigating the physical features of the phase space structure. The inclusion of the cubic component for scalar fields has appeared shortly \cite{Marciu:2020ski, Marciu:2022wzh} for tachyonic models and quintessence or phantom components. Furthermore, a more general extension which takes into account also the scalar curvature has been investigated \cite{Marciu:2021rdl}. The observational analysis for the generic cubic gravity \cite{Giri:2021amc} showed the regions of interest for various parameters from an astrophysical point of view. Moreover, various authors have investigated the inflation \cite{Quiros:2020eim, Edelstein:2020nhg, Arciniega:2019oxa, Arciniega:2018tnn, Arciniega:2018fxj} and the black hole solutions \cite{Hennigar:2016gkm, Bueno:2016lrh, Feng:2017tev, Adair:2020vso} in different cubic gravitational theories. \par As previously stated, the two--field dark energy models in the form of quintom scenarios can explain the dynamical evolution of the dark energy equation of state and the possible crossing of the phantom divide line in the near past \cite{Cai:2009zp}. This aspect is exhibited in the minimal coupling case, where the fields have only kinetic and potential energies, as well as in the non--minimal models which take into account various geometrical invariant components in the specific action \cite{Cai:2009zp}. To this regard, the basic two--field dark energy models can in principle be extended, by considering particular invariant components based on third order contractions of the Riemann tensor. \par In the present paper we have developed a two--field dark energy model which takes into account possible couplings with a specific invariant obtained by applying special contractions of the Riemann tensor in the third order. Hence, the Einstein--Hilbert action is extended by adding two scalar fields independently coupled with an invariant based on cubic contractions of the Riemann tensor. After proposing the fundamental action for the cosmological model we obtain the field equations by varying the action with respect to the inverse metric in the case of a Roberson--Walker background. The resulting Klein--Gordon relations are obtained in the usual manner, by the variation with respect to the specific fields which are assumed to be time--depending. 
The physical features for this cosmological setup are investigated by applying the linear stability theory in the case of exponential coupling functions, determining the phase space structure and the corresponding stationary points which are associated to different eras in the evolution of the Universe. The dynamical characteristics for our model are investigated by analyzing the corresponding eigenvalues, obtaining possible constraints for the specific parameters which are related to different dynamical effects. The present paper can thus be regarded as developing a more general dark energy model which takes into consideration viable couplings between scalar fields and cubic contractions of the Riemann tensor, aiming towards a multi--scalar field theory. \par The paper is organized as follows: in Sec.~\ref{sec:1} we propose the action for the present cosmological model and discuss the corresponding field equations which are obtained by taking the variation with respect to the inverse metric. Then, in Sec.~\ref{sec:2} we analyze the features of the phase space structure in the case of an exponential coupling and potential energy, considering a quintom scenario. Lastly, in Sec.~\ref{sec:3} we summarize our investigation and discuss the main physical features obtained. \section{The description of the action and the corresponding field equations} \label{sec:1} \par In this section we shall investigate a cosmological model based on two scalar fields which are non--minimally coupled with an invariant based on cubic contractions of the Riemann tensor, having the following action: \begin{multline} \label{actiune} S=S_m+\int d^4x \sqrt{-\tilde{g}} \Bigg( \frac{R}{2}-\frac{\epsilon_1}{2} \tilde{g}^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-\frac{\epsilon_2}{2} \tilde{g}^{\mu\nu}\partial_{\mu}\sigma\partial_{\nu}\sigma \\-V_1(\phi)- V_2(\sigma) + f(\phi)P + g(\sigma) P\Bigg), \end{multline} with the cubic invariant \cite{Erices:2019mkd}: \begin{multline} P=\beta_1 R_{\mu\quad\nu}^{\quad\rho\quad\sigma}R_{ \rho\quad\sigma}^{\quad \gamma\quad\delta}R_{\gamma\quad\delta}^{\quad\mu\quad\nu}+\beta_2 R_{\mu\nu}^{\rho\sigma}R_{\rho\sigma}^{\gamma\delta}R_{\gamma\delta}^{\mu\nu} \\+\beta_3 R^{\sigma\gamma}R_{\mu\nu\rho\sigma}R_{\quad\quad\gamma}^{\mu\nu\rho}+\beta_4 R R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}+\beta_5 R_{\mu\nu\rho\sigma}R^{\mu\rho}R^{\nu\sigma} \\+\beta_6 R_{\mu}^{\nu}R_{\nu}^{\rho}R_{\rho}^{\mu}+\beta_7 R_{\mu\nu}R^{\mu\nu}R+\beta_8 R^3. \end{multline} \par As can be noted, we have extended the fundamental Einstein--Hilbert action by considering two additional scalar fields endowed with potential energy, non--minimally coupled with a specific term which encodes effects from the third order contractions of the Riemann tensor. In this action the two constants $\epsilon_{1,2}$ encode the signs of the kinetic terms and determine whether the two scalar fields are canonical or non--canonical. Before proceeding to the dynamics, we need to describe the background, by specifying the Robertson--Walker metric: \begin{equation} \label{metrica} ds^2=-dt^2+a^2(t) \delta_{ij}dx^i dx^j, \end{equation} with $a(t)$ the cosmic scale factor, which characterizes the expansion of the Universe and defines the Hubble parameter in the usual manner, $H=\dot{a}/a$.
\par Next, by considering the following relations \cite{Erices:2019mkd}: \begin{equation} \beta_7=\frac{1}{12}\big[3\beta_1-24\beta_2-16\beta_3-48\beta_4-5\beta_5-9\beta_6\big], \end{equation} \begin{equation} \beta_8=\frac{1}{72}\big[-6\beta_1+36\beta_2+22\beta_3+64\beta_4+5\beta_5+9\beta_6\big], \end{equation} \begin{equation} \beta_6=4\beta_2+2\beta_3+8\beta_4+\beta_5, \end{equation} \begin{equation} \beta=(-\beta_1+4\beta_2+2\beta_3+8\beta_4), \end{equation} we can show that the cubic invariant term for the Robertson--Walker metric takes the following form \cite{Erices:2019mkd}: \begin{equation} \label{PP} P=6 \beta H^4 (2H^2+3\dot{H}). \end{equation} \par The modified Friedmann equations which are associated to the cosmological model are obtained by considering the variation with respect to the inverse metric, reducing to the following \cite{Marciu:2020ski}: \begin{equation} \label{friedmannconstr} 3H^2=\rho_m+\rho_{\phi}+\rho_{\sigma}, \end{equation} \begin{equation} \label{friedmannaccelerare} 3H^2+2\dot{H}=-p_m-p_{\phi}-p_{\sigma}, \end{equation} where \begin{equation} \label{densitatede} \rho_{\phi}=\frac{1}{2}\epsilon_1\dot{\phi}^2+V_1(\phi)+6 \beta f(\phi) H^6-18 \beta H^5 \frac{df(\phi)}{d\phi}\dot{\phi}, \end{equation} \begin{equation} \label{densitatede2} \rho_{\sigma}=\frac{1}{2}\epsilon_2\dot{\sigma}^2+V_2(\sigma)+6 \beta g(\sigma) H^6-18 \beta H^5 \frac{dg(\sigma)}{d\sigma}\dot{\sigma}, \end{equation} \begin{multline} \label{presiunede} p_{\phi}=\frac{1}{2}\epsilon_1\dot{\phi}^2-V_1(\phi)-6 \beta f(\phi) H^6-12 \beta f(\phi) H^4 \dot{H} \\+12 \beta H^5 \frac{df(\phi)}{d\phi}\dot{\phi}+24 \beta H^3 \frac{df(\phi)}{d\phi}\dot{H}\dot{\phi} \\ +6 \beta H^4 \dot{\phi}^2\frac{d^2f(\phi)}{d\phi^2}+6 \beta H^4 \frac{df(\phi)}{d\phi}\ddot{\phi}, \end{multline} \begin{multline} \label{presiunede2} p_{\sigma}=\frac{1}{2}\epsilon_2\dot{\sigma}^2-V_2(\sigma)-6 \beta g(\sigma) H^6-12 \beta g(\sigma) H^4 \dot{H} \\+12 \beta H^5 \frac{dg(\sigma)}{d\sigma}\dot{\sigma}+24 \beta H^3 \frac{dg(\sigma)}{d\sigma}\dot{H}\dot{\sigma} \\ +6 \beta H^4 \dot{\sigma}^2\frac{d^2g(\sigma)}{d\sigma^2}+6 \beta H^4 \frac{dg(\sigma)}{d\sigma}\ddot{\sigma}. \end{multline} \par We can define the equation of state for the scalar fields: \begin{equation} w_{\bf{\phi\sigma}}=\frac{p_{\phi}+p_{\sigma}}{\rho_{\phi}+\rho_{\sigma}}. \end{equation} Then, the total equation of state is equal to: \begin{equation} w_{\bf{tot}}=\frac{p_m+p_{\phi}+p_{\sigma}}{\rho_{m}+\rho_{\phi}+\rho_{\sigma}}=-1-\frac{2}{3}\frac{\dot{H}}{H^2}. \end{equation} \par The Klein--Gordon equation for the first scalar field is obtained by taking the variation with respect to the $\phi$ field and reduces to \cite{Marciu:2020ski}: \begin{equation} \label{eq:eqkg1} \epsilon_1(\ddot{\phi}+3 H \dot{\phi})+\frac{dV_1(\phi)}{d\phi}-6 \beta H^4 (2 H^2+3 \dot{H})\frac{df(\phi)}{d\phi}=0. \end{equation} \par In a similar way the second Klein--Gordon relation is obtained and has the following form: \begin{equation} \label{eq:eqkg2} \epsilon_2(\ddot{\sigma}+3 H \dot{\sigma})+\frac{dV_2(\sigma)}{d\sigma}-6 \beta H^4 (2 H^2+3 \dot{H})\frac{dg(\sigma)}{d\sigma}=0. \end{equation} \par Finally, we can define the matter density parameter \begin{equation} \Omega_m=\frac{\rho_{m}}{3H^2}, \end{equation} and the density parameter associated to the present two--field dark energy model: \begin{equation} \Omega_{de}=\frac{\rho_{\phi}+\rho_{\sigma}}{3H^2}, \end{equation} satisfying the usual constraint: \begin{equation} \Omega_m+\Omega_{de}=1.
\end{equation} \begin{figure} \includegraphics[width=8cm]{P6saddle.pdf} \caption{The figure displays a possible saddle region for the $P_6$ critical point ($\lambda_{\sigma}=2$).} \label{fig:1} \end{figure} \begin{figure} \includegraphics[width=8cm]{P7saddle.pdf} \caption{A non--exclusive saddle region for the $P_7$ solution where the de--Sitter behavior is expected ($w_m=0$).} \label{fig:2} \end{figure} \begin{figure} \includegraphics[width=8cm]{P9contourplot.pdf} \caption{A non--exclusive region for the $P_9$ solution where the matter density parameter is $s=0.3$.} \label{fig:3} \end{figure} \begin{figure} \includegraphics[width=8cm]{P9saddle.pdf} \caption{A non--exclusive region where the $P_9$ solution is saddle ($w_m=0, \lambda_{\phi}=1, \lambda_{\sigma}=1$).} \label{fig:4} \end{figure} \begin{figure} \includegraphics[width=8cm]{P11evolutionsaddle.pdf} \caption{The evolution near the $P_{11}$ critical point (represented as a red dot) in the $O x_1 y_1$ plane ($w_m=0, \lambda_{\phi}=3, \lambda_{\sigma}=1, \eta=1, \alpha=1$).} \label{fig:p11} \end{figure} \begin{figure} \includegraphics[width=8cm]{P13densitymatter.pdf} \caption{The matter density parameter for the $P_{13}$ cosmological solution.} \label{fig:5} \end{figure} \begin{figure} \includegraphics[width=8cm]{P14saddle.pdf} \caption{A region of interest for the $P_{14}$ cosmological solution where the dynamics corresponds to a saddle behavior ($w_m=0$).} \label{fig:6} \end{figure} \section{Dynamical properties for a quintom scenario} \label{sec:2} In this section we shall investigate the dynamical properties of the present cosmological model by considering that we have a quintom scenario with $\epsilon_1=+1, \epsilon_2=-1$. In this case the $\phi$ component is associated to the quintessence, while the $\sigma$ term describes a non--canonical field with a negative kinetic energy violating various aspects from a physical point of view. The dynamical aspects are investigated by adopting the linear stability theory, an important tool in modern cosmology. In order to study the dynamical features, we introduce the following dimensionless variables: \begin{equation} x_1=\frac{\dot{\phi}}{H\sqrt{3}} , \end{equation} \begin{equation} x_2=\frac{\dot{\sigma}}{H\sqrt{3}}, \end{equation} \begin{equation} y_1=\frac{\sqrt{V_1(\phi)}}{H\sqrt{3}} , \end{equation} \begin{equation} y_2=\frac{\sqrt{V_2(\sigma)}}{H\sqrt{3}} , \end{equation} \begin{equation} z_1=2 \beta f(\phi) H^4 , \end{equation} \begin{equation} z_2=2 \beta g(\sigma) H^4 , \end{equation} \begin{equation} s=\frac{\rho_m}{3 H^2}. \end{equation} For the potential energy terms we shall consider the exponential decomposition, \begin{equation} V_1(\phi)=V_{10}e^{- \lambda_{\phi} \phi}, \end{equation} \begin{equation} V_2(\sigma)=V_{20}e^{- \lambda_{\sigma} \sigma}, \end{equation} where $V_{10}, V_{20}, \lambda_{\phi}$ and $\lambda_{\sigma}$ are constant positive parameters. Furthermore, for the coupling functions we have taken: \begin{equation} f(\phi)=f_0 e^{\alpha \phi}, \end{equation} \begin{equation} g(\sigma)=g_0 e^{\eta \sigma}, \end{equation} with $f_0, g_0, \alpha, \eta$ constant parameters associated to the strength of the cubic dependence. In order to study the physical features of the phase space structure, we shall introduce a new variable $N=log(a)$, approximating the evolution of our cosmological model by an autonomous system of differential equations. 
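\par As an illustration of these definitions, the mapping from a physical state $(H,\phi,\dot{\phi},\sigma,\dot{\sigma})$ to the dimensionless variables $(x_1,x_2,y_1,y_2,z_1,z_2)$ can be written compactly. The following Python sketch is purely illustrative: the numerical values assigned to $V_{10}$, $V_{20}$, $\lambda_{\phi}$, $\lambda_{\sigma}$, $f_0$, $g_0$, $\alpha$, $\eta$ and $\beta$ are placeholders, not fitted quantities.
\begin{verbatim}
# Minimal helper mapping a physical state to the dimensionless variables
# defined above.  All parameter values below are illustrative placeholders.
import numpy as np

V10, V20, lam_phi, lam_sig = 1.0, 1.0, 1.0, 2.0   # assumed constants
f0, g0, alpha, eta, beta   = 1.0, 1.0, 1.0, 1.0, 1.0

V1 = lambda phi:   V10 * np.exp(-lam_phi * phi)   # exponential potentials
V2 = lambda sigma: V20 * np.exp(-lam_sig * sigma)
f  = lambda phi:   f0 * np.exp(alpha * phi)       # exponential couplings
g  = lambda sigma: g0 * np.exp(eta * sigma)

def dimensionless(H, phi, dphi, sigma, dsigma):
    """Return (x1, x2, y1, y2, z1, z2) for a given physical state."""
    x1 = dphi   / (np.sqrt(3.0) * H)
    x2 = dsigma / (np.sqrt(3.0) * H)
    y1 = np.sqrt(V1(phi))   / (np.sqrt(3.0) * H)
    y2 = np.sqrt(V2(sigma)) / (np.sqrt(3.0) * H)
    z1 = 2.0 * beta * f(phi)   * H**4
    z2 = 2.0 * beta * g(sigma) * H**4
    return x1, x2, y1, y2, z1, z2
\end{verbatim}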
In this case we shall obtain the following autonomous system: \begin{equation} \label{eq:dn1} \frac{dx_1}{dN}=-x_1 \frac{\dot{H}}{H^2}+\frac{\ddot{\phi}}{\sqrt{3}H^2}, \end{equation} \begin{equation} \frac{dx_2}{dN}=-x_2 \frac{\dot{H}}{H^2}+\frac{\ddot{\sigma}}{\sqrt{3}H^2}, \end{equation} \begin{equation} \frac{dy_1}{dN}=-y_1 \frac{\dot{H}}{H^2}-\frac{\sqrt{3}}{2} \lambda_{\phi} x_1 y_1, \end{equation} \begin{equation} \frac{dy_2}{dN}=-y_2 \frac{\dot{H}}{H^2}-\frac{\sqrt{3}}{2} \lambda_{\sigma} x_2 y_2, \end{equation} \begin{equation} \frac{dz_1}{dN}=4 z_1 \frac{\dot{H}}{H^2}+\sqrt{3} \alpha x_1 z_1, \end{equation} \begin{equation} \label{eq:dn2} \frac{dz_2}{dN}=4 z_2 \frac{\dot{H}}{H^2}+\sqrt{3} \eta x_2 z_2. \end{equation} Then, the Friedmann constraint equation reduces to: \begin{equation} s=3 \sqrt{3} \alpha x_1 z_1+3 \sqrt{3} \eta x_2 z_2-\frac{x_1^2}{2}+\frac{x_2^2}{2}-y_1^2-y_2^2-z_1-z_2+1. \end{equation} In order to close the autonomous system we need to rewrite the Klein--Gordon equations in terms of auxiliary variables, obtaining: \begin{equation} \ddot{\phi}=-3 \sqrt{3} H^2 x_1+3 H^2 y_1^2 \lambda _{\phi }+6 \alpha H^2 z_1+9 \alpha z_1 \dot{H}, \end{equation} \begin{equation} \ddot{\sigma}=-3 \sqrt{3} H^2 x_2-3 H^2 y_2^2 \lambda _{\sigma }-6 \eta H^2 z_2-9 \eta z_2 \dot{H}. \end{equation} Lastly, the second Friedmann equation (the acceleration relation) reduces to the following expression: \begin{multline} -3 H^2-2 \dot{H}=3 H^2 s w_m+9 \alpha ^2 H^2 x_1^2 z_1+6 \sqrt{3} \alpha H^2 x_1 z_1 \\+9 \eta ^2 H^2 x_2^2 z_2+6 \sqrt{3} \eta H^2 x_2 z_2+\frac{3}{2} H^2 x_1^2-\frac{3}{2} H^2 x_2^2-3 H^2 y_1^2 \\-3 H^2 y_2^2-3 H^2 z_1-3 H^2 z_2+12 \sqrt{3} \alpha x_1 z_1 \dot{H}+12 \sqrt{3} \eta x_2 z_2 \dot{H} \\-6 z_1 \dot{H}-6 z_2 \dot{H}+3 \alpha z_1 \ddot{\phi}+3 \eta z_2 \ddot{\sigma}, \end{multline} closing the system of differential equations in an autonomous manner. For the system of equations \eqref{eq:dn1}--\eqref{eq:dn2} we have identified various critical points which are associated to different cosmological epochs. In what follows we shall discuss the obtained results and the corresponding dynamical features. \par The first critical point represents the origin of the phase space structure, located at the following coordinates: \begin{equation} O:=[x_1=0,x_2=0,y_1=0,y_2=0,z_1=0,z_2=0], \end{equation} where the total equation of state corresponds to a matter era ($w_{tot}=w_m, s=1$), with the following eigenvalues: \begin{multline} \Big[ \frac{3}{2} \left(w_m-1\right),\frac{3}{2} \left(w_m-1\right),-6 \left(w_m+1\right),-6 \left(w_m+1\right), \\ \frac{3}{2} \left(w_m+1\right),\frac{3}{2} \left(w_m+1\right) \Big]. \end{multline} In the case where $w_m \to 0$ the solution is associated to a saddle dynamical behavior, describing the late matter epoch. \par The second solution is located at the following coordinates: \begin{equation} P_1^{\pm}:=[x_2= \pm \sqrt{x_1^2-2},y_1= 0,y_2= 0,z_1= 0,z_2= 0], \end{equation} describing a critical line which is related to the kinetic variables of the two scalar fields. This solution appears also in the minimally coupled case, describing a stiff--fluid solution ($w_{tot}=1, s=0$), a particular representation which is not of great interest nowadays. The corresponding eigenvalues are the following: \begin{multline} \Big[ 0,3-3 w_m,\sqrt{3} \alpha x_1-12,3-\frac{1}{2} \sqrt{3} x_1 \lambda _{\phi },\pm \sqrt{3} \eta \sqrt{x_1^2-2}-12, \\ \mp \frac{1}{2} \sqrt{3} \sqrt{x_1^2-2} \lambda _{\sigma }+3 \Big].
\end{multline} Since the second eigenvalue is always positive when $w_m \to 0$ this solution cannot be stable, it is either saddle or unstable. Moreover, the solution is non-hyperbolic due to the existence of a zero eigenvalue. Due to this aspect, the linear stability theory can be used only to constrain and study the saddle characteristics, a feature which is particular for two-field dark energy models. \par The third cosmological solution is represented by a de--Sitter ($w_{tot}=-1, s=0$) epoch located at the following coordinates: \begin{multline} P_2:=[x_1=0,x_2=0,y_1=\frac{\sqrt{2} \sqrt{-\alpha } \sqrt{z_1}}{\sqrt{\lambda _{\phi }}}, \\ y_2=\frac{\sqrt{2} \sqrt{\eta \lambda _{\phi }+2 \alpha \eta z_1-\eta z_1 \lambda _{\phi }}}{\sqrt{2 \eta \lambda _{\phi }-\lambda _{\sigma } \lambda _{\phi }}}, \\ z_2=\frac{-\lambda _{\sigma } \lambda _{\phi }-2 \alpha z_1 \lambda _{\sigma }+z_1 \lambda _{\sigma } \lambda _{\phi }}{\lambda _{\phi } \left(2 \eta -\lambda _{\sigma }\right)}], \end{multline} describing a cosmological solution where the two dark energy fields are at rest without any kinetic energy. The dynamics is driven by the potential energy terms, and the strength of the cubic couplings embedded into $\alpha, \eta$ coefficients. However, due to the high complexity of the phase space structure, we haven't been able to find the corresponding eigenvalues in the most general case. Hence, in order to check the stability of the cosmological solution, we have set the parameters to the following values: $w_m=0, \alpha=-1, \eta=1, \lambda_{\phi}=1, \lambda_{\sigma}=1, z_1=1$, obtaining the eigenvalues $\approx [0.,-3.,-3.95+0.85 i,0.95\, -0.85 i,-3.95-0.85 i,0.95\, +0.85 i]$ which are associated to a saddle dynamical behavior. In this case the accelerated expansion can be associated to this cosmological solution, describing a possible early dark energy where the fields are asymptotically frozen, without kinetic energy. \par Next, the $P_3^{\pm}$ critical points are located at the coordinates: \begin{multline} P_3^{\pm}:=[x_1=\pm \frac{\sqrt{2} \sqrt{\eta ^2+24}}{\eta }, x_2=\frac{4 \sqrt{3}}{\eta },y_1= 0,y_2= 0, \\ z_1= 0,z_2= 0], \end{multline} with the corresponding eigenvalues: \begin{multline} \Big[ 0,0,3-3 w_m,\pm \frac{\sqrt{6} \alpha \sqrt{\eta ^2+24}}{\eta }-12,3-\frac{6 \lambda _{\sigma }}{\eta }, \\ 3\mp \frac{\sqrt{\frac{3}{2}} \sqrt{\eta ^2+24} \lambda _{\phi }}{\eta } \Big]. \end{multline} These solutions are associated to a stiff--fluid case ($w_{tot}=1, s=0$) where the dark energy fluid completely dominates in terms of density parameters. As previously stated, these solutions are not of great interest for the modern cosmology, representing a not so feasible epoch driven only by the kinetic energies of the two scalar fields. \par Furthermore, the $P_{4}^{\pm}$ solutions are located in the phase space structure at the coordinates: \begin{multline} P_4^{\pm}:=[x_1=\pm \frac{\sqrt{2} \sqrt{\lambda _{\sigma }^2+6}}{\lambda _{\sigma }}, x_2=\frac{2 \sqrt{3}}{\lambda _{\sigma }},y_1= 0,y_2= 0, \\ z_1= 0,z_2= 0], \end{multline} where the corresponding eigenvalues are: \begin{multline} \Big[ 0,0,\frac{6 \eta }{\lambda _{\sigma }}-12,3-3 w_m,\pm \frac{\sqrt{6} \alpha \sqrt{\lambda _{\sigma }^2+6}}{\lambda _{\sigma }}-12, \\ 3\mp\frac{\sqrt{\frac{3}{2}} \sqrt{\lambda _{\sigma }^2+6} \lambda _{\phi }}{\lambda _{\sigma }} \Big]. 
\end{multline} As in the previous case, these stiff--fluid solutions ($w_{tot}=1, s=0$) describe an era where the expansion is not accelerated, with the dynamics driven only by the kinetic energy of the scalar fields. \par The $P_5$ critical point located at: \begin{multline} P_5:=[x_1=0, x_2=-\frac{\lambda _{\sigma }}{\sqrt{3}},y_1=0, y_2=\frac{\sqrt{\lambda _{\sigma }^2+6}}{\sqrt{6}}, \\ z_1=0 ,z_2=0 ], \end{multline} represents a phantom solution where the effective equation of state is equal to: \begin{equation} w_{tot}=-\frac{\lambda _{\sigma }^2}{3}-1, \end{equation} with the density parameter of the matter component equal to zero, describing a super accelerated expansion. For this solution we have obtained the following eigenvalues: \begin{multline} \Big[ -\frac{\lambda _{\sigma }^2}{2},2 \lambda _{\sigma }^2,\lambda _{\sigma } \left(2 \lambda _{\sigma }-\eta \right), \\ -\frac{\lambda _{\sigma }^2}{2}-3,-\frac{\lambda _{\sigma }^2}{2}-3,-\lambda _{\sigma }^2-3 w_m-3 \Big], \end{multline} corresponding to an era which is always saddle from a dynamical perspective. \par Next, the $P_6$ critical point (in the case when $w_m=0$) is located at the coordinates: \begin{multline*} P_6:=[x_1=\frac{\sqrt{3}}{\lambda _{\phi }}, x_2=\frac{2 \sqrt{3}}{\eta }, \\ y_1=\frac{\sqrt{\frac{3}{2}} \sqrt{\left(\left(\eta ^2-4\right) \lambda _{\phi }^2+\eta ^2\right) \left(5 \eta ^2 \left(5 \lambda _{\phi }^2+12\right)-366 \lambda _{\phi }^2\right)}}{\sqrt{\lambda _{\phi }^2 \left(\left(\eta ^2-4\right) \lambda _{\phi }^2+\eta ^2\right) \left(5 \eta ^2 \left(5 \lambda _{\phi }^2+12\right)-366 \lambda _{\phi }^2\right)}}, \\ y_2=0,z_1=0 ,z_2=\frac{6}{5 \eta ^2} ] \end{multline*} represents a scaling solution \cite{Uzan:1999ch, Marciu:2017sji} where the effective equation of state is equal to: \begin{equation} w_{tot}=w_m, \end{equation} with the density parameter of the matter component: \begin{equation} s=\frac{132}{5 \eta ^2}-\frac{3}{\lambda _{\phi }^2}+1. \end{equation} For this solution we have obtained the following eigenvalues: \begin{equation} \Big[\frac{3 \left(\eta -2 \lambda _{\sigma }\right)}{2 \eta },-\frac{3 \left(2 \lambda _{\phi }-\alpha \right)}{\lambda _{\phi }} ,X_3, X_4, X_5, X_6 \Big]. \end{equation} In this case the last eigenvalues ($X_3 - X_6$) are not written in the manuscript due to the high complexity of the formulas involved. This scaling solution is important in the phase space structure since it can explain the matter epoch. As can be noted, the kinetic energies of the two fields are set to very specific values, while the potential energy of the $\phi$ field is influenced by $\lambda_{\phi}$ and $\eta$ parameters. Moreover, the strength of the cubic coupling for the $\sigma$ field is affected by the values of the $\eta$ constant parameter in a self--interacting manner. In Fig.~\ref{fig:1} we have displayed a possible saddle region for the $P_6$ critical point by analyzing the signs of the first and second eigenvalues, considering that the first eigenvalue is positive, while the second eigenvalue is negative. As can be suspected, this analysis is not exclusive and only displays a possible region of interest associated to a saddle dynamical behavior where the two scalar fields are triggering a matter epoch, representing a scaling solution. 
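\par Before proceeding to the remaining critical points, we note that the autonomous system \eqref{eq:dn1}--\eqref{eq:dn2} can also be explored numerically. The Python/SciPy sketch below is a minimal illustration, not the code used for the figures of this paper: it solves the acceleration relation for $\dot{H}/H^2$ at each step (the relation is linear in $\dot{H}/H^2$ once the Klein--Gordon accelerations are substituted), integrates the system for illustrative parameter values and initial conditions, and cross-checks the eigenvalues quoted above for the origin $O$ by means of a finite--difference Jacobian.
\begin{verbatim}
# Minimal numerical sketch of the autonomous system above.
# Parameter values and initial conditions are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

SQ3 = np.sqrt(3.0)

def rhs(N, u, wm, lam_phi, lam_sig, alpha, eta):
    x1, x2, y1, y2, z1, z2 = u
    # Friedmann constraint (matter density parameter)
    s = (3*SQ3*alpha*x1*z1 + 3*SQ3*eta*x2*z2
         - 0.5*x1**2 + 0.5*x2**2 - y1**2 - y2**2 - z1 - z2 + 1.0)
    # Klein--Gordon accelerations split as (..)/H^2 = a + b*(Hdot/H^2)
    a_phi = -3*SQ3*x1 + 3*lam_phi*y1**2 + 6*alpha*z1
    a_sig = -3*SQ3*x2 - 3*lam_sig*y2**2 - 6*eta*z2
    # acceleration relation, linear in h = Hdot/H^2:  -3 - 2h = A0 + B0*h
    A0 = (3*s*wm + 9*alpha**2*x1**2*z1 + 6*SQ3*alpha*x1*z1
          + 9*eta**2*x2**2*z2 + 6*SQ3*eta*x2*z2
          + 1.5*x1**2 - 1.5*x2**2 - 3*y1**2 - 3*y2**2 - 3*z1 - 3*z2
          + 3*alpha*z1*a_phi + 3*eta*z2*a_sig)
    B0 = (12*SQ3*alpha*x1*z1 + 12*SQ3*eta*x2*z2 - 6*z1 - 6*z2
          + 27*alpha**2*z1**2 - 27*eta**2*z2**2)
    h = -(A0 + 3.0) / (B0 + 2.0)
    ddphi = a_phi + 9*alpha*z1*h      # \ddot{phi}/H^2
    ddsig = a_sig - 9*eta*z2*h        # \ddot{sigma}/H^2
    return [-x1*h + ddphi/SQ3,
            -x2*h + ddsig/SQ3,
            -y1*h - 0.5*SQ3*lam_phi*x1*y1,
            -y2*h - 0.5*SQ3*lam_sig*x2*y2,
             4*z1*h + SQ3*alpha*x1*z1,
             4*z2*h + SQ3*eta*x2*z2]

par = (0.0, 1.0, 1.0, 1.0, 1.0)              # (wm, lam_phi, lam_sig, alpha, eta)
u0  = [1e-3, 1e-3, 1e-4, 1e-4, 1e-6, 1e-6]   # illustrative initial conditions
sol = solve_ivp(rhs, (0.0, 6.0), u0, args=par, rtol=1e-8)
print(sol.y[:, -1])                          # final state of the trajectory

# Cross-check: finite-difference Jacobian eigenvalues at the origin O.
def jacobian(u, par, eps=1e-6):
    u, J = np.asarray(u, float), np.zeros((6, 6))
    for j in range(6):
        up, um = u.copy(), u.copy()
        up[j] += eps; um[j] -= eps
        J[:, j] = (np.array(rhs(0.0, up, *par)) -
                   np.array(rhs(0.0, um, *par))) / (2*eps)
    return J

print(np.sort(np.linalg.eigvals(jacobian(np.zeros(6), par)).real))
# for w_m = 0 this should reproduce {-6, -6, -3/2, -3/2, 3/2, 3/2},
# in agreement with the analytic eigenvalues quoted for O above
\end{verbatim}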
\par The $P_7$ solution, located at the coordinates: \begin{multline*} P_7:=[x_1=0,x_2=0, y_1=0, y_2=\frac{\sqrt{2} \sqrt{\eta }}{\sqrt{2 \eta -\lambda _{\sigma }}},z_1=0 , \\ z_2=\frac{\lambda _{\sigma }}{\lambda _{\sigma }-2 \eta } ], \end{multline*} corresponds to a de--Sitter era ($w_{tot}=-1, s=0$) where the two scalar fields are at rest without any kinetic energy, while the dynamics is driven by the potential energy of the $\sigma$ field and its cubic coupling term. The corresponding eigenvalues are $[0,0,-3, X_4, X_5, X_6]$, where the last three terms are not displayed due to the complicated expressions involved. As can be observed, this critical point is non--hyperbolic and can only be analyzed for its saddle behavior using the linear stability theory. For this solution, we have displayed in Fig.~\ref{fig:2} an interval associated to a saddle dynamical behavior, considering also the existence conditions which require that the $y_2$ variable is real and positive. This behavior mimics the dynamics of a cosmological constant and is expected to drive the evolution of the Universe in the future. \par The $P_8^{\pm}$ solutions are found in the phase space structure at: \begin{multline} P_8^{\pm}:=[x_1=\frac{4 \sqrt{3}}{\alpha },x_2=\pm \frac{\sqrt{2} \sqrt{24-\alpha ^2}}{\alpha }, \\ y_1=0, y_2=0,z_1=0 , z_2=0 ], \end{multline} driving a stiff--fluid scenario ($w_{tot}=1, s=0$), with the eigenvalues: \begin{multline} \Big[0,0,3-3 w_m,\pm \frac{\sqrt{6} \sqrt{24-\alpha ^2} \eta }{\alpha }-12, 3\mp \frac{\sqrt{36-\frac{3 \alpha ^2}{2}} \lambda _{\sigma }}{\alpha }, \\ 3-\frac{6 \lambda _{\phi }}{\alpha } \Big]. \end{multline} \par The next solution $P_9$ represents a scaling cosmological epoch ($w_{tot}=w_m$) located at the coordinates: \begin{multline} P_9:=[x_1=\frac{2 \sqrt{3} \left(w_m+1\right)}{\alpha },x_2=\frac{2 \sqrt{3} \left(w_m+1\right)}{\eta }, \\ y_1=0, y_2=0,z_1=\frac{6 \left(w_m^2-1\right)}{\alpha ^2 \left(9 w_m+5\right)} , z_2=-\frac{6 \left(w_m^2-1\right)}{\eta ^2 \left(9 w_m+5\right)} ], \end{multline} with the matter density parameter equal to: \begin{equation} \resizebox{0.5\textwidth}{!}{$s=\frac{\alpha ^2 \left(5 \eta ^2+3 \left(3 \eta ^2+74\right) w_m-54 w_m^3+36 w_m^2+132\right)+6 \eta ^2 \left(9 w_m^3-6 w_m^2-37 w_m-22\right)}{\alpha ^2 \eta ^2 \left(9 w_m+5\right)}.$} \end{equation} It can be observed that for this solution the potential energy terms are absent, with the dynamics driven by the kinetic energies and the cubic components of the two scalar fields. In principle, the location in the phase space structure and the physical properties are affected by the matter component through the barotropic equation of state, and by the two cubic couplings, embedded in the $\eta$ and $\alpha$ parameters. For this solution we have displayed in Fig.~\ref{fig:3} a possible contour where the value of the matter density parameter is set to $s=0.3$, obtaining various values of the associated parameters.
The eigenvalues corresponding to this solution are the following (assuming a pressure--less matter component): \begin{strip} \begin{multline} \Big[-\frac{3 \alpha \eta \left(\alpha ^2 \left(25 \eta ^2+504\right)-504 \eta ^2\right)^2+3 \sqrt{17} \sqrt{\alpha ^2 \eta ^2 \left(\alpha ^2 \left(25 \eta ^2+504\right)-504 \eta ^2\right)^4}}{4 \alpha \eta \left(\alpha ^2 \left(25 \eta ^2+504\right)-504 \eta ^2\right)^2}, \\ \frac{3 \sqrt{17} \sqrt{\alpha ^2 \eta ^2 \left(\alpha ^2 \left(25 \eta ^2+504\right)-504 \eta ^2\right)^4}-3 \alpha \eta \left(\alpha ^2 \left(25 \eta ^2+504\right)-504 \eta ^2\right)^2}{4 \alpha \eta \left(\alpha ^2 \left(25 \eta ^2+504\right)-504 \eta ^2\right)^2}, \\ -\frac{3 \alpha \eta \left(\alpha ^2 \left(25 \eta ^2+504\right)-504 \eta ^2\right)^2+3 \sqrt{\alpha ^2 \eta ^2 \left(\alpha ^2 \left(25 \eta ^2+504\right)-504 \eta ^2\right)^3 \left(\alpha ^2 \left(425 \eta ^2+11064\right)-11064 \eta ^2\right)}}{4 \alpha \eta \left(\alpha ^2 \left(25 \eta ^2+504\right)-504 \eta ^2\right)^2}, \\ \frac{3 \sqrt{\alpha ^2 \eta ^2 \left(\alpha ^2 \left(25 \eta ^2+504\right)-504 \eta ^2\right)^3 \left(\alpha ^2 \left(425 \eta ^2+11064\right)-11064 \eta ^2\right)}-3 \alpha \eta \left(\alpha ^2 \left(25 \eta ^2+504\right)-504 \eta ^2\right)^2}{4 \alpha \eta \left(\alpha ^2 \left(25 \eta ^2+504\right)-504 \eta ^2\right)^2}, \\ \frac{3}{2}-\frac{3 \lambda _{\sigma }}{\eta },\frac{3}{2}-\frac{3 \lambda _{\phi }}{\alpha } \Big]. \end{multline} \end{strip} We have displayed in Fig.~\ref{fig:4} a possible region of interest for the $P_9$ solution associated to a saddle dynamical behavior which takes into account also the existence conditions. \par The next critical point $P_{10}$ also represents a stiff--fluid solution ($w_{tot}=1, s=0$) found at the coordinates: \begin{multline} P_{10}^{\pm}:=[x_1=\frac{2 \sqrt{3}}{\lambda_{\phi}},x_2=\pm \frac{\sqrt{2} \sqrt{6-\lambda _{\phi }^2}}{\lambda _{\phi }}, \\ y_1=0, y_2=0,z_1=0 , z_2=0 ], \end{multline} with the following eigenvalues (in the case of $P_{10}^{+}$): \begin{multline} \Big[0,0,\frac{6 \alpha }{\lambda _{\phi }}-12,3-3 w_m,\frac{\sqrt{6} \eta \sqrt{6-\lambda _{\phi }^2}}{\lambda _{\phi }}-12, \\ 3-\frac{\lambda _{\sigma } \sqrt{9-\frac{3 \lambda _{\phi }^2}{2}}}{\lambda _{\phi }} \Big]. \end{multline} In the case of a pressure--less matter component this solution cannot be stable; it is either saddle or unstable, and hence not very relevant from a cosmological point of view. \par The $P_{11}$ solution is driven by the canonical field $\phi$, \begin{multline} P_{11}:=[x_1=\frac{\sqrt{3} \left(w_m+1\right)}{\lambda _{\phi }},x_2=0, \\ y_1=\frac{\sqrt{\frac{3}{2}} \sqrt{1-w_m^2}}{\lambda _{\phi }}, y_2=0,z_1=0 , z_2=0 ], \end{multline} acting as a saddle matter epoch ($w_{tot}=w_m$), $s=\frac{\lambda _{\phi }^2-3 w_m-3}{\lambda _{\phi }^2}$, with the following eigenvalues: \tiny \begin{multline} \Big[\frac{3}{2} \left(w_m-1\right),-6 \left(w_m+1\right),\frac{3}{2} \left(w_m+1\right),\frac{3 \left(w_m+1\right) \left(\alpha -2 \lambda _{\phi }\right)}{\lambda _{\phi }}, \\ \frac{3}{4} \left(-\frac{\sqrt{\lambda _{\phi }^{10} \left(w_m-1\right) \left(\lambda _{\phi }^2 \left(9 w_m+7\right)-24 \left(w_m+1\right){}^2\right)}}{\lambda _{\phi }^6}+w_m-1\right), \\ \frac{3 \left(\lambda _{\phi }^6 \left(w_m-1\right)+\sqrt{\lambda _{\phi }^{10} \left(w_m-1\right) \left(\lambda _{\phi }^2 \left(9 w_m+7\right)-24 \left(w_m+1\right){}^2\right)}\right)}{4 \lambda _{\phi }^6} \Big].
\end{multline} \normalsize At this solution the kinetic energy of the canonical field $\phi$ is influenced by the matter equation of state, and by the strength of the potential energy term, encoded into the $\lambda_{\phi}$ coefficient. As can be seen, this solution is always saddle. The evolution near the $P_{11}$ critical point is represented in Fig.~\ref{fig:p11} for specific initial conditions, showing the dynamics in the $O x_1 y_1$ plane. \par Furthermore, the $P_{12}$ solution, located at the coordinates: \begin{multline} P_{12}:=[x_1=\frac{\lambda _{\phi }}{\sqrt{3}},x_2=0, \\ y_1=\frac{\sqrt{6-\lambda _{\phi }^2}}{\sqrt{6}}, y_2=0,z_1=0 , z_2=0 ], \end{multline} describes a possible epoch where the acceleration of the Universe corresponds to a quintessence regime since \begin{equation} w_{tot}=\frac{\lambda _{\phi }^2}{3}-1, s=0. \end{equation} From a dynamical point of view we have obtained the following eigenvalues: \begin{multline} \Big[-2 \lambda _{\phi }^2,\frac{\lambda _{\phi }^2}{2},\lambda _{\phi } \left(\alpha -2 \lambda _{\phi }\right), \\ \lambda _{\phi }^2-3 w_m-3,\frac{1}{2} \left(\lambda _{\phi }^2-6\right),\frac{1}{2} \left(\lambda _{\phi }^2-6\right) \Big], \end{multline} indicating an era which is always saddle, affected mainly by the strength of the potential energy of the canonical scalar field. \par Next, the $P_{13}$ critical point, located at the coordinates: \begin{multline} P_{13}:=[x_1=\frac{2 \sqrt{3} \left(w_m+1\right)}{\alpha },x_2=0, \\ y_1=0, y_2=0,z_1=\frac{6 \left(w_m^2-1\right)}{\alpha ^2 \left(9 w_m+5\right)} , z_2=0 ], \end{multline} describes a scaling solution ($w_{tot}=w_m$) triggered by the kinetic energy of the canonical scalar field $\phi$ and the associated cubic component, with the matter density parameter equal to: \begin{equation} s=\frac{5 \alpha ^2+3 \left(3 \alpha ^2-74\right) w_m+54 w_m^3-36 w_m^2-132}{\alpha ^2 \left(9 w_m+5\right)}. \end{equation} If we assume the case of a pressure--less matter component ($w_m=0$), then we obtain the following eigenvalues: \tiny \begin{multline} \Big[-\frac{3}{2},-6,\frac{3}{2}, \\ -\frac{3 \left(\alpha \left(504-25 \alpha ^2\right)^2+\sqrt{\alpha ^2 \left(25 \alpha ^2-504\right)^3 \left(425 \alpha ^2-11064\right)}\right)}{4 \alpha \left(504-25 \alpha ^2\right)^2}, \\ \frac{3 \left(\sqrt{\alpha ^2 \left(25 \alpha ^2-504\right)^3 \left(425 \alpha ^2-11064\right)}-\alpha \left(504-25 \alpha ^2\right)^2\right)}{4 \alpha \left(504-25 \alpha ^2\right)^2},\frac{3}{2}-\frac{3 \lambda _{\phi }}{\alpha } \Big], \end{multline} \normalsize corresponding to a solution which is always saddle from a dynamical point of view. For this solution we have displayed in Fig.~\ref{fig:5} a specific region where the matter density parameter $s$ satisfies the existence conditions, showing the viability of such a scaling behavior. \par Lastly, the $P_{14}$ solution, found at the coordinates: \begin{multline} P_{14}:=[x_1=0,x_2=0, \\ y_1=\frac{\sqrt{2} \sqrt{\alpha }}{\sqrt{2 \alpha -\lambda _{\phi }}}, y_2=0,z_1=\frac{\lambda _{\phi }}{\lambda _{\phi }-2 \alpha } , z_2=0 ], \end{multline} is associated to a de--Sitter epoch ($w_{tot}=-1, s=0$) where the two scalar fields are frozen. The dynamics is affected mainly by the potential energy of the canonical scalar field $\phi$, and by the specific cubic component. This solution is similar to the $P_7$ critical point, representing a non--hyperbolic case where two eigenvalues are equal to zero.
Since the expressions for the last eigenvalues are very complex, they are not displayed in the manuscript. As in the previous case, we show in Fig.~\ref{fig:6} a specific region of interest where such a cosmological solution has a saddle dynamical behavior, acting closely as a cosmological constant. \section{Summary and Conclusions} \label{sec:3} \par In the present paper we have proposed a novel dark energy model in the theoretical framework of modified gravity theories. This model extends the fundamental Einstein--Hilbert action by adding two scalar fields non--minimally coupled in an independent manner to an invariant component which contains cubic contractions of the Riemann tensor. After proposing the action for the current cosmological model, we have obtained the modified Friedmann equations by taking the variation of the action with respect to the inverse metric. The remaining field equations, the Klein--Gordon relations, are obtained by taking the variation with respect to the two fields which are present in the proposed action. As expected, the cosmological model satisfies the continuity equation due to the specific form of the proposed action. The dynamical features of the model are investigated by adopting the linear stability theory, considering that the two fields describe a quintom scenario, where one scalar field has a positive kinetic term, while the second field is associated to a non--canonical behavior, with a negative kinetic component, violating various physical principles from a classical point of view. From a theoretical point of view the quintom models are associated to a non--classical behavior as effective approaches in the modified gravity theories, being supported by various observations in the recent past. The analysis showed that the phase space structure is very rich in terms of stationary points, a complexity which can be associated to various stages in the evolution of the Universe. In this regard, we have obtained different classes of critical points which are associated to various cosmological eras. The first class of stationary points is represented by the stiff--fluid solutions, a category which is not very interesting from a cosmological point of view. A second class of stationary points is represented by the de--Sitter solutions, a specific category which can explain the accelerated expansion where the model behaves as a cosmological constant. In this case the constant equation of state corresponds to $-1$ and can explain the late time evolution of the total equation of state. The third class of stationary points is associated to the scaling behavior where the effective equation of state corresponds to a matter epoch, a special type of solutions which can explain the existence of the matter epoch without fine--tuning. The scaling solutions found in the analysis are represented by an epoch characterized by a constant equation of state where the model behaves as a matter fluid, a dynamics driven by one or two scalar fields. In this case, the physical features are influenced by the couplings with the specific invariant which contains the third order contractions of the Riemann tensor, leading to particular scaling solutions in the current cosmological setup. From a cosmological point of view the existence of this type of solutions in the phase space structure is very important for the viability of the corresponding models.
The fourth class of solutions is associated to a quintessence or phantom behavior and can delineate the late time acceleration of the Universe by fine--tuning various parameters of the present model. These solutions can explain the quintessence--like and the phantom--like eras, as well as the possible crossing of the phantom divide line in the recent past. From the above discussion it can be noted that the present cosmological model can explain various stages in the evolution of our Universe, explaining both early and late time evolution in the phase space structure, a viable model from a theoretical point of view at the level of background dynamics. \begin{acknowledgements} For the development of the present manuscript we have considered analytical computations in $Mathematica$ \cite{mathematica} and $xAct$ \cite{xact}. \end{acknowledgements}
\section{Introduction} The magnetic fields of binary X-ray pulsars have been directly measured utilizing cyclotron resonance scattering features (hereafter CRSF) in their X-ray spectra (e.g., \cite{Makishima2016}). The magnetic field strength $B$ measured in this way is distributed in the range of $B/(1+{z_{\rm g}})= (1-7)\times 10^{12}$ G (\cite{Yamamoto2014} and references therein), where $z_{\rm g} \simeq 0.24$ is the gravitational redshift at the neutron-star (hereafter NS) surface. No accreting pulsars with $B\geq10^{13}$ G have yet been discovered. However, this could be due to a selection effect, in that the CRSF becomes difficult to detect at $\geq$ 100 keV (i.e., for $B\geq10^{13}$ G), where the observational sensitivity becomes progressively lower. In order to search for such accreting NSs with higher magnetic fields, we need to invoke other methods such as pulse timing analysis or modeling of continuum spectra \citep{Sasano2015}. Using the MAXI Gas Slit Camera (GSC) data and some past measurements, \citet{Takagi2016} investigated long-term relations between the pulse period derivative $\dot{P}$ and the fluxes of the binary X-ray pulsar 4U~1626-67. They adopted the accretion torque theory proposed by \citet{GL79} (hereafter GL79), which describes $\dot{P}$ as a function of the luminosity $L$, the pulse period $P$, the mass $M$, the radius $R$, and the surface magnetic field $B$ of the NS. Then, employing $B/(1+{z_{\rm g}})=3.2\times 10^{12}$ G measured with a CRSF \citep{Orland1998}, the GL79 model was confirmed to accurately explain the observed relation between $\dot{P}$ and $L$ of 4U 1626-67, including both the spin-up and spin-down phases. This means that accurate measurements of $\dot{P}$ over a sufficiently wide range of $L$ of a binary X-ray pulsar would conversely allow us to estimate its $B$. X~Persei (4U 0352+309) is a high-mass X-ray binary system, consisting of a Be-type star and a magnetized NS. Its pulse period of $P\sim835\ \rm s$, which is considerably longer than those of most other pulsars, was independently discovered by Ariel 5 and Copernicus \citep{White1976}. The distance to the source is estimated as $D=0.81\pm0.04$ kpc from its optical parallax derived from the GAIA astrometry \footnote{https://gea.esac.esa.int/archive/}. With X-ray observations, \citet{Delgado2001} determined the orbital period of X~Persei as $P_{\rm orb}=250.3 \pm 0.6$ d, and derived an eccentricity of $e\sim$0.11, which is considerably smaller than those of typical Be/X-ray binaries, $e\geq$0.3. X~Persei was spinning up at a rate of $\dot{P}\sim -1.5 \times 10^{-4}\ \rm yr^{-1}$ until 1978, when it turned into a phase of spin down at a rate of $\dot{P}\sim 1.3 \times 10^{-4}\ \rm yr^{-1}$ \citep{Delgado2001}. In 2002, the pulsar returned to the spin-up phase, and has been spinning up ever since. This spin up/down alternation implies that the source is close to a torque equilibrium, and hence the long $P$ suggests a strong magnetic field, because we then expect $B\sim L^{\frac{1}{2}}P^{\frac{7}{6}}$ \citep{Makishima2016}. In the 2--20 keV range, the spectrum of X~Persei can be fitted by a powerlaw model with an exponential cutoff \citep{DiSalvo1998}, which is typical of accreting X-ray pulsars. However, in the 20--100 keV band, the spectrum of X~Persei is much flatter and harder than those of other X-ray pulsars, and lacks a steep cutoff \citep{Sasano2015}.
\citet{Sasano2015} empirically quantified this shape of the spectrum in comparison with those of other pulsars (mostly with $B$ measured), and obtained an estimate of $B\sim10^{13}\ \rm G$. Although \citet{Coburn2001} noticed a spectral dip at $\sim 29$ keV in an RXTE/PCA spectrum, and regarded it as a CRSF corresponding to $B/(1+{z_{\rm g}})=2.5 \times 10^{12}\ \rm G$, the feature is too shallow and broad for that interpretation, and can be explained away by a combination of two continuum components without a local feature \citep{Doroshenko2012, Sasano2015}. Thus, the strong-field nature of X~Persei is suggested also by its spectral properties. In the present paper, we report the long-term continuous observations of the fluxes and $P$ of X~Persei by RXTE and MAXI. Applying the GL79 model to the derived $\dot{P}-L$ relation, we then estimate $B$ and $M$ of the NS in this binary. \section{Observation} \begin{figure*} \begin{center} \includegraphics[width=0.7\textwidth]{long_term_lc_in_unit_of_crab_90.ps} \end{center} \caption{Long term (1996--2017) light curves of X~Persei in 25-d bins, expressed in Crab units. Black filled circles and red open squares are data with the RXTE/ASM (1.5--12 keV) and the MAXI/GSC (2--20 keV), respectively.} \label{longlc} \end{figure*} \subsection{MAXI/GSC} Since August 2009, MAXI \citep{Matsuoka2009} on board the International Space Station (ISS) has been continuously scanning the X-ray sky every 92 min of the ISS orbital period. We derived the data of X~Persei from the MAXI/GSC \citep{Mihara2011, Sugizaki2011}, using the On-Demand System\footnote{https://maxi.riken.jp/mxondem/} provided by the MAXI team. This data-analysis scheme enables us to extract images, light curves, and energy spectra of a given source by specifying its coordinates and observation times. The on-source events and background events, both in 2--20 keV, were extracted using standard regions. Every 92 min, we thus obtained a background-subtracted data set (an energy spectrum), with an integration time of $\sim 60\ \rm s$ corresponding to a single scan transit of the source. Since this is sufficiently shorter than the pulse period of X~Persei ($\sim835\ \rm s$), the data obtained with a single scan are just a snapshot at a certain pulse phase. These data sets have been combined into a 2--20 keV light curve from MJD 55054 to MJD 58049, and into 12 energy spectra, each summed over a 250-d interval close to the orbital period. The 250-d averaged spectra have statistics high enough to yield accurate fluxes. \subsection{RXTE/ASM} The All-Sky Monitor (ASM; \cite{Levine1996}) on board the RXTE satellite \citep{Bradt1993} continuously monitored the whole X-ray sky from MJD 50087 to MJD 55927. For this entire period, we retrieved the RXTE/ASM dwell-time light curves of X~Persei in four energy ranges, namely, 1.5--12.0 keV, 1.5--3.0 keV, 3.0--5.0 keV, and 5.0--12.0 keV\footnote{https://xte.mit.edu/ASM\_lc.html/}. These light curves were used to obtain $P$ and the fluxes. \subsection{RXTE/PCA} In order to cross-check the validity of the fluxes of the RXTE/ASM and the MAXI/GSC, RXTE/PCA data were also used. Since the PCA data are generally too short and sparse for the accurate determination of $P$ and calculation of $\dot{P}$, these data were not used for the pulse-period analysis. Throughout the 16-year RXTE mission, a total of 156 pointing observations were performed on this source with the Proportional Counter Array (PCA; \cite{Jahoda2006}). Those PCA observations were made in the period from MJD 50161 to MJD 52687.
During this period, one of the PCA units, PCU 2, had longer exposure times than the others. We therefore used the data taken by PCU 2 only. An inspection of the operation status of PCU 2 indicated that some of the data sets were acquired under bad conditions (e.g., large offset angles). We discarded those data sets, and utilized 136 pointing observations (total exposure time of 640 ksec) to extract the spectra. \subsection{Long-term light curves} To investigate intensity variations of X~Persei, we have binned all the RXTE/ASM and MAXI/GSC data into light curves with 25-d bins. The derived long-term light curves from 1996 to 2017 are shown in figure \ref{longlc}, where the background-subtracted count rate is normalized to that of the Crab Nebula. Thus, X~Persei showed moderate X-ray intensity variations, up to a factor of 5--6. The intensity is apparently not modulated at the orbital period, presumably because of the small eccentricity. Instead, in 2003, 2010, and 2017, the intensity increased to 50 mCrab and dropped to 25 mCrab in one orbital period, suggesting a 7-year super-orbital periodicity. \section{Analysis and results} In order to calculate $\dot{P}$, it is necessary to first determine $P$ accurately, with a reasonably dense sampling. For this purpose, the times of individual light-curve bins were converted to those to be measured at the solar center; this heliocentric correction, instead of the more complex barycentric correction, is sufficient for the present purpose because of the long pulse period. Then, we performed time corrections for the binary orbital motion of X~Persei, assuming the circular orbit parameters from Table 2 in \citet{Delgado2001}: $P_{\rm orb}=249.9$ d, the epoch of $90^\circ$ mean orbital longitude $T_{\pi/2} = 51215.5$ (MJD), and the projected semi-major axis $a_{\rm x} \mathrm{\,sin}\,i =454$ lt-s. The small eccentricity (e$\sim$0.11) affects the value of $P$ at most by $\pm0.010$ s in the 70-d analysis (Section 3.1) and $\pm0.001$ s in the 250-d analysis (Section 3.2); these are smaller than the typical errors of the $P$ measurements. \subsection{Refinement of the orbital period} \begin{figure*} \begin{center} \includegraphics[width=8cm]{p_70days_representative_4point_90.ps} \includegraphics[width=8cm]{folded_residuals_representative_4point_90.ps} \end{center} \caption{(a) Pulse periods of X~Persei determined every 70 d with the MAXI/GSC. Colors specify different values of $P_{\rm orb}$ employed to correct the pulse arrival times for the orbital motion of the NS. (b) Pulse-period residuals, obtained from panel (a) by subtracting a linear trend and folding at the assumed $P_{\rm orb}$.} \label{pulse70d} \end{figure*} As a preliminary attempt to confirm the arrival-time corrections, we determined the pulse period every 70 d for MJD 55450--57020, when the luminosity stayed relatively constant. The epoch folding method described in section 3.2 was employed, assuming $\dot{P}=-7\times10^{-9}\ \rm s\ s^{-1}$, the average value of $\dot{P}$ during this time period. The result is shown with black points in figure \ref{pulse70d}. Although the values of $P$ confirm the spin-up trend in general, the data points show wiggling behavior around the trend, with a typical period of 4 data points, or $\sim280$ d. Since the flux in figure \ref{longlc} is not modulated at this period, the effect is unlikely to be caused by accretion-torque changes.
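A minimal Python sketch of the circular-orbit arrival-time correction described above is given below. It is purely illustrative: the sign and phase convention assumes that the Roemer delay is maximal at $T_{\pi/2}$ (superior conjunction of the NS), and should be verified against the definitions of \citet{Delgado2001} before any quantitative use; the heliocentric correction is presumed to have been applied already.
\begin{verbatim}
# Illustrative circular-orbit (Roemer delay) correction of arrival times.
import numpy as np

P_ORB   = 249.9 * 86400.0   # orbital period [s]
T_PI2   = 51215.5           # epoch of 90 deg mean orbital longitude [MJD]
AX_SINI = 454.0             # projected semi-major axis [lt-s]

def binary_correct(t_mjd):
    """Shift heliocentric times [MJD] to the binary barycentre
    (circular orbit, delay assumed maximal at T_PI2)."""
    phase = 2.0 * np.pi * (t_mjd - T_PI2) * 86400.0 / P_ORB
    delay = AX_SINI * np.cos(phase)      # light-travel delay [s]
    return t_mjd - delay / 86400.0
\end{verbatim}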
The wiggling behavior could instead be due to a slight discordance in the orbital phase, which in turn arose from the accumulation of the error of $P_{\rm orb}$ over the 12--16 years since the observation by \citet{Delgado2001}. We searched for a better orbital period as follows, assuming that $T_{\pi/2} = 51215.5$ (MJD) and $a_{\rm x} \mathrm{\,sin}\,i =454$ lt-s are correct. First, we repeated this analysis by changing the trial value of $P_{\rm orb}$ from 249.9 d to 251.8 d with a step of 0.3 d or finer. Then, from the pulse-period history obtained in this way, we subtracted a trend line represented by $\dot{P}=-7\times10^{-9}\ \rm s\ s^{-1}$. Finally, the residual pulse period was folded at the employed $P_{\rm orb}$. Black points in figure \ref{pulse70d} show the folded residuals assuming, for example, $P_{\rm orb}=249.9$ d, where large residuals make a constant-line fit unacceptable with $\chi^2=134$ (for d.o.f. = 3). The values of $\chi^2$ derived from this $P_{\rm orb}$ scan are plotted in figure \ref{orbchi}, and the orbital profiles of the residuals for four representative cases (including the one with $P_{\rm orb}=249.9$ d) are presented in figure \ref{pulse70d}. Requiring that the residual pulse-period modulation at the assumed $P_{\rm orb}$ should be minimized, the best orbital period was estimated as $P_{\rm orb}=251.0^{+0.2}_{-0.1}$ d ($1 \sigma$ error), which is still within the $2 \sigma$ error range given by \cite{Delgado2001}. We hereafter use this value for the binary orbital motion corrections. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{kidou_syuki_chi_value_90.ps} \end{center} \caption{The values of $\chi^2$ from a constant fit to the residual profiles such as shown in figure \ref{pulse70d}, presented as a function of the assumed $P_{\rm orb}$.} \label{orbchi} \end{figure} \subsection{Pulse timing analysis} \subsubsection{Measurements of $P$} Now that $P_{\rm orb}$ has successfully been updated, we employed the best orbital period in the correction of the binary motion. Then, we analyzed the entire RXTE/ASM and MAXI/GSC data of X~Persei via the standard epoch folding method \citep{Leahy1983}, to determine $P$ of X~Persei every 250 d. This time period, which is longer than that employed in Section 3.1 and is close to $P_{\rm orb}$, has been chosen to accurately estimate $\dot{P}$, which is important in the GL79 formula. The employed energy band is 1.5--12 keV for the RXTE/ASM and 2--20 keV for the MAXI/GSC. The number of bins of the folded pulse profile was chosen as 16, because one MAXI data point is an accumulation over about 60 s. We assumed a constant but free $\dot{P}$ during each 250-d time period, and searched for a pair ($P$, $\dot{P}$) that maximizes the $\chi^2$ of the folded 16-bin pulse profile against the constant-intensity hypothesis. An example of the $\chi^2$ map on the ($P$, $\dot{P}$) plane is shown in figure \ref{cont}(a), and the corresponding folded pulse profile using ($P$, $\dot{P}$) at the maximum $\chi^2$ is shown in figure \ref{cont}(b). The pulse profile has a sinusoidal shape, not only in this particular case, but also in other intervals. The derived values of $P$ are shown in figure \ref{fp}(b), together with the 250-d averaged 3--12 keV flux to be described in the next section. The 1$\sigma$ errors of $P$ were estimated with the Monte-Carlo method of \citet{Leahy1987}. As \citet{Lutovinov2012} and \citet{Acuner2014} reported, the pulsar changed from a spin-down phase to a spin-up phase around MJD 52000, when the X-ray flux started increasing.
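The essence of the ($P$, $\dot{P}$) epoch-folding search described above can be summarized by the following Python sketch. It is a simplified illustration rather than a description of our actual pipeline: the variable names, the Leahy-type error treatment, and the quadratic phase expansion (valid for a constant $\dot{P}$ over each interval) are all stated assumptions.
\begin{verbatim}
# Schematic epoch-folding chi^2 search over a (P, Pdot) grid, 16 phase bins.
import numpy as np

NBIN = 16

def fold_chi2(t_mjd, rate, err, P, Pdot, t0):
    """chi^2 of the folded profile against a constant rate, one trial (P, Pdot)."""
    dt    = (t_mjd - t0) * 86400.0                    # elapsed time [s]
    phase = (dt / P - 0.5 * Pdot * (dt / P)**2) % 1.0 # phase for constant Pdot
    idx   = np.minimum((phase * NBIN).astype(int), NBIN - 1)
    mean  = np.average(rate, weights=1.0 / err**2)
    chi2  = 0.0
    for j in range(NBIN):
        m = (idx == j)
        if m.sum() < 2:
            continue
        w     = 1.0 / err[m]**2
        rj    = np.sum(w * rate[m]) / np.sum(w)       # bin-averaged rate
        sj    = 1.0 / np.sqrt(np.sum(w))              # its standard error
        chi2 += (rj - mean)**2 / sj**2
    return chi2

def grid_search(t_mjd, rate, err, Pgrid, Pdotgrid, t0):
    """Return the (P, Pdot) pair maximizing chi^2, plus the full chi^2 map."""
    chi2 = np.array([[fold_chi2(t_mjd, rate, err, P, Pd, t0)
                      for Pd in Pdotgrid] for P in Pgrid])
    i, j = np.unravel_index(np.argmax(chi2), chi2.shape)
    return Pgrid[i], Pdotgrid[j], chi2
\end{verbatim}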
Since MJD 52000, X~Persei has remained in the spin-up phase, at least until MJD 58049. \subsubsection{Calculations of $\dot{P}$} Although $\dot{P}$ was thus estimated every 250 d, the calculation of $\dot{P}$ can also be done by taking the difference between two adjacent measurements of $P$. Let us denote the $\dot{P}$ from the $P$--$\dot{P}$ plane (e.g., figure \ref{cont}) as $\dot{P}_{\chi^2}$, and the $\dot{P}$ from the difference between adjacent $P$ measurements as $\dot{P}_{\rm diff}$. Then, as shown in figure \ref{fp}(c), $\dot{P}_{\rm diff}$ is consistent with $\dot{P}_{\chi^2}$, and has smaller uncertainty. Therefore, we employ $\dot{P}_{\rm diff}$ hereafter. \begin{figure*} \begin{center} \includegraphics[width=8cm]{cont_2-20keV_56425_2_1_90.ps} \includegraphics[width=8cm]{pulse_profile_2-20keV_56300-56549_90.ps} \end{center} \caption{(a) A color map of the 2--20 keV pulse significance in terms of $\chi^2$, for MJD 56300--56549, shown on the $P$--$\dot{P}$ plane. The best-fit values are $P=835.090 \pm 0.002\ \rm s$ and $\dot{P}=-(6.3\pm0.7)\times10^{-9}\ \rm s\ s^{-1}$. (b) A background-subtracted 2--20 keV pulse profile of X~Persei in MJD 56300--56549, folded with the ${\chi^2}$-maximum parameters.} \label{cont} \end{figure*} \subsection{Calculation of the energy flux} The energy flux of X~Persei at individual epochs can be derived from the MAXI/GSC and the RXTE/ASM data. In addition, a limited amount of data from the RXTE/PCA are utilized. In order to minimize systematic differences among the three instruments, we decided to use their common energy band, 3--12 keV. Below, we describe how the fluxes were derived with the three instruments. \subsubsection{The MAXI/GSC data} The 2--20 keV energy spectra of X~Persei with the MAXI/GSC were accumulated over the same 250-d time periods as in section 3.2, and fitted individually by a power-law with an exponential cutoff. The interstellar absorption was fixed to $N_{\rm H}=3.4\times10^{21}\,\rm cm^{-2}$ derived from an XMM-Newton observation \citep{Palombara2007}, because it cannot be determined independently with the MAXI/GSC. The spectral fits were acceptable in all the time periods, with reduced chi-squares of $\chi_{\rm \nu}^2 \sim 1$. We then calculated the unabsorbed 3--12 keV fluxes from the best-fit models. For example, the best-fit model for MJD 56300--56549 has a photon index of $\Gamma=0.67\pm 0.13$, a cutoff energy of $5.57\pm0.76\ \rm keV$, and a normalization of $(8.4\pm0.7)\times 10^{-2}$, with $\chi^2=119$ for 105 d.o.f. By analyzing all the MAXI/GSC spectra in the same way, the 2--20 keV fluxes were obtained as $(9-14)\times10^{-10}$ erg cm$^{-2}$ s$^{-1}$, and those in 3--12 keV as $(6-10)\times10^{-10}$ erg cm$^{-2}$ s$^{-1}$. The latter results are presented with red squares in figure \ref{fp}(a), in comparison with the $\dot{P}$ determinations made in subsection 3.2. (A correction factor of 0.78 was applied in order to match the data to the RXTE/ASM data; see Section 3.3.2.) In figure \ref{fp}(a), we can reconfirm the intensity variations seen in figure \ref{longlc}. \subsubsection{The RXTE/ASM data} By adding the dwell-time light curves in 3--5 keV and 5--12 keV, the 3--12 keV count rates averaged over the individual 250-d time periods were obtained. We converted the 3--12 keV count rates into 3--12 keV fluxes using the WebPIMMS system\footnote{https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3pimms/w3pimms.pl}.
Since this conversion requires the knowledge of the spectral shape of the source, we followed the results of the MAXI spectroscopy, and assumed a power-law spectrum with $\Gamma = 1.7$ and $N_{\rm H}=3.4\times10^{21}\,\rm cm^{-2}$ \citep{Palombara2007} in all the 250-d time periods. Although the MAXI spectra cover only the spin-up phase after 2009, our assumption on the spectral shape is justified by \citet{Lutovinov2012}, who analyzed the 2--20 keV RXTE/PCA data of X~Persei in both the spin-down and spin-up phases, and found no significant difference in the spectral shape. The unabsorbed fluxes thus estimated with the RXTE/ASM data are shown in figure \ref{fp}(a) by black filled circles. Over the time range when the MAXI and RXTE/ASM results overlap, we found that the MAXI fluxes are systematically higher, by $\sim25$\%, than those of the RXTE/ASM. Since the RXTE/ASM fluxes are consistent with the BeppoSAX observation \citep{DiSalvo1998} and the RXTE/PCA measurements described in the next section, the MAXI/GSC fluxes have been corrected to match the RXTE/ASM measurements by multiplying by a factor of 0.78. The investigation of this difference is beyond the scope of this work. \begin{figure*} \begin{center} \includegraphics[width=0.6\textwidth]{ronbun_flux_period_pdot_matome.ps} \end{center} \caption{(a) The 3--12 keV fluxes of X~Persei obtained with the RXTE/ASM (black circles), the MAXI/GSC (red squares), and the RXTE/PCA (yellow triangles). The MAXI/GSC fluxes have been converted as described in the text. (b) The intrinsic pulse period of X~Persei, after the heliocentric and binary orbit corrections. In both panels (a) and (b), one data point covers a 250-d time interval. (c) A comparison between $\dot{P}_{\chi^2}$ (blue diamonds and green triangles) and $\dot{P}_{\rm diff}$ (black and red).} \label{fp} \end{figure*} \subsubsection{The RXTE/PCA data} First, we investigated the individual spectra obtained by the 136 pointing observations. Although some of them have short exposure times ($\leq1$ ksec), these data were confirmed to have sufficient quality to analyze the 3--12 keV X-ray spectra. The PCU 2 spectra were fitted by a power-law model with an exponential cut-off, incorporating a low-energy absorption factor with $N_{\rm H}=3.4\times10^{21}\,\rm cm^{-2}$ (again fixed). The model successfully describes all the PCU 2 spectra. Next, we calculated the unabsorbed 3--12 keV fluxes of all these observations, and averaged them typically every $\sim 50$ d. The results are shown in figure \ref{fp}(a) with yellow triangles. Thus, the PCA and ASM results are consistent with each other, justifying the correction of the MAXI/GSC fluxes. As already noted in section 2.3, we do not utilize the PCA data in the pulse-timing studies. \subsubsection{Estimation of the bolometric flux} Since the luminosity used in the GL79 formalism is the bolometric value $L_{\rm bol}$, we converted the 3--12 keV fluxes to the bolometric ones, $F_{\rm bol}$. For this purpose, we employed the best-fit model to the 0.1--200 keV spectrum obtained with BeppoSAX in 1996 September (spin-down phase), and that in the 1--100 keV band obtained with Suzaku in 2012 September (spin-up phase), derived by \citet{DiSalvo1998} and \citet{Sasano2015}, respectively. The conversion factor from the 3--12 keV to 0.1--200 keV fluxes was calculated as 2.605 for BeppoSAX and 2.611 for Suzaku, respectively.
Since the two factors agree very well (to within $\sim0.2$\%), in agreement with \citet{Lutovinov2012}, we used a factor of 2.61 for both the spin-down and spin-up phases, and multiplied all the data points in figure \ref{fp}(a) by it. \subsubsection{The correlation between $\dot{P}$ and $F_{\rm bol}$} The derived $\dot{P}$ and $F_{\rm bol}$ values of X~Persei, spanning 22 years, are shown in figure \ref{GL}. Thus, we find a clear negative correlation between $\dot{P}$ and $F_{\rm bol}$ of X~Persei. Although a few data points are outlying, $\dot{P}$ basically behaves as a single-valued function of $F_{\rm bol}$ as the latter varies by a factor of $\sim 5$. After renormalizing the MAXI/GSC fluxes (section 3.3.2), the data points from the two instruments are generally consistent with one another. Furthermore, a single correlation appears to persist through the spin-up and spin-down phases, without exhibiting any drastic changes towards lower fluxes. Therefore, the system is considered to be still free from the so-called propeller effect, which might set in at very low accretion rates. Thus, the overall source behavior in figure \ref{GL} is very similar to the case of 4U~1626--67 studied by \citet{Takagi2016}, wherein the GL79 model has given a very successful explanation. \section{Application of the GL79 model} \subsection{The comparison between the observational results and GL79} The final step of our analysis is to fit the $\dot{P}$--$F_{\rm bol}$ relation in figure \ref{GL} with the GL79 model, to constrain the parameters of the NS in X~Persei: $M$, $R$, and $B$. We use the same procedure as described in Appendix 1 in \citet{Takagi2016}, as briefly reviewed below. According to GL79, $\dot{P}$ is described as \begin{equation} \dot{P}=-5.0 \times 10^{-5}\ \mu_{30}^{\frac{2}{7}}\ n(\omega_{\rm s})\ R_{6}^{\frac{6}{7}}\ \left(\frac{M}{M_{\odot}}\right)^{-\frac{3}{7}} I_{45}^{-1}\ P^{2}\ L_{37}^{\frac{6}{7}}\ {\rm \ s\ yr}^{-1}, \label{eqgl} \end{equation} where $\mu_{30}$, $R_{6}$, $M_{\odot}$, $I_{45}$, $P$, and $L_{37}$ are the magnetic dipole moment in units of $10^{30}{\rm\ G\ cm^{3}}$, $R$ in units of ${10^{6}\rm\ cm}$, the solar mass, the moment of inertia in units of ${10^{45}\rm\ g\ cm^2}$, the pulse period in units of s, and the luminosity in units of ${10^{37}\rm\ erg\ s^{-1}}$. The factor $n(\omega_{\rm s})$ describes corrections for the accretion torque exerted onto the NS from the accreting matter via the accretion disk. It is represented as a function of the fastness parameter, $\omega_{\rm s}$, which is the ratio of the angular velocity of the NS rotation to that of the Kepler rotation at the inner disk radius. In the GL79 model, $n(\omega_{\rm s})$ is approximately given by \begin{equation} n(\omega_{\rm s})\simeq1.39\{1-\omega_{\rm s}[4.03(1-\omega_{\rm s})^{0.173}-0.878]\}(1-\omega_{\rm s})^{-1}, \label{nomega} \end{equation} with \begin{equation} \omega_{\rm s}\sim1.35\ \mu_{30}^{\frac{6}{7}}\ R_{6}^{-\frac{3}{7}}\left(\frac{M}{M_{\odot}}\right)^{-\frac{2}{7}}P^{-1}\ L_{37}^{-\frac{3}{7}} \label{omega} \end{equation} in the accretion regime of $0\leq \omega_{\rm s} \leq0.9$. The spin-up and spin-down regimes correspond to $\omega_{\rm s} \leq0.349$ and $\omega_{\rm s}\geq0.349$, respectively, with $\omega_{\rm s}=0.349$ describing a torque equilibrium.
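For orientation, setting $\omega_{\rm s}$ to the equilibrium value of 0.349 in equation (3) and solving for $P$ gives an equilibrium period of
\[
P_{\rm eq}\simeq 3.9\ \mu_{30}^{\frac{6}{7}}\ R_{6}^{-\frac{3}{7}}\left(\frac{M}{M_{\odot}}\right)^{-\frac{2}{7}} L_{37}^{-\frac{3}{7}}\ {\rm s},
\]
so that, at fixed $M$ and $R$, the equilibrium condition implies $\mu \propto P^{\frac{7}{6}} L^{\frac{1}{2}}$; this is the scaling invoked in Section 1 to argue that the long pulse period of X~Persei suggests a strong magnetic field. The pieces of the GL79 prediction can also be collected into a single numerical function, as in the following Python sketch. This is a simplified illustration rather than the actual fitting code used for our figures: the function names and the example parameter values are ours, and the auxiliary quantities $I_{45}$, $\mu$, and $L$ are evaluated with the approximations introduced in the remainder of this subsection (equations (4)--(7) below).
\begin{verbatim}
# Illustrative implementation of the GL79 spin-period derivative,
# following equations (1)--(7) of the text.  Not the actual fitting code.
import numpy as np

MSUN_G = 1.989e33
KPC_CM = 3.086e21
RS_KM  = 2.953                 # Schwarzschild radius of 1 Msun [km]

def n_torque(ws):
    """Dimensionless torque n(omega_s), eq. (2), for 0 <= ws <= 0.9."""
    return 1.39 * (1.0 - ws * (4.03 * (1.0 - ws)**0.173 - 0.878)) / (1.0 - ws)

def mu30(B, M_msun, R_km):
    """Magnetic dipole moment in 1e30 G cm^3 with the GR factor of eq. (5)."""
    X  = RS_KM * M_msun / R_km               # X = Rs / R
    gr = (X**3 / 3.0) / (-np.log(1.0 - X) - X - 0.5 * X**2)
    return 0.5 * B * (R_km * 1e5)**3 * gr / 1e30

def I45(M_msun, R_km):
    """Moment of inertia in 1e45 g cm^2, eq. (4) (Lattimer & Schutz 2005)."""
    R6 = R_km / 10.0
    return 0.474 * M_msun * R6**2 * (1.0 + 0.42 * M_msun / R6
                                     + 0.009 * M_msun**4 / R6**4)

def pdot_gl79(F_bol, P, M_msun, R_km, B, D_kpc):
    """Eq. (1): dP/dt in s/yr for a bolometric flux F_bol [erg/cm^2/s]."""
    L37 = 4.0 * np.pi * (D_kpc * KPC_CM)**2 * F_bol / 1e37   # eq. (7)
    R6  = R_km / 10.0
    m30 = mu30(B, M_msun, R_km)
    ws  = 1.35 * m30**(6/7) * R6**(-3/7) * M_msun**(-2/7) / P * L37**(-3/7)
    return (-5.0e-5 * m30**(2/7) * n_torque(ws) * R6**(6/7)
            * M_msun**(-3/7) / I45(M_msun, R_km) * P**2 * L37**(6/7))

# example call with indicative parameter values (negative output = spin-up)
print(pdot_gl79(F_bol=1.5e-9, P=835.0, M_msun=2.05, R_km=12.9,
                B=8e13, D_kpc=0.81))
\end{verbatim}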
We used an approximation for $I_{45}$ proposed by \citet{Lat2005}, \[ I_{45} \simeq (0.474\pm0.016) \left(\frac{M}{M_{\odot}}\right)R_{6}^2 \] \begin{equation} \ \ \ \ \ \ \ \ \ \ \times \left[1+0.42\left(\frac{M}{M_{\odot}}\right)R_{6}^{-1}+0.009\left(\frac{M}{M_{\odot}}\right)^{4} R_{6}^{-4}\right]. \end{equation} Considering the relativistic effects, $\mu$ is described in cgs units \citep{Was1983} as \begin{equation} \mu = \frac{1}{2} {BR^3}\ \frac{X^3}{3}\left[-\rm ln(1-X)-X-\frac{X^2}{2}\right]^{-1} \end{equation} where $X\equiv \left({R}/{R_{\rm s}}\right)^{-1}$, and $R_{\rm s}=2GM/c^2$ is the Schwarzschild radius. If $R\gg R_{\rm s}$, this formula can be expanded as \begin{equation} \mu \simeq \frac{1}{2} {BR^3}\left[1+\frac{3}{4}X+\frac{3}{5}X^{2}+\cdots \right]^{-1}. \end{equation} Finally, using the known distance $D$ to X~Persei, $L$ is expressed as \begin{equation} L=4 \pi D^2 F_{\rm bol}. \end{equation} By substituting equations (2) through (7) into equation (1), we can calculate a theoretical $\dot{P}$ vs $L_{37}$ relation, to fit the observed data points in figure \ref{GL}. However, the prediction is subject to uncertainties in the parameters involved, namely, $M$, $R$, $D$, and $B$. Because $B$ is the least constrained among them, and of our prime interest, we restricted the ranges of $M$, $R$, and $D$ to 1.0--2.4$M_{\odot}$, 9.5--15 km (e.g., \cite{Ozel2016, Bauswein2017}), and $0.81\pm 0.04$ kpc (Section 1), respectively, and aimed to estimate $B$. Since the correlation is almost linear and hence has effectively two free parameters, we specified trial values of ($B$, $D$), and optimized ($M$, $R$) to minimize the $\chi^2$. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{B_change_GL.ps} \end{center} \caption{The $\dot{P}$ and $F_{\rm bol}$ measurements with the RXTE/ASM (green circles) and MAXI/GSC (red squares), compared with the GL79 predictions. The black line shows the best-fit GL79 relation. It can be represented by multiple sets of parameters (figure \ref{massradi}), including a particular case of $M=2.05M_{\odot}$, $R=12.9$ km, $D=0.81$ kpc, and $B=8\times 10^{13}$ G. The two blue lines show predictions when $B$ is changed, with the other parameters kept unchanged.} \label{GL} \end{figure} An example of the best-fit GL79 prediction is shown with a black line in figure \ref{GL}, where $B=8\times 10^{13}$ G was assumed and the best-fit parameters were obtained as ($M$, $R$, $D$) = ($2.05M_{\odot}$, 12.9 km, 0.81 kpc). By introducing a 5.9\% systematic error to $F_{\rm bol}$, the data points have been explained successfully ($\chi^2 = 29$ for 29 d.o.f.), including both the spin-up and spin-down phases, since $n(\omega_{\rm s})$ can take both positive and negative values. However, due to the heavy model degeneracy, the parameters cannot be constrained uniquely. Instead, the same best-fit solution in figure \ref{GL} can be reproduced by multiple sets of ($M$, $R$, $D$, $B$). For example, a set of (2.02$M_{\odot}$, 14.1 km, 0.81 kpc, $6\times 10^{13}$ G) and a set of (2.05$M_{\odot}$, 9.5 km, 0.81 kpc, $23\times 10^{13}$ G) give essentially the same prediction, and hence the same $\chi^2$. The black circles in figure \ref{bchi} represent the minimum $\chi^2$ values as a function of the assumed $B$, where $M$, $R$, and $D$ are allowed to move freely within their respective constraints. We thus find that the same best-fit solution is obtained over a range of $B=(5-23) \times 10^{13}$ G. Below $B=5\times 10^{13}$ G, the fit $\chi^2$ starts increasing because $R$ is saturated at $R=15$ km.
In the same way, the fit worsening above $B=23\times 10^{13}$ G is due to $R$ hitting the assumed floor at $R=9.5$ km. This is mainly because the value of $\mu \sim \frac{1}{2} {BR^3}$ is roughly fixed by the zero-cross point in figure \ref{GL}, that is, the spin-up/down boundary \citep{Takagi2016}. The effect of changing $B$, with the other parameters kept fixed, is shown by the two blue lines in figure \ref{GL}. Under this constraint, only the zero-cross point moves while the slope remains almost unchanged, and the data favor $B=8\times 10^{13}$ G. Red and green points in figure \ref{bchi} represent the behavior of $\chi^2$ when the set of ($M$, $R$, $D$) is fixed to the best values for $B=5\times 10^{13}$ G and those for $B=23\times 10^{13}$ G, respectively. When ($M$, $R$, $D$) are fixed in this way, the increase of $\chi^2$ becomes steeper than in the case when they are left free. In short, the present study indicates $B=(5-23)\times 10^{13}$ G even when allowing $M$, $R$, and $D$ to vary freely over the assumed uncertainty ranges. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{best_b_12_surface_val_search_90.ps} \end{center} \caption{The $\chi^2$ value of the fit to figure \ref{GL} with equation (1), shown as a function of the assumed $B$. Open black circles show the results when $M$, $R$, and $D$ are allowed to take free values within the allowed ranges. Red points exemplify the case with $M=1.90M_{\odot}$, $R=14.6$ km, and $D=0.77$ kpc, whereas green ones assume $M=2.05M_{\odot}$, $R=9.5$ km, and $D=0.81$ kpc. The inset shows a detail at $B=(46-54)\times10^{12}$ G.} \label{bchi} \end{figure} As represented by equation (9) in \citet{Takagi2016}, the slope of the GL79 relation depends mainly on $M$ and $D$. Since $\dot{P}$ is proportional to $I^{-1}$, and hence to $M^{-1}$, the slope flattens as $M$ increases. Figure \ref{GLM} presents the same dataset as figure \ref{GL}, but is meant to show how the GL79 prediction depends on $M$. Because the predicted slope becomes too steep for lower $M$, the data prefer a relatively high value of $M$. So far, we have confirmed the argument by \citet{Takagi2016} that figure \ref{GL} can constrain, with its zero-cross point and the data slope, two out of the four model parameters ($M$, $R$, $D$, and $B$). In other words, specifying two of them allows us to estimate the remaining two through the GL79 fit to figure \ref{GL}. Choosing $D$ and $B$ as the two input parameters, figure \ref{massradi} shows how $M$ and $R$ are determined. The red and blue lines in figure \ref{massradi} represent grids of constant $D$ and $B$, respectively. Thanks to the accurate distance determination, $M$ is relatively well constrained, although we need to consider systematic errors involved in equation (1). In these arguments using figure \ref{massradi}, we assumed that the radius is in the range of $R=9.5-15$ km, based on a recent equation of state of NSs \citep{Ozel2016} and the GW170817 observations \citep{Bauswein2017}. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{M_change_GL.ps} \end{center} \caption{The data points and the black line are the same as in Fig.~\ref{GL}. The blue lines are for $M=1.70M_{\odot}$ and $M=1.40M_{\odot}$ with the same $R$, $D$, and $B$ as in the black line.} \label{GLM} \end{figure} \subsection{Systematic model uncertainties} In the previous section, we considered how the data can constrain $M$, $R$, $D$, and $B$, when the first three of them are allowed to move freely over their pre-defined uncertainty ranges, whereas $B$ can take any positive value.
However, we have yet to consider another source of systematic error, namely, the uncertainty in the GL79 model itself. This may be expressed by multiplying the right side of equation (1) by a normalization factor $A$, and allowing it to take values other than unity. Along the above line of argument, \citet{Takagi2016} showed that the MAXI/GSC data of the low-mass X-ray pulsar 4U~1626--67 can be explained very well with $A \approx 1.0$. Analyzing the MAXI/GSC and Fermi/GBM data of 12 Be pulsars in a similar manner, \citet{Sugizaki2017} argued that the value of $A$ also has a mean around unity, but scatters from $\sim 0.3$ to $\sim 3$ among the 12 sample objects. Furthermore, \citet{Sugizaki2017} successfully decomposed this scatter into the following three major sources. One comes from the model assumption that the emission of a pulsar is completely isotropic, which is obviously not warranted because X~Persei shows strong pulsations. As adopted by \citet{Sugizaki2017}, this uncertainty may amount to a factor of 2, after \citet{Basko1975}. Another uncertainty lies in the angle between the magnetic axis and the accretion plane; it can affect the dipole magnetic-field strength at large distances by another factor of 2 \citep{Wang1997}. The last uncertainty is that in the distance. In the present case, the last uncertainty, namely that in $D$, can be ignored, because it is rather small, and was already considered before introducing $A$. Therefore, we may presume that $A$ takes a value from 0.5 to 1.9. Then, the model fitting was repeated by multiplying $A$ onto the right side of equation (1), and varying it from 0.5 to 1.9. For example, two lines with $A = 0.5$ and $A = 1.3$ are shown in figure \ref{massradi}, where $D= 0.81$ kpc is fixed. As a result, the mass and magnetic-field ranges have been somewhat widened, to $M =1.3-2.4~M_\odot$ and $B = (4-25)\times 10^{13}$ G, respectively. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{norm_0.5_1.3_radius_mass_gaia2_systematic.ps} \end{center} \caption{Sets of ($M$, $R$) accepted by the present data. The constant-$B$ and constant-$D$ grids are represented by dashed blue lines and dashed black lines, respectively. In particular, the $M$--$R$ relation fixing $D=0.81$ kpc is represented by the red line. The solid black lines are those when the normalization of equation (1) is chosen as $A=0.5$ and $A=1.3$, both with $D=0.81$ kpc.} \label{massradi} \end{figure} \section{Discussion and Conclusion} Using the RXTE/ASM and MAXI/GSC data spanning altogether 22 yr, we studied the spin-period change rate $\dot{P}$ and the bolometric luminosity $L$ of X~Persei. Then, $\dot{P}$ has been confirmed to behave as a single-valued and approximately linear function of $L$ (figure \ref{GL}), covering both the spin-down ($\dot{P}>0$) phase before MJD 52200 and the spin-up ($\dot{P}<0$) phase afterwards. At the present spin period of $P \sim 837$ s, the torque equilibrium condition ($\dot{P}=0$; the zero-cross point in figure \ref{GL}) is realized at $L=5\times10^{34}$ erg s$^{-1}$ assuming $D=0.81$ kpc. In addition, the orbital period of X~Persei has been updated to $251.0^{+0.2}_{-0.1}$ d. \subsection{The magnetic field of the neutron star in X~Persei} By fitting the observed $\dot{P}$ vs $L$ relation with the theoretical GL79 model, incorporating its observational calibrations \citep{Takagi2016, Sugizaki2017}, we have estimated the surface dipole magnetic field of X~Persei as $B=(4-25) \times 10^{13}$ G.
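As a rough consistency check (a back-of-the-envelope estimate of ours, not part of the fitting procedure), one can invert equation (3) at the torque-equilibrium point, $\omega_{\rm s}=0.349$ with $P\simeq 837$ s and $L\simeq 5\times10^{34}$ erg s$^{-1}$, and convert the resulting dipole moment into a surface field using the leading terms of equation (6):
\begin{verbatim}
# Back-of-the-envelope estimate of B from the torque-equilibrium condition.
# Assumed inputs: the best-fit example M = 2.05 Msun, R = 12.9 km, together
# with P = 837 s and L_eq = 5e34 erg/s quoted in the text (cgs units below).
G, c, Msun = 6.674e-8, 3.0e10, 1.989e33

M, R6, P, L37 = 2.05, 1.29, 837.0, 5e34 / 1e37

# Invert equation (3) at omega_s = 0.349 for mu_30.
mu30 = (0.349 * P * L37**(3/7) * M**(2/7) * R6**(3/7) / 1.35)**(7/6)

# Equation (6), truncated: mu ~ (1/2) B R^3 / (1 + 3X/4 + 3X^2/5),
# with X = Rs/R the compactness; solve for B.
R = R6 * 1e6
X = 2 * G * M * Msun / c**2 / R
B = 2 * mu30 * 1e30 / R**3 * (1 + 0.75 * X + 0.6 * X**2)

print(f"mu ~ {mu30:.0f} x 10^30 G cm^3,  B ~ {B:.1e} G")
# Yields B of roughly (7-8) x 10^13 G, close to the best-fit value above.
\end{verbatim}
This crude estimate illustrates the point made below: the zero-cross point essentially fixes the dipole moment, so the inferred $B$ is controlled mainly by the adopted radius.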
Since the equilibrium values of $L$ and $P$ together specify the magnetic dipole moment ($\propto BR^3$) rather than $B$ itself, the derived estimate of $B$ is most strongly affected by the uncertainty in $R$. In fact, as in figure \ref{bchi}, the lower and upper limits of the derived $B$ respectively correspond to the upper bound of $R=15$ km and the lower bound of $R=9.5$ km, which we assumed as conservative limits. If $R$ is further restricted in future studies, e.g., by gravitational wave observations, the range of $B$ will become narrower accordingly. In any case, the present results show that X~Persei has a significantly stronger magnetic field than ordinary accretion-powered pulsars ($B\sim10^{12}$ G). \citet{Klus2014} investigated the relation between $\dot{P}$ and $L$ of 42 Be/X-ray binaries and showed that, if those systems are in near spin equilibrium, more than half of them have magnetic fields over the quantum critical level of $4.4 \times 10^{13}$ G. They estimated that the magnetic field of X~Persei would be $4.2 \times 10^{13}$ G, which is consistent with our results. Our result is also consistent with the suggestions from spectral studies by \citet{Doroshenko2012} and \citet{Sasano2015}. These authors noticed that the spectrum of X~Persei, extending to 160 keV \citep{Lutovinov2012} without a clear cutoff, is qualitatively distinct from those of ordinary X-ray pulsars, which decline steeply above 20--30 keV at least partially due to the presence of CRSFs \citep{Makishima2016}. Because the spectral cutoff of X~Persei must appear at rather high energies, its CRSF, if any, would be present above $\sim 400$ keV, corresponding to $B\gtrsim 4 \times 10^{13}$ G. As mentioned in section 1, the spectrum of X~Persei exhibits a shallow and broad dip at $\sim 30$ keV \citep{Coburn2001, Lutovinov2012}. \citet{Coburn2001} interpreted it as a CRSF, and proposed a value of $B\sim 10^{12}\,\rm G$. However, \citet{Doroshenko2012} fitted the spectrum of X~Persei successfully with two Comptonization models (to be interpreted as thermal and bulk Comptonization), without invoking a CRSF structure. A very similar result was obtained by \citet{Sasano2015} from a broad-band Suzaku spectrum of X~Persei. Thus, the dip around 30 keV can be better interpreted as a ``crossing point'' of the two Comptonization components, rather than a CRSF. \subsection{Luminosity of X~Persei} The GL79 relation which we employed, namely equation (1), assumes mass accretion from an accretion disk. This condition is probably satisfied in 4U~1626--67, and in the 12 Be pulsars studied by \citet{Sugizaki2017}. Specifically, these Be binaries are considered to be accreting from the circumstellar envelopes of their Be-type primaries (e.g., \cite{Okazaki2013}), through accretion disks that form near the pulsars. Compared to these typical Be X-ray binaries, X~Persei has an unusually wide and nearly circular orbit with $a_{\rm x}\sim 2$ a.u. \citep{Delgado2001}, together with a very low luminosity that lacks orbital modulation. In spite of these peculiarities, we believe that X~Persei is also fed through an accretion disk rather than via stellar-wind capture, for the following three reasons. The first reason supporting the disk-accretion scheme is that the primary star of X~Persei, of a spectral type (O9.5III-B0V)e \citep{Lyubi1997}, is not expected to launch massive stellar winds. Its luminosity of $\sim 3 \times 10^{4}~L_\odot$ and the empirical stellar wind intensity (Cox et al.
2000) predict a wind mass-loss rate of $\lesssim 10^{-7.5}~M_\odot$ yr$^{-1}$. At a distance of 2 a.u. (appropriate for the NS of X~Persei) from the star, and assuming a typical wind velocity of $10^8$ cm s$^{-1}$, this would yield a wind-capture X-ray luminosity of $\lesssim 10^{32}$ erg s$^{-1}$, which is much lower than the actually observed values of $10^{34-35}$ erg s$^{-1}$. Second, the spectra of X~Persei show neither strong iron-K emission lines (with an equivalent width of at most 7.5 eV; \cite{Maitra2017}), nor variable high absorption. According to \citet{Makishima1990}, these properties of X~Persei support the scheme of mass accretion from the Be envelope, rather than from the stellar winds. Finally, the Be envelope is likely to be actually feeding the pulsar, because a positive correlation was observed between the optical H$\alpha$ line intensity and the X-ray flux over 2009--2018 \citep{Zamanov2018}, involving the possible 7-yr periodicity, although no clear correlation was observed before 2009 \citep{Smith1999, Reig2016}. From the above arguments, X~Persei is considered to have the same accretion scheme as the other Be X-ray binaries. Its unusually low luminosity, without orbital modulation, can be attributed to its very wide and nearly circular orbit, along which the stellar envelope is expected to have a rather low density. Incidentally, the possible 7-yr X-ray periodicity (Section 2.4), correlated with the H$\alpha$ variations \citep{Zamanov2018}, may be produced by some long-term changes in the circumstellar envelope, as pointed out by \citet{Laplace2017}. \subsection{Mass of the neutron star in X~Persei} While the zero-cross point of the $L$ vs $\dot{P}$ correlation in figure \ref{GL} is most sensitive to the magnetic dipole moment of the pulsar, the slope of the correlation specifies mainly $M$ \citep{Takagi2016}. Thus, assuming $D=0.81\pm0.04$ kpc, and neglecting the systematic model uncertainty (Section 4.2), the NS in X~Persei is constrained to have $M=2.03\pm0.17M_{\odot}$ as in figure \ref{massradi}. Therefore, the object is suggested to be somewhat more massive than typical NSs with $M\sim 1.4M_{\odot}$. As shown in figure \ref{massradi}, the systematic GL79 model uncertainty affects the estimate of $M$, and its inclusion widens the range to $M = 1.3-2.4~M_{\odot}$. Then, the canonical value of $M=1.4~M_\odot$ can no longer be excluded. Nevertheless, in order for the object to have $M=1.4~M_\odot$, we need to invoke the smallest value of $A \sim 0.5$. This would in turn require, for example, that the emission from X~Persei is strongly beamed away from our line of sight, and that we are underestimating its spherically averaged luminosity by a factor of 2. Thus, the data still favor a relatively high mass (e.g., $\sim 2~M_\odot$). Let us tentatively assume that the NS in X~Persei is indeed massive. Then, this must date from its birth, because the accretion rate of X~Persei ($\sim 1 \times 10^{-12}~M_\odot$ yr$^{-1}$) and its estimated lifetime ($\sim 10^7$ yr) imply a negligible mass increase due to accretion. We hence arrive at an interesting possibility that a fraction of NSs could be born with a strong magnetic field ($B\geq10^{13}$ G) together with a rather high mass. Such a native difference, if true, might in turn reflect different mechanisms of NS formation. Even excluding the case of accretion-induced collapse of white dwarfs, NSs in Be X-ray binaries may be produced via supernovae of either electron-capture type, or gravitational core-collapse type.
As discussed by \citet{Kitaura2006}, the former type is likely to leave rather standard NSs with $\sim1.4M_{\odot}$ and $B\sim10^{12}$ G, as is possibly the case with the Crab pulsar \citep{Moriya2014}. In contrast, core-collapse supernovae might sometimes yield NSs with a higher $M$ and a stronger $B$, as represented by X~Persei. Even in that case, it is an open question whether a higher mass necessarily leads to a stronger field, or whether these two quantities scatter independently. \section*{Acknowledgement} The authors are grateful to all MAXI team members for the scientific analysis.
1,108,101,563,107
arxiv
\section{Introduction} Face detection, being one of the most fundamental problems in computer vision, plays an important role in the performance of various face-based applications ranging from face identification and verification to clustering, tagging and retrieval \cite{FaceevaluationStanLi}. There has been substantial progress in the development of efficient face detection techniques and many such techniques are now available in commercial products such as digital cameras, smartphones and social networking websites \cite{fddbTech}. Both in academia and industry, the current trend of developing new face detectors is primarily centered around detecting faces in unconstrained environments with huge variations of pose and illumination. This has led to the development of a number of recent face detectors that try to outperform each other on the Labeled Faces in the Wild (LFW) \cite{LFWTech}, Annotated Facial Landmarks in the Wild (AFLW) \cite{AFLWDataset} and other such datasets. Another focus of current face detection research is in developing fast face detectors for real time implementation on devices such as smartphones, tablets etc. In today's world, mobile devices are being used not only for verbal communication but also for accessing bank accounts and performing transactions, managing user profiles, accessing e-mail accounts, etc. With increasing usage, there is a growing concern about ensuring the security of users' personal information on these devices. Going beyond passwords and fingerprints, concepts have emerged for actively verifying users by analyzing faces from the front-facing camera, the swipe and keystroke patterns from the touchscreen and motions patterns from inertial sensors, among others \cite{umd_Dataset}, \cite{ContAuth_AJain}, \cite{Mobio_2012}. Face-based authentication systems rely on accurate detection of faces in their first step and successful verification in the next. Several research works have recently been published on face-based continuous user authentication techniques for smartphones, using representation-based or attribute-based approaches \cite{AA_Fathy}, \cite{Mobio_2012}, \cite{AA_Samangouei}. However, in all of these methods the faces and landmarks are assumed to be available by performing face detection within milliseconds. \begin{figure}[t] \centering \includegraphics[width=0.25\textwidth]{Umd_Dataset_Sample.jpg} \vskip -5pt \caption{Sample frames from the AA-01-FD mobile face dataset where one can clearly see the presence of partial faces.} \label{UMDDataSample} \vskip -10pt \end{figure} Viola and Jones's \cite{VJFull} ground-breaking method of face detection popularized Adaboost cascade structures and simple feature extraction techniques for realtime face detection. Hadid \emph{et al.} \cite{AA_Hadid} evaluated the feasibility of face and eye detection on cell phones using Adaboost cascade classifiers with Haar-like and local binary pattern (LBP) features, as well as a skin color based face detector. On a Nokia N90 mobile phone that has an ARM$9$ $220$ MHz processor and a built-in memory of $31$ MB, their work reported that the Haar+Adaboost method can detect faces in $0.5$ seconds from $320\times 240$ pixel images. This approach, however, is not effective when wide variations in pose and illumination are present. Some variations of this method are available in the literature such as \cite{RotationInvMultiview_Huang}, \cite{Multiview_heyden} but most of them are computationally very expensive. 
Zhu \emph{et al.} \cite{Ramanan:2012:FDP:2354409.2355119} proposed a method that uses a deformable part model where a face is defined as a collection of parts at each facial landmark, and the parts are then trained side-by-side with the face using a spring-like constraint. The method uses a mixture of tree-structured models resilient to viewpoint changes. Mar\'{i}n-Jim\'{e}nez \emph{et al.} \cite{LAEOdataset} also followed a similar approach for `Looking At Each Other' head detection. In contrast, Shen \emph{et al.} \cite{Shen_ExamplarFD} proposed an exemplar-based face detector that exploits advanced image retrieval techniques to avoid an expensive sliding window search. Apart from academic research, many face detectors such as Google Picasa, PittPatt and Face++ have been developed commercially as face detection has received more and more attention over time. However, most of the state-of-the-art techniques are not designed to detect partially visible or cropped faces (see Figure~\ref{UMDDataSample}). Hence, their recall rate is usually very low when operating at high precision. Furthermore, many of these methods require long computation times because of algorithmic complexity and are thus not suitable for real time implementation. In this paper, we introduce a face detection scheme for detecting faces from images captured by the front-facing cameras of mobile devices used for continuous authentication. This paper makes the following contributions: (1) A method for realtime face detection on a smartphone is proposed based on facial segments that can detect cropped and partial faces of users. (2) A simple yet effective technique of clustering the facial segments is proposed. (3) Through extensive experimentation, it is shown that the proposed method can achieve superior performance over some of the state-of-the-art methods in terms of accuracy and processing time. In the next section the proposed method is described in detail, followed by a brief summary of the dataset. Then, the experimental setup and results are explained. Finally, a summary of the work with future directions is provided in the concluding section. \section{Facial Segment based Face Detector (FSFD)}\label{method} \begin{figure}[t] \centering \includegraphics[width=0.38\textwidth]{SystemDiagramSVMICASSP.jpg} \vskip -8pt \caption{The block diagram of the overall system.} \label{SysteDiagramSVM} \vskip -10pt \end{figure} The system block diagram for the proposed face detector is shown in Fig. \ref{SysteDiagramSVM}. The process is divided into three phases. In the segment clustering phase, facial segments are logically clustered to estimate face regions from the training images. In the Support Vector Machine (SVM) learning phase, a linear SVM is trained with statistical features obtained from face proposals that are derived from the estimated faces. Finally, in the face detection phase, statistical features are obtained in a similar manner for each face proposal from test images and confidence scores are obtained for each proposal using the SVM classifier. The proposal with the highest confidence score, provided that the score exceeds a certain threshold $\theta$, is considered to be a face. Because of its simple architecture, the method can be implemented in realtime. In addition, it is fairly accurate and provides a good recall rate since it can efficiently detect partially visible faces. Thus, the FSFD method is suitable for real time face verification tasks on mobile devices.
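To make the segment-clustering idea concrete before the detailed description, the following self-contained Python sketch (our own illustration; the geometry shown is for the $L_{12}$ segment only, and all helper names are ours, not from the released implementation) estimates the full-face box and face center from a detected left-half segment, and groups estimated centers that lie within a radius $r$ of one another, as detailed in the next subsection.
\begin{verbatim}
# Sketch of face-box estimation and center clustering (our illustration).
# A detection is (x1, y1, x2, y2).  For the left-half segment L12, the full
# face is estimated by mirroring the box to the right, as in the text.
import math

def face_from_L12(box, img_width):
    x1, y1, x2, y2 = box
    fx2 = min(img_width, x2 + (x2 - x1))       # extend the box to the right
    return (x1, y1, fx2, y2)

def center_from_L12(box):
    x1, y1, x2, y2 = box
    return (x2, y1 + (y2 - y1) / 2.0)          # midpoint of the right edge

def cluster_centers(centers, r, c):
    """Group estimated face centers lying within radius r of a seed center;
    keep a cluster only if it has at least c members."""
    clusters = []
    for p in centers:
        members = [j for j, q in enumerate(centers)
                   if math.hypot(p[0] - q[0], p[1] - q[1]) <= r]
        if len(members) >= c:
            clusters.append(members)
    return clusters

# Example: two L12 detections whose estimated centers nearly coincide.
boxes = [(10, 20, 60, 120), (12, 22, 61, 118)]
centers = [center_from_L12(b) for b in boxes]
print(cluster_centers(centers, r=15.0, c=2))
\end{verbatim}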
The three main steps of the detector are discussed in detail in the following subsections. \subsection{Segment Clustering} As shown in Fig. \ref{SysteDiagramSVM}, in the segment clustering phase, contrast limited adaptive histogram equalization (CLAHE) is performed first on each of the training images to reduce the impact of illumination on the facial segment detectors. Each training image is then passed through a set of facial segment detectors. A total of $14$ facial segment detectors were trained using facial segments obtained from the cropped and aligned Labeled Faces in the Wild (LFW) dataset \cite{LFWTech}, \cite{LFW_Sanderson}, and negative examples were produced from the Pascal VOC dataset \cite{PascalVOC}. These detectors are Adaboost cascade classifiers trained using a local binary pattern (LBP) representation of the images for better feature representation and faster training \cite{LBPVJ}, \cite{opencv_library}. The facial segments for which the classifiers were trained are Eye-pair ($EP$), Upper-left-half of face ($UL_{12}$), Upper-half of face ($U_{12}$), Upper-right-half of face ($UR_{12}$), Upper-left-three-fourth of face ($UL_{34}$), Upper-three-fourth of face ($U_{34}$), Upper-right-three-fourth of face ($UR_{34}$), Left-half of face ($L_{12}$), Left-three-fourth of face ($L_{34}$), Nose ($NS$), Right-three-fourth of face ($R_{34}$), Right-half of face ($R_{12}$), Bottom-three-fourth of face ($B_{34}$) and Bottom-half of face ($B_{12}$). An example of all the $14$ facial segments of a full face from the LFW dataset is shown in Fig. \ref{FaceCrop}. \begin{figure}[t] \centering \includegraphics[width=0.27\textwidth]{FaceCropAll.jpg} \vskip -8pt \caption{All of the $14$ facial segments.} \label{FaceCrop} \vskip -10pt \end{figure} Each detector may return one or more facial segments in a frame. For example, in the segment clustering phase of Fig. \ref{SysteDiagramSVM}, $L_{12}$ returned two detection results while $UL_{12}$ returned four detection results. For each facial segment, the bounding box of the full face is estimated according to the ideal position of that segment relative to the whole face. For example, if the top left and bottom right corners of the bounding box obtained for segment $L_{12}$ are ($x_{1}^{L12}, y_{1}^{L12}$) and ($x_{2}^{L12}, y_{2}^{L12}$), respectively, then those for the estimated full face are ($x_{1}^{L12}, y_{1}^{L12}$) and ($min(w_{img}, x_{2}^{L12}+(x_{2}^{L12}-x_{1}^{L12})), y_{2}^{L12}$), where $w_{img}$ is the width of the image. The estimated face center from this segment is $(x_{2}^{L12}, y_{1}^{L12}+(y_{2}^{L12}-y_{1}^{L12})/2)$. For each estimated face center $p$, a cluster of segments $CL_{p}$ depicting the full face is formed such that (a) the number of segments in the cluster is above a threshold $c$, and (b) the other segments of the cluster have estimated face centers within a certain radius of $r$ pixels from $p$. \subsection{Learning SVM} In the learning phase, the first $\zeta$ subsets of the total set of facial segments from each cluster are regarded as proposed faces. Assuming that $m$ facial segments satisfy the condition for clustering around the $k$-th segment, there are $m+1$ segments in that cluster, where $m+1 \geq c$.
For example, if the $4$ segments $U_{12}$, $B_{12}$, $L_{34}$ and $UR_{12}$ form a cluster around $NS$ and $c=2$, then the viable subsets are $\allowbreak[[NS, U_{12}],\allowbreak [NS, B_{12}],\allowbreak \hdots,\allowbreak [NS,\allowbreak U_{12}, \allowbreak B_{12},\allowbreak L_{34},\allowbreak UR_{12}]]$. The total number of subsets here is $\sum_{j=1}^{4}{{4}\choose{j}}= 15$ including the complete set. Keeping the $k$-th segment fixed, $\zeta$ random subsets are considered as face proposals. In this example, $\zeta$ varies from $1$ to $14$. For $m+1$ segments, the number of subsets is on the order of $2^{m+1}$, which introduces huge computation and memory requirements. Hence, the number of subsets is limited to $\zeta \ll 2^{m+1}$. The bounding box of the face proposal is the smallest bounding box that encapsulates all the estimated faces from all the facial segments in that proposal. Intuitively, the greater the number of facial segments with better detection accuracy in a proposal, the higher the probability of that proposal being a face. Further, it is found experimentally that some particular sets of facial segments are more likely to return a face than others, and some sets of segments provide more accurate bounding boxes with greater consistency than the others. The amount of overlap between the detected region $d_{i}$ and the annotated region $a_{j}$ is expressed as the ratio of the intersection area to the union area, $I(d_{i}, a_{j})$, defined as \vskip -8pt \begin{equation} I(d_{i}, a_{j})=\frac{area(d_{i})\cap area(a_{j})}{area(d_{i})\cup area(a_{j})}. \end{equation} If this ratio is above a certain threshold $\delta$, then the detection result is considered to be correct. A linear SVM classifier is trained on the proposed faces using the following statistical features. \begin{enumerate} \item The probabilities of a proposal $S_{set}$ being a face, $Pr^{T}(S_{set})$, and not being a face, $Pr^{F}(S_{set})$, are defined as \vskip -15pt \begin{eqnarray} Pr^{T}(S_{set})&=&\frac{\sum{|S_{set}\in P_{F}^{T}|}}{|P_{F}^{T}|}\\ Pr^{F}(S_{set})&=&\frac{\sum{|S_{set}\in P_{F}^{F}|}}{|P_{F}^{F}|} \end{eqnarray} where $P_{F}^{T}=\{S|I(S, S_{GT})\geq \delta; S\in P_{Tr}^{F}\}$, and $P_{F}^{F}=\{S|I(S, S_{GT})<\delta; S\in P_{Tr}^{F}\}$. Here, $P_{Tr}^{F}$ denotes the set of clusters of facial segments that the detector proposed as faces. \item For each facial segment in a set $P^{i}_{Set}$ where $i\in S_{set}$, the probabilities of $P^{i}_{Set}$ being in a cluster depicting a true face, $Pr^{T}(P_{set}^{i})$, or a non-face, $Pr^{F}(P_{set}^{i})$, are calculated as \vskip -15pt \begin{eqnarray} Pr^{T}(P_{set}^{i})=\frac{\sum{|P^i_{set}\in S_{set}; S_{set}\in P_{F}^{T}|} }{|P_{F}^{T}|}\\ Pr^{F}(P_{set}^{i})=\frac{\sum{|P^i_{set}\in S_{set}; S_{set}\in P_{F}^{F}|}}{|P_{F}^{F}|}. \end{eqnarray} Experimentally, it is found that the nose detector is the most accurate of all, while $B_{12}$ is the least accurate. \end{enumerate} If $n$ segments are considered, then the feature size is $2n+2$ for each proposal. There are $2n$ values corresponding to the face and non-face probabilities of each of the $n$ segments, and the remaining $2$ values are the probabilities of the cluster being and not being a face. Among the $2n$ values, only those corresponding to the segments present in the proposal are non-zero. \subsection{Face Detection} For each pre-processed test image, the proposed faces are obtained in a similar manner to the training faces.
Thus, there are $\zeta$ face proposals from each face and feature vectors of size $2n+2$ for each proposal. The SVM classifier returns a confidence score for each proposal. The proposal that has the highest confidence score above a threshold is chosen as the detected face. \section{Dataset} \label{Dataset} The performance of the proposed face detector is evaluated on the Active Authentication Dataset (AA-01) \cite{umd_Dataset}, \cite{AA_Samangouei}. The AA-01 Dataset contains the front-facing camera face video for $50$ iPhone users ($43$ male, $7$ female) under three different ambient lighting conditions: well-lit, dimly-lit, and natural daylight. In each session, the users performed $5$ different tasks: enrollment, scroll test, picture count, document reading and picture drag. In order to evaluate the face detector, face bounding boxes were annotated in a total of $8036$ frames of the $50$ users. This dataset, denoted as AA-01-FD, contains $1607$ frames without faces and $6429$ frames with faces. For training, $20\%$ of these frames are randomly picked and the rest are used as test data. Some sample face images from the AA-01-FD Dataset are shown in Fig. \ref{UMDDataSample}. \section{Experimental Results} \label{Result} The experimental results are presented in this section, demonstrating the effectiveness of the FSFD method over other state-of-the-art face detectors. In particular, experimental results on the AA-01-FD dataset are compared with (a) the Viola-Jones face detector (VJ) \cite{VJFull}, (b) the deformable part-based model face detector (DPM) \cite{Ramanan:2012:FDP:2354409.2355119}, and (c) the Looking At Each Other (LAEO) head detector \cite{LAEOdataset}. \subsection{Evaluation Metrics and Experimental Setup} The results are evaluated in terms of F1-Score, true positive rate (TPR) at $1\%$ false positive rate (FPR), recall at $99\%$ precision, and processing time. The F1-Score, which is the harmonic mean of precision and recall, is defined as \vskip -8pt \begin{equation} F1=\frac{2TP}{2TP+FP+FN} \end{equation} where $TP$, $FP$ and $FN$ are the numbers of true positive, false positive and false negative detections, respectively. A higher F1-Score implies better overall precision and recall. For the active authentication problem, the goal is to achieve the best recall at a very high precision. Hence, the value of recall achieved by a detector at $99\%$ precision is considered another evaluation metric. Finally, for detectors that return a confidence score rather than a binary face/non-face decision, the TPR at $1\%$ FPR is used as a metric for evaluation. Here, also, higher values of TPR represent a better detector. \begin{figure}[t] \centering \includegraphics[width=0.43\textwidth]{EVvsZetaICASSP.jpg} \caption{Performance of $C0$ for different values of $\zeta$ - (a) F1-Score vs. $\zeta$, (b) TPR at $1\%$ FAR vs. $\zeta$, (c) Recall at $99\%$ Precision vs. $\zeta$, and (d) Time vs. $\zeta$.} \label{INVariation} \end{figure} Based on the relative importance of each segment, several combinations of segments are considered to determine the optimum number of segments to produce the best result. The basic combination of all $14$ segments is labeled as $C0$ and the best performing combination is labeled $Cbest$. $Cbest$ consists of $9$ segments - $NS$, $EP$, $UL_{34}$, $UR_{34}$, $U_{12}$, $L_{34}$, $UL_{12}$, $R_{12}$, $L_{12}$.
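For reference, the overlap criterion and the F1-Score defined above can be computed with a few lines of Python; the sketch below (our own illustration, not part of the released implementation) operates on axis-aligned boxes given as $(x_1, y_1, x_2, y_2)$.
\begin{verbatim}
# Sketch (ours): overlap ratio I(d, a) and the F1-Score.
def overlap_ratio(d, a):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(d[0], a[0]), max(d[1], a[1])
    ix2, iy2 = min(d[2], a[2]), min(d[3], a[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(d) + area(a) - inter
    return inter / union if union > 0 else 0.0

def f1_score(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)

# A detection counts as correct if its overlap exceeds delta (e.g. 0.5).
d, a = (10, 10, 110, 110), (20, 15, 115, 120)
print(overlap_ratio(d, a) > 0.5, f1_score(tp=90, fp=5, fn=10))
\end{verbatim}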
All the experiments in this paper were performed on an Intel Xeon(R) CPU E5506 operating at $2.13$ GHz with $5.8$ GB memory and a $64$ bit UBUNTU $14.04$ LTS operating system. No multi-threading was done to parallelize the operations of the different facial segment detectors; hence, further speed-up is achievable \cite{Pulli:2012:RCV:2184319.2184337}. The images, initially $1280\times 720$ pixels, are downsampled by a factor of $4$. It is experimentally verified that, for all the methods, this downsampling ensures the best performance in the quickest time. The minimum face size is set to be $64\times 64$ pixels. For clustering facial segments, the value of $c$ is set to two and $r$ is considered to be one-sixth of the half-diagonal of the estimated face of the center segment. A lower value of $r$ may improve the precision while possibly decreasing the recall. In order to determine the optimum value of $\zeta$, the evaluation metrics and average processing time per image are plotted in Fig. \ref{INVariation} with increasing $\zeta$ for combination $C0$ on AA-01-FD. It can be inferred that the performance of the system does not change significantly, while the time consumption per image rapidly increases with $\zeta$. Hence, a value of $\zeta=20$ is chosen to ensure the best performance in the shortest time. \subsection{Results and Comparison}\label{Results} \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{DetResFigs_Wide.jpg} \vskip -8pt \caption{Detection results for $C0$. Left - true positive and true negative detection results. Yellow boxes for facial segments, green boxes for the final estimated face. Right - false positive and false negative results.} \label{DetResFigs} \vskip -10pt \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.28\textwidth]{F1ScoreVsEta_CompareICASSP.jpg} \vskip -8pt \caption{Comparison of F1-Score for varying overlapping threshold $\eta$ for $Cbest$ ($\zeta=20, c=2$) with state-of-the-art methods.} \label{F1ScoreVsEta} \vskip -10pt \end{figure} Some sample face detection results obtained by the proposed method for combination $C0$ ($\zeta=20, c=2$) are shown in Fig. \ref{DetResFigs}. In this figure, the correct detection results are depicted in the first two rows and the false detection results are shown in the bottom row. The yellow bounding boxes denote different facial segments and the green bounding box encapsulating all the yellow boxes is the final estimated face. It is apparent from these images that FSFD is robust enough to detect partial faces, such as the upper-half or left-half of a face, as well as full faces. Also, it can be seen that the operation of the proposed detector is mostly unaffected by illumination and pose variation. The images where the detector fails are mostly extreme situations where only a fragment of the face is visible, the face is critically posed, or the motion blur is too severe.
\begin{table}[t] \centering \caption{Comparison between methods at $50\%$ overlap} \vskip -8pt \begin{tabular}{c c c} \hline Method & TPR at & Recall at \\ & $1\%$ FPR & $99\%$ Precision\\ \hline \hline DPM & 0.4155 & 0.5957\\ \hline LAEO & 0.1612 & - \\ \hline FSFD $C0$ ($\zeta=20$, $c=2$) & 0.4940 & 0.5577\\ \hline FSFD $Cbest$ ($\zeta=20$, $c=2$) & \textbf{0.5635} & \textbf{0.6372}\\ \hline \end{tabular} \label{TPR_Recall_Compare} \vskip -10pt \end{table} \begin{figure}[!htb] \centering \includegraphics[width=0.3\textwidth]{TimeVsMethodICASSP.jpg} \vskip -8pt \caption{Time per image for different methods.} \label{TimeVsMethod} \vskip -10pt \end{figure} In Fig. \ref{F1ScoreVsEta} the F1 scores of $C0$ and $Cbest$ are compared with those of the DPM, LAEO and VJ face detection methods while varying the threshold $\eta$. It can be observed that the FSFD method outperforms all of the other methods at all values of $\eta$. $Cbest$ especially provides superior performance at a high percentage of overlap with the ground truth. In Table \ref{TPR_Recall_Compare}, FSFD is compared with DPM in terms of TPR at $1\%$ FPR and Recall at $99\%$ Precision. For both metrics, $Cbest$ provides significantly better performance than DPM, as can be seen from the table. Note that LAEO cannot even reach $99\%$ precision at any value of recall on this dataset. In Fig. \ref{TimeVsMethod} the average detection times per image for the four methods are compared. It can be seen that, though $Cbest$ utilizes $9$ cascade classifiers, its time consumption is not $9$-fold that of VJ, which uses only one such classifier. This is because the scaling factor of VJ was set to a small value to ensure a finer search in order to obtain the best results. Conversely, the scale factor of each cascade classifier in $Cbest$ was relatively large. Hence, each classifier is individually weak, but the classifiers produce much better outcomes when combined. The LAEO and DPM detectors require $5$ to $7$ times more time than $Cbest$ and are not suitable for real time implementation for continuous authentication. \section{Conclusion and Future Directions} In this paper, a novel facial segment-based face detection technique is proposed that is suitable for face-based continuous authentication on mobile devices due to its high recall at excellent precision. A total of $14$ facial segment detectors have been trained, and an algorithm is introduced for clustering these segments to estimate a full or partially visible face. Through extensive experimentation, it is shown that the proposed method can achieve superior performance over state-of-the-art methods using fewer facial segment cascade classifiers, even though there remains considerable room for speeding up the process. Future work will be directed toward accurate landmark detection from overlapping facial segments and toward face authentication by fusing segment-wise verification scores. {\small \bibliographystyle{ieee}
1,108,101,563,108
arxiv
\section{Introduction} \em Categorical quantum mechanics \em \cite{AC} aims to recast quantum mechanical notions in terms of symmetric monoidal categories with additional structure. One layer of extra structure, compactness \cite{KellyLaplaza}, encompasses the well-known Choi-Jamiolkowski isomorphism. Compactness is itself subsumed by the much richer commutative Frobenius algebra structure \cite{CarboniWalters}, which governs classical data, observables, and certain tripartite states \cite{CPav, CD, CES2, CK}. In this symmetric monoidal form, quantum mechanics enjoys: \begin{itemize} \item an \em operational interpretation \em by making sequential and parallel composition of systems and processes the basic connectives of the language \cite{ContPhys}; \item an intuitive \em diagrammatic calculus \em \cite{ContPhys} via the Penrose-Joyal-Street diagrammatic calculus for symmetric monoidal categories \cite{Penrose, JS}, augmented with Kelly and Laplaza's coherence result for compact categories, and Lack's work on distributive laws \cite{Lack}; \item a \em logical underpinning \em \cite{RossThesis} via the closed structure resulting from compactness. \end{itemize}\par\noindent The last allows the application of automated reasoning techniques to quantum mechanics \cite{DD, quanto, DK}. A prototype software implementation, {\tt quantomatic}, already exists and is jointly developed in Edinburgh and Oxford. Categorical quantum mechanics has meanwhile been successful in solving problems in quantum information \cite{DP} and quantum foundations \cite{CES2}, where other methods and structures failed to be adequate. Key to these results is the description of \em interacting basis structures \em in \cite{CD}. The language of that paper consists of a pair of abstract bases or \em basis structures\em, which are, again in abstract terms, mutually unbiased, and an abstract generalisation of phases relative to bases. This formalism has been implemented in {\tt quantomatic}, and is expressive enough to universally model any linear map $f:\mathbb{Q}^{\otimes n}\to \mathbb{Q}^{\otimes m}$, where $\mathbb{Q}=\mathbb{C}^2$. On the other hand, if we restrict the language to the two basis structures only it becomes very poor, describing no more than 2 qubit states. This brings us to the subject of this paper. In \cite{CK} two of the authors introduced pairs of interacting commutative Frobenius algebras that do not model bases, but the tripartite GHZ and W states \cite{DVC}. Both these states can indeed be endowed with the structure of a commutative Frobenius algebra, yielding a \em GHZ structure \em and a \em W structure \em as we recall in Section \ref{sec:ghz-w}. The main point of this paper is that the language consisting of the GHZ structure (which is essentially the same as a basis structure) and the W structure is already rich enough to encode rational arithmetic, with the exception of additive inverses. Now an infinite number of qubit states can be described, corresponding to the rational numbers of the arithmetic system. We demonstrate this in Section \ref{sec:arithmetic}. In Section \ref{sec:additive-inverses} we extend the GHZ/W-calculus with one basic graphical element which then allows additive inverses to be captured. Section \ref{sec:automation} addresses the issue of how to implement the calculus within the {\tt quantomatic} software. We assume that the reader is familiar with the diagrammatic calculus for symmetric monoidal categories \cite{JS,SelingerSurvey}, which is also reviewed in \cite{CK}. 
We also assume that the reader is familiar with the (very) basics of finite dimensional Hilbert spaces and Dirac notation as used in quantum computing. \section{Frobenius Algebras and the GHZ/W-calculus}\label{sec:ghz-w} Fix a symmetric monoidal category $({\bf V},\otimes,I,\sigma)$. Throughout this paper, we shall define morphisms in $\bf V$ using the graphical notation defined in \cite{SelingerSurvey}. In this notation, `wires' correspond to objects and vertices, and `boxes' correspond to morphisms. We shall express composition vertically, from top to bottom, and the monoidal product as (horizontal) juxtaposition of graphs. When wires are not labeled, they are assumed to represent a fixed object, $Q$. \begin{example} A canonical example throughout will be ${\bf FHilb}$, the category of finite-dimensional Hilbert spaces and linear maps. In this case, $\otimes$ is the usual tensor product, $\sigma$ the swap map $v \otimes w \mapsto w \otimes v$, $I := \mathbb C$ and $Q := \mathbb C^2$, the space of qubits. We shall also refer the ``projective'' category of finite-dimensional Hilbert spaces, ${\bf FHilb}_p$, whose objects are the same as ${\bf FHilb}$ and whose arrows are linear maps, taken to be equivalent iff they differ only by a non-zero scalar. \end{example} \subsection{Commutative Frobenius Algebras} A \em commutative Frobenius algebra \em (CFA) consists of an internal commutative monoid $(Q, \dotmult{small dot}\ , \dotunit{small dot})$ and an internal cocommutative comonoid $(Q, \dotcomult{small dot}\ , \dotcounit{small dot})$ that interact via the Frobenius law: \[ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0.5, -1.5) {}; \node [style=none] (1) at (1.5, -1.5) {}; \node [style=none] (2) at (3, -1.5) {}; \node [style=none] (3) at (4, -1.5) {}; \node [style=dot] (4) at (1.5, -2) {}; \node [style=dot] (5) at (3.5, -2) {}; \node [style=none] (6) at (2.5, -2.25) {=}; \node [style=dot] (7) at (1, -2.5) {}; \node [style=dot] (8) at (3.5, -2.5) {}; \node [style=none] (9) at (1, -3) {}; \node [style=none] (10) at (2, -3) {}; \node [style=none] (11) at (3, -3) {}; \node [style=none] (12) at (4, -3) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (8) to (12.center); \draw (9.center) to (7); \draw (5) to (2.center); \draw (4) to (7); \draw (11.center) to (8); \draw (4) to (1.center); \draw (5) to (3.center); \draw (5) to (8); \draw[bend left=15] (7) to (0.center); \draw[bend left=15, looseness=1.25] (4) to (10.center); \end{pgfonlayer} \end{tikzpicture} \] One can show that any connected graph consisting only of $\dotmult{small dot}, \dotunit{small dot}, \dotcomult{small dot}, \dotcounit{small dot}, \sigma$ and $1_Q$ depends only upon the number of inputs, outputs, and loops. 
As such, it can be reduced to a canonical normal form: \[ \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (-2, 4.25) {}; \node [style=none] (1) at (-1, 4.25) {}; \node [style=none] (2) at (0, 4.25) {}; \node [style=none] (3) at (2, 4.25) {}; \node [style=dot] (4) at (-1.5, 3.75) {}; \node [style=dot] (5) at (-1, 3.25) {}; \node [style=none] (6) at (-0.75, 3) {}; \node [style=none] (7) at (-0.5, 2.75) {\small ...}; \node [style=none] (8) at (-0.25, 2.5) {}; \node [style=dot] (9) at (0, 2.25) {}; \node [style=dot] (10) at (0, 1.5) {}; \node [style=dot] (11) at (0, 0.75) {}; \node [style=none] (12) at (0, 0.25) {}; \node [style=none] (13) at (0, 0) {\small ...}; \node [style=none] (14) at (0, -0.25) {}; \node [style=dot] (15) at (0, -0.75) {}; \node [style=dot] (16) at (0, -1.5) {}; \node [style=dot] (17) at (0, -2.25) {}; \node [style=none] (18) at (-0.25, -2.5) {}; \node [style=none] (19) at (-0.5, -2.75) {\small ...}; \node [style=none] (20) at (-0.75, -3) {}; \node [style=dot] (21) at (-1, -3.25) {}; \node [style=dot] (22) at (-1.5, -3.75) {}; \node [style=none] (23) at (-2, -4.25) {}; \node [style=none] (24) at (-1, -4.25) {}; \node [style=none] (25) at (0, -4.25) {}; \node [style=none] (26) at (2, -4.25) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw[bend left=60, looseness=1.25] (10) to (11); \draw[bend right=60, looseness=1.25] (15) to (16); \draw[bend right=60, looseness=1.25] (10) to (11); \draw (16) to (17); \draw (9) to (10); \draw (4) to (5); \draw (9) to (3.center); \draw (1.center) to (4); \draw (25.center) to (21); \draw (21) to (20.center); \draw (14.center) to (15); \draw (8.center) to (9); \draw (5) to (6.center); \draw (23.center) to (22); \draw (24.center) to (22); \draw (0.center) to (4); \draw (22) to (21); \draw (2.center) to (5); \draw (17) to (26.center); \draw[bend left=60, looseness=1.25] (15) to (16); \draw (18.center) to (17); \draw (11) to (12.center); \end{pgfonlayer} \end{tikzpicture} \] In any connected graph, loops are counted as the total number of edges that can be removed without disconnecting the graph. We shall use `spider' notation to represent graphs of Frobenius algebras using vertices of any arity. 
We express any connected graph as above with $m$ inputs, $n$ outputs, and no loops as a single vertex of the same colour: \[ \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (3, 3.25) {$\overbrace{\qquad\qquad\qquad\qquad}^m$}; \node [style=none] (1) at (1, 2.5) {}; \node [style=none] (2) at (2, 2.5) {}; \node [style=none] (3) at (3, 2.5) {}; \node [style=none] (4) at (5, 2.5) {}; \node [style=dot] (5) at (1.5, 2) {}; \node [style=none] (6) at (-1, 1.75) {$\overbrace{\qquad\qquad}^m$}; \node [style=dot] (7) at (2, 1.5) {}; \node [style=none] (8) at (2.25, 1.25) {}; \node [style=none] (9) at (-2, 1) {}; \node [style=none] (10) at (-1.5, 1) {}; \node [style=none] (11) at (0, 1) {}; \node [style=none] (12) at (2.5, 1) {\small ...}; \node [style=none] (13) at (-0.75, 0.75) {\small ...}; \node [style=none] (14) at (2.75, 0.75) {}; \node [style=dot] (15) at (3, 0.5) {}; \node [style=dot] (16) at (-1, 0) {}; \node [style=none] (17) at (1, 0) {=}; \node [style=dot] (18) at (3, -0.5) {}; \node [style=none] (19) at (-0.75, -0.75) {\small ...}; \node [style=none] (20) at (2.75, -0.75) {}; \node [style=none] (21) at (-2, -1) {}; \node [style=none] (22) at (-1.5, -1) {}; \node [style=none] (23) at (0, -1) {}; \node [style=none] (24) at (2.5, -1) {\small ...}; \node [style=none] (25) at (2.25, -1.25) {}; \node [style=dot] (26) at (2, -1.5) {}; \node [style=none] (27) at (-1, -1.75) {$\underbrace{\qquad\qquad}_n$}; \node [style=dot] (28) at (1.5, -2) {}; \node [style=none] (29) at (1, -2.5) {}; \node [style=none] (30) at (2, -2.5) {}; \node [style=none] (31) at (3, -2.5) {}; \node [style=none] (32) at (5, -2.5) {}; \node [style=none] (33) at (3, -3.25) {$\underbrace{\qquad\qquad\qquad\qquad}_n$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (14.center) to (15); \draw[bend right=15] (10.center) to (16); \draw (29.center) to (28); \draw (28) to (26); \draw (30.center) to (28); \draw[bend right=15] (16) to (22.center); \draw (5) to (7); \draw (15) to (4.center); \draw[bend left] (16) to (23.center); \draw (3.center) to (7); \draw (20.center) to (18); \draw (15) to (18); \draw (7) to (8.center); \draw[bend right] (9.center) to (16); \draw (2.center) to (5); \draw[bend left=15] (11.center) to (16); \draw (31.center) to (26); \draw (18) to (32.center); \draw[bend right] (16) to (21.center); \draw (1.center) to (5); \draw (26) to (25.center); \end{pgfonlayer} \end{tikzpicture} \] We give two of these graphs special names. The \emph{cup} is defined as $\dotcup{small dot}$ and the \emph{cap} is defined as $\dotcap{small dot}$. 
These induce a compact structure, since \[ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (-0.75, 0.75) {}; \node [style=none] (1) at (1.75, 0.75) {}; \node [style=dot] (2) at (-4.25, 0.5) {}; \node [style=dot] (3) at (-0.75, 0.25) {}; \node [style=none] (4) at (0.25, 0.25) {}; \node [style=none] (5) at (-4.75, -0) {}; \node [style=none] (6) at (-2.75, -0) {}; \node [style=none] (7) at (-2, -0) {$=$}; \node [style=none] (8) at (1, -0) {$=$}; \node [style=none] (9) at (-1.25, -0.25) {}; \node [style=dot] (10) at (-0.25, -0.25) {}; \node [style=dot] (11) at (-3.25, -0.5) {}; \node [style=dot] (12) at (-0.25, -0.75) {}; \node [style=none] (13) at (1.75, -0.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw[out=0, in=270] (11) to (6.center); \draw[out=90, in=180] (5.center) to (2); \draw (3) to (10); \draw[out=0, in=180] (2) to (11); \draw (0) to (3); \draw (1.center) to (13.center); \draw (9.center) to (3); \draw (12) to (10); \draw (10) to (4.center); \end{pgfonlayer} \end{tikzpicture} \] \subsection{Phases} \begin{definition}{\rm\cite{CD}}\label{Phasedef} Given a CFA on an object $A$, a morphism $f:A\to A$ is a \em phase \em if we have \begin{equation}\label{eq:phases1} \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (-0.5, 1.5) {}; \node [style=none] (1) at (0.5, 1.5) {}; \node [style=square box] (2) at (-0.5, 0.5) {$f$}; \node [style=none] (3) at (0.5, 0.5) {}; \node [style=dot] (4) at (0, -0.5) {}; \node [style=none] (5) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (2.center); \draw [bend right] (2.center) to (4); \draw [bend right] (4) to (3.center); \draw (3.center) to (1); \draw (4) to (5); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (-0.5, 1.5) {}; \node [style=none] (1) at (0.5, 1.5) {}; \node [style=square box] (2) at (0.5, 0.5) {$f$}; \node [style=none] (3) at (-0.5, 0.5) {}; \node [style=dot] (4) at (0, -0.5) {}; \node [style=none] (5) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (3.center); \draw [bend right] (3.center) to (4); \draw [bend right] (4) to (2.center); \draw (2.center) to (1); \draw (4) to (5); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (-0.5, 1.5) {}; \node [style=none] (1) at (0.5, 1.5) {}; \node [style=dot] (2) at (0, 0.5) {}; \node [style=square box] (3) at (0, -0.5) {$f$}; \node [style=none] (4) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right=30] (0) to (2); \draw [bend right=30] (2) to (1); \draw (2) to (3.center); \draw (3.center) to (4); \end{pgfonlayer} \end{tikzpicture}} \eeq \end{definition} Equivalently, phases can be described as module endomorphisms, where $\dotmult{small dot}$ is considered as a left (or right) module over itself. 
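In ${\bf FHilb}$, for instance, if one takes the CFA on $\mathbb{C}^2$ that copies the computational basis, every diagonal map satisfies eqs.~(\ref{eq:phases1}). The following NumPy fragment is our own numerical sanity check of this fact, not part of the original text.
\begin{verbatim}
import numpy as np

e0, e1 = np.eye(2)
# Basis-copying multiplication m : C^2 (x) C^2 -> C^2, m|ii> = |i>, m|ij> = 0.
mult = np.outer(e0, np.kron(e0, e0)) + np.outer(e1, np.kron(e1, e1))
f = np.diag([1.0, np.exp(0.7j)])      # a sample diagonal map
I2 = np.eye(2)

# Check the phase equations: m(f (x) 1) = m(1 (x) f) = f m.
print(np.allclose(mult @ np.kron(f, I2), f @ mult),
      np.allclose(mult @ np.kron(I2, f), f @ mult))   # True True
\end{verbatim}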
\begin{proposition}\label{prop:phases-states} A phase $f:A\to A$ can be equivalently defined as a morphism of the form: \begin{equation}\label{eq:phases2} \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=white dot] (2) at (0, 0) {$\psi$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (2.center); \draw (2.center) to (1); \end{pgfonlayer} \end{tikzpicture}} \ \ := \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=dot] (2) at (0, 0) {}; \node [style=white dot] (3) at (1, 1) {$\psi$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (2.center); \draw (2.center) to (1); \draw [bend right=60] (2.center) to (3.center); \end{pgfonlayer} \end{tikzpicture}} \eeq for some element $\psi:{\rm I}\to A$. \end{proposition} \begin{proof} Given eqs.~(\ref{eq:phases1}) we have \[ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=square box] (2) at (0, 0) {$f$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (2.center); \draw (2.center) to (1); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=dot] (2) at (0, -0.5) {}; \node [style=square box] (3) at (0, 0.5) {$f$}; \node [style=dot] (4) at (1, 1.5) {}; \node [style=none] (5) at (1, 0.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (3.center); \draw (3.center) to (2.center); \draw (2.center) to (1); \draw [bend right=45] (2.center) to (5.center); \draw (5.center) to (4.center); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=dot] (2) at (0, -0.5) {}; \node [style=square box] (3) at (1, 0.5) {$f$}; \node [style=dot] (4) at (1, 1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (2.center); \draw (2.center) to (1); \draw (3.center) to (4); \draw [bend right=60] (2.center) to (3.center); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=small white dot,font=\footnotesize] (3) at (1, 1) {$f \circ \dotunit{small dot}$}; \node [style=none] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=dot] (2) at (0, -0.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right=60] (2.center) to (3.center); \draw (2) to (1); \draw (2) to (0); \end{pgfonlayer} \end{tikzpicture}} \] where we used unitality of the CFA, and conversely, given eq.~(\ref{eq:phases2}), eqs.~(\ref{eq:phases1}) straightforwardly follow by associativity and commutativity of the CFA. \end{proof} \begin{proposition}\label{prop:invphase} The inverse of a phase is a phase. 
\end{proposition} \begin{proof} Setting $ \raisebox{-4mm}{ \begin{tikzpicture}[scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1) {}; \node [style=square box] (1) at (0, 0) {$f$}; \node [style=none] (2) at (0, -1) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (1); \draw (1) to (2.center); \end{pgfonlayer} \end{tikzpicture} } := \left(\raisebox{-4mm}{ \begin{tikzpicture}[scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1) {}; \node [style=white dot] (1) at (0, 0) {$\psi$}; \node [style=none] (2) at (0, -1) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (1); \draw (1) to (2.center); \end{pgfonlayer} \end{tikzpicture} }\right)^{-1} $ we have \[ \begin{tikzpicture}[scale=0.8] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (-5.75, 1.5) {}; \node [style=none] (1) at (-4.75, 1.5) {}; \node [style=none] (2) at (-3.25, 1.5) {}; \node [style=none] (3) at (-2.25, 1.5) {}; \node [style=none] (4) at (-0.75, 1.5) {}; \node [style=none] (5) at (0.25, 1.5) {}; \node [style=none] (6) at (1.75, 1.5) {}; \node [style=none] (7) at (2.75, 1.5) {}; \node [style=square box] (8) at (-5.75, 0.75) {$f$}; \node [style=none] (9) at (-4.75, 0.75) {}; \node [style=square box] (10) at (-3.25, 0.75) {$f$}; \node [style=none] (11) at (-2.25, 0.75) {}; \node [style=square box] (12) at (-0.75, 0.75) {$f$}; \node [style=dot] (13) at (-5.25, 0) {}; \node [style=none] (14) at (-4, 0) {$=$}; \node [style=dot] (15) at (-2.75, 0) {}; \node [style=none] (16) at (-1.5, 0) {$=$}; \node [style=none] (17) at (1, 0) {$=$}; \node [style=white dot] (18) at (-0.75, -0.25) {$\psi$}; \node [style=none] (19) at (0.25, -0.25) {}; \node [style=none] (20) at (1.75, -0.25) {}; \node [style=none] (21) at (2.75, -0.25) {}; \node [style=white dot] (22) at (-2.75, -0.75) {$\psi$}; \node [style=dot] (23) at (-0.25, -1) {}; \node [style=dot] (24) at (2.25, -1) {}; \node [style=square box] (25) at (-2.75, -1.75) {$f$}; \node [style=square box] (26) at (-0.25, -1.75) {$f$}; \node [style=square box] (27) at (2.25, -1.75) {$f$}; \node [style=none] (28) at (-5.25, -2.5) {}; \node [style=none] (29) at (-2.75, -2.5) {}; \node [style=none] (30) at (-0.25, -2.5) {}; \node [style=none] (31) at (2.25, -2.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (22) to (25); \draw (9.center) to (1.center); \draw[bend right] (20.center) to (24); \draw (6.center) to (20.center); \draw (26) to (30.center); \draw[bend right] (24) to (21.center); \draw[bend right] (13) to (9.center); \draw (7.center) to (21.center); \draw (27) to (31.center); \draw[bend right] (15) to (11.center); \draw (0.center) to (8); \draw[bend right] (10) to (15); \draw (4.center) to (12); \draw[bend right] (8) to (13); \draw[bend right] (23) to (19.center); \draw (19.center) to (5.center); \draw (23) to (26); \draw (13) to (28.center); \draw (24) to (27); \draw[bend right] (18) to (23); \draw (2.center) to (10); \draw (25) to (29.center); \draw (15) to (22); \draw (11.center) to (3.center); \draw (12) to (18); \end{pgfonlayer} \end{tikzpicture} \] \end{proof} \subsection{GHZ/W calculus} In this paper, we are concerned not only with general CFAs, but two specific cases, depending on the behaviour of the loops. We refer to $\dotmult{small dot} \circ \dotcomult{small dot}$ as the \emph{loop map} of a CFA. 
\begin{definition}{\rm\cite{CK}} A \emph{GHZ-structure} is a special commutative Frobenius algebra; that is, a commutative Frobenius algebra where the loop map is equal to the identity: \[ \begin{tikzpicture}[dotpic,yshift=5mm] \node [dot] (a) at (0,0) {}; \node [dot] (b) at (0,-1) {}; \draw [bend left] (a) to (b); \draw [bend right] (a) to (b); \draw (0,0.5) to (a) (b) to (0,-1.5); \end{tikzpicture} = \ \begin{tikzpicture}[dotpic] \draw (0,1) -- (0,-1); \end{tikzpicture} \] \end{definition} These GHZ-structures have also been referred to as \emph{basis structures}, for example in \cite{CES2}, because of their strong connection to bases in finite-dimensional vector spaces. See Theorem \ref{basisthm} below. \begin{definition}{\rm\cite{CK}} A \emph{W-structure} is an anti-special commutative Frobenius algebra. This is commutative Frobenius algebra whose loop map obeys the following equation: \begin{equation}\label{eq:antispec} \circl\ \begin{tikzpicture}[dotpic] \node [dot] (a) at (0,0.5) {}; \node [dot] (b) at (0,-0.5) {}; \draw [bend left] (a) to (b); \draw [bend right] (a) to (b); \draw (0,1) to (a) (b) to (0,-1); \end{tikzpicture}\ \ = \begin{tikzpicture}[dotpic] \node [dot] (a) at (0,0.7) {}; \node [dot] (b) at (0,-0.7) {}; \draw (0,1.2) to (a) (b) to (0,-1.2); \draw (a) to [downloop] (); \draw (b) to [uploop] (); \end{tikzpicture} \end{equation} where we use the following short-hand notation: \[ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (1, 0.75) {}; \node [style=none] (1) at (5.25, 0.75) {}; \node [style=none] (2) at (3.25, 0.5) {}; \node [style=dot] (3) at (1, 0.25) {}; \node [style=dot] (4) at (5.25, 0.25) {}; \node [style=dot] (5) at (-1, 0) {}; \node [style=none] (6) at (0, 0) {=}; \node [style=dot] (7) at (3.25, 0) {}; \node [style=none] (8) at (4.25, 0) {=}; \node [style=dot] (9) at (1, -0.25) {}; \node [style=dot] (10) at (5.25, -0.25) {}; \node [style=none] (11) at (-1, -0.5) {}; \node [style=none] (12) at (1, -0.75) {}; \node [style=dot] (13) at (5.25, -0.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw[bend left=300, looseness=1.25] (10) to (4); \draw[bend right=300, looseness=1.25] (3) to (9); \draw[out=45, in=135, loop] (5) to (); \draw (4) to (1.center); \draw[bend left=60, looseness=1.25] (10) to (4); \draw (9) to (12.center); \draw (0) to (3); \draw[bend right=60, looseness=1.25] (3) to (9); \draw[out=-45, in=-135, loop] (7) to (); \draw (5) to (11.center); \draw (7) to (2.center); \draw (13) to (10); \end{pgfonlayer} \end{tikzpicture} \] \end{definition} This distinction essentially comes down to whether the loop map is singular or invertible. \begin{lemma}\label{lem:iso-scfa} If the loop map of a CFA is an isomorphism, the CFA can be made special via a phase. \end{lemma} \begin{proof} Consider a CFA $(\dotmult{small dot}, \dotunit{small dot}, \dotcomult{small dot}, \dotcounit{small dot})$. Since its loop map is a phase, by Proposition \ref{prop:invphase} so is the inverse of the loop map, which we denote $f$. 
Then $(\,\dotmult{small dot}\, , \, \dotunit{small dot}\,, \raisebox{-4.5mm}{ \begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (-8, 1.5) {}; \node [style=square box] (1) at (-8, 0.75) {$f$}; \node [style=dot] (2) at (-8, 0) {}; \node [style=none] (3) at (-8.5, -0.5) {}; \node [style=none] (4) at (-7.5, -0.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (2) to (4.center); \draw (1) to (2); \draw (0.center) to (1); \draw (2) to (3.center); \end{pgfonlayer} \end{tikzpicture}} ,\,\cololli\,)$ is easily seen to be a special CFA. \end{proof} \begin{lemma}[Herrmann \cite{Hermann}] If the loop of a CFA is disconnected, i.e.~factors over the tensor unit, then it obeys eq.~{\rm(\ref{eq:antispec})}, that is the CFA is necessarily anti-special. \end{lemma} The following is an example of a GHZ-structure in ${\bf FHilb}$: \begin{equation}\label{GHZ-SCFA} \begin{split} \dotmult{small white dot} & = \ket{0}\bra{00} + \ket{1}\bra{11} \qquad\qquad \dotunit{small white dot} = \sqrt{2}\, \ket{+} := \ket{0}+\ket{1} \\ \dotcomult{small white dot} & = \ket{00}\bra{0} + \ket{11}\bra{1} \qquad\qquad \dotcounit{small white dot} = \sqrt{2} \bra{+} := \bra{0}+\bra{1} \end{split}\vspace{-1.5mm} \end{equation} and we also have an example of a W-structure in ${\bf FHilb}$: \begin{equation}\label{W-ACFA} \begin{split} \dotmult{small dot} & = \ket{1}\bra{11} + \ket{0}\bra{01} + \ket{0}\bra{10} \qquad\qquad\qquad \dotunit{small dot} = \ket 1\qquad\qquad \\ \dotcomult{small dot} & = \ket{00}\bra{0} + \ket{01}\bra{1} + \ket{10}\bra{1} \qquad\qquad\qquad \dotcounit{small dot} = \bra 0\qquad\qquad \end{split} \end{equation} Note that the cups for these CFAs do not coincide: \[ \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (1.5, 0.75) {}; \node [style=white dot] (1) at (-1.5, 0.25) {}; \node [style=white dot] (2) at (1.5, 0.25) {}; \node [style=none] (3) at (0, 0) {$:=$}; \node [style=none] (4) at (-2, -0.25) {}; \node [style=none] (5) at (-1, -0.25) {}; \node [style=none] (6) at (1, -0.25) {}; \node [style=none] (7) at (2, -0.25) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (2); \draw (7.center) to (2); \draw[bend right=45] (5.center) to (1); \draw (6.center) to (2); \draw[bend left=45] (4.center) to (1); \end{pgfonlayer} \end{tikzpicture}\ = \ket{00}+\ket{11} \qquad\qquad \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (1.5, 0.75) {}; \node [style=dot] (1) at (-1.5, 0.25) {}; \node [style=dot] (2) at (1.5, 0.25) {}; \node [style=none] (3) at (0, 0) {$:=$}; \node [style=none] (4) at (-2, -0.25) {}; \node [style=none] (5) at (-1, -0.25) {}; \node [style=none] (6) at (1, -0.25) {}; \node [style=none] (7) at (2, -0.25) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (2); \draw (7.center) to (2); \draw[bend right=45] (5.center) to (1); \draw (6.center) to (2); \draw[bend left=45] (4.center) to (1); \end{pgfonlayer} \end{tikzpicture}\ = \ket{01}+\ket{10} \] However, the composition of a cap from one CFA with a cup from the other yields the Pauli X, or `NOT', gate: \[ \begin{tikzpicture}[dotpic,scale=0.5] \node [bn] (0) at (0,1.5) {}; \node [bn] (1) at (0,-1.5) {}; \draw (0)-- node[tick]{-} (1); \end{tikzpicture} :=\ \begin{tikzpicture}[dotpic,yshift=-5mm,scale=0.5] \node [bn] (b0) at (-1,2) {}; \node [dot] (0) at (0,0) {}; \node [white dot] (1) at (1.5,1) {}; \node [bn] (b1) at (2.5,-1) {}; \draw (b0) to [out=-90,in=180] (0) (0) to [out=0,in=180] (1) (1) to [out=0,in=90] 
(b1); \end{tikzpicture} \ =\ \begin{tikzpicture}[dotpic,yshift=-5mm,scale=0.5] \node [bn] (b0) at (-1,2) {}; \node [white dot] (0) at (0,0) {}; \node [dot] (1) at (1.5,1) {}; \node [bn] (b1) at (2.5,-1) {}; \draw (b0) to [out=-90,in=180] (0) (0) to [out=0,in=180] (1) (1) to [out=0,in=90] (b1); \end{tikzpicture} \ =\ \left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right) \] These CFAs respectively induce the following tripartite states: \[ \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (0, 0.75) {}; \node [style=white dot] (1) at (0, 0) {}; \node [style=white dot] (2) at (-0.5, -0.5) {}; \node [style=none] (3) at (-1, -1) {}; \node [style=none] (4) at (0, -1) {}; \node [style=none] (5) at (1, -1) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (4.center) to (2); \draw (3.center) to (2); \draw (2) to (1); \draw (0) to (1); \draw (5.center) to (1); \end{pgfonlayer} \end{tikzpicture} \ = \ket{000}+\ket{111} = \ket{\textit{GHZ}\,} \qquad \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0, 0.75) {}; \node [style=dot] (1) at (0, 0) {}; \node [style=dot] (2) at (-0.5, -0.5) {}; \node [style=none] (3) at (-1, -1) {}; \node [style=none] (4) at (0, -1) {}; \node [style=none] (5) at (1, -1) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (4.center) to (2); \draw (3.center) to (2); \draw (2) to (1); \draw (0) to (1); \draw (5.center) to (1); \end{pgfonlayer} \end{tikzpicture} \ = \ket{100}+\ket{010}+\ket{001} = \ket{\textit{W\,}} \] As the name suggests, the associated tripartite state of the above GHZ-structure is a GHZ state, and that of the W-structure is a W state. Furthermore, Theorem \ref{GHZ/Wthm} asserts that for qubits, the associated tripartite state of \emph{any} GHZ-structure (resp. W-structure) is a GHZ state (resp. W state), up to local operations. \begin{theorem}{\rm\cite{CK}}\label{GHZ/Wthm} For any special (respectively~anti-special) CFA on a qubit in ${\bf FHilb}$, the induced tripartite state is SLOCC-equivalent to $\ket{\textit{GHZ}\,}$ (respectively~$\ket{\textit{W\,}}$). Furthermore, any tripartite state $\ket\Psi$ either induces a special or anti-special CFA-structure, depending on whether it is SLOCC-equivalent to $\ket{\textit{GHZ}\,}$ or to $\ket{\textit{W\,}}$. \end{theorem} Theorem \ref{basisthm} justifies the alternative name \em basis structure \em for GHZ-structures. \begin{theorem}{\rm\cite{Aguiar}}\label{basisthm} Special commutative Frobenius algebras on a finite-dimensional Hilbert space ${\cal H}$ are in 1-to-1 correspondence with (possibly non-orthogonal) bases for ${\cal H}$. \end{theorem} For any special CFA, phases are matrices that are diagonal in the corresponding basis. The corresponding $|\psi\rangle$ (as in proposition \ref{prop:phases-states}) lies on the equator of the Bloch sphere, justifying the name `phases'. We can also consider interactions between a GHZ-structure and a W-structure. 
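Before doing so, it is worth instantiating the above in ${\bf FHilb}$. For the structures of eqs.~(\ref{GHZ-SCFA}) and (\ref{W-ACFA}) a direct computation gives the loop maps
\[
\dotmult{small white dot}\circ\dotcomult{small white dot} \;=\; \ket{0}\bra{0}+\ket{1}\bra{1},
\qquad\qquad
\dotmult{small dot}\circ\dotcomult{small dot} \;=\; 2\,\ket{0}\bra{1},
\]
so the GHZ-structure is indeed special, while the W-structure has a singular loop map. Likewise, composing the cap $\bra{01}+\bra{10}$ with the cup $\ket{00}+\ket{11}$ (in either order) sends $\ket{0}\mapsto\ket{1}$ and $\ket{1}\mapsto\ket{0}$, confirming that the `NOT' map above is the Pauli X; this is the map written with a tick in the definition that follows.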
\begin{definition}{\rm\cite{CK}} A GHZ- and a W-structure form a \em GHZ/W-pair \em if the following equations hold: \begin{center} \raisebox{4mm}{ \begin{tikzpicture}[dotpic,scale=0.5] \node [bn] (0) at (0,1.5) {}; \node [bn] (1) at (0,-1.5) {}; \draw (0)-- node[tick]{-} (1); \end{tikzpicture} \ $:=$\ \ \begin{tikzpicture}[dotpic,yshift=-5mm,scale=0.5] \node [bn] (b0) at (-1,2) {}; \node [dot] (0) at (0,0) {}; \node [white dot] (1) at (1.5,1) {}; \node [bn] (b1) at (2.5,-1) {}; \draw (b0) to [out=-90,in=180] (0) (0) to [out=0,in=180] (1) (1) to [out=0,in=90] (b1); \end{tikzpicture} \ $\stackrel{\alpha}{=}$\ \ \begin{tikzpicture}[dotpic,yshift=-5mm,scale=0.5] \node [bn] (b0) at (-1,2) {}; \node [white dot] (0) at (0,0) {}; \node [dot] (1) at (1.5,1) {}; \node [bn] (b1) at (2.5,-1) {}; \draw (b0) to [out=-90,in=180] (0) (0) to [out=0,in=180] (1) (1) to [out=0,in=90] (b1); \end{tikzpicture} } \qquad \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (2.5, 1.5) {}; \node [style=none] (1) at (5, 1.5) {}; \node [style=dot] (2) at (-3, 1.25) {}; \node [style=dot] (3) at (-0.75, 1) {}; \node [style=dot] (4) at (0.25, 1) {}; \node [style=none] (5) at (-1.75, 0.75) {$\stackrel{\beta}{=}$}; \node [style=none] (6) at (3.75, 0.75) {$\stackrel{\gamma}{=}$}; \node [style=white dot] (7) at (2.5, 0.5) {}; \node [style=white dot] (8) at (5, 0.5) {}; \node [style=white dot] (9) at (-3, 0.25) {}; \node [style=none] (10) at (-0.75, 0) {}; \node [style=none] (11) at (0.25, 0) {}; \node [style=none] (12) at (-3.5, -0.25) {}; \node [style=none] (13) at (-2.5, -0.25) {}; \node [style=none] (14) at (1.75, -0.25) {}; \node [style=none] (15) at (3.25, -0.25) {}; \node [style=none] (16) at (4.25, -0.25) {}; \node [style=none] (17) at (5.75, -0.25) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (7) to (15.center); \draw (2) to (9); \draw (12.center) to (9); \draw (9) to (13.center); \draw (8) to node[tick]{-} (17.center); \draw (16.center) to node[tick]{-} (8); \draw (0.center) to node[tick]{-} (7); \draw (3) to (10.center); \draw (1.center) to (8); \draw (14.center) to (7); \draw (4) to (11.center); \end{pgfonlayer} \end{tikzpicture} \qquad\ \ \raisebox{3mm}{\ensuremath{\circl \dottickunit{small dot} \stackrel{\xi}{=}\, \lolli}} \end{center} \end{definition} By eqs.~($\beta$, $\gamma$) we also have:\vspace{-5mm} \begin{center} \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (-1.5, -0.75) {}; \node [style=none] (1) at (0, -1.25) {$\stackrel{\beta'}{=}$}; \node [style=dot] (2) at (1.25, -1.25) {}; \node [style=dot] (3) at (2.25, -1.25) {}; \node [style=white dot] (4) at (-1.5, -1.75) {}; \node [style=none] (5) at (-2, -2.25) {}; \node [style=none] (6) at (-1, -2.25) {}; \node [style=none] (7) at (1.25, -2.25) {}; \node [style=none] (8) at (2.25, -2.25) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to node[tick]{-} (4); \draw (3) to node[tick]{-} (8.center); \draw (5.center) to (4); \draw (2) to node[tick]{-} (7.center); \draw (4) to (6.center); \end{pgfonlayer} \end{tikzpicture} \end{center} \subsection{Plugging} Since we are often concerned with objects in a monoidal category that are finitary in nature, we can deduce many new identities using a technique we call \emph{plugging}. \begin{definition}\label{def:plugging-set} A set of points $\{ \psi_i : I \rightarrow Q \}$ form a \emph{plugging set} for $Q$ if they suffice to distinguish maps from $Q$. 
That is, for all objects $A$ and maps $f,g : Q \rightarrow A$, \[\begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (-3.5, 1.5) {}; \node [style=none] (1) at (-1.5, 1.5) {}; \node [style=white dot, font=\footnotesize] (2) at (2.5, 1.5) {$\psi_i$}; \node [style=white dot, font=\footnotesize] (3) at (4.5, 1.5) {$\psi_i$}; \node [style=none, font=\footnotesize] (4) at (-3.75, 1.25) {$Q$}; \node [style=none, font=\footnotesize] (5) at (-1.75, 1.25) {$Q$}; \node [style=square box] (6) at (-3.5, 0) {$f$}; \node [style=none] (7) at (-2.5, 0) {=}; \node [style=square box] (8) at (-1.5, 0) {$g$}; \node [style=none] (9) at (0, 0) {$\Leftrightarrow$}; \node [style=none] (10) at (1.5, 0) {$\forall i .$}; \node [style=square box] (11) at (2.5, 0) {$f$}; \node [style=none] (12) at (3.5, 0) {=}; \node [style=square box] (13) at (4.5, 0) {$g$}; \node [style=none, font=\footnotesize] (14) at (-3.75, -1.25) {$A$}; \node [style=none, font=\footnotesize] (15) at (-1.75, -1.25) {$A$}; \node [style=none, font=\footnotesize] (16) at (2.25, -1.25) {$A$}; \node [style=none, font=\footnotesize] (17) at (4.25, -1.25) {$A$}; \node [style=none] (18) at (-3.5, -1.5) {}; \node [style=none] (19) at (-1.5, -1.5) {}; \node [style=none] (20) at (2.5, -1.5) {}; \node [style=none] (21) at (4.5, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (18.center); \draw (1.center) to (19.center); \draw (3) to (21.center); \draw (2) to (20.center); \end{pgfonlayer} \end{tikzpicture}\] \end{definition} When we prove a graphical identity by showing two maps are not distinguished by a plugging set, we call this `proof by plugging.' Also note that we can extend such proofs to maps of the form $f : Q \otimes A \rightarrow B$ or $f' : A \rightarrow Q \otimes B$ by using the Frobenius caps and cups when $Q$ has a CFA $(\dotmult{small dot}, \dotunit{small dot}, \dotcomult{small dot}, \dotcounit{small dot})$ and $A$ a CFA $(\dotmult{small white dot}, \dotunit{small white dot}, \dotcomult{small white dot}, \dotcounit{small white dot})$, \[ \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-2.25, 1.75) {}; \node [style=white dot] (1) at (4.25, 1.75) {}; \node [style=none] (2) at (-4, 1.5) {}; \node [style=none] (3) at (1.5, 1.5) {}; \node [style=none, font=\footnotesize] (4) at (-4.25, 1.25) {$Q$}; \node [style=none, font=\footnotesize] (5) at (-3.25, 1.25) {$A$}; \node [style=none, font=\footnotesize] (6) at (1.75, 1.25) {$Q$}; \node [style=none, font=\footnotesize] (7) at (3.25, 1.25) {$A$}; \node [style=none] (8) at (-3, 1) {}; \node [style=none] (9) at (-1.5, 1) {}; \node [style=none] (10) at (3.5, 1) {}; \node [style=none] (11) at (5, 1) {}; \node [style=none] (12) at (-4, 0.5) {}; \node [style=none] (13) at (-3, 0.5) {}; \node [style=none] (14) at (3.5, 0.5) {}; \node [style=square box, minimum width=1 cm] (15) at (-3.5, 0) {$f$}; \node [style=square box, minimum width=1 cm] (16) at (3.5, 0) {$f'$}; \node [style=none] (17) at (-3.5, -0.5) {}; \node [style=none] (18) at (3, -0.5) {}; \node [style=none] (19) at (4, -0.5) {}; \node [style=none] (20) at (1.5, -0.75) {}; \node [style=none] (21) at (3, -0.75) {}; \node [style=none, font=\footnotesize] (22) at (-3.75, -1.25) {$B$}; \node [style=none, font=\footnotesize] (23) at (4.25, -1.25) {$B$}; \node [style=none] (24) at (-3.5, -1.5) {}; \node [style=none] (25) at (-1.5, -1.5) {}; \node [style=dot] (26) at (2.25, -1.5) {}; \node [style=none] (27) at (4, -1.5) {}; \node [style=none] (28) at (5, 
-1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (2.center) to (12.center); \draw (8.center) to (13.center); \draw[out=0, in=90] (0) to (9.center); \draw (17.center) to (24.center); \draw[out=180, in=90] (1) to (10.center); \draw (11.center) to (28.center); \draw (9.center) to (25.center); \draw (10.center) to (14.center); \draw (19.center) to (27.center); \draw[out=180, in=90] (0) to (8.center); \draw[out=0, in=90] (1) to (11.center); \draw (21.center) to (18.center); \draw[out=0, in=-90] (26) to (21.center); \draw (20.center) to (3.center); \draw[out=180, in=-90] (26) to (20.center); \end{pgfonlayer} \end{tikzpicture} \] The axioms of a GHZ/W-pair suffice to prove the following lemma for Hilbert spaces. \begin{lemma}[\cite{CK}]\label{lem:dot-lolli-2d} For a GHZ/W-pair on $H$ in ${\bf FHilb}$ with $\dim(H) \geq 2$, the points $\dotunit{small dot}$ and $\dottickunit{small dot}$ span a 2-dimensional space; hence for $H=\mathbb{C}^2$ the points $\dotunit{small dot}$ and $\dottickunit{small dot}$ form a basis. \end{lemma} Motivated by this fact, we assume the that $\{ \dotunit{small dot}, \dottickunit{small dot} \}$ forms a plugging set for $Q$. More explicitly: \[ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (-6, 0.75) {}; \node [style=dot] (1) at (-4.5, 0.75) {}; \node [style=dot] (2) at (-2.5, 0.75) {}; \node [style=dot] (3) at (-1, 0.75) {}; \node [style=none] (4) at (2, 0.75) {}; \node [style=none] (5) at (3.5, 0.75) {}; \node [style=square box] (6) at (-6, 0) {$f$}; \node [style=none] (7) at (-5.25, 0) {$=$}; \node [style=square box] (8) at (-4.5, 0) {$g$}; \node [style=none] (9) at (-3.5, 0) {$\wedge$}; \node [style=square box] (10) at (-2.5, 0) {$f$}; \node [style=none] (11) at (-1.75, 0) {$=$}; \node [style=square box] (12) at (-1, 0) {$g$}; \node [style=none] (13) at (0.5, 0) {$\Leftrightarrow$}; \node [style=square box] (14) at (2, 0) {$f$}; \node [style=none] (15) at (2.75, 0) {$=$}; \node [style=square box] (16) at (3.5, 0) {$g$}; \node [style=none] (17) at (-6, -0.75) {}; \node [style=none] (18) at (-4.5, -0.75) {}; \node [style=none] (19) at (-2.5, -0.75) {}; \node [style=none] (20) at (-1, -0.75) {}; \node [style=none] (21) at (2, -0.75) {}; \node [style=none] (22) at (3.5, -0.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (10) to (19.center); \draw (0) to (6); \draw (2) to node[tick]{-} (10); \draw (16) to (22.center); \draw (8) to (18.center); \draw (1) to (8); \draw (6) to (17.center); \draw (12) to (20.center); \draw (14) to (21.center); \draw (4.center) to (14); \draw (5.center) to (16); \draw (3) to node[tick]{-} (12); \end{pgfonlayer} \end{tikzpicture} \] \section{Arithmetic from a GHZ/W-pair}\label{sec:arithmetic} Given a GHZ/W-pair, we can extract an arithmetic system. First, we establish some preliminary results. \subsection{Properties of GHZ-phases} Below, all phases are GHZ-phases. 
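For orientation, it may help to keep the concrete structures of eqs.~(\ref{GHZ-SCFA}) and (\ref{W-ACFA}) in mind. There, by Proposition \ref{prop:phases-states}, the GHZ-phase determined by a point $\psi=\psi_0\ket{0}+\psi_1\ket{1}$ is
\[
\dotmult{small white dot}\circ(1\otimes\psi) \;=\; \psi_0\,\ket{0}\bra{0}+\psi_1\,\ket{1}\bra{1},
\]
that is, the diagonal matrix $\mathrm{diag}(\psi_0,\psi_1)$, so all identities proved below can be checked against such matrices.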
When relying on plugging, we have the following: \newcommand{\psidot}{ \raisebox{5mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (0, 0.5) {$\psi$}; \node [style=dot] (1) at (0, -0.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (1); \end{pgfonlayer} \end{tikzpicture}}} \newcommand{\psitickdot}{ \raisebox{5mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (0, 0.5) {$\psi$}; \node [style=dot] (1) at (0, -0.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to node[tick]{-} (1.center); \end{pgfonlayer} \end{tikzpicture}}} \newcommand{\smallpsidot}{ \raisebox{-2.5mm}{\begin{tikzpicture}[scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (0, 0.5) {\footnotesize$\!\psi$}; \node [style=dot] (1) at (0, -0.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (1); \end{pgfonlayer} \end{tikzpicture}}} \newcommand{\smallpsitickdot}{ \raisebox{-2.5mm}{\begin{tikzpicture}[scale=0.5] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (0, 0.5) {\footnotesize$\!\psi$}; \node [style=dot] (1) at (0, -0.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to node[tick]{-} (1.center); \end{pgfonlayer} \end{tikzpicture}}} \begin{theorem}\label{thmdelta_1} \[ \raisebox{-8mm}{ \psitickdot \ \begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (-0.5, 1.5) {}; \node [style=none] (1) at (0.5, 1.5) {}; \node [style=dot] (2) at (0, 0.5) {}; \node [style=white dot] (3) at (0, -0.5) {$\psi$}; \node [style=none] (4) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right=30] (0) to (2); \draw [bend right=30] (2) to (1); \draw (2) to (3.center); \draw (3.center) to (4); \end{pgfonlayer} \end{tikzpicture}} \ \ \stackrel{\delta_1}{=}\ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (-0.75, 1.5) {}; \node [style=none] (1) at (0.75, 1.5) {}; \node [style=white dot] (2) at (-0.75, 0.5) {$\psi$}; \node [style=white dot] (3) at (0.75, 0.5) {$\psi$}; \node [style=dot] (4) at (0, -0.5) {}; \node [style=none] (5) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (2.center); \draw [bend right] (2.center) to (4); \draw [bend right] (4) to (3.center); \draw (3.center) to (1); \draw (4) to (5); \end{pgfonlayer} \end{tikzpicture}} \] \end{theorem} \begin{proof} Plugging $\dotunit{small dot}$ to one input: \[ \raisebox{-8mm}{ \psitickdot \ \begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (-0.5, 1.25) {}; \node [style=none] (1) at (0.5, 1.5) {}; \node [style=dot] (2) at (0, 0.5) {}; \node [style=white dot] (3) at (0, -0.5) {$\psi$}; \node [style=none] (4) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right=30] (0) to (2); \draw [bend right=30] (2) to (1); \draw (2) to (3.center); \draw (3.center) to (4); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{ \psitickdot \ \begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=white dot] (3) at (0, 0) {$\psi$}; \node [style=none] (4) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (3.center); \draw (3.center) to (4); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{ \begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (-0.5, -0.5) {}; \node [style=none] (1) at 
(0.5, 1.5) {}; \node [style=dot] (2) at (0, -1) {}; \node [style=white dot] (3) at (0.5, 0) {$\psi$}; \node [style=none] (4) at (0, -1.5) {}; \node [style=white dot] (5) at (-0.5, 1) {$\psi$}; \node [style=dot] (6) at (-0.5, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right=30] (0) to (2); \draw [bend right=30] (2) to (3.center); \draw (1) to (3.center); \draw (2.center) to (4); \draw (5) to node[tick]{-} (6.center); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{ \begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0, 1.25) {}; \node [style=none] (1) at (0.75, 1.5) {}; \node [style=dot] (2) at (0, -1) {}; \node [style=white dot] (3) at (0.75, 0) {$\psi$}; \node [style=none] (4) at (0, -1.5) {}; \node [style=white dot] (5) at (-1.5, 1) {$\psi$}; \node [style=white dot] (6) at (-0.75, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right=30] (6) to (2); \draw [bend right=30] (5) to (6) (6) to (0); \draw [bend right=30] (2) to (3.center); \draw (1) to (3.center); \draw (2.center) to (4); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (-0.75, 1.25) {}; \node [style=none] (1) at (0.75, 1.5) {}; \node [style=white dot] (2) at (-0.75, 0.5) {$\psi$}; \node [style=white dot] (3) at (0.75, 0.5) {$\psi$}; \node [style=dot] (4) at (0, -0.5) {}; \node [style=none] (5) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (2.center); \draw [bend right] (2.center) to (4); \draw [bend right] (4) to (3.center); \draw (3.center) to (1); \draw (4) to (5); \end{pgfonlayer} \end{tikzpicture}} \] Plugging $\dottickunit{small dot}$ to one input of both sides: \[ \raisebox{-8mm}{ \psitickdot \ \begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (-0.5, 1.25) {}; \node [style=none] (1) at (0.5, 1.5) {}; \node [style=dot] (2) at (0, 0.5) {}; \node [style=white dot] (3) at (0, -0.5) {$\psi$}; \node [style=none] (4) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right=30] (0) to node[tick]{-} (2); \draw [bend right=30] (2) to (1); \draw (2) to (3.center); \draw (3.center) to (4); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{ \psitickdot \ \begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=dot] (1) at (0, 1) {}; \node [style=dot] (2) at (0, 0.5) {}; \node [style=white dot] (3) at (0, -0.5) {$\psi$}; \node [style=none] (4) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to node[tick]{-} (1) (2) to node[tick]{-} (3); \draw (3.center) to (4); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{ \psitickdot \ \begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0.5, 0.5) {}; \node [style=none] (1) at (0.5, 1.5) {}; \node [style=dot] (5) at (0.5, 0) {}; \node [style=white dot] (2) at (0, -1) {}; \node [style=white dot] (3) at (-0.5, 0) {$\psi$}; \node [style=none] (4) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to node[tick]{-} (1); \draw [bend right=30] (3.center) to (2.center); \draw [bend right=30] (2.center) to node[tick]{-} (5.center); \draw (2.center) to (4); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{ \psidot \ \psitickdot \ \begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=dot] (1) at (0, 
0.25) {}; \node [style=dot] (2) at (0, -0.25) {}; \node [style=none] (3) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to node[tick]{-} (1) (2) to node[tick]{-} (3); \end{pgfonlayer} \end{tikzpicture}} \] \[ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (-0.75, 1.5) {}; \node [style=none] (1) at (0.75, 1.5) {}; \node [style=white dot] (2) at (-0.75, 0.5) {$\psi$}; \node [style=white dot] (3) at (0.75, 0.5) {$\psi$}; \node [style=dot] (4) at (0, -0.5) {}; \node [style=none] (5) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to node[tick]{-} (2); \draw [bend right] (2.center) to (4); \draw [bend right] (4) to (3.center); \draw (3.center) to (1); \draw (4) to (5); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{ \begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (-0.5, 1.25) {}; \node [style=white dot] (1) at (-1.5, 1) {$\psi$}; \node [style=white dot] (2) at (-1, 0) {}; \node [style=dot] (3) at (0, -1) {}; \node [style=white dot] (4) at (1, 0) {}; \node [style=white dot] (5) at (0.5, 1.25) {$\psi$}; \node [style=none] (6) at (1.5, 1.5) {}; \node [style=none] (7) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right=30] (1) to (2) (2) to node[tick]{-} (0); \draw [bend right=30] (5) to (4) (4) to (6); \draw [bend right=30] (2) to (3) (3) to (4); \draw (7) to (3); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{ \psidot \ \begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (2) at (-1, 0) {}; \node [style=dot] (3) at (0, -1) {}; \node [style=white dot] (4) at (1, 0) {}; \node [style=white dot] (5) at (0.5, 1.25) {$\psi$}; \node [style=none] (6) at (1.5, 1.5) {}; \node [style=none] (7) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right=30] (5) to (4) (4) to (6); \draw [bend right=30] (2) to node[tick]{-} (3) (3) to (4); \draw (7) to (3); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{ \psidot \ \begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (4) at (0, 0) {}; \node [style=white dot] (5) at (-0.5, 1.25) {$\psi$}; \node [style=none] (6) at (0.5, 1.5) {}; \node [style=dot] (7) at (0, -0.5) {}; \node [style=dot] (8) at (0, -1) {}; \node [style=none] (9) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right=30] (5) to (4) (4) to (6); \draw (7) to node[tick]{-} (4) (8) to node[tick]{-} (9); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{ \psidot \ \psitickdot \ \begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=dot] (1) at (0, 0.25) {}; \node [style=dot] (2) at (0, -0.25) {}; \node [style=none] (3) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to node[tick]{-} (1) (2) to node[tick]{-} (3); \end{pgfonlayer} \end{tikzpicture}} \] \end{proof} \begin{theorem}\label{thmdelta_2} \[ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=white dot] (2) at (0, 0) {$\psi$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (2.center); \draw (2.center) to (1); \end{pgfonlayer} \end{tikzpicture}} \ \ \stackrel{\delta_2}{=}\ \ \raisebox{-8mm}{ \psitickdot \ \begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0, 1.5) 
{}; \node [style=none] (1) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0)--(1); \end{pgfonlayer} \end{tikzpicture}} \] \end{theorem} \begin{proof} \[ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=white dot] (2) at (0, 0) {$\psi$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (2.center); \draw (2.center) to (1); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=white dot] (2) at (0, 0) {}; \node [style=white dot] (3) at (-1, 1) {$\psi$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (2.center); \draw (2.center) to (1); \draw [bend right=30] (3) to (2); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{ \psitickdot \ \begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0)--(1); \end{pgfonlayer} \end{tikzpicture}} \] \end{proof} Note that for $\dotmult{small white dot}$-phases we have: \[ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=white dot] (2) at (0, 0) {$\frac{1}{\psi}$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (2.center); \draw (2.center) to (1); \end{pgfonlayer} \end{tikzpicture}} \ \ := \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=white dot] (2) at (0, 0) {$\psi$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to node[tick]{-} (2.center); \draw (2.center) to node[tick]{-} (1); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=white dot] (2) at (0, 0) {}; \node [style=white dot] (3) at (1, 1) {$\psi$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to node[tick]{-} (2.center); \draw (2.center) to node[tick]{-} (1); \draw [bend right=60] (2.center) to (3.center); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=white dot] (2) at (0, 0) {}; \node [style=white dot] (3) at (1, 1) {$\psi$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (2.center); \draw (2.center) to (1); \draw [bend right=60] (2.center) to node[tick]{-} (3.center); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=small white dot,font=\footnotesize] (3) at (1, 1) {${\tickpsi}$}; \node [style=none] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=white dot] (2) at (0, -0.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right=60] (2.center) to (3.center); \draw (1.center) to (2.center); \draw (0.center) to (2.center); \end{pgfonlayer} \end{tikzpicture}} \] The particular choice of notation $\frac{1}{\psi}$ is justified below, and will play a key role in this paper. 
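In the concrete structures on $\mathbb{C}^2$, where the $\psi$-phase of a point $\psi=\psi_0\ket{0}+\psi_1\ket{1}$ is $\mathrm{diag}(\psi_0,\psi_1)$ and the tick is the Pauli X, the $\frac{1}{\psi}$-phase is $X\,\mathrm{diag}(\psi_0,\psi_1)\,X=\mathrm{diag}(\psi_1,\psi_0)$; composing it with the $\psi$-phase gives $\psi_0\psi_1$ times the identity, which is invertible exactly when $\psi_0\psi_1\neq 0$. This is an instance of Theorem \ref{thmdelta_3} below.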
\begin{theorem}\label{thmdelta_3} \[ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=small white dot,font=\footnotesize] (2) at (0, 0.75) {$\psi$}; \node [style=small white dot,font=\footnotesize] (3) at (0, -0.5) {$\frac{1}{\psi}$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (2.center); \draw (2.center) to (3.center); \draw (3.center) to (1.center); \end{pgfonlayer} \end{tikzpicture}} \ \ \stackrel{\delta_3}{=}\ \ \raisebox{-8mm}{ \psidot \ \psitickdot \ \begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0)--(1); \end{pgfonlayer} \end{tikzpicture}} \] \end{theorem} \begin{proof} Plugging $\dotunit{small dot}$ into the input: \[ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=small white dot,font=\footnotesize] (2) at (0, 0.75) {$\psi$}; \node [style=small white dot,font=\footnotesize] (3) at (0, -0.5) {$\frac{1}{\psi}$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (2.center); \draw (2.center) to (3.center); \draw (3.center) to (1.center); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=white dot] (2) at (0, 0.75) {}; \node [style=white dot] (3) at (0, -0.5) {}; \node [style=white dot] (4) at (-1, 1.25) {$\psi$}; \node [style=white dot] (5) at (-1, 0) {$\psi$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (2.center); \draw (2.center) to (3.center); \draw (3.center) to (1.center); \draw [bend right=30] (4) to (2) (5) to node[tick]{-} (3); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{ \psidot \ \psitickdot \ \begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0)--(1); \end{pgfonlayer} \end{tikzpicture}} \] Plugging $\dottickunit{small dot}$ into the input: \[ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=small white dot,font=\footnotesize] (2) at (0, 0.75) {$\psi$}; \node [style=small white dot,font=\footnotesize] (3) at (0, -0.5) {$\frac{1}{\psi}$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to node[tick]{-} (2); \draw (2.center) to (3.center); \draw (3.center) to (1.center); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=white dot] (2) at (0, 0.75) {}; \node [style=white dot] (3) at (0, -0.5) {}; \node [style=white dot] (4) at (-1, 1.25) {$\psi$}; \node [style=white dot] (5) at (-1, 0) {$\psi$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to node[tick]{-} (2); \draw (2.center) to (3.center); \draw (3.center) to (1.center); \draw [bend right=30] (4) to (2) (5) to node[tick]{-} (3); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{ \psidot \ \psitickdot \ \begin{tikzpicture}[scale=0.6] 
\begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) -- node[tick]{-} (1); \end{pgfonlayer} \end{tikzpicture}} \] \end{proof} In a setting like ${\bf FHilb}_p$, where we ignore cancelable scalar multipliers, and provided that the scalars $\smallpsidot$ and $\smallpsitickdot$ in the equation are cancelable, eqs.~($\delta_1$, $\delta_2$, $\delta_3$) simplify to: \[ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (-0.5, 1.5) {}; \node [style=none] (1) at (0.5, 1.5) {}; \node [style=dot] (2) at (0, 0.5) {}; \node [style=white dot] (3) at (0, -0.5) {$\psi$}; \node [style=none] (4) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right=30] (0) to (2); \draw [bend right=30] (2) to (1); \draw (2) to (3.center); \draw (3.center) to (4); \end{pgfonlayer} \end{tikzpicture}} \ \ \stackrel{\delta_1}{=}\ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (-0.75, 1.5) {}; \node [style=none] (1) at (0.75, 1.5) {}; \node [style=white dot] (2) at (-0.75, 0.5) {$\psi$}; \node [style=white dot] (3) at (0.75, 0.5) {$\psi$}; \node [style=dot] (4) at (0, -0.5) {}; \node [style=none] (5) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (2.center); \draw [bend right] (2.center) to (4); \draw [bend right] (4) to (3.center); \draw (3.center) to (1); \draw (4) to (5); \end{pgfonlayer} \end{tikzpicture}} \qquad \qquad \qquad \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=white dot] (2) at (0, 0) {$\psi$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (2.center); \draw (2.center) to (1); \end{pgfonlayer} \end{tikzpicture}} \ \ \stackrel{\delta_2}{=}\ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0)--(1); \end{pgfonlayer} \end{tikzpicture}} \qquad \qquad \qquad \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=small white dot,font=\footnotesize] (2) at (0, 0.75) {$\psi$}; \node [style=small white dot,font=\footnotesize] (3) at (0, -0.5) {$\frac{1}{\psi}$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (2.center); \draw (2.center) to (3.center); \draw (3.center) to (1.center); \end{pgfonlayer} \end{tikzpicture}} \ \ \stackrel{\delta_3}{=}\ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0)--(1); \end{pgfonlayer} \end{tikzpicture}} \] Examples of phases for which some of these simplified equations fail to hold are: \[ \begin{tikzpicture}[scale=0.4] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (-7, 1.5) {}; \node [style=none] (1) at (-3.25, 1.5) {}; \node [style=none] (2) at (3.25, 1.5) {}; \node [style=none] (3) at (7, 1.5) {}; \node [style=dot] (4) at (-6, 0.75) {}; \node [style=dot] (5) at (4.25, 0.75) {}; \node [style=dot] (6) at (-3.25, 0.5) {}; \node [style=dot] (7) at (7, 0.5) {}; \node [style=none] (8) at (-4.75, 0) {$=$}; 
\node [style=none] (9) at (5.5, 0) {$=$}; \node [style=white dot] (10) at (-7, -0.5) {}; \node [style=dot] (11) at (-3.25, -0.5) {}; \node [style=white dot] (12) at (3.25, -0.5) {}; \node [style=dot] (13) at (7, -0.5) {}; \node [style=none] (14) at (-7, -1.5) {}; \node [style=none] (15) at (-3.25, -1.5) {}; \node [style=none] (16) at (3.25, -1.5) {}; \node [style=none] (17) at (7, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (10); \draw (11) to (15.center); \draw (16.center) to (12); \draw (14.center) to (10); \draw (17.center) to node[tick]{-} (13); \draw[bend right=45, looseness=0.75] (10) to (4); \draw[bend right=45, looseness=0.75] (12) to node[tick]{-} (5); \draw (1.center) to node[tick]{-} (6); \draw (7) to (3.center); \draw (2.center) to (12); \end{pgfonlayer} \end{tikzpicture} \] \subsection{Natural Number Arithmetic}\label{sec:natural-numbers} Assume that we are given a GHZ/W-pair. In particular, we have two internal commutative monoids $(\dotmult{small dot}, \dotunit{small dot})$ and $(\dotmult{small white dot}, \dotunit{small white dot})$. We will now consider the induced commutative monoids on elements: \[ \left( \raisebox{-3mm}{\begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-0.5, 1.25) {$\psi$}; \node [style=none] (1) at (-0.5, 0.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (1.center); \end{pgfonlayer} \end{tikzpicture} \raisebox{1.5mm}{,} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-0.5, 1.25) {$\phi$}; \node [style=none] (1) at (-0.5, 0.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (1.center); \end{pgfonlayer} \end{tikzpicture}} \right) \mapsto \raisebox{-4mm}{\begin{tikzpicture}[scale=0.8] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-0.5, 1.25) {$\psi$}; \node [style=white dot] (1) at (0.5, 1.25) {$\phi$}; \node [style=dot] (2) at (0, 0.75) {}; \node [style=none] (3) at (0, 0.25) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (2); \draw (1) to (2); \draw (2) to (3.center); \end{pgfonlayer} \end{tikzpicture}} \qquad\qquad \left( \raisebox{-3mm}{\begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-0.5, 1.25) {$\psi$}; \node [style=none] (1) at (-0.5, 0.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (1.center); \end{pgfonlayer} \end{tikzpicture} \raisebox{1.5mm}{,} \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-0.5, 1.25) {$\phi$}; \node [style=none] (1) at (-0.5, 0.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (1.center); \end{pgfonlayer} \end{tikzpicture}} \right) \mapsto \raisebox{-4mm}{\begin{tikzpicture}[scale=0.8] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-0.5, 1.25) {$\psi$}; \node [style=white dot] (1) at (0.5, 1.25) {$\phi$}; \node [style=white dot] (2) at (0, 0.75) {}; \node [style=none] (3) at (0, 0.25) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (2); \draw (1) to (2); \draw (2) to (3.center); \end{pgfonlayer} \end{tikzpicture}} \] We will call $\dotmult{small dot}$ applied to elements \em addition \em and $\dotmult{small white dot}$ applied to elements \em multiplication\em, for reasons that will become apparent shortly. Similarly, we call $\dotunit{small dot}$ the \em unit for addition \em and $\dotunit{small white dot}$ the \em unit for multiplication\em. 
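In the concrete structures on $\mathbb{C}^2$, these operations take an explicit form: for points $\psi=a\ket{0}+b\ket{1}$ and $\phi=c\ket{0}+d\ket{1}$,
\[
\dotmult{small dot}\circ(\psi\otimes\phi) \;=\; (ad+bc)\,\ket{0}+bd\,\ket{1},
\qquad\qquad
\dotmult{small white dot}\circ(\psi\otimes\phi) \;=\; ac\,\ket{0}+bd\,\ket{1},
\]
with units $\dotunit{small dot}=\ket{1}$ and $\dotunit{small white dot}=\ket{0}+\ket{1}$. The resemblance of the first formula to the usual rule for adding fractions is no coincidence, as the encodings below will make precise.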
By Theorem \ref{thmdelta_1} we have a distributivity law, up to a scalar, partially explaining our choices of the names addition and multiplication for the monoids: \begin{corollary}\label{cor-dist} \[ \begin{tikzpicture}[scale=1] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-2.25, 1) {$\phi$}; \node [style=white dot] (1) at (-1.25, 1) {$\varphi$}; \node [style=white dot] (2) at (0.5, 1) {$\psi$}; \node [style=white dot] (3) at (1.25, 1) {$\phi$}; \node [style=white dot] (4) at (2.25, 1) {$\psi$}; \node [style=white dot] (5) at (3, 1) {$\varphi$}; \node [style=white dot] (6) at (-2.75, 0.5) {$\psi$}; \node [style=dot] (7) at (-1.75, 0.5) {}; \node [style=white dot] (8) at (1.25, 0.5) {}; \node [style=white dot] (9) at (2.25, 0.5) {}; \node [style=white dot] (10) at (-3.5, 0.25) {$\psi$}; \node [style=white dot] (11) at (-2.25, 0) {}; \node [style=none] (12) at (-0.25, 0) {$=$}; \node [style=dot] (13) at (1.75, 0) {}; \node [style=dot] (14) at (-3.5, -0.25) {}; \node [style=none] (15) at (-2.25, -0.5) {}; \node [style=none] (16) at (1.75, -0.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (7) to (11); \draw (3) to (8); \draw (11) to (15.center); \draw (9) to (13); \draw (10) to node[tick]{-} (14); \draw (6) to (11); \draw (2) to (8); \draw (8) to (13); \draw (1) to (7); \draw (4) to (9); \draw (0) to (7); \draw (13) to (16.center); \draw (5) to (9); \end{pgfonlayer} \end{tikzpicture} \] \end{corollary} Moreover, we can use these to do concrete arithmetic on the natural numbers. We start by defining an encoding for the natural numbers: \[ \begin{tikzpicture}[scale=1] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-5.75, -1) {$0$}; \node [style=dot] (1) at (-3.75, -1) {}; \node [style=white dot] (2) at (-0.75, -1) {$n\!+\!1$}; \node [style=white dot] (3) at (1.25, -1) {$n$}; \node [style=white dot] (4) at (2.25, -1) {}; \node [style=none] (5) at (-4.75, -1.5) {$=$}; \node [style=none] (6) at (0.25, -1.5) {$=$}; \node [style=dot] (7) at (1.75, -1.5) {}; \node [style=none] (8) at (-5.75, -2) {}; \node [style=none] (9) at (-3.75, -2) {}; \node [style=none] (10) at (-0.75, -2) {}; \node [style=none] (11) at (1.75, -2) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (4) to (7); \draw (1) to (9.center); \draw (3) to (7); \draw (0) to (8.center); \draw (7) to (11.center); \draw (2) to (10.center); \end{pgfonlayer} \end{tikzpicture} \] From hence forth, we shall assume we are working in a category with no non-trivial invertible (i.e. non-zero) scalars, such as ${\bf FHilb}_p$. Thus, we shall drop any invertible scalars. Furthermore, we shall assume scalar (i.) is invertible for all $n$ and scalar (ii.) is invertible for all $n \neq 0$: \[ \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-1, 0.5) {$n$}; \node [style=white dot] (1) at (2, 0.5) {$n$}; \node [style=none] (2) at (-2, 0) {(i.)}; \node [style=none] (3) at (1, 0) {(ii.)}; \node [style=dot] (4) at (-1, -0.5) {}; \node [style=dot] (5) at (2, -0.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (1) to (5); \draw (0) to node[tick]{-} (4); \end{pgfonlayer} \end{tikzpicture} \] That $\dotmult{small dot}$ is the normal addition operation for these numbers follows immediately from their definition and associativity of $\dotmult{small dot}$. 
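Concretely, for the structures of eqs.~(\ref{GHZ-SCFA}) and (\ref{W-ACFA}), an easy induction shows that the point encoding $n$ is $n\,\ket{0}+\ket{1}$: the encoding of $0$ is $\dotunit{small dot}=\ket{1}$, and
\[
\dotmult{small dot}\circ\big((n\ket{0}+\ket{1})\otimes(\ket{0}+\ket{1})\big) \;=\; (n+1)\,\ket{0}+\ket{1}.
\]
The scalar (i.) then evaluates to $1$ and the scalar (ii.) to $n$, matching the invertibility assumptions just made, and
\[
\dotmult{small dot}\circ\big((n\ket{0}+\ket{1})\otimes(m\ket{0}+\ket{1})\big) \;=\; (n+m)\,\ket{0}+\ket{1},
\]
so $\dotmult{small dot}$ acts as ordinary addition on the encoded numbers.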
We can also show that $\dotmult{small white dot}$ is the normal multiplication operation (noting first that the encoding of $1$ is $\dotunit{small white dot}$, and hence the unit of $\dotmult{small white dot}$): \[ \raisebox{-8mm}{\begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-0.5, 1.25) {$n$}; \node [style=white dot] (1) at (0.5, 1.25) {$m$}; \node [style=white dot] (2) at (0, 0.75) {}; \node [style=none] (3) at (0, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (2.center); \draw (1.center) to (2.center); \draw (2.center) to (3); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{\begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0.5, 1.75) {$\mdots$}; \node [style=none] (4) at (0.5, 1.25) {}; \node [style=white dot] (1) at (-0.5, 1.25) {$n$}; \node [style=white dot] (2) at (0, 0.75) {}; \node [style=none] (3) at (0, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (4) to (2); \draw (1) to (2); \draw (2) to (3); \end{pgfonlayer} \end{tikzpicture}} \ \ \stackrel{\delta_1}{=}\ \ \raisebox{-8mm}{$\overbrace{ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-1, 2) {$n$}; \node [style=white dot] (1) at (-0.5, 2) {}; \node [style=white dot] (2) at (-0.75, 1.5) {}; \node [style=white dot] (3) at (1, 2) {}; \node [style=white dot] (4) at (0.5, 2) {$n$}; \node [style=white dot] (5) at (0.75, 1.5) {}; \node [style=none] (6) at (0, 1.5) {$\ldots$}; \node [style=dot] (7) at (0, 0.75) {}; \node [style=none] (8) at (0, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (2); \draw (1) to (2); \draw (3) to (5); \draw (4) to (5); \draw (7) to (2); \draw (7) to (5); \draw (7) to (8); \end{pgfonlayer} \end{tikzpicture}}^m$} \ \ = \ \ \raisebox{-8mm}{$\overbrace{ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-0.75, 1.5) {$n$}; \node [style=white dot] (1) at (0.75, 1.5) {$n$}; \node [style=none] (2) at (0, 1.25) {$\ldots$}; \node [style=dot] (3) at (0, 0.75) {}; \node [style=none] (4) at (0, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (3); \draw (1) to (3); \draw (3) to (4); \end{pgfonlayer} \end{tikzpicture}}^m$} \ \ = \ \ \raisebox{-8mm}{ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (0, 1.25) {$n m$}; \node [style=none] (1) at (0, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (1); \end{pgfonlayer} \end{tikzpicture}} \] The distributivity law stated earlier now translates into the normal distributivity of multiplication over addition in the natural numbers, up to a scalar. 
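In the same concrete encoding, $\dotmult{small white dot}\circ\big((n\ket{0}+\ket{1})\otimes(m\ket{0}+\ket{1})\big)=nm\,\ket{0}+\ket{1}$, in agreement with the derivation above.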
\subsection{Multiplicative inverses}\label{sec:rationals} By Theorem \ref{thmdelta_3} we have that $\tick$ is also an inverse for $\dotmult{small white dot}$, up to a scalar: \begin{corollary}\label{cor-inverse} \[ \begin{tikzpicture}[scale=1] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-2.75, 0.5) {$\psi$}; \node [style=white dot] (1) at (-1.75, 0.5) {$\psi$}; \node [style=white dot] (2) at (1.5, 0.5) {}; \node [style=white dot] (3) at (0, 0.25) {$\psi$}; \node [style=white dot] (4) at (0.75, 0.25) {$\psi$}; \node [style=white dot] (5) at (-2.25, 0) {}; \node [style=none] (6) at (-1, 0) {$=$}; \node [style=dot] (7) at (0, -0.25) {}; \node [style=dot] (8) at (0.75, -0.25) {}; \node [style=none] (9) at (-2.25, -0.5) {}; \node [style=none] (10) at (1.5, -0.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (1) to node[tick]{-} (5); \draw (0) to (5); \draw (3) to (7); \draw (5) to (9.center); \draw (4) to node[tick]{-} (8); \draw (2) to (10.center); \end{pgfonlayer} \end{tikzpicture} \] \end{corollary} Hence, we have an encoding for the multiplicative inverses of the natural numbers: \[ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (0.5, 1) {$\frac{1}{n}$}; \node [style=none] (1) at (0.5, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (1.center); \end{pgfonlayer} \end{tikzpicture} \ \ \, \raisebox{6.5mm}{:=}\, \overbrace{ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (0.75, 1.5) {}; \node [style=none] (1) at (1.25, 1.5) {$\ldots$}; \node [style=white dot] (2) at (1.75, 1.5) {}; \node [style=dot] (3) at (1.25, 1) {}; \node [style=none] (4) at (1.25, 0.25) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (2) to (3); \draw (0) to (3); \draw (3) to node[tick]{-} (4.center); \end{pgfonlayer} \end{tikzpicture}}^n \, \raisebox{6.5mm}{=}\, \ \ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (0.5, 1) {$n$}; \node [style=none] (1) at (0.5, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to node[tick]{-} (1.center); \end{pgfonlayer} \end{tikzpicture} \] Thoughtout this paper, we shall assume any natural number occurring in the denominator is not equal to $0$. This allows us to encode positive fractions in the following form: \[ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (0.5, -1.25) {$n$}; \node [style=white dot] (1) at (2, -1.25) {$\frac{1}{m}$}; \node [style=white dot] (2) at (3.5, -1.25) {$\frac{1}{m}$}; \node [style=white dot] (3) at (5, -1.25) {$n$}; \node [style=white dot] (4) at (-1.25, -1.75) {$\frac{n}{m}$}; \node [style=none] (5) at (-0.25, -2) {$=$}; \node [style=white dot] (6) at (1.25, -2) {}; \node [style=none] (7) at (2.75, -2) {$=$}; \node [style=white dot] (8) at (4.25, -2) {}; \node [style=none] (9) at (-1.25, -2.75) {}; \node [style=none] (10) at (1.25, -2.75) {}; \node [style=none] (11) at (4.25, -2.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (2) to (8); \draw (4) to (9.center); \draw (8) to (11.center); \draw (0) to (6); \draw (3) to (8); \draw (6) to (10.center); \draw (1) to (6); \end{pgfonlayer} \end{tikzpicture} \] where the second equality follows from associativity and commutativity of the GHZ-structure. \begin{remark} It should be noted that the construction of this encoding depends on the choice of numerator and denominator, and not just on the rational number being represented. 
Therefore, we should demonstrate that the actual point depends only on the number represented. We start by noting that, by corollary \ref{cor-inverse}, \[ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (0, 0.75) {$n$}; \node [style=white dot] (1) at (1.5, 0.75) {$\frac{1}{n}$}; \node [style=white dot] (2) at (-1.75, 0.25) {$\frac{n}{n}$}; \node [style=white dot] (3) at (3.75, 0.25) {}; \node [style=none] (4) at (-0.75, -0) {$=$}; \node [style=white dot] (5) at (0.75, -0) {}; \node [style=none] (6) at (2.25, -0) {$=$}; \node [style=none] (7) at (-1.75, -0.75) {}; \node [style=none] (8) at (0.75, -0.75) {}; \node [style=none] (9) at (3.75, -0.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (2) to (7.center); \draw (3) to (9.center); \draw (0) to (5); \draw (5) to (8.center); \draw (1) to (5); \end{pgfonlayer} \end{tikzpicture} \] where we have ignored the scalar in corollary \ref{cor-inverse}, since it is cancellable for the points that we have used to encode the natural numbers. We know that if $\frac{n}{m} = \frac{n'}{m'}$, there is a $p$ such that $n = n'p$ and $m = m'p$, so it follows easily that \[ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-5, 0.75) {\scriptsize$n$}; \node [style=white dot] (1) at (-3.5, 0.75) {$\frac{1}{m}$}; \node [style=white dot] (2) at (-2.25, 0.75) {\scriptsize$n'p$}; \node [style=white dot] (3) at (-0.75, 0.75) {$\frac{1}{m'p}$}; \node [style=white dot] (4) at (0.5, 0.75) {\scriptsize$n'$}; \node [style=white dot] (5) at (1.25, 0.75) {\scriptsize$p$}; \node [style=white dot] (6) at (2, 0.75) {$\frac{1}{m'}$}; \node [style=white dot] (7) at (3, 0.75) {$\frac{1}{p}$}; \node [style=white dot] (8) at (4.25, 0.75) {$\frac{n'}{m'}$}; \node [style=white dot] (9) at (5.75, 0.75) {$\frac{p}{p}$}; \node [style=white dot] (10) at (-6.25, 0.25) {$\frac{n}{m}$}; \node [style=white dot] (11) at (6.75, 0.25) {$\frac{n'}{m'}$}; \node [style=none] (12) at (-5.5, -0) {$=$}; \node [style=white dot] (13) at (-4.25, -0) {}; \node [style=none] (14) at (-2.75, -0) {$=$}; \node [style=white dot] (15) at (-1.5, -0) {}; \node [style=none] (16) at (0, -0) {$=$}; \node [style=white dot] (17) at (1.75, 0) {}; \node [style=none] (18) at (3.5, -0) {$=$}; \node [style=white dot] (19) at (5, -0) {}; \node [style=none] (20) at (6, -0) {$=$}; \node [style=none] (21) at (-6.25, -0.75) {}; \node [style=none] (22) at (-4.25, -0.75) {}; \node [style=none] (23) at (-1.5, -0.75) {}; \node [style=none] (24) at (1.75, -0.75) {}; \node [style=none] (25) at (5, -0.75) {}; \node [style=none] (26) at (6.75, -0.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (7) to (17); \draw (9) to (19); \draw (4) to (17); \draw (1) to (13); \draw (17) to (24.center); \draw (19) to (25.center); \draw (8) to (19); \draw (6) to (17); \draw (5) to (17); \draw (2) to (15); \draw (10) to (21.center); \draw (11) to (26.center); \draw (0) to (13); \draw (13) to (22.center); \draw (3) to (15); \draw (15) to (23.center); \end{pgfonlayer} \end{tikzpicture} \] \end{remark} \begin{example}\label{ex:invfac} All the usual properties of fractions follow in a straightforward manner from the axioms of rational arithmetic that we have proved. 
For example, \[ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (0.75, 1.5) {$\frac{n}{m}$}; \node [style=white dot] (1) at (2.25, 1.5) {$\frac{n'}{m'}$}; \node [style=white dot] (2) at (4.25, 1) {$\frac{n n'}{m m'}$}; \node [style=white dot] (3) at (1.5, 0.75) {}; \node [style=none] (4) at (3.25, 0.75) {$=$}; \node [style=none] (5) at (1.5, 0) {}; \node [style=none] (6) at (4.25, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (3); \draw (1) to (3); \draw (2) to (6.center); \draw (3) to (5.center); \end{pgfonlayer} \end{tikzpicture} \] is immediate from the associativity commutativity of the GHZ-structure, and we also have: \[ \raisebox{-8mm}{\begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=small white dot] (0) at (-0.5, 1.5) {$\frac{n}{m}$}; \node [style=small white dot] (1) at (0.5, 1.5) {$\frac{n'}{m'}$}; \node [style=dot] (3) at (0, 0.5) {}; \node [style=none] (4) at (0, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (3.center); \draw (1.center) to (3.center); \draw (3.center) to (4); \end{pgfonlayer} \end{tikzpicture}} \ \stackrel{\delta_3}{=}\ \raisebox{-8mm}{\begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=small white dot] (0) at (-1.5, 2) {$\frac{n}{m}$}; \node [style=small white dot] (1) at (1.5, 2) {$\frac{n'}{m'}$}; \node [style=small white dot] (2) at (-0.5, 2) {$\frac{m'}{m'}$}; \node [style=small white dot] (3) at (0.5, 2) {$\frac{m}{m}$}; \node [style=white dot] (4) at (-1, 1) {}; \node [style=white dot] (5) at (1, 1) {}; \node [style=dot] (6) at (0, 0.5) {}; \node [style=none] (7) at (0, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (4.center); \draw (2.center) to (4.center); \draw (1.center) to (5.center); \draw (3.center) to (5.center); \draw (4.center) to (6.center); \draw (5.center) to (6.center); \draw (6.center) to (7); \end{pgfonlayer} \end{tikzpicture}} \ = \ \raisebox{-8mm}{\begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=small white dot] (0) at (-1.5, 2) {\scriptsize$n m'$}; \node [style=small white dot] (1) at (1.5, 2) {\scriptsize$m n'$}; \node [style=small white dot] (2) at (-0.5, 2) {$\frac{1}{m m'}$}; \node [style=small white dot] (3) at (0.5, 2) {$\frac{1}{m m'}$}; \node [style=white dot] (4) at (-1, 1) {}; \node [style=white dot] (5) at (1, 1) {}; \node [style=dot] (6) at (0, 0.5) {}; \node [style=none] (7) at (0, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (4.center); \draw (2.center) to (4.center); \draw (1.center) to (5.center); \draw (3.center) to (5.center); \draw (4.center) to (6.center); \draw (5.center) to (6.center); \draw (6.center) to (7); \end{pgfonlayer} \end{tikzpicture}} \ \stackrel{\delta_1}{=}\ \raisebox{-8mm}{\begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=small white dot] (0) at (-0.5, 2) {\scriptsize$n m'$}; \node [style=small white dot] (1) at (0.5, 2) {\scriptsize$m n'$}; \node [style=small white dot] (2) at (0.75, 1) {$\frac{1}{m m'}$}; \node [style=dot] (3) at (0, 1.25) {}; \node [style=white dot] (4) at (0, 0.5) {}; \node [style=none] (5) at (0, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (3.center); \draw (1.center) to (3.center); \draw (3.center) to (4.center); \draw (4.center) to (2.center); \draw (4.center) to (5); \end{pgfonlayer} \end{tikzpicture}} \ = \ \raisebox{-8mm}{ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=small white dot] (0) at (0, 1.5) {$\frac{n m' + m n'}{m m'}$}; \node 
[style=none] (1) at (0, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (1); \end{pgfonlayer} \end{tikzpicture}} \] \end{example} where we used distributivity and ignored the scalars; this is fine as the scalars in corollary \ref{cor-dist} are cancellable for all the points we are considering. \section{Additive inverses}\label{sec:additive-inverses} The last thing we need to actually produce a field of fractions is additive inverses. Suppose we have an involutive operation $\cross$ that is a phase for $\dotmult{small white dot}$: \[ \begin{tikzpicture}[dotpic] \node [style=none] (0) at (0,-1) {}; \node [style=white dot] (1) at (0,-0.25) {}; \node [style=none] (2) at (-0.5,0.5) {}; \node [style=none] (3) at (0.5,0.5) {}; \draw (0) to node[pauli z]{$\times$} (1) (1) to (2) (1) to (3); \end{tikzpicture} \ \ = \ \ \begin{tikzpicture}[dotpic] \node [style=none] (0) at (0,-1) {}; \node [style=white dot] (1) at (0,-0.25) {}; \node [style=none] (2) at (-0.5,0.5) {}; \node [style=none] (3) at (0.5,0.5) {}; \draw (0) to (1) (1) to node[pauli z]{$\times$} (2) (1) to (3); \end{tikzpicture} \] Suppose further that $\{ \dotunit{small white dot},\ \dotcrossunit{small white dot} \}$ forms a plugging set: \[ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-6, 0.75) {}; \node [style=white dot] (1) at (-4.5, 0.75) {}; \node [style=white dot] (2) at (-2.5, 0.75) {}; \node [style=white dot] (3) at (-1, 0.75) {}; \node [style=none] (4) at (2, 0.75) {}; \node [style=none] (5) at (3.5, 0.75) {}; \node [style=square box] (6) at (-6, 0) {$f$}; \node [style=none] (7) at (-5.25, 0) {$=$}; \node [style=square box] (8) at (-4.5, 0) {$g$}; \node [style=none] (9) at (-3.5, 0) {$\wedge$}; \node [style=square box] (10) at (-2.5, 0) {$f$}; \node [style=none] (11) at (-1.75, 0) {$=$}; \node [style=square box] (12) at (-1, 0) {$g$}; \node [style=none] (13) at (0.5, 0) {$\Leftrightarrow$}; \node [style=square box] (14) at (2, 0) {$f$}; \node [style=none] (15) at (2.75, 0) {$=$}; \node [style=square box] (16) at (3.5, 0) {$g$}; \node [style=none] (17) at (-6, -0.75) {}; \node [style=none] (18) at (-4.5, -0.75) {}; \node [style=none] (19) at (-2.5, -0.75) {}; \node [style=none] (20) at (-1, -0.75) {}; \node [style=none] (21) at (2, -0.75) {}; \node [style=none] (22) at (3.5, -0.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (14) to (21.center); \draw (8) to (18.center); \draw (4.center) to (14); \draw (0) to (6); \draw (6) to (17.center); \draw (12) to (20.center); \draw (16) to (22.center); \draw (5.center) to (16); \draw (2) to node[pauli z]{$\times$} (10); \draw (1) to (8); \draw (10) to (19.center); \draw (3) to node[pauli z]{$\times$} (12); \end{pgfonlayer} \end{tikzpicture} \] Then this will act as an additive inverse operation. 
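As a sanity check of this last assumption (the observation is not needed for the proofs below): in the ${\bf FHilb}$ interpretation recalled at the end of this section, where $\cross$ is the Pauli Z gate multiplied by $-1$, the two plugged points are $\dotunit{small white dot} = \ket{0} + \ket{1}$ and $\dotcrossunit{small white dot} = \ket{1} - \ket{0}$. These two vectors are linearly independent and hence span the qubit space, and since a linear map is determined by its values on a spanning set of each input, the plugging-set condition indeed holds in that model.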
\begin{lemma}\label{lem:pauli-z-homom} \[ \begin{tikzpicture}[dotpic] \node [style=none] (0) at (0,-1) {}; \node [style=dot] (1) at (0,-0.25) {}; \node [style=none] (2) at (-0.5,0.5) {}; \node [style=none] (3) at (0.5,0.5) {}; \draw (0) to node[pauli z]{$\times$} (1) (1) to (2) (1) to (3); \end{tikzpicture} \ \ = \ \ \begin{tikzpicture}[dotpic] \node [style=none] (0) at (0,-1) {}; \node [style=dot] (1) at (0,-0.25) {}; \node [style=none] (2) at (-0.5,0.5) {}; \node [style=none] (3) at (0.5,0.5) {}; \draw (0) to (1) (1) to node[pauli z]{$\times$} (2) (1) to node[pauli z]{$\times$} (3); \end{tikzpicture} \] \end{lemma} \begin{proof} \[ \begin{tikzpicture}[dotpic] \node [style=none] (0) at (0.5,1) {}; \node [style=none] (1) at (-0.5,1) {}; \node [style=dot] (2) at (0,0.25) {}; \node [style=none] (3) at (0,-0.5) {}; \draw (0) to node[pauli z]{$\times$} (2) (1) to node[pauli z]{$\times$} (2) (2) to (3); \end{tikzpicture} \ \ = \ \ \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (-0.5, 1.25) {}; \node [style=none] (1) at (0.5, 1.25) {}; \node [style=white dot] (2) at (-1.25, 1) {}; \node [style=white dot] (3) at (1.25, 1) {}; \node [style=white dot] (4) at (-0.5, 0.5) {}; \node [style=white dot] (5) at (0.5, 0.5) {}; \node [style=dot] (6) at (0, 0) {}; \node [style=none] (7) at (0, -0.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw[bend right] (2) to (4); \draw (4) to (0.center); \draw[bend right] (5) to (3); \draw (4) to node[pauli z]{$\times$} (6); \draw (5) to node[pauli z]{$\times$} (6); \draw (6) to (7.center); \draw (5) to (1.center); \end{pgfonlayer} \end{tikzpicture} \ \ = \ \ \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (-0.5, 1.25) {}; \node [style=none] (1) at (0.5, 1.25) {}; \node [style=white dot] (2) at (-1.25, 1) {}; \node [style=white dot] (3) at (1.25, 1) {}; \node [style=white dot] (4) at (-0.5, 0.5) {}; \node [style=white dot] (5) at (0.5, 0.5) {}; \node [style=dot] (6) at (0, 0) {}; \node [style=none] (7) at (0, -0.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (6) to (7.center); \draw (4) to (0.center); \draw[bend right] (5) to node[pauli z]{$\times$} (3); \draw (4) to (6); \draw (5) to (1.center); \draw[bend right] (2) to node[pauli z]{$\times$} (4); \draw (5) to (6); \end{pgfonlayer} \end{tikzpicture} \ \ = \ \ \begin{tikzpicture}[dotpic] \node [style=none] (0) at (0.5,1) {}; \node [style=none] (1) at (-0.5,1) {}; \node [style=dot] (2) at (0,0.5) {}; \node [style=white dot] (3) at (0,0) {}; \node [style=white dot] (4) at (0.5,0.5) {}; \node [style=none] (5) at (0,-0.5) {}; \draw (0) to (2) (1) to (2) (2) to (3) (3) to (5); \draw [bend right] (3) to node[pauli z]{$\times$} (4); \end{tikzpicture} \ \ = \ \ \begin{tikzpicture}[dotpic] \node [style=none] (0) at (0.5,1) {}; \node [style=none] (1) at (-0.5,1) {}; \node [style=dot] (2) at (0,0.5) {}; \node [style=white dot] (3) at (0,-0.25) {}; \node [style=white dot] (4) at (0.5,0.25) {}; \node [style=none] (5) at (0,-0.75) {}; \draw (0) to (2) (1) to (2) (2) to node[pauli z]{$\times$} (3) (3) to (5); \draw [bend right] (3) to (4); \end{tikzpicture} \ \ = \ \ \begin{tikzpicture}[dotpic] \node [style=none] (0) at (0.5,1) {}; \node [style=none] (1) at (-0.5,1) {}; \node [style=dot] (2) at (0,0.25) {}; \node [style=none] (3) at (0,-0.5) {}; \draw (0) to (2) (1) to (2) (2) to node[pauli z]{$\times$} (3); \end{tikzpicture} \] \end{proof} \begin{lemma}\label{lem:pauli-z-zero} \[ \begin{tikzpicture}[dotpic] \node [style=dot] (0) at 
(0,0.25) {}; \node [style=none] (1) at (0,-0.5) {}; \draw (0) to node[pauli z]{$\times$} (1); \end{tikzpicture} \ \ = \ \ \begin{tikzpicture}[dotpic] \node [style=dot] (0) at (0,0.25) {}; \node [style=none] (1) at (0,-0.5) {}; \draw (0) to (1); \end{tikzpicture} \] \end{lemma} \begin{proof} \[ \begin{tikzpicture}[dotpic] \node [style=dot] (0) at (0,0.25) {}; \node [style=none] (1) at (0,-0.5) {}; \draw (0) to node[pauli z]{$\times$} (1); \end{tikzpicture} \ \ = \ \ \begin{tikzpicture}[dotpic] \node [style=dot] (0) at (-0.5,0.5) {}; \node [style=dot] (1) at (0.5,0.5) {}; \node [style=dot] (2) at (0,0) {}; \node [style=none] (3) at (0,-0.5) {}; \draw[bend right] (0) to node[pauli z]{$\times$} (2) (2) to (1); \draw (2) to (3); \end{tikzpicture} \ \ = \ \ \begin{tikzpicture}[dotpic] \node [style=dot] (0) at (-0.5,0.5) {}; \node [style=dot] (1) at (0.5,0.5) {}; \node [style=dot] (2) at (0,0) {}; \node [style=none] (3) at (0,-0.5) {}; \draw[bend right] (0) to (2) (2) to node[pauli z]{$\times$} (1); \draw (2) to node[pauli z]{$\times$} (3); \end{tikzpicture} \ \ = \ \ \begin{tikzpicture}[dotpic] \node [style=dot] (0) at (0,0.5) {}; \node [style=none] (1) at (0,-0.5) {}; \draw (0) to node[pauli z,pos=0.3]{$\times$} node[pauli z,pos=0.7]{$\times$} (1); \end{tikzpicture} \ \ = \ \ \begin{tikzpicture}[dotpic] \node [style=dot] (0) at (0,0.25) {}; \node [style=none] (1) at (0,-0.5) {}; \draw (0) to (1); \end{tikzpicture} \] \end{proof} \begin{lemma}\label{lem:phi-minus-phi-inv} \[ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (2) at (-0.75, 0.5) {$\psi$}; \node [style=white dot] (3) at (0.75, 0.5) {$\psi$}; \node [style=dot] (4) at (0, -0.5) {}; \node [style=none] (5) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right] (2.center) to (4); \draw [bend right] (4) to node[pauli z]{$\times$} (3); \draw (4) to (5); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (2) at (-0.75, 0.5) {$\psi$}; \node [style=white dot] (3) at (0.75, 0.5) {$\psi$}; \node [style=dot] (4) at (0, -0.5) {}; \node [style=none] (5) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right] (2.center) to (4); \draw [bend right] (4) to node[pauli z]{$\times$} (3); \draw (4) to node[pauli z]{$\times$} (5); \end{pgfonlayer} \end{tikzpicture}} \] \end{lemma} \begin{proof} Follows immediately from Lemma \ref{lem:pauli-z-homom} and the fact that $\cross$ is an involution. 
\end{proof} \begin{theorem}\label{thm:add-inv} The cross is additive inverse: \[ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-0.75, 0.5) {$\psi$}; \node [style=white dot] (1) at (0.75, 0.5) {$\psi$}; \node [style=dot] (2) at (0, -0.5) {}; \node [style=none] (3) at (0, -1.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw[bend right] (2) to node[pauli z]{$\times$} (1); \draw[bend right] (0) to (2); \draw (2) to (3.center); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-0.5, 0.25) {$\psi$}; \node [style=white dot] (1) at (0.5, 0.25) {$\psi$}; \node [style=dot] (2) at (0, -0.5) {}; \node [style=dot] (3) at (1.5, -0.5) {}; \node [style=white dot] (4) at (0, -1) {}; \node [style=none] (5) at (1.5, -1.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (3) to (5.center); \draw[bend right] (2) to node[pauli z]{$\times$} (1); \draw[bend right] (0) to (2); \draw (2) to (4); \end{pgfonlayer} \end{tikzpicture}} \] \end{theorem} \begin{proof} Proof by plugging: Recall that \begin{tikzpicture}[dotpic] \node [style=dot] (0) at (0,0.25) {}; \node [style=white dot] (1) at (0,-0.25) {}; \draw (0) to (1); \end{tikzpicture} $= 1_{\mathbf{I}}$. \begin{enumerate}[a)] \item \[ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (2) at (-0.75, 0.5) {$\psi$}; \node [style=white dot] (3) at (0.75, 0.5) {$\psi$}; \node [style=dot] (4) at (0, -0.5) {}; \node [style=white dot] (5) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right] (2.center) to (4); \draw [bend right] (4) to node[pauli z]{$\times$} (3); \draw (4) to (5); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-0.5, 0.25) {$\psi$}; \node [style=white dot] (1) at (0.5, 0.25) {$\psi$}; \node [style=dot] (2) at (0, -0.5) {}; \node [style=dot] (3) at (1.5, -0.5) {}; \node [style=white dot] (4) at (0, -1) {}; \node [style=white dot] (5) at (1.5, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (2) to (4); \draw[bend right] (2) to node[pauli z]{$\times$} (1); \draw[bend right] (0) to (2); \draw (3) to (5.center); \end{pgfonlayer} \end{tikzpicture}} \] \item \[ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (2) at (-0.75, 0.5) {$\psi$}; \node [style=white dot] (3) at (0.75, 0.5) {$\psi$}; \node [style=dot] (4) at (0, -0.5) {}; \node [style=white dot] (5) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right] (2.center) to (4); \draw [bend right] (4) to node[pauli z]{$\times$} (3); \draw (4) to node[pauli z]{$\times$} (5); \end{pgfonlayer} \end{tikzpicture}} \ \ \stackrel{Lem.\ref{lem:phi-minus-phi-inv}}{=} \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (2) at (-0.75, 0.5) {$\psi$}; \node [style=white dot] (3) at (0.75, 0.5) {$\psi$}; \node [style=dot] (4) at (0, -0.5) {}; \node [style=white dot] (5) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right] (2.center) to (4); \draw [bend right] (4) to node[pauli z]{$\times$} (3); \draw (4) to (5); \end{pgfonlayer} \end{tikzpicture}} \ \ = \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-0.5, 0.25) 
{$\psi$}; \node [style=white dot] (1) at (0.5, 0.25) {$\psi$}; \node [style=dot] (2) at (0, -0.5) {}; \node [style=dot] (3) at (1.5, -0.5) {}; \node [style=white dot] (4) at (0, -1) {}; \node [style=white dot] (5) at (1.5, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (2) to (4); \draw[bend right] (2) to node[pauli z]{$\times$} (1); \draw[bend right] (0) to (2); \draw (3) to (5.center); \end{pgfonlayer} \end{tikzpicture}} \ \ \stackrel{Lem.\ref{lem:pauli-z-zero}}{=} \ \ \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-0.5, 0.25) {$\psi$}; \node [style=white dot] (1) at (0.5, 0.25) {$\psi$}; \node [style=dot] (2) at (0, -0.5) {}; \node [style=dot] (3) at (1.5, -0.5) {}; \node [style=white dot] (4) at (0, -1) {}; \node [style=white dot] (5) at (1.5, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (2) to (4); \draw[bend right] (2) to node[pauli z]{$\times$} (1); \draw[bend right] (0) to (2); \draw (3) to node[pauli z]{$\times$} (5.center); \end{pgfonlayer} \end{tikzpicture}} \] \end{enumerate} \end{proof} In the case of ${\bf FHilb}$, this operation is the Pauli Z gate multiplied by $-1$: \[ \cross \ \ = \ \ \left(\begin{array}{cc} -1 & 0 \\ 0 & 1 \end{array}\right) \] In this case, we have \[ \begin{tikzpicture}[dotpic] \node [style=white dot] (0) at (0,0.5) {}; \node [style=none] (1) at (0,-0.5) {}; \draw (0) -- node[pauli z]{$\times$} (1); \end{tikzpicture} \ \ = \ \ -\sqrt{2} \ket{-} \ \ = \ \ \ket{1} - \ket{0} \] We can now naturally define: \[ \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=none] (1) at (0, -1) {}; \node [style=white dot] (2) at (0, 1) {$-\psi$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (2.center) to (1); \end{pgfonlayer} \end{tikzpicture} \ \ := \ \ \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=none] (1) at (0, -1) {}; \node [style=white dot] (2) at (0, 1) {$\psi$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (2.center) to node[pauli z]{$\times$} (1); \end{pgfonlayer} \end{tikzpicture} \] and, in particular, \[ \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (0.5, 1) {$-\frac{n}{m}$}; \node [style=none] (1) at (0.5, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (1.center); \end{pgfonlayer} \end{tikzpicture} \ \ \, \raisebox{6.5mm}{:=}\, \begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (0.5, 1) {$\frac{n}{m}$}; \node [style=none] (1) at (0.5, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to node[pauli z]{$\times$} (1.center); \end{pgfonlayer} \end{tikzpicture} \] Thus, we have reconstructed all of the axioms for the field of rational numbers. All of the expected identities involving $\cross$ then follow from the field axioms. \section{!-boxes and automation}\label{sec:automation} Graph rewriting is a computation process in which graphs are transformed by various \emph{rewrite rules}. A rewrite rule can be thought of as a `directed graphical equation'. 
For example, the ``specialness'' equation from section \ref{sec:ghz-w} could be expressed as a graph rewrite rule: \[ L: \begin{tikzpicture}[dotpic,yshift=5mm] \node [dot] (a) at (0,0) {}; \node [dot] (b) at (0,-1) {}; \draw [bend left] (a) to (b); \draw [bend right] (a) to (b); \draw (0,0.5) to (a) (b) to (0,-1.5); \end{tikzpicture} \Rightarrow \ R: \begin{tikzpicture}[dotpic] \draw (0,1) -- (0,-1); \end{tikzpicture} \] A graph rewrite rule $L \Rightarrow R$ can be applied to a graph $G$ by identifying a \emph{matching}, that is, a monomorphism $m : L \rightarrow G$. The image of $L$ under $m$ is then removed and replaced by $R$. This process is called \emph{double pushout (DPO) graph rewriting}. A detailed description of how DPO graph rewriting can be performed on the graphs described in this paper is available in \cite{DK}. It is also useful to talk not only about `concrete' graph rewrite rules, but also \emph{pattern graph} rewrite rules, which can be used to express an infinite set of rewrite rules. We form pattern graphs using \emph{!-boxes} (or `bang-boxes'). These boxes identify portions of the graph that can be replicated any number of times. More precisely, the set of concrete graphs represented by a pattern graph is the set of all graphs (containing no !-boxes) that can be obtained by performing any sequence of these four operations: \begin{itemize} \item \texttt{COPY}: copy a !-box and its incident edges \item \texttt{MERGE}: merge two !-boxes \item \texttt{DROP}: remove a !-box, leaving its contents \item \texttt{KILL}: remove a !-box and its contents \end{itemize} For example, the following pattern graph represents the encoding of a natural number given in section \ref{sec:natural-numbers}: \[\left\llbracket\ \ \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=box vertex] (0) at (0, 0.75) {}; \node [style=white dot] (1) at (0, 0.75) {}; \node [style=dot] (2) at (0, 0) {}; \node [style=none] (3) at (0, -0.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (2) to (3.center); \draw (1) to (2); \end{pgfonlayer} \end{tikzpicture}\ \ \right\rrbracket = \left\{\ \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0, 0) {}; \node [style=none] (1) at (0, -0.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (1.center); \end{pgfonlayer} \end{tikzpicture}\ ,\ \ \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (0, 0.75) {}; \node [style=dot] (1) at (0, 0) {}; \node [style=none] (2) at (0, -0.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (1) to (2.center); \draw (0) to (1); \end{pgfonlayer} \end{tikzpicture}\ ,\ \ \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-0.5, 0.75) {}; \node [style=white dot] (1) at (0.5, 0.75) {}; \node [style=dot] (2) at (0, 0) {}; \node [style=none] (3) at (0, -0.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (1) to (2); \draw (2) to (3.center); \draw (0) to (2); \end{pgfonlayer} \end{tikzpicture},\ \ \begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=white dot] (0) at (-0.75, 0.75) {}; \node [style=white dot] (1) at (0, 0.75) {}; \node [style=white dot] (2) at (0.75, 0.75) {}; \node [style=dot] (3) at (0, 0) {}; \node [style=none] (4) at (0, -0.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (2) to (3); \draw (0) to (3); \draw (3) to (4.center); \draw (1) to (3); \end{pgfonlayer} \end{tikzpicture},\ \ \ldots\ \right\} := \left\{ \point{0},\ 
\point{1},\ \point{2},\ \point{3},\ \ldots \right\}\] Note how not only the vertices are duplicated, but also all of the edges connected to those vertices. A pattern graph rewrite rule is a pair of pattern graphs with the same inputs and outputs. Furthermore, there is a bijection between the !-boxes occurring on the LHS and the RHS. When one of the four operations is performed to a !-box on the LHS, the same is performed to its corresponding !-box on the RHS. We can rewrite (the natural numbers versions of) equations $\delta_1$, $\delta_2$, and $\delta_3$ as pattern graph rewrite rules. \[ L:\raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (-0.5, 1.5) {}; \node [style=none] (1) at (0.5, 1.5) {}; \node [style=dot] (2) at (0, 0.5) {}; \node [style=white dot] (3) at (0, -0.5) {}; \node [style=none] (4) at (0, -1.5) {}; \node [style=white dot] (5) at (1, 0.5) {}; \node [style=box vertex,inner sep=1.5mm] (6) at (1, 0.5) {}; \node [style=dot] (7) at (1, 0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right=30] (0) to (2); \draw [bend right=30] (2) to (1); \draw (2) to (3); \draw (3) to (4); \draw [bend right=30] (3) to (7); \draw (7) to (5); \end{pgfonlayer} \end{tikzpicture}} \Rightarrow\ \ R: \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (-1.25, 1.5) {}; \node [style=white dot] (1) at (-0.25, 1.5) {}; \node [style=white dot] (2) at (0.25, 1.5) {}; \node [style=none] (3) at (1.25, 1.5) {}; \node [style=dot] (4) at (-0.25, 1) {}; \node [style=dot] (5) at (0.25, 1) {}; \node [style=white dot] (6) at (-0.75, 0) {}; \node [style=white dot] (7) at (0.75, 0) {}; \node [style=dot] (8) at (0, -1) {}; \node [style=none] (9) at (0, -1.5) {}; \node [style=box vertex,minimum height=2mm,minimum width=6mm] (10) at (0, 1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right=30] (0) to (6.center); \draw [bend right=30] (6.center) to (4.center); \draw (4.center) to (1); \draw [bend right=30] (6.center) to (8.center); \draw [bend right=30] (8.center) to (7.center); \draw [bend right=30] (5.center) to (7.center); \draw [bend right=30] (7.center) to (3); \draw (5.center) to (2); \draw (8.center) to (9); \end{pgfonlayer} \end{tikzpicture}} \qquad \qquad L:\raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \node [style=white dot] (2) at (0, -0.5) {}; \node [style=white dot] (3) at (1, 1) {}; \node [style=dot] (4) at (1, 0.5) {}; \node [style=box vertex,inner sep=1.5mm] (5) at (1, 1) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (2.center); \draw (2.center) to (1); \draw [bend right=30] (2.center) to (4.center); \draw (4.center) to (3.center); \end{pgfonlayer} \end{tikzpicture}} \Rightarrow\ \ R: \raisebox{-8mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=dot] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0)--(1); \end{pgfonlayer} \end{tikzpicture}} \qquad \qquad L:\begin{tikzpicture}[dotpic] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=white dot] (1) at (1.25, 1.5) {}; \node [style=box vertex, minimum width=9 mm, minimum height=2 mm] (2) at (1.75, 1.5) {}; \node [style=white dot] (3) at (2.25, 1.5) {}; \node [style=white dot] (4) at (0.5, 1) {}; \node [style=white dot] (5) at (1.5, 1) {}; \node [style=dot] (6) at (0.75, 
0.5) {}; \node [style=dot] (7) at (1.75, 0.5) {}; \node [style=white dot] (8) at (0, 0) {}; \node [style=white dot] (9) at (0, -1) {}; \node [style=none] (10) at (0, -1.75) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (9) to (10.center); \draw[bend right=15] (4) to (6); \draw (8) to (9); \draw (0.center) to (8); \draw[out=30, in=258] (6) to (1); \draw[bend right=15] (5) to (7); \draw[bend right] (9) to node[tick]{-} (7); \draw[bend right] (8) to (6); \draw[bend right=15] (7) to (3); \end{pgfonlayer} \end{tikzpicture} \Rightarrow\ \ R: \raisebox{-9mm}{\begin{tikzpicture}[scale=0.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.5) {}; \node [style=none] (1) at (0, -1.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0)--(1); \end{pgfonlayer} \end{tikzpicture}} \] These equations only apply to encodings of the natural numbers, not for arbitrary inputs $\psi$. However, we showed in sections \ref{sec:natural-numbers}, \ref{sec:rationals}, and \ref{sec:additive-inverses} that even those weaker equations suffice to recover the usual identities for fraction arithmetic. Also note that the extra white vertices in $\delta_3'$ eliminate the case of $\frac{0}{0} \neq 1$. So why bother expressing graphical identities as graph rewrite rules? Graph rewriting can be automated! {\tt quantomatic}~\cite{quanto} is a automatic graph rewriting tool developed by two of the authors. It is specifically designed to work with the kinds of diagram described in this paper and to perform pattern graph rewriting. It remains to be seen what new insights can be obtained by adding the new graphical identities derived in this paper to {\tt quantomatic}. \section{Closing Remarks} In previous work two of the authors showed that the main difference between the GHZ state and the W state, or more precisely, the induced GHZ structure and W structure, boils down to the value of the loop map of these CFAs: \[ \frac{\mbox{GHZ}}{\mbox{W}}\ \ =\ \ \frac{\begin{tikzpicture}[dotpic,yshift=5mm] \node [dot] (a) at (0,0) {}; \node [dot] (b) at (0,-1) {}; \draw [bend left] (a) to (b); \draw [bend right] (a) to (b); \draw (0,0.5) to (a) (b) to (0,-1.5); \end{tikzpicture} = \ \begin{tikzpicture}[dotpic] \draw (0,1) -- (0,-1); \end{tikzpicture} }{ \circl\ \begin{tikzpicture}[dotpic] \node [dot] (a) at (0,0.5) {}; \node [dot] (b) at (0,-0.5) {}; \draw [bend left] (a) to (b); \draw [bend right] (a) to (b); \draw (0,1) to (a) (b) to (0,-1); \end{tikzpicture}\ \ = \begin{tikzpicture}[dotpic] \node [dot] (a) at (0,0.7) {}; \node [dot] (b) at (0,-0.7) {}; \draw (0,1.2) to (a) (b) to (0,-1.2); \draw (a) to [downloop] (); \draw (b) to [uploop] (); \end{tikzpicture} } \] In this paper, by focussing on the interaction of these two structures, we were able to establish a connection with the operations of basic arithmetic: \[ \frac{\mbox{W}}{\mbox{GHZ}}=\frac{+}{\times} \] More specifically, the diagrammatic language of these structures was sufficient to encode the positive rational numbers (and, with a minor extension, the whole field of rational numbers). In the process of highlighting this encoding, we identified a surprising fact. The distributive law governing the interaction of addition and multiplication in arithmetic also captures the interaction of the GHZ-structure and W-structure. Future work includes exploiting this interaction in the study of multipartite quantum entanglement, which brings us back to the initial motivation for crafting a compositional framework to reason about multipartite states.
\section*{Introduction}\label{intro} Let $K$ be a finite extension of $\mathbf{Q}_p$, and let $\mathcal{O}_K$ be its ring of integers and $\mathfrak{m}_K$ the maximal ideal of $\mathcal{O}_K$. In \cite{L94}, Lubin studied \emph{nonarchimedean dynamical systems}, namely families of elements of $X \cdot \mathcal{O}_K \dcroc{X}$ that commute under composition, and remarked (page 341 of ibid.) that ``experimental evidence seems to suggest that for an invertible series to commute with a noninvertible series, there must be a formal group somehow in the background''. Various results in that direction have been obtained (by Hsia, Laubie, Li, Movahhedi, Salinier, Sarkis, Specter, ...; see for instance \cite{Li96}, \cite{Li97}, \cite{LiTAMS}, \cite{LMS02}, \cite{GS05}, \cite{GS10}, \cite{SS13}, \cite{HL}, \cite{LB16}, \cite{JS}), using either $p$-adic analysis, the theory of the field of norms or, more recently, $p$-adic Hodge theory. The purpose of this article is to prove two theorems that confirm the above observation in many new cases, using only $p$-adic analysis. If $g(X) \in X \cdot \mathcal{O}_K \dcroc{X}$, we say that $g$ is \emph{invertible} if $g'(0) \in \mathcal{O}_K^\times$ and \emph{noninvertible} if $g'(0) \in \mathfrak{m}_K$. We say that $g$ is \emph{stable} if $g'(0)$ is neither $0$ nor a root of unity. For example, if $S$ is a formal group of finite height over $\mathcal{O}_K$ and if $c \in \mathbf{Z}$ with $p \nmid c$ and $c \neq \pm 1$, then $f(X)=[p](X)$ and $u(X)=[c](X)$ are two stable power series, with $f$ noninvertible and $u$ invertible, having the following properties: the roots of $f$ and all of its iterates are simple, $f \not\equiv 0 \bmod{\mathfrak{m}_K}$ and $f \circ u= u \circ f$. Our first result is a partial converse of this. If $f(X) \in X \cdot \mathcal{O}_K \dcroc{X}$, let $\mathrm{U}_f$ denote the set of invertible power series $u(X) \in X \cdot \mathcal{O}_K \dcroc{X}$ such that $f \circ u= u \circ f$, and let $\mathrm{U}_f'(0) = \{ u'(0), \ u \in \mathrm{U}_f\}$. This is a subgroup of $\mathcal{O}_K^\times$. \begin{enonce*}{Theorem A} Let $K$ be a finite extension of $\mathbf{Q}_p$ such that $e(K/\mathbf{Q}_p) \leq p-1$, and let $f(X) \in X \cdot \mathcal{O}_K \dcroc{X}$ be a noninvertible stable series. Suppose that \begin{enumerate} \item the roots of $f$ and all of its iterates are simple, and $f \not\equiv 0 \bmod{\mathfrak{m}_K}$; \item there is a subfield $F$ of $K$ such that $f'(0) \in \mathfrak{m}_F$ and such that $\mathrm{U}_f'(0) \cap \mathcal{O}_F^\times$ is an open subgroup of $\mathcal{O}_F^\times$. \end{enumerate} Then there is a formal group $S$ over $\mathcal{O}_K$ such that $f \in \mathrm{End}(S)$ and $\mathrm{U}_f \subset \mathrm{End}(S)$. \end{enonce*} Condition (1) can be checked using the following criterion (proposition \ref{critsimp}). \begin{enonce*}{Criterion A} If $f(X) \in X \cdot \mathcal{O}_K \dcroc{X}$ is a noninvertible stable series with $f \not\equiv 0 \bmod{\mathfrak{m}_K}$, and if $f$ commutes with a stable invertible series $u(X) \in X \cdot \mathcal{O}_K \dcroc{X}$, then the roots of $f$ and all of its iterates are simple if and only if $f'(X)/f'(0) \in 1 + X \cdot \mathcal{O}_K \dcroc{X}$. \end{enonce*} If $K=\mathbf{Q}_p$, condition (2) of Theorem A amounts to requiring the existence of a stable invertible series that commutes with $f$. 
\begin{enonce*}{Corollary A} If $f(X) \in X \cdot \mathbf{Z}_p \dcroc{X}$ is a noninvertible stable series such that the roots of $f$ and all of its iterates are simple and $f \not\equiv 0 \bmod{p}$, and if $f$ commutes with a stable invertible series $u(X) \in X \cdot \mathbf{Z}_p \dcroc{X}$, then there is a formal group $S$ over $\mathbf{Z}_p$ such that $f \in \mathrm{End}(S)$ and $\mathrm{U}_f \subset \mathrm{End}(S)$. \end{enonce*} There are examples of commuting power series where $f$ does not have simple roots, for instance $f(X)=9X+6X^2+X^3$ and $u(X)=4X+X^2$ with $K=\mathbf{Q}_3$ (more examples can be constructed following the discussion on page 344 of \cite{L94}). It seems reasonable to expect that if $f$ and $u$ are two stable noninvertible and invertible power series that commute, with $f \not\equiv 0 \bmod{\mathfrak{m}_K}$, then there exists a formal group $S$, two endomorphisms $f_S$ and $u_S$ of $S$, and a nonzero power series $h$ such that $f \circ h = h \circ f_S$ and $u \circ h = h \circ u_S$. We then say that $f$ and $f_S$ are \emph{semi-conjugate}, and $h$ is an \emph{isogeny} from $f_S$ to $f$ (see for instance \cite{Li97}). The simplest case where this occurs is when $m$ is an integer $\geq 2$, and the nonzero roots of $f$ and all of its iterates are of multiplicity $m$ (for an example of a more complicated case, see remark \ref{chebytwo}). In this simplest case, we have the following. \begin{enonce*}{Theorem B} Let $K$ be a finite extension of $\mathbf{Q}_p$, let $f(X) \in X \cdot \mathcal{O}_K \dcroc{X}$ be a noninvertible stable series and take $m \geq 2$. Let $h(X)=X^m$. Suppose that \begin{enumerate} \item the nonzero roots of $f$ and all of its iterates are of multiplicity $m$ \item $f \not\equiv 0 \bmod{\mathfrak{m}_K}$. \end{enumerate} Then there exists a finite unramified extension $L$ of $K$ and a noninvertible stable series $f_0(X) \in X \cdot \mathcal{O}_L \dcroc{X}$ with $f_0 \not\equiv 0 \bmod{\mathfrak{m}_K}$, such that $f \circ h = h \circ f_0$, and the roots of $f_0$ and all of its iterates are simple. If in addition $u$ is an element of $\mathrm{U}_f$ with $u'(0) \equiv 1 \bmod{\mathfrak{m}_K}$, then there exists $u_0 \in \mathrm{U}_{f_0}$ such that $u \circ h = h \circ u_0$. Finally, if there is a subfield $F$ of $K$ such that $f'(0) \in \mathfrak{m}_F$ and such that $\mathrm{U}_f'(0) \cap \mathcal{O}_F^\times$ is an open subgroup of $\mathcal{O}_F^\times$, then $(f_0^{\circ m})'(0) \in \mathfrak{m}_F$ and $\mathrm{U}_{f_0}'(0) \cap \mathcal{O}_F^\times$ is an open subgroup of $\mathcal{O}_F^\times$. \end{enonce*} Condition (1) can be checked using the following criterion (proposition \ref{critmultm}). \begin{enonce*}{Criterion B} If $f(X) \in X \cdot \mathcal{O}_K \dcroc{X}$ is a noninvertible stable series with $f \not\equiv 0 \bmod{\mathfrak{m}_K}$, and if $f$ commutes with a stable invertible series $u(X) \in X \cdot \mathcal{O}_K \dcroc{X}$, then the nonzero roots of $f$ and all of its iterates are of multiplicity $m$ if and only if the nonzero roots of $f$ are of multiplicity $m$, and the set of roots of $f'$ is included in the set of roots of $f$. \end{enonce*} We have the following simple corollary of Theorem B when $K=\mathbf{Q}_p$. 
\begin{enonce*}{Corollary B} If $m \geq 2$ and $f(X) \in X \cdot \mathbf{Z}_p \dcroc{X}$ is a noninvertible stable series such that the nonzero roots of $f$ and all of its iterates are of multiplicity $m$ and $f \not\equiv 0 \bmod{p}$, and if $f$ commutes with a stable invertible series $u(X) \in X \cdot \mathbf{Z}_p \dcroc{X}$, then there is an unramified extension $L$ of $\mathbf{Q}_p$, a formal group $S$ over $\mathcal{O}_L$ and $f_S \in \mathrm{End}(S)$ such that $f \circ X^m = X^m \circ f_S$. \end{enonce*} Theorem A implies conjecture 5.3 of \cite{HL} for those $K$ such that $e(K/\mathbf{Q}_p) \leq p-1$. It also provides a new simple proof (that does not use $p$-adic Hodge theory) of the main theorem of \cite{JS}. Note also that Theorem A holds without the restriction ``$e(K/\mathbf{Q}_p) \leq p-1$'' if $f'(0)$ is a uniformizer of $\mathcal{O}_K$ (see \cite{JS17}). This implies ``Lubin's conjecture'' formulated at the very end of \cite{GS10} (this conjecture is proved in \cite{LB16} using $p$-adic Hodge theory, when $K$ is a finite Galois extension of $\mathbf{Q}_p$) as well as ``Lubin's conjecture'' on page 131 of \cite{GS05} over $\mathbf{Q}_p$ if $f \not\equiv 0 \bmod{p}$. The results of \cite{HL}, \cite{LB16} and \cite{JS} are proved under strong additional assumptions on $\operatorname{wideg}(f)$ (namely that $\operatorname{wideg}(f)=p$ in \cite{JS}, or that $\operatorname{wideg}(f)=p^h$, where $h$ is the residual degree of $K$, in \cite{HL} and \cite{LB16}). Theorem A is the first general result in this direction that makes no assumption on $\operatorname{wideg}(f)$, besides assuming that it is finite. It also does not assume that $f'(0)$ is a uniformizer of $\mathcal{O}_K$. Theorem A and its corollary are proved in section \S\ref{proof} and theorem B and its corollary are proved in section \S\ref{condens}. \section{Nonarchimedean dynamical systems} \label{serlub} Whenever we talk about the roots of a power series, we mean its roots in the $p$-adic open unit disk $\mathfrak{m}_{\mathbf{C}_p}$. Recall that the \emph{Weierstrass degree} $\operatorname{wideg}(g(X))$ of a series $g(X) = \sum_{i \geq1} g_i X^i \in X \cdot \mathcal{O}_K \dcroc{X}$ is the smallest integer $i \leq +\infty$ such that $g_i \in \mathcal{O}_K^\times$. We have $\operatorname{wideg}(g) = +\infty$ if and only if $g \equiv 0 \bmod{\mathfrak{m}_K}$. If $r<1$, let $\mathcal{H}(r)$ denote the set of power series in $K\dcroc{X}$ that converge on the closed disk $\{ z \in \mathfrak{m}_{\mathbf{C}_p}$ such that $|z|_p \leq r\}$. If $h \in \mathcal{H}(r)$, let $\| h \|_r = \sup_{|z|_p \leq r} |h(z)|_p$. The space $\mathcal{H}(r)$ is complete for the norm $\|{\cdot}\|_r$. Let $\mathcal{H} = \projlim_{r<1} \mathcal{H}(r)$ be the ring of holomorphic functions on the open unit disk. Throughout this article, $f(X) \in X \cdot \mathcal{O}_K \dcroc{X}$ is a stable noninvertible series such that $\operatorname{wideg}(f) < + \infty$, and $\mathrm{U}_f$ denotes the set of invertible power series $u(X) \in X \cdot \mathcal{O}_K \dcroc{X}$ such that $f \circ u= u \circ f$. \begin{lemm} \label{lubcom} A series $g(X) \in X \cdot K \dcroc{X}$ that commutes with $f$ is determined by $g'(0)$. \end{lemm} \begin{proof} This is proposition 1.1 of \cite{L94}. \end{proof} \begin{prop} \label{wideg} If $\mathrm{U}_f$ contains a stable invertible series, then there exists a power series $g(X) \in X \cdot \mathcal{O}_K \dcroc{X}$ and an integer $d \geq 1$ such that $f(X) \equiv g(X^{p^d}) \bmod{\mathfrak{m}_K}$. 
We have $\operatorname{wideg}(f) = p^d$ for some $d \geq 1$. \end{prop} \begin{proof} This is the main result of \cite{L94}. See (the proof of) theorem 6.3 and corollary 6.2.1 of ibid. \end{proof} \begin{prop} \label{lublog} There is a (unique) power series $\mathrm{L}(X) \in X + X^2 \cdot K \dcroc{X}$ such that $\mathrm{L} \circ f = f'(0) \cdot \mathrm{L}$ and $\mathrm{L} \circ u = u'(0) \cdot \mathrm{L}$ if $u \in \mathrm{U}_f$. The series $\mathrm{L}(X)$ converges on the open unit disk, and $\mathrm{L}(X) = \lim_{n \to +\infty} f^{\circ n}(X) / f'(0)^n$ in the Fr\'echet space $\mathcal{H}$. \end{prop} \begin{proof} See propositions 1.2, 1.3 and 2.2 of \cite{L94}. \end{proof} \begin{lemm} \label{primroot} If $f(X) \in X \cdot \mathcal{O}_K \dcroc{X}$ is a noninvertible stable series and if $f$ commutes with a stable invertible series $u$, then every root of $f'$ is a root of $f^{\circ n}$ for some $n \gg 0$. \end{lemm} \begin{proof} This is corollary 3.2.1 of \cite{L94}. \end{proof} \begin{prop} \label{critsimp} If $f(X) \in X \cdot \mathcal{O}_K \dcroc{X}$ is a noninvertible stable series with $f \not\equiv 0 \bmod{\mathfrak{m}_K}$, and if $f$ commutes with a stable invertible series $u$, then the roots of $f$ and all of its iterates are simple if and only if $f'(X)/f'(0) \in 1 + X \cdot \mathcal{O}_K \dcroc{X}$. \end{prop} \begin{proof} We have $(f^{\circ n})'(X) = f'(f^{\circ n-1}(X)) \cdots f'(f(X)) \cdot f'(X)$. If $f'(X)/f'(0) \in 1 + X \cdot \mathcal{O}_K \dcroc{X}$, then the derivative of $f^{\circ n}(X)$ belongs to $f'(0)^n \cdot(1 + X \cdot \mathcal{O}_K \dcroc{X})$ and hence has no roots. The roots of $f^{\circ n}(X)$ are therefore simple. By lemma \ref{primroot}, any root of $f'(X)$ is also a root of $f^{\circ n}$ for some $n \gg 0$. If the roots of $f^{\circ n}(X)$ are simple for all $n \geq 1$, then $f'(X)$ cannot have any root, and hence $f'(X)/f'(0) \in 1+X \mathcal{O}_K \dcroc{X}$. \end{proof} \section{Formal groups} \label{proof} We now prove theorem A. Let $S(X,Y) = \mathrm{L}^{\circ -1}(\mathrm{L}(X)+\mathrm{L}(Y)) \in K \dcroc{X,Y}$. By proposition \ref{lublog}, $S$ is a formal group law over $K$ such that $f$ and all $u \in \mathrm{U}_f$ are endomorphisms of $S$. In order to prove theorem A, we show that $S(X,Y) \in \mathcal{O}_K \dcroc{X,Y}$. Write $S(X,Y) = \sum_{j \geq 0} s_j(X) Y^j$. \begin{lemm} \label{lifact} If $\mathrm{L}'(X) \in \mathcal{O}_K \dcroc{X}$, then $s_j(X) \in j!^{-1} \cdot \mathcal{O}_K \dcroc{X}$ for all $j \geq 0$. \end{lemm} \begin{proof} This is lemma 3.2 of \cite{Li96}. \end{proof} \begin{lemm} \label{logprim} If the roots of $f^{\circ n}(X)$ are simple for all $n \geq 1$, then $\mathrm{L}'(X) \in \mathcal{O}_K \dcroc{X}$. \end{lemm} \begin{proof} This is sketched in the proof of theorem 3.6 of \cite{Li96}. We give a complete argument for the convenience of the reader. We have $(f^{\circ n})'(X) = f'(f^{\circ n-1}(X)) \cdots f'(f(X)) \cdot f'(X)$, and by proposition \ref{critsimp}, $f'(X)/f'(0) \in 1+X \mathcal{O}_K \dcroc{X}$. We have $\mathrm{L}(X) = \lim_{n \to +\infty} f^{\circ n}(X) / f'(0)^n$ by proposition \ref{lublog}, so that \[ \mathrm{L}'(X) = \lim_{n \to +\infty} \frac{(f^{\circ n})'(X)} {f'(0)^n} = \lim_{n \to +\infty} \frac{ f'(f^{\circ n-1}(X))} {f'(0)} \cdots \frac{f'(f(X))} {f'(0)} \cdot \frac{f'(X)} {f'(0)}, \] and hence $\mathrm{L}'(X) \in 1+X \mathcal{O}_K \dcroc{X}$. \end{proof} \begin{theo} \label{siint} If $e(K/\mathbf{Q}_p) \leq p-1$, then $s_j(X) \in \mathcal{O}_K \dcroc{X}$ for all $j \geq 0$. 
\end{theo} \begin{proof} For all $n \geq 1$, the power series $u_n(X) = S(X,f^{\circ n}(X))$ belongs to $X \cdot K \dcroc{X}$ and satisfies $u_n \circ f = f \circ u_n$. Since $\mathrm{U}_f'(0) \cap \mathcal{O}_F^\times$ is an open subgroup of $\mathcal{O}_F^\times$, there exists $n_0$ such that if $n \geq n_0$, then $u_n'(0) = 1+f'(0)^n \in \mathrm{U}_f'(0)$. We then have $u_n \in \mathrm{U}_f$ by lemma \ref{lubcom}. In order to prove the theorem, we therefore prove that if $S(X,f^{\circ n}(X)) \in \mathcal{O}_K \dcroc{X}$ for all $n \geq n_0$, then $s_i(X) \in \mathcal{O}_K \dcroc{X}$ for all $i \geq 0$. If $j \geq 1$, let \[ a_j(X) = f^{\circ n}(X) \sum_{i \geq 0} s_{j+i}(X) f^{\circ n}(X)^i = s_j(X) f^{\circ n}(X) + s_{j+1}(X) f^{\circ n}(X)^2 + \cdots. \] We prove by induction on $j$ that $s_0(X),\hdots,s_{j-1}(X)$ as well as $a_j(X)$ belong to $\mathcal{O}_K \dcroc{X}$. This holds for $j =1$; suppose that it holds for $j$. We claim that if $h \in \mathcal{H}(r)$ and $\| h \|_r < p^{-1/(p-1)}$, then $\sum_{i \geq 0} s_{j+i}(X) h(X)^i$ converges in $\mathcal{H}(r)$. Indeed, if $s_p(j+i)$ denotes the sum of the digits of $j+i$ in base $p$, then \[ \mathrm{val}_p((j+i)!) = \frac{j+i-s_p(j+i) }{p-1} \leq \frac{i}{p-1} + \frac{j}{p-1}. \] Let $\pi$ be a uniformizer of $\mathcal{O}_K$ and let $e=e(K/\mathbf{Q}_p)$ so that $|\pi|_p=p^{-1/e}$. By proposition \ref{wideg}, we have \[ f^{\circ n}(X) \in \pi X \cdot \mathcal{O}_K \dcroc{X} + X^{q^n} \cdot \mathcal{O}_K \dcroc{X^{q^n}}, \] where $q=p^d=\operatorname{wideg}(f)$, so that $\| f^{\circ n}(X) \|_r \leq \max( rp^{-1/e}, r^{q^n} )$. If $\rho_n=p^{-1/(e(q^n-1))}$, then \[ \| f^{\circ n}(X) \|_{\rho_n} \leq p^{-q^n/(e(q^n-1))} < p^{-1/e} \leq p^{-1/(p-1)} \] and the series $\sum_{i \geq 0} s_{j+i}(X) f^{\circ n}(X)^i$ therefore converges in $\mathcal{H}(\rho_n)$. We have $f^{\circ n}(X) \in \pi X \cdot \mathcal{O}_K \dcroc{X} + X^{q^n} \cdot \mathcal{O}_K \dcroc{X^{q^n}}$, as well as $\operatorname{wideg}(f^{\circ n})=q^n$. By the theory of Newton polygons, all the zeroes $z$ of $f^{\circ n}(X)$ satisfy $\mathrm{val}_p(z) \geq 1/(e(q^n-1))$, and hence $|z|_p \leq \rho_n$. The equation $a_j(X) = f^{\circ n}(X) \sum_{i \geq 0} s_{j+i}(X) f^{\circ n}(X)^i$ holds in $\mathcal{H}(\rho_n)$, and this implies that $a_j(z)=0$ for all $z$ such that $f^{\circ n}(z) = 0$. Since all the zeroes of $f^{\circ n}(X)$ are simple and $f^{\circ n}(X) \not\equiv 0 \bmod{\pi}$, the Weierstrass preparation theorem implies that $f^{\circ n}(X)$ divides $a_j(X)$ in $\mathcal{O}_K \dcroc{X}$, and hence that \[ s_j(X) + s_{j+1}(X) f^{\circ n}(X) + s_{j+2}(X) f^{\circ n}(X)^2 + \cdots \in \mathcal{O}_K \dcroc{X}. \] Choose some $0 < \rho <1$ and take $n \geq n_0$ such that $\rho_n \geq \rho$. We have \[ f^{\circ n}(X) = f(f^{\circ n-1}(X)) \in \pi f^{\circ n-1}(X) \cdot \mathcal{O}_K\dcroc{X} + f^{\circ n-1}(X)^q \cdot \mathcal{O}_K \dcroc{X}. \] Therefore $\|f^{\circ n}(X)\|_\rho \to 0$ as $n \to +\infty$, and $\| s_{j+1}(X) f^{\circ n}(X) + s_{j+2}(X) f^{\circ n}(X)^2 + \cdots \|_\rho \to 0$ as $n \to +\infty$. The series $s_j(X)$ is therefore in the closure of $\mathcal{O}_K \dcroc{X}$ inside $\mathcal{H}(\rho)$ for $\|{\cdot}\|_\rho$, which is $\mathcal{O}_K \dcroc{X}$. This proves that $s_j(X)$ as well as $s_{j+1}(X) f^{\circ n}(X) + s_{j+2}(X) f^{\circ n}(X)^2 + \cdots$ belong to $\mathcal{O}_K \dcroc{X}$. This finishes the induction and hence the proof of the theorem. 
\end{proof} Theorem A now follows: $S$ is a formal group over $\mathcal{O}_K$ such that $f \in \mathrm{End}(S)$. Any power series $u(X) \in X \cdot \mathcal{O}_K \dcroc{X}$ that commutes with $f$ also belongs to $\mathrm{End}(S)$, since $u(X) = [u'(0)](X)$ by lemma \ref{lubcom}. In particular, $\mathrm{U}_f \subset \mathrm{End}(S)$. To prove corollary A, note that we can replace $u$ by $u^{\circ p-1}$ and therefore assume that $u'(0) \in 1+p\mathbf{Z}_p$. In this case, $u^{\circ m}$ is defined for all $m \in \mathbf{Z}_p$ by proposition 4.1 of \cite{L94} and $\mathrm{U}_f'(0)$ is therefore an open subgroup of $\mathbf{Z}_p^\times$. \section{Semi-conjugation} \label{condens} We now prove theorem B. Assume therefore that the nonzero roots of $f$ and all of its iterates are of multiplicity $m$. Let $h(X)=X^m$. Since $q = \operatorname{wideg}(f)$ is finite, we can write $f(X)=X \cdot g(X) \cdot v(X)$ where $g(X) \in \mathcal{O}_K[X]$ is a distinguished polynomial and $v(X)\in \mathcal{O}_K \dcroc{X}$ is a unit. If the roots of $g(X)$ are of multiplicity $m$, then $g(X)=g_0(X)^m$ for some $g_0(X) \in \mathcal{O}_K[X]$. Write $v(X) = [c] \cdot (1+ w(X))$ where $c \in k_K$ (and $[c]$ is its Teichm\"uller lift) and $w(X) \in (\mathfrak{m}_K,X)$. Since $m \cdot \deg(g) = q-1$, $m$ is prime to $p$ and there exists a unique $w_0(X) \in (\mathfrak{m}_K,X)$ such that $1+w(X) = (1+w_0(X))^m$. If $f_0(X) = [c^{1/m}] \cdot X \cdot g_0(X^m) \cdot (1+w_0(X^m))$, then \[ f \circ h(X) = f(X^m) = [c] \cdot X^m \cdot g_0(X^m)^m \cdot (1+w_0(X^m))^m = f_0(X)^m = h \circ f_0(X). \] It is clear that $f_0 \not\equiv 0 \bmod{\mathfrak{m}_K}$. If we write $f^{\circ n}(X) = X \cdot \prod_\alpha (X-\alpha)^m \cdot v_n(X)$ with $v_n$ a unit of $\mathcal{O}_K\dcroc{X}$, and where $\alpha$ runs through the nonzero roots of $f^{\circ n}$, then \[ f^{\circ n}(X^m) = X^m \cdot \prod_\alpha (X^m-\alpha)^m \cdot v_n(X^m), \] so that all the roots of $f^{\circ n}(X^m)$ have multiplicity $m$. Since $f^{\circ n}(X^m)=f_0^{\circ n}(X)^m$, the roots of $f_0$ and all of its iterates are simple. This finishes the proof of the first part of the theorem, with $L=K([c^{1/m}])$. If $u \in \mathrm{U}_f$ and $u'(0) \in 1+\mathfrak{m}_K$, then there is a unique $u_0(X) \in 1+(\mathfrak{m}_K,X)$ such that $u_0(X)^m = u(X^m)$. We have $u_0'(0) = u'(0)^{1/m}$ and $(f_0 \circ u_0)^m = (u_0 \circ f_0)^m$ as well as $(f_0 \circ u_0)'(0) = (u_0 \circ f_0)'(0)$, so that $u_0 \in \mathrm{U}_{f_0}$. This proves the existence of $u_0$. Since $f(X^m)=f_0(X)^m$, we have $f'(0) = f_0'(0)^m$. We then have $(f_0^{\circ m})'(0) = f_0'(0)^m = f'(0) \in \mathfrak{m}_F$. This finishes the proof of the last claim of theorem B. Corollary B follows from theorem B in the same way that corollary A followed from theorem A. \begin{exem} \label{cheby} If $p=3$ and $f(X)=9X+6X^2+X^3$ and $u(X)=4X+X^2$, so that $f \circ u = u \circ f$, then $f(X)=X(X+3)^2$ and $f'(X)=3(X+3)(X+1)$. The nonzero roots of $f$ and all of its iterates are therefore of multiplicity $2$. We have $f(X^2) = (X(X^2+3))^2$ so that $f_0(X)=3X+X^3$, and the corresponding formal group is $\mathbf{G}_m$ (this is a special case of the construction given on page 344 of \cite{L94}). 
\end{exem} \begin{prop} \label{critmultm} If $f(X) \in X \cdot \mathcal{O}_K \dcroc{X}$ is a noninvertible stable series with $f \not\equiv 0 \bmod{\mathfrak{m}_K}$, and if $f$ commutes with a stable invertible series $u(X) \in X \cdot \mathcal{O}_K \dcroc{X}$, then the nonzero roots of $f$ and all of its iterates are of multiplicity $m$ if and only if the nonzero roots of $f$ are of multiplicity $m$ and the set of roots of $f'$ is included in the set of roots of $f$. \end{prop} \begin{proof} If the nonzero roots of $f$ and all of its iterates are of multiplicity $m$, then the nonzero roots of $f$ are of multiplicity $m$. Hence if $\alpha$ is a root of $f^{\circ n}(X)$ with $f(\alpha) \neq 0$, the equation $f(X) = f(\alpha)$ has simple roots. Since $\alpha$ is one of these roots, we have $f'(\alpha) \neq 0$. By lemma \ref{primroot}, any root of $f'(X)$ is also a root of $f^{\circ n}$ for some $n \geq 1$. This implies that the set of roots of $f'$ is included in the set of roots of $f$. Conversely, suppose that the nonzero roots of $f$ are of multiplicity $m$, and that $f'(\beta) \neq 0$ for any $\beta$ that is not a root of $f$. If $\alpha$ is a nonzero root of $f^{\circ n}$ for some $n \geq 1$, then this implies that the equation $f(X) = \alpha$ has simple roots, so that the nonzero roots of $f$ and all of its iterates are of multiplicity $m$. \end{proof} \begin{rema} \label{chebytwo} If $p=2$ and $f(X)=4X+X^2$ and $u(X)=9X+6X^2+X^3$, then $f \circ u = u \circ f$. The roots $0$ and $-4$ of $f$ are simple, but $f^{\circ 2}(X)=X(X+4)(X+2)^2$ has a double root. In this case, $f$ is still semi-conjugate to an endomorphism of $\mathbf{G}_m$, but via the more complicated map $h(X)=X^2/(1+X)$ (see the discussion after corollary 3.2.1 of \cite{L94}, and example 2 of \cite{Li96}). \end{rema}
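Both example \ref{cheby} and remark \ref{chebytwo} can be verified by a short symbolic computation. The following sketch (a sanity check only, not part of the arguments above) uses SymPy to confirm that the two polynomials commute, that $f(X^2)=f_0(X)^2$ in example \ref{cheby}, and that the second iterate in remark \ref{chebytwo} has a double root.
\begin{verbatim}
# Symbolic sanity check of example `cheby' and remark `chebytwo' (SymPy).
from sympy import symbols, expand, factor

X = symbols('X')

# Example: p = 3, f(X) = 9X + 6X^2 + X^3 and u(X) = 4X + X^2 commute.
f = 9*X + 6*X**2 + X**3
u = 4*X + X**2
assert expand(f.subs(X, u) - u.subs(X, f)) == 0

# Semi-conjugacy: f(X^2) = f0(X)^2 with f0(X) = 3X + X^3.
f0 = 3*X + X**3
assert expand(f.subs(X, X**2) - f0**2) == 0

# Remark: p = 2, f(X) = 4X + X^2 has simple roots, but its second
# iterate f(f(X)) factors as X*(X + 2)**2*(X + 4), with a double root.
g = 4*X + X**2
print(factor(g.subs(X, g)))
\end{verbatim}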
\section{Introduction} \label{sec:introduction} Gas turbine performance modeling has been widely used in the development cycle \cite{lytle1999numerical, sethi2013map}. However, it is rarely applied to on-wing engine performance evaluation because of its relatively low accuracy in that setting. We believe the reasons for the low accuracy mainly lie in the following two points. First, it is difficult to access accurate component characteristic maps; moreover, the maps differ from engine to engine due to manufacturing and assembly tolerances. Second, the degeneration process of the components is hard to model, so it is difficult for performance models to track the degeneration trajectory of a gas turbine. The large amount of available flight data is the main advantage of on-wing gas turbine performance evaluation over the development cycle. The component characteristics and degeneration trends hidden in the flight data may be extracted using proper data driven methods \cite{asgari2015gas} \cite{chiras2001nonlinear} \cite{wei2020robust}. However, data driven methods alone cannot evaluate gas turbine performance from a comprehensive perspective. One reason is that data driven models can only cover the few parameters recorded in flight data, such as rotational speed and EGT; other parameters that are key to performance evaluation cannot be calculated using data driven methods. Moreover, data driven methods easily run into over-fitting under a single constrained optimization goal. To use flight data more effectively, many studies focus on fitting component maps. Map scaling and curve fitting were introduced first. Reference \cite{kong2003new} proposed a polynomial scaling method to correct the component maps. In \cite{Kong2006Component} and \cite{kong2007study}, cubic fitting was used to directly fit mass flow and isentropic efficiency. In \cite{li2011nonlinear} and \cite{li2012improved}, scale factors were regressed using quadratic curves. In \cite{tsoutsanis2014component}, elliptical curve fitting was used to tune component maps. In \cite{tsoutsanis2015transient}, the scale factors of different components were regressed by various functions according to the shape of the original maps. In \cite{kim2018adaptation}, the overall performance parameters and the local performance parameters were adjusted separately in two steps by scale factors. These scaling and curve fitting methods can correct component maps toward the operating space of a particular engine, especially around the design point. However, the number of corrected variables is too small for the corrected maps to cover the whole flight envelope. To enlarge the number of corrected variables, neural networks have therefore been introduced to predict component maps. In \cite{yu2007neural}, a three-layer BP neural network was trained to predict the compressor map using data provided by manufacturers. In \cite{ghorbanian2009artificial}, various neural networks were proposed to predict compressor performance using experimental data. Neural network based methods indeed increase the number of variables to be corrected. However, most neural networks treat component characteristics modeling as a pure regression problem, ignoring the dynamic constraints of the components' working process. In this case, these methods can only fit the limited data available from component characteristics test rigs and serve as alternative methods when component maps cannot be accessed.
For example, in \cite{yu2007neural} the neural network is trained only on component performance data from manufacturers, and in \cite{ghorbanian2009artificial} only 42 experimental data points of one compressor were used to train the network. To extract more accurate component characteristics from large volumes of flight data and to model gas turbine dynamics as a whole, we propose a thermodynamic based and flight data driven hybrid network for gas turbine modeling. Different from component map based thermodynamic models, our network reconstructs the component characteristics in a data-driven way, taking component degeneration and individual differences into consideration. Moreover, different from purely data-driven methods, physics-based equations and analytical mathematical descriptions are used in the training phase to ensure that the optimization converges to the gas turbine's dynamic process. In our network, the thermodynamic method is used as a backbone so that all stations can be computed and the overall flow path calculation remains physically explainable. The characteristics of the components, including LPC, HPC, burner, HPT, LPT, bypass, mixer and nozzle, are modeled as inlet-to-exit mappings represented by neural networks. The inputs of the network for a given component are the state parameters of the component's inlet, and the outputs are the state parameters of its exit. The capacity of our network is therefore much larger than that of component map based models, and the component characteristics can be corrected with big flight data during the network's training phase. A test case is conducted with 26970 flight data samples acquired from a two-spool mixed turbofan. The results show that the accuracy of our hybrid model, measured by the maximum T6 relative error, reaches 7\%, which is 5\% better than the thermodynamic model with the map fitting tool \cite{sethi2013map} used in PROOSIS \cite{alexiou2007advanced, bala2007proosis} and 8\% better than a data-driven method of similar capacity. The main contributions of our work are the following: \begin{itemize} \item A novel gas turbine flow path analysis method combining the thermodynamic and data-driven perspectives is introduced. \item A flight data driven component characteristics modeling method that accounts for component degeneration and individual differences is proposed. \item A new loss function reflecting the physical process of the turbine and a two-phase training process are proposed to ensure that the optimization converges to the turbine's dynamic process. \item Experiments are conducted on a large number of flight data samples, and the accuracy of our model is higher than that of both thermodynamic based and data-driven gas turbine models. \end{itemize} The remaining sections are organized as follows. First, related work on thermodynamic models and neural networks is briefly reviewed. Next, the structure of our network is presented in a bottom-up view, and a two-phase training process is proposed to guarantee that the optimization converges toward the gas turbine dynamics. Finally, experiments are conducted on flight data and our network is compared with other methods, including a map fitting based thermodynamic model and a data-driven neural network. \section{Related Works} \label{sec:related works} In this section, a brief introduction is given to the thermodynamic model and the neural network method.
\subsection{Thermodynamic Model} \label{sec:thermo} The thermodynamic model \cite{sellers1975dyngen,walsh2004gas} is a physics-based method in which the station state parameters are calculated from the inlet of the engine to the exit (Fig. \ref{fig:sttn}). The main components, such as the compressor, burner and turbine, are modeled by their characteristic maps acquired from component rig tests, simulation tests or corrected from standard maps. In the design point calculation, the configuration, such as the areas of some stations and the scale factors of the component maps, can be computed from the design point parameters. In the off-design point calculation, engine inlet parameters such as total temperature and pressure, the ambient pressure and some necessary engine control variables, such as N2, A8 and/or WF, are input into the model. Other parameters, such as N1, T4 and the beta values of the component maps, are solved by iterating the thermodynamic balance equations. Once all variables are determined, the state parameters of each station can be inferred by interpolating the component maps and performing thermodynamic calculations. In the thermodynamic model, the parameters of all stations can be calculated and the performance of the engine can be analyzed easily, so it is widely used in the development cycle. For the on-wing performance evaluation of a gas turbine, however, the accuracy decreases as the component characteristics drift because of factors such as individual differences, degeneration and inlet-engine matching. Although correction methods such as characteristic map fitting are applied, the number of corrected variables is too small for the corrected maps to cover the whole flight envelope. \begin{figure} \centerline{\includegraphics[width=\columnwidth]{fig/sttn.pdf}} \caption{The structure and station numbers of the thermodynamic model} \label{fig:sttn} \end{figure} \subsection{Fully Connected Neural Network} \label{sec:fcnn} A fully connected neural network \cite{hecht1992theory} is a data-driven method consisting of an input layer, a few hidden layers and an output layer. Given a set $\Omega=\{(x_i, y_i)\}$ of inputs $x_i$ and corresponding outputs $y_i$, the neural network learns a mapping from $x_i$ to $y_i$ using affine transformations and nonlinear activations. The dataset $\Omega$ is usually split into two subsets, a training set $\Omega_{tr}$ and a testing set $\Omega_{tt}$. In the training phase, a loss function $l(\hat{y_i}, y_i)$ is defined to measure the difference between the network output $\hat{y_i}$ and the real $y_i$ in the training set $\Omega_{tr}$. The total loss over the training set is then \begin{equation} loss=\sum_{(x_i, y_i)\in\Omega_{tr}}l(\hat{y_i}, y_i)=\sum_{(x_i, y_i)\in\Omega_{tr}}l(net(x_i), y_i). \label{eq:trainingloss} \end{equation} Using a gradient descent method, the weights and biases of the network are optimized to reduce this loss. After several iterations the error becomes tolerable if the learning rate is set reasonably. In the testing phase, the difference between $y_i$ in the testing set $\Omega_{tt}$ and its evaluation $\hat{y_i}$ by the trained network is used to measure the performance/accuracy of the network. Compared with the thermodynamic model, the neural network has the following advantages. \begin{itemize} \item Efficient. No iteration is needed in the testing phase, so it is efficient for on-wing real-time calculations. \item Customized.
If trained properly, the network can extract the individual characteristics and degeneration trend from the big flight data. \end{itemize} However, the disadvantages of the neural network are also obvious. \begin{itemize} \item Hard to explain. Only affine transformations and activations are visible in the structure of the neural network, so the physical process behind the calculated output is unclear. \item Hard to train. The training process pays no attention to parameters that are not in the flight data, so the network easily over-fits the training set and performs poorly on the testing set. \item Hard to analyze. Key station parameters that are not recorded in the flight data but are essential to performance analysis cannot be evaluated by data-driven methods. \end{itemize} From \ref{sec:thermo} and \ref{sec:fcnn} we can conclude that the thermodynamic model can calculate all parameters of the key stations but can hardly model the engine's degeneration trend and individual characteristics from on-wing flight data. Neural network methods can construct a customized model from big flight data but are hard to train with so few constraints. Therefore, a hybrid model that takes the thermodynamic model and physics-based constraints as a backbone and uses neural networks to model the component characteristics can combine their advantages while avoiding their disadvantages. This is why we propose our hybrid network. \section{Proposed hybrid network} \label{sec:turbinet} In this section, we propose a hybrid network for on-wing gas turbine modeling. The structure is illustrated in Fig. \ref{fig:TurbiNet}, where the station numbers are the same as in Fig. \ref{fig:sttn}. The input parameters are N1, N2, Wf, Pamb, T2, P2, Ma2, W2 and W25; the outputs are station state parameters such as total temperature and pressure. The component nets (described in Sec. \ref{sec:component net}) take the inlet parameters as inputs and output the exit parameters. Finally, the component nets are cascaded following the turbine's gas flow path in Fig. \ref{fig:sttn}. Similar to the thermodynamic model, our network is a component based model that calculates the state parameters station by station from the inlet of the engine. The main difference is that our network uses a neural network to model every component instead of a component map based thermodynamic calculation. \begin{figure}[htb] \centerline{\includegraphics[width=\columnwidth]{fig/TurbiNet.pdf}} \caption{The structure of our network} \label{fig:TurbiNet} \end{figure} \subsection{Components Modeling} \label{sec:component net} The base component net is a four-layer fully connected network (Fig. \ref{fig:basenet}) with ReLU activation functions, and all component nets inherit from it. The inputs of a given component net are the state parameters of the component inlet plus any other variables necessary to calculate the exit parameters. The outputs are the state parameters of the exit station. For one station, under the assumption of one-dimensional steady flow, only four state parameters are needed to determine the station's other parameters. For convenience of calculation, total temperature T, total pressure P, Mach number Ma and mass flow W are selected as the inputs of the inlet station. Because the exit mass flow can be inferred from the inlet mass flow and the bleeding ratio, the outputs of the exit station are the three state parameters: total temperature T, total pressure P and Mach number Ma.
\begin{equation} T_{ex}, P_{ex}, Ma_{ex}=net(T_{in}, P_{in}, Ma_{in}, W_{in}, vars), \label{eq:basenet} \end{equation} where the function $net(\cdot)$ is a particular component net and $vars$ are the other variables needed to calculate the exit parameters. The following paragraphs analyze, component by component, the inputs required to compute the exit parameters, assuming the component characteristics have been learned by the network. \subsubsection{HPC Net} For the HPC, with a single inlet and a single exit, one additional control variable, usually the spool speed because of its accessibility, is needed to calculate the exit parameters. \subsubsection{LPC Net} The LPC of a turbofan, with one inlet and two exits, differs slightly from the HPC. For the HPC the exit state is determined by the inlet parameters and the spool speed, whereas for the LPC of a turbofan the exit state also depends on the bypass ratio. Therefore the mass flow of the HPC inlet is added to the inputs. \subsubsection{Burner Net} Either the fuel flow or one exit state parameter is needed to model the burner. In most cases the fuel flow is easier to access, so in our network the fuel flow is added to the inputs of the burner net. \subsubsection{HPT and LPT Nets with Cooling System} If a cooling system exists in the HPT or LPT, a simple rotor component net such as the HPC net is not sufficient, because the cooling air from the HPC to the HPT and LPT forms a two-inlet, one-exit structure. To model turbines with a cooling system, the inputs should also include the state parameters and mass flow of the cooling air. \subsubsection{Bypass Net} The bypass is just a simple flow duct; only the three inlet state parameters and the mass flow are required to calculate the exit parameters. \subsubsection{Mixer Net} For the mixer, the station parameters and mass flows of both inlets are needed to determine the exit parameters. \subsubsection{Nozzle Net} The exit parameters of the nozzle depend not only on the inlet parameters but also on the ambient pressure, so the input to the nozzle net also includes the ambient pressure. \subsubsection{Inputs and outputs of our network} The component nets modeled above are cascaded to form our network (Fig. \ref{fig:TurbiNet}). The required inputs of our network are T2, P2, Pamb, Ma2, N1, N2, the mass flow of each component inlet and WF. The outputs are the state parameters of the key stations. The mass flow of each component can be inferred once the two initial mass flows W2 and W25 are given: \begin{equation} \begin{split} W3&=W25\times (1-LNGV_{cl})+WF,\\ W4&=W25\times (1-HPT_{cl}-LNGV_{cl}-HNGV_{cl})+WF,\\ W41&=W4+W25\times HNGV_{cl},\\ W43&=W41,\\ W44&=W4+W25\times (HNGV_{cl}+HPT_{cl}),\\ W45&=W44+W25\times LNGV_{cl},\\ W5&=W45,\\ W6&=W5,\\ W13&=W2-W25,\\ W16&=W13+c2b\times W25,\\ W64&=W2+WF,\\ W8&=W64, \end{split} \label{eq:W_infer} \end{equation} where $HPT_{cl}$ is the HPT cooling proportion of W25 from the HPC, $LNGV_{cl}$ is the LPT NGV cooling proportion of W25 from the HPC, $HNGV_{cl}$ is the HPT NGV cooling proportion of W25 from the HPC and $c2b$ is the leakage proportion of W25 from the core to the bypass. The inputs of our network can therefore be reduced to T2, P2, Ma2, Pamb, N1, N2, W2, W25 and WF. Tab. \ref{tab:component net} gives a brief description of the inputs and outputs of every component net.
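For concreteness, the following is a minimal sketch of the base component net of Eq. \ref{eq:basenet} and of the cascading of two such nets. It is an illustration only: the hidden width of 64, the use of PyTorch and the dummy bleed and fuel-flow numbers are our assumptions, not the actual implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class ComponentNet(nn.Module):
    """Base component net: four fully connected layers with ReLU activations,
    mapping inlet state parameters (plus extra variables) to exit parameters."""
    def __init__(self, n_in, n_out, hidden=64):  # hidden width is an assumption
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(n_in, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_out))
    def forward(self, x):
        return self.layers(x)

# HPC net: (T25, P25, Ma25, W25, N2) -> (T3, P3, Ma3)
hpc_net = ComponentNet(n_in=5, n_out=3)
# burner net: (T3, P3, Ma3, W3, WF) -> (T4, P4, Ma4)
burner_net = ComponentNet(n_in=5, n_out=3)

# Cascading along the flow path (dummy numbers for the bleed ratio and WF):
x25 = torch.randn(1, 5)                    # HPC inlet state of one sample
t3p3ma3 = hpc_net(x25)
w3 = x25[:, 3:4] * (1.0 - 0.02) + 0.01     # W3 = W25*(1 - LNGV_cl) + WF
wf = torch.full((1, 1), 0.01)
t4p4ma4 = burner_net(torch.cat([t3p3ma3, w3, wf], dim=1))
\end{verbatim}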
\begin{figure}[htb] \centerline{\includegraphics[width=0.6\columnwidth]{fig/basenet.pdf}} \caption{The structure of the base component net} \label{fig:basenet} \end{figure} \begin{table} \centering \caption{Inputs and outputs of every component net} \label{tab:component net} \setlength{\tabcolsep}{3pt} \begin{tabular}{p{50pt}|p{80pt}|p{90pt}} \hline component& inputs& outputs\\ \hline LPC& T2 P2 Ma2 \par W2 W25 N1& T25 P25 Ma25 \par T13 P13 Ma13\\ \hline HPC& T25 P25 Ma25 W25 N2& T3 P3 Ma3\\ \hline burner& T3 P3 Ma3 W3 WF & T4 P4 Ma4\\ \hline HPT& T4 P4 Ma4 W4\par T3 P3 Ma3 N2 & T44 P44 Ma44\\ \hline LPT& T44 P44 Ma44 W44\par T3 P3 Ma3 N1&T6 P6 Ma6\\ \hline bypass& T13 P13 Ma13 W13& T16 P16 Ma16\\ \hline mixer& T6 P6 Ma6 W6 \par T16 P16 Ma16 W16& T64 P64 Ma64\\ \hline nozzle& T64 P64 Ma64 W64 Pamb& T8 P8 Ma8\\ \hline the whole network& T2 P2 Ma2 Pamb \par N1 N2 W2 W25 WF& parameters above\\ \hline \multicolumn{3}{p{251pt}}{ The station numbers are defined in Fig. \ref{fig:sttn}.}\\ \end{tabular} \end{table} Once our network is constructed, the next issue is how to train it so that it models the component characteristics. In the following, a two-phase training method is proposed (Fig. \ref{fig:flowchart}). In the Monte Carlo pre-training phase, a Monte Carlo training set is generated with inputs from Monte Carlo simulation and outputs from the thermodynamic model, and our network is trained on this set to model approximate component characteristics. In the flight data training phase, our network is trained with flight data to correct the component characteristics. Thermodynamic constraints are imposed during the optimization to guarantee that the training converges to the turbine's dynamic process. \begin{figure}[htb] \centerline{\includegraphics[width=\columnwidth]{fig/two-phase-training.pdf}} \caption{The two-phase training process of our network} \label{fig:flowchart} \end{figure} \subsection{Monte Carlo Pre-training Phase} \label{sec:mc} Most data-driven gas turbine modeling methods define the loss function as the error between the model's outputs and the real values collected from sensors. Owing to the limited number of gas turbine sensors, such a loss function contains the errors of only a few parameters such as T6. In that case the optimization does its best to regress these parameters, but at the same time the measurement and random errors are also regressed. Moreover, the errors of the individual component nets are transmitted to the final outputs when only such a measurement-error loss is used. To avoid this, a Monte Carlo pre-training phase and a thermodynamic loss are proposed. A dataset including all station parameters is first generated using Monte Carlo simulation and thermodynamic calculation. Then our network is trained to reproduce all the parameters in this dataset under the constraint of the thermodynamic loss. When errors accumulate, the thermodynamic loss increases and the training process adjusts the weights to decrease the overall error. The schematic diagram of the Monte Carlo pre-training phase is shown in Fig. \ref{fig:flowchart} and Fig. \ref{fig:MC}; the bypass part is omitted in Fig. \ref{fig:MC} for simplicity. \begin{figure}[htb] \centerline{\includegraphics[width=\columnwidth]{fig/MC.pdf}} \caption{ The schematic diagram of the Monte Carlo pre-training phase} \label{fig:MC} \end{figure} First, a number of input sets required by the thermodynamic model, usually T2, P2, Pamb and N2, are generated randomly within a reasonable range in the manner of a Monte Carlo simulated test:
\begin{equation} T2_i, P2_i, Pamb_i, N2_i=random(), i=1,2,\cdots,N \label{eq:inputGen} \end{equation} where the subscript $i$ denotes the $i$th sample. Then these variables are fed into the thermodynamic model to calculate all the variables and station parameters required for our network's inputs and outputs, \begin{equation} \begin{split} Ma2_i, N1_i, W2_i, W25_i, WF_i, sttns_i\\ =thermo(T2_i, P2_i, Pamb_i, N2_i), \end{split} \label{eq:thermoCal} \end{equation} where $sttns_i$ are the station parameters required for our network's outputs. When the inputs and outputs of our network have all been computed, the dataset $\Omega_{mc}=\{(x_i, y_i), i=1,2,\cdots,N\}$ can be generated, where $x_i$ is the input vector of the $i$th sample for our network and $y_i$ is the corresponding output vector. Next is the pre-training process, in which the loss function is minimized with a gradient descent method. In traditional network based modeling methods, the loss function is usually defined as the error between the real values and their evaluations by the network. However, to ensure that the optimization converges toward the real dynamic process of the gas turbine and to decrease the influence of error transmission between the component nets, an additional loss measuring the deviation between our network's calculation process and the real thermodynamic process is introduced. If errors accumulate, the network's calculation process deviates from the turbine's dynamics and this thermodynamic loss increases; under its constraint the overall error of our hybrid network therefore does not accumulate. In our network, the loss function thus consists of two parts: the station parameters error loss and the thermodynamic error loss. The first loss is defined as the mean square relative error between the outputs of our network and the corresponding station parameters calculated by the thermodynamic model: \begin{equation} loss_1=\frac{1}{N}\sum_{(x_i, y_i)\in\Omega_{mc}}\frac{(Net(x_i)-y_i)^2}{y_i^2}, \label{eq:TurbiNetloss} \end{equation} where $N$ is the number of samples in $\Omega_{mc}$. The second loss is the thermodynamic error loss measuring our network's deviation from the gas turbine's dynamics. It consists of two parts: the mass flow loss and the power loss. The mass flow loss is the error between the mass flow of all stations computed from our network's outputs and the mass flow inferred from the inputs W2 and W25, \begin{equation} loss_2=\frac{1}{N}\frac{1}{M}\sum_{\Omega_{mc}}\sum_{i\in sttns}\frac{(W_i-Q(T_i,P_i,Ma_i,A_i))^2}{W_i^2}, \label{eq:Weqs} \end{equation} where $M$ is the number of stations in our network, $sttns$ denotes the set of station numbers accessible from our network, $(T_i,P_i,Ma_i)$ are the total temperature, total pressure and Mach number at our network's $i$th output station, $A_i$ is the $i$th station area obtained from the design point calculation of the thermodynamic model, $W_i$ can be inferred from W2 and W25 using Eq.
\ref{eq:W_infer} and $Q(\cdot)$ is the mass flow function \begin{equation} Q(T_i,P_i,Ma_i,A_i)=K\frac{P_iA_i}{\sqrt{T_i}}q(Ma_i). \label{eq:Qfunction} \end{equation} The power loss is the error between the power demanded by the compressors and the power provided by the turbines, \begin{equation} \begin{split} loss_3=&\frac{1}{N}\sum_{\Omega_{mc}}\frac{|\eta_l\Delta H_{LPT}-\Delta H_{LPC}|^2}{\Delta H_{LPT}^2}\\ +&\frac{|\eta_h\Delta H_{HPT}-\Delta H_{HPC}-P_{ext}|^2}{\Delta H_{HPT}^2}, \end{split} \label{eq:powereqs} \end{equation} where $\Delta H_{LPT}$ is the enthalpy drop of the LPT, $\Delta H_{HPT}$ the enthalpy drop of the HPT, $\Delta H_{LPC}$ the enthalpy rise of the LPC, $\Delta H_{HPC}$ the enthalpy rise of the HPC, $\eta_h$ the mechanical efficiency of the HPT, $\eta_l$ the mechanical efficiency of the LPT and $P_{ext}$ the power extraction from the HP spool: \begin{equation} \begin{split} \Delta H_{LPT}&=H44+H_{LNGVcl}-H6,\\ \Delta H_{HPT}&=H4+H_{HNGVcl}-(H44-H_{HPTcl}),\\ \Delta H_{HPC}&=H3+H_{LNGVcl}-H25,\\ \Delta H_{LPC}&=H13+H25-H2,\\ \end{split} \label{eq:enthalpy} \end{equation} where all the enthalpies above can be calculated from W2 and W25 plus the station parameters output by our network. Our network is then trained with a gradient descent method to reduce the sum of these three losses. If trained properly, our network learns approximate component characteristics and performs as well as the thermodynamic model in calculating the station parameters. \subsection{Flight Data Training Phase} \label{sec:fdtrain} Because the Monte Carlo dataset is generated by the thermodynamic model, the accuracy of our network cannot exceed that of the thermodynamic model after the Monte Carlo pre-training phase. To improve the accuracy, the flight data training phase is proposed to correct the component characteristics by adjusting our network's weights with flight data. First, a quasi-steady state is defined as a state in which the amplitude of N1 varies by less than $1\%$ over 3 seconds, without considering the fluctuation of other variables such as Mach number, altitude and inlet temperature. This is a very relaxed quasi-steady-state criterion and poses great challenges to the performance calculation. Next, quasi-steady-state points are extracted from a particular engine over a period of time to form the dataset $\Omega_{fd}=\{(\overline{x_i}, y_i), i=1,2,\cdots,N\}$. W2 and W25 are usually not measured, so the inputs to our network are not complete. To remedy this, we use the thermodynamic model to calculate these two variables (they can also be obtained through our network's iterating phase proposed in \ref{sec:iter}): \begin{equation} W2_i, W25_i=thermo(\overline{x_i}). \label{eq:w2w25} \end{equation} We then add these two variables to $\overline{x_i}$ to form the final flight dataset $\Omega_{fd}=\{(x_i, y_i), i=1,2,\cdots,N\}$. The loss function of the flight data training phase also consists of two parts, as introduced in \ref{sec:mc}. The main difference lies in the parameters error loss. Because only a few station parameters are available in the flight data, not all the station parameters output by our network contribute to the loss. The parameters error loss is defined as \begin{equation} loss_1=\frac{1}{N}\sum_{(x_i, y_i)\in\Omega_{fd}}\frac{(S(Net(x_i))-y_i)^2}{y_i^2}, \label{eq:fdloss} \end{equation} where $S(Net(x_i))$ extracts from the network outputs those that correspond to $y_i$ in the flight data. The thermodynamic error loss is the same as in the Monte Carlo pre-training phase, defined in Eq.
\ref{eq:Weqs} and \ref{eq:powereqs}. Our network is then trained with a gradient descent method to reduce the sum of these losses over the flight dataset. After the flight data training phase, the component characteristics have been corrected with the on-wing flight data, so the modeling accuracy is higher than that of our network trained only on the Monte Carlo dataset, and higher than that of the thermodynamic model. \subsection{Iterating Phase} \label{sec:iter} Usually, among the inputs of our network, T2, P2, Pamb, Ma2, N1, N2 and WF can be acquired from flight data. However, most flight data do not record the mass flows of the LPC and HPC inlets, W2 and W25. In \ref{sec:fdtrain} we took a compromise approach and used the thermodynamic model to calculate them. In this section, an optimization method is proposed that removes the dependence on the thermodynamic model during the testing phase. In the iterating phase, the weights and biases of our network are fixed and no longer optimized; W2 and W25 are the only two variables to be optimized. The optimization goal consists of two parts: the mass flow error and the power error. The first goal is to minimize the mass flow error described in Eq. \ref{eq:Weqs}, which can be rewritten as \begin{equation} \begin{split} loss_1&=f_1(W2,W25,Net(vars))\\ &=\frac{1}{N}\frac{1}{M}\sum_{(x_i, y_i)\in\Omega_{fd}}\sum_{i\in sttns}\frac{(W_i-Q(T_i,P_i,Ma_i,A_i))^2}{W_i^2} \end{split} \end{equation} where $Net(\cdot)$ denotes our network and $vars$ denotes its inputs. The second goal is to minimize the power error described in Eq. \ref{eq:powereqs}, which can likewise be rewritten as \begin{equation} \begin{split} loss_2&=f_2(W2,W25,Net(vars))\\ &=\frac{1}{N}\sum_{(x_i, y_i)\in\Omega_{fd}}\frac{|\eta_l\Delta H_{LPT}-\Delta H_{LPC}|^2}{\Delta H_{LPT}^2}\\ &+\frac{|\eta_h\Delta H_{HPT}-\Delta H_{HPC}-P_{ext}|^2}{\Delta H_{HPT}^2}. \end{split} \end{equation} We then minimize the following objective with respect to W2 and W25: \begin{equation} \arg\min_{W2,W25}f_1(W2,W25,Net(vars))+f_2(W2,W25,Net(vars)). \label{eq:min} \end{equation} Given initial values for W2 and W25, the optimal W2 and W25 are obtained through an iteration process. \section{Test case} In this section, the accuracy of our network is tested on flight data. The flight data were gathered from a two-spool turbofan over a period of time. N1, N2, WF, T2, P2 and Pamb are available as inputs of our network. W2 and W25 can be calculated either by the thermodynamic model as introduced in \ref{sec:fdtrain} or by the iteration method in \ref{sec:iter}. Because neither Ma2 nor PS2 is measured in the flight data, the Ma2 input of our network can be neither accessed nor computed. In this case we simply omit the Ma2 input of the LPC net, under the assumption that the exit parameters of the LPC are determined by the other inputs as long as the inlet area remains unchanged. Only one station parameter, T6, is recorded in the flight data, so we use it to measure the accuracy of our network. Two datasets are used for the training and testing process: $\Omega_{mc}$ is generated with the Monte Carlo method introduced in \ref{sec:mc} and $\Omega_{fd}$ is gathered as introduced in \ref{sec:fdtrain} and \ref{sec:iter}. \subsection{Monte Carlo Training Results} Our network is first trained on $\Omega_{mc}$ generated with the Monte Carlo method. The number of samples in $\Omega_{mc}$ is $15360$ and the ratio of the training set to the test set is $0.8$.
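To illustrate how the station-parameter loss and the thermodynamic losses (mass flow and power) are combined during both training phases, the following sketch is given under our own simplifying assumptions: PyTorch, a standard compressible flow function for $q(Ma)$ with $\gamma=1.4$, and dummy tensors standing in for the network outputs and the mass-flow bookkeeping. It is not the exact implementation.
\begin{verbatim}
import torch

GAMMA = 1.4   # assumed ratio of specific heats

def q(Ma):
    # standard compressible flow function; the exact form of q(Ma) is assumed here
    return Ma * (1 + 0.5 * (GAMMA - 1) * Ma**2) ** (-(GAMMA + 1) / (2 * (GAMMA - 1)))

def mass_flow_loss(W, T, P, Ma, A, K=1.0):
    # loss_2: relative error between bookkept mass flow W and Q = K*P*A*q(Ma)/sqrt(T)
    Q = K * P * A * q(Ma) / torch.sqrt(T)
    return torch.mean(((W - Q) / W) ** 2)

def power_loss(dH_LPT, dH_LPC, dH_HPT, dH_HPC, P_ext, eta_l=0.99, eta_h=0.99):
    # loss_3: power balance on the LP and HP spools
    lp = ((eta_l * dH_LPT - dH_LPC) / dH_LPT) ** 2
    hp = ((eta_h * dH_HPT - dH_HPC - P_ext) / dH_HPT) ** 2
    return torch.mean(lp + hp)

# Dummy tensors standing in for one batch of network outputs and bookkeeping:
n = 8
W  = torch.rand(n) + 50.0
T  = torch.rand(n) * 100 + 600.0
P  = torch.rand(n) * 1e5 + 5e5
Ma = torch.rand(n) * 0.4 + 0.3
A  = torch.full((n,), 0.05)
station_loss = torch.tensor(0.01)            # stands in for loss_1
total = station_loss + mass_flow_loss(W, T, P, Ma, A) \
        + power_loss(torch.rand(n) + 2.0, torch.rand(n) + 1.5,
                     torch.rand(n) + 3.0, torch.rand(n) + 2.5, torch.rand(n) * 0.1)
# In training, `total` is backpropagated through the cascaded component nets
# and the weights are updated by gradient descent (e.g. torch.optim.Adam).
\end{verbatim}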
The number of training epochs is $500$, the learning rate is $10^{-3}$ with $10\%$ decay every $100$ epochs, and the batch size is set to $256$. The training loss is defined as the sum of Eq. \ref{eq:TurbiNetloss}, \ref{eq:Weqs} and \ref{eq:powereqs}. The accuracy is measured by the maximum relative error over all samples of the training set, \begin{equation} max\_error=\max_{i\in \omega}\frac{|\hat{y_i}-y_i|}{y_i}, \label{eq:maxerror} \end{equation} where $y_i$ is a given station parameter of sample $i$ and $\omega$ is the index set of the dataset $\Omega_{mc}$. Fig. \ref{fig:errorstrainprocess} shows the error trends of Ma3, P4 and T6 during the training process. We can see that the training converges after about 200 epochs. The maximum relative errors of all station parameters after Monte Carlo training are shown in Fig. \ref{fig:MCerrors}. The Ma error of station 25 (the LPC exit) is 3.2\%, which is clearly larger than that of the other stations; this is because the Ma2 input of the LPC net is omitted. Except for Ma25, the errors of the other parameters are less than 1.5\%. \begin{figure*} \centering \subfigure[Max error of Ma3]{\includegraphics[width=.45\columnwidth]{fig/Ma3.eps}} \subfigure[Max error of P4]{\includegraphics[width=.45\columnwidth]{fig/P4.eps}} \subfigure[Max error of T6]{\includegraphics[width=.45\columnwidth]{fig/T6.eps}} \subfigure[Max error of T8]{\includegraphics[width=.45\columnwidth]{fig/T8.eps}} \caption{Error trends of some parameters during the training process} \label{fig:errorstrainprocess} \end{figure*} \begin{figure}[htb] \centerline{\includegraphics[width=0.9\columnwidth]{fig/errors.eps}} \caption{The maximum relative errors after Monte Carlo training} \label{fig:MCerrors} \end{figure} \subsection{Flight Data Training Results} After being trained on the Monte Carlo dataset, our network is further trained with the flight dataset as introduced in \ref{sec:fdtrain}. The flight dataset is split into two subsets: the training set $\Omega_{fd-tr}$ used to train our network and the testing set $\Omega_{fd-tt}$ used to measure the accuracy and generalization of our network. There are $6,970$ samples in $\Omega_{fd-tt}$ and $20,000$ samples in $\Omega_{fd-tr}$. Because only one station parameter, T6, is available in the flight data, the parameters loss is modified into \begin{equation} loss=\frac{1}{N}\sum_{i\in \omega}\frac{(\hat{T6_i}-T6_i)^2}{T6_i^2}, \label{eq:T6loss} \end{equation} where $\hat{T6_i}$ is our network's evaluation of T6 for the $i$th sample in $\Omega_{fd-tr}$, $\omega$ is the index set of the flight training dataset $\Omega_{fd-tr}$ and $N$ is the number of samples in $\Omega_{fd-tr}$. To compare the performance before and after flight data training, we first evaluate T6 on the flight testing dataset with our network trained only on the Monte Carlo dataset; T6 is then evaluated with our network further trained on the flight training dataset. The histogram of the T6 relative errors over all samples in the testing set is shown in Fig. \ref{fig:T6errorsMCFD}. Before training on flight data, the T6 relative error distribution is nearly uniform around 6\% and the maximum relative error reaches 13.8\%. After further training on flight data, the T6 relative errors are mainly concentrated below 4\% and the maximum error is 7.1\%. This confirms that after flight data training our network is able to correct the component characteristics and obtain a more accurate evaluation. Another feature visible in Fig. \ref{fig:T6errorsMCFD} is that the error distribution has a long-tail shape.
This long-tail error distribution is much more suitable for gas turbine monitoring than a uniform distribution: when the T6 error distribution over a period deviates from this long-tail shape, attention or inspection is warranted. However, the criterion for deciding whether the performance has degenerated or a latent failure has occurred is beyond the scope of this paper. \begin{figure}[htb] \centerline{\includegraphics[width=0.9\columnwidth]{fig/T6errorsMCFD.eps}} \caption{T6 error comparison between our network trained only on the MC dataset and further trained on the FD dataset} \label{fig:T6errorsMCFD} \end{figure} \subsection{Iterating Results} Our network with the iterating phase proposed in \ref{sec:iter} is also tested. The results are shown in Fig. \ref{fig:T6errorsIter}. The error distributions with and without the iteration method are nearly the same, which verifies the effectiveness of the iteration method introduced in \ref{sec:iter}. Our network can therefore be made independent of the thermodynamic model, which requires more iterative steps and computation time. This makes our network more efficient and gives it the prospect of being embedded in airborne equipment for real-time calculations. \begin{figure}[htb] \centerline{\includegraphics[width=.9\columnwidth]{fig/T6errorsIter.eps}} \caption{T6 error comparison between the iteration and no-iteration methods} \label{fig:T6errorsIter} \end{figure} \subsection{Comparisons with Thermodynamic Model} We compare the T6 errors on the flight testing dataset of our network trained only on the Monte Carlo dataset with those of the map fitting based thermodynamic model used in PROOSIS \cite{alexiou2007advanced, bala2007proosis}. The results are shown in Fig. \ref{fig:T6errorsThermoTurb}. Because our network is trained only on the Monte Carlo dataset, which is itself generated by the thermodynamic model, and is not trained on flight data, its accuracy cannot be better than that of the thermodynamic model. However, Fig. \ref{fig:T6errorsThermoTurb} shows that the error distributions of the thermodynamic model and of our network trained only on the Monte Carlo dataset are similar. This indicates that our network pre-trained on the Monte Carlo dataset is able to reconstruct the component characteristics with the help of the thermodynamic model. \begin{figure}[htb] \centerline{\includegraphics[width=.9\columnwidth]{fig/T6errorsThermoTurb.eps}} \caption{T6 error comparison between the thermodynamic model and our network trained on the MC dataset} \label{fig:T6errorsThermoTurb} \end{figure} Furthermore, we compare our network further trained on the flight training dataset with the thermodynamic model. The results are shown in Fig. \ref{fig:T6errorsThermoTurbFD}. The maximum error of our network is 7.1\%, which is 5.2 percentage points lower than that of the thermodynamic model. This indicates that in the flight data training phase our network can extract the engine's individual characteristics and degeneration trend from the flight data and thus evaluate the station parameters more accurately. \begin{figure}[htb] \centerline{\includegraphics[width=.9\columnwidth]{fig/T6errorsThermoTurbFD.eps}} \caption{T6 error comparison between the thermodynamic model and our network further trained on the FD dataset} \label{fig:T6errorsThermoTurbFD} \end{figure} \subsection{Comparisons with Neural Network Model} To compare our network with a pure neural network model, a neural network (denoted TNN) of equivalent model capacity is constructed to predict T6 from the flight data. This network has $28$ hidden layers with the same hidden dimension as our network.
The inputs are the same as our network's and the output is T6. The training loss is defined as in Eq. \ref{eq:T6loss}. Because no other station parameters can be evaluated by TNN, the thermodynamic error losses of Eq. \ref{eq:Weqs} and \ref{eq:powereqs} cannot be used in its loss function. TNN is trained and tested on the same flight dataset under the same training configuration as our network. The comparison between our network and TNN is shown in Fig. \ref{fig:TNN_turbinet}. The maximum T6 relative error of TNN reaches 15.8\%. The accuracy of TNN is too low to evaluate the performance of the engine, which confirms from another perspective the value of our network's thermodynamic constraints and two-phase training process. \begin{figure}[htb] \centerline{\includegraphics[width=.9\columnwidth]{fig/TNN_turbinet.eps}} \caption{T6 error comparison between the TNN model and our network} \label{fig:TNN_turbinet} \end{figure} \subsection{Accuracy comparisons of different methods} The comparison of the different methods is shown in Tab. \ref{tab:comparison}. The accuracy is measured by the relative error (RE) between $T6$ in the flight data and its evaluation $\hat{T6}$ by the different methods, \begin{equation} error=\frac{|\hat{T6}-T6|}{T6}. \label{eq:relerror} \end{equation} \begin{table}[htb] \centering \caption{Accuracy comparison of different methods} \label{tab:comparison} \begin{threeparttable} \begin{tabular}{lllllll} \hline methods & mean & std & 25\% & 50\% & 75\% & max \\ \hline PROOSIS & 0.046 & 0.030 & 0.018 & 0.044 & 0.072 & 0.123 \\ TNN & 0.087 & 0.048 & 0.049 & 0.086 & 0.126 & 0.158 \\ our network & 0.014 & 0.011 & 0.005 & 0.012 & 0.019 & 0.071 \\ our network$\dagger$$^{\rm a}$ & 0.014 & 0.011 & 0.006 & 0.012 & 0.019 & 0.071 \\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item[$^{\rm a}$] our network$\dagger$ is the iteration version described in \ref{sec:iter}. \end{tablenotes} \end{threeparttable} \end{table} In the table head, 'mean' is the mean RE over all testing samples, 'std' is the standard deviation of the RE, '25\%', '50\%' and '75\%' are the 25th, 50th and 75th percentiles of the RE, and 'max' is the maximum RE over all testing samples. Our network achieves the best accuracy in every measure. In the 'max' measure, the error of our network is about 5 percentage points lower than that of the thermodynamic based model calculated in PROOSIS and about 8 percentage points lower than that of the pure data-driven network with similar capacity. The iteration version described in \ref{sec:iter} achieves an accuracy similar to the plain version of our network described in \ref{sec:fdtrain}, verifying the effectiveness of the iteration method. \section{Conclusion} In this paper, we propose a thermodynamic based and data driven hybrid network for gas turbine modeling. All components are modeled as neural networks and their characteristics are extracted in a data-driven way from Monte Carlo simulated data and flight data. A thermodynamic loss is introduced to ensure that the training process converges to the turbine's dynamics and to constrain the error transmission between the component nets of the hybrid network. The experiments show that the accuracy of our hybrid network, measured by the maximum T6 relative error, reaches about 7\%, which is 5\% better than the map fitting based thermodynamic model and 8\% better than a pure data-driven method with similar model capacity, verifying the effectiveness of the proposed hybrid model. \bibliographystyle{IEEEtran}
\section{Introduction} Amphiphilic molecules, such as fatty acids ($CH_3(CH_2)_nCOOH$) or phospholipids, form monomolecular layers on water surfaces in which the hydrophilic headgroups are immersed in water while the hydrophobic alkane chains point outwards \cite{BK37}. This phenomenon is utilized in the Langmuir-Blodgett coating technique, in which, under suitable conditions of $pH$ and temperature, monolayer or multilayer films can be transferred to a solid substrate by successively dipping the substrate into a compressed layer on a water surface. Langmuir-Blodgett films are highly ordered and their molecular ordering depends on the structure of the compressed layer on the water surface prior to dipping \cite{WV89}. The ordering of Langmuir-Blodgett films plays an important role in applications ranging from microlithography and electrooptics to biochemical sensing and tribology. Many experiments have in recent years investigated the molecular structure and phase behaviour of Langmuir monolayers (see \cite{WV89,moe90,KD92,ABKR94} for reviews). These studies were spurred by the possibility of combining modern x-ray diffraction methods with more traditional thermodynamic measurements \cite{BKP91,KBBPMAK91}. The rodlike non axially symmetric molecules give rise to a large variety of phases and phase transitions \cite{KD92} with a rich polymorphism. Despite differences in detail, monolayers of fatty acids, phospholipids, alcohols and esters exhibit a similar phase diagram \cite{BP90,BKP91,ABKR94}. This fact alone suggests that the structural similarities between amphiphilic molecules are responsible for the similarities observed in the phase diagrams. Hence the chemical and atomistic differences between different molecules may not be necessary to understand the phase diagram of amphiphiles. This observation motivates our studies of highly idealized coarse grained models to elucidate the mechanisms underlying various transitions. Among these transitions the tilting transition in the liquid condensed phases has been our main interest. We study idealized models in order to separate the influence of various coarse parameters (such as chain flexibility, symmetry of the molecules, mobility and geometry of head groups etc.) from that of finer details (such as interaction potentials, cis-trans conformations, hydrogen atoms etc.) on the tilting transition in these monolayers. Given the advances in numerical simulation techniques there have already been numerous simulation studies of chemically realistic \cite{HR88,EB88,BCK88,HK89,BK90,MTKQ91,BC91,KTO92,CSR92,SCR92,ABS93} and idealized coarse grained models \cite{SRG86,KKB90,MKS90,hil92c,hil93a,HLB93} for the tilting transition in amphiphilic monolayers. Chemically realistic models with much atomistic detail have an advantage in their ability to describe dense packing effects more realistically. Molecular dynamics simulations of such models are very useful for elucidating the behaviour of such models away from phase transitions. Near a phase transition Monte Carlo methods can be more efficient for obtaining the statistical averages. Highly idealized coarse grained models may be simpler (and sometimes faster) to simulate but they often suffer from neglecting some aspects of the physics. For example, treating the alkane chains as rigid cylindrical rods, originally suggested in \cite{SRG86}, disregards the melting transition of the layer, and the intrachain conformational disorder.
Such rigid rods grafted onto a triangular lattice were studied extensively in \cite{KKB90,hil92c,hil93a} using continuum Monte Carlo calculations. While this model exhibits a tilting transition it cannot show a liquid expanded phase or the restructuring and melting of the head group lattice. Although the grafting of the head groups in this model severely limits the applicability of the model to Langmuir layers the model is useful and realistic for chains that are permanently grafted to a solid substrate \cite{WV89}. Chemically realistic as well as coarse grained models suffer from the general limitation that the effective potentials are only inaccurately known \cite{bin95}. Layers of flexible chain molecules with mobile head groups \cite{hil94f,hil94h} are intermediate between chemically realistic united atom models and idealized coarse grained models such as rigid rods or even lattice models \cite{HLB93}. Such models were studied in \cite{hil94f,hil94h} using continuum Monte Carlo simulations at fixed volume. It was found that the condition of fixed volume imposes a symmetry breaking field which can suppress the restructuring of the head group lattice during the tilting transition. This was concluded from the presence of metastable boundary induced defect structures \cite{HHB95}. While such defects are known to exist in experiment and are therefore of interest in their own right \cite{FBK94}, they make the extraction of thermodynamic equilibrium properties from small scale simulations very difficult. Hence it is of great interest to perform also simulations at constant pressures in which the symmetry breaking field is absent. In the present paper we report simulations at constant pressure using an algorithm which is compatible with the symmetry breaking during tilting transitions. To facilitate comparison we investigate a model similar to the one studied at constant area in \cite{hil94f,hil94h}. Our main objective is twofold: Firstly we would like to confirm the existence of an expected orthorhombic distortion of the head group lattice during a tilting transition in a coarse grained model. Such observations are essential for the interpretation of constant volume simulations in which the distortion is suppressed. For example, the impossibility to distort or contract the simulation box in such simulations combined with periodic boundary conditions can lead to the appearance of negative pressures at low densities if the interaction potential has an attractive part. Secondly, we want to understand the selection mechanisms for nearest versus next nearest neighbour tilt direction and the rather high transition temperature of the first order melting transition observed in previous constant volume simulations of the same model \cite{hil94f,hil94h,haa95,HHB95}. In \cite{hil94f,hil94h} the fluidized phase appeared at temperatures roughly four times higher than the tilting transition temperature. This seems to rule out the possibility of identifying the molten state with the liquid expanded phase of Langmuir monolayers. An important result of the present study is to show that in constant pressure simulations the fluidized phase appears already at much lower temperatures. \section{Model description} Each amphiphilic molecule is represented through a chain of seven effective monomers labeled $i=0,...,6$ with $i=0$ corresponding to the hydrophilic head group of the amphiphile. 
The cartesian coordinate system in threedimensional space is chosen such that the headgroup $i=0$ is restricted to move in the $z=0$ plane representing the twodimensional substrate. All the effective monomers are connected through a finitely extendable nonlinear elastic (FENE) potential \begin{equation} V_{bl}(d) = \left\{ \begin{array}{r@{\quad:\quad}l} -\displaystyle\frac{c_{bl}d_{bl}^2}{2} \ln\left(1-\displaystyle\frac{(d-d_0)^2}{d_{bl}^2}\right) & \mbox{for}\quad |d-d_0| < d_{bl} \\[12pt] \infty & \mbox{for}\quad |d-d_0|\geq d_{bl} \end{array} \right . \end{equation} where $d$ is the distance between monomers, and $c_{bl}>0$ is the spring constant. The FENE potential is harmonic at its minimum $d_0$ but the bonds cannot be stretched beyond a maximum length determined by $d_{bl}$. The stiffness of the rodlike molecules whose alkane chains are predominantly in an all trans conformation is incorporated by a bond angle potential \begin{equation} V_{ba}(\theta_i) = c_{ba}(1+\cos(\theta_i)) \end{equation} where $c_{ba}$ is the force constant and $\theta_i$ is the angle formed by the three monomers $i-1,i,i+1$. Note that $V_{ba}$ is a three body interaction. All monomers except nearest neighbours within the same chain interact through a Lennard-Jones potential. The Lennard-Jones potential is truncated and shifted such that it vanishes at the truncation point. If $\epsilon$ is the interaction strength and $\sigma$ its range then \begin{equation} V_{LJ}(d) = \left\{ \begin{array}{r@{\quad:\quad}l} \epsilon((\sigma/d)^{12}-2(\sigma/d)^6-(1/d_{LJ})^{12}+2(1/d_{LJ})^6) & \mbox{for}\quad d \leq d_{LJ}\sigma \\[12pt] 0 & \mbox{for}\quad d > d_{LJ}\sigma \end{array} \right . \end{equation} where $d=d_{LJ}\sigma$ is the truncation point. The model above is similar to the model used for the constant volume simulations in \cite{hil94f,hil94h}. It extends and generalizes an earlier study of a system consisting of perfectly rigid rods grafted to a hexagonal lattice and interacting with Lennard-Jones interactions \cite{hil92c,hil93a}. The main difference with \cite{hil94f,hil94h} consists in simulating at constant spreading pressure rather than constant substrate area, and in replacing the cutoff harmonic potential for the bondlengths with a FENE potential. The earlier simulations at constant volume were found to lead to boundary induced defects \cite{HHB95}. These can significantly perturb the long range correlations at phase transitions. Most molecular dynamics simulations are similarly carried out at constant volume or aspect ratio of the simulation box. Simulations at constant pressure on the other hand reproduce the experimental situation more exactly, and allow the defects to relax via distortions of the head group lattice. \section{Relation between the model and experiment} \label{relation} The model presented in the previous section is not intended to be a chemically realistic model for Langmuir monolayers. On the contrary we wish to study a highly idealized coarse grained model in order to understand the influence of varying degrees of idealization on the mechanisms responsible for the tilting transitions in the condensed phases. The model has resulted from previous systematic investigations of even more idealized models \cite{KKB90,hil92c,hil93a,HLB93,hil94f,hil94h}. Although the model is not intended to be chemically realistic it is of interest to connect the model with reality through a rough mapping of length and energy scales. 
Such a rough mapping can be helpful in deciding whether the phase transitions of the model may be related to the phase transitions of the experimental system. If a typical fatty acid with chain lengths from 12 to 30 carbons is represented by a model chain with seven effective monomers then each effective monomer of our model chains must represent roughly between two and five methyl groups. Hence, in such a mapping the diameter of the effective monomers is of order 10\AA, and the Lennard-Jones interaction range $\sigma$ is of order 5\AA. For the energy scale a value for $\epsilon$ of order 200K is a very rough estimate. The simple attachment of the head groups to the substrate represents an important overidealization of our model. Experimentally the form and interactions of the head groups appear to play an important role. While our model is more realistic than fully discrete lattice models \cite{HLB93} it contains less chemical detail than united atom models which have been investigated using molecular dynamics simulations \cite{HR88,BK90,BC91}. The main difference is that our molecules are axially symmetric while the zigzag structure of united atom models destroys this symmetry. Consequently we do not expect the model to reproduce phases associated with a herringbone ordering. We feel, however, that our model while neglecting these features represents a good compromise between computational efficiency and chemical realism. \section{Simulation details} Lengths and energies in our simulations will be measured in dimensionless units defined by setting $\sigma=1$ and $\epsilon=1$. In these units the other parameters for the simulations are chosen as $d_0=0.7,d_{bl}=0.2,d_{LJ}=2.0$ and $c_{bl}=100, c_{ba}=10$ which ensures that the chains do not intersect. The choice $c_{bl},c_{ba}\gg \epsilon$ produces stiff rodlike chains. The system, consisting of 144 chain molecules, is confined to an area of sidelengths $L_x,L_y$ in the $z=0$ plane. The height of the simulation box $L_z\gg 6d_0$ is chosen much larger than the length of the chain molecules. A simplified groundstate analysis of the model was performed previously \cite{hil94f,hil94h,haa95}. It predicts a tilting transition between a tilted and an untilted state, and highlights the importance of packing effects. The continuum model is simulated in a canonical ensemble in which temperature, spreading pressure and particle number are kept constant. The simulation is carried out using a Metropolis Monte Carlo procedure in which individual monomer positions and the area $A=L_xL_y$ of the simulation box are updated in continuous space. The continuous position space requires to use methods adapted from molecular dynamics simulations for evaluating the interaction energies. The maximal jump distance of monomers in a single move is chosen to optimize the acceptance rate while ensuring that the chains cannot intersect each other. The cutoff in the interaction potentials allows us to use an adaptation of the link cell algorithm to keep track of interacting neighbouring monomers. The simulation box is subdivided into cells of sidelength larger than or equal to the interaction radius. Particles in a subcell can only interact with particles in neighbouring cells. The cell contents are stored using specially designed linked pointer lists. The algorithm was developed and tested for constant volume simulations \cite{hilxx,hil94f,hil94h}, and its speed depends strongly on the density and the interaction ranges. 
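For readers who wish to experiment with the model, the three potentials can be evaluated as in the following sketch, written in reduced units with the parameter values quoted above (the sketch and the use of Python are ours and incidental to the model itself):
\begin{verbatim}
import numpy as np

# reduced units: sigma = epsilon = 1
d0, d_bl, d_LJ = 0.7, 0.2, 2.0
c_bl, c_ba = 100.0, 10.0

def v_bond(d):
    # FENE bond length potential; infinite for |d - d0| >= d_bl
    x = d - d0
    if abs(x) >= d_bl:
        return np.inf
    return -0.5 * c_bl * d_bl**2 * np.log(1.0 - (x / d_bl)**2)

def v_angle(theta):
    # bond angle potential for the angle at monomer i (three body term)
    return c_ba * (1.0 + np.cos(theta))

def v_lj(d, eps=1.0, sigma=1.0):
    # truncated and shifted Lennard-Jones potential, cut off at d_LJ * sigma
    if d > d_LJ * sigma:
        return 0.0
    sr6 = (sigma / d)**6
    shift = (1.0 / d_LJ)**12 - 2.0 * (1.0 / d_LJ)**6
    return eps * (sr6**2 - 2.0 * sr6 - shift)

# minima: bond potential at d = d0, angle potential at theta = pi (straight chain),
# Lennard-Jones potential close to -1 at d = sigma
print(v_bond(0.7), v_angle(np.pi), v_lj(1.0))
\end{verbatim}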
The partition function $Z_{N\Pi_A T}$ of the isobaric ensemble is obtained from the isochoric partition function $Z_{NAT}$ through Laplace transformation. Denoting the surface area by $A$ and the spreading pressure by $\Pi_A$, it reads \begin{equation} Z_{N\Pi_A T} = \int_0^\infty\int\exp\left(-\beta(\Pi_A A + V({\bf r}))\right)\;d{\bf r}\;dA \end{equation} where $\beta=1/(k_B T)$, $k_B$ is the Boltzmann constant, $d{\bf r}$ denotes integration over all coordinates, and all constant prefactors have been suppressed. Expectation values for an observable $X({\bf r})$ in this ensemble are calculated as \begin{equation} \langle X \rangle = \frac{1}{Z_{N\Pi_A T}} \int_0^\infty\int X({\bf s}) \exp\left(-\beta(V({\bf s})+\Pi_A A)+N\ln A\right) \;d{\bf s}\;dA \end{equation} in terms of the scaled coordinates ${\bf s}=(r_x/L_x,r_y/L_y,r_z)$ of all the particles. The expectation values can be calculated with the usual Metropolis scheme by introducing an effective Hamiltonian \begin{equation} H_{eff}({\bf s},L_x,L_y) = V({\bf s}) + \Pi_A L_xL_y - Nk_BT\ln(L_xL_y) \end{equation} containing two additional degrees of freedom $L_x$ and $L_y$. Attempts to change the area $A=L_xL_y$ of the simulation box are carried out after each Monte Carlo step (1MCS = 1 update/monomer) of regular coordinate updates. A maximal step size of $\Delta L_x=\Delta L_y=\pm 0.01$ for the area moves was found to yield optimal acceptance rates for the chosen system size. The evaluation of the energy difference $\Delta V({\bf s})$ for an area move involves a recalculation of all particle interaction potentials because the coordinates are rescaled differently in the $x$- and $y$-directions. This makes area moves computationally expensive and leads to a deceleration of the constant pressure algorithm by a factor of roughly two compared with constant area simulations. The simulations are started from an untilted configuration of $12\times 12$ chain molecules. The chains are positioned on a triangular lattice and directed perpendicular to the substrate. If a tilted initial configuration with a hexagonal headgroup lattice is used at low temperatures, the system was found to evolve into metastable states with lifetimes well beyond $10^4$ Monte Carlo steps. In all simulations periodic boundary conditions are applied in the $xy$-directions. In each run the system was first equilibrated for 20000 Monte Carlo steps (updates per monomer). Subsequently averages were recorded every 500th MCS over a period of 50000 MCS. These calculations consumed several hundred hours of CPU time on IBM RS6000 370 equipment. The twodimensional pressure tensor was estimated from the forces on each particle using the virial theorem and the minimum image convention appropriate for periodic boundary conditions \cite{AT87}. While the contribution from the bondlength and Lennard-Jones interactions to the pressure can be obtained as usual, the contribution from the three body bond angle interaction $V_{ba}$ has to be considered separately. It is found that the contribution of the bond angle interaction $V_{ba}$ can be neglected. \section{Results} At low temperatures, $T=0.2$, the sidelengths $L_x,L_y$ fluctuate very little, and the system organizes itself into a configuration consisting of 12 rows and 12 columns of chain molecules. At $T=2.0$ the sidelengths fluctuate more strongly, and these fluctuations are anticorrelated in time. Increases in $L_x$ are accompanied by decreasing $L_y$ and vice versa, yielding a roughly constant area $A$.
In general the measured pressure agrees well with the applied pressure. Small deviations are attributed to finite size effects, and the fact that the pressure is measured from the virial theorem rather than from the free energy. Figure \ref{configs} shows snapshots of equilibrated \marginpar{\fbox{\em Fig. \ref{configs}}} configurations of 144 chains at temperatures $T=0.2,1.0,2.0$ and pressures $\Pi_A=10,100$. Each effective monomer is represented as a sphere of radius $\sigma/2$. At high pressures $\Pi_A=100$ the chains are compressed into a hexagonal arrangement in which all chains stand perpendicular to the substrate. At the low temperature $T=0.2$ and high pressure the monolayer surface shows a nearly regular modulation resulting from extension and contraction of the chains along the molecular axis. At higher temperatures the surface is rough. At low pressures the chains assume a uniformly tilted crystalline arrangement which progressively disorders as the temperature is increased. At $T=2.0$ the layer has molten into a fluidlike phase. The crystalline order and the melting of the surface layer can be seen from Figure \ref{dichte} showing \marginpar{\fbox{\em Fig. \ref{dichte}}} the density profiles along the $z$ direction. Figure \ref{dichte} also shows results at an intermediate pressure $\Pi_A=30$. At low temperature $T=0.2$ the system is crystalline. For $\Pi_A=10$ (and $T=0.2$) there are six pronounced maxima in the density profile. The position of the last maximum, corresponding to the tail group $i=6$, is significantly shifted from the value $6 d_0 = 4.2$ corresponding to untilted chains. This indicates that the chains are tilted. At higher pressures $\Pi_A=30$ and $\Pi_A=100$ (and low temperatures $T=0.2$) the density profile exhibits $11$ and $12$ maxima, respectively, instead of $6$. This fact together with the positions of the maxima shows that the chains are alternatingly stretched and contracted along the molecular axis. The concomitant doubling of the number of monomer layers has also been observed in simulations at constant volume when the density becomes as high as 1.3 \cite{haa95,HHB95}. At the intermediate temperature $T=1.0$ and $\Pi_A=10$ the crystalline order is stabilized by the tilt order. For $\Pi_A=30$ and $\Pi_A=100$ the surface layer is beginning to melt. At $T=2.0$ the tail group layer ($i=6$) is molten for all pressures but the layered structure is still intact in the lower layers. Note however that for $T=2.0$ and $\Pi_A=10$ the second layer ($i=1$) is spreading into the substrate surface ($z=0$) indicating a melting of the head group lattice. Defining the monolayer thickness from the largest (rightmost) inflection point along the density profile makes it possible to analyze the change in monolayer thickness as a function of temperature and pressure. The results are collected in Table \ref{thickness}. As expected, the thickness increases with increasing temperature and with increasing spreading pressure. \begin{table}\caption{\label{thickness}\small Monolayer thickness for different temperatures and pressures} \vspace*{24pt} \begin{tabular}{lccc} & $\Pi_A=10$ & $\Pi_A=30$ & $\Pi_A=100$ \\ \hline\hline $T=0.2$ & 3.7 & 4.0 & 4.1 \rule[-12pt]{0pt}{32pt} \\ \hline $T=1.0$ & 3.9 & 4.1 & 4.2 \rule[-12pt]{0pt}{32pt}\\ \hline $T=2.0$ & 3.9 & 4.2 & 4.3 \rule[-12pt]{0pt}{32pt} \end{tabular} \end{table} To analyze the tilt order we display in Figure \ref{voronoi} \marginpar{\fbox{\em Fig. \ref{voronoi}}} the Voronoi diagrams for the configurations shown above in Figure \ref{configs}.
In these figures, each chain molecule is shown as a dot with a short line attached to it. The dot represents the center of the head group, and the short line represents the projection of the head to tail vector into the substrate plane. In addition, the Voronoi cells are drawn around each head group to visualize the degree of crystallinity in the head group lattice. These figures show that at high pressures $\Pi_A=100$ the chain molecules are untilted at all temperatures, although defects and randomly oriented projections of small magnitude are visible at elevated temperatures. At $\Pi_A=100$ the head group lattice is crystalline at all temperatures. At low pressure and low temperature ($\Pi_A=10,T=0.2$) the figure shows tilting in agreement with the density profiles. The tilt is directed towards the nearest neighbour molecule. At $\Pi_A=10,T=1.0$, however, the average tilt direction is towards next nearest neighbours. Both of these tilt directions have been observed in experiment \cite{KD92}. At high temperatures ($\Pi_A=10,T=2.0$) both the tilt order and the crystalline ordering of the head groups are destroyed. The expectation value of the absolute tilt angle $\langle|\theta|\rangle$, defined as the angle between the end-end vector and the surface normal, is displayed over a wide pressure range in Figure \ref{tilt} \marginpar{\fbox{\em Fig. \ref{tilt}}} for the temperatures $T=0.2,1.0,2.0$. At all temperatures the tilt angle increases as the spreading pressure is lowered. However, such an increase does not necessarily indicate a larger collective tilt \cite{hil92c,hil93a,hil94f,hil94h}. An increase in tilt magnitude can also be caused by an increase in the fluctuations of the tilt angle. The two situations can be distinguished by measuring additional quantities. The projection $R_{xy}$ of the end-end vector ${\bf e}=(e_x,e_y,e_z)$ into the $xy$-plane is defined as \begin{equation} R_{xy}=\sqrt{\langle(\overline{e_x}^2+\overline{e_y}^2)\rangle} \end{equation} where $\overline{e_x}$ and $\overline{e_y}$ are the configuration averages of the $x$ and $y$ components of the end-end vector ${\bf e}$. The quantity $R_{xy}$ is independent of the tilt direction. Figure \ref{order}, which shows $R_{xy}$ over the same pressure \marginpar{\fbox{\em Fig. \ref{order}}} range as the tilt angle, indicates that the jump in $\langle|\theta|\rangle$ at low pressures has different origins for $T=0.2$ and $T=2.0$. For $T=0.2$ the simultaneous increase in $R_{xy}$ indicates a tilting transition at around $\Pi_A\approx 30$. The smooth behaviour of $R_{xy}$ for $T=2.0$ indicates that the jump in $\langle|\theta|\rangle$ at this temperature is not caused by tilting but by melting. This interpretation is also suggested by the analysis of the configurations and density profiles shown in Figures \ref{configs}--\ref{voronoi}. It is further corroborated by measuring the orientational correlations between neighbouring chains. The orientational correlations are measured through the quantity \begin{equation} K_{NN}=\left\langle \frac{1}{6}\sum\frac{1}{2}\left(3\cos^2(\theta_{NN})-1\right) \right\rangle \end{equation} where $\theta_{NN}$ is the angle between the end-end vectors of two nearest neighbour chains, and the sum extends over all nearest neighbour chains of a given chain. The nearest neighbour chains are defined as those six chains whose head groups have the smallest distances from the given chain. Because of steric hindrances, the correlation $K_{NN}$ is usually very strong.
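The order parameters $\langle|\theta|\rangle$, $R_{xy}$ and $K_{NN}$ defined above can be evaluated directly from the end-end vectors of the chains. The following Python/NumPy fragment is a minimal sketch of these definitions, not the analysis code actually used; the arrays \texttt{ee} (end-end vectors of one configuration), \texttt{ee\_snapshots} (end-end vectors of all stored configurations) and the neighbour list \texttt{neighbours} are assumed to be provided by the simulation output, and the configuration average $\overline{e_{x,y}}$ is interpreted here as the average over the chains of a single configuration.
\begin{verbatim}
import numpy as np

def tilt_angle(ee):
    """Mean |theta| in degrees: angle between end-end vector and surface normal."""
    cos_t = ee[:, 2] / np.linalg.norm(ee, axis=1)
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))).mean()

def R_xy(ee_snapshots):
    """R_xy = sqrt(< ebar_x^2 + ebar_y^2 >) for an array of shape (n_snapshots, N, 3)."""
    ex = ee_snapshots[:, :, 0].mean(axis=1)   # average over the chains (overline)
    ey = ee_snapshots[:, :, 1].mean(axis=1)
    return np.sqrt((ex**2 + ey**2).mean())    # Monte Carlo average (angle brackets)

def K_NN(ee, neighbours):
    """Nearest-neighbour orientational correlation (second Legendre polynomial);
    neighbours[i] lists the six closest head groups of chain i."""
    u = ee / np.linalg.norm(ee, axis=1)[:, None]
    k = 0.0
    for i, nbrs in enumerate(neighbours):
        cos2 = (u[nbrs] @ u[i])**2
        k += 0.5 * (3.0 * cos2 - 1.0).mean()
    return k / len(neighbours)
\end{verbatim}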
Figure \ref{correl} shows the pressure dependence of $K_{NN}$ \marginpar{\fbox{\em Fig. \ref{correl}}} at $T=2.0$ which exhibits a sharp drop at low pressures. The same curves for $T=0.2$ and $T=1.0$ are indistinguishable from the line at $K_{NN}=1$. The sharp drop in Figure \ref{correl} indicates the presence of a melting transition as suggested by the jump in $\langle|\theta|\rangle$ at the same pressures. The melting transition at $T=2.0$ and low pressures appears also when plotting the specific area $L_xL_y/N$ ($N=144$) or the ratio of sidelengths $L_x/L_y$ as a function of pressure as shown in Figure \ref{distort}. \marginpar{\fbox{\em Fig. \ref{distort}}} The area per molecule shows a steady increase from values around 0.72 to 0.93 and then a sudden jump to a value close to 1.2. At low temperatures $T=0.2$ the specific area shows a sudden increase at the critical pressure $\Pi_A\approx 30$ for the tilting transition. This increase is associated with a similar increase in the ratio $L_x/L_y$ reflecting an orthorhombic distortion of the head group lattice during the tilting transition. At high pressures the ratio $L_x/L_y$ approaches the value $2/\sqrt{3}$ for the hexagonal lattice independent of temperature. \section{Discussion} Our present simulation results combined with previous simulations at constant area \cite{hil94f,hil94h,haa95,HHB95} indicate that the idealized coarse grained model has at least four distinct phases: an ordered phase without tilt, two ordered phases with tilt towards nearest and next nearest neighbours respectively, and a highly disordered fluidized phase. At sufficiently high pressures the chains organize into a hexagonally ordered condensed phase without tilt. This phase persists throughout the whole investigated temperature range, and it can be understood as a close packing configuration. As the pressure is increased the individual monomers begin to move away from the minima in the bond angle potentials, and this leads to stretching and contraction of the chains along their molecular axis resulting in the visible periodic modulation of the free surface. The modulation of the surface is expected to depend on the relative strength of the bond length potential as compared to the Lennard-Jones and the bond angle potential. At lower pressures the chains tilt either in the direction of the nearest neighbour chain, or in the direction of the next nearest neighbour chain. The tilting may be understood as a result of the Lennard-Jones attraction between neighbouring chains. The tilt direction is the result of a complicated interplay between packing, energetic and entropic effects. At spreading pressure $\Pi_A=10$ and temperature $T=0.2$ we observe nearest neighbour tilt combined with a periodically modulated stretching of the chains along their axis. This indicates that packing effects are dominant for this pressure. At higher temperatures, $T=1.0$, the tilt direction changes into the energetically favoured next nearest neighbour direction. A complete understanding of these effects will require not only more simulations at intermediate pressures and temperatures but in addition a systematic exploration of the space of interaction parameters which is beyond the scope of the present study. Finally at still higher temperatures the individual chain conformation and the head group lattice become disordered. The system enters a fluidized phase in which more and more configurations become accessible as the temperature is increased. 
A similar sequence of phases has also been observed in simulations at constant area when varying temperature and density \cite{hil94f,hil94h,haa95,HHB95}. In contrast to those simulations, the phases appear here in a much narrower temperature range. In particular, the fluidized phase already appears at much lower temperatures than in simulations at constant area. If the rough estimate $\sigma\approx 5$~\AA\ from Section \ref{relation} is used to map the model to experiment, then the area per molecule increases from values around 23~\AA$^2$/molecule to 30~\AA$^2$/molecule as seen in Figure \ref{distort}. This agrees roughly with the experimental values for the transition from liquid condensed to expanded phases, and suggests identifying the fluidized phase of our model with the liquid expanded phase in Langmuir layers. Because of the wide range of pressures studied here, the pressure resolution is poor, and hence our results do not allow us to identify the order of this transition from the present study. Finite size scaling analyses with higher pressure and temperature resolution are needed to conclusively identify the order of this transition. Our simulation at constant pressure using an algorithm compatible with the symmetry breaking allows us to answer the questions posed in the introduction concerning the restructuring of the head group lattice during tilting. Figure \ref{distort} demonstrates the existence of an orthorhombic distortion of the head group lattice during the tilting transition. The magnitude of this effect may also be estimated from Figure \ref{distort}. This distortion is suppressed in simulations at constant area because the simulation box is rigid in that case. A second effect of the rigid simulation box in constant area simulations with periodic boundary conditions is the appearance of negative pressures at low densities and temperatures \cite{haa95}. The negative pressure arises from the attractive part of the Lennard-Jones interactions and complicates the analysis of simulations at constant area. Our constant pressure simulations presented here are free from such complications. In summary, we find that our coarse-grained model exhibits untilted, tilted and fluidized phases similar to those observed in experiment. The model reproduces the positional ordering, bond orientational ordering and tilt ordering similar to those observed in the condensed phases of the experiment \cite{BKP91,KBBPMAK91,KD92}. It does not allow for herringbone ordering, however, because of the cylindrical symmetry of the chains. Our simulations at constant pressure show for the first time the experimentally known orthorhombic distortion of the head group lattice during tilting transitions in the liquid condensed phases. We have also shown that the choice of ensemble (constant pressure versus constant area) plays an important role in small-scale simulations as evidenced by the large difference in transition temperatures for the appearance of the fluidized phase. This will become particularly important when trying to make quantitative predictions. Of course it is clear that an idealized model such as the one studied in this paper can only be a small step in understanding the complexity of Langmuir monolayers, and improvements of the model are desirable in several directions. In particular, the important role played by the head groups and their interactions is inadequately represented in our model. Similarly the cylindrical symmetry of the chains is unrealistic, as mentioned previously.
Improving the model and simulation technique step by step, and studying the intermediate idealized models, it will ultimately become possible to understand quantitatively the full complexity of the phase structure observed in the experiment. \newpage ACKNOWLEDGEMENT: We are grateful to Prof.Dr. K. Binder for discussions, and we thank the Deutsche Forschungsgemeinschaft through its Graduiertenkolleg ``Physik und Chemie supramolekularer Systeme'' (F.M.H.) and Norges Forskningsrad (R.H.) for financial support. \begin{figure}[p] \caption{ Snapshots of equilibrated configurations containing 144 molecules at various temperatures and spreading pressures corresponding to a) $T=0.2,\Pi_A=10$; b) $T=0.2,\Pi_A=100$; c) $T=1.0,\Pi_A=10$; d) $T=1.0,\Pi_A=100$; e) $T=2.0,\Pi_A=10$; f) $T=2.0,\Pi_A=100$. } \label{configs} \vspace*{24pt} \end{figure} \begin{figure}[p] \caption{ Total monomer density profiles as a function of the distance $z$ from the substrate. The densities are normalized to 6. The head group monomer ($i=0$) at $z=0$ is not included in the plot. } \label{dichte} \vspace*{24pt} \end{figure} \begin{figure}[p] \caption{ Voronoi diagrams of equilibrated configurations shown in Figure \protect\ref{configs} at various temperatures spreading pressures corresponding to a) $T=0.2,\Pi_A=10$; b) $T=0.2,\Pi_A=100$; c) $T=1.0,\Pi_A=10$; d) $T=1.0,\Pi_A=100$; e) $T=2.0,\Pi_A=10$; f) $T=2.0,\Pi_A=100$. The corresponding box dimensions are $L_x(T=0.2,\Pi_A=10)=11.87$, $L_y(T=0.2,\Pi_A=10)=9.82$, $L_x(T=0.2,\Pi_A=100)=10.92$, $L_y(T=0.2,\Pi_A=100)=9.45$, $L_x(T=1.0,\Pi_A=10)=11.58$, $L_y(T=1.0,\Pi_A=10)=10.21$, $L_x(T=1.0,\Pi_A=100)=11.15$, $L_y(T=1.0,\Pi_A=100)=9.63$, $L_x(T=2.0,\Pi_A=10)=14.49$, $L_y(T=2.0,\Pi_A=10)=11.61$, $L_x(T=2.0,\Pi_A=100)=11.38$ and $L_y(T=2.0,\Pi_A=100)=9.93$. } \label{voronoi} \vspace*{24pt} \end{figure} \begin{figure}[p] \caption{ Average tilt angle $\langle|\theta|\rangle$ as function of spreading pressure for temperatures $T=0.2,1.0,2.0$ } \label{tilt} \vspace*{24pt} \end{figure} \begin{figure}[p] \caption{ Averaged projection of the end-end vector $R_{xy}$ into the substrate plane as function of spreading pressure for temperatures $T=0.2,1.0,2.0$ } \label{order} \vspace*{24pt} \end{figure} \begin{figure}[p] \caption{ Orientational correlation between neighbouring chains as function of spreading pressure for $T=2.0$ } \label{correl} \vspace*{24pt} \end{figure} \begin{figure}[p] \caption{ Averaged specific area $L_xL_y/N$ and sidelength ratio $L_x/L_y$ as function of spreading pressure for $T=0.2,1.0,2.0$ } \label{distort} \end{figure}
\section{Introduction}\label{sec:intro} An interesting extension of the standard model is based on the gauge group $SU(3)_{c}\times SU(3)_{L}\times U(1)$ ($331$). In the original, minimal version of the model\cite{frampton,pisano}, the leptons are put into antitriplets of $SU(3)_{L}$, two generations of quarks are put into triplets and the third generation of quarks is put into an antitriplet. With this structure, the anomalies will all cancel if and only if the number of generations is a multiple of three. The model has an automatic Peccei-Quinn symmetry\cite{pq,dias}, and the fact that one quark family has different quantum numbers than the other two may explain the heavy top quark mass\cite{fram}. An unusual feature of this model is that $\sin^{2}\theta_{W}$ must be less than $1/4$. Since it is an increasing function of $q^{2}$, the scale of $SU(3)_{L}$ breaking must be relatively low, and cannot arbitrarily be moved up to a high scale. This minimal model contains doubly charged gauge fields (bileptons) as well as isosinglet quarks with exotic charges. The phenomenology of these models is very rich and has been the subject of extensive study\cite{lots}. A completely different class of models was proposed in Refs. \cite{montero,foot}, in which the embedding of the charge operator into $SU(3)_{L}$ is different. In these models, there are no exotic charges for the quarks, and the gauge bosons are all either neutral or singly-charged. In all of these models, one still treats the lepton generations identically, and treats one quark generation differently than the other two. A comprehensive review of the gauge, fermion and scalar sectors of all of these models can be found in Refs. \cite{ponce1,ponce2}. In Ref. \cite{ponce1}, a detailed analysis of the anomalies in 331 models showed that there are two anomaly free sets of fermion representations in which the lepton generations are {\bf all} treated differently. The phenomenology of these models has never been studied in the literature. With leptons in different representations, one might expect lepton-flavor changing neutral processes. In this paper, we discuss the phenomenology of these two models. In Section 2, the various 331 models are presented, as well as the possible representations for fermions in these models. A set of anomaly free models will be found, and it will be noted that two of them have very different representations for the lepton families. In Section 3, we will consider the scalar sector of these ``unique lepton generation'' models, and in Section 4 will present the mass matrices for the leptons, look at the possible variations that can occur, and find the Yukawa couplings to the scalars. The phenomenology of lepton-number violating $\mu$ and $\tau$ decays will be discussed in Section 5, and for Higgs decays in Section 6. Our most interesting result will be that many of these models have fairly large branching ratios for the Higgs boson decaying into a muon and a tau, and in one model it may be the dominant decay. In Section 7, we will examine lepton-number violation due to gauge boson exchange, and the resulting bounds on the gauge boson masses. Finally, in Section 8 we present our conclusions. \section{Models} As discussed in Ref. 
\cite{ponce1}, if one assumes that the isospin $SU(2)_{L}$ of the standard model is entirely embedded in $SU(3)_{L}$, then all models can be characterized by the charge operator \begin{equation} Q = T_{3L} + {2\over \sqrt{3}}bT_{8L} + X I_{3} \end{equation} where $I_{3}$ is the unit matrix and $T_{iL}=\lambda_{iL}/2$, where the $\lambda_{iL}$ are the Gell-Mann matrices. $X$ is fixed by anomaly cancellation and the coefficient can be absorbed in the hypercharge definition. Different models are characterized by different values of $b$. In the original Frampton, Pisano, and Pleitez~\cite{frampton, pisano} model, $b=3/2$, leading to doubly-charged gauge bosons and fermions with exotic charges. The fermion representations, with the $SU(3)\times U(1)$ quantum numbers, are \begin{equation} L_{i}=\pmatrix{e_{i}\cr \nu_{i}\cr e^{c}_{i}\cr}: (3^{*},0) \end{equation} for the leptons ($i=1,2,3$) and \begin{equation} Q_{1,2}=\pmatrix{u\cr d\cr D\cr}, \pmatrix{c\cr s\cr S\cr}: (3,-1/3) \end{equation} \begin{equation} Q_{3}=\pmatrix{b\cr t\cr T\cr}: (3^{*},2/3)\end{equation} with all of the quark conjugate fields being isosinglets. $D,S,T$ are quarks with charges given by $-4/3, -4/3, 5/3$. A simple variant of this model\cite{tully} changes the lepton structure by replacing the $e^{c}$ with a heavy lepton $E^{+}$ and adding $e^{c}$ and $E^{-}$ as singlets. If one wishes to avoid exotic electric charges, one must choose $b=1/2$. In that case, the fermion structure is very different. Following \cite{ponce1}, we can find six sets of fermions, which contain the antiparticles of all charged particles. The first four are leptons and the last two are quarks. Noting $e_{i}, d_{i}, u_{i}$ as standard model fermions, and $E_{i},D_{i},U_{i}$ as exotic fermions, the four sets of leptons are \begin{equation} L_{1} = \pmatrix{\nu_{i}\cr e_{i}^{-}\cr E_{i}^{-}\cr}; e_{i}^{+}; E_{i}^{+} \label{eq:L1} \end{equation} with $SU(3)\times U(1)$ quantum numbers $(3,-2/3),(1,1),(1,1)$, \begin{equation} L_{2} = \pmatrix{e_{i}^{-}\cr \nu_{i}\cr N_{i}^{0}\cr}; e_{i}^{+} \label{eq:L2} \end{equation} with $SU(3)\times U(1)$ quantum numbers $(3^{*},-1/3),(1,1)$, and $N^{0}_{i}$ is a heavy neutrino, \begin{equation} L_{3} = \pmatrix{e_{i}^{-}\cr \nu_{i}\cr N_{1}^{0}\cr}; \pmatrix{E_{i}^{-}\cr N_{2}^{0}\cr N_{3}^{0}\cr};\pmatrix{N_{4}^{0}\cr E_{i}^{+}\cr e_{i}^{+}\cr} \label{eq:L3} \end{equation} with $SU(3)\times U(1)$ quantum numbers $(3^{*},-1/3),(3^{*},-1/3),(3^{*},2/3)$, and there are four heavy neutrino states (some may be conjugates of another), and \begin{equation} L_{4} = \pmatrix{\nu_{i}\cr e^{-}_{i}\cr E_{1i}^{-}\cr}; \pmatrix{E_{2i}^{+}\cr N_{1}^{0}\cr N_{2}^{0}\cr};\pmatrix{N_{3}^{0}\cr E_{2i}^{-}\cr E_{3i}^{-}\cr};e_{i}^{+};E_{1i}^{+};E_{3i}^{+} \label{eq:L4} \end{equation} with $SU(3)\times U(1)$ quantum numbers $(3,-2/3),(3,1/3),(3,-2/3),(1,1),(1,1),(1,1)$. The two sets of quarks are \begin{equation} Q_{1}=\pmatrix{d_{i}\cr u_{i}\cr U_{i}\cr};d^{c}_{i};u^{c}_{i};U_{i}^{c} \end{equation} with $SU(3)\times U(1)$ quantum numbers $(3^{*},1/3),(1,1/3),(1,-2/3),(1,-2/3)$, and \begin{equation} Q_{2}=\pmatrix{u_{i}\cr d_{i}\cr D_{i}\cr};u^{c}_{i};d^{c}_{i};D_{i}^{c} \end{equation} with $SU(3)\times U(1)$ quantum numbers $(3,0),(1,-2/3),(1,1/3),(1,1/3)$. The anomalies for these six sets are\cite{ponce1} found in Table 1. 
\begin{table}[h] \begin{center} \begin{tabular} {||l||c|c|c|c|c|c||} \hline \hline Anomalies & $L_1$ & $L_2$ & $L_3$ & $L_4$ & $Q_1$ & $Q_2$ \\ \hline \hline $[SU(3)_c]^2U(1)_X$ & 0 & 0 & 0 & 0 & 0 & 0 \\ $[SU(3)_L]^2U(1)_X$ & -2/3 & -1/3 & 0 & -1 & 1 & 0 \\ $[grav]^2U(1)_X$ & 0 & 0 & 0 & 0 & 0 & 0 \\ $[U(1)_X]^3$ & 10/9 & 8/9 & 6/9 & 12/9 & -12/9 & -6/9 \\ \hline \hline \end{tabular} \end{center} \caption{Anomalies for the Fermion Families} \end{table} With this table, anomaly-free models (without exotic charges) can be constructed. As noted in Ref. \cite{ponce1}, there are two one-family and eight three family models that are anomaly-free. Of the eight three-family models, four treat the lepton generations identically, two treat two of the lepton generations identically and in two, the lepton generations are all different. It is the latter two that will be the subject of this study. Note that one can easily see from Table 1 that there are only two one-family models. The first consists of $Q_{2}+L_{3}$. This structure is perhaps most familiar to grand unified model builders, since the $27$ fields are contained in the $27$-dimensional fundamental representation of $E_{6}$. In addition to analyses of $E_{6}$ models, an analysis of this model, in the context of $331$ models, can be found in Refs. \cite{ponce1,ponce3}. The second one-family structure is $Q_{1}+L_{4}$. This model is related to $SU(6)\times U(1)$ unified models, and is analyzed in Ref. \cite{ponce4}. Note that both of these one-family models are simply triplicated to become three-family models. There are two other three-family models in which all of the leptons are treated the same way (but now the quark generations are treated differently). These were the first models analyzed once it was recognized that $331$ models without exotic charges (i.e. with $b=1/2$) could be constructed. The first is $3L_{2}+Q_{1}+2Q_{2}$. As in the original $331$ models, one generation of quarks is treated differently than the other two, and thus three families are needed to cancel anomalies. These were analyzed in Ref. \cite{foot}. The second such model is $3L_{1}+2Q_{1}+Q_{2}$, which also requires three families for anomaly cancellation. This model has been analyzed in Ref. \cite{ozer}. Two models involve simple replication of the two one-family models, but take two copies of the first one-family model and one copy of the second, or vice-versa, i.e. $2(Q_{2}+L_{3}) + (Q_{1}+L_{4})$ and $2(Q_{1}+L_{4}) + (Q_{2}+L_{3})$. Since the lepton generations are not all different, we will not consider these models further, although they have not, to our knowledge, been studied. The two models of interest treat all of the lepton generations differently. They are Model A: $L_{1}+L_{2}+L_{3}+Q_{1}+2Q_{2}$ and Model B: $L_{1}+L_{2}+L_{4}+2Q_{1}+Q_{2}$. Note that each model has two ``simple'' lepton families ($L_{1}$ and $L_{2}$ above), and one more complicated family. We now analyze the phenomenology of these two models. Note that one cannot determine which ($e,\mu,\tau$) lepton belongs to which representation, and so we will consider all six possible permutations for each model. \section{The Scalar Sector} The scalar sector of 331 models has been extensively studied\cite{tully,diaz}. Here, one can see a substantial advantage to $b=1/2$ models. In the original $b=3/2$ models, the minimal Higgs sector consists of three $SU(3)_{L}$ triplets plus an $SU(3)_{L}$ sextet. In the $b=1/2$ models, three triplets are sufficient. 
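These anomaly-free combinations are easily verified against Table 1. The following Python fragment is a simple bookkeeping sketch (exact fractions are used so that the cancellations are exact); the colour and gravitational anomalies already vanish set by set and are therefore omitted.
\begin{verbatim}
from fractions import Fraction as F

# [SU(3)_L]^2 U(1)_X  and  [U(1)_X]^3  anomalies per fermion set (Table 1)
A2 = {'L1': F(-2, 3), 'L2': F(-1, 3), 'L3': F(0), 'L4': F(-1),
      'Q1': F(1), 'Q2': F(0)}
A3 = {'L1': F(10, 9), 'L2': F(8, 9), 'L3': F(6, 9), 'L4': F(12, 9),
      'Q1': F(-12, 9), 'Q2': F(-6, 9)}

models = {
    'one family (E6-like)':    ['Q2', 'L3'],
    'one family (SU(6)-like)': ['Q1', 'L4'],
    'Model A':                 ['L1', 'L2', 'L3', 'Q1', 'Q2', 'Q2'],
    'Model B':                 ['L1', 'L2', 'L4', 'Q1', 'Q1', 'Q2'],
}

for name, content in models.items():
    s2 = sum(A2[f] for f in content)
    s3 = sum(A3[f] for f in content)
    status = 'anomaly free' if s2 == 0 and s3 == 0 else 'NOT anomaly free'
    print(f'{name}: {status}')
\end{verbatim}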
One triplet breaks the $SU(3)_{L}\times U(1)$ gauge symmetry down to the standard model, and the other two are necessary to break the $SU(2)_{L}$ symmetry and to give the fermions mass. A very comprehensive analysis of the scalar sector in all previously considered models can be found in Ref. \cite{diaz}. Although the models we are considering are $b=1/2$ models, it is not a priori obvious that three triplets will suffice to give the leptons mass, since the different families have very different structure. Our Model A has five charged leptons (the $e,\mu,\tau$ and two exotic leptons), and Model B has seven charged leptons (with four exotic leptons). Fortunately, as will be seen in Section 4, three triplets will suffice to give the charged leptons mass. We will not consider neutrino masses in this study since the number of fields and the various options (which exotic neutrinos correspond to which right-handed neutrinos, for example) will rule out any substantial predictive power. The first stage of breaking from $SU(3)_{L}\times U(1)$ to $SU(2)\times U(1)$ is carried out by a triplet Higgs, $\Phi_{A}$, which is a $(3,1/3)$ under the $SU(3)_{L}\times U(1)$ group, and its vev is given by \begin{equation} \langle{\Phi_{A}}\rangle = \pmatrix{0\cr 0\cr V\cr} \end{equation} Note that the second component of the triplet is neutral, and could also get a vev, but that can be removed by a gauge transformation. Five of the gauge bosons acquire masses of $O(V)$, while the remaining four are massless at this stage. One can easily see that this vev will give masses of $O(V)$ to the $U$ and $D$ exotic quarks, and in previously considered models, to the $E$ exotic leptons as well. These masses are phenomenologically constrained to be substantially larger than the electroweak scale. The second stage of symmetry breaking requires two Higgs triplets, $\Phi_{1}$ and $\Phi_{2}$ with quantum numbers $(3,-2/3)$ and $(3,1/3)$ respectively. If one only wished to break the gauge symmetry, then one triplet would suffice. However, giving mass to the fermions requires a second doublet. This is not too surprising, since the quark masses in the standard model necessitate a Higgs doublet $H$ and $i\tau_{2} H^{*}$ to give masses to the down and up quarks, respectively. In $SU(2)$ $\overline{2}=2$, but this does not apply in $SU(3)$. Thus the low-energy theory is a two-doublet model. The vevs of these doublets are \begin{equation} \langle{\Phi_{1}}\rangle=\pmatrix{v_{1}/\sqrt{2}\cr 0\cr 0\cr};\qquad \langle{\Phi_{2}}\rangle=\pmatrix{0\cr v_{2}/\sqrt{2}\cr 0\cr} \end{equation} where $v_{1}^{2}+v^{2}_{2} = (246\ {\rm GeV})^{2}$. Note that the third component of $\Phi_{2}$ could acquire a nonzero vev, but this will not involve $SU(2)$ breaking and will be irrelevant. \section{Yukawa Couplings} With the fermion representations discussed in Section 2 and the scalar representations discussed in Section 3, we can now write down the Yukawa couplings and mass matrices for the charged leptons. Let us first write down the fermion representations more explicitly. 
For Model A, the fields, followed by their $SU(3)_{L}\times U(1)$ quantum numbers, are (with the subscript $L$ understood) \begin{equation} \psi_{i}=\pmatrix{\nu_{i}\cr e_{i}\cr E_{i}\cr}, (3,-2/3);\quad e^{c}_{i}, (1,1); \quad E^{c}_{i}, (1,1) \end{equation} \vskip 0.5cm \begin{equation} \psi_{j}=\pmatrix{e_{j}\cr \nu_{j}\cr N^{o}_{j}\cr}, (\overline{3},-1/3);\quad e^{c}_{j}, (1,1) \end{equation} \vskip 0.5cm \begin{equation} \psi_{k}=\pmatrix{e_{k}\cr \nu_{k}\cr N^{o}_{1k}\cr}, (\overline{3},-1/3); \quad \psi^{\prime}_{k}=\pmatrix{E_{k}\cr N^{o}_{2k}\cr N^{o}_{3k}\cr}, (\overline{3},-1/3);\quad \psi^{\prime\prime}_{k}=\pmatrix{N^{o}_{4k}\cr E^{c}_{k}\cr e^{c}_{k}\cr}, (\overline{3},2/3) \end{equation} where the $N^{o}$ could be a conjugate of either the $\nu$ or another $N^{o}$, and the generation labels $i,j$ and $k$ are all distinct. Note that the model contains five charged leptons: the standard three plus two exotic leptons. For Model B, the fields are \begin{equation} \psi_{i}=\pmatrix{\nu_{i}\cr e_{i}\cr E_{i}\cr}, (3,-2/3);\quad e^{c}_{i}, (1,1); \quad E^{c}_{i}, (1,1) \end{equation} \vskip 0.5cm \begin{equation} \psi_{j}=\pmatrix{e_{j}\cr \nu_{j}\cr N^{o}_{j}\cr}, (\overline{3},-1/3);\quad e^{c}_{j}, (1,1) \end{equation} \vskip 0.5cm \begin{equation} \psi_{k}=\pmatrix{\nu_{k}\cr e_{k}\cr E_{1k}\cr}, ({3},-2/3); \quad \psi^{\prime}_{k}=\pmatrix{E^{c}_{2k}\cr N^{o}_{1k}\cr N^{o}_{2k}\cr}, ({3},1/3);\quad \psi^{\prime\prime}_{k}=\pmatrix{N^{o}_{3k}\cr E_{2k}\cr E_{3k}\cr}, ({3},-2/3); \quad e_{i}^{+};\quad E^{c}_{1k}; \quad E^{c}_{3k} \end{equation} where the last three fields are singlets. Note that this model has seven charged leptons: the standard three plus four exotics. From these representations, and the scalar fields (with their vevs) in Section 3, we can write down the mass matrices for the charged leptons. The mass matrix for Model A is $5\times 5$ and for Model B is $7\times 7$. From these matrices, the Yukawa couplings to each scalar field can be trivially obtained by replacing the vev with the field. The Yukawa couplings and full mass matrices are given in Appendix A. If one takes the limit in which $v_{1}=v_{2}=0$, then each of these matrices has three zero eigenvalues, indicating that the exotic leptons all get masses of $O(V)$. Since $V$ must be large, we can take the limit as $V\rightarrow\infty$, and find the effective mass matrices for the three standard model leptons. Note that we do not know, a priori, which of the leptons is in the first, second, or third rows, so each model will have six permutations. For Model A, we find that the mass matrix is of the form \begin{equation} M_{A}={1\over\sqrt{2}}\pmatrix{h_{1}v_{2}&h_{2}v_{2}&0\cr h_{3}v_{1}&h_{4}v_{1}&h_{5}v_{2}\cr h_{6}v_{1}&h_{7}v_{1}&h_{8}v_{2}\cr} \end{equation} where the $h_{i}$ are constants. The Yukawa coupling matrices are then \begin{equation} \pmatrix{0&0&0\cr h_{3}&h_{4}&0\cr h_{6}&h_{7}&0\cr}\Phi_{1}+\pmatrix{h_{1}&h_{2}&0\cr 0&0&h_{5}\cr 0&0&h_{8}\cr}\Phi_{2}. \end{equation} For Model B, the mass matrix is of the form \begin{equation} M_{B}={1\over\sqrt{2}}\pmatrix{h'_{1}v_{2}&h'_{2}v_{2}&h'_{3}v_{2}\cr h'_{4}v_{1}&h'_{5}v_{1}&h'_{6}v_{1}\cr h'_{7}v_{2}&h'_{8}v_{2}&h'_{9}v_{2}\cr} \end{equation} and the Yukawa coupling matrices are \begin{equation} \pmatrix{0&0&0\cr h'_{4}&h'_{5}&h'_{6}\cr 0&0&0\cr}\Phi_{1}+\pmatrix{h'_{1}&h'_{2}&h'_{3}\cr 0&0&0 \cr h'_{7}&h'_{8}&h'_{9}\cr}\Phi_{2}. \end{equation} These Yukawa coupling matrices are certainly unusual. 
Note that diagonalizing the mass matrices will {\it not} diagonalize the Yukawa coupling matrices, and thus one will have lepton-flavor-changing neutral currents in the Higgs sector. This is just the Glashow-Weinberg theorem\cite{gw}. To determine the size of the lepton-flavor violation, one simply must diagonalize the mass matrix and read off the Yukawa coupling matrices in the diagonalized basis. Unfortunately, such a procedure will not be useful. The matrices have far too many free parameters. Worse, in general fine-tuning will be needed. We define ``fine-tuning'' as a situation in which several terms add together to give a term that is much smaller than any individual term. In general, fine-tuning will be needed to give the electron a small mass\footnote{There are trivial exceptions. For example, in $M_{A}$, if $h_{1}$ is very small, and all off-diagonal terms vanish, then there is no fine-tuning (and no flavor-changing neutral currents).}, and it is unclear how this fine-tuning will affect the Yukawa coupling matrices. In order to avoid fine-tuning, and to give the matrices a non-trivial structure, we will assume that the matrices will have a Fritzsch structure\cite{fritzsch}. The original Fritzsch matrix was of the form \begin{equation} \pmatrix{0&A&0\cr A&0&B\cr 0&B&C\cr} \end{equation} where $C \sim m_{\tau}$, $B \sim \sqrt{m_{\mu}m_{\tau}}$ and $A \sim \sqrt{m_{e}m_{\mu}}$. This matrix has the correct eigenvalues, is parameter-free and does not have fine-tuning. It was shown in Ref. \cite{chengsher} that a wide variety of matrices, such as those with nonzero values in the $1,1$ and $2,2$ elements, will (if one requires that there be no fine-tuning) yield the same flavor-changing-neutral structure as the Fritzsch structure. We expect that the general case will give the same qualitative results. Since the matrices we are considering are not symmetric, we will write the desired mass matrix as \begin{equation} \pmatrix{0 & a\sqrt{m_{e}m_{\mu}}& 0 \cr b\sqrt{m_{e}m_{\mu}} & 0 & c\sqrt{m_{\mu}m_{\tau}}\cr 0 & d\sqrt{m_{\mu}m_{\tau}} & e m_{\tau}\cr} \end{equation} where $a,b,c,d$ and $e$ are all of order $1$. In general, with multiple scalars, the individual Yukawa couplings would be of this form, with $\sum a=\sum b= \sum c = \sum d = \sum e = 1$. So, for a given model, and a given choice of permutations of $i,j$ and $k$, one compares this matrix with the mass matrices $M_{A}$ and $M_{B}$, and reads off the values of $a,b,c,d$ and $e$. Then the mass matrices are diagonalized, and the Yukawa coupling matrices in the diagonal basis are determined. It turns out that the procedure is only consistent for Model A if $j$ is the second generation, and thus we have a total of 4 Yukawa coupling matrices for Model A (two choices of $\Phi_{1}$ or $\Phi_{2}$ , and the choice between $i=1,k=3$ or $i=3,k=1$), and 12 Yukawa coupling matrices for Model B (two choices of $\Phi$ and six permutations of $i,j,k$). However, the results are simplified in Model B by the fact that if we permute the first and third indices, the Yukawa coupling matrices are identical, so there are only six different matrices. The Yukawa couplings are given in Table 2 for Model A and in Table 3 for Model B. We label Model A1 and A2 as corresponding to $(i,j,k) = (e,\mu,\tau)$ or $(\tau,\mu,e)$, respectively, and we label Model B1, B2 and B3 as corresponding to $(e,\mu,\tau), (e,\tau,\mu)$ or $(\mu,e,\tau)$, respectively. 
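The diagonalisation step just described is straightforward to carry out numerically. The sketch below (Python/NumPy, with the charged-lepton masses in GeV as the only inputs) builds a Fritzsch-type mass matrix with all order-one coefficients set to unity, extracts the biunitary transformation from a singular value decomposition, and rotates a Yukawa matrix into the mass-eigenstate basis, where the off-diagonal entries give the flavor-changing couplings. The matrix \texttt{Y\_example} is a generic placeholder used only to illustrate the rotation; it is not one of the model matrices of Tables 2 and 3.
\begin{verbatim}
import numpy as np

me, mmu, mtau = 0.000511, 0.1057, 1.777      # charged-lepton masses in GeV
v = 175.0                                    # sqrt(v1^2+v2^2)/sqrt(2) in GeV

# Fritzsch-type mass matrix with the order-one coefficients set to 1
M = np.array([[0.0,             np.sqrt(me*mmu),   0.0],
              [np.sqrt(me*mmu), 0.0,               np.sqrt(mmu*mtau)],
              [0.0,             np.sqrt(mmu*mtau), mtau]])

# biunitary diagonalisation  M = U_L diag(m) U_R^dagger  via the SVD
U_L, m_diag, U_Rh = np.linalg.svd(M)
print("mass eigenvalues:", np.sort(m_diag))  # close to (m_e, m_mu, m_tau),
                                             # up to the order-one freedom of the ansatz

def to_mass_basis(Y):
    """Rotate a Yukawa matrix (written in the same basis as M) to the mass basis."""
    return U_L.conj().T @ Y @ U_Rh.conj().T

# placeholder Yukawa matrix with the same texture zeros, for illustration only
Y_example = np.array([[0.0, np.sqrt(me*mmu)/v, 0.0],
                      [0.0, 0.0,               np.sqrt(mmu*mtau)/v],
                      [0.0, 0.0,               0.0]])
print("placeholder Yukawa in the mass basis:\n", to_mass_basis(Y_example))
\end{verbatim}
The off-diagonal entries of the rotated matrices are the flavor-changing couplings used in the decay-rate estimates of the next sections.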
\begin{table}[h] \begin{tabular}{|c|c|c|} \hline scalar&A1&A2 \\ \hline $\Phi_{1}$ & $\pmatrix{0 & -\sqrt{m_{e}m_{\mu}}& -m_{\mu}\sqrt{m_{e}\over m_{\tau}}\cr 0 & -m_{\mu} & -m_{\mu}\sqrt{m_{\mu}\over m_{\tau}}\cr \sqrt{m_{e}m_{\tau}} & \sqrt{m_{\mu}m_{\tau}} & m_{\mu}}$ & $\pmatrix{0 & 0 & \sqrt{m_{e}m_{\tau}}\cr -\sqrt{m_{e}m_{\mu}} & -m_{\mu} & \sqrt{m_{\mu}m_{\tau}}\cr -m_{\mu}\sqrt{m_{e}\over m_{\tau}} & -m_{\mu}\sqrt{m_{\mu}\over m_{\tau}} & m_{\mu}}$ \\ \hline $\Phi_{2}$ & $\pmatrix{m_{e} & \sqrt{m_{e}m_{\mu}}& m_{\mu}\sqrt{m_{e}\over m_{\tau}}\cr 0 & 0 & 0\cr -\sqrt{m_{e}m_{\tau}} & -\sqrt{m_{\mu}m_{\tau}} & m_{\tau}+m_{\mu}}$ & $\pmatrix{m_{e} & 0 & -\sqrt{m_{e}m_{\tau}}\cr \sqrt{m_{e}m_{\mu}} & 0 &-\sqrt{m_{\mu}m_{\tau}}\cr m_{\mu}\sqrt{m_{e}\over m_{\tau}} & 0 & m_{\tau}+m_{\mu}}$ \\ \hline \end{tabular} \caption{Yukawa coupling matrices to $\Phi_{1}$ and $\Phi_{2}$ for Model A. All entries are to be divided by $\sqrt{v^{2}_{1}+v^{2}_{2}}/\sqrt{2}= 175 {\rm\ GeV}$. The specific models are discussed in the text.} \end{table} \begin{table}[ht] \begin{tabular}{|c|c|c|} \hline scalar&B1&B2\\ \hline $\Phi_{1}$ & $\pmatrix{0&-\sqrt{m_{e}m_{\mu}}&\sqrt{m_{e}m_{\tau}}\cr 0 & -m_{\mu} & \sqrt{m_{\mu}m_{\tau}}\cr 0 & -m_{\mu}\sqrt{m_{\mu}\over m_{\tau}} & m_{\mu}\cr}$ & $\pmatrix{0&0&-\sqrt{m_{e}m_{\tau}}\cr 0&0&-\sqrt{m_{\mu}m_{\tau}}\cr 0&0&m_{\tau}+m_{\mu}\cr}$ \\ \hline $\Phi_{2}$ & $\pmatrix{m_{e}& \sqrt{m_{e}m_{\mu}}&-\sqrt{m_{e}m_{\tau}}\cr 0&0&-\sqrt{m_{\mu}m_{\tau}}\cr 0&0& m_{\tau}+m_{\mu}\cr} $ & $\pmatrix{m_{e}&0&\sqrt{m_{e}m_{\tau}}\cr 0&-m_{\mu}& \sqrt{m_{\mu}m_{\tau}}\cr 0 & -m_{\mu}\sqrt{m_{\mu}\over m_{\tau}}&m_{\mu}\cr}$\\ \hline \end{tabular} \vskip 0.5cm \begin{tabular}{|c|c|} \hline scalar&B3 \\ \hline $\Phi_{1}$ & $\pmatrix{m_{e}&\sqrt{m_{e}m_{\mu}}&m_{\mu}\sqrt{m_{e}\over m_{\tau}}\cr 0&0&0\cr 0&0&0\cr}$ \\ \hline $\Phi_{2}$ &$\pmatrix{0&\sqrt{m_{e}m_{\mu}}&-m_{\mu} \sqrt{m_{e}\over m_{\tau}}\cr 0&-m_{\mu}& -m_{\mu}\sqrt{m_{\mu}\over m_{\tau}}\cr 0& -m_{\mu}\sqrt{m_{\mu}\over m_{\tau}}& m_{\tau}+2m_{\mu}\cr}$ \\ \hline \end{tabular} \caption{ Yukawa coupling matrices to $\Phi_{1}$ and $\Phi_{2}$ for Model B. All entries are to be divided by $\sqrt{v^{2}_{1}+v^{2}_{2}}/\sqrt{2}= 175 {\rm\ GeV}$. The specific models are discussed in the text.} \end{table} Note that we have tacitly assumed that the two Higgs triplets in the low-energy sector do not mix. This is for simplicity. One can easily find the couplings of one of the physical Higgs bosons by including an appropriate (and unknown) mixing angle. In our discussion of the phenomenology, this angle will play an important role, and it must be kept in mind. Note how unusual some of these Yukawa coupling matrices are. For example, in Model B3's coupling to $\Phi_{1}$, the Yukawa couplings to $\tau-\tau$, $\mu-\tau$ and $\mu-\mu$ all vanish, leading to an effectively leptophobic Higgs boson. We now turn to the lepton-flavor-changing phenomenology of these models. \section{Leptonic Flavor-changing Decays} In all of these models, there are Higgs-mediated lepton-flavor-changing neutral currents (FCNC) arising from the off-diagonal terms in the Yukawa coupling matrices. This will lead to $\mu$ and $\tau$ decays which violate lepton number. The leptonic decays of the $\tau^{-}$ are into $e^{-}e^{-}e^{+}$, $\mu^{-}\mu^{-}\mu^{+}$, $e^{-}e^{-}\mu^{+}$, $\mu^{-}\mu^{-}e^{+}$, $e^{-}\mu^{-}\mu^{+}$, $e^{-}\mu^{-}\mu^{+}$ and the $\mu$ decay is into $e^{-}e^{-}e^{+}$. 
The decay rate calculations are straightforward\cite{sheryuan, jairo}. Given the experimental upper bound on the decay rate for each of these processes, one can find a lower bound on the mass of the exchanged Higgs boson. The rate is inversely proportional to the Higgs mass to the fourth power. Examining all of the Yukawa coupling matrices in the previous section, we find that this lower bound is always less than $4.9$ GeV. Since the experimental lower bound is more than an order of magnitude higher, these bounds are not competitive. One can still have one-loop radiative decays. Again, the bounds from $\tau$ decays ($\tau\rightarrow e\gamma, \tau\rightarrow \mu\gamma$) do not give strong bounds. The strongest is from $\tau\rightarrow\mu\gamma$ in Models A1, A2, B1, B2 in which the first three involve coupling to $\Phi_{2}$ and the last to $\Phi_{1}$. However, even this lower bound is only $50$ GeV, and is marginally competitive with current experimental bounds. A much stronger bound comes from $\mu\rightarrow e\gamma$. Here a $\tau$ can be in the loop. The formula for the decay rate\cite{generalrate} is \begin{equation} \Gamma_{\mu\rightarrow e\gamma}= h^{2}_{\mu\tau}h^{2}_{e\tau}{\alpha m^{2}_{\tau}m_{\mu}^{3}\over 128\pi^{4}} \left[ {\ln(m_{h}/m_{\tau})\over m_{h}^{2}} \right]^{2} \end{equation} where the $h_{ij}$ are the Yukawa couplings, and $m_{h}$ is the scalar mass. This result does not change if the relevant scalar is a pseudoscalar. Plugging in, one finds a lower bound of $230$ GeV on the exchanged scalar mass for models A1, A2, B1 and B2, regardless of which scalar is used. However, for several reasons this bound is quite uncertain. First, we have a Fritzsch ansatz, and without that assumption the Yukawa couplings are only order-of-magnitude. Second, we have ignored mixing angles, which could also lower the Yukawa couplings substantially. Third, these models can have heavy leptons in the loop, and cancellations are possible. Thus, the numerical bound should be taken {\it cum grano salis}, but it is clear that $\mu\rightarrow e\gamma$ may be quite close to detection in these models. Note that Model B3 was not included in the above paragraph. In the coupling to $\Phi_{1}$, there is no bound coming from muon decay; in the coupling to $\Phi_{2}$, there is a bound of $7.3$ GeV on the Higgs mass. So the model is unconstrained by muon decay, and the Higgs bosons in this model could be very light. We now turn to lepton-number violation in Higgs decays. \section{Lepton-number violating Higgs Decays} We have a two-Higgs model in the low-energy sector. Here, mixing between the Higgs scalars (which will generically occur and depend on parameters of the scalar potential) can have a major effect on the branching ratios of Higgs bosons. For the moment, we will ignore these effects, but they are important and will be discussed shortly. In the conventional two-Higgs model, one Higgs doublet couples to the $Q=2/3$ quarks, and the other to the $Q=-1/3$ quarks and the charged leptons. The latter's primary decay into fermions is thus to $b\overline{b}$, with the $\tau^{+}\tau^{-}$ decay being a factor of ${3 m_b^2 \over m_\tau^2} \sim 25$ smaller. Of course, the primary decay mode could be $WW, WW^{*},ZZ $ or $ZZ^{*}$, depending on the mass of the Higgs. 
Here we will only look at the primary fermionic decays, which are relevant if the Higgs mass is not too much larger than its current lower bound (if it is larger, the fermionic decay branching ratios might be small, but certainly detectable at the LHC). The primary fermionic decay mode of the Higgs that couples to $Q=2/3$ fields would be into $t\overline{t}$ if kinematically accessible, and $c\overline{c}$ if not. It will not couple to the charged leptons. If the mixing angle is not too small, then the latter field's primary fermionic decay is also into $b\overline{b}$. In both models under consideration, one of the quark generations has a different structure than the other two. The unique generation is generally assumed to be the third generation, an assumption we concur with. If it is not, there will be flavor-changing effects in the kaon sector which will be phenomenologically problematic. Then, again ignoring mixing, the scalar that couples to $b\overline{b}$ will {\it not} couple to the charged leptons. The field coupling to the charged leptons will couple to the strange quark and to the top quark. If its mass is below $360$ GeV, then its primary fermionic decay is into the charged leptons and the strange quark\footnote{Actually, if it is between $270$ and $360$ GeV, then the three-body decay through a virtual top into $tbW$ will dominate.}. In this case, we can calculate the fermionic branching ratios for the five models under consideration, and show these in Table 4. \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|}\hline Model&$\mu\mu$&$\mu\tau$&$\tau\tau$&$s\overline{s}$\\ \hline\hline A1&0&.05&.94&.01\\ A2&0&.06&.93&.01\\ B1&.04&.72&.04&.20\\ B2&0&.06&.93&.01\\ B3&0&0&0&1.00\\ \hline \end{tabular} \end{center} \caption{The fermionic branching fraction into various final states for the Higgs that does not couple to the b-quarks in the various models. We have explicitly assumed no mixing between the Higgs scalars, and that top quark decays are not kinematically accessible. The decay into gauge bosons will dominate if they are kinematically accessible.} \end{table} The results in Table 4 are interesting. In Models A1, A2 and B2, we see that the inversion of the bottom-top quark doublet takes the field that would ``normally'' decay into $b\overline{b}$, and (since the top quark is too heavy) makes its primary decay mode $\tau^{+}\tau^{-}$. This would be a very dramatic signature. In Model B3, in which the Higgs is leptophobic (and in which, as shown in the last section, radiative muon decay does not bound the Higgs mass), there are no leptonic decays, and the primary decay mode would be into $s\overline{s}$. The most unusual model is B1, in which the {\it primary} decay mode is into $\mu\tau$. This monochromatic muon would give a very dramatic signature. All of these signatures are quite dramatic. How realistic is this scenario? Abandoning the use of the Fritzsch ansatz will have effects of $O(1)$ on these results, but will not change the general picture. However, the assumption of no mixing between the doublets will have a substantial effect on the scalars (the pseudoscalar will not, in general, have this mixing, and thus the results of the above paragraph will apply). For the scalars, mixing means that the branching ratio into $b\overline{b}$ is not negligible. For Models A1, A2 and B2, the fermionic branching ratio into $b\overline{b}$ relative to $\tau^{+}\tau^{-}$ is approximately $25\sin^{2}\theta$, and thus the individual branching ratios must be reduced accordingly.
For Model B3, the fermionic branching ratio into $b\overline{b}$ is approximately $1000\sin^{2}\theta$, and thus the primary decay mode will almost certainly be into $b\overline{b}$, unless the angle is extremely small. For B1, the fermionic branching ratio into $b\overline{b}$ is approximately $400\sin^{2}\theta$, and thus it is likely that $b\overline{b}$ decays will dominate, although the remarkable $\mu\tau$ decay mode will still be substantial. Note that the signature for $\mu\tau$ decays is very clean, and branching ratios of $10^{-4}$ can be detected. As a result, in all of these models except B3, the Higgs decay into $\mu\tau$ is detectable. \section{Bounds on the Gauge Boson Sector} The electroweak Lagrangian (with the kinetic terms dropped) may be written in the form \begin{displaymath} {\cal L}= \sum_i \overline{\psi}_i({g \over 2} \lambda_\alpha A_\alpha^\mu + g'XB^\mu)\psi_i =\sum_i \overline{\psi}_i \pmatrix{ D_1^\mu & {g \over \sqrt{2} } W^{+\mu} & {g \over \sqrt{2} } K^{+\mu} \cr {g \over \sqrt{2} } W^{-\mu} & D_2^\mu & {g \over \sqrt{2} } K^{0\mu} \cr {g \over \sqrt{2} } K^{-\mu} & {g \over \sqrt{2} } \overline{K}^{0\mu} & D_3^\mu \cr}\psi_i \end{displaymath} where \begin{eqnarray} D_1^\mu=g \left( {A_3^\mu \over 2} + {A_8^\mu \over 2\sqrt{3} } \right) +g'XB^\mu \nonumber \\ D_2^\mu=g \left( -{A_3^\mu \over 2} + {A_8^\mu \over 2\sqrt{3} } \right) +g'XB^\mu \nonumber \\ D_3^\mu= -g{A_8^\mu \over \sqrt{3} } +g'XB^\mu \end{eqnarray} and the sum is over all $\psi$ in the model. With the relationship $\sin^{2}\theta_W={3g'^2 \over 3g^2+4g'^2}$ defining the electroweak mixing angle, we find that the diagonal terms reduce to combinations of the expected neutral gauge bosons $A^\mu$ and $Z^\mu$, plus a new boson, the $Z'^\mu$. The photon and $Z$ have the same couplings and Feynman rules as the SM, and therefore display no unusual characteristics. However, the $Z'$ has vector and axial couplings which depend on the particular lepton generation, Eqs.\ref{eq:L1}-\ref{eq:L4}, leading to FCNC. In terms of the $SU(3)_L \otimes U(1)_X$ gauge bosons, we find that the low-energy fields are given by~\cite{diaz, ochoa, VanDong:2005pi} \begin{eqnarray} A_\mu & = & S_W A_\mu^3 + C_W \left( {T_W \over \sqrt{3} } A_\mu^8 +\sqrt{ 1-{T_W^2 \over 3}} B_\mu \right) \cr \nonumber Z_\mu & = & C_W A_\mu^3 - S_W \left( {T_W \over \sqrt{3} } A_\mu^8 +\sqrt{ 1-{T_W^2 \over 3}} B_\mu \right) \cr \nonumber Z_\mu' & = & -\sqrt{ 1-{T_W^2 \over 3}} A_\mu^8 + {T_W \over \sqrt{3} } B_\mu , \label{eq:bosons} \end{eqnarray} where $S_W=\sin \theta_W$, $C_W=\cos \theta_W$, and $T_W=\tan \theta_W$. These fields have the eigenvalues \begin{equation} M^2_{A_\mu}=0; \quad M^2_{Z_\mu}\simeq {g^2 \over 2} \left[ { 3g^2 +4g'^2 \over 3g^2 + g'^2} \right] (v_1^2 +v_2^2); \quad M^2_{Z_\mu'}\simeq {2 [3g^2 +g'^2] \over 9} V^2. \end{equation} The $Z'$ has a vertex factor of the form $-i {1 \over 2} \gamma_\mu \left( C_V - C_A \gamma_5 \right)$ where the $C_{V,A}$ are family-dependent, and given in Table 5. 
\begin{table}[h] \begin{center} \begin{tabular} {ccc}\hline \hline Family & $C_V$ & $C_A$ \\ \hline $L_1$,$L_4$ & ${10 S_W \over \sqrt{3-4S_W^2}} + {\sqrt{3-4S_W^2} \over S_W}$ & ${\sqrt{3-4S_W^2} \over S_W} - {2 S_W \over \sqrt{3-4S_W^2}}$ \\ $L_2$ & ${8 S_W \over \sqrt{3-4S_W^2} }-{\sqrt{3-4S_W^2} \over S_W}$ & $-{\sqrt{3-4S_W^2} \over S_W} - {4 S_W \over \sqrt{3-4S_W^2} }$ \\ $L_3$ & ${6 S_W \over \sqrt{3-4S_W^2} } -3{\sqrt{3-4S_W^2} \over S_W}$ & ${\sqrt{3-4S_W^2} \over S_W} - {2 S_W \over \sqrt{3-4S_W^2}}$ \\ \hline \hline \end{tabular} \end{center} \caption{The $C_V$ and $C_A$ for the various lepton families. A common factor of ${e\over 6\cos\theta_{W}}$ has been factored out of each. Note that $C_A$ is the same for $L_{1,3,4}$.} \end{table} A recent analysis of precision electroweak (EW) bounds in 331 models without exotic electric charges\cite{ochoa} gave a lower bound of $1400$ GeV on the mass of the $Z^{\prime}$. Since the $SU(3)_L \otimes U(1)_X$ representations are different for each lepton family, one expects $Z'$-mediated FCNC. As discussed in the last section, the mixing matrix between the $SU(3)_L$ eigenstates and the mass eigenstates will have too many free parameters. To estimate the size of the $Z'$ FCNC, we therefore again use the Fritzsch ansatz. Without this ansatz there are too many parameters; with it, the mixing matrix has no free parameters other than the lepton masses. To determine the FCNC couplings of the $Z'$, one picks the model and diagonalizes. Using the $C_V$ and $C_A$ in Table 5, one reads off the couplings for each particle. These couplings will be a linear combination of the family couplings. Since the $C_V$ differ for each family, there will be FCNC. The most stringent bound on $M_{Z'}$ is found from $\mu \rightarrow 3e$ decays. The formula for this decay rate is \begin{equation} \Gamma = {\pi m_\mu^5 \over 108} \left({ e \over 24 \pi C_W M_{Z'}} \right)^4 \left[ 3(C_{Ve\mu}^2 + C_{Ae\mu}^2 )(C_{Vee}^2 + C_{Aee}^2 ) +4C_{Ve\mu}C_{Ae\mu}C_{Vee}C_{Aee} \right] \end{equation} Given that we do not know which family corresponds to which lepton, we try all possibilities. This provides bounds that range from $2$ TeV in Model B2 to between 20 and 40 TeV in the other models. A similar calculation using $\tau \rightarrow 3\mu$ or $\mu \rightarrow e \gamma$ provides much weaker lower bounds. Thus the precision EW bound quoted above is not the relevant constraint in these models. A bound of 20 to 40 TeV is discouraging since the $Z'$ will be beyond the reach of the LHC, and because fine-tuning will be needed to explain a new hierarchy problem. Nonetheless, Model B2 does not need substantial fine-tuning, and the Higgs decays in any of the models will provide distinct signatures. \section{Conclusions} We have studied a pair of 3-3-1 models that have not previously been examined. The defining characteristic of these models is that each lepton generation has a unique structure. This leads to FCNC decays mediated by the light Higgs and $Z'$ boson. $Z'$-mediated $\mu \rightarrow 3e$ provides a lower bound of $2$ TeV for $M_{Z'}$ in Model B2, and between 20 and 40 TeV in the others. These models will all have interesting Higgs decay signatures. In particular, $\Phi \rightarrow \mu \tau$ could show up clearly at the LHC. This research was supported by the National Science Foundation grant PHY-023400. \hspace{0.2in}
\section{Introduction} In this work we consider the $(2+\frac{1}{2})$-dof Hamiltonian system \begin{equation} \label{system_intro} H(x_1,x_2,y_1,y_2,t) = H_0(x_1,x_2,y_1,y_2) + \epsilon H_1(x_1,x_2,y_1,y_2,t), \end{equation} where $$ H_0(x_1,x_2,y_1,y_2)= x_1 y_2 - x_2 y_1 + \nu \left(\frac{x_1^2+x_2^2}{2} + \frac{y_1^2+y_2^2}{2} \left( -1 + \frac{y_1^2+y_2^2}{2} \right) \right), $$ and $$ H_1(x_1,x_2,y_1,y_2,t)=\frac{y_1^5}{(d-y_1)(c-\cos(\theta))}, \quad \theta=\gamma t + \theta_0. $$ We shall fix concrete values of $c$, $d$, $\gamma$ and $\epsilon$, and consider $\nu>0$ as a perturbative parameter. The parameter $\theta_0 \in [0,2\pi)$ is an initial time phase. Our motivation to consider that concrete system is to study some dynamical properties related to the Hamiltonian-Hopf bifurcation under a periodic forcing. Then, to start with, in Section~\ref{Sec:SokolskiiNF} we briefly review the reduction to Sokolskii normal form (NF) for a 2-dof Hamiltonian system that undergoes a Hamiltonian-Hopf bifurcation. The truncation of the Sokolskii NF provides an integrable approximation of the dynamics. The above unperturbed system $H_0$ is simply the lowest order truncation that captures the main dynamical features of the Hamiltonian-Hopf bifurcation. Some basic facts concerning the dynamics of $H_0$ are summarized in Section~\ref{Sec:ConcreteSystem}. For $\nu>0$ the origin becomes of complex-saddle type. For $\epsilon=0$ the 2D stable/unstable invariant manifolds coincide. But for small and fixed $\epsilon>0$ the perturbation $H_1$ creates a splitting of these invariant manifolds. In Section~\ref{Sec:ConcreteSystem} we also discuss some nice properties of the chosen perturbation. Such splitting of the invariant manifolds becomes exponentially small in $\nu$ as $\nu \rightarrow 0$. In Section~\ref{sect_num} we perform a numerical computation of the splitting functions. Quadruple precision arithmetics is used to integrate (\ref{system_intro}) in order to get a sample of points on $W^u(0)$ and $W^s(0)$ that allows us to compute the splitting function in a fundamental domain. Different bifurcations are detected examining the nodal lines of the splitting functions as $\nu$ varies. The corresponding Poincar\'e-Melnikov function is analytically investigated in Section~\ref{Sec:Splitting} by means of a combination of numerical, symbolical and theoretical tools. The splitting problem considered is non-perturbative ($\epsilon$ is fixed) and singular (when $\nu=0$ the system is not hyperbolic) and the use of the Melnikov approximation to study the splitting is not theoretically justified. In Section~\ref{Sec:ValMel} we compare the results of the splitting obtained in Section~\ref{sect_num} with those from (a suitable truncation, adding only the relevant terms) the Melnikov approximation derived in Section~\ref{Sec:Splitting}, getting a remarkable agreement. In Section~\ref{sect_PoincMel} we further analyse the Poincar\'e-Melnikov function by taking advantage of the concrete properties of the system and of the perturbation to give explicit details of the asymptotic behaviour of the splitting. In particular, we look for the concrete values of the parameter $\nu$ for which a change in the dominant harmonic is detected and we study how these values asymptotically behave. As expected, the Diophantine properties of the frequency $\gamma$ of the perturbation $H_1$ play a key role in the analysis performed in Section~\ref{sect_PoincMel}. 
Note that we do not assume a concrete frequency; instead, our hypotheses concern the properties of the continued fraction expansion (CFE) of $\gamma$. Section~\ref{Sec:Otherfreq} is devoted to illustrating the behaviour of the splitting for different frequencies $\gamma$. In particular, we show examples where some of the best approximants of $\gamma$ never become a dominant harmonic in the splitting functions. Finally, Section~\ref{Sec:Conclusion} summarizes the results and describes related future work problems. Five appendices complement the discussions through the text. In Appendix~\ref{autonomous} we study the splitting under an autonomous perturbation of the unperturbed system. The simple asymptotic behaviour of the splitting is well understood in this situation, in contrast with the non-autonomous case studied in this work. Appendix~\ref{exempleduffing}, however, illustrates that in the autonomous case, taking a non-entire perturbation, the analysis of the splitting by considering individual terms of the series expansion of the perturbation can lead to a larger dominant exponent of the Melnikov function. This is not expected in the non-autonomous case since the dominant term comes from the quasi-periodic properties of the splitting asymptotic behaviour. Appendix~\ref{Sec:Regularity} discusses the role played by the regularity in $t$ of the non-autonomous perturbation in the asymptotic behaviour of the splitting. When describing the asymptotic behaviour of the splitting of the invariant manifolds for system (\ref{system_intro}) we will see that for large intervals of $\nu$ the dominant harmonic coincides for both splitting functions. However, there are small intervals of $\nu$ where the dominant harmonics differ. In Appendix~\ref{split-difusion} we comment on the expected consequences of this fact for the (local) diffusive properties of the system for very small values of $\nu$. In the last appendix we focus on the presence of hidden harmonics, that is, harmonics associated with best approximants of $\gamma$ that never become a dominant harmonic of the splitting function. As mentioned, hidden harmonics are shown for some frequencies in Section~\ref{Sec:Otherfreq}. We prove in Appendix~\ref{conseq_best} that, under generic conditions, it is not possible to have two consecutive best approximants of $\gamma$ which are not related to a dominant harmonic of the splitting function when some nearby quotients of the CFE of $\gamma$ are large enough. A more general situation can be found in \cite{FonSimVie-2}. The theoretical derivations presented in this work provide a satisfactory and complete description of the asymptotic behaviour of the splitting of separatrices of the system (\ref{system_intro}). On the other hand, a complete rigorous proof of the results included here will require \begin{enumerate} \vspace{-0.2cm} \item to bound the effect of higher order terms of the expansion of the splitting function in powers of $\epsilon$ to guarantee that the first order Poincar\'e-Melnikov function provides the dominant term of the splitting behaviour, and \vspace{-0.2cm} \item to check that the contribution of the non-dominant harmonics of the Poincar\'e-Melnikov approximation does not change the dominant term of the asymptotic expansion of the splitting behaviour. \end{enumerate} Even though we do not formally address either of these items, the numerical results that we present provide strong numerical evidence supporting them.
\section{The theoretical framework: the Hamiltonian-Hopf bifurcation} \label{Sec:SokolskiiNF} For the reader's convenience, in this section we briefly summarize some details of the analysis of the Hamiltonian-Hopf bifurcation. Consider a one-parameter family of Hamiltonian systems $H_{\hat{\nu}}(x_1,x_2,y_1,y_2)$ which undergo a Hamiltonian-Hopf bifurcation. Assume that for ${\hat{\nu}}>0$ the origin is elliptic and becomes complex unstable for ${\hat{\nu}}<0$. This implies that the eigenvalues of the linearised Hamiltonian system suffer a Krein collision: for ${\hat{\nu}}>0$ the linear system has two pairs of purely imaginary eigenvalues $\pm {\mbox{\rm i}\,} \omega_1$ and $\pm {\mbox{\rm i}\,} \omega_2$. These pairs meet in a double pair $\pm {\mbox{\rm i}\,} \omega$, $\omega >0$, on the imaginary axis for ${\hat{\nu}}=0$ (Krein collision) and they become a hyperbolic quartet $\pm \alpha \pm {\mbox{\rm i}\,} \omega$, $\alpha,\omega>0$ for ${\hat{\nu}}<0$. Let $\mathbb{P}_k$ be the set of homogeneous polynomials of degree $k \in \mathbb{N}$. Consider the Taylor expansion at $0$ of $H_{{\hat{\nu}}}$ expressed as $$ H_{{\hat{\nu}}}=\sum_{k \geq 2} \sum_{j\geq 0} {\hat{\nu}}^j H_{k,j} , \qquad \text{where } H_{k,j} \in \mathbb{P}_k \text{ for all } k\geq 2, \ j\geq0. $$ The first step is to reduce the quadratic part $H_{2,0}$ to a canonical NF (i.e. a NF obtained via a symplectic change of coordinates). After doing this reduction the strategy will be to use a Lie series methodology to successively (order by order) simplify (as much as possible) the terms $H_{2,j}, \, j\geq 1$, and $H_{k,j}, \, k\geq3, j \geq0$. The possible canonical forms for quadratic Hamiltonians were obtained in \cite{Will37}. In the case of two pairs of (double) purely imaginary eigenvalues $H_{2,0}$ can be reduced to the so-called Williamson NF \begin{equation} \label{WNF} H_{2,0}= -\omega (x_2 y_1 - x_1 y_2) + \frac{1}{2} (x_1^2 + x_2^2). \end{equation} The next step involves normalising higher order terms of $H_{{\hat{\nu}}}$. The fact that the linearization at the Hamiltonian-Hopf bifurcation point is non-semisimple makes the NF reduction a little bit more involved, see \cite{MeySch71,VdM82,MeyerHall,Han07,PalYan00}. A standard procedure to deal with the terms of order $(k,j)$ is to look for a change of variables given by the time-$1$ map of a Hamiltonian $G \in \mathbb{P}_k$. In such a case the corresponding change transforms $H_{{\hat{\nu}}}$ into \begin{equation} \label{2bis} \tilde{H}_{{\hat{\nu}}} = \sum_{i \geq 0} \frac{1}{i!} \ \text{ad}^i_{H_{{\hat{\nu}}}}(G), \end{equation} where $\text{ad}_{F}(G)=\{F,G\}$ denotes the usual adjoint operator defined in terms of the Poisson bracket $$ \{F,G\} = \left( \frac{\partial{F}}{\partial x_1} \frac{\partial{G}}{\partial y_1} - \frac{\partial{F}}{\partial y_1} \frac{\partial{G}}{\partial x_1} \right) + \left( \frac{\partial{F}}{\partial x_2} \frac{\partial{G}}{\partial y_2} - \frac{\partial{F}}{\partial y_2} \frac{\partial{G}}{\partial x_2} \right). $$ Collecting the terms of $\tilde{H}_{{\hat{\nu}}}$ of order $(k,j)$ in (\ref{2bis}) we get $H_{k,j} + \text{ad}_{H_2}(G)$, meaning that the change of coordinates allows us to remove the terms $H_{k,j}$ of $H_{{\hat{\nu}}}$ that belong to $\text{Im} \,\text{ad}_{H_2}(G)$. The Fredholm alternative implies that $\mathbb{P}_k= \text{Im}\, \text{ad}_{H_2} \oplus \text{Ker} \, \text{ad}^\top_{H_2}$, where $\text{ad}^\top_{H_2}$ denotes the transpose operator. 
Then, as indicated in \cite{Elpetal87,MeyerHall}, a systematic way to proceed is to look, at each order $(k,j)$ of the normalisation procedure, for $G \in \mathbb{P}_k$ such that \begin{equation} \label{homological} H_{k,j} + \text{ad}_{H_2}(G) \in \text{Ker} \, \text{ad}^\top_{H_2}. \end{equation} Moreover, in the (symplectic, $\Omega=dx_1 \wedge dy_1 + dx_2 \wedge dy_2 = dR \wedge dr + d \Theta \wedge d \theta$) new coordinates \begin{equation}\label{Sokcoord} y_1=r \cos(\theta), \qquad y_2=r \sin(\theta), \qquad R= (x_1 y_1 + x_2 y_2)/r, \qquad \Theta =x_2 y_1 - x_1 y_2, \end{equation} the transpose linear system (i.e. the system with equations defined by the matrix $J (D^2 H_2)^\top$) reduces to $H_2^\top = -\omega \Theta + \frac{1}{2} r^2$, see \cite{MeyerHall}. Then (\ref{homological}) implies that the normalised (formal) Hamiltonian is given by \begin{equation} \label{SNF} \text{NF}(H_{{\hat{\nu}}})=\omega \Gamma_1 + \Gamma_2 + \sum_{\substack{k,l,j \geq 0 \\ k + l +j \geq 2}} a_{k,l,j} \, \Gamma_1^{k} \, \Gamma_3^{l} \, {\hat{\nu}}^j, \end{equation} where $$ \Gamma_1 = x_1 y_2 - x_2 y_1, \qquad \Gamma_2 = (x_1^2+ x_2^2)/2 \qquad \text{and} \qquad \Gamma_3= (y_1^2+ y_2^2)/2. $$ This is the so-called Sokolskii NF \cite{Sok74}, see \cite{MeyerHall,GaiGel11} for further details on its derivation. Note that: \begin{itemize} \item We have seen that there exists a formal change of variables $C$ (not convergent in general) such that reduces the given system to the NF (\ref{SNF}). Moreover, $C$ is symplectic, see for example \cite{SimVall01}. If the quadratic part of the original system is already in Williamson NF, then the change is near-the-identity. \item The reduced Hamiltonian (\ref{SNF}) is formally integrable and possesses $\Gamma_1$ as an extra (formal) integral of motion. The original Hamiltonian is only formally integrable (that is, the truncation at any order is integrable) and the difference between the Hamiltonian $H_{{\hat{\nu}}}$ and the formal series $\text{NF}(H_{{\hat{\nu}}}) \circ C^{-1}$ is beyond all orders. \item The reduction to a NF is achieved by means of successive changes of coordinates to normalize order by order the full Hamiltonian. Each of the changes of the normalization procedure reduces the domain where the truncated NF gives a good approximation. For a fixed perturbation parameter ${\hat{\nu}}$, there is an optimal truncation order of the NF that minimizes the bound of the error between the Hamiltonian and the NF in a suitable domain around the fixed point. Note that the optimal order depends discontinuously on ${\hat{\nu}}$ because it jumps on the integers. See, e.g., \cite{Nei84,Sim94} \end{itemize} Next we discuss some features of the invariant manifolds of the origin for $\text{NF}(H_{{\hat{\nu}}})$. In particular, $\{\Gamma_1,\Gamma_2\}=0$ and $\{\Gamma_1,\Gamma_3\}=0$ and hence, as we have said, $\Gamma_1$ is a first integral of $\text{NF}(H_{{\hat{\nu}}})$. Therefore $\Gamma_1=0$ on the invariant manifolds of the origin. On the other hand, these manifolds lie on $\text{NF}(H_{{\hat{\nu}}})=0$. From (\ref{SNF}), making explicit the lowest order terms of $\text{NF}(H_{{\hat{\nu}}})$, we have \begin{eqnarray} \label{NF4} \text{NF}(H_{{\hat{\nu}}}) &=& \omega \Gamma_1 + \Gamma_2 + {\hat{\nu}} ( a_{1,0,1} \Gamma_1 + a_{0,1,1} \Gamma_3 )+ a_{2,0,0} \Gamma_1^2 + a_{1,1,0} \Gamma_1 \Gamma_3 + a_{0,2,0} \Gamma_3^2 \\ \nonumber & & + \mathcal{O}({\hat{\nu}}^2 (\Gamma_1 + \Gamma_3), {\hat{\nu}} (\Gamma_1+\Gamma_3)^2,(\Gamma_1+\Gamma_3 )^3). 
\end{eqnarray} Then, the 2D stable and unstable invariant manifolds $W^{s/u}({\bf 0})$ are given by the relation \begin{equation} \label{WUS} \Gamma_2 + {\hat{\nu}} \, a_{0,1,1} \Gamma_3 + a_{0,2,0} \Gamma_3^2 + \mathcal{O}({\hat{\nu}}^2 \Gamma_3, {\hat{\nu}} \Gamma_3^2, \Gamma_3^3)=0. \end{equation} We want to have real invariant manifolds $W^{s/u}({\bf 0})$, which requires $\Gamma_2, \Gamma_3 >0$ (otherwise they lie in the complex domain). This means that ${\hat{\nu}} a_{0,1,1}<0$ and, since we have assumed that for ${\hat{\nu}}<0$ the origin is a complex-unstable fixed point, we must have $a_{0,1,1}>0$. Moreover, in such a case, for $a_{0,2,0}>0$ the invariant manifolds $W^{u/s}(0)$ live in a finite domain which, requiring the same order for the three dominant terms in (\ref{WUS}), has size $\Gamma_2 =\mathcal{O}({\hat{\nu}}^2)$ and $\Gamma_3= \mathcal{O}({\hat{\nu}})$. However for $a_{0,2,0}<0$ the invariant manifolds may be unbounded. For the first case we introduce the new parameter ${\nu}$ by ${\hat{\nu}}=-{\nu}^2$, and the rescaling $x_i = {\nu}^2 \tilde{x}_i$, $\omega y_i= {\nu} \, \tilde{y}_i$, $i=1,2$, $\omega t =\tilde{t}$, see \cite{MeySch71}. For concreteness, we shall consider $\nu>0$. After this non-canonical change of variables the system is again Hamiltonian and the corresponding Hamiltonian is \begin{equation} \label{scaledNF} \text{NF}(\tilde{H}_{{\nu}}) = \tilde{\Gamma}_1 + {\nu} \left( \tilde{\Gamma}_2 + a \tilde{\Gamma}_3 + \eta \tilde{\Gamma}_3^2 \right) + \mathcal{O}({\nu}^2), \end{equation} where \begin{equation} \label{aeta} a=-a_{0,1,1}/\omega^2 \qquad \text{ and } \qquad \eta=a_{0,2,0}/\omega^4. \end{equation} Hence, as it was pointed out in \cite{McSMey03}, for $\eta>0$ the invariant manifolds $W^{u/s}(0)$ are bounded while for $\eta <0$ they may be unbounded. Henceforth, we assume $a<0$ and $\eta>0$. \vspace{0.2cm} \begin{remark} \label{remark_eigen} \small From (\ref{NF4}) one checks that the eigenvalues of the linearisation at the origin of the original system $H_{{\hat{\nu}}}(x_1,x_2,y_1,y_2)$ are given by $\lambda=\pm {\mbox{\rm i}\,} \omega \pm \sqrt{a_{0,1,1}} (-{\hat{\nu}})^{1/2} + \mathcal{O}({\hat{\nu}})$. Then, for ${\hat{\nu}}<0$, one has $\text{Re} (\lambda) = \pm \omega \sqrt{-a} (-{\hat{\nu}})^{1/2} + \mathcal{O}({\hat{\nu}})$ and $\text{Im} (\lambda) = \pm \omega + \mathcal{O}({\hat{\nu}})$. \end{remark} \section{The system: a periodic perturbation of the truncated NF} \label{Sec:ConcreteSystem} In this section we provide some details on the concrete system (\ref{system_intro}) studied in this paper. \subsection{The unperturbed system} \label{sec3p1} Our starting point is the truncated (ignoring $\mathcal{O}({\nu}^2)$ terms) Sokolskii NF Hamiltonian (\ref{scaledNF}). According to Section~\ref{Sec:SokolskiiNF}, for $a<0$ and $\eta>0$ the invariant manifolds of the origin are bounded. The rescaling $\tilde{x}_i \to (-\sqrt{\eta}/a) \tilde{x}_i$, $\tilde{y}_i \to (\sqrt{-\eta/a}) \tilde{y}_i$, $i=1,2$, $\nu \to \sqrt{-a} \nu$, reduces the truncated Hamiltonian (\ref{scaledNF}) to the case $a=-1$ and $\eta=1$. To simplify notation, we denote the rescaled variables and parameter simply by $(x_1,x_2,y_1,y_2)$ and $\nu$, respectively. We also introduce ${\bf x}=(x_1,x_2)$, ${\bf y}=(y_1,y_2)$, and we denote by $H_0$ the corresponding truncated Hamiltonian. 
Hence, $H_0$ is just given by \begin{equation} \label{H0} H_0({\bf x},{\bf y})= \Gamma_1 + \nu (\Gamma_2 - \Gamma_3 + \Gamma_3^2), \end{equation} where $\Gamma_1=x_1 y_2 -x_2 y_1$, $\Gamma_2 = (x_1^2+x_2^2)/2$ and $\Gamma_3 = (y_1^2+y_2^2)/2$. The system $H_0$ is defined on the symplectic manifold $(\mathbb{R}^4,\Omega)$ with $\Omega=dx_1 \wedge dy_1 + dx_2 \wedge dy_2$, and the equations of motion are $$ \begin{array}{rcr} \dot{x}_1 & \! = \! & -x_2 + \nu y_1 (y_1^2 + y_2^2 -1), \\ \dot{x}_2 & \! = \! & x_1 + \nu y_2 (y_1^2 + y_2^2 -1), \end{array} \qquad \begin{array}{rcr} \dot{y}_1 & \! = \! & -y_2 - \nu x_1, \\ \dot{y}_2 & \! = \! & y_1 - \nu x_2. \end{array} $$ As follows from Section~\ref{Sec:SokolskiiNF}, $H_0$ is integrable and $\Gamma_1$ is an independent first integral of the system. The origin is a fixed point of $(\ref{H0})$ with eigenvalues $\pm \nu \pm {\mbox{\rm i}\,}$. For $\nu>0$, the origin is of complex-saddle type and the invariant manifolds $W^{u/s}(\bf 0)$ are given by $\{H_0=0\}\cap \{\Gamma_1=0\}$. To elucidate the dynamics of $H_0$ it is convenient to introduce (non-symplectic) polar coordinates \begin{equation} \label{polar} x_1=R_1 \cos(\psi_1), \quad x_2=R_1 \sin(\psi_1), \quad y_1=R_2 \cos(\psi_2), \quad y_2=R_2 \sin(\psi_2), \end{equation} where $R_1,\ R_2 >0$ and $\psi_1, \psi_2 \in [0,2 \pi)$. The equations of motion become \begin{equation} \label{eqmotR1R2} \begin{array}{rclrcl} \dot{R}_1 &= & \nu R_2 (R_2^2 -1) \cos(\psi_2-\psi_1), & \dot{\psi}_1 &=& 1 + \nu (R_2^2 -1) R_2 \sin(\psi_2-\psi_1) / R_1, \\ \dot{R}_2 &= &-\nu R_1 \cos(\psi_2-\psi_1), & \dot{\psi}_2 &=& 1 + \nu R_1 \sin(\psi_2-\psi_1) / R_2. \\ \end{array} \end{equation} One has $\Gamma_1= R_1 R_2 \sin(\psi_2-\psi_1)$, and hence $\Gamma_1=0$ implies $\sin(\psi_2-\psi_1)=0$. One can distinguish two cases: either $\psi_1-\psi_2 =0 \ (\text{mod} \, 2 \pi)$ or $\psi_1-\psi_2 = \pi \ (\text{mod} \, 2 \pi)$. Each of these cases defines a system for $R_1,R_2>0$. But, since the changes $R_1 \to -R_1$ and $R_2 \rightarrow -R_2$ reduce the system of one of the cases to the other one, it is enough to consider one of the cases if one allows $R_1,R_2$ to be negative.\footnote{The virtual singularities at $R_1=0$ and $R_2=0$ play no role. They are due to the use of polar coordinates. Indeed, using the so-called Sokolskii coordinates \cite{Sok74} one can remove one of them. We note, however, that there are not globally defined polar coordinates around a 2-dof complex-saddle singularity, see \cite{LerUma92}.} To fix ideas, we consider $\psi_1-\psi_2 = \pi \ (\text{mod} 2 \pi)$. Then, the restriction of the dynamics on $\{\Gamma_1=0\}$ for the $(R_1,R_2)$-components is just given by the equations related to the Duffing Hamiltonian $K= \nu (R_1^2 -R_2^2 + R_2^4 /2)/2$ (with the symplectic 2-form $\Omega_K=dR_2 \wedge dR_1$). The local positive branch of the homoclinic orbit $\gamma(t)$ of $K$, with $R_1,R_2 >0$, corresponds to the unstable manifold of the origin. It follows from (\ref{eqmotR1R2}) that along the invariant manifolds $\psi_1= \psi_2 - \pi$ and $\psi_2=t + \psi_0$, where $\psi_0 \in [0,2\pi)$ is an arbitrary phase. Moreover, the invariant manifolds of the unperturbed system (\ref{H0}) are foliated by homoclinic orbits $\gamma_{\psi_0} (t)=(x_1(t),x_2(t),y_1(t),y_2(t))$ given by \begin{equation} \label{homoH0} x_1(t) \!=\! -R_1(t) \cos(\psi), \ \ x_2(t) \!=\! -R_1(t) \sin(\psi), \ \ y_1(t) \!=\! R_2(t) \cos(\psi), \ \ y_2(t) \!=\! 
R_2(t) \sin(\psi), \end{equation} where $\psi=t+\psi_0$, $R_1(t)= \sqrt{2} \sech(\nu t) \tanh(\nu t)$, and $R_2(t)=\sqrt{2}\sech(\nu t)$. In particular, $\gamma_{\psi_0}(t)$ has singularities at $t= (2n+1) {\mbox{\rm i}\,} \pi / (2 \nu)$, $n \in \mathbb{Z}$. \subsection{The perturbation} We proceed by adding a periodic perturbation to (\ref{H0}). Concretely, as stated in the Introduction, we consider \begin{equation} \label{perturbedsystem} H({\bf x},{\bf y},t) = H_0({{\bf x}},{{\bf y}}) + \epsilon H_1({{\bf x}},{{\bf y}},t), \end{equation} where $H_0({{\bf x}},{{\bf y}})$ is the unperturbed Hamiltonian (\ref{H0}), which depends on $\nu$, and $$ H_1({{\bf x}},{{\bf y}},t)=g(y_1)f(\theta)=\frac{y_1^5}{d-y_1} \frac{1}{c-\cos(\theta)}, $$ where $\theta=\gamma t + \theta_0$, with $\gamma\in \mathbb{R} \setminus \mathbb{Q}$, $\theta_0 \in [0,2\pi)$ an initial phase, $d>\sqrt{2}$ and $c>1$. The parameter $\epsilon$ is considered to be small and fixed. We choose $\gamma =(\sqrt{5}-1)/2$, $d=7$, $c=5$ and $\epsilon=10^{-3}$ for the majority of computations throughout the paper, but we do not restrict to these values in the theoretical considerations. In particular, we will give details on how to deal with other irrational frequencies $\gamma$ and the role of their arithmetic properties in the asymptotic splitting behaviour as $\nu \to 0$. \vspace{0.2cm} The following comments motivate and somehow justify the perturbation (\ref{perturbedsystem}) considered. \vspace{-0.2cm} \begin{enumerate} \item A generic autonomous perturbation would create a splitting of separatrices. This case resembles the splitting of a $(1+1/2)$-dof Hamiltonian system (by considering the reduction to the energy level where the separatrices lie). A direct analysis of the Poincar\'e-Melnikov function in this case reveals an exponentially small behaviour in the parameter $\nu$ of the splitting measured as a variation of $\Gamma_1$. We summarize in Appendix~\ref{autonomous} the theoretical results and some concrete numerical simulations of the behaviour of the splitting for an autonomous perturbation. \item The phenomenon becomes much richer under a non-autonomous perturbation since different frequencies interact. Consider the particular case of the perturbation (\ref{perturbedsystem}). Around the invariant manifolds the unperturbed system possesses the internal frequency $1$ in $t$, see (\ref{homoH0}). Then, we choose $\gamma \in \mathbb{R}\setminus \mathbb{Q}$ in $H_1$ so that the effect of the perturbation resembles that of a quasi-periodic forcing. Concretely, when one restricts the perturbation $\epsilon H_1({\bf x},{\bf y},t)=\epsilon g(y_1)f(\theta)$ to the unperturbed invariant manifolds $W^{u/s}({\bf 0})$ of $H_0({\bf x},{\bf y})$, since $y_1$ has a factor periodic in $t$ as (\ref{homoH0}) shows, one gets a quasi-periodic function in $t$ with basic frequencies $(1,\gamma)$. As will be shown, some of the linear combinations of the basic frequencies are slower (hence they average out in a worse way) and describe the behaviour of the dominant terms of the splitting of the invariant manifolds. \item $H_0$ is an entire function of ${\bf x},{\bf y}$. The perturbation $H_1$ in a neighbourhood of the unperturbed invariant manifolds is real analytic with respect to ${\bf x},{\bf y}$ (because, in particular, $y_1 \lesssim \sqrt{2}$ and we choose $d=7$) and it is analytic in $t$.
This implies that the amplitude of the dominant term of the Poincar\'e-Melnikov approximation of the splitting function decreases exponentially in the parameter $\nu$, see details in Appendix~\ref{Sec:Regularity}. \item The Fourier coefficients of the even function $f(\theta)= (c-\cos(\theta))^{-1} = \sum_{j\geq 0} c_j \cos(j \theta)$ are given by \begin{equation} \label{expcs} c_0=1/\sqrt{c^2 -1}, \qquad c_j=2 c_{0}/(c+\sqrt{c^2-1})^j \quad \text{ for } j\geq 1. \end{equation} In particular, the Fourier coefficients decay as $1/(c+\sqrt{c^2-1})^j$ (this is related to the fact that $f(\theta)$ has poles at $\pm {\mbox{\rm i}\,} \log(c+\sqrt{c^2-1})$). On the other hand, the Taylor series of $g(y_1)$ is given by \begin{equation} \label{tayg} g(y_1)=\frac{y_1^5}{d-y_1}= \sum_{k \geq 0} d^{-k-1} y_1^{5+k}. \end{equation} It contains all powers $y_1^k$ for $k\geq 5$, but does not contain terms in the other three variables. The choice of $c=5$ and $d=7$ guarantees a fast enough decay but still allows us to differentiate the role of the different harmonics in the Poincar\'e-Melnikov function. See also Remark~\ref{remark_teogen} below. \item Finally, there is also a practical reason: $H_1$ is simple enough so that quadruple precision numerical integration of the full system can be carried out in reasonable CPU time. \end{enumerate} \begin{remark} Note that the perturbation $H_1({\bf x},{\bf y},t)$ preserves the fixed point at the origin. Instead one could consider perturbations such that the origin becomes a periodic orbit. We do not deal with this situation in this paper, but note that the description given here also applies to this case. \end{remark} \subsection{The splitting function} \label{Sec:splfun} In the following sections we study the invariant manifolds $W^{u/s}(\bf 0)$ of the system (\ref{system_intro}) and the asymptotic behaviour of their splitting as $\nu \rightarrow 0$. Here we introduce the notation we shall use to refer to the splitting function and its different approximations. We write $H_0=G_1+ \nu G_2$, where $G_1=\Gamma_1$ and $G_2=\Gamma_2 - \Gamma_3 + \Gamma_3^2$. $G_1$ and $G_2$ are first integrals of $H_0$. They are independent first integrals except for points on the surface $x_1=\pm y_2\sqrt{y_1^2+y_2^2-1},\;x_2=\mp y_1 \sqrt{y_1^2+y_2^2-1}$ (which includes, in particular, the origin and the periodic orbit $x_1=x_2=0,\ y_1^2+y_2^2=1$). The unperturbed invariant manifolds are given by $G_1=G_2=0$. Given $\epsilon \geq 0$, for $i=1,2$, we denote by $F_i^u$ (resp. $F_i^s$) the restriction of $G_i$ to the invariant manifold $W^{u}({\bf 0})$ (resp. $W^{s}({\bf 0})$). For $\epsilon$ small, the invariant manifolds $W^{u/s}({\bf 0})$ can be represented as graphs $g_{u/s}:\mathbb{R}^2 \to \mathbb{R}^4$, $g_{u/s}(\psi_0,\theta_0) = (\psi_0, \theta_0, F_1^{u/s}(\psi_0,\theta_0), F_2^{u/s}(\psi_0,\theta_0))$. Each component of the graph $g_{u/s}$ defines a 2-dimensional surface in $\mathbb{R}^3$; they are referred to below as the $F_1^{u/s}$-graph and the $F_2^{u/s}$-graph of $W^{u/s}({\bf 0})$. The splitting function $(\DF{1}, \DF{2})$ is defined by \begin{equation} \label{splfun} \DF{i}(\psi_0,\theta_0)=F_i^{u}(\psi_0,\theta_0) - F_i^s(\psi_0,\theta_0), \quad i=1,2. \end{equation} The splitting function (\ref{splfun}) can be expanded as $$ \DF{i} = \DFt{i} + \DFt[2]{i} + \dots, $$ where $\DFt[k]{i}(\psi_0,\theta_0) = \epsilon^k M_k(\psi_0,\theta_0)$, $|M_k|=\mathcal{O}(1)$.
Hence, $(\DFt{1}, \DFt{2})$ is the first-order Poincar\'e-Melnikov approximation (in powers of $\epsilon$) of the splitting function. Below, we perform direct numerical computations of the invariant manifolds to obtain approximations $\Fn{i}{u/s}$ of the components of the graph function, $i=1,2$. From them we compute numerical approximations $\DFn{i}$ of the components of the splitting function $\DF{i}$. Finally, the first-order approximation $(\DFt{1}, \DFt{2})$ can be expanded in Fourier series in $(\psi_0,\theta_0)$. Truncating them we obtain approximations that can be evaluated symbolically. In Section~\ref{Sec:ValMel} we compare the results obtained symbolically from suitable truncations of $(\DFt{1}, \DFt{2})$ with the numerical approximations $(\DFn{1},\DFn{2})$. \begin{remark} \label{remark_teogen} There are several theoretical works concerning the splitting of invariant manifolds in the presence of a quasi-periodic forcing; we refer to \cite{DelGelJorSea97,SimVall01,DelGut05,DelGonGut14,DelGonGut14-2,DelGonGut14-3}. A common hypothesis is that all the Fourier harmonics in $t$ and all the Taylor series terms in ${\bf x},{\bf y}$ appear in the corresponding expansions. Then they use generic analytic decay of the coefficients to bound the dominant term of the Melnikov function. The perturbation considered in this work, although it does not have all the required terms, behaves similarly. In future works we plan to investigate the effect of the absence of harmonics and/or Taylor terms in the perturbation and the consequences this has for the behaviour of the splitting of the invariant manifolds and for the dynamics around them. In particular, a higher order Melnikov analysis could be needed to describe the splitting in such a situation. \end{remark} \section{Numerical computations of the splitting: dominant harmonics and nodal lines} \label{sect_num} We present some numerical computations concerning the invariant manifolds $W^{u/s}(\bf 0)$ and their splitting for small values of $\nu$. We compute $\DFn{1}$ for a mesh of points in a fundamental domain (see below) as the difference between the value $\tilde{F}_1^u$ obtained at a point on $W^u(\bf 0)$ and the value $\tilde{F}_1^s$ at the ``corresponding'' point on $W^s(\bf 0)$. We describe below how to assign the corresponding point by using coordinates in a fundamental domain of the invariant manifolds. Similarly, we also compute $\DFn{2}$. It is useful to consider the Poincar\'e section $\Sigma$ defined by the (local) maxima of $R_2^2=y_1^2 + y_2^2$ along the orbits, see (\ref{polar}). Note that the invariant manifolds $W^{u/s}(\bf 0)$ of the unperturbed system ($\epsilon=0$) intersect $\Sigma$ in the curve $x_1=x_2=0$, $y_1^2 + y_2^2=2$. This is no longer true for $\epsilon >0$ because of the changes of order $\epsilon$ due to the perturbation (see Fig.~\ref{wsf1f2}). Moreover, there is an (exponentially small in $\nu$) splitting of the invariant manifolds $W^{u/s}(\bf 0)$ for $\epsilon \neq 0$. The illustrations in this section are for $\gamma=(\sqrt{5}-1)/2$ (golden frequency) and for values of $\nu$ of the form $\nu_i=2^{-i}$, $i \geq 0$. For the computation of the invariant manifolds and their splitting we proceed as follows: \begin{enumerate} \item We consider a fundamental domain of $W^{u}(\bf 0)$. This is given by a 2-dimensional torus $\mathcal{T}$. \item The propagation of $\mathcal{T}$ up to $\Sigma$ gives a 2-dimensional torus, say $\mathcal{T}_\Sigma$.
The invariant manifolds $W^{u/s}(\bf 0)$ in $\mathbb{R}^4$ are then given as the $\tilde{F}_1^{u/s}$ and the $\tilde{F}_2^{u/s}$-graphs over $\mathcal{T}_\Sigma$. The initial ``angle'' and ``time'' phases $\psi_0$ and $\theta_0$ are local coordinates in $\mathcal{T}_\Sigma$. \item To get the $\Fn{1}{u/s}$ and $\Fn{2}{u/s}$-graphs over $\mathcal{T}_\Sigma$ we propagate a set $\{\tilde{\psi}_{0,k}, \tilde{\theta}_{0,j}\}$, $0\leq k,j \leq 512$, of initial points in $\mathcal{T}$ (i.e. a total number of $2^{18}$ initial conditions) until they reach the Poincar\'e section $\Sigma$. Concretely we select the initial conditions as follows. We fix $R_2=10^{-12}$, set $R_1=R_2(1-R_2^2/2)$ and define $y_1^u=R_2 \cos(\tilde{\psi}_0), \ y_2^u=R_2 \sin(\tilde{\psi}_0), \ x_1^u=R_1 \cos(\tilde{\psi}_0), \ x_2^u=R_1 \sin(\tilde{\psi}_0)$. This gives an initial condition on $W^u$. By symmetry, $y_1^s=y_1^u, \ y_2^s=y_2^u, \ x_1^s=-x_1^u, \ x_2^s= -x_2^u$ defines an initial condition on $W^s$. \item For the propagation step, the numerical integration is performed using an ad-hoc implemented high-order Taylor time-stepper scheme with quadruple precision. \item To compute the difference (i.e. the splitting) between $W^{u}(\bf 0)$ and $W^{s}(\bf 0)$ we need to compare them at the same points of $\mathcal{T}_\Sigma$. Hence, we select an equispaced mesh of angles $\psi_0$ and $\theta_0$ within $\mathcal{T}_\Sigma$, and refine the initial conditions in $\mathcal{T}$ (we select the initial guess from the set of previously computed points in $\Sigma$) using a Newton method. \end{enumerate} To give some illustrations we choose $\nu=2^{-4}$ and $\nu=2^{-6}$. For those two values of $\nu$ the $\Fn{1}{s}$-graph (resp. $\Fn{2}{s}$-graph) of the stable manifold $W^{s}(\bf 0)$ over $\mathcal{T}_\Sigma$ is shown in Fig.~\ref{wsf1f2} left (resp. right). \begin{figure}[p] \psfrag{F1}{$\Fn{1}{s}$} \psfrag{F2}{$\Fn{2}{s}$} \psfrag{psi0}{$\psi_0$} \psfrag{beta}{$\theta_0$} \begin{center} \epsfig{file=./figs/wsf1_4.000.eps,width=75mm,angle=0} \epsfig{file=./figs/wsf2_4.000.eps,width=75mm,angle=0} \\[-11pt] \epsfig{file=./figs/wsf1_6.000.eps,width=75mm,angle=0} \epsfig{file=./figs/wsf2_6.000.eps,width=75mm,angle=0} \caption{First row: for $\nu=2^{-4}$ we represent the $\Fn{1}{s}$-graph (left) and the $\Fn{2}{s}$-graph (right) of $W^{s}(\bf 0)$ over $\mathcal{T}_\Sigma$. Second row: the same for $\nu=2^{-6}$. } \label{wsf1f2} \end{center} \psfrag{dF1}{$\DFn{1}$} \psfrag{dF2}{$\DFn{2}$} \psfrag{psi0}{$\psi_0$} \psfrag{beta}{$\theta_0$} \begin{center} \epsfig{file=./figs/spf1_4.000.eps,width=75mm,angle=0} \epsfig{file=./figs/spf2_4.000.eps,width=75mm,angle=0} \\[-11pt] \epsfig{file=./figs/spf1_6.000.eps,width=75mm,angle=0} \epsfig{file=./figs/spf2_6.000.eps,width=75mm,angle=0} \caption{First row: For $\nu=2^{-4}$ we represent the $\DFn{1}$ (left) and the $\DFn{2}$ (right). Second row: $\DFn{1}$ and $\DFn{2}$ for $\nu=2^{-6}$. } \label{spf1f2} \end{center} \end{figure} There are no appreciable differences (using the scale of the plots) between the graphs corresponding to $W^{s}(\bf 0)$ shown in Fig.~\ref{wsf1f2} and the corresponding plots for the graphs of the unstable manifold $W^{u}(\bf 0)$. This is because the splitting $(\DF{1},\DF{2})$ becomes exponentially small with respect to $\nu$. We show in Fig.~\ref{spf1f2} the splitting $(\DFn{1},\DFn{2})$ for the same values of $\nu$ as in Fig.~\ref{wsf1f2}. 
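As an illustration of step 3 (and of the mesh used in step 5) of the procedure above, the following Python sketch shows how initial conditions on the local unstable and stable manifolds can be seeded. It is only a schematic double-precision version for illustration; the computations reported in this section use the quadruple precision Taylor integrator mentioned in step 4, and the grid sizes below are the ones quoted above.
\begin{verbatim}
import numpy as np

R2_loc = 1.0e-12                           # distance to the origin along W^u (step 3)
R1_loc = R2_loc * (1.0 - R2_loc**2 / 2.0)  # R1 = R2 (1 - R2^2/2)

def seed_unstable(psi0):
    """Initial condition (x1, x2, y1, y2) on the local unstable manifold W^u(0)."""
    c, s = np.cos(psi0), np.sin(psi0)
    return np.array([R1_loc * c, R1_loc * s, R2_loc * c, R2_loc * s])

def seed_stable(psi0):
    """Initial condition on W^s(0): same y components, reversed x components."""
    x1, x2, y1, y2 = seed_unstable(psi0)
    return np.array([-x1, -x2, y1, y2])

# 513 x 513 grid of initial phases (psi0, theta0) on the fundamental torus T;
# each seed is propagated up to the section Sigma and then refined by a Newton
# method so that W^u and W^s are compared at the same point of T_Sigma (step 5).
psi0_grid = np.linspace(0.0, 2.0 * np.pi, 513)
theta0_grid = np.linspace(0.0, 2.0 * np.pi, 513)
\end{verbatim}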
We note that while the graphs remain similar for those selected values of $\nu$ (although the vertical range changes for the $\tilde{F}_2$-graph representations), see Fig.~\ref{wsf1f2}, the dominant harmonic of the Fourier expansion with respect to $(\psi_0,\theta_0) \in \mathcal{T}_{\Sigma}$ of $\DFn{1}$ has changed from $\nu=2^{-4}$ to $\nu=2^{-6}$, see Fig.~\ref{spf1f2}. A change of the dominant harmonic of $\DFn{2}$ for these two values of $\nu$ is also observed. Moreover, for $\nu=2^{-6}$ the dominant harmonic of $\DFn{1}$ is different from the dominant harmonic of $\DFn{2}$, as can be appreciated from the number of oscillations of the left/right plots of the second row of Fig.~\ref{spf1f2}. Concretely, for $\nu=2^{-4}$ the $(1,1)$ harmonic dominates for both $\DFn{1}$ and $\DFn{2}$, while for $\nu=2^{-6}$ the $(3,5)$-harmonic dominates for $\DFn{1}$ and the $(2,3)$ harmonic dominates for $\DFn{2}$. We can look for the so-called nodal lines. These are the zero level curves of $\DF{1}$ or $\DF{2}$, i.e. where either the $F_1$-splitting or the $F_2$-splitting vanishes. For $(\psi_0,\theta_0) \in \mathcal{T}_{\Sigma}$ the nodal lines for some values of $2^{-4.301} \leq \nu \leq 2^{-2.443}$ are shown in Fig.~\ref{nodals1}. The nodal lines for some smaller values of $\nu$, up to $2^{-6.235}$, are shown in Fig.~\ref{nodals2}. The values of $\nu$ shown have been selected so that a change of $10^{-3}$ in $\log_2(\nu)$ produces a topological change of the nodal lines. The intersections between the nodal lines correspond to homoclinic points and the changes in the topology of the nodal lines correspond to passages from a dominant harmonic to another one (either in $\DF{1}$ or in $\DF{2}$), see \cite{SimVall01}. Hence, when decreasing $\nu$ many changes of dominant harmonic have been detected. We summarize them in Table~\ref{taulanubif}. Concretely, we detect a topological change of the $\DFn{1}$ or $\DFn{2}$ nodal lines for $\nu \in (\nu_1,\nu_2)$. The values of $\nu_1$ and $\nu_2$ and the dominant harmonics at $\nu_1$ and $\nu_2$ are shown in the table. As expected the dominant harmonics of $\DFn{1}$ and $\DFn{2}$ are the elements of the Fibonacci sequence, since they are related to the best approximants of the golden number frequency $\gamma$. Observe that the appearance of a new harmonic happens first for $\DFn{1}$ and later for $\DFn{2}$. These appearances take place alternatively. Later on we will estimate the changes in $\DF{1}$ and $\DF{2}$ carefully. The fact that the harmonics in $\DF{1}$ and $\DF{2}$ coincide for large ranges of $\nu$ has some dynamical consequences in the diffusion properties (see Appendix~\ref{split-difusion}). 
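The relation between the dominant harmonics and the best approximants of $\gamma$ can be checked directly. The following Python sketch (an illustration only, not the code used for the computations of this section) generates the convergents $m_1/m_2$ of $\gamma=(\sqrt{5}-1)/2$ from its CFE and lists the small divisors $s=|m_1-m_2\gamma|$ which, as derived in Sections~\ref{Sec:Splitting} and~\ref{sect_PoincMel}, control the size (of order $\exp(-\pi |s|/(2\nu))$) of the corresponding harmonic of the splitting; from $(m_1,m_2)=(1,1)$ onwards these are precisely the consecutive Fibonacci pairs appearing in Table~\ref{taulanubif}.
\begin{verbatim}
from math import sqrt, floor

def convergents(gamma, n):
    """First n convergents m1/m2 of gamma obtained from its continued fraction."""
    a, x = [], gamma
    for _ in range(n):
        q = floor(x)
        a.append(q)
        x = 1.0 / (x - q)
    # standard recurrences p_k = a_k p_{k-1} + p_{k-2}, q_k = a_k q_{k-1} + q_{k-2}
    p, q = [a[0], a[1] * a[0] + 1], [1, a[1]]
    for k in range(2, n):
        p.append(a[k] * p[-1] + p[-2])
        q.append(a[k] * q[-1] + q[-2])
    return list(zip(p, q))

gamma = (sqrt(5.0) - 1.0) / 2.0            # golden frequency, CFE = [0; 1, 1, 1, ...]
for m1, m2 in convergents(gamma, 12)[1:]:  # skip the trivial convergent 0/1
    s = abs(m1 - m2 * gamma)
    print(f"(m1, m2) = ({m1:3d}, {m2:3d})   s = |m1 - m2*gamma| = {s:.3e}")
\end{verbatim}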
\begin{table} \begin{center} \begin{tabular}{|c|c|l|} \hline & & \\[-0.4cm] $-\log_2 \nu_2$ & $-\log_2 \nu_1$ & Change of the dominant harmonics of $\DFn{1}, \DFn{2}$ \\ & & \\[-0.4cm] \hline 2.443 & 2.444 & \hspace{2.5cm} (1,0), (1,0) $\longrightarrow$ \textcolor{red}{(1,1)}, (1,0) \\ 2.676 & 2.677 & \hspace{2.5cm} (1,1), (1,0) $\longrightarrow$ (1,1), \textcolor{blue}{(1,1)} \\ 4.112 & 4.113 & \hspace{2.5cm} (1,1), (1,1) $\longrightarrow$ \textcolor{red}{(1,2)}, (1,1) \\ 4.300 & 4.301 & \hspace{2.5cm} (1,2), (1,1) $\longrightarrow$ (1,2), \textcolor{blue}{(1,2)} \\ 5.133 & 5.134 & \hspace{2.5cm} (1,2), (1,2) $\longrightarrow$ \textcolor{red}{(2,3)}, (1,2) \\ 5.428 & 5.429 & \hspace{2.5cm} (2,3), (1,2) $\longrightarrow$ (2,3), \textcolor{blue}{(2,3)} \\ 5.971 & 5.972 & \hspace{2.5cm} (2,3), (2,3) $\longrightarrow$ \textcolor{red}{(3,5)}, (2,3) \\ 6.234 & 6.235 & \hspace{2.5cm} (3,5), (2,3) $\longrightarrow$ (3,5), \textcolor{blue}{(3,5)} \\ \hline \end{tabular} \caption{The third column lists the dominant harmonic of $\DFn{1}$ and the dominant harmonic of $\DFn{2}$ (both separated by a comma) for the value $\nu=\nu_2$ (left hand side of the arrow) and for $\nu=\nu_1$ (right hand side of the arrow). The $(m_1,m_2)$ harmonic corresponds to the frequency $m_1 \psi_0 - m_2 \theta_0$ of the Fourier expansion of $\DFn{i}$. The values of $\nu_2$ and $\nu_1$, shown in the first and second columns, are such that a bifurcation takes place for $\nu \in (\nu_1,\nu_2)$. We highlight the changes in the dominant harmonic of $\DFn{1}$ in red while those of $\DFn{2}$ are marked in blue. } \label{taulanubif} \end{center} \end{table} \begin{figure}[p] \begin{center} \epsfig{file=./figs/nodals2.443.eps,width=75mm,angle=0} \epsfig{file=./figs/nodals2.444.eps,width=75mm,angle=0} \epsfig{file=./figs/nodals2.676.eps,width=75mm,angle=0} \epsfig{file=./figs/nodals2.677.eps,width=75mm,angle=0} \epsfig{file=./figs/nodals4.112.eps,width=75mm,angle=0} \epsfig{file=./figs/nodals4.113.eps,width=75mm,angle=0} \epsfig{file=./figs/nodals4.300.eps,width=75mm,angle=0} \epsfig{file=./figs/nodals4.301.eps,width=75mm,angle=0} \caption{Nodal lines of $\DFn{1}$ are shown in red. In blue we represent the ones related to $\DFn{2}$. The squares $[0,2\pi]^2$ represent the tori parameterized by $(\psi_0,\theta_0)$. Each row corresponds to two different values of the decreasing parameter $\nu$: before (left) and after (right) the bifurcation (values of $\nu \geq 2^{-4.301}$).} \label{nodals1} \end{center} \end{figure} \begin{figure}[p] \begin{center} \epsfig{file=./figs/nodals5.133.eps,width=75mm,angle=0} \epsfig{file=./figs/nodals5.134.eps,width=75mm,angle=0} \epsfig{file=./figs/nodals5.428.eps,width=75mm,angle=0} \epsfig{file=./figs/nodals5.429.eps,width=75mm,angle=0} \epsfig{file=./figs/nodals5.971.eps,width=75mm,angle=0} \epsfig{file=./figs/nodals5.972.eps,width=75mm,angle=0} \epsfig{file=./figs/nodals6.234.eps,width=75mm,angle=0} \epsfig{file=./figs/nodals6.235.eps,width=75mm,angle=0} \caption{Continuation of Fig.~\ref{nodals1}: nodal lines for values of $2^{-6.235} \leq \nu<2^{-4.301}$.} \label{nodals2} \end{center} \end{figure} \section{The splitting of the invariant manifolds} \label{Sec:Splitting} For the unperturbed system $H_0$ given in (\ref{H0}) the 2-dimensional invariant manifolds of the origin $W^{u/s}(\bf 0)$ coincide.
But this is no longer true for the perturbed system (\ref{perturbedsystem}), the Hamiltonian perturbation $\epsilon H_1({\bf x},{\bf y},t)$ causes the splitting of the invariant manifolds $W^{u/s}(\bf 0)$. We will study the behaviour of the splitting of $W^{u/s}(\bf 0)$ as $\nu \rightarrow 0$ (i.e. as the system reduces hyperbolicity) for a fixed $\epsilon \neq 0$. As it is well-known the splitting is related to the nearest singularities to the real axis of the time-parameterization of the unperturbed homoclinic trajectory. In our case the singularities are located at $\tau_0=\pm {\mbox{\rm i}\,} \pi/2\nu$. Moreover, the perturbation $H_1({\bf x},{\bf y})$ adds a space singularity $\rho$ located at $y_1=d$ and a time singularity related to $\hat{\theta}=\pm {\mbox{\rm i}\,} \log(c+\sqrt{c^2-1})$ that restricts the domain of convergence of $f(\theta)$. The three singularities play a role in the asymptotic behaviour of the splitting as will be shown later on. We refer to \cite{GuaSea12} where a quasi-periodic perturbation with state singularities was considered. \subsection{The derivation of the Poincar\'e-Melnikov function} \label{derMel} To obtain the expression for the Poincar\'e-Melnikov vector we proceed in a standard way so we just shortly describe its derivation. Let $t_0\in \mathbb{R}$, $\zeta^s_0=(x_0^s, y_0^s)\in W^{s}(\rm{0})$, $\zeta^u_0=(x_0^u, y_0^u)\in W^{u}(\rm{0})$ and $\zeta^{s,u}(t)=(x^{s,u}(t), y^{s,u}(t))$ be the solutions of the Hamiltonian system $H_0+ \epsilon H_1$ such that $$ \zeta^{s,u} (t_0)=\zeta^{s,u}_0 =(x_0^{s,u}, y_0^{s,u}). $$ Clearly we have $\lim_{t\to \infty}\zeta^{s}(t) = \lim_{t\to -\infty}\zeta^{u}(t) =0 $. Then, for $i=1,2$, \begin{align*} G_i(\zeta^{s}(t)) & -G_i(\zeta^{s}_0) = \int_{t_0} ^t \frac{d}{dt}[G_i\circ \zeta^s](s)\, ds \\ & = \int_{t_0} ^t DG_i(\zeta^s(s)) [J DH_0^\top (\zeta^s(s)) + \epsilon J DH_1^\top (\zeta^s(s)) ]\, ds = \epsilon \int_{t_0} ^t \{G_i, H_1 \} \circ \zeta^s (s)\, ds , \end{align*} and taking limit when $t$ goes to $\infty$ we get $$ G_i(\zeta_0^s)=G_i(\zeta^{s} (t_0)) = -\epsilon \int_{t_0} ^\infty \{G_i, H_1 \} \circ \zeta^s (s)\, ds. $$ In the same way $$ G_i(\zeta_0^u)=G_i(\zeta^{u} (t_0)) = \epsilon \int_{t_0} ^{-\infty} \{G_i, H_1 \} \circ \zeta^u (s)\, ds. $$ Actually $\zeta^{s,u}(t)$ depend on $\epsilon$. Let $\zeta^{0}(t)$ denote the solution of the system when $\epsilon=0$, with initial condition $\zeta_0$ for $t=t_0$. We use $(\psi_0,\theta_0)$, which parameterize the unperturbed manifold, to also parameterize $W^{s,u} (0)$, and we consider $\zeta^{s}_0$, $\zeta^{u}_0$ the points on $W^{s} (0)$, $W^{u} (0)$ parameterized by $(\psi_0,\theta_0)$. Recall from Section~\ref{Sec:splfun} that $F_i^{u/s}$ denotes the restriction of $G_i$ to $W^{u/s}({\bf 0})$. By perturbation theory of invariant manifolds we have \begin{align*} \zeta^{s} (t) -\zeta ^0(t) &= O(\epsilon) , \quad \mbox{ uniformly in $t$ for } t\in [t_0, \infty), \\ \zeta^{u} (t) -\zeta ^0(t) &= O(\epsilon) , \quad \mbox{ uniformly in $t$ for } t\in ( -\infty, t_0]. \end{align*} Since $(\psi_0, \theta_0)$ and $t_0$ are not independent we assume that $t_0=0$, that is, the corresponding Poincar\'e-Melnikov integrals depend on the two phase variables $\psi_0$ (initial ``angle'' phase, see (\ref{homoH0})) and $\theta_0$ (initial ``time'' phase, see (\ref{perturbedsystem})). 
Therefore the splitting function is given by $$ F_i^u(\zeta^{u}_0) -F_i^s(\zeta^{s}_0) =\epsilon \int_{-\infty} ^\infty \{G_i, H_1 \} \circ \zeta^0 (s)\, ds + O(\epsilon^2) =: \epsilon M_i(\psi_0,\theta_0) + \mathcal{O}(\epsilon^2). $$ Below we denote by $(\DFt{1}(\psi_0,\theta_0),\DFt{2}(\psi_0,\theta_0)) = (\epsilon M_1(\psi_0,\theta_0),\epsilon M_2(\psi_0,\theta_0))$ the so-called (first order) Poincar\'e-Melnikov approximation function. \subsection{The expression of the Poincar\'e-Melnikov integrals} \label{sect_expr_PoincMel} As before, see (\ref{perturbedsystem}), we write $H_1({\bf x},{\bf y},t)=g(y_1)f(\theta)$ where the expansions of $f$ and $g$ are given in (\ref{expcs}) and (\ref{tayg}), respectively. Since the Poisson brackets are $$ \{G_1,H_1\}= y_2 f(\theta) g'(y_1), \qquad \{G_2,H_1\}= x_1 f(\theta) g'(y_1), $$ and \begin{equation} \label{expds} g'(y_1) = \sum_{k \geq 0} d_k y_1^{4+k}, \quad \mbox{ where } d_k =(5+k)d^{-k-1}, \end{equation} the Poincar\'e-Melnikov approximation of the splitting distance is \begin{align} \DFt{1}&=4 \epsilon \int_{-\infty}^{\infty} \sin(t+\psi_0) \, f(\gamma t+\theta_0) \, \sum_{k \geq 0} \frac{ \sqrt{2^{k+1}} \, d_k \, (\cos (t+\psi_0) )^{4+k}}{(\cosh(\nu t))^{5+k}} dt , \nonumber \\[-0.25cm] & \label{DG1exp} \\[-0.25cm] \DFt{2}&=-4 \epsilon \int_{-\infty}^{\infty} f(\gamma t+\theta_0) \, \sum_{k \geq 0} \frac{ \sqrt{2^{k+1}} \, d_k \, (\cos (t+\psi_0) )^{5+k} \, \sinh(\nu t)}{(\cosh(\nu t))^{6+k}} dt, \nonumber \end{align} where, for simplicity, we have not written the dependence on $\psi_0,\theta_0$ in $\DFt{i}$. Since the Poincar\'e-Melnikov integral is linear with respect to the perturbation $\epsilon H_1({\bf x},{\bf y},t)$ we can write $M(\psi_0,\theta_0)=(M_1(\psi_0,\theta_0),M_2(\psi_0,\theta_0))$ as an infinite sum and analyse the contribution to the splitting of each individual term of the series of $H_1$. The Fourier series of the terms of the form $(\cos(\psi))^m$ and $(\cos(\psi))^m \sin(\psi)$, for $m \in \mathbb{Z}^+$, that appear in the previous equations are given by \begin{equation} \label{ccs} (\cos(\psi))^m= \!\! \sum_{i=0}^{E\left(\frac{m}{2}\right)} \!\! a_{m,i} \cos((m-2i)\psi), \qquad (\cos(\psi))^m \sin(\psi)= \!\! \sum_{i=0}^{E\left(\frac{m+1}{2}\right)} \!\! b_{m,i} \sin((m+1-2i)\psi), \end{equation} where $E(x)$ denotes the integer part of $x$, and \begin{align} \label{ccscoefs} a_{m,i} &= \frac{1}{2^{m-1}} \left(\begin{array}{c}m\\i\end{array}\right), \ 0\leq i < m/2, \quad \mbox{ and } \quad a_{m,m/2}=\frac{1}{2^m} \left(\begin{array}{c}m\\m/2\end{array}\right) \mbox{ if } m \mbox{ even,} \\ b_{m,i} &= \frac{m+1-2i}{2^m(m+1)} \left(\begin{array}{c}m+1\\i\end{array}\right), \ 0\leq i \leq (m+1)/2. \nonumber \end{align} To compute the Poincar\'e-Melnikov integral for a general perturbation $g({\bf x},{\bf y}) f(\theta)$ the following comments apply: \begin{itemize} \item An expression of the form $x_1^{i_1}x_2^{i_2}y_1^{j_1}y_2^{j_2}$ in the Poisson bracket, when evaluated on the homoclinic orbit, becomes \[ (-1)^{i_1+i_2}2^{(i_1+i_2+j_1+j_2)/2}(\cos(\psi))^{i_1+j_1}(\sin(\psi))^ {i_2+j_2}\frac{(\sinh(\nu t))^{i_1+i_2}}{(\cosh(\nu t))^{2i_1+2i_2+j_1+j_2}}.\] The trigonometric terms can be reduced to the sum of expressions of the form $(\cos(\psi))^m$ or $(\cos(\psi))^m\sin(\psi)$, depending on whether $i_2+j_2$ is even or odd. In a similar way the hyperbolic terms can be reduced to the sum of negative powers of $\cosh(\nu t)$ or to such a sum times $\sinh(\nu t)$, depending on whether $i_1+i_2$ is even or odd. 
\item Using the expansions (\ref{ccs})-(\ref{ccscoefs}), and assuming the expansion of the time-periodic part is \[ f(\theta)=\sum_{j\geq 0}a_j\cos(j\theta)+\sum_{j> 0}b_j\sin(j\theta), \qquad a_j,b_j \in \mathbb{R}\] the integrals required to evaluate $\DFt{1}, \DFt{2}$ can be reduced to integrals of the product of $(\cosh(\nu t))^{-n},n\geq 1$ or $(\cosh(\nu t))^{-n}\sinh(\nu t),n\geq 2,$ by a function of the form \[ \cos(k\psi)\cos(j\theta),\quad \cos(k\psi)\sin(j\theta),\quad \sin(k\psi)\cos(j\theta)\quad {\mbox{or}}\quad \sin(k\psi)\sin(j\theta), \qquad k,j \in \mathbb{Z}^+.\] \item Recall that $\psi=t+\psi_0$ and $\theta=\gamma t+\theta_0$. Expanding $f(\theta)$ and $\cos(\psi)$ and taking into account that the integrals of odd functions in $\mathbb{R}$ are zero, the computation of $\DFt{i}$, $i=1,2$, reduces to the computation of integrals of the form \begin{equation} \label{I1I2} I_1(s,\nu,n)=\int_{\mathbb{R}} \frac{\cos(st)}{(\cosh(\nu t))^n} \, dt , \ n \geq 1, \quad I_2(s,\nu,n)=\int_{\mathbb{R}} \frac{\sinh(\nu t)\sin(st)}{(\cosh(\nu t))^n} \, dt, \ n \geq 2, \end{equation} for $\nu \neq 0$ (we will only be interested in $\nu>0$), where we have introduced the parameter $s = k\pm j\gamma$. \item Furthermore, one has $$\displaystyle{I_2(s,\nu,n)=\frac{s}{\nu(n-1)}I_1(s,\nu,n-1)}, \quad n\geq 2.$$ Hence, it suffices to compute $I_1(s,\nu,n)$. \item One has \[ I_1(s,\nu,1) = \frac{\pi}{\nu} \frac{1}{\cosh (s\pi/(2\nu))}, \qquad I_1(s,\nu,2) = \frac{s \pi}{\nu^2} \frac{1}{\sinh (s\pi /(2\nu))},\] and, integrating by parts twice, one obtains \[I_1(s,\nu,n)= \frac{s^2 + (n-2)^2 \nu^2}{\nu^2 (n-1)(n-2)} I_1(s,\nu,n-2), \quad n \geq 3.\] That is, \[\nu^n I_1(s,\nu,n) = \frac{\pi}{(n-1)! \cosh(s\pi/(2\nu))} P_{n-1}(s,\nu), \quad \text{ for $n$ odd,} \] and \[\nu^n I_1(s,\nu,n) = \frac{\pi}{(n-1)! \sinh(s\pi/(2\nu))} P_{n-1}(s,\nu), \quad \text{ for $n$ even,} \] where $P_j(s,\nu)$, $j \geq 0$, are the homogeneous polynomials of degree $j$ in $(s,\nu)$ that satisfy the recurrence \begin{equation} \label{recurP} P_0(s,\nu)=1, \qquad P_1(s,\nu)=s, \qquad P_j(s,\nu)=(s^2+(j-1)^2 \nu^2) P_{j-2}(s,\nu), \ j \geq 2. \end{equation} In particular, we see that the terms in the series of $\DFt{i}$, $i=1,2$, decay to zero at least as $\exp(-|s| \pi/(2\nu))$ as $\nu \rightarrow 0$. We note that, however, the functions $\DFt{i}$, $i=1,2$, may decay in a slower way, see Appendix~\ref{exempleduffing}. \end{itemize} At this point we have all the ingredients to produce an algorithm to obtain expressions for $\DFt{1}, \DFt{2}$ with any accuracy. \begin{remark} \label{remark_k1k2} The analyticity domain in the spatial coordinates $({\bf x},{\bf y})$ and the analyticity strip in time $t$ of the perturbation $\epsilon H_1({\bf x},{\bf y},t)$ can be, in general, of different size. Denote by $m({\bf x},{\bf y},\theta)$ a term of the Taylor-Fourier expansion of $H_1({\bf x},{\bf y},t)$, where $t=(\theta-\theta_0)/\gamma$. That is, $m({\bf x},{\bf y},\theta)$ is a monomial of degree $k_1 \geq 0$ in $({\bf x},{\bf y})$ with the harmonic $k_2 \in \mathbb{Z}$ in $\theta$. 
Assume that there exist $\rho_1,\rho_2>0$ such that the coefficient $m$ of this monomial satisfies \[ |m| \leq M \exp(-k_1 \rho_1 - |k_2| \rho_2),\] with $M>0$ and where $({\bf x},{\bf y})$ belongs to a compact domain containing the unperturbed real separatrices.\footnote{In particular, this assumption holds for the concrete example (\ref{perturbedsystem}) considered in this paper for $c>1$ and $d> \sqrt{2}$, see the expansions (\ref{expcs}) and (\ref{tayg}).} Then, the contribution $T(k_1,k_2)$ of the monomial to the Poincar\'e-Melnikov integral is of the form $$ T(k_1,k_2) \sim \epsilon A \nu^B \exp(-k_1 \rho_1 -|k_2| \rho_2) \exp\left(\frac{-|s| \pi}{2\nu} \right), \text{ with } s=k_1-|k_2|\gamma, A>0. $$ We note that it may happen that $T(k_1,k_2)$ dominates the behaviour of the splitting for $\nu$ small even if $k_1$ and $k_2$ (and the total order $k=k_1+|k_2|$) are large provided that $k_1- |k_2| \gamma$ is small enough. For example, consider $\rho:=\rho_1=\rho_2$ and assume that $\gamma$ verifies $|k_1 - |k_2| \gamma| > C |k|^{-\tau}$ with $\tau \geq 1$. Then $T(k_1,k_2) \sim T(k)=\epsilon A \nu^B \exp(-k \rho) \exp (-C \pi/2 \nu k^{\tau} )$ and the largest contribution is obtained for $k=k_* \sim (C \pi \tau/ 2 \rho \nu)^{1/(1+\tau)}$, which gives a term $T(k_*) = \mathcal{O}(\exp(-c/\nu^{1/(\tau+1)}))$. This agrees, provided $\tau>1$, with the exponentially small remainder obtained after an optimal number of steps of the averaging procedure for a quasi-periodic function, see details in \cite{Sim94}. When $\tau=1$ there are many terms that give the same contribution and the exponentially small (in $\nu$) upper bound in the averaging procedure gains an extra logarithmic term \cite{Sal04}. \end{remark} Summarizing, using (\ref{ccs}) and (\ref{ccscoefs}), we can rewrite (\ref{DG1exp}) as \begin{align*} \DFt{1} &= \epsilon \int_{-\infty}^{\infty} \sum_{k \geq 0} \sum_{0 \leq 2 i \leq 4+k} \sum_{j \geq 0} d_k b_{4+k,i} c_j 2^{\frac{5+k}{2}} \frac{1}{(\cosh(\nu t))^{5+k}} \sin((k+5-2i)\psi) \cos(j \theta) \, dt, \\ \DFt{2} &= -\epsilon \int_{-\infty}^{\infty} \sum_{k \geq 0} \sum_{0 \leq 2 i \leq 5+k} \sum_{j \geq 0} d_k a_{5+k,i} c_j 2^{\frac{5+k}{2}} \frac{\sinh(\nu t)}{(\cosh(\nu t))^{6+k}} \cos((k+5-2i)\psi) \cos(j \theta) \, dt. \end{align*} Taking into account that $\psi= t + \psi_0$, $\theta=\gamma t + \theta_0$, and expanding the terms $\sin(\ell (t+\psi_0)) \cos(j(\gamma t + \theta_0))$ and $\cos(\ell (t+\psi_0)) \cos(j(\gamma t + \theta_0))$, where $\ell=k+5-2i$, in the previous expression one reduces to evaluate integrals $I_1(s,\nu,n)$ and $I_2(s,\nu,n)$, given by (\ref{I1I2}), where $s=\ell \pm j \gamma$. 
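For illustration, the recurrences above can be implemented in a few lines. The following Python sketch (double precision only, whereas the evaluation of the Poincar\'e-Melnikov sums for small $\nu$ requires extended precision) evaluates $I_1(s,\nu,n)$ and $I_2(s,\nu,n)$ and checks the recurrence against direct numerical quadrature for moderate parameter values.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def I1(s, nu, n):
    """I1(s,nu,n) = int_R cos(s t)/cosh(nu t)^n dt, evaluated via the recurrence."""
    x = np.pi * s / (2.0 * nu)
    if n == 1:
        return np.pi / (nu * np.cosh(x))
    if n == 2:
        return s * np.pi / (nu**2 * np.sinh(x))
    return (s**2 + (n - 2)**2 * nu**2) / (nu**2 * (n - 1) * (n - 2)) * I1(s, nu, n - 2)

def I2(s, nu, n):
    """I2(s,nu,n) = int_R sinh(nu t) sin(s t)/cosh(nu t)^n dt = s/(nu (n-1)) I1(s,nu,n-1)."""
    return s / (nu * (n - 1)) * I1(s, nu, n - 1)

# sanity check of the recurrence against direct quadrature
s, nu, n = 0.7, 0.25, 5
direct, _ = quad(lambda t: np.cos(s * t) / np.cosh(nu * t)**n, -60.0, 60.0, limit=200)
print(I1(s, nu, n), direct)   # both values should agree to quadrature accuracy
\end{verbatim}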
Concretely, using the expansions \begin{align*} \sin(\ell(t+\psi_0)) & \cos(j(\gamma t+\theta_0))=\frac{1}{2}\left[\sin((\ell+j\gamma)t+ \ell \psi_0+j\theta_0)+\sin((\ell-j\gamma)t+\ell \psi_0-j\theta_0)\right] \\ =&\frac{1}{2}\left[\sin((\ell+j\gamma)t)\cos(\ell \psi_0+j\theta_0)+ \cos((\ell+j\gamma)t)\sin(\ell \psi_0+j\theta_0)\right] \\ &+\frac{1}{2}\left[\sin((\ell-j\gamma)t)\cos(\ell \psi_0-j\theta_0)+ \cos((\ell-j\gamma)t)\sin(\ell \psi_0-j\theta_0)\right], \\ \cos(\ell(t+\psi_0))& \cos(j(\gamma t+\theta_0))=\frac{1}{2}\left[\cos((\ell+j\gamma)t+ \ell \psi_0+j\theta_0)+\cos((\ell-j\gamma)t+\ell \psi_0-j\theta_0)\right] \\ =&\frac{1}{2}\left[\cos((\ell +j\gamma)t)\cos(\ell \psi_0+j\theta_0)- \sin((\ell +j\gamma)t)\sin(\ell \psi_0+j\theta_0)\right] \\ & +\frac{1}{2}\left[\cos((\ell -j\gamma)t)\cos(\ell \psi_0-j\theta_0)- \sin((\ell -j\gamma)t)\sin(\ell \psi_0-j\theta_0)\right], \end{align*} one obtains \begin{align} \label{exprdG1dG2_1} \nonumber \DFt{1} & = \displaystyle{\epsilon \sum_{j\geq 0} c_j \sum_{k\geq 0} 2^{\frac{3+k}{2}} d_k \!\! \sum_{0 \leq 2 i \leq 4+k} \!\!\!\!\! b_{4+k,i} \sum_{l = \pm 1} I_1(k\!+\!5\!\!-2i\!+\! l j \gamma, \nu, k\!+\!5) \sin((k\!+\!5\!-\!2i)\psi_0 \!+\! lj \theta_0), } \vspace{0.2cm} \\ \DFt{2} & = \displaystyle{-\epsilon \sum_{j\geq 0} c_j \sum_{k\geq 0} 2^{\frac{3+k}{2}} d_k \!\! \sum_{0 \leq 2 i \leq 5+k} \!\!\!\!\! a_{5+k,i} \sum_{l = \pm 1} I_2(k\!+\!5\!-\!2i\!+\! l j \gamma, \nu, k\!+\!6) \sin((k\!+\!5\!-\!2i)\psi_0 \!+\! lj \theta_0).} \end{align} \section{Comparison between the splitting and the Melnikov approximation} \label{Sec:ValMel} Note that our example fits within a non-perturbative ($\epsilon$ is considered fixed) singular (when $\nu \rightarrow 0$ the system loses hyperbolicity) splitting case as described in \cite{DelRam99}. The fact that the splitting is well-approximated by the Poincar\'e-Melnikov (vector) approximation $\epsilon M(\psi_0,\theta_0)$ must be justified in this context since, a priori, the $\mathcal{O}(\epsilon^2)$ error terms in the Poincar\'e-Melnikov approach can dominate for small enough values of $\nu$. That is, in order to justify the use of the Poincar\'e-Melnikov approach one has to estimate the relative error term by checking that the constant in $\mathcal{O}(\epsilon^2)$ decays together with $\nu$ in an exponentially small way, becoming dominated by the (exponentially small in $\nu$) term $\mathcal{O}(\epsilon)$. The necessity of estimating the relative error was observed in \cite{San82}, see also \cite{DelRam99,GelLaz01}. The rigorous justification of the validity of the Melnikov integral is an interesting but difficult problem. In this work we are not going to deal with it. Instead, in this subsection, we choose $\gamma=(\sqrt{5}-1)/2$ and we compare the {\em amplitude of the splitting} $\DFn{1}$ and $\DFn{2}$ computed directly using numerical methods (i.e. computing the invariant manifolds and the difference between them in a mesh of points) as explained in Section~\ref{sect_num} with the values obtained by using the Poincar\'e-Melnikov integral and the recurrences detailed in Section~\ref{sect_expr_PoincMel}. Let us clarify what we refer to by amplitude of the splitting. When we proceed numerically we compute the maximum of the absolute value of the distance between the invariant stable/unstable manifolds attained in a fundamental domain (i.e. on a torus parameterized by the angles $(\psi_0,\theta_0)$). 
On the other hand, when we proceed by evaluating the first order Poincar\'e-Melnikov approximation using the expressions (\ref{exprdG1dG2_1}), we only take into account those terms that, for the value of $\nu$ considered, give a relative contribution larger than $10^{-10}$ to the total sum. Adding the contributions of these harmonic terms we obtain an approximation of $\DFt{i}$. In the following, both quantities are referred as amplitude of the splitting and are denoted by $|\DFt{i}|$. For $\gamma=(\sqrt{5}-1)/2$ we have computed, for more than 1000 values of $\log_2(\nu)$, the amplitude of the splitting using both approaches. The results are displayed in Fig.~\ref{ampli2}. Note that from Remark~\ref{remark_k1k2}, since we have taken a constant type frequency $\gamma$, we expect the contribution of each term of the Taylor-Fourier expansion of the splitting function to be $\mathcal{O}(\exp(-c/\sqrt{\nu}))$. Accordingly we display $\sqrt{\nu} \log(|\DFt{i}|/\epsilon)$ as a function of $\log_2(\nu)$ in the figure. The direct numerical computations are done up to $\nu \lessapprox 2^{-7}$, since for smaller values of $\nu$ they require a large number of digits and a large computing time. As an example, for $\epsilon=10^{-3}$, $\nu=2^{-10}$ one has $|\DFt{i}| = \mathcal{O}(10^{-32})$. However, we can compute the Poincar\'e-Melnikov integral up to much smaller values of $\nu$. \begin{figure}[ht] \begin{center} \epsfig{file=./figs/amp1i2chenoepsfi2.eps,width=8cm} \end{center} \caption{We represent $\sqrt{\nu} \log(|\DFt{i}|/\epsilon)$, for $i=1$ (bottom curve) and $i=2$ (top curve), as a function of $\log_2(\nu)$. In red points we show the direct numerical computations of the amplitude of the splitting. The blue line shows the values obtained using the Poincar\'e-Melnikov integrals through the expressions derived theoretically to evaluate them. } \label{ampli2} \end{figure} Note the excellent agreement between the numerical and the theoretical methodologies. This accurate agreement supports the fact that the first order Melnikov integral asymptotically describes the splitting. In particular, this numerical check makes us confident to investigate the asymptotic behaviour of the splitting for smaller values of $\nu$ using the first order approximation of the splitting function given by the Poincar\'e-Melnikov integral. This is the goal of the next section. Finally, in Fig.~\ref{nterms} left we display the values of $\DFt{i}$, $i=1,2$, for different $\nu \in [2^{-24}, 2^{-2}]$. We have used a grid with spacing $0.005$ in $\log_2(\nu)$. In the right plot we show the number of harmonics that contribute to $\DFt{i}$, $i=1,2$. Each harmonic comes from the contribution of different $(i,j,k)$-terms in expression (\ref{exprdG1dG2_1}), where, as before, we only have taken into account those terms with relative contribution larger than $10^{-10}$. For the largest values of $\nu$ considered in the figure the dominant harmonic is computed as a combination of up to $14$ different terms. The values of $\DFt{i}$, $i=1,2$, are shown in the left panel of the same figure. We observe, in particular, that for $\nu < 2^{-12}$ the number of harmonics used to compute the splitting function reduces to one with the exception of small intervals of $\nu$ where two terms are used. 
This is related to the dominant harmonics of the splitting function: for most values of $\nu$ only one term is relevant in $\DFt{i}$, meaning that there is a dominant harmonic of the splitting function, but when a change of dominant harmonic of the splitting function takes place one has to consider two terms (meaning that two harmonics have a similar contribution in such a range of $\nu$). Note also that the length of the interval of $\nu$ where the computation of $\DFt{i}$ requires two harmonics decreases as $\nu \rightarrow 0$. \begin{figure}[ht] \begin{center} \epsfig{file=./figs/splitl.eps,width=7cm} \epsfig{file=./figs/rellevants.eps,width=7cm} \end{center} \caption{Left: We represent $\sqrt{\nu} \log(|\DFt{i}|/\epsilon)$, $i=1,2$. Right: Number of harmonic terms considered to compute $\DFt{i}$. In both panels the horizontal axis is $\log_2(\nu)$. } \label{nterms} \end{figure} \section{Asymptotic properties of the splitting behaviour} \label{sect_PoincMel} \subsection{The theoretical results} Here we state the main theoretical results, which are proven in the following subsections. Consider system (\ref{perturbedsystem}) for $0 < \nu<\bar{\nu}\ll 1$ small enough, with $\epsilon>0$, $c>1$, $d>\sqrt{2}$ and $\gamma\in \mathbb{R}\setminus\mathbb{Q}$. From (\ref{exprdG1dG2_1}), we express $\DFt{i}$ as \begin{equation} \label{dgiexp} \DFt{i} = \epsilon \sum_{m_1 \geq 0} \sum_{m_2 \in \mathbb{Z}} \hat{C}_{m_1,m_2}^{(i)} \sin(m_1 \psi_0 - m_2 \theta_0), \quad i=1,2, \end{equation} where $\hat{C}_{m_1,m_2}^{(i)} \in \mathbb{R}$. We introduce the notation $C_{m_1,m_2}^{(i)}=|\hat{C}_{m_1,m_2}^{(i)}|$ to denote the amplitudes of the Fourier modes of $M_i(\psi_0,\theta_0)=\DFt{i}(\psi_0,\theta_0)/\epsilon$. Given an approximant $m_1/m_2$ of $\gamma$, with $m_j \in \mathbb{Z}\setminus \{0\}$, $j=1,2$, let $c_{s,m_1/m_2} >0$ be the constant such that \[|s|=|m_1 - \gamma m_2| = \frac{1}{c_{s,m_1/m_2} m_1}.\] The constants $c_{s,m_1/m_2}$ are related to the arithmetic properties of $\gamma$ (see Section~\ref{Sec:Otherfreq}). We shall denote the constant $c_{s,m_1/m_2}$ by $c_{s,n}$ when $m_1/m_2$ is a best approximant of $\gamma$ (in the sense of the continued fraction expansion (CFE) of $\gamma$) and, in this case, $n$ refers to the order of the best approximant. The following result provides a quantitative description of the way the different harmonics contribute to $\DFt{i}$. \begin{theorem} \label{mainresult1} There exists a universal function $\Psi_1(L)$ such that $$ \Psi_1(L)_{\mid L= c_{s,m_1/m_2} \nu m_1^2} \approx \sqrt{c_{s,m_1/m_2} \nu} \log C_{m_1,m_2}^{(1)}, $$ asymptotically when $\nu \rightarrow 0$. The function $\Psi_1(L)$ only depends on $\gamma$ through the additive term $k/\gamma$, where $k=c+\sqrt{c^2-1}$. On the other hand, the function \[\Psi_2(L)=\Psi_1(L)- \frac{\sqrt{L} \log{L}}{m_1}\] satisfies $$ \Psi_2(L)_{\mid L= c_{s,m_1/m_2} \nu m_1^2} \approx \sqrt{c_{s,m_1/m_2} \nu} \log C_{m_1,m_2}^{(2)}, $$ asymptotically when $\nu \rightarrow 0$. \end{theorem} The proof of Theorem~\ref{mainresult1} is given in Sections~\ref{7p2} and \ref{7p3}. If, from the arithmetic properties of $\gamma$, the constants $c_{s,m_1/m_2}$ have a well-defined asymptotic behaviour and one can determine it, then one can determine the dominant term (or dominant terms) of the splitting. When $\nu \rightarrow 0$ the dominant terms of the splitting are related to best approximants of $\gamma$.
Our numerical results and theoretical discussions support the following conjecture. \begin{conjecture}\label{mainresult} Let $(\nu_0,\nu_1)$, $\nu_0,\nu_1<\bar{\nu} \ll 1$, be an interval such that for all $\nu \in (\nu_0,\nu_1)$ the dominant harmonic in $\DFt{i}$ is the one associated to the best approximant $m_1/m_2$ of $\gamma$. Then, for $\nu \in (\nu_0,\nu_1)$, \[ |\DFt{i}| \approx \epsilon \exp\left( \frac{\Psi_i(L)}{\sqrt{c_{s,m_1/m_2} \nu}} \right), \qquad L = c_{s,m_1/m_2} \nu m_1^2, \qquad i=1,2, \] where $\Psi_2(L)=\Psi_1(L)+\mathcal{O}(\sqrt{c_{s,m_1/m_2}\nu})$, $\Psi_1(L) = \Psi_M + O(|L-L_M|^2)$, being $\Psi_M = \Psi_1(L_M)= \text{\em max}(\Psi_1(L)) \approx -4.860298$ and $L_M \approx 0.26236$. \end{conjecture} \begin{remark} Notice that $c_{s,m_1/m_2}$ depends on $\nu$ through the arithmetic properties of $\gamma$, as was explained in Remark~\ref{remark_k1k2}. When $\gamma$ is a quadratic irrational then the constants $c_{s,m_1/m_2}$ remain bounded as $\nu \to 0$. On the other hand, when the quotients of the CFE of $\gamma$ are unbounded then the maxima of the constants $c_{s,m_1/m_2}$ grow when $\nu \to 0$. Actually the exponents in the exponentially small part of $|\DFt{i}|$, $i=1,2$, depend on $\nu$ through the behaviour of $\nu c_{s,m_1/m_2}$. See Section~\ref{Sec:Otherfreq} for some examples. \end{remark} Explicit expressions of the functions $\Psi_i(L)$ are derived in the following subsections. Note that $\Psi_1$ does not depend on the arithmetic properties of $\gamma$ and $\Psi_2$ depends only on $m_1$. The dependence is through $L$ which depends on the constant $c_{s,m_1/m_2}$ and the approximant $m_1/m_2$. This allows us to provide a methodology to study the asymptotic behaviour for any frequency $\gamma$. Note that the dominant harmonic $(m_1,m_2)$ changes when $\nu \rightarrow 0$ and so does the constant $c_{s,m_1/m_2}$. This allows us to study the values of $\nu$ for which a change of the dominant harmonic in $\DFt{i}$ is expected. Conjecture~\ref{mainresult} asserts that the dominant term of the series expansion of the Melnikov function gives the correct exponent of the splitting behaviour, that is, that \[ \sqrt{\nu} \log\left(\frac{|\DFt{i}|}{\epsilon} \right) \approx \sqrt{\nu} \log C_{m_1,m_2}^{(i)} \approx \frac{1}{\sqrt{c_{s,m_1/m_2}}} \Psi_i(L), \qquad L = c_{s,m_1/m_2} \nu m_1^2,\] where the second approximation comes from Theorem~\ref{mainresult1}. See Section~\ref{Sec:non-dominant} for a more detailed discussion. Here we just want to emphasize that this dominant term, which is related to the approximants of $\gamma$, has a larger order of magnitude than the remaining terms of the series. For example, for a constant type $\gamma$, if the dominant harmonic corresponds to the linear combination $s=m_1-m_2\gamma$, then one expects $m_1,m_2 \sim 1/\sqrt{\nu}$ (see Remark~\ref{remark_k1k2}). This gives a term of order $\exp(-c/\sqrt{\nu})$ much larger than the order $\exp(-c/\nu)$ expected for the terms with other combinations. We stress that this is a purely quasi-periodic effect related to the existence of two frequencies in the system. Indeed, if one considers a one frequency forcing of the 1-dimensional separatrix dynamics, the effect of any of the terms of the Melnikov series can be of the same (or similar) order than the dominant one. Hence all terms can contribute to change the dominant exponent, see Appendix~\ref{exempleduffing}. 
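To make the quantities entering Theorem~\ref{mainresult1} and Conjecture~\ref{mainresult} concrete, the following Python sketch computes, for the best approximants $m_1/m_2$ of the golden frequency, the constants $c_{s,m_1/m_2}=1/(s\,m_1)$ and the variable $L=c_{s,m_1/m_2}\,\nu\, m_1^2$ for a given value of $\nu$. The chosen value of $\nu$ is purely illustrative, and reading off the approximant whose $L$ is closest to $L_M\approx 0.26236$ is only a rough heuristic for the dominant harmonic suggested by the conjecture, not a statement proved in the text.
\begin{verbatim}
from math import sqrt

gamma = (sqrt(5.0) - 1.0) / 2.0
L_M = 0.26236                       # location of the maximum of Psi_1 (see the conjecture)

# best approximants of the golden frequency: consecutive Fibonacci pairs (m1, m2)
approximants, (m1, m2) = [], (1, 1)
while m1 < 5000:
    approximants.append((m1, m2))
    m1, m2 = m2, m1 + m2

nu = 2.0**-12                       # illustrative value of the parameter
for m1, m2 in approximants:
    s = abs(m1 - m2 * gamma)
    c_s = 1.0 / (s * m1)            # defined by s = 1/(c_s m1)
    L = c_s * nu * m1**2            # variable entering Psi_1 and Psi_2
    print(f"m1/m2 = {m1:4d}/{m2:5d}   c_s = {c_s:.4f}   L = {L:.4f}")
\end{verbatim}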
\subsection{The amplitude of the harmonics associated to approximants of $\gamma$} \label{7p2} We consider first $M_1(\psi_0,\theta_0)=\DFt{1}/\epsilon$. Given $m_1, \ m_2 \in \mathbb{Z}$ we look for the expression of $C_{m_1,m_2}^{(1)}=|\hat{C}_{m_1,m_2}^{(1)}|$ in (\ref{dgiexp}). In (\ref{exprdG1dG2_1}) we choose $l=-1$ to get the most relevant terms; then one has $k=m_1 + 2 i -5$ and $j=m_2$, so that \begin{align*} C_{m_1,m_2}^{(1)} & = c_{m_2} \sum_{i \geq 0} 2^{(m_1+2i-2)/2} d_{m_1+2i-5} \, b_{m_1+2i-1,i} \, I_1(s,\nu,k+5) \\ & \approx c_{m_2} \sum_{i \geq 0} 2^{(m_1+2i-2)/2} d_{m_1+2i-5} \, b_{m_1+2i-1,i} \, \frac{2 \pi}{\nu} \frac{(s/\nu)^{m_1+2i-1}}{(m_1+2i-1)!} \, \Pi_{m_1+2i}(\nu/s) \exp\left(-\frac{\pi s}{2 \nu}\right), \end{align*} where $s=|m_1-m_2 \gamma|$ and $\Pi_{r}(\nu/s)=s^{1-r} P_{r-1}(s,\nu)$, $r\geq 1$. Note that, to lighten the notation, we write $s$ for $|s|$ in this section; we hope this causes no confusion. From (\ref{recurP}) one has the recurrence \begin{equation} \label{recurPi} \Pi_1=\Pi_2=1, \qquad \Pi_r(w)=(1+(r-2)^2 w^2) \Pi_{r-2}(w), \ r \geq 3, \end{equation} where $w = \nu/s$. In the expression for $C_{m_1,m_2}^{(1)}$ above (and the one for $C_{m_1,m_2}^{(2)}$ at the end of this section) we have used the approximations $1/\cosh(\pi s/ (2 \nu)),\ 1/\sinh(\pi s/ (2 \nu)) \sim 2 \exp(-\pi s /(2\nu))$, valid when $w^{-1}=s/\nu$ is large enough, the relative error being $\mathcal{O}( \exp(-\pi s /\nu ))$. From (\ref{expcs}), (\ref{expds}) and (\ref{ccs}) it follows that $$ c_{m_2} \! =\! \frac{2}{ \rho_c^{m_2} \sqrt{c^2 -1}}, \ d_{m_1+2i-5} \! = \! \frac{m_1+2i}{d^{m_1+2i-4}}, \ b_{m_1+2i-1,i} \! = \! \frac{m_1}{2^{m_1+2i-1}(m_1+2i)} \binom{m_1+2i}{i}, $$ where $\rho_c = c+\sqrt{c^2-1}$. Then, \begin{equation} \label{dG1} C_{m_1,m_2}^{(1)} \approx \frac{2}{ \rho_c^{m_2} \sqrt{c^2 -1}} \frac{2 \pi}{\nu} d^3 \exp\left(-\frac{\pi s}{2\nu}\right) m_1 2^{-m_1/2} \left(\frac{s}{\nu d}\right)^{m_1-1} \frac{d}{m_1!} \, S_A, \end{equation} where $S_A$ denotes the sum \begin{equation} \label{sumAi} S_A= \sum_{i \geq 0} A_i, \qquad A_i=A_i(m_1,\nu,s)= \frac{m_1+2i}{2^i (m_1+i)! i!} \left(\frac{s}{\nu d}\right)^{2i} \frac{m_1!}{d} \, \Pi_{m_1+2i}( \nu/s ). \end{equation} Similarly, for $\DFt{2}$ one obtains, given $m_1,m_2$, that \begin{equation} \label{dG2} C_{m_1,m_2}^{(2)} \approx \frac{2}{\rho_c^{m_2} \sqrt{c^2 -1}} \frac{2 \pi s}{\nu^2} d^3 \exp\left(-\frac{\pi s}{2\nu}\right) 2^{-m_1/2} \left(\frac{s}{\nu d}\right)^{m_1-1} \frac{d}{m_1!} \, S_A. \end{equation} According to (\ref{dG1})-(\ref{dG2}), the contribution of the integral to $\DFt{i}$, $i=1,2$, related to the approximant $m_1/m_2$ is $\mathcal{O}\left(\exp(-\frac{\pi s}{2\nu})\right)$, where the contribution of finite negative powers of $\nu$ has been neglected. Hence the harmonics associated to the smallest values of $s$ play the most important role. These are expected to be related (asymptotically as $\nu \rightarrow 0$) with the best approximants of $\gamma$. From the expressions of $C_{m_1,m_2}^{(i)}$, $i=1,2$, we have the following result. \begin{proposicio} \label{propquotient} Given $m_1\geq 0$ and $m_2 \in \mathbb{Z}$ we have \[ C_{m_1,m_2}^{(1)} = L \ C_{m_1,m_2}^{(2)} + \mathcal{O}(\exp(-\pi s / \nu)),\] where $L=\nu m_1^2 c_{s,m_1/m_2}$ and $c_{s,m_1/m_2}>0$ is such that $s=|m_1-\gamma m_2|= \frac{1}{c_{s,m_1/m_2} m_1}$.
\end{proposicio} \begin{proof} From (\ref{dG1}) and (\ref{dG2}) one has $C_{m_1,m_2}^{(1)} / C_{m_1,m_2}^{(2)} \approx m_1 \nu/s = m_1^2 c_{s,m_1/m_2} \nu = L$ for all $m_1 \geq 0$ and $m_2 \in \mathbb{Z}$. \end{proof} This relation explains the difference between $\DF{1}$ and $\DF{2}$ in Fig.~\ref{ampli2}, see also Fig.~\ref{spf1f2} first row. Recall that it was numerically observed that the dominant harmonics of both splittings coincide for large ranges of $\log_2(\nu)$ when $\nu \to 0$ (see Table~\ref{taulanubif}). \subsection{The universal function $\Psi_1(L)$ associated to an approximant $m_1/m_2$ of $\gamma$.} \label{7p3} In this section we consider an approximant $m_1/m_2 \approx \gamma$ (not necessarily a best approximant of $\gamma$). For concreteness we will focus on $\DFt{1}$; the expression of $\Psi_2(L)$ will follow directly from Proposition~\ref{propquotient}. Given $m_1/m_2 \approx \gamma$, we express $C_{m_1,m_2}^{(1)}$ given by (\ref{dG1}) as $$ C_{m_1,m_2}^{(1)} =\mathcal{P}_f \mathcal{P}_F S_A, \qquad \text{and } S_A=\mathcal{P}_M \mathcal{P}_Q, $$ where \[\mathcal{P}_f = \frac{4 \pi d^4 m_1}{s \sqrt{c^2 -1}}, \qquad \mathcal{P}_F = \frac{1}{\rho_c^{m_2} 2^{m_1/2} m_1!} \exp\left( - \frac{\pi s}{2 \nu} \right) \left( \frac{s}{\nu d}\right)^{m_1},\] and $\mathcal{P}_M$ denotes the dominant term of $S_A$ (i.e. the term of $S_A$ which gives the maximum contribution to the sum) and $\mathcal{P}_Q=S_A/\mathcal{P}_M$. To get intuition about how to proceed we perform some numerical investigations considering, temporarily, $\gamma=(\sqrt{5}-1)/2$. Fig.~\ref{cdh} left shows the behaviour of $\DFt{1}$. We see different changes of dominant harmonic as $\nu \rightarrow 0$, which are marked with points. We represent the behaviour of $S_A=\mathcal{P}_M \mathcal{P}_Q$ for $\gamma= (\sqrt{5}-1)/2$ in Fig.~\ref{cdh} right. The product of these two terms plays the role of a factor which ranges in a finite interval away from zero. In particular, this means that the change of harmonic should be detected in the prefactor $\mathcal{P}_f \mathcal{P}_F$. The important term is $\mathcal{P}_F$, since $\mathcal{P}_f$ does not depend on $\nu$ explicitly and, for the $m_1/m_2$ approximant giving the maximum contribution to the splitting function for a fixed $\nu$, it behaves as a power of $\nu$, hence it is negligible in front of the exponentially small term in $\nu$ of $\mathcal{P}_F$. Hence, below, we first look for the changes using just $\mathcal{P}_F$; later we discuss the contribution of the sum $S_A$. The factor $\mathcal{P}_F$ depends on $m_1$, $m_2$ and $\nu$. We shall check later that $\mathcal{P}_Q$ gives no relevant contribution to $C_{m_1,m_2}^{(1)}$. \begin{figure}[ht] \begin{center} \begin{tabular}{cc} \hspace{-6mm}\epsfig{file=./figs/splitescal.eps,width=9cm} & \hspace{-9mm}\epsfig{file=./figs/sumaescal.eps,width=9cm} \end{tabular} \end{center} \vspace{-8mm} \caption{We consider $\gamma=(\sqrt{5}-1)/2$ and $\epsilon=10^{-3}$. In both plots the horizontal variable is $\log_2(\nu)$. Left: $\sqrt{\nu} \log( C_{m_1,m_2}^{(1)}/\epsilon)$; the points correspond to the changes of dominant harmonic. The rightmost change corresponds to $m_1=55\to m_1=89$, while the leftmost to $m_1=196418\to m_1=317811$. Right: $\sqrt{\nu} \log(\mathcal{P}_M\mathcal{P}_Q)$.} \label{cdh} \end{figure} In what follows, given $m_1/m_2 \approx \gamma$, we study the contribution of the different factors to $C_{m_1,m_2}^{(1)}$.
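
Before analysing the factors separately, we note that the amplitudes themselves can be evaluated numerically from (\ref{dG1}), (\ref{sumAi}) and (\ref{recurPi}). A possible sketch (our own Python code and function name, shown only for illustration; it is not the code used to produce the figures) works with logarithms, since the exponential factor and the factorials under/overflow in double precision:
\begin{verbatim}
from math import log, exp, sqrt, pi, lgamma

def log_C1(m1, m2, gamma, nu, c=5.0, d=7.0, tol=1e-12):
    # log of C^{(1)}_{m1,m2} following (dG1)-(sumAi), kept in log form
    s = abs(m1 - gamma * m2)
    w = nu / s
    rho_c = c + sqrt(c * c - 1.0)
    # log Pi_{m1}(w) from the recurrence Pi_r = (1 + (r-2)^2 w^2) Pi_{r-2}
    log_pi = sum(log(1.0 + j * j * w * w) for j in range(m1 - 2, 0, -2))
    # terms A_i of S_A, accumulated by log-sum-exp
    log_terms, i, log_max = [], 0, -float("inf")
    while True:
        k = m1 + 2 * i
        log_a = (log(k) - i * log(2.0) - lgamma(m1 + i + 1.0) - lgamma(i + 1.0)
                 + 2.0 * i * log(s / (nu * d)) + lgamma(m1 + 1.0)
                 - log(d) + log_pi)
        log_terms.append(log_a)
        log_max = max(log_max, log_a)
        if i > 1 and log_a < log_max + log(tol):   # past the peak, negligible
            break
        log_pi += log(1.0 + k * k * w * w)         # Pi_{m1+2(i+1)} from Pi_{m1+2i}
        i += 1
    log_SA = log_max + log(sum(exp(t - log_max) for t in log_terms))
    # prefactor of (dG1)
    return (log(2.0) - m2 * log(rho_c) - 0.5 * log(c * c - 1.0)
            + log(2.0 * pi / nu) + 3.0 * log(d) - pi * s / (2.0 * nu)
            + log(m1) - 0.5 * m1 * log(2.0) + (m1 - 1.0) * log(s / (nu * d))
            + log(d) - lgamma(m1 + 1.0) + log_SA)
\end{verbatim}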
{\bf The contribution of $\mathcal{P}_F$.} We write $\mathcal{P}_F=\mathcal{P}_F(m_1)$ to explicitly note its dependence on $m_1$. Using Stirling's formula we approximate $\log m_1! \approx m_1 (\log m_1-1)$ (i.e. we ignore the term $\sqrt{2\pi m_1}$), one has \begin{align} \label{cPF} \log(\mathcal{P}_F(m_1)) \! \approx \! & - m_1 \left( \frac{\log(\rho_c)}{\gamma} + \frac{\log 2}{2} + (\log m_1 -1) + \frac{\pi}{2 c_{s,m_1/m_2} \nu m_1^2} +\log(dc_{s,m_1/m_2} \nu m_1) \right) \nonumber \\ = & - m_1(K+\log(L) +B/L), \end{align} where \[ K=\log(d)+\log(2)/2+\log(\rho_c)/\gamma-1,\quad L=c_{s,m_1/m_2} \nu m_1^2, \quad B=\pi/2.\] {\bf The contribution of $\mathcal{P}_M$.} To take into account the effect of the factor $\mathcal{P}_M$ we need to identify the dominant term of $S_A$. From (\ref{sumAi}) and (\ref{recurPi}) it follows that the quotient of two consecutive terms in the sum $S_A$ is \begin{equation} \label{quotientA} \frac{A_i}{A_{i-1}}= \frac{m_1+2i}{2(m_1+2i-2)(m_1+i)i} \left( \frac{s}{\nu d} \right)^2 \left(1+(m_1+2i-2)^2 (\nu/s)^2 \right). \end{equation} We look for the index $i$ corresponding to the term with maximum value of the sum $S_A$ for a fixed value of $\nu$. It is useful to introduce $I=i/m_1$ and look for the index $I$ instead. From (\ref{quotientA}), one gets \begin{equation} \label{quotientAA} \frac{A_i}{A_{i-1}}=\frac{1}{2d^2} \left( \frac{1}{I+I^2} \left( \frac{s}{m_1 \nu} \right)^2 + \frac{1}{I+I^2} + 4 \right) \left(1 + \mathcal{O}(m_1^{-1}) \right). \end{equation} From this quotient one deduces that the sequence $\{A_i\}_i$ is increasing for small values of $i$ and it becomes decreasing for large values of $i$ provided $d>\sqrt{2}$ (recall that we choose $d=7$ in the concrete example). The maximum value is achieved when $A_i \approx A_{i-1}$. Then, ignoring the terms of relative value $\mathcal{O}(m_1^{-1})$ one gets the following equation \begin{equation} \label{eqI} (2d^2-4) (I^2+I)=1+\left( \frac{s}{\nu m_1} \right)^2, \end{equation} from which one can determine the index $i=m_1 I$ of the maximum term of $S_A$. Hence, taking into account the expression (\ref{sumAi}), the factor $\mathcal{P}_M$ is \begin{equation} \label{cPMdetall} \mathcal{P}_M = \frac{m_1! k}{d (\sqrt{2} w d)^{2 m_1 I} (m_1(1+I))! (m_1 I)!} \, \Pi_k(w), \end{equation} where $k=m_1(1+2I)$ and $w=\nu/s$. From the recurrence relation (\ref{recurPi}) one gets \[ \log(\Pi_k(w))=\!\!\!\!\sum_{j=k-2(-2)0}\!\!\!\!\log(1+j^2w^2),\] where the index $j$ runs with step $-2$ (and finishes at $j=1$ whenever $k$ is odd). Approximating the previous sum by an integral one has \begin{align} \log(\Pi_k(w)) \approx & \frac{1}{2} \int_0^{k-1}\hspace{-6mm}\log(1+j^2w^2) \, dj= \frac{1}{4w}\int_0^{(k-1)^2w^2}\hspace{-12mm} \log(1+z)\frac{dz}{\sqrt{z}} \nonumber\\ = & \frac{1}{2w}\left[\sqrt{z}(\log(1\!+\!z)\!-\!2)\!+\!2\arctan(\sqrt{z}) \right]_0^{(k-1)^2w^2}\nonumber \\ = & \frac{k-1}{2}\left(\log(1\!+\!(k\!-\!1)^2w^2)\!-\!2\right)+\frac{\arctan((k\!-\!1)w)}{w}. \end{align} {\bf The definition of the universal function $\Psi_1(L)$.} We define now the universal function $\Psi_1(L)$ from the previous contributions of $\mathcal{P}_F$ and $\mathcal{P}_M$. We will check below that the contribution of $\mathcal{P}_Q$ is not important in the sense that $\DFt{1} \approx \mathcal{P}_F \mathcal{P}_M$ is accurate enough to detect the changes of dominant harmonics. First, we recall from (\ref{cPF}) that $\log(\mathcal{P}_F)/m_1 \approx -(K+\log(L)+B/L)$ with $K$ and $B$ independent of $L$. 
Let us denote by $\Psi_{1,1}(L)=-(K+\log(L)+B/L)$, and note that it depends on $\gamma$ only slightly, through $K$. Next, we obtain an approximation of $\log(\mathcal{P}_M)/m_1$ that only depends on $L=c_{s,m_1/m_2} \nu m_1^2$. Equation (\ref{eqI}) can be rewritten as $(2d^2-4)(I^2+I)-1=1/L^2$, so that given $L$ we can obtain the index $I=I_*$ that determines $\mathcal{P}_M$. From (\ref{cPMdetall}), after skipping some constant terms and higher order terms in $m_1^{-1}$, one gets \begin{align} \label{cPM} \log(\mathcal{P}_M)/m_1 \approx & -2I_*\log(L d)-(1+I_*)\log(1+I_*)-I_*\log(I_*)+I_*(2-\log(2)) \\ & + (1+2I_*)(\log(1+((1+2I_*)L)^2)-2)/2+\arctan((1+2I_*)L)/L, \nonumber \end{align} where the terms of the first line come from the prefactor of $\Pi_{k}(w)$ in (\ref{cPMdetall}) and the terms of the second one are related to $\Pi_{k}(w)$ after taking logarithms. Let us denote by $\Psi_{1,2}(L)$ the right hand side of (\ref{cPM}). Now we define \begin{equation}\label{funPsi} \Psi_1(L) := \sqrt{L}\left(\Psi_{1,1}(L) + \Psi_{1,2}(L)\right), \end{equation} which depends on the parameters $c$ and $d$ (and slightly on $\gamma$ through $K$) but does not depend explicitly on the approximant $m_1/m_2$ of $\gamma$. Since $\sqrt{L}=m_1\sqrt{c_{s,m_1/m_2}\nu}$, the universal function $\Psi_1(L)$ provides an approximation of $\sqrt{c_{s,m_1/m_2} \nu} \log(|\DFt{1}|/\epsilon)$ as a function of the parameter $L= c_{s,m_1/m_2} \nu m_1^2$. In Fig.~\ref{psi_ncs} we show the function $\Psi_1(L)$ as a function of $L$. We can see that it has the properties described in Conjecture~\ref{mainresult}. \begin{figure} \begin{center} \epsfig{file=./figs/psi_ncs.eps,width=7cm} \caption{The universal function $\Psi_1(L)$ as a function of $L$.} \label{psi_ncs} \end{center} \end{figure} {\bf The factor $\mathcal{P}_Q$ plays no role.} Here we check that $\mathcal{P}_Q$ becomes irrelevant as $\nu \rightarrow 0$ or, equivalently, as $m_1 \rightarrow \infty$. Ignoring the terms $\mathcal{O}(m_1^{-1})$ in (\ref{quotientAA}) we obtain \[\frac{A_i}{A_{i-1}}=\frac{1}{2d^2(I+I^2)}((1+2I)^2+L^{-2}). \] This quotient depends on $L$ and $I$. For a fixed value of $L$ the quotient $A_i/A_{i-1}$ is a monotonically decreasing function of $I$ (and hence of $i$). Concretely one has \[ \frac{\partial(A_i/A_{i-1})}{\partial I}=-\frac{(1+2I)(1+L^{-2})} {2d^2(I+I^2)^2}. \] Recall that $I_*$ is the value of $I$ giving the quotient $A_i/A_{i-1}$ closest to one (i.e. $I_*$ corresponds to the maximum term of the sum $S_{A}$ and, by definition, it determines the factor $\mathcal{P}_M$). For $\delta>0$ fixed, let $I_{\pm}$ be the values of $I$ for which one has $A_i/A_{i-1} = 1 \pm \delta$. One has \[I_{\pm} = I_* \pm \partial{I_\pm}/\partial{\delta}|_{\delta=0} \, \delta + \mathcal{O}(\delta^2),\] and one checks that \[ \partial{I_{\pm}}/\partial{\delta}|_{\delta=0} = \frac{2 d^2 (I_* + I_*^2)}{(2d^2-4)(1+2I_*)} = \mathcal{O}(1), \] meaning that $|I_+ - I_-|= \mathcal{O}(\delta)$. Since $i_\pm= m_1 I_\pm$, it follows that $|i_+ -i_-|= \mathcal{O}(m_1 \delta)$. We split the sum $S_{A} = \sum_{i \geq 0} A_i$ into three (say left/center/right) parts \[S_{A}= \sum_{i=0}^{i_-} A_i + \sum_{i=i_-}^{i_+} A_i + \sum_{i=i_+}^{\infty} A_i =S_l+S_c+S_r.\] We recall that $S_{A}= \mathcal{P}_M \mathcal{P}_Q$, where $\mathcal{P}_M = A_{i_*}$ with $i_*=m_1 I_*$, hence \[\mathcal{P}_Q = \frac{1}{A_{i_*}} \left( S_l + S_c + S_r \right) .\] From $|i_+ -i_-|= \mathcal{O}(m_1 \delta)$, it follows that $S_c= \mathcal{O}(m_1 \delta) A_{i_*}$.
On the other hand, the terms in $S_l$ decay as $A_i \leq (1+\delta) A_{i-1}$. Hence, $S_l \leq A_{i_*} \sum_{i=0}^{i_-} (1 + \delta)^{-i} = \mathcal{O}( A_{i_*}/ \delta) $. Similarly, for $S_r$ one has $A_i \leq (1- \delta) A_{i-1}$, hence $S_r = \mathcal{O}(A_{i_*}/\delta)$. As a conclusion, one gets \[ \mathcal{P}_Q = \mathcal{O}(m_1 \delta) + \mathcal{O}(\delta^{-1}). \] Taking, for example, $\delta = m_1^{-1/2}$ one gets $\mathcal{P}_Q = \mathcal{O}(m_1^{1/2})$, meaning that the factor $\mathcal{P}_Q$ can be ignored compared with the exponentially small terms since its logarithm divided by $m_1$ is small compared with the other terms in $\Psi_1$. {\bf The analogous function $\Psi_2(L)$.} For a fixed $m_1/m_2 \in \mathbb{Q}$ we define the function $\Psi_2(L)$ as \begin{equation} \label{G1G2rel} \Psi_2(L) = \Psi_1(L)- \frac{\sqrt{L}}{m_1} \log(L). \end{equation} From Proposition~\ref{propquotient} one has that \[\Psi_2(L) \approx \sqrt{c_{s,m_1/m_2} \nu} \log( C_{m_1,m_2}^{(2)} ).\] Assume that we are interested in the functions $\Psi_1(L)$ and $\Psi_2(L)$ for values of $L \in [L_-,L_+]$ around their maxima. Then, the relation (\ref{G1G2rel}) shows that $\Psi_2(L)$ tends to $\Psi_1(L)$ as $\nu \rightarrow 0$, uniformly in $[L_-,L_+]$. \subsection{The changes in the dominant harmonic of the splitting function} \label{sectchanges} Several properties can be analysed from the derived universal functions $\Psi_1$ and $\Psi_2$. First we look for the changes of the dominant harmonic in $\DFt{1}$ as $\nu$ varies. We expect that for most of the values of $\nu$ there is one dominant harmonic. However, for some values of $\nu$ different harmonics can be of the same order of magnitude. Our aim is to determine, for a given $\nu$ small enough, which is (are) the dominant harmonic(s). Some general comments are in order. As already said and according to (\ref{dG1}) (resp. (\ref{dG2})), for $\nu$ small enough one expects the dominant harmonic(s) of $\DFt{1}$ (resp. $\DFt{2}$) to be related with the best approximants of $\gamma$. That is, to get the dominant harmonic it is enough to compare the harmonics associated to best approximants $m_1/m_2$ of $\gamma$. Below we will restrict to best approximants and we will compare the functions $\Psi_1$ associated to them. However, not all the harmonics associated to best approximants become a dominant harmonic. Several examples will be given in Section~\ref{Sec:Otherfreq}. Finally, we note that, assuming that the amplitudes of the harmonics of the Poincar\'e-Melnikov integral decay in an exponential way as in Remark~\ref{remark_k1k2}, at least one of every two consecutive best approximants of $\gamma$ becomes the dominant harmonic of $\DFt{i}$, $i=1,2$, for a suitable range of $\nu$. In Appendix~\ref{conseq_best} we consider that problem assuming two small consecutive quotients between two large quotients of the CFE of $\gamma$. For a more general discussion see \cite{FonSimVie-2}. To determine which of the best approximants is associated to the dominant harmonic requires to know the constants $c_{s,m_1/m_2}$ to be able to compare the corresponding functions $\Psi_1$. If moreover one wants to look for the asymptotic behaviour of the changes of dominant harmonic as $\nu \rightarrow 0$ one needs an asymptotic description of the values of $c_{s,m_1/m_2}$. Next subsections deal with this question. \subsubsection{The golden mean frequency.} \label{goldenfreq} For simplicity, first we consider $\gamma$ to be a quadratic irrational so that its CFE is periodic. 
We shall prove in Lemma~\ref{lema_periodic_csn} that, in this case, the values of the constants $c_{s,m_1/m_2}$ associated to the best approximants of $\gamma$ are (asymptotically, as the order of the best approximant tends to infinity) also periodic. Moreover, for concreteness, we focus on $\gamma = (\sqrt{5}-1)/2$ but other quadratic irrational numbers can be similarly handled. As we shall discuss in Section~\ref{Sec:periodicity_csn}, for $\gamma=(\sqrt{5}-1)/2$, one has $c_{s,n} \rightarrow \sqrt{5}(1+\gamma) =3+\gamma$ when considering best approximants of $\gamma$ and as the order of the best approximant tends to infinity. The best approximants are quotients of consecutive Fibonacci numbers. It turns out that all best approximants are visible as a dominant harmonic in a corresponding interval of $\nu$. We look for the sequence of values $\nu_j$ of $\nu$ for which the changes of dominant harmonic take place, see Fig.~\ref{cdh} left. Assume that the $j$-th best approximant of $\gamma$ dominates at a specific value of $\nu=\nu_1^*$. We first use the approximation $\DFt{1} \approx \epsilon \mathcal{P}_F(m_1)$ where $m_1$ is the numerator of the $j$-th best approximant. Assume that for $\nu=\nu_0^* < \nu_1^*$ the dominant harmonic corresponds to the $(j+1)$-th best approximant of $\gamma$. Then there is a value $\nu=\nu_j$, corresponding to the change $m_1\to(1+\gamma)m_1$ of dominant harmonic, for which $\log(\mathcal{P}_F(m_1)) = \log(\mathcal{P}_F((1+\gamma)m_1))$. This condition leads to the following equation for $L$ \[ L=\frac{\pi\gamma/(2(1+\gamma))}{2(1+\gamma)\log(1+\gamma)+K\gamma+\gamma \log(L)}.\] This equation, which is independent of $m_1$, can be solved by numerical iteration and one obtains $L=L_l\approx 0.1690224$ for the values $c=5, d=7$ in our perturbation. This implies that asymptotically $\nu_{j+1} \approx \gamma^2 \nu_j$. Indeed, from $m_2 \approx m_1 (1+\gamma)$ it follows that $L= \nu_j m_1^2 c_{s,m_1/m_2} \approx \nu_{j+1} m_1^2 (1+\gamma)^2 c_{s,m_1/m_2}$ and then $\nu_{j+1} \approx \gamma^2 \nu_j$. Accordingly, this agrees with Fig.~\ref{cdh} left where the values $\log_2(\nu_j)$ tend to be, as $\nu \rightarrow 0$, separated by $2 \log_2(\gamma) \approx -1.38848$. More concretely, let $F_j$ denote the Fibonacci sequence starting with $F_1=1$, $F_2=2$, $F_3=3$, \dots. We can compute the values $\nu=\nu_j$ where $\nu_j$ corresponds to the change $m_1=F_j \to m_1=F_{j+1}$. With this notation the blue points in Fig.~\ref{cdh} left correspond to the values of $\log_2 (\nu_j)$ for $9 \leq j \leq 26$. Moreover, one has $\nu_j \sim \gamma^{2j} \hat{K}$, for some $\hat{K}$. In Fig.~\ref{limitK} we represent $\nu_j \gamma^{-2j}$ as a function of $j$. We see that, for $j$ large enough, it tends to the constant $\hat{K} \approx 0.0850$. \begin{figure}[ht] \begin{center} \epsfig{file=./figs/limitdelaK.eps,width=8cm} \end{center} \caption{We represent $\nu_j \gamma^{-2j}$, where $\nu_j$ are the values where a change of dominant harmonic has been numerically detected, as a function of the index $j$ of the Fibonacci sequence $F_j$ (see text for details). } \label{limitK} \end{figure} Let us describe a more general methodology to look for the changes of dominant harmonic which takes into account the corrections due to the factor $\mathcal{P}_M$. Since for $\gamma=(\sqrt{5}-1)/2$ one has $c_{s,m_1/m_2}=c_{s,n} \to 3+\gamma \approx 3.618034$ we introduce $\tilde{L}=L/c_{s,m_1/m_2}$ and we consider $\hat{\Psi}_1(\tilde L):=\Psi_1(\tilde{L})/\sqrt{c_{s,m_1/m_2}}$. 
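
The functions above are straightforward to evaluate numerically. The following minimal sketch (our own Python code and helper names, shown only to illustrate the procedure; it is not the code behind the figures) evaluates $\Psi_1$, locates the maximum of $\hat{\Psi}_1$ by a crude grid search and obtains the asymptotic change-of-harmonic value $\tilde{L}_l$ by bisection.
\begin{verbatim}
from math import sqrt, log, atan, pi

c, d = 5.0, 7.0
gamma = (sqrt(5.0) - 1.0) / 2.0
rho_c = c + sqrt(c * c - 1.0)
K = log(d) + 0.5 * log(2.0) + log(rho_c) / gamma - 1.0    # as in (cPF)
B = pi / 2.0

def Psi1(L):
    # index I_* of the dominant term of S_A, from (2d^2-4)(I^2+I) = 1 + 1/L^2
    I = 0.5 * (-1.0 + sqrt(1.0 + 4.0 * (1.0 + 1.0 / L**2) / (2.0 * d * d - 4.0)))
    psi11 = -(K + log(L) + B / L)
    psi12 = (-2.0 * I * log(L * d) - (1.0 + I) * log(1.0 + I) - I * log(I)
             + I * (2.0 - log(2.0))
             + (1.0 + 2.0 * I) * (log(1.0 + ((1.0 + 2.0 * I) * L)**2) - 2.0) / 2.0
             + atan((1.0 + 2.0 * I) * L) / L)
    return sqrt(L) * (psi11 + psi12)

c_inf = 3.0 + gamma                   # limit of c_{s,n} for the golden mean

def hatPsi1(Lt):                      # normalized function, with L = c_inf * Ltilde
    return Psi1(c_inf * Lt) / sqrt(c_inf)

# maximum of hatPsi1: about -2.5552 attained at Ltilde ~ 0.0725
grid = [10.0 ** (-3.0 + 0.0001 * k) for k in range(30001)]
Lt_M = max(grid, key=hatPsi1)
print(Lt_M, hatPsi1(Lt_M))

# change of dominant harmonic: hatPsi1(Lt) = hatPsi1(Lt*(1+gamma)^2), by bisection
def diff(Lt):
    return hatPsi1(Lt) - hatPsi1(Lt * (1.0 + gamma) ** 2)

a, b = 1.0e-3, Lt_M
for _ in range(80):
    m = 0.5 * (a + b)
    if diff(a) * diff(m) <= 0.0:
        b = m
    else:
        a = m
print(0.5 * (a + b))                  # Ltilde_l ~ 0.0445, i.e. L_l ~ 0.161
\end{verbatim}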
In Fig.~\ref{5ones} we represent the leftmost five peaks of Fig.~\ref{cdh} left as a function of the parameter $\tilde{L}$. They correspond to $m_1=46368,75025,$ $121393,196418,317811$. Also, in blue, we represent the function $\hat{\Psi}_1(\tilde L)$. We see in the right plot that, as $\nu$ decreases to 0, the curves tend to $\hat{\Psi}_1(\tilde{L})$. \begin{figure}[ht] \begin{center} \begin{tabular}{cc} \hspace{-2mm}\epsfig{file=./figs/last5_lim.eps,width=7cm} & \hspace{-4mm}\epsfig{file=./figs/last5_limdet.eps,width=7cm} \end{tabular} \end{center} \caption{Left: The five leftmost peaks of Fig.~\ref{cdh} as a function of $\tilde{L}$ (in red). The function $\hat{\Psi}_1(\tilde{L})$ is also shown (in blue). All of them almost coincide at this scale. Right: Magnification of the central zone of the left plot. We see that the peaks move down as $\nu$ decreases (and $m_1$ increases). They tend to $\hat{\Psi}_1(\tilde{L})$.} \label{5ones} \end{figure} In Fig.~\ref{Psi} we represent the function $\hat{\Psi}_1(\tilde{L})$ as a function of $\log(\tilde{L})$. The maximum of $\hat{\Psi}_1(\tilde{L})$ is $\approx -2.555210$, in good agreement with the numerical values shown in Fig.~\ref{5ones} and in Fig.~\ref{cdh} left. It is achieved for $\tilde{L} \approx 0.072529$. After a change of coordinates the function $\hat{\Psi}_1(\tilde{L})$ behaves as $-\log(\cosh(\tilde{L}))$, see \cite{DelGelJorSea97,DelGut05}. \begin{figure}[ht] \begin{center} \hspace{-2mm}\epsfig{file=./figs/bonylim.eps,width=7cm} \end{center} \caption{We depict $\hat{\Psi}_1(\tilde{L})$ as a function of $\log(\tilde{L})$ for $\gamma=(\sqrt{5}-1)/2$ (see text for details).} \label{Psi} \end{figure} Let us consider two values of $\tilde{L}$, say $\tilde{L}_1$ and $\tilde{L}_2$, $\tilde{L}_1<\tilde{L}_2$, corresponding to different harmonics. Assume that for $\nu>0$ small enough these harmonics are related to two consecutive best approximants of $\gamma$, say $m_1/m_2$ and $m_2/m_3$ (the numerators $m_1$ and $m_2$ are two consecutive Fibonacci numbers). Assume that the change of harmonic takes place at $\nu=\nu_0^*$; then $\tilde{L}_1= m_1^2\nu_0^*$, $\tilde{L}_2=m_2^2 \nu_0^*$ and $\hat{\Psi}_1(\tilde{L}_1)=\hat{\Psi}_1(\tilde{L}_2)$. Moreover, if $m_1$ is large, then one has $m_2 \approx (1+\gamma) m_1$ and therefore $\tilde{L}_2 \approx (1+\gamma)^2 \tilde{L}_1$. One obtains $\tilde{L}_l\!\approx\! 0.044524$ as the asymptotic value of $\tilde{L}$ where the change takes place. Notice that the corresponding value $L=c_{s,n}\tilde{L}_l \approx 0.16109$ is very close to the value of $L_l$ obtained above using just $\mathcal{P}_F$. One has $\hat{\Psi}_1(\tilde{L}_l) \approx -2.652115$, which is represented as a horizontal line in Fig.~\ref{Psi}. We conclude that the change $m_1=F_j \to m_1= F_{j+1}$ of dominant harmonic of $\DFt{1}$ takes place at the value $\nu=\nu_j \approx \tilde{L}_l/ F_j^2$. In Table~\ref{taulanubif} we can see that, for large intervals of $\nu$, both $\DFt{1}$ and $\DFt{2}$ have the same dominant harmonic. Indeed, relation (\ref{G1G2rel}) implies, in particular, that the changes of dominant harmonic in $\DFt{1}$ and in $\DFt{2}$ tend to coincide as $\nu \rightarrow 0$. Concretely, denote by $\nu_j^{(i)}$ the sequence of values of $\nu$ for which the dominant harmonic of $\DFt{i}$ changes; the values of $\nu_j^{(1)}$ have been determined in Section~\ref{goldenfreq}.
To look for the values $\nu_j^{(2)}$ we consider the condition $\Psi_2(L_1) = \Psi_2(L_2)$ with $L_2=L_1(1+\gamma)^2$ which, by (\ref{G1G2rel}), is equivalent to \[\Psi_1(L_1) = \Psi_1(L_1(1+\gamma)^2) - \frac{\sqrt{L_1}}{m_1} \left( \gamma \log(L_1) + 2 (1+\gamma) \log(1+\gamma) \right).\] Note that, since $L = c_{s,m_1/m_2} \nu m_1^2$, when $\nu \rightarrow 0$ we recover the condition that determines the values $\nu_j^{(1)}$. One has $ \nu_{j}^{(2)} = \tilde{L}_l^{(2)}/F_j^2, $ where $\tilde{L}_l^{(2)}= \tilde{L}_l + \mathcal{O}(\sqrt{\nu})$, being $\tilde{L}_l \approx 0.044525$. The values of $\nu_j^{(1)}$ and $\nu_j^{(2)}$, corresponding to the changes of dominant harmonic in $\DFt{1}$ and $\DFt{2}$, respectively, are displayed in Table~\ref{nu1_nu2}. We have considered the range $\log_2(\nu) \in [-24,-16]$. We refer to Fig.~\ref{nterms} left where the computation of the amplitude of the splitting for this range of values of $\nu$ is shown. The best approximant $N_j/D_j = F_j/F_{j+1}$ corresponds to a dominant harmonic for $\nu_j = \mathcal{O}(1/F_j^2)$. Hence $\nu_j^{(1)} - \nu_j^{(2)}= \tilde{L}_l/F_j^2 - (\tilde{L}_l + \mathcal{O}(\sqrt{\nu_j}))/F_j^2 = \mathcal{O}(\nu_j \sqrt{\nu_j})$, as it is observed in the last column of Table~\ref{nu1_nu2}. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $N_j$ & $N_{j+1}$ & $\log_2(\nu_j^{(1)})$ & $\log_2(\nu_j^{(2)})$& $\nu_j^{(1)}-\nu_j^{(2)}$ & Coeff \\ \hline 55 & 89 & -16.04563135 & -16.05223394 & 0.675040E-07 & 1.191635 \\ 89 & 144 & -17.43664042 & -17.44071697 & 0.159057E-07 & 1.190968 \\ 144 & 233 & -18.82665512 & -18.82917332 & 0.375102E-08 & 1.190692 \\ 233 & 377 & -20.21609319 & -20.21764898 & 0.884894E-09 & 1.190469 \\ 377 & 610 & -21.60516252 & -21.60612386 & 0.208812E-09 & 1.190355 \\ 610 & 987 & -22.99400932 & -22.99460338 & 0.492817E-10 & 1.190280 \\ \hline \end{tabular} \caption{ Values of $\nu_j^{(1)}$ and $\nu_j^{(2)}$ for which the change from the dominant harmonic related to the approximant $N_j/D_j$ to $N_{j+1}/D_{j+1}$ takes place. The last column displays the value of the coefficient $\text{Coeff} \approx (\nu_j^{(1)}-\nu_j^{(2)})/\nu_m^{3/2}$, $\nu_m=(\nu_j^{(1)}+\nu_j^{(2)})/2$. } \label{nu1_nu2} \end{center} \end{table} We remark that the previous comments assert that $\nu_j^{(1)}$ and $\nu_j^{(2)}$, corresponding to changes of dominant harmonic in $\DFt{1}$ and $\DFt{2}$, tend to coincide as $\nu \rightarrow 0$. For values of $\nu \in I_j=[\nu_j^{(2)},\nu_j^{(1)}]$ the dominant harmonic of each splitting function is different. This has some dynamical consequences: according to Appendix~\ref{split-difusion} one expects to have a faster diffusion process in phase space (but taking place in exponentially large times!) for values of $\nu \in I_j$ rather than for values of $\nu$ outside the union of the intervals $I_j$. Numerical massive investigations of the diffusion phenomena taking place for the example considered in this work and for small enough values of $\nu$ so that the limit behaviour can be observed would require a huge (nowadays prohibitive!) amount of computing time. Nevertheless, we believe that some numerical explorations of this model for moderate values of $\nu$ are of much interest. We postpone them for future works. \subsubsection{A general frequency $\gamma$} The same strategy can be used to look for values $\nu_j$ for which there is a change of dominant harmonic of $\DFt{1}$ (and of $\DFt{2}$) for general $\gamma$. 
Consider approximants $m_1/m_2$ and $n_1/n_2$ of $\gamma$ such that the related harmonics become dominant for $\DFt{1}$ (similarly for $\DFt{2}$) in adjacent intervals of $\nu$. The change of dominant harmonic for $\DFt{1}$ takes place for $\nu$ such that $\Psi_1(L_1)=\Psi_1(L_2)$, where $L_1=c_{s,m_1/m_2} \nu m_1^2$ and $L_2= c_{s,n_1/n_2} \nu n_1^2 $. Using that $L_2=L_1 n_1^2 c_{s,n_1/n_2} / (m_1^2 c_{s,m_1/m_2})$ the previous equation can be solved for $L_1$ (e.g. numerically by simple iteration) to obtain the values of $\nu = \nu_j$ corresponding to the changes. As an illustrative example, we show in Fig.~\ref{em2} the results for the transcendental frequency $\gamma=e-2$. From its CFE properties it follows that the constants $c_{s,m_1/m_2}$ become unbounded, see details in Section~\ref{sect_unboundedCFE}. On the other hand, we see in the figure that all the harmonics related to best approximants become dominant in a suitable range of $\nu$. We remark that for other $\gamma$ it might happen that some best approximants are not related to a dominant harmonic of $\DFt{1}$ (see examples in Section~\ref{Sec:Otherfreq}). Concretely, for $\gamma=e-2$, we show in Fig.~\ref{em2} the functions $\Psi_1$ as blue lines and the points that correspond to the values of $\nu$ where a change of the dominant harmonic takes place. These values are obtained by comparing the functions $\Psi_1$ for different approximants as explained above in this section. As an extra check, we have compared the values of $\nu$ obtained by the previous procedure with the corresponding values obtained if one computes the contribution of each harmonic $m_1/m_2$ using the complete expression (\ref{dG1}) for $C_{m_1,m_2}$. These contributions are shown as red lines in the figure. We see that the blue lines are good enough approximations of the red ones for $\nu$ small enough. Moreover, the values of $\nu$ are almost coincident even for the rightmost part of the figure, where the agreement between the blue and red curves is not so good. \begin{figure}[ht] \begin{center} \hspace{-2mm}\epsfig{file=./figs/case.eps,width=8cm} \end{center} \caption{For $\gamma=e-2$ we show the contribution of the different harmonics, the range of $\nu$ where they become dominant and the changes. The horizontal axis corresponds to $\log_2(\nu)$. In red we depict $\sqrt{\nu} \log(C_{m_1,m_2})$. In blue the functions $\Psi_1(L)/\sqrt{c_{s,m_1/m_2}}$ obtained for the corresponding best approximants. The points correspond to the values of $\nu$ where there is a change of dominant harmonic. They are computed using the functions $\Psi_1(L)$. Note that for $\log_2(\nu) < -25$ both red and blue curves become almost coincident.} \label{em2} \end{figure} \subsection{The effect of non-dominant terms of the splitting function} \label{Sec:non-dominant} To find the dominant terms of the splitting $\DFt{1}$ we have considered values of $\nu$ small enough (fixed) and have looked for the values of $m_1$ for which $L= c_{s,m_1/m_2} \nu m_1^2$ is the closest to the value $L_M$ at which $\Psi_1(L)$ attains its maximum. This term (or these terms if, for example, we are close to a change of dominant harmonic) gives the maximum contribution to the Melnikov function $\DFt{1}$ in (\ref{exprdG1dG2_1}). However, to assert that the splitting Melnikov function $\DFt{1}$ is of the order of these dominant terms there are some details to be checked.
As said in the Introduction, a theoretical proof must consider the effect of all the harmonics of the splitting function, bound the effect of the ones related to approximants which are not best approximants, and bound the effect of the best approximants which are non-dominant (for the values of $\nu$ considered). In particular, one has to address the following questions. \begin{enumerate} \item For a fixed $\nu$ we look for $m_1$ giving the most important terms in $\DFt{1}$. What is the effect of the other terms associated to best approximants for this value of $\nu$? \item Of course there are other approximants of $\gamma$ which are not best approximants. We call them ``subapproximants''. What is their contribution to $\DFt{1}$? What are the corresponding constants $c_{s,m_1/m_2}$ related to each family of subapproximants, and what is their contribution to $\DFt{1}$? \item Looking at the expression (\ref{exprdG1dG2_1}) of $\DFt{1}$ we see that values of $k,i,j$ for which $s$ is large correspond to terms which make a small contribution to the total sum. But there are infinitely many of these terms. How can their total contribution be bounded? \end{enumerate} Even if we are not going to address these questions formally, we want to provide an idea of how useful the universal function $\Psi_1$ can be to investigate them. For concreteness we focus on $\gamma=(\sqrt{5}-1)/2$. We recall that in this case one has $c_{s,n} \rightarrow 3+\gamma$ as $n \rightarrow \infty$ (see Section~\ref{Sec:periodicity_csn}). We proceed as follows. \begin{enumerate} \item To evaluate the function $\Psi_1(L)$ we consider the algorithm introduced in Section~\ref{sectchanges}. Recall that $\Psi_1$ depends on $\gamma,c_{s,m_1/m_2},c$ and $d$ but not on $\nu$. \item We compute the maximum of $\Psi_1(L)$. We denote by $\tilde{L}_M$ the value of $\tilde{L}=L/c_{s,m_1/m_2}$ for which the maximum is attained. \item We take $\nu$ small enough and we look for the integer $m_1$, among the numerators of the best approximants, closest to $\sqrt{\tilde{L}_M/\nu}$. Maybe there are two integer values at a similar distance and a bifurcation takes place because the dominant harmonic of $\DFt{1}$ changes. For $\gamma=(\sqrt{5}-1)/2$ this happens whenever $\Psi_1(L)= \Psi_1(L(1+\gamma)^2)$. \item If for the chosen value of $\nu$ there is an integer $m_1$ for which $\tilde{L}= \nu m_1^2=\tilde{L}_M$, then we have to check that the value of $\Psi_1$ at $\tilde{L}=\tilde{L}_{Mk}:=\tilde{L}_M(1+\gamma)^k$ for $k=\pm 2,\pm 4, \ldots$ is small enough. \item If $\nu=\nu_B$ corresponds to a bifurcation then $\Psi_1(\tilde{L}_B)=\Psi_1(\tilde{L}_B (1+\gamma)^2)$ for $\tilde{L}_B= \nu m_1^2$. \item It might also be interesting to look for values of $\nu$ for which there is a dominant harmonic but there is a change of subdominant harmonic. This happens for $\tilde{L}=\tilde{L}_C$ such that $\Psi_1(\tilde{L}_C(1+\gamma)^2) = \Psi_1(\tilde{L}_C/(1+\gamma)^2)$. \end{enumerate} We note that we have performed all the computations for $c=5$ and $d=7$. What happens in the limit cases, that is, either for $d \rightarrow \sqrt{2}$ as a function of $c$ or for $c \rightarrow 1$ as a function of $d$? Note that when $c \rightarrow 1$ the function $f(\theta)$, see (\ref{perturbedsystem}), tends to be unbounded, as do its Fourier coefficients (\ref{expcs}). The same happens for the function $g(y_1)$ and its power expansion when $d\to\sqrt{2}$. In Fig.~\ref{varisL} we summarize some data obtained by the implementation of the previous items.
Concretely, in Fig.~\ref{varisL} top left we show the points $\tilde{L}_S$ where $S=M,B,C$. The points with subscript $+$ and $++$ (resp. $-$ and $--$) denote the values of $\tilde{L}_S$ for the next and the second next approximants to $\gamma$. Since we are dealing with the golden frequency $\gamma$ we have considered the normalized function $$ \hat{\Psi}_1(\tilde L)= \Psi_1(\tilde{L})/\sqrt{c_\infty}, \quad \text{being} \ c_\infty=3+\gamma \ \text{the limit value of } \{c_{s,n}\}_n, $$ and we represent $\hat{\Psi}_1(\tilde L)$ as a function of $\log(\tilde L)$. We also show the same function translated to the right and to the left by $2 \log(1+\gamma)$. These correspond to the functions $\Psi_1$ associated to the previous and next best approximants of $\gamma$. The top left plot corresponds to $c=5,d=7$. In the top center plot we represent the same as in the top left one, but for values $c=1.1, d=1.5$ close to the limit. In the top right plot of Fig.~\ref{varisL} we represent $\log(-\hat{\Psi}_1(\tilde{L}))$ as a function of $\log \tilde{L}$ for $c=5,d=7$, and we check that $\log(-\hat{\Psi}_1(\tilde{L}))$ behaves as $|\log \tilde{L}|/2$ as follows from the expressions for $I$, $\mathcal{P}_F$, $\mathcal{P}_M$ and $\Psi$, in (\ref{eqI}), (\ref{cPF}), (\ref{cPM}) and (\ref{funPsi}), respectively. In the logarithmic scale used in the plot we clearly observe that, after shifting the origin and scaling coordinates, $\hat{\Psi}_1$ behaves as $\log(\cosh(\tilde{L}))$. The dependence of the maximum value of $\hat{\Psi}_1(\tilde{L})$ as a function of $(c,d)$ forms the surface shown in the bottom row of Fig.~\ref{varisL}. We recall our notation: the maximum of $\hat{\Psi}_1(\tilde{L})$ is achieved at $\tilde{L}=\tilde{L}_M$. As expected all the maxima are negative values. \begin{figure}[!ht] \begin{center} \begin{tabular}{ccc} \hspace{-6mm}\epsfig{file=./figs/psiclas.eps,width=6cm} & \hspace{-6mm}\epsfig{file=./figs/psicorn.eps,width=6cm} & \hspace{-6mm}\epsfig{file=./figs/psigrandom.eps,width=6cm} \end{tabular} \begin{tabular}{c} \epsfig{file=./figs/psimaxcd.eps,width=10cm} \vspace{-8mm} \end{tabular} \end{center} \vspace{-6mm} \caption{$\gamma=(\sqrt{5}-1)/2$. Top left: $\hat{\Psi}_1(\tilde{L})$ as a function of $\log(\tilde{L})$ for the values $c=5,d=7$. Also we show the same function translated to the left and to the right by $2\log(1+\gamma)$. The marked points are: $M$ for maximum and then $M_{++},M_{+},M_{-},M_{--}$ for the $m_1$ values of the previous and next approximants; $B$ for the change of dominant harmonic and then $B_{+},B_{-}$ for the nearby approximants too; $C$ for the subdominant harmonic change. Top center: the same as in the top left but for $c,d$ values close to the limit: $c=1.1$, $d=1.5$. Top right: for $c=5,d=7$ we show $\log(-\hat{\Psi}_1(\tilde{L}))$ as a function of $\log(\tilde{L})$. Bottom: Maxima of $\hat{\Psi}_1(\tilde{L})$ as a function of $(c,d)$.} \label{varisL} \end{figure} \subsection{The splitting function for different frequencies} \label{Sec:Otherfreq} In this section we illustrate what happens for several frequencies $\gamma$. We show some computations for concrete cases, including the golden mean, for comparison, in Fig.~\ref{othergamma}. The sequence of dominant harmonics and the values $\nu=\nu_j$ at which the change of dominant harmonic takes place depend on the CFE and not only on the Diophantine properties of $\gamma$. 
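
To make the dependence on the CFE explicit, the best approximants $N_n/D_n$ and the constants $c_{s,n}$ can be generated directly from the quotients. A minimal sketch (our own Python helper, shown only for illustration; double precision limits it to the first convergents) follows; the quotient lists are taken from the CFEs of the golden mean and of $e-2$ discussed in this section.
\begin{verbatim}
from math import sqrt, e

def convergents_and_cs(gamma, quotients):
    # best approximants N_n/D_n of gamma in (0,1) (so q_0 = 0) from its CFE
    # quotients [q_1, q_2, ...], with c_{s,n} = 1/(N_n * |D_n*gamma - N_n|)
    N_prev, D_prev = 1, 0             # (N_{-1}, D_{-1})
    N, D = 0, 1                       # (N_0, D_0)
    out = []
    for q in quotients:
        N, N_prev = q * N + N_prev, N
        D, D_prev = q * D + D_prev, D
        out.append((N, D, 1.0 / (N * abs(D * gamma - N))))
    return out

# Case 0: golden mean, c_{s,n} -> 3 + gamma
for N, D, cs in convergents_and_cs((sqrt(5.0) - 1.0) / 2.0, [1] * 20):
    print(N, D, cs)

# gamma = e - 2 = [0;1,2,1,1,4,1,1,6,...]: one subsequence of c_{s,n} grows
quot = [1] + [x for k in range(2, 10, 2) for x in (k, 1, 1)]
for N, D, cs in convergents_and_cs(e - 2.0, quot):
    print(N, D, cs)
\end{verbatim}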
In Fig.~\ref{othergamma} we represent the contributions $C_{m_1,m_2}$ to $\DFt{1}/\epsilon$ as a function of $\log_2(\nu)$ for different values of $\gamma$. The results for $\gamma=(\sqrt{5}-1)/2$ are shown in the top left plot (Case 0). Concretely, we represent the contributions $C_{m_1,m_2}$ corresponding to the approximants of the golden frequency with $m_1$ between $21$ and $514229$. Compare with Fig.~\ref{cdh} left. Note that all the approximants become dominant in a suitable range of $\nu$. However, as can be seen in the plots, this does not happen for other frequencies $\gamma$. For concreteness, below we consider the following cases (the notation $10 \times 1$ in the CFEs below denotes ten consecutive quotients equal to one). \begin{align*} \text{Case 0: } \gamma &= (\sqrt{5}-1)/2 = [1,1,1,1,1,...] \approx 0.618033988749894848204. \\[0.2cm] \text{Case 1: } \gamma &= (55(1+b)+34)/(89(1+b)+55) \text{ with } b=(\sqrt{122}-10)/11, \\ & \text{hence } \gamma=[10 \! \times \! 1, 1,10,1,1,10,1,1,10,1,...] \approx 0.6180512268192526496794.\\[0.2cm] \text{Case 2: } \gamma &= (55(1+b)+34)/(89(1+b)+55) \text{ with } b=(\sqrt{140}-10)/20, \\ & \text{hence } \gamma=[10 \! \times \! 1, 1,10,1,10,1,10,1,10...] \approx 0.6180513744611582707944. \\[0.2cm] \text{Case 3: } \gamma &= [10 \! \times \! 1,2,3,4,5,6,7,8,9,10,...] \approx 0.6180206632934375446297. \end{align*} For each one of the previous cases, we list the consecutive numerators of the approximants of $\gamma$ for which the corresponding harmonic term of the splitting function becomes dominant (in a suitable range of $\nu$). \begin{align*} \text{Case 0: } & 21,34,55,89,144,233,377,610,987,1597,2584,4181,6765,10946,17711,28657,\\ & 46368,75025,121393,196418,317811,514229. \\[0.2cm] \text{Case 1: } & 21,34,89,945,1034,1979,20824,22803,43627,459073,502700. \\[0.2cm] \text{Case 2: } & 21,34,89, 1034,12319,146794. \\[0.2cm] \text{Case 3: } & 21,34,55,144,487,2092,10947,67774,485365. \end{align*} The contributions of the harmonic terms related to consecutive best approximants to the total splitting are shown in Fig.~\ref{othergamma}. In order to explain the results displayed in the figure for different frequencies $\gamma$, we investigate the Diophantine properties of $\gamma$ and relate them to the properties of the constants $c_{s,m_1/m_2}$. \begin{figure}[ht] \begin{center} \begin{tabular}{cc} \hspace{-3mm}\epsfig{file=./figs/cas0.eps,width=8cm} & \hspace{-6mm}\epsfig{file=./figs/cas1.eps,width=8cm} \\ \hspace{-3mm}\epsfig{file=./figs/cas2.eps,width=8cm} & \hspace{-6mm}\epsfig{file=./figs/cas3.eps,width=8cm} \\ \end{tabular} \end{center} \caption{We represent the values of $\sqrt{\nu} \log(C_{m_1,m_2})$ for the approximants $m_1/m_2$ that contribute to $\DFt{1}/\epsilon$ within the range of $\nu$ shown in the plots (the horizontal axis is $\log_2(\nu)$). Top left (Case 0): $\gamma=(\sqrt{5}-1)/2$. Top right (Case 1): $\gamma= (55(1+b)+34)/(89(1+b)+55)$, with $b=(\sqrt{122}-10)/11$. Bottom left (Case 2): $\gamma= (55(1+b)+34)/(89(1+b)+55)$, with $b=(\sqrt{140}-10)/20$. Bottom right (Case 3): $\gamma=[0;10 \! \times \! 1,2,3,4,5,6,7,8,9,10,...]$. The same windows have been used in all plots for comparison. } \label{othergamma} \end{figure} \subsubsection{Periodicity of the constants $c_{s,m_1/m_2}$ for quadratic irrational frequencies} \label{Sec:periodicity_csn} First, it turns out that for quadratic $\gamma \in \mathbb{R}\setminus \mathbb{Q}$ the constants $c_{s,m_1/m_2}$ tend to be periodic when $\nu \rightarrow 0$.
This is a consequence of the basic CFE property in Lemma~\ref{lema_approximants_distance} below. Let $\{q_j\}_{j\geq 0}$ be an infinite or finite sequence of natural numbers, with $q_0 \geq 0$ and $q_j \geq 1$ for $j \geq 1$, which defines a CFE of a real number in the usual way. Given a frequency $\gamma=[q_0;q_1,q_2,...] = q_0+\frac{1}{q_1 + \frac{1}{q_2 + ...}}$, denote by $N_n/D_n=[q_0;q_1,\dots,q_n]$, $n\geq 0$, the $n$-th order approximant of $\gamma$. Introducing $N_{-1}=1$, $D_{-1}=0$, the following basic properties hold (see for example \cite{Khi64} for proofs). For all $n \geq 1$, \vspace{-0.2cm} \begin{enumerate} \item[(i)] $N_n = q_n N_{n-1} + N_{n-2}, \qquad D_n=q_n D_{n-1}+D_{n-2}$. \item[(ii)] $|D_n N_{n-1} - D_{n-1} N_n| =1$. \item[(iii)] If $\beta_n=[0;q_{n+1}, ... ]$ then $\displaystyle{\gamma = [q_0;q_1,q_2,...,q_{n-1},q_{n}+\beta_n] = \frac{N_n + \beta_n N_{n-1}}{D_n + \beta_n D_{n-1}}.}$ \item[(iv)] $\displaystyle{\frac{D_{n-1}}{D_n} = [0;q_n,q_{n-1},...,q_1]}.$ \end{enumerate} We introduce the notation $q_{+,n}=[q_{n+1};q_{n+2},...]$ and $q_{-,n}=[q_n;q_{n-1},...,q_1]$. \begin{lema}\label{lema_approximants_distance} The distance between the $n$-th order approximant and $\gamma$, for arbitrary $\gamma \in \mathbb{R} \setminus \mathbb{Q}$, satisfies $$ \left( D_n \left| D_n \gamma - N_n \right| \right)^{-1} = [q_{n+1};q_{n+2},...]+[0;q_n,q_{n-1},...,q_1]. $$ \end{lema} \begin{proof} From properties (ii), (iii) and (iv) one has $$ \left|\gamma - \frac{N_n}{D_n}\right| = \frac{\beta_n}{D_n^2 (1+\beta_n \frac{D_{n-1}}{D_n})} = \frac{ \beta_n}{D_n^2(1+ \beta_n [0;q_n,q_{n+1},...,q_1])}. $$ This implies the result. \end{proof} It is known that $\gamma \in \mathbb{R}$ is a quadratic irrational number if, and only if, its CFE is eventually periodic. \begin{lema} \label{lema_periodic_csn} Let $\gamma$ be a quadratic irrational number with eventually $p$-periodic CFE. Let $c_{s,n}=c_{s,N_n/D_n}=(N_n |D_n \gamma - N_n|)^{-1}$. Then, the sequence of constants $\{c_{s,n} \}_{n \geq 1}$ is asymptotically $p$-periodic (as $n \rightarrow \infty$). \end{lema} \begin{proof} The statement follows from the relation $$ \left( D_n^2 \left| \gamma - \frac{N_n}{D_n} \right| \right)^{-1} = \frac{N_n}{D_n} \, c_{s,n}, $$ which, using the previous Lemma~\ref{lema_approximants_distance}, implies that \begin{equation} \label{csnqpm} c_{s,n} \approx \frac{q_{+,n} + 1/q_{-,n}}{\gamma} (1 + \mathcal{O}(D_n^{-2})). \end{equation} If $\gamma$ is a quadratic irrational number then, taking $n$ large enough, the sequence of quotients of $q_{+,n}$ is periodic and the one of $q_{-,n}$ tends to be periodic, that is, its quotients are periodic except maybe some final ones that have small influence on the value of $q_{-,n}$ if $n$ is large. This implies that $c_{s,n}$ tend to be periodic with respect to $n$ with the same period as the CFE of $\gamma$. \end{proof} In particular, for the values of $\gamma$ referred as Cases $0$, $1$, and $2$ in Section~\ref{Sec:Otherfreq} one has: \begin{align*} \text{Case 0: } & c_{s,n} \rightarrow 3 + \gamma \approx 3.61803398 \text{ as } n \rightarrow \infty. \\[0.2cm] \text{Case 1: } & \left\{c_{s,n}\right\}_n \text{ tend to be 3-periodic. One has }\\ & \hspace{1cm} c_{s,n} \to 17.871271 \dots, \quad c_{s,n+1}=c_{s,n+2} \to 3.249322 \dots \text{ for } n=2 \, (\text{mod} \, 3).\\[0.2cm] \text{Case 2: } &\left\{c_{s,n}\right\}_n \text{ tend to be 2-periodic. 
One has,} \\ & \hspace{1cm} c_{s,n} \to a \approx 1.91442978 \text { for $n$ even, and } \quad c_{s,n}\to 10 a \text{ for $n$ odd.} \end{align*} Then, for the $n$-th approximant of $\gamma$, say $N_n/D_n$, the corresponding maxima shown in Fig.~\ref{othergamma} are approximated by $\Psi_M/\sqrt{c_{s,n}}$, being $\Psi_M \approx -4.860298$, and they are located at $\nu \approx L_M/(N_n^2 c_{s,n})$, where $L_M \approx 0.26236$. For example, in Case 2 the $4$-th visible maximum (from right to left) shown in the bottom left panel of Fig.~\ref{othergamma} is related to $N=1034$ and corresponds to $c_{s,n} \to 10a$. Accordingly its value is $\approx -1.1108186876015$ and it is located at $\log_2(\nu) \approx -26.2172640940432$ in agreement with what is shown in the figure. \subsubsection{Diophantine properties of frequencies with unbounded CFE} \label{sect_unboundedCFE} In Case $3$ the frequency $\gamma$ has an unbounded CFE and $c_{s,n}$ tend to infinity as $n \rightarrow \infty$. On the other hand, for $\gamma=e-2=[0;1,2,1,1,4,1,1,6,1,1,8,...]$ different behaviours of the constants $c_{s,n}$ are mixed. The sequence of best approximants $N_n/D_n$ of $\gamma=e-2$ is \[ 1/1, \ 2/3, \ 3/4, \ 5/7,\ 23/32,\ 28/39,\ 51/71,\ 334/465,\ 385/536,\ 719/1001, \dots \] The values $c_{s,n}$ associated to the approximant $N_n/D_n$ are such that the subsequence $\{c_{s,3m+1}\}_{m \geq 0}$ tends to $\infty$ linearly with slope $2/(3\gamma)$. The other two subsequences of $c_{s,n}$ are bounded, being $c_{s,3m} < c_{s,3m+2}$ for all $m \geq 1$, and they both tend to $2/\gamma$. This explains the bumps observed in Fig.~\ref{em2}. We give further details on the Diophantine properties of the previous unbounded CFE cases. For concreteness, we consider $\gamma_1=[0;1,2,3,4,5,6,7,8,...] \approx 0.69777465796400798200679$ and $\gamma_2=e-2=[0;1,2,1,1,4,1,1,6,1,1,8,...]$. As usual, to get Diophantine approximations the idea is to look for a function $\phi(D_n)$ such that $\phi(D_n) |D_n \gamma - N_n|$ is bounded from below. From the identity $c_{s,n}=(N_n | D_n \gamma - N_n|)^{-1}$ we can take $\phi(D_n)=N_n c_{s,n} \approx D_n \gamma c_{s,n}$, and we note that the constants $c_{s,n}$ can be approximated from the quotients $q_n$ of $\gamma$ using (\ref{csnqpm}). \begin{lema} \label{Lg1} Let $\gamma_1=[0;1,2,3,4,5,6,7,8,...]$. There exists a constant $c>0$ such that \footnote{Note that $\phi(q) < q^{\tau}$ for any $\tau>1$ if $q$ is large enough. Equivalently, $\gamma_1$ satisfies the Diophantine condition $|\gamma_1 - p/q| \geq c/q^{\tau}$ for any $\tau=2+\epsilon$, $\epsilon>0$ and a suitable constant $c=c(\epsilon)>0$.} \[|q \gamma_1 - p| \geq \frac{c}{\phi(q)}, \qquad \phi(q)=q \log(q)/\log(\log(q)),\] for all $p,q \in \mathbb{Z}$ with $q\geq 3$. \end{lema} \begin{proof} Since $q_n=n$ one has $q_{+,n} = (n+1)(1+\mathcal{O}(n^{-2}))$, $q_{-,n}=n(1+\mathcal{O}(n^{-2}))$, and from (\ref{csnqpm}) it follows that $c_{s,n} \gamma_1 \approx (n+1)(1+\mathcal{O}(n^{-2}))$. To obtain an explicit formula for $\phi(n)$ one has to relate $c_{s,n}$ with $D_n$. Note that $D_n = D_{n-1} q_{-,n}$ and then $D_n$ equals $n!$ times a finite product of terms that are convergent when $n \rightarrow \infty$ (because $\sum_{n\geq1} n^{-2}=\pi^2/6$). 
Stirling's approximation provides the relation $$\log(D_n) = n \log n (1+\mathcal{O}(1/\log(n))),$$ which can be solved by Newton iteration (note that the Newton-Kantorovich theorem guarantees that the iteration starting with $n_0=\log(D_n)/ \log(\log(D_n))$ converges provided $D_n$ is large enough) to obtain $$ n = \frac{\log(D_n)}{\log(\log(D_n))} \left( 1 + \mathcal{O}\left( \frac{\log(\log(\log(D_n)))}{\log(\log(D_n))}\right)\right). $$ We conclude that $\phi(D_n)=D_n \log(D_n)/\log(\log(D_n))$ ensures a positive lower bound of the scaled difference $\phi(D_n) | D_n \gamma_1 - N_n|$. If $D_n \leq q < D_{n+1}$ then $|q\gamma_1 - p| \geq | D_n \gamma_1 - N_n| \geq c/\phi(D_n) \geq c/\phi(q)$. Changing the constant $c$ we extend the inequality to all $q \geq 3$. \end{proof} \begin{remark} Numerically we observe that $D_n/n! \rightarrow 2.2796$ as $n \rightarrow \infty$. Let $\Pi_n= \phi(D_n)|D_n \gamma_1 - N_n|$. The sequence $\{\Pi_n\}_{n > 1}$ (for $n=1$ it is not defined!) reaches a minimum value at $n=6$ (that is, when evaluated on the approximant $N_6/D_6=972/1393$, for which one has $\Pi_6 \approx 0.50201173$) and increases monotonically for $n>6$. For $n=1000$ one has $\Pi_{1000} \approx 0.68014970$ and $\Pi_n \approx 0.758203198$ for $n=10^5$. The function $g(n)=|D_n\gamma_1 - N_n|n D_n $ is such that $g(1)=1-\gamma_1$, it is monotonically increasing and it tends to one as $1-\mathcal{O}(1/n)$ when $n\rightarrow \infty$, as expected. \end{remark} We proceed similarly for $\gamma=\gamma_2=e-2$. The following lemma asserts that $\gamma_2$ has similar Diophantine properties to the ones described in Lemma~\ref{Lg1} for $\gamma_1$. \begin{lema} \label{Lg2} There exists a constant $c>0$ such that \[ |q \gamma_2 - p| \geq c/\phi(q), \quad \phi(q)=q \log(q)/\log(\log(q)),\] for all $p,q \in \mathbb{Z}$ with $q \geq 3$. \end{lema} \begin{proof} We have $q_n=2(n+1)/3$ if $n=2 \text{ (mod 3)}$ and $q_n=1$ otherwise. We consider $k=3j+2$ below. One has $q_{+,3j+1}=[q_{3j+2};1,1,q_{+,3j+4}]$ with $q_{3j+2}=2(j+1)$ and $q_{+,3j+4}=\mathcal{O}(j)$, which implies that $q_{+,3j+1}=(2(j+1)+1/2)(1+\mathcal{O}(j^{-2}))=(2(k+1)/3+1/2)(1+\mathcal{O}(k^{-2}))$. From the relation $q_{+,3j}=q_{3j+1}+1/q_{+,3j+1}$ it follows that $q_{+,3j}=(1+3/(2k))(1+\mathcal{O}(k^{-2}))$. Using that $q_{+,3j+2}=[1;1,q_{+,3j+4}]$ where $q_{+,3j+4}=(2(j+2)+1/2)(1+\mathcal{O}(j^{-2}))$ one obtains $q_{+,3j+2}=(2-3/(2k))(1+\mathcal{O}(k^{-2}))$. Moreover, since $q_{-,3j+2}=[2(j+1);1,1,q_{-,3j-1}]$ and $q_{-,3j-1}=\mathcal{O}(j)$, one gets $q_{-,3j+2}=(2(k+1)/3+1/2)(1+\mathcal{O}(k^{-2}))$. From $q_{-,3j+1}=[q_{3j+1}; q_{3j}, q_{-,3j-1}]$, using that $q_{-,3j-1}=2j(1+\mathcal{O}(j^{-1}))$, it follows that $q_{-,3j+1}=(2-3/(2k))(1+\mathcal{O}(k^{-2}))$. Since $q_{-,3j}=[q_{3j};q_{-,3j-1}]$ one obtains $q_{-,3j}=(1+3/(2k))(1+\mathcal{O}(k^{-2}))$. Summarizing, for $k=2 \text{(mod 3)}$, we obtain \[ \begin{array}{lll} q_{+,k-2} \approx 1+\frac{3}{2k}, \qquad & q_{+,k-1} \approx \frac{2(k+1)}{3}+ \frac{1}{2}, \qquad & q_{+,k} \approx 2-\frac{3}{2k}, \\ q_{-,k-2} \approx 1+\frac{3}{2k}, & q_{-,k-1} \approx 2-\frac{3}{2k}, & q_{-,k} \approx \frac{2(k+1)}{3}+ \frac{1}{2}, \end{array} \] with a relative error $\mathcal{O}(k^{-2})$ in all cases. Using (\ref{csnqpm}) and the previous estimates we get $c_{s,k-2}\gamma_2 \approx 2$, $c_{s,k-1}\gamma_2 \approx 2(k+1)/3+1$ and $c_{s,k} \gamma_2 \approx 2$. We conclude that $c_{s,n}$, $n\geq 1$, is at most $\mathcal{O}(n)$. Next, we look at the denominators $D_n$.
We use the properties listed in the items before Lemma~\ref{lema_approximants_distance}. The recurrence $D_n=q_n D_{n-1} + D_{n-2}$, implies that $D_{3j+2} = 2(j+1)( D_{3j}+D_{3j-1})+D_{3j}$ and, using the identity $D_{3j}=D_{3j-1} q_{-,3j}$, it simplifies to \[ D_{3j+2}= \left( 2(j+1)(1+q_{-,3j})+q_{-,3j} \right) D_{3j-1}. \] Since $q_{-,3j}=[q_{3j};q_{-,3j-1}]=(1+1/(2j))(1+\mathcal{O}(j^{-2}))$, one obtains \[ D_{3j+2} = 4 \left(j + \frac{3}{2} \right) D_{3j-1} (1+\mathcal{O}(j^{-2})) = \frac{4k+10}{3} \, D_{3j-1} (1+\mathcal{O}(k^{-2})) .\] From this recurrence we obtain $D_{3j+2}\approx 4^j \Gamma(j+5/2)$ or, equivalently, \begin{equation} \label{Dn2} D_n \approx 4^{n/3} \Gamma(n/3+11/6), \qquad \text{for $n$ such that $n=2 (\text{mod }3$).} \end{equation} Then, Stirling's approximation gives \[ 3 \log D_n \approx n \log n - an + 4\log n, \qquad a=1+\log 3/4.\] As in Lemma~\ref{Lg1}, we take $n_0= 3 \log(D_n)/\log(\log(D_n))$ and solve this relation by (Newton) iteration (the convergence follows from the Newton-Kantorovich theorem). We obtain \[ n = \frac{3 \log(D_n)}{\log(\log(D_n))} \left( 1 + \mathcal{O}\left( \frac{\log(\log(\log(D_n)))}{\log(\log(D_n))}\right)\right).\] Now the proof finishes as the proof of Lemma~\ref{Lg1}. Since $c_{s,n}=\mathcal{O}(n)$ then one takes $\phi(q)=q \log(q)/\log(\log(q))$ to have values of $|q \gamma_2-p| \phi(q)$ bounded from below. \end{proof} We show in Fig.~\ref{phiq} left the values of $\xi_n=|D_n \gamma_2 - N_n| \phi(D_n)$ as a function of $n$. In the right plot we only show the local minima of $\xi_n$ (i.e. those corresponding to $n=1 (\text{mod }3$)). According to the theoretical predictions, the minimum values tend to a constant. \begin{figure}[ht] \begin{center} \begin{tabular}{cc} \hspace{-4mm}\epsfig{file=./figs/cotainfem2a.eps,width=0.5\textwidth} & \hspace{-6mm}\epsfig{file=./figs/cotainfem2b.eps,width=0.5\textwidth} \end{tabular} \end{center} \vspace{-8mm} \caption{Left: We represent $\xi_n=|D_n \gamma_2-N_n|\phi(D_n)$ as a function of $n$. In the right plot we focus on the behaviour of the minima (up to $n=1000$). } \label{phiq} \end{figure} \begin{remark}\label{rk7p3} We have seen that the frequencies $\gamma_1$ and $\gamma_2$ satisfy a condition of the form $|q \gamma -p| \geq c \log(\log(q))/(q \log(q))$, $q\geq 3$. We remark that the set of irrational numbers $\gamma$ satisfying such a type of condition for some $c>0$ has zero measure. By contrast, the set of irrational numbers $\gamma$ satisfying a condition of the form $|q\gamma - p|\geq c/\phi(q)$ for $q\geq q_0$ for some $c>0$ and such that $1/\phi(q)$ is integrable in the range $[q_0,\infty)$, has total measure. We refer to \cite{Khi64} for further details on measure aspects of CFE. As examples we can consider numbers of the form $\gamma=[0;1^m,2^m,3^m,4^m,5^m,\ldots]$ for $1<m\in\mathbb{Z}_+$. They satisfy $|q\gamma - p|\geq c (\log(\log(q)))^m/(q(\log(q))^m)$ for $q\geq 3$ and a positive constant $c$. \end{remark} \begin{remark} \label{rk7p4} It follows from the reasoning in Remark~\ref{remark_k1k2} that if $\gamma$ satisfies a Diophantine condition of the form $|q \gamma - p| \geq c |q|^{-\tau}$, $\tau \geq 1$, $c>0$, then the exponentially small part of the splitting is expected to have an exponent of the form $-C/\nu^{1/(\tau+1)}$ with $C>0$. 
A similar reasoning shows that if $\gamma$ satisfies a condition of the form $|q \gamma -p| \geq c \log(\log(q))/(q \log(q))$, $q\geq 3$, $c>0$, as the ones considered in this section, then the exponent of the exponentially small part becomes \[ \left(\frac{-C\sqrt{\log(\log(\sqrt{1/\nu}))}}{\sqrt{\nu\log(\sqrt{1/\nu})}} \right)(1+o(1)),\quad C>0,\] where the $o(1)$ terms are bounded by $\log(\log(1/\nu))/\log(1/\nu)$. Consequently, if one represents $\log(|\DFt{1}|/\epsilon)$ multiplied by $\sqrt{\nu \log(1/\nu)/\log(\log(1/\nu))}$ (instead of by $\sqrt{\nu}$) as a function of $\log_2(\nu)$, as we did in Fig.~\ref{em2} and in Case 3 of Fig.~\ref{othergamma}, then the maxima tend to a constant value. In the multiplying factor we have neglected $\log(2)$ in front of $\log(\sqrt{1/\nu})$. \end{remark} \section{Conclusions and future work} \label{Sec:Conclusion} In this work we have investigated the asymptotic properties of the splitting of the invariant manifolds emanating from a complex-saddle fixed point of a 2-dof Hamiltonian system $H_0({\bf x},{\bf y})$ undergoing a Hamiltonian-Hopf bifurcation (at $\nu=0$) when a periodic forcing acts on the system. We have obtained detailed information on the exponentially small behaviour, describing the changes of dominant harmonic as $\nu \rightarrow 0$. As has been discussed throughout the paper, for the concrete example considered, when using the Poincar\'e-Melnikov method it remains to bound the effect of \begin{itemize} \item the terms in the first order Melnikov approximation not related to best approximants, and \item the non-dominant terms in the splitting function, bounding the effect of higher order Melnikov approximations. \end{itemize} In any case, the detailed description presented in this paper takes advantage of the concrete properties of the explicit periodic perturbation $\epsilon H_1({\bf x},{\bf y},t)$ considered. In this sense, an interesting topic for future work would be to consider other perturbations $H_1({\bf x},{\bf y},t)$, for example: \begin{itemize} \item perturbations having a finite number of harmonics (then higher order Melnikov analysis could be required to analyse the changes of dominant harmonic); \item perturbations under which the fixed point of the system becomes a periodic orbit. \end{itemize} As a consequence of the splitting of the invariant manifolds, in a neighbourhood of the stable/unstable invariant manifolds there is a region where rich dynamics appears. A desirable tool to investigate this dynamics would be a suitable return map adapted to this problem. Such a return map depends on two key ingredients: the return time to a suitable Poincar\'e section and the splitting function. This is an interesting problem, motivated by the expected slow diffusive properties (see Appendix~\ref{split-difusion}), that we postpone to future works. Concretely, it involves: \begin{itemize} \item to construct a 4D separatrix map adapted to this problem. As said, this requires not only the splitting function (see Section~\ref{Sec:Splitting}) but also the passage time close to the complex-saddle point, and \item to provide a description of the geometry of the phase space (resonance web) and analyse the diffusive properties of the model. Note, however, that observing the asymptotic behaviour requires very small values of $\nu$, outside the range of interest of any physical application.
\end{itemize} Finally, we also note that the case considered here is, in some sense, intermediate between the 2-dof Hamiltonian case (in which the splitting behaves like that of a periodic perturbation of an integrable system) and the splitting of the separatrices for a family of 4D symplectic maps undergoing a Hamiltonian-Hopf bifurcation (in which the perturbation is not explicit). Hence, a natural continuation of this work would be to consider the analogous Hamiltonian-Hopf bifurcation for 4D symplectic maps and to study the splitting of the invariant manifolds and its consequences for the diffusion properties.
\section{Introduction} The determination of the nuclear symmetry energy (SE) based on microscopic and/or phenomenological approaches is of great interest in nuclear physics as well as in nuclear astrophysics. For instance, it is important for the study of the structure and reactions of neutron-rich nuclei, Type II supernova explosions, neutron-star mergers and the stability of neutron stars. In addition, the SE is the basic ingredient for the determination of the proton fraction and the electron chemical potential. The above quantities determine the cooling rate and neutrino emission flux of protoneutron stars and the possibility of kaon condensation in dense matter \cite{Bethe-90,Prakash-97}. Heavy-ion reactions are a unique means to produce in terrestrial laboratories hot neutron-rich matter similar to that existing in many astrophysical situations \cite{Bao-Li-06}. Although the behavior of the SE for densities below the saturation point still remains unknown, significant progress has been made only very recently in constraining the SE at subnormal densities and around the normal density from the isospin diffusion data in heavy-ion collisions \cite{Wen05, Bao05}. This has led to a significantly more refined constraint on the neutron-skin thickness of heavy nuclei \cite{Steiner05, Chen05} and on the mass-radius correlation of neutron stars \cite{Li-06}. For densities above the saturation point the trend of the SE is model dependent, with different models exhibiting completely different behavior. Up to now, the main part of the calculations concerning the density dependence of the SE has been devoted to cold nuclear matter ($T=0$). Recently, however, there has been increasing interest in the study of the SE and the properties of neutron stars at finite temperature \cite{Bao-Li-06,Donati-94,Mishra-93,Ccernai-92,Zuo-03,Chen-01,Xu-07-1,Xu-07-2}. The motivation of the present work is to clarify the effects of finite temperature on the SE and also to find appropriate relations describing these effects. In particular, we focus on the interaction part of the SE, whose temperature dependence has so far received little theoretical attention. In order to investigate the thermal properties of the SE, we apply a momentum-dependent effective interaction model. In that way, we are able to study simultaneously the thermal effects not only on the kinetic part of the symmetry energy but also on the interaction part. The present model was introduced by Gale et al. \cite{Gale-87,Gale-90,Bertsch-88,Prakash-88-1} in order to examine the influence of momentum-dependent interactions on the momentum flow of heavy-ion collisions. Over the years the model has been extensively applied, with suitable modifications, not only to the study of heavy-ion collisions but also to the properties of nuclear matter \cite{Das-03,Das-07,Bao-Li-04,Chen-05}. A review analysis of the present model is presented in Refs. \cite{Prakash-97,Bertsch-88}. In the present work we study the thermal properties of the nuclear symmetry energy by applying the above phenomenological model, focusing mainly on the temperature dependence of the kinetic and interaction parts of the SE as well as on the total SE. Though it is well known how the temperature affects the kinetic part of the symmetry energy \cite{Bao-Li-06,Lee-01,Mekjian-05}, the temperature dependence of the interaction part of the SE has so far received little theoretical attention. In addition, we determine the temperature dependence of the proton fraction as well as of the electron chemical potential.
Both of the above quantities are related to the thermal evolution of supernovae and protoneutron stars. The single particle potential for pure neutron matter and symmetric nuclear matter, extensively applied in heavy-ion collision research, is also estimated for various values of the temperature. Finally, we construct the equation of state (EOS) of $\beta$-stable matter, which is the basic ingredient for calculations of the neutron star properties. The plan of the paper is as follows. In Sec.~II the model and the relevant formulae are discussed and analyzed. Results are reported and discussed in Sec.~III, while the summary of the work is given in Sec.~IV. \section{The model} The schematic potential model used in the present work is designed to reproduce the results of the more microscopic calculations of both nuclear and neutron-rich matter at zero temperature and can be extended to finite temperature \cite{Prakash-97}. The energy density of the asymmetric nuclear matter (ANM) is given by the relation \begin{equation} \epsilon(n_n,n_p,T)=\epsilon_{kin}^{n}(n_n,T)+\epsilon_{kin}^{p}(n_p,T)+ V_{int}(n_n,n_p,T), \label{E-D-1} \end{equation} where $n_n$ ($n_p$) is the neutron (proton) density and the total baryon density is $n=n_n+n_p$. The contribution of the kinetic parts is \begin{equation} \epsilon_{kin}^n(n_n,T)+\epsilon_{kin}^p(n_p,T)=2 \int \frac{d^3 k}{(2 \pi)^3}\frac{\hbar^2 k^2}{2m} \left(f_n(n_n,k,T)+f_p(n_p,k,T) \right), \label{E-K-D-1} \end{equation} where $f_{\tau}$ (for $\tau=n,p$) is the Fermi-Dirac distribution function with the form \begin{equation} f_{\tau}(n_{\tau},k,T)=\left[1+\exp\left(\frac{e_{\tau}(n,k,T)-\mu_{\tau}(n,T)}{T}\right) \right]^{-1}. \label{FD-1} \end{equation} The nucleon density $n_{\tau}$ is evaluated from the following integral \begin{equation} n_{\tau}=2 \int \frac{d^3k}{(2\pi)^3}f_{\tau}(n_{\tau},k,T)=2 \int \frac{d^3k}{(2\pi)^3}\left[1+\exp\left(\frac{e_{\tau}(n,k,T)-\mu_{\tau}(n,T)}{T}\right)\right]^{-1}. \label{D-1} \end{equation} In Eq. (\ref{FD-1}), $e_{\tau}(n,k,T)$ is the single particle energy (SPE) and $\mu_{\tau}(n,T)$ stands for the chemical potential of each species. The SPE has the form \begin{equation} e_{\tau}(n,k,T)=\frac{\hbar^2k^2}{2m}+U_{\tau}(n,k,T), \label{esp-1} \end{equation} where the single particle potential $U_{\tau}(n,k,T)$ is obtained by differentiating $V_{int}$, i.e. $U_{\tau}=\partial V_{int}(n_n,n_p,T)/\partial n_{\tau}$. Including the effect of finite-range forces between nucleons, in order to avoid acausal behavior at high densities, the potential contribution is parameterized as follows \cite{Prakash-97} \begin{eqnarray} V_{int}(n_n,n_p,T)&=&\frac{1}{3}An_0\left[\frac{3}{2}-(\frac{1}{2}+x_0)(1-2x)^2\right]u^2 +\frac{\frac{2}{3}Bn_0\left[\frac{3}{2}-(\frac{1}{2}+x_3)(1-2x)^2\right]u^{\sigma+1}} {1+\frac{2}{3}B'\left[\frac{3}{2}-(\frac{1}{2}+x_3)(1-2x)^2\right]u^{\sigma-1}} \nonumber \\ &+& \frac{2}{5}u \sum_{i=1,2}\left[(2C_i+4Z_i) \ 2 \int \frac{d^3k}{(2\pi)^3} g(k,\Lambda_i)(f_n+f_p) \right. \nonumber \\ &+& \left. (C_i-8Z_i) \ 2 \int \frac{d^3k}{(2\pi)^3} g(k,\Lambda_i)(f_n(1-x)+f_px) \right], \label{V-all} \end{eqnarray} where $x=n_p/n$ is the proton fraction and $u=n/n_0$, with $n_0$ denoting the equilibrium symmetric nuclear matter density $n_0=0.16$ fm$^{-3}$.
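For a given density and temperature, Eq.~(\ref{D-1}) implicitly determines the chemical potential $\mu_{\tau}(n,T)$. As a minimal numerical illustration of this inversion we give below a short Python sketch; it assumes, for simplicity, a vanishing single particle potential ($U_{\tau}=0$), while the momentum-dependent contribution is restored in the calculation recipe discussed later in this section. The function names and numerical tolerances are ours and purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

HBARC = 197.327                      # MeV fm
H2_2M = HBARC**2 / (2.0 * 939.0)     # hbar^2/(2m), about 20.7 MeV fm^2

def density(mu, T):
    """Right-hand side of the density integral (one species, U_tau = 0), in fm^-3."""
    def integrand(k):                # k in fm^-1
        arg = (H2_2M * k**2 - mu) / T
        return k**2 / (1.0 + np.exp(arg)) if arg < 700.0 else 0.0
    return quad(integrand, 0.0, 20.0)[0] / np.pi**2

def chemical_potential(n_tau, T):
    """Invert the density integral: mu_tau (MeV) at density n_tau (fm^-3) and T (MeV)."""
    return brentq(lambda mu: density(mu, T) - n_tau, -1000.0, 1000.0)

# Sanity check: for n_tau = 0.08 fm^-3 and small T the result approaches the
# free Fermi energy hbar^2 k_F^2/(2m), about 36.8 MeV.
print(chemical_potential(0.08, 1.0), chemical_potential(0.08, 20.0))
\end{verbatim}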
The constants $A$, $B$, $\sigma$, $C_1$, $C_2$ and $B'$, which enter in the description of symmetric nuclear matter, and the additional parameters $x_0$, $x_3$, $Z_1$, and $Z_2$, used to determine the properties of asymmetric nuclear matter, are treated as parameters constrained by empirical knowledge \cite{Prakash-97}. The function $g(k,\Lambda_i)$, suitably chosen to simulate finite-range effects, is of the following form \begin{equation} g(k,\Lambda_i)=\left[1+\left(\frac{k}{\Lambda_{i}}\right)^2 \right]^{-1}, \label{G-1} \end{equation} where the finite range parameters are $\Lambda_1=1.5 k_F^{0}$ and $\Lambda_2=3 k_F^{0}$ and $k_F^0$ is the Fermi momentum at the saturation point $n_0$. The entropy density $s_{\tau}(n,T)$, required for the calculation of the total pressure and the EOS, has the same functional form as that of a non-interacting gas system, that is \begin{equation} s_{\tau}(n,T)=-2\int \frac{d^3k}{(2\pi)^3}\left[f_{\tau} \ln f_{\tau}+(1-f_{\tau}) \ln(1-f_{\tau})\right]. \label{s-den-1} \end{equation} The ratio entropy/baryon is given by $S_{\tau}(n,T)=s_{\tau}(n,T)/n$. The baryon pressure $P_b(n,T)$, needed to construct the EOS, is given by \begin{equation} P_b(n,T)=T\sum_{\tau=p,n}s_{\tau}(n,T)+\sum_{\tau=p,n}n_{\tau}\mu_{\tau}(n,T)-\epsilon_{anm}(n,T). \label{P-1} \end{equation} Finally, the total energy density and pressure of charge-neutral nuclear matter in chemical equilibrium are \begin{equation} \epsilon_{tot}(n,T)=\epsilon_{b}(n,T)+\sum_{l=e^-,\mu^-} \epsilon_l(n,T), \label{e-total} \end{equation} \begin{equation} P_{tot}(n,T)=P_{b}(n,T)+\sum_{l=e^-,\mu^-} P_l(n,T). \label{P-total} \end{equation} The leptons (electrons and muons), which originate from the condition of $\beta$-stable matter, are considered as non-interacting Fermi gases. The above analysis holds in general for asymmetric nuclear matter. Below, in order to calculate the thermal effect on the SE, we will focus our study on two cases, i.e. symmetric nuclear matter (SNM) and pure neutron matter (PNM). \subsection{Symmetric nuclear matter} The energy density of SNM is given by Eqs. (\ref{E-D-1}) and (\ref{V-all}) by setting $x=1/2$, that is \cite{Prakash-97} \begin{eqnarray} \epsilon_{snm}(n,T)&=&2\int \frac{d^3k}{(2\pi)^3}\frac{\hbar^2k^2}{2m}f_n +2\int \frac{d^3k}{(2\pi)^3}\frac{\hbar^2k^2}{2m}f_p +\frac{1}{2}An_0u^2+\frac{Bn_0u^{\sigma+1}}{1+B'u^{\sigma-1}}\nonumber\\ &+&u\sum_{i=1,2}C_i \ 2 \int \frac{d^3k}{(2\pi)^3} g(k,\Lambda_i)f_n+ u\sum_{i=1,2}C_i \ 2 \int \frac{d^3k}{(2\pi)^3} g(k,\Lambda_i)f_p . \label{ensnm-1} \end{eqnarray} In addition, the single particle potential $U_{snm}^{\tau}(n,k,T)$ in the case of SNM, defined from the relation $U_{snm}^{\tau}=\partial V_{snm}/\partial n_{\tau}$, is easily calculated and given by \begin{equation} U_{snm}^{\tau}(n,k,T)=\tilde{U}_{snm}^{\tau}(n,T)+ u\sum_{i=1,2}C_i\left[1+\left(\frac{k}{\Lambda_i}\right)^2\right]^{-1}. \label{Us-1} \end{equation} It is obvious from Eq.~(\ref{Us-1}) that $U_{snm}^{\tau}(n,k,T)$ is separated into two terms. The first one corresponds to the momentum independent part, while the second one corresponds to the momentum dependent one. The term $\tilde{U}_{snm}^{\tau}(n,T)$ has the following form \begin{eqnarray} \tilde{U}_{snm}^{\tau}(n,T)&=& Au+\frac{Bu^{\sigma}(\sigma+1+2B'u^{\sigma-1})}{(1+B'u^{\sigma-1})^2} \nonumber\\ &+&\frac{2}{n_0} \sum_{i=1,2}C_i 2 \int \frac{d^3k}{(2\pi)^3}\left[1+\left(\frac{k}{\Lambda_i}\right)^2\right]^{-1} f_{\tau}, \qquad \tau=p,n.
\label{Us-2} \end{eqnarray} At zero temperature ($T=0$), where $f_{\tau}=\theta(k_{F_{\tau}}-k)$, the integrals in Eqs.~(\ref{ensnm-1}) and (\ref{Us-2}) are calculated analytically (see Appendix A for more details). \subsection{Pure neutron matter} The energy density of PNM is given by Eqs. (\ref{E-D-1}) and (\ref{V-all}) by setting $x=0$ and $f_p=0$, that is \cite{Prakash-97} \begin{eqnarray} \epsilon_{pnm}(n,T)&=&2\int \frac{d^3k}{(2\pi)^3}\frac{\hbar^2k^2}{2m}f_n+\frac{1}{3}An_0(1-x_0)u^2+ \frac{\frac{2}{3}Bn_0(1-x_3)u^{\sigma+1}}{1+\frac{2}{3}B'(1-x_3)u^{\sigma-1}}\nonumber \\ &+&\frac{2}{5}u\sum_{i=1,2}(3C_i-4Z_i) \ 2 \int \frac{d^3k}{(2\pi)^3} g(k,\Lambda_i)f_n .\label{epnm-1} \end{eqnarray} The single particle potential $U_{pnm}^{n}(n,k,T)$ in the case of PNM, defined from the relation $U_{pnm}^n=\partial V_{pnm}/\partial n_n$, is written as \begin{equation} U_{pnm}^n(n,k,T)=\tilde{U}_{pnm}^n(n,T)+ \frac{2}{5}u\sum_{i=1,2}(3C_i-4Z_i)\left[1+\left(\frac{k}{\Lambda_i}\right)^2\right]^{-1}. \label{Un-1} \end{equation} The momentum-independent part is \begin{eqnarray} \tilde{U}_{pnm}^n(n,T)&=&\frac{2}{3}A(1-x_0)u+\frac{\frac{2}{3}B(1-x_3)u^{\sigma}}{[1+\frac{2}{3}B'(1-x_3)u^{\sigma-1}]^2} \left((\sigma+1)+\frac{4}{3}B'(1-x_3)u^{\sigma-1}\right)\nonumber\\ &+&\frac{2}{5n_0} \sum_{i=1,2}(3C_i-4Z_i) 2 \int \frac{d^3k}{(2\pi)^3}\left[1+\left(\frac{k}{\Lambda_i}\right)^2\right]^{-1} f_n. \label{Un-2} \end{eqnarray} The integrals in Eqs.~(\ref{epnm-1}) and (\ref{Un-2}), similarly to the case of SNM, are calculated analytically at $T=0$ (see Appendix A for more details). \subsection{Asymmetric nuclear matter -- Nuclear symmetry energy} The energy density of ANM at density $n$ and temperature $T$, in a good approximation, is expressed as \begin{equation} \epsilon_{anm}(n,T,x)=\epsilon_{snm}(n,T,x=1/2)+\epsilon_{sym}(n,T,x), \label{e-asm-1} \end{equation} where \begin{equation} \epsilon_{sym}(n,T,x)=n(1-2x)^2 E_{sym}^{tot}(n,T)=n (1-2x)^2 \left(E_{sym}^{kin}(n,T)+E_{sym}^{int}(n,T)\right). \label{e-sym-1} \end{equation} In Eq.~(\ref{e-sym-1}) the nuclear symmetry energy $E_{sym}^{tot}(n,T)$ is separated into two parts corresponding to the kinetic contribution $E_{sym}^{kin}(n,T)$ and the interaction contribution $E_{sym}^{int}(n,T)$. In the present work we will concentrate on the systematic study of the thermal properties of the above two quantities. From Eqs.~(\ref{e-asm-1}) and (\ref{e-sym-1}) and setting $x=0$ we obtain that the nuclear symmetry energy $E_{sym}^{tot}(n,T)$ is given by \begin{equation} E_{sym}^{tot}(n,T)=\frac{1}{n}\left(\epsilon_{pnm}(n,T)-\epsilon_{snm}(n,T) \right). \label{Esym-d-1} \end{equation} Thus, from Eqs.~(\ref{ensnm-1}) and (\ref{epnm-1}) and by a suitable choice of the parameters $x_0$, $x_3$, $Z_1$ and $Z_2$, we can obtain different forms for the density dependence of the symmetry energy $E_{sym}^{tot}(n,T)$. It is well known that the need to explore different forms for $E_{sym}^{tot}(n,T)$ stems from its uncertain behavior at high density \cite{Prakash-97}. In the present work, since we are interested mainly in the study of thermal effects on the SE, we choose a specific form of the SE enabling us to reproduce accurately the results of many other theoretical studies \cite{Lee-97}.
According to this choice the SE, at $T=0$, is expressed as \begin{equation} E_{sym}^{tot}(n,T=0)= \underbrace{13 u^{2/3}}_{Kinetic}+\underbrace{17 F(u)}_{Interaction}=\underbrace{13 u^{2/3}}_{Kinetic}+\underbrace{17 u}_{Interaction},\label{Esym-3} \end{equation} where the contributions of the kinetic and the interaction term are clearly separated. The parameters $x_0$, $x_3$, $Z_1$ and $Z_2$ are chosen so that Eq.~(\ref{Esym-d-1}), for $T=0$, reproduces the results of Eq.~(\ref{Esym-3}). In addition, the parameters $A$, $B$, $\sigma$, $C_1$, $C_2$ and $B'$ are determined so that $E(n=n_0)-mc^2=-16$ {\rm MeV} at $n_0=0.16$ fm$^{-3}$ and the incompressibility is $K_0=240$ {\rm MeV}. The single particle potential $U_{anm}^{\tau}(n,k,T)$ in the case of ANM, defined from the relation $U_{anm}^{\tau}=\partial V_{anm}/\partial n_{\tau}$, is written as \begin{equation} U_{anm}^{\tau}(n,k,T)=U_{snm}^{\tau}(n,k,T)+\frac{\partial V_{sym}}{\partial n_{\tau}}=U_{snm}^{\tau}(n,k,T)+ U_{sym}^{\tau}(n,T,x),\label{Uasnm-1} \end{equation} where \begin{equation} V_{sym}(n,T,x)=(1-2x)^2n E_{sym}^{int}(n,T). \label{V-sym-1} \end{equation} It is easy to find that the term $U_{sym}^{\tau}(n,T)$, in the case of $T=0$ and by applying expression (\ref{Esym-3}), is given by (see also Ref. \cite{Bao-97}) \begin{equation} U_{sym}^{\tau}(n,T,x)=\pm 34 u (1-2x), \qquad \label{V-anm-1} \end{equation} where $+$ and $-$ stand for neutrons and protons, respectively. In the general case where thermal effects are included in our calculations, $E_{sym}^{int}(n,T)$ takes the form \begin{equation} E_{sym}^{int}(n,T)=a u^b, \label{E-int-T} \end{equation} where $a$ and $b$ are temperature dependent constants (see Eq.~(\ref{Esym-pot-fit}) in Sec.~III). Thus, after some algebra, we get, in a good approximation, the relation \begin{equation} U_{sym}^{\tau}(n,T,x)\simeq \pm 2 a u^b (1-2x). \label{V-anm-T} \end{equation} The above relation is needed for the calculation of the single particle energy $e_{\tau}(n,k,T)$ in $\beta$-stable matter and, afterwards, for the calculation of the Fermi-Dirac function $f_{\tau}(n,T)$, which is the basic ingredient for the determination of the entropy density $s_{\tau}(n,T)$. \subsection{Proton fraction -- Electron chemical potential} The key quantity for the determination of the equation of state in $\beta$-stable matter is the proton fraction $x$, which is a basic ingredient of Eq.~(\ref{e-sym-1}). In $\beta$-stable matter the processes \cite{Prakash-94} \begin{equation} n \longrightarrow p+e^{-}+\bar{\nu}_e, \qquad \qquad p +e^{-} \longrightarrow n+ \nu_e, \end{equation} take place simultaneously. We assume that neutrinos generated in these reactions have left the system. This implies that \begin{equation} \hat{\mu}=\mu_n-\mu_p=\mu_e ,\label{chem-1} \end{equation} where $\mu_n,\mu_p$ and $\mu_e$ are the chemical potentials of the neutron, proton and electron respectively. Given the total energy density $\epsilon \equiv \epsilon(n_n,n_p)$, the neutron and proton chemical potentials can be defined as \begin{equation} \mu_n=\frac{\partial \epsilon}{\partial n_n}|_{n_p}, \qquad \qquad \mu_p=\frac{\partial \epsilon }{\partial n_p}|_{n_n} . \label{chem-2} \end{equation} Hence we can show that \begin{equation} \hat{\mu}=\mu_n-\mu_p=-\frac{\partial \epsilon /n}{\partial x}|_n= -\frac{\partial E}{\partial x}|_n .
\label{chem-3} \end{equation} In $\beta$ equilibrium one has \begin{equation} \frac{\partial E}{\partial x}=\frac{\partial}{\partial x}\left(E_b(n,x)+E_e(x)\right)=0 , \label{b-equil-1} \end{equation} where $E_b(n,x)$ is the energy per baryon and $E_e(x)$ is the electron energy. The charge neutrality condition implies that $n_e=n_p=nx$ or $k_{F_e}=k_{F_p}$. Combining relations (\ref{e-asm-1}), (\ref{e-sym-1}) and (\ref{chem-3}) we get \begin{equation} \mu_e(n,T)=\hat{\mu}(n,T)=4(1-2x)E_{sym}^{tot}(n,T) . \label{chem-4} \end{equation} From Eq.~(\ref{chem-4}) it is obvious that the proton fraction $x$ is not only a function of the baryon density $n$ but, in addition, depends on the temperature $T$, i.e. $x=x(n,T)$. For relativistic free electrons we have \begin{equation} n_e=xn=\frac{2}{(2\pi)^3}\int\frac{d^3k} {1+\exp\left[\frac{\sqrt{\hbar^2k^2c^2+m_e^2c^4}-\mu_e(n,T)}{T}\right]}. \label{ele-frac-1} \end{equation} Using Eq.~(\ref{chem-4}) and performing the angular integration, we get \begin{equation} n_e=xn=\frac{1}{\pi^2}\int_0^{\infty} \frac{k^2 dk} {1+\exp\left[\frac{\sqrt{\hbar^2k^2c^2+m_e^2c^4}-4(1-2x)E_{sym}^{tot}(n,T)}{T}\right]}. \label{ele-frac-2} \end{equation} Eq.~(\ref{ele-frac-2}) determines the equilibrium electron (proton) fraction $x(n,T)$ since the density and temperature dependent symmetry energy $E_{sym}^{tot}(n,T)$ is known. \subsection{Calculation recipe} We focus our attention on the calculation of $E_{sym}^{tot}(n,T)$ with the help of Eq. (\ref{Esym-d-1}). Thus, one has to calculate first the energy densities in pure neutron and in symmetric nuclear matter as functions of the density $n$ for fixed values of the temperature $T$. As an example of the calculation procedure at finite temperature (the results for $T=0$ are included in Appendix A), we consider the case of pure neutron matter. The procedure is similar in the case of symmetric nuclear matter (see Ref. \cite{Prakash-97}). The outline of our approach is the following: For a fixed neutron density $n_n$ and temperature $T$, Eq.~(\ref{D-1}) may be solved iteratively in order to calculate the variable \begin{equation} \eta(n;T)=\frac{\mu_{\tau}(n;T)-\tilde{U}(n;T)}{T}. \label{eta-1} \end{equation} The knowledge of $\eta(n,T)$ allows the last term in Eq. (\ref{Un-2}) to be evaluated, yielding $\tilde{U}(n;T)$, which may then be used to infer the chemical potential from \begin{equation} \mu_{\tau}(n;T)=T\eta(n;T)+\tilde{U}(n;T), \label{Chem-1} \end{equation} required as an input to the calculation of the single particle spectrum $e_{\tau}(n,k,T)$ in Eq.~(\ref{esp-1}). Using $e_{\tau}(n,k;T)$, the energy density in Eq.~(\ref{epnm-1}) is evaluated. \section{Results and Discussion} According to our calculation recipe, given in the previous subsection, we calculate the energy densities of PNM and SNM as functions of the density, for various values of the temperature $T$. As a second step, we calculate $E_{sym}^{tot}(n,T)$ from Eq. (\ref{Esym-d-1}). The knowledge of $E_{sym}^{tot}(n,T)$ is required for the evaluation of the proton fraction $x$ from Eq. (\ref{ele-frac-2}) as well as of the electron chemical potential $\mu_e=\hat{\mu}$ from Eq. (\ref{chem-4}). Finally, from Eqs. (\ref{P-1}), (\ref{e-total}) and (\ref{P-total}), we construct the EOS of $\beta$-stable matter for various values of the temperature $T$. It is worth pointing out that in the present work we do not include the muon case, since we restrict ourselves mainly to the temperature dependent behavior of the SE.
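Before turning to the results, we note that the recipe above is straightforward to implement numerically. A minimal Python sketch for pure neutron matter could look as follows; the interaction parameters collected in \texttt{PAR} are illustrative placeholders only (they are \emph{not} the empirically constrained set of Ref.~\cite{Prakash-97}), and the function names are ours.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

HBARC = 197.327                           # MeV fm
H2_2M = HBARC**2 / (2.0 * 939.0)          # hbar^2/(2m) in MeV fm^2
N0 = 0.16                                 # saturation density, fm^-3
KF0 = (1.5 * np.pi**2 * N0)**(1.0 / 3.0)  # Fermi momentum at n0, fm^-1

# Illustrative placeholder parameters -- NOT the empirically constrained set.
PAR = dict(A=-45.0, B=40.0, sigma=1.6, Bp=0.3, x0=0.0, x3=0.0,
           C=(-80.0, 25.0), Z=(-10.0, 5.0), Lam=(1.5 * KF0, 3.0 * KF0))

def g(k, lam):                            # finite-range form factor
    return 1.0 / (1.0 + (k / lam)**2)

def u_md(k, u, p):                        # momentum-dependent part of U_pnm
    return 0.4 * u * sum((3 * c - 4 * z) * g(k, l)
                         for c, z, l in zip(p['C'], p['Z'], p['Lam']))

def occ(k, eta, u, T, p):                 # Fermi-Dirac factor written in terms of eta
    arg = (H2_2M * k**2 + u_md(k, u, p)) / T - eta
    return 1.0 / (1.0 + np.exp(arg)) if arg < 700.0 else 0.0

def moment(fun, eta, u, T, p):            # (1/pi^2) int k^2 fun(k) f(k) dk
    return quad(lambda k: k**2 * fun(k) * occ(k, eta, u, T, p), 0.0, 20.0)[0] / np.pi**2

def pnm_state(n, T, p=PAR):
    """Follow the recipe: eta -> Utilde -> mu_n -> energy density (pure neutron matter)."""
    u = n / N0
    eta = brentq(lambda e: moment(lambda k: 1.0, e, u, T, p) - n, -100.0, 300.0)
    I = [moment(lambda k, l=l: g(k, l), eta, u, T, p) for l in p['Lam']]
    sig, y3 = p['sigma'], (2.0 / 3.0) * p['Bp'] * (1 - p['x3'])
    utilde = ((2.0 / 3.0) * p['A'] * (1 - p['x0']) * u
              + (2.0 / 3.0) * p['B'] * (1 - p['x3']) * u**sig
                * (sig + 1 + 2.0 * y3 * u**(sig - 1)) / (1 + y3 * u**(sig - 1))**2
              + (2.0 / (5.0 * N0)) * sum((3 * c - 4 * z) * i
                                         for c, z, i in zip(p['C'], p['Z'], I)))
    mu_n = T * eta + utilde               # chemical potential
    e_kin = moment(lambda k: H2_2M * k**2, eta, u, T, p)
    e_pot = (p['A'] / 3.0 * N0 * (1 - p['x0']) * u**2
             + (2.0 / 3.0) * p['B'] * N0 * (1 - p['x3']) * u**(sig + 1)
               / (1 + y3 * u**(sig - 1))
             + 0.4 * u * sum((3 * c - 4 * z) * i
                             for c, z, i in zip(p['C'], p['Z'], I)))
    return mu_n, e_kin + e_pot            # (mu_n in MeV, energy density in MeV fm^-3)

print(pnm_state(0.16, 10.0))
\end{verbatim}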
According to our plan, in future work we will extend the treatment to include also the muon case in order to study the detailed composition and the thermal properties of neutron-rich matter, with applications to neutron star structure and thermal evolution. In Fig.~1 we check the validity of approximation (\ref{e-asm-1}). We plot the difference $E(n,T,x)-E(n,T,x=1/2)$ as a function of $(1-2x)^2$ at temperatures $T=0$, $T=20$ and $T=50$ MeV for three values of the reduced baryon density, i.e. $u=1$, $u=2$ and $u=3$. It is seen that an almost linear relation holds between $E(n,T,x)-E(n,T,x=1/2)$ and $(1-2x)^2$, even close to the case of pure neutron matter ($x=0$), indicating the validity of approximation (\ref{e-asm-1}). In Fig.~2 we display the behavior of the SE as a function of the temperature $T$ for various fixed values of the baryon density $n$. More precisely, in each case, we plot $E_{sym}^{tot}(T;n)$, as well as $E_{sym}^{kin}(T;n)$ and $E_{sym}^{int}(T;n)$, as a function of $T$ for $n=0.1, 0.2, 0.3, 0.5$ fm$^{-3}$. The most striking feature of the above analysis is a decrease of the SE (total, kinetic and interaction part) with increasing temperature. This is consistent with the predictions of microscopic and/or phenomenological theories \cite{Bao-Li-06,Chen-01,Xu-07-1}. In order to illustrate further the dependence of the symmetry energy on the temperature and to find the quantitative characteristics of this dependence, the values of $E_{sym}(T;n)$ for various values of the density $n$ are derived with the least-squares fit method and found to take the general form \begin{equation} E_{sym}(T;n)=\frac{A}{1+(T/T_0)^{c}}+B. \label{Logistic-fit} \end{equation} The values of the density dependent parameters $A$, $B$, $T_0$ and $c$, for $E_{sym}^{tot}(T;n)$, $E_{sym}^{kin}(T;n)$ and $E_{sym}^{int}(T;n)$ for $n=0.1, 0.3, 0.5$ fm$^{-3}$ are presented in Table~1. It is easy to find that in the low temperature limit ($T/T_0\ll 1$) all parts of the symmetry energy decrease approximately according to $E_{sym}(T;n) \propto C_1-C_2T^2$ (where $C_1$ and $C_2$ are density dependent constants). In the high temperature limit ($T/T_0\gg 1$) the symmetry energy decreases approximately according to $E_{sym}(T;n) \propto C_3 T^{-2}+C_4$ (where $C_3$ and $C_4$ are also density dependent constants). It is noted that the same behavior holds for $E_{sym}^{tot}(T;n)$ as well as for $E_{sym}^{kin}(T;n)$ and $E_{sym}^{int}(T;n)$. This behavior is well expected for the kinetic part of the symmetry energy (see also Refs. \cite{Bao-Li-06,Mekjian-05}), where analytical calculations are possible (see the proof in Appendix B). From the above study, it is concluded that there is a similar temperature dependence both for the kinetic and the interaction part of the symmetry energy and consequently for the total symmetry energy, in the case of a momentum dependent interaction. Recently, the temperature dependence of the kinetic and interaction part of the SE has been studied and illustrated in Ref. \cite{Xu-07-1}. The results of the present work agree with those of Ref. \cite{Xu-07-1} although different models have been employed to evaluate the SE. In Fig.~3, we plot $E_{sym}^{tot}(T;n)$ as a function of temperature for various low values of the baryon density. In the same figure we also include experimental data of the measured temperature dependent symmetry energy from Texas A$\&$M University (TAMU) \cite{Shetty-06} and the INDRA-ALADIN Collaboration at GSI \cite{Fevre-05}.
The comparison then allows one to estimate the density of the fragment-emitting source in the experiments. As pointed out by Li et al. \cite{Bao-Li-06}, the experimentally observed evolution of the SE is mainly due to the change in density rather than temperature. Fig.~4 illustrates the behavior of $E_{sym}^{tot}(n;T)$ (a), $E_{sym}^{kin}(n;T)$ (b), and $E_{sym}^{int}(n;T)$ (c) as a function of the baryon density $n$ for various fixed values of the temperature $T$. The case $T=0$ corresponds to the fundamental expression of the present work, i.e. \begin{equation} E_{sym}^{tot}(u;T=0)=13 u^{2/3}+17u. \label{E-u-1} \end{equation} In any case, the trends of the various parts of the symmetry energy are similar. An increase in the temperature leads just to a shift to lower values for the symmetry energy. It is worth pointing out that the maximum decrease of $E_{sym}^{tot}(n;T)$ in the region under study (from $T=0$ MeV up to $T=50$ MeV) is between $40\%$ (for $n=0.1$ fm$^{-3}$) and $4\%$ (for $n=1$ fm$^{-3}$). Correspondingly, the decrease of $E_{sym}^{kin}(n;T)$ is between $57\%$ (for $n=0.1$ fm$^{-3}$) and $5\%$ (for $n=1$ fm$^{-3}$) and that of $E_{sym}^{int}(n;T)$ is between $22\%$ (for $n=0.1$ fm$^{-3}$) and $5\%$ (for $n=1$ fm$^{-3}$). It is obvious that the thermal effects are more pronounced on the kinetic part than on the interaction part of the symmetry energy and, in addition, more pronounced at lower values of the baryon density. The total symmetry energy $E_{sym}^{tot}(u;T)$, for various values of the temperature $T$, was derived with the least-squares fit on the numerical results taken from Eq.~(\ref{Esym-d-1}) and has the form \begin{eqnarray} E_{sym}^{tot}(u;T=5)&=&1.676+29.711u-2.110u^2+0.275u^3-0.015u^4, \nonumber \\ E_{sym}^{tot}(u;T=10)&=&-0.118+30.863u-2.455u^2+0.325u^3-0.017u^4, \nonumber \\ E_{sym}^{tot}(u;T=20)&=&-1.910+29.470u-1.466u^2+0.120u^3-0.004u^4, \nonumber \\ E_{sym}^{tot}(u;T=50)&=&0.099+18.172u+2.9u^2-0.548u^3+0.033u^4. \label{Esym-T-fit} \end{eqnarray} It is also useful to record some relations for $E_{sym}^{tot}(u;T)$ derived by least-squares fit on the numerical results, in the case where the SE is parameterized in a way similar to the one holding for $T=0$. In that case, the parametrization is the following (the case $E_{sym}^{tot}(u;T=0)$ is also included for comparison) \begin{eqnarray} E_{sym}^{tot}(u;T=0)&=&13u^{2/3}+17u, \nonumber\\ E_{sym}^{tot}(u;T=5)&=&E_{sym}^{tot}(u;T=0)-0.374 \ u^{-0.956}, \nonumber \\ E_{sym}^{tot}(u;T=10)&=& E_{sym}^{tot}(u;T=0)-1.235 \ u^{-0.804}, \nonumber \\ E_{sym}^{tot}(u;T=20)&=&E_{sym}^{tot}(u;T=0)-3.420 \ u^{-0.520}, \nonumber \\ E_{sym}^{tot}(u;T=50)&=&E_{sym}^{tot}(u;T=0)-9.300 \ u^{-0.097}. \label{Esym-T-fit-2} \end{eqnarray} From Eq.~(\ref{Esym-T-fit-2}), the decrease of the SE with increasing $T$ is evident. The interaction part of the symmetry energy $E_{sym}^{int}(u;T)$ for various values of the temperature $T$ was derived by a least-squares fit on the numerical results taken from Eqs.~(\ref{e-sym-1}) and (\ref{Esym-d-1}) and has the form \begin{eqnarray} E_{sym}^{int}(u;T=5)&=&17.041 \ u^{0.997}, \nonumber \\ E_{sym}^{int}(u;T=10)&=&16.782 \ u^{1.005}, \nonumber \\ E_{sym}^{int}(u;T=20)&=&16.022 \ u^{1.028}, \nonumber \\ E_{sym}^{int}(u;T=50)&=&13.404 \ u^{1.104}.
\label{Esym-pot-fit} \end{eqnarray} Similarly, for the kinetic part of the symmetry energy $E_{sym}^{kin}(u;T)$ we obtain \begin{eqnarray} E_{sym}^{kin}(u;T=5)&=&12.856 \ u^{0.674}, \nonumber \\ E_{sym}^{kin}(u;T=10)&=&12.504 \ u^{0.691}, \nonumber \\ E_{sym}^{kin}(u;T=20)&=&11.518 \ u^{0.736}, \nonumber \\ E_{sym}^{kin}(u;T=50)&=&8.577 \ u^{0.891}. \label{Esym-kin-fit} \end{eqnarray} In Fig.~5 we plot the total energy per particle of the PNM (a) and of the SNM (b) as a function of the density for various values of the temperature. In both cases it is concluded that the thermal effects become more pronounced when $T>10$ MeV and for baryon densities $n<0.5$ fm$^{-3}$. Fig.~6 displays the single particle potential $U_{pnm}(n,T,k)$ of the PNM as a function of the momentum $k$ for various values of the density $n$ and temperature $T$. An increase of $T$ leads to a corresponding increase of the values of $U_{pnm}(n,T,k)$, an effect expected to be more pronounced for lower values of the baryon density ($n=0.1$ fm$^{-3}$) compared to higher ones ($n=0.5$ fm$^{-3}$). The same trend holds also for the single particle potential $U_{snm}(n,T,k)$ of the SNM plotted in Fig.~7. Observing Figs.~6 and 7 one might expect that the change of $T$ will affect only slightly the nucleons with high momentum $k$. This could be seen by plotting the single particle energy $e_{\tau}(n,k,T)$ (see Eq.~(\ref{esp-1})) as a function of $k$. However, the above effect cannot be seen in the present work, where we plot just the single particle potential $U^{\tau}(n,k,T)$ as a function of $k$. In Fig.~8 we display the single particle potential of the neutron, $U^n(n,T,k)$ (panels (a),(b)), and of the proton, $U^p(n,T,k)$ (panels (c),(d)), in $\beta$-stable matter, as a function of the momentum $k$ for various values of the temperature $T$ for $n=0.1$ and $n=0.5$ fm$^{-3}$. The potential $U^{\tau}(n,T,k)$ is evaluated according to Eq.~(\ref{Uasnm-1}). The most striking feature of Fig.~8 is the reduced thermal effect for high values of the baryon density, especially in the case of the neutron single particle potential. In the case of the proton, thermal effects are more pronounced. In Fig.~9(a) the proton fraction $x$ is displayed, calculated from Eq. (\ref{ele-frac-2}) as a function of $n$ for various values of $T$. Thermal effects increase the value of $x$ between $57\%$ (for $n=0.1$ fm$^{-3}$) and $2\%$ (for $n=1$ fm$^{-3}$). This effect is directly related to the dependence of $x$ on the symmetry energy. As discussed previously, the temperature only slightly influences the symmetry energy at high values of the density and consequently this is reflected in the values of $x$. It is stressed that $x$ depends on $T$ in two ways, as one can see from Eq. (\ref{ele-frac-2}). That is, it depends directly on $T$ through the Fermi-Dirac distribution and also through the symmetry energy, which is itself temperature dependent. In Fig.~9(b) we present the electron chemical potential $\mu_e$ as a function of the density $n$ for various values of $T$. An increase of $T$ decreases $\mu_e$. The effect is more pronounced when $T>20$ {\rm MeV}. We mention that the rate of electron capture on both free and bound protons depends in a very sensitive way on the difference $\hat{\mu}=\mu_n-\mu_p=\mu_e$ between neutron and proton chemical potentials \cite{Donati-94}. Larger values of $\hat{\mu}=\mu_e$ inhibit the neutronization process, since it becomes more difficult to transform a proton into a neutron.
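The self-consistency contained in Eq.~(\ref{ele-frac-2}) is straightforward to solve numerically. The following Python sketch is a minimal illustration (not the code used for the figures): it brackets the equilibrium proton fraction and returns the corresponding electron chemical potential of Eq.~(\ref{chem-4}). For simplicity it uses the $T=0$ parametrization of Eq.~(\ref{Esym-3}); in practice, the finite-$T$ fits quoted above would be supplied for $E_{sym}^{tot}$ instead.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

HBARC, ME = 197.327, 0.511            # hbar*c in MeV fm, electron mass in MeV

def electron_density(mu_e, T):
    """Relativistic electron density (fm^-3) at chemical potential mu_e and temperature T."""
    def integrand(k):                 # k in fm^-1
        arg = (np.sqrt((HBARC * k)**2 + ME**2) - mu_e) / T
        return k**2 / (1.0 + np.exp(arg)) if arg < 700.0 else 0.0
    return quad(integrand, 0.0, 50.0)[0] / np.pi**2

def proton_fraction(n, T, esym):
    """Solve x*n = n_e(mu_e = 4(1-2x) E_sym) for the beta-equilibrium proton fraction x."""
    def mismatch(x):
        mu_e = 4.0 * (1.0 - 2.0 * x) * esym(n / 0.16)   # u = n/n0 with n0 = 0.16 fm^-3
        return electron_density(mu_e, T) - x * n
    return brentq(mismatch, 1.0e-8, 0.5)

# Illustration with the T = 0 parametrization E_sym = 13 u^(2/3) + 17 u.
esym0 = lambda u: 13.0 * u**(2.0 / 3.0) + 17.0 * u
for T in (5.0, 20.0, 50.0):
    x = proton_fraction(0.3, T, esym0)
    print(T, x, 4.0 * (1.0 - 2.0 * x) * esym0(0.3 / 0.16))  # T, x, mu_e (MeV)
\end{verbatim}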
Finally, in Fig.~10 we present the equation of state of $\beta$-stable matter constructed by applying the present momentum-dependent interaction model, for various values of the temperature $T$. It is obvious that the thermal effects are enhanced when $T>20$ {\rm MeV}. The above EOS is very important for the calculation of the neutron star properties and also, in combination with the calculated proton fraction and electron chemical potential, for the thermal evolution of neutron stars. \section{Summary} The knowledge of the nuclear symmetry energy of hot neutron-rich matter is important for understanding the dynamical evolution of massive stars and the supernova explosion mechanisms. In view of the above statement, we investigate, in the present work, the thermal effects on the nuclear symmetry energy. In order to perform the above investigation we apply a model with a momentum-dependent effective interaction. In that way, we are able to study the thermal effect not only on the kinetic part of the symmetry energy but also on the interaction part which, in turn, due to its momentum dependence, is affected by the variation of the temperature. It is concluded that, in general, increasing $T$ leads to a decreasing SE. Our finding that both kinetic and interaction parts exhibit the same trend both for low and high values of the temperature is an interesting result. Analytical relations, derived by the method of least-squares fit, are also given for the above quantities. Temperature effects on pure neutron matter and on symmetric nuclear matter are also investigated and presented. The single particle potential of the proton and the neutron, which is of interest in heavy-ion collision experiments, is also calculated for pure neutron matter, symmetric nuclear matter and $\beta$-stable matter for various values of the baryon density and fixed values of $T$. It is concluded that thermal effects are more pronounced for low values of the density $n$, while for high values of $n$ the effects are almost negligible. Quantities which are of great interest for the thermal evolution of supernovae and neutron stars, i.e. the proton fraction $x=x(n,T)$ and the electron chemical potential $\mu_e=\mu_e(n,T)$, are calculated and their temperature and density dependence is investigated. Thermal effects are larger for low values of the density and high values of $T$. \section*{Appendix A} The energy densities of the SNM and the PNM at zero temperature are easily calculated from Eqs.
(\ref{ensnm-1}) and (\ref{epnm-1}) respectively by setting $f_{\tau}=\theta(k_{F_{\tau}}-k)$ (where $\theta(k_{F_{\tau}}-k)$ is the {\it theta} function and $k_{F_{\tau}}$ is the Fermi momentum of the nucleon $\tau$) and take the following forms \begin{eqnarray} \epsilon_{snm}(n,k;T=0)&=&\frac{3}{5}E_F^0n_0u^{5/3}+ \frac{1}{2}An_0u^2+\frac{Bn_0u^{\sigma+1}}{1+B'u^{\sigma-1}}\nonumber\\ &+& 3n_0u \sum_{i=1,2}C_i\left(\frac{\Lambda_i}{k_F^0}\right)^3\left(\frac{u^{1/3}}{\frac{\Lambda_i}{k_F^0}}- \tan^{-1} \frac{u^{1/3}}{\frac{\Lambda_i}{k_F^0}} \right), \label{ensnm-T0} \end{eqnarray} \begin{eqnarray} \epsilon_{pnm}(n,k;T=0)&=&2^{2/3}\frac{3}{5}E_F^0n_0u^{5/3}+ \frac{1}{3}An_0(1-x_0)u^2+ \frac{\frac{2}{3}Bn_0(1-x_3)u^{\sigma+1}}{1+\frac{2}{3}B'(1-x_3)u^{\sigma-1}} \nonumber\\ &+& \frac{3}{5}n_0u \sum_{i=1,2}\left(3C_i-4Z_i\right)\left(\frac{\Lambda_i}{k_F^0}\right)^3 \left(\frac{(2u)^{1/3}}{\frac{\Lambda_i}{k_F^0}}- \tan^{-1} \frac{(2u)^{1/3}}{\frac{\Lambda_i}{k_F^0}} \right), \label{enpnm-T0} \end{eqnarray} where $E_F^0=\hbar^2{k_F^0}^2/2m$ is the Fermi energy of nuclear matter at the equilibrium density. \section*{Appendix B} In order to compare the numerical results obtained for the kinetic part of the symmetry energy $E_{sym}^{kin}(n,T)$ with those predicted from analytical calculations, we calculate $E_{sym}^{kin}(n,T)$ in the low and in the high temperature limits as follows. \subsubsection*{Low temperature limit} The kinetic energy per nucleon $E_{kin}^{\tau}(n,T)$ at low temperature ($T\ll E_F$) has the form \cite{Goodstein-85,Huang-87,Fetter-03} \begin{equation} E_{kin}^{\tau}(n,T)=\frac{3}{5}E_F^{\tau}\left[1+\frac{5}{12}\pi^2\left(\frac{T}{E_F^{\tau}}\right)^2 \right], \label{Ekin-1} \end{equation} where $E_F^{\tau}=(\hbar k_F^{\tau})^2/2m=\hbar^2(3\pi^2n_{\tau})^{2/3}/2m$. Considering that $\delta=1-2x=(n_n-n_p)/(n_n+n_p)$, after some algebra we find that the kinetic energy $E_{kin}(n,\delta,T)$ of a two-component Fermi gas has the form \begin{eqnarray} E_{kin}(n,\delta,T)&=&\frac{\langle E_F \rangle}{2}\left((1+\delta)^{5/3}+(1-\delta)^{5/3}\right)\nonumber\\ &+&\frac{3}{10}\frac{1}{\langle E_F \rangle} \left(\frac{\pi}{2}T \right)^2 \left((1+\delta)^{1/3}+(1-\delta)^{1/3}\right), \label{Ek-1} \end{eqnarray} where $\langle E_F \rangle=3/5E_F^0$. Expanding expression (\ref{Ek-1}) around the symmetric point $\delta=0$ or $x=1/2$, the kinetic energy takes the approximate form \begin{equation} E_{kin}(n,T,x)=\langle E_F \rangle+\frac{3}{20}\frac{\pi^2}{\langle E_F \rangle}T^2+ (1-2x)^2 \underbrace{\left(\frac{5}{9}\langle E_F \rangle-\frac{1}{60}\frac{\pi^2}{\langle E_F \rangle}T^2 \right)}_{E_{sym}^{kin}(n,T)}, \label{E-Kin-x} \end{equation} with the contribution of the symmetry energy written explicitly. It is obvious that in the low temperature limit $E_{sym}^{kin}(n,T)$ behaves as $E_{sym}^{kin}(n,T)\propto C_1-C_2 T^2$. \subsubsection*{High temperature limit} The kinetic energy per nucleon $E_{kin}(n,T,\delta)$ of a two-component Fermi gas at high temperature ($T\gg E_F$) is obtained from a virial expansion in $n\lambda^3$, where $\lambda=\sqrt{2\pi\hbar^2/mT}$ is the thermal quantum wavelength. Thus, $E_{kin}(n,\delta,T)$ is given by the relation \cite{Huang-87,Mekjian-05} \begin{equation} E_{kin}(n,\delta,T)= \frac{3}{2}T+\frac{3}{4}T\sum_\nu C_{\nu}\left(\frac{\lambda^3n}{4}\right)^\nu\left((1-\delta)^{\nu+1}+(1+\delta)^{\nu+1}\right).
\label{Ek-3} \end{equation} Expanding expression (\ref{Ek-3}) around the symmetric point $\delta=0$ or $x=1/2$, the kinetic energy takes the approximate form \begin{equation} E_{kin}(n,T,\delta)=\frac{3}{2}T\left[1+ \sum_\nu C_{\nu}\left(\frac{\lambda^3n}{4}\right)^\nu \right]+ (1-2x)^2\underbrace{\frac{3}{2}T \sum_\nu C_{\nu}\left(\frac{\lambda^3n}{4}\right)^\nu\frac{\nu (\nu+1)}{2}}_{E_{sym}^{kin}(n,T)}. \label{Ek-4} \end{equation} It is seen that in the high temperature limit $E_{sym}^{kin}(n,T)$ behaves as $E_{sym}^{kin}(n,T)\propto C_1T^{-1/2}+C_2 T^{-2}+\cdots$. \section*{Acknowledgments} The author would like to thank Prof. S.E. Massen and Dr. C.P. Panos for useful comments on the manuscript and also Prof. A.Z. Mekjian for valuable comments and correspondence. The work was supported by the Pythagoras II Research project (80861) of E$\Pi$EAEK and the European Union.